Trino data lake connectors categorize HDFS quota exceeded exceptions as EXTERNAL but should be USER_ERROR #22462
Comments
cc @pajaks
Opened #22962 for this
Discussed this on Slack in #core-dev between myself, @wendigo, and @electrum. A few key points:
- The final conclusion was that the "best" path forward is to add a plugin/extension interface to allow for custom exception mappings/wrappings.
- Even for a quota exceeded exception, you can consider it a user error (if quotas are managed by users) or an external error (if quotas are managed by system admins).
- Such an interface is very generally useful: it allows custom interpretation of different error messages and enrichment with site-specific context.
- So, this specific change won't be made, but we may look into making it possible to do this categorization in a plugin.
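To make the idea concrete, a hypothetical shape for such an extension point could look like the sketch below. The interface name, method, and packaging are invented for illustration only; no such SPI exists in Trino today and no design has been agreed on.

```java
import java.util.Optional;

import io.trino.spi.TrinoException;

// Purely hypothetical SPI for site-specific exception classification;
// nothing like this exists in Trino today.
public interface ExceptionMapper
{
    /**
     * Inspect a failure raised during execution and optionally remap it, for example
     * reclassifying an HDFS quota violation as a user error (or keeping it external,
     * if quotas are managed by system admins) and attaching site-specific guidance.
     * Returning Optional.empty() keeps the engine's default categorization.
     */
    Optional<TrinoException> remap(Throwable failure);
}
```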
When using a data lake connector (Hive/Iceberg/etc.) to write data to HDFS using Trino, we may see a QuotaExceededException (e.g. namespace quota or disk space quota exceeded). This is a userspace issue, but currently we categorize it as an EXTERNAL error.

I would like to work on a fix for this, but have a couple of things I'd like to discuss before moving forward:
My initial approach would be to modify HdfsOutputFile to add a new catch block here for QuotaExceededException, which will throw a TrinoException with type USER_ERROR:

trino/lib/trino-hdfs/src/main/java/io/trino/filesystem/hdfs/HdfsOutputFile.java, lines 102 to 105 in 6697fe2
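A minimal sketch of the pattern I have in mind, assuming a hypothetical helper around the HDFS create call (the class and method names here are invented; this is not the actual HdfsOutputFile code):

```java
import java.io.IOException;
import java.io.OutputStream;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.protocol.QuotaExceededException;

import io.trino.spi.StandardErrorCode;
import io.trino.spi.TrinoException;

// Minimal sketch of the proposed pattern; not the actual HdfsOutputFile code.
final class QuotaAwareCreate
{
    private QuotaAwareCreate() {}

    static OutputStream create(FileSystem fileSystem, Path path, boolean overwrite)
            throws IOException
    {
        try {
            return fileSystem.create(path, overwrite);
        }
        catch (QuotaExceededException e) {
            // Namespace or disk-space quota violations are actionable by the user,
            // so rethrow them as a USER_ERROR-typed TrinoException instead of
            // letting them surface as a generic EXTERNAL failure.
            throw new TrinoException(StandardErrorCode.GENERIC_USER_ERROR,
                    "HDFS quota exceeded while creating " + path + ": " + e.getMessage(), e);
        }
    }
}
```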
(I think we would need to modify HdfsOutputStream#write() as well to catch disk space quota issues specifically, but need to double-check.)

However, I noticed that there is only one place in all of the various trino-filesystem-*/trino-hdfs modules where we throw a TrinoException, so I am wondering if there is a different best practice for surfacing this kind of issue from the FS layer?

Are there analogous concepts on other blob stores (S3/GCS/Azure) that we should handle similarly?
As one example, when writing ORC data from Hive or Iceberg using OrcFileWriterFactory (handled here) or IcebergFileWriterFactory (handled here), you get an error with type (HIVE|ICEBERG)_WRITER_OPEN_ERROR and a message of simply "Error creating ORC file", which makes it challenging for the end user to understand that there is something on their end to correct (quota issue).
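For reference, the wrapping pattern I am referring to looks roughly like this. It is an illustrative paraphrase under assumed names, not the exact OrcFileWriterFactory/IcebergFileWriterFactory code:

```java
import java.io.OutputStream;
import java.util.concurrent.Callable;

import io.trino.spi.TrinoException;

import static io.trino.plugin.hive.HiveErrorCode.HIVE_WRITER_OPEN_ERROR;

// Illustrative paraphrase: any exception thrown while opening the output file,
// including an HDFS quota violation, gets collapsed into a generic writer-open error.
final class WriterOpenWrapping
{
    private WriterOpenWrapping() {}

    static OutputStream openOutput(Callable<OutputStream> opener)
    {
        try {
            return opener.call();
        }
        catch (Exception e) {
            // The root cause (e.g. QuotaExceededException) is kept as the cause, but the
            // error type and message no longer tell the user their quota is the problem.
            throw new TrinoException(HIVE_WRITER_OPEN_ERROR, "Error creating ORC file", e);
        }
    }
}
```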