
Trino data lake connectors categorize HDFS quota exceeded exceptions as EXTERNAL but should be USER_ERROR #22462

Open · xkrogen opened this issue Jun 20, 2024 · 6 comments


xkrogen commented Jun 20, 2024

When using a data lake connector (Hive/Iceberg/etc.) to write data to HDFS from Trino, we may see a QuotaExceededException (e.g. namespace quota or disk space quota exceeded). This is an issue the user can correct, but currently we categorize it as an EXTERNAL error.

I would like to work on a fix for this, but have a couple of things I'd like to discuss before moving forward:

  1. My initial approach would be to modify HdfsOutputFile, adding a new catch block here for QuotaExceededException that throws a TrinoException with type USER_ERROR, alongside the existing handling:

    catch (org.apache.hadoop.fs.FileAlreadyExistsException e) {
        createFileCallStat.recordException(e);
        throw withCause(new FileAlreadyExistsException(toString()), e);
    }

    (I think we would also need to modify HdfsOutputStream#write() to catch disk space quota issues specifically, but I need to double-check.)
    However, I noticed that there is only one place in all of the various trino-filesystem-*/trino-hdfs modules where we throw a TrinoException, so I am wondering whether there is a different best practice for surfacing this kind of issue from the FS layer?

  2. Are there analogous concepts on other blob stores (S3/GCS/Azure) that we should handle similarly?
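The change proposed in item 1 could be sketched as follows. This is a self-contained mock, not the real Trino code: `QuotaExceededException`, `TrinoException`, and the `USER_ERROR` tag here are simplified stand-ins for the actual Hadoop and Trino SPI types (the real code would pass an ErrorCodeSupplier such as a USER_ERROR-typed error code).

```java
// Sketch of the proposed catch-and-translate, using stand-in types so it compiles standalone.
public class QuotaTranslationSketch {
    // Stand-in for org.apache.hadoop.hdfs.protocol.QuotaExceededException
    static class QuotaExceededException extends java.io.IOException {
        QuotaExceededException(String message) { super(message); }
    }

    // Stand-in for io.trino.spi.TrinoException; simplified to a plain string tag
    static class TrinoException extends RuntimeException {
        final String errorType;
        TrinoException(String errorType, String message, Throwable cause) {
            super(message, cause);
            this.errorType = errorType;
        }
    }

    // Mimics HdfsOutputFile file creation: quota failures surface as USER_ERROR
    static void create(boolean quotaExceeded) {
        try {
            if (quotaExceeded) {
                throw new QuotaExceededException("The NameSpace quota (directories and files) is exceeded");
            }
            // ... actual file creation would happen here ...
        }
        catch (QuotaExceededException e) {
            // New catch block: preserve the HDFS message so the user sees what to fix
            throw new TrinoException("USER_ERROR", "HDFS quota exceeded: " + e.getMessage(), e);
        }
    }
}
```

The key point is that the original IOException message (which names the exceeded quota) is preserved in the TrinoException, rather than being collapsed into a generic message.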

As one example, when writing ORC data from Hive or Iceberg using OrcFileWriterFactory (handled here) or IcebergFileWriterFactory (handled here), you get an error of type (HIVE|ICEBERG)_WRITER_OPEN_ERROR with a message of simply "Error creating ORC file", which makes it challenging for the end user to understand that there is something on their end to correct (a quota issue).


xkrogen commented Jun 20, 2024

cc @electrum @weijiii @findepi


wendigo commented Jun 20, 2024

cc @pajaks


xkrogen commented Jun 27, 2024

Bump @findepi @pajaks @electrum

If it's helpful to guide the discussion, I can put together a PR following the proposed approach, but I would love feedback before that to avoid duplicate work in the wrong direction 😄


xkrogen commented Jul 23, 2024

One last bump @findepi @pajaks @electrum, and then I'll just work on a PR with the approach I described here.


xkrogen commented Aug 6, 2024

Opened #22962 for this


xkrogen commented Sep 19, 2024

Discussed this on Slack in #core-dev between myself, @wendigo, and @electrum.

A few key points:

  • The contract is that filesystems only throw IOException; these get translated to TrinoException in higher layers.
  • We discussed adding a custom QuotaExceededException extends IOException to the SPI and throwing it as needed. Note that many places in the code would need to change to correctly catch and translate this exception.

The final conclusion was that the "best" path forward is to add a plugin/extension interface that allows custom exception mappings/wrappings. Even a quota exceeded exception can be considered either a user error (if quotas are managed by users) or an external error (if quotas are managed by system admins). This is generally useful functionality, allowing custom interpretation of different error messages and enrichment with site-specific context.

So, this specific change won't be made, but we may look into making it possible to make this categorization in a plugin.
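The extension-point idea from that conclusion could be sketched roughly as below. All names here (ExceptionMapper, ErrorCategory, the quota policy) are hypothetical illustrations, not an existing Trino SPI:

```java
import java.util.List;
import java.util.Optional;

// Hypothetical plugin interface letting each site decide how to categorize filesystem errors.
public class ExceptionMappingSketch {
    // Mirrors Trino's USER_ERROR vs. EXTERNAL split, simplified to two values
    enum ErrorCategory { USER_ERROR, EXTERNAL }

    // A plugin implements this; an empty Optional means "no opinion"
    interface ExceptionMapper {
        Optional<ErrorCategory> categorize(java.io.IOException e);
    }

    // Example site policy: quotas are managed by users here, so quota errors are USER_ERROR
    static final ExceptionMapper QUOTAS_ARE_USER_MANAGED = e ->
            e.getMessage() != null && e.getMessage().contains("quota")
                    ? Optional.of(ErrorCategory.USER_ERROR)
                    : Optional.empty();

    // First mapper with an opinion wins; the default remains EXTERNAL, as today
    static ErrorCategory categorize(List<ExceptionMapper> mappers, java.io.IOException e) {
        return mappers.stream()
                .map(m -> m.categorize(e))
                .flatMap(Optional::stream)
                .findFirst()
                .orElse(ErrorCategory.EXTERNAL);
    }
}
```

A site where admins manage quotas would simply not register the quota mapper, keeping the EXTERNAL classification unchanged.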
