
S3Client with UploadPartCommand is not working after updating to the latest SDK version from 3.583.0 #6859

Open
3 of 4 tasks
ssafayet opened this issue Jan 31, 2025 · 9 comments
Assignees
Labels
bug This issue is a bug. investigating Issue is being investigated and/or work is in progress to resolve the issue. p2 This is a standard priority issue

Comments

@ssafayet

Checkboxes for prior research

Describe the bug

I updated the AWS SDK for JavaScript from 3.583.0 to 3.738.0, and it now throws an error when I send an UploadPartCommand via S3Client.

Params look like this:

new UploadPartCommand({
    Bucket: <BUCKET>,
    Key: <KEY>,
    UploadId: <UPLOAD_ID>,
    PartNumber: 3,
    ContentLength: 20992280,
    Body: <FileStream>
})

Error:

An error was encountered in a non-retryable streaming request.
S3ServiceException [InvalidChunkSizeError]: Only the last chunk is allowed to have a size less than 8192 bytes

But this is not the last chunk; it is chunk 3 of 11. Subsequent chunks have also failed.

Regression Issue

  • Select this option if this issue appears to be a regression.

SDK version number

@aws-sdk/[email protected]

Which JavaScript Runtime is this issue in?

Node.js

Details of the browser/Node.js/ReactNative version

20.13.1

Reproduction Steps

Create a multipart upload with the S3 client (CreateMultipartUploadCommand), then try to upload a FileStream using UploadPartCommand.
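For reference, a minimal sketch of the flow described above (the bucket, key, and file path are placeholders, not values from the reporter's code):

import fs from 'node:fs';
import {
  S3Client,
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
} from '@aws-sdk/client-s3';

const client = new S3Client({}); // region/credentials from the environment
const Bucket = 'my-bucket';      // placeholder
const Key = 'my-key';            // placeholder

// 1. Start the multipart upload.
const { UploadId } = await client.send(
  new CreateMultipartUploadCommand({ Bucket, Key })
);

// 2. Upload a part whose Body is a Node.js file stream; this is the case
//    reported to fail with InvalidChunkSizeError on 3.729.0 and later.
const { ETag } = await client.send(
  new UploadPartCommand({
    Bucket,
    Key,
    UploadId,
    PartNumber: 1,
    ContentLength: fs.statSync('part-1.bin').size, // placeholder file
    Body: fs.createReadStream('part-1.bin'),
  })
);

// 3. Complete the upload with the uploaded part(s).
await client.send(
  new CompleteMultipartUploadCommand({
    Bucket,
    Key,
    UploadId,
    MultipartUpload: { Parts: [{ ETag, PartNumber: 1 }] },
  })
);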

Observed Behavior

The following error occurs:

An error was encountered in a non-retryable streaming request.
S3ServiceException [InvalidChunkSizeError]: Only the last chunk is allowed to have a size less than 8192 bytes

Expected Behavior

FileStream should be uploaded correctly.

Possible Solution

No response

Additional Information/Context

No response

@ssafayet ssafayet added bug This issue is a bug. needs-triage This issue or PR still needs to be triaged. labels Jan 31, 2025
@aBurmeseDev aBurmeseDev self-assigned this Feb 3, 2025
@aBurmeseDev
Member

Hi @ssafayet - thanks for reaching out.

I'm not able to reproduce this error with the reported version on my end. Could you share a code snippet that shows the entire multipart upload, including CreateMultipartUploadCommand, UploadPartCommand, CompleteMultipartUploadCommand, and AbortMultipartUploadCommand?

Any code changes since your last working version?

@aBurmeseDev aBurmeseDev added response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. p2 This is a standard priority issue and removed needs-triage This issue or PR still needs to be triaged. labels Feb 3, 2025
@abustany

abustany commented Feb 5, 2025

We also get the same error since upgrading from 3.713.0 to 3.738.0. In our case, we use a regular PutObjectCommand where the body is a Node.js readable stream (Node.js 20.18.1 on Linux x64). Could this issue be the JS cousin of aws/aws-sdk-cpp#3132?
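For reference, a minimal sketch of the PutObjectCommand-with-a-readable-stream case described above (bucket, key, and file path are placeholders):

import fs from 'node:fs';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const client = new S3Client({});

await client.send(
  new PutObjectCommand({
    Bucket: 'my-bucket',                          // placeholder
    Key: 'my-key',                                // placeholder
    ContentLength: fs.statSync('file.bin').size,  // length of the streamed body
    Body: fs.createReadStream('file.bin'),        // Node.js readable stream
  })
);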

@ssafayet
Author

ssafayet commented Feb 5, 2025

@abustany Yes, same here, and only for a readable stream Body. If I convert the readable stream to a Buffer, the error goes away.

@aBurmeseDev, I will share a code snippet to reproduce this.
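For reference, a minimal sketch of the buffer workaround described in the comment above. The names fileStream, uploadId, and partNumber are placeholders, not taken from the reporter's code:

import { S3Client, UploadPartCommand } from '@aws-sdk/client-s3';

// Read the whole stream into memory so the request body is a single Buffer
// rather than a stream of small chunks.
async function streamToBuffer(stream) {
  const chunks = [];
  for await (const chunk of stream) chunks.push(chunk);
  return Buffer.concat(chunks);
}

const client = new S3Client({});
const body = await streamToBuffer(fileStream); // fileStream: e.g. Multer's file.stream (placeholder)

await client.send(
  new UploadPartCommand({
    Bucket: 'my-bucket',        // placeholders; substitute the real upload's values
    Key: 'my-key',
    UploadId: uploadId,         // from CreateMultipartUpload (placeholder)
    PartNumber: partNumber,     // chunk index (placeholder)
    ContentLength: body.length,
    Body: body,
  })
);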

@github-actions github-actions bot removed the response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. label Feb 6, 2025
@ssafayet
Author

ssafayet commented Feb 6, 2025

@aBurmeseDev Hi, here is my repo with the reproducible code. Kindly follow the README to reproduce the issue, and let me know if you need any help. Note that if you change the S3 client version back to 3.583.0 in that repo, it starts working again.

@macourteau

I'm experiencing the same issue. If it's any help, I've narrowed it down to working with 3.726.0 and failing with 3.729.0.

@aBurmeseDev aBurmeseDev added the investigating Issue is being investigated and/or work is in progress to resolve the issue. label Feb 7, 2025
@ssafayet
Author

ssafayet commented Feb 7, 2025

@macourteau That's right, and I can confirm the same on my side: it works up to 3.726.1 and breaks starting with 3.729.0.

@aBurmeseDev
Member

Thanks everyone for reporting. I was able to reproduce this using @ssafayet's repro, although the error occurs intermittently with different file sizes. I'll bring this up internally for further investigation to understand these inconsistencies and determine the root cause.

@kuhe
Contributor

kuhe commented Feb 7, 2025

As the error says, you'll want the stream that's being sent as the Body to emit chunks of 8192 bytes or greater, preferably 64 KiB. This should be configurable, for example with

fs.createReadStream('filepath', { highWaterMark: 64 * 1024 });

since your example appears to be using a file stream.

We can discuss whether the SDK should perform automatic buffering if you give it smaller chunks, but this might be bad for performance.
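To illustrate what that kind of buffering could look like if done on the caller side (this is not an SDK feature; coalesceChunks is a hypothetical helper for a generic Node.js binary stream):

import { Transform } from 'node:stream';

// Coalesces incoming chunks so that every emitted chunk is at least `minSize`
// bytes, except possibly the final one.
function coalesceChunks(minSize = 64 * 1024) {
  let pending = [];
  let pendingLength = 0;
  return new Transform({
    transform(chunk, _encoding, callback) {
      pending.push(chunk);
      pendingLength += chunk.length;
      if (pendingLength >= minSize) {
        this.push(Buffer.concat(pending, pendingLength));
        pending = [];
        pendingLength = 0;
      }
      callback();
    },
    flush(callback) {
      if (pendingLength > 0) {
        this.push(Buffer.concat(pending, pendingLength)); // last (possibly small) chunk
      }
      callback();
    },
  });
}

// Usage sketch: Body: smallChunkStream.pipe(coalesceChunks())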

@ssafayet
Author

ssafayet commented Feb 8, 2025

@kuhe I am using the resumablejs library to upload a file in chunks from the frontend to my server as a multipart upload, where Multer handles the multipart data. Each chunk (file.stream) is then passed to the S3 client. Are you suggesting that the file.stream I get from Multer for each individual chunk must have the specified highWaterMark value? Has this requirement changed in the recent implementation? My current implementation works up to version 3.726.1. It's odd because file.stream is already a chunk and is itself a readable stream. I thought the last chunk (file.stream) that I send via UploadPartCommand was allowed to have a chunkSize/ContentLength of less than 8192 bytes.

The issue is that Multer uses Busboy to create a readable stream, which doesn’t provide an option to set the highWaterMark value. Even when I supply that value by using a custom Multer instance, it still throws an error. Here is the file.stream after applying highWaterMark:

FileStream {
  _readableState: ReadableState {
    state: 6160,
    highWaterMark: 65536,
    buffer: BufferList { head: [Object], tail: [Object], length: 2 },
    length: 128512,
    pipes: [],
    flowing: null,
    errored: null,
    defaultEncoding: 'utf8',
    awaitDrainWriters: null,
    decoder: null,
    encoding: null,
    [Symbol(kPaused)]: null
  },
  .......
}
