Improving memory performance when it comes to snappy #11177
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@           Coverage Diff            @@
##             main    #11177   +/-  ##
=======================================
  Coverage   91.55%    91.56%
=======================================
  Files         424       425     +1
  Lines       20199     20198     -1
=======================================
+ Hits        18494     18495     +1
+ Misses       1320      1319     -1
+ Partials      385       384     -1

☔ View full report in Codecov by Sentry.
Moving snappy to lazy read from the original payload instead of decompressing it in memory
The issue description makes me believe we were susceptible to a compression bomb attack, similar to the one we had with zstd. However, I remember you adding a test to ensure this is not the case. I'm a bit confused about this PR :-) Can you clarify its purpose?
It is technically susceptible, but in my local testing the snappy compression ratio wasn't high enough to do it within one request. In local testing, a payload at the allowed default content size expanded to roughly 430 MB with snappy (snzip). If you're using the recommended 2 GB of RAM per core, you'd need roughly 5 such requests executing at the same time to breach that memory limit. The potential attack requires more setup than the previously reported zstd CVE and the likelihood is lower, but non-zero. I had added tests to help verify the read-bytes check in #11108, but hadn't copied them out to a new PR.
This PR removes the intermediate step of copying the compressed buffer from the request into memory before decompressing it. There is also the nice side effect of addressing the above concern, since each compressed body is wrapped in a lazy reader.
Thank you for the explanation, it's now clear to me.
// compressReadCloser couples the original compressed reader
// and the compression reader to ensure that the original body
// is correctly closed and its resources are freed.
type compressReadCloser struct {
Why not just `readCloser`?
I mostly wanted to make it clear that the usage is intended for compression.
Description
Moving snappy to a lazy read from the original payload instead of decompressing the whole buffer into memory.
This is something I noticed while trying to introduce support for lz4 compression; it was moved into its own PR at the suggestion of @atoulme.
Testing
All of the tests are already present, so no additional tests were needed.