Make Encode methods accept []byte #36
base: main
Conversation
Codecov Report
Attention: Patch coverage is …
Additional details and impacted files:
@@ Coverage Diff @@
## main #36 +/- ##
==========================================
+ Coverage 65.81% 67.31% +1.49%
==========================================
Files 41 41
Lines 1805 1802 -3
==========================================
+ Hits 1188 1213 +25
+ Misses 447 435 -12
+ Partials 170 154 -16
... and 1 file with indirect coverage changes
Flags with carried forward coverage won't be shown.
This reverts commit ffbce94.
@noisersup this pull request has merge conflicts.
Pull request was converted to draft
must.BeTrue(raw != nil)
return raw, nil
copy(out, raw) // FIXME
That part is very unfortunate. The whole idea was to avoid making extra copies – and here, we have to make an extra copy to satisfy the interface. Maybe we should drop AnyDocument/AnyArray altogether?..
On hold for now
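To make the trade-off concrete, here is a minimal Go sketch of the tension described above. The names mirror the ones in this thread (AnyDocument, RawDocument), but the signatures are assumptions for illustration, not the actual wirebson API.

```go
package main

import "fmt"

// rawDocument stands in for RawDocument: already-encoded BSON bytes.
type rawDocument []byte

// anyDocument stands in for AnyDocument with the Encode-into-buffer shape
// this PR moves towards (assumed signature).
type anyDocument interface {
	Encode(out []byte) ([]byte, error)
}

// A non-raw document can encode directly into out, letting the caller reuse
// one buffer across calls. rawDocument's bytes already exist, though, so
// satisfying the same interface forces the extra copy flagged by the FIXME.
func (raw rawDocument) Encode(out []byte) ([]byte, error) {
	return append(out[:0], raw...), nil
}

func main() {
	var d anyDocument = rawDocument{0x05, 0x00, 0x00, 0x00, 0x00} // empty BSON document
	buf, err := d.Encode(nil)
	if err != nil {
		panic(err)
	}
	fmt.Printf("% x\n", buf)
}
```

Under this shape the copy for raw documents is unavoidable unless the contract itself changes, which is what the following comments consider.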
I also wasn't satisfied with this solution, but it seems to be the only one that works with our current AnyDocument contract.
If we want to stay with Any(Document/Array), we might consider passing a pointer to RawDocument as an argument instead (see the sketch after this comment).
On the other hand, the improvements from Encode of non-raw objects affect the benchmark results significantly, so we could merge this one and create another issue to improve raw objects. What do you think?
EDIT: Thinking about it now, I'm not sure we even have (RawDocument).Encode() benchmarks.
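As a hedged sketch of that alternative: if the method took a *RawDocument destination rather than a []byte, the raw case could hand its bytes over without copying. The EncodeTo name and signature below are hypothetical, not the existing contract.

```go
package main

import "fmt"

type rawDocument []byte

// anyDocument with a hypothetical EncodeTo(*rawDocument) contract instead of
// Encode([]byte); purely for illustration.
type anyDocument interface {
	EncodeTo(dst *rawDocument) error
}

// The raw case simply hands over its bytes: no copy, since the destination
// ends up aliasing the existing backing array.
func (raw rawDocument) EncodeTo(dst *rawDocument) error {
	*dst = raw
	return nil
}

func main() {
	var d anyDocument = rawDocument{0x05, 0x00, 0x00, 0x00, 0x00}
	var dst rawDocument
	if err := d.EncodeTo(&dst); err != nil {
		panic(err)
	}
	fmt.Printf("% x\n", dst)
}
```

The trade-off is that callers who expect an independent copy no longer get one, and non-raw documents would have to manage growing *dst themselves.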
The benchmark results are a bit misleading: the speedup comes (as expected!) from the fact that we reuse the memory. But to benefit from it, we need to actually reuse the memory in FerretDB, and we haven't done that yet.
Let's release the first version of FerretDB v2 first, and then try again with this PR.
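For context, this is the kind of caller-side reuse that comment refers to, sketched with stand-in types; the pattern is an assumption about how FerretDB could use the new signature, not existing code.

```go
package main

import "fmt"

// document is a stand-in for a non-raw *Document.
type document struct{ payload []byte }

// Encode writes the encoding into out, reusing its backing array when it is
// large enough, and returns the resulting slice.
func (d *document) Encode(out []byte) ([]byte, error) {
	return append(out[:0], d.payload...), nil
}

func main() {
	docs := []*document{
		{payload: []byte{0x05, 0x00, 0x00, 0x00, 0x00}},
		{payload: []byte{0x05, 0x00, 0x00, 0x00, 0x00}},
	}

	var buf []byte // kept across iterations instead of allocating per document
	for _, d := range docs {
		var err error
		if buf, err = d.Encode(buf); err != nil {
			panic(err)
		}
		fmt.Printf("% x\n", buf) // in a real handler this would go to the wire
	}
}
```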
@noisersup this pull request has merge conflicts.
Is this PR still needed?
Closes #21.
Encoding benchmarks sped up by around 85-90%. The overall geomean is lower because the logging benchmarks show much smaller improvements.
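A speedup of this size generally only shows up when the benchmark itself reuses the output buffer. A hypothetical benchmark shape, with a stand-in type rather than the repository's actual benchmarks, could look like this:

```go
// bench_test.go: a sketch with a stand-in type, not the real benchmarks.
package bench

import "testing"

type rawDocument []byte

func (raw rawDocument) Encode(out []byte) ([]byte, error) {
	return append(out[:0], raw...), nil
}

func BenchmarkEncodeReusedBuffer(b *testing.B) {
	doc := rawDocument(make([]byte, 1024)) // stand-in payload

	var buf []byte
	var err error

	b.ReportAllocs()
	b.ResetTimer()

	for i := 0; i < b.N; i++ {
		// Passing the same buf back in removes the per-iteration allocation;
		// allocating a fresh slice on every call would hide the improvement.
		if buf, err = doc.Encode(buf); err != nil {
			b.Fatal(err)
		}
	}
}
```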