Measurement Method Is Not Accurate #9
The difference in Revenj speed comes from two main sources:
If you try to use Revenj with some other streams, it won't be able to leverage various optimizations. I wrote two separate classes for the same model, since one is DSL-managed and the other is what people usually write; hence the custom factory, which is able to create and initialize both kinds of instances. This is a really old benchmark and I have not evolved Revenj C# since then. I did the Java version, which at the time worked like Revenj, but today it is like any other serializer. So all in all, your analysis is flawed, and if you are really interested in it, you should redo it with the new information I explained here.
Thanks for the additional information. I have checked further, because this still does not explain why I see different amounts of CPU time for the object factories.
I skipped serialization to measure only the object-creation overhead, and the numbers did normalize. This looks like a CPU cache artefact. I also moved object creation out of the measured loop, created the objects beforehand, and then randomized the array access to get random memory access. The numbers changed a bit, but not much. It looks like you are 60% up to two times faster than Jil with your pregenerated code. The only pity is that your serializer will not create this highly efficient code for arbitrary objects, and you never did implement that. I cannot reproduce your published numbers from your AMD CPU. I have an i7-4770K CPU @ 3.50GHz, where the differences are not so big. So yes, good work, but in the meantime we have, with Utf8Json, a serializer which is even faster for small objects and works with pretty much any object.
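The measurement strategy described above (allocate all instances before the timed loop, then visit them in randomized order so sequential prefetching does not mask cache-miss costs) can be sketched as follows. This is an illustrative Java sketch, not the benchmark's actual code: the `Message` class and `serialize` function are hypothetical stand-ins, and a real run would plug in the libraries under test.

```java
import java.util.Random;

public class BenchSketch {
    // Hypothetical payload, standing in for a type like Models.Small.Message.
    static final class Message {
        final String message;
        final int version;
        Message(String message, int version) { this.message = message; this.version = version; }
    }

    // Stand-in serializer; a real run would call the library under test here.
    static String serialize(Message m) {
        return "{\"message\":\"" + m.message + "\",\"version\":" + m.version + "}";
    }

    public static void main(String[] args) {
        int n = 100_000;

        // 1. Allocate all instances BEFORE the measured loop, so the timer
        //    captures serialization cost rather than allocation/GC cost.
        Message[] pool = new Message[n];
        for (int i = 0; i < n; i++) pool[i] = new Message("msg" + i, i);

        // 2. Build a shuffled visit order (Fisher-Yates) so sequential
        //    prefetching does not hide the cache misses a real workload pays.
        int[] order = new int[n];
        for (int i = 0; i < n; i++) order[i] = i;
        Random rnd = new Random(42);
        for (int i = n - 1; i > 0; i--) {
            int j = rnd.nextInt(i + 1);
            int tmp = order[i]; order[i] = order[j]; order[j] = tmp;
        }

        // 3. Measure only the serialization calls; accumulate a checksum so
        //    the JIT cannot eliminate the work as dead code.
        long checksum = 0;
        long start = System.nanoTime();
        for (int idx : order) checksum += serialize(pool[idx]).length();
        long elapsed = System.nanoTime() - start;
        System.out.println("checksum=" + checksum + " ns=" + elapsed);
    }
}
```

The checksum accumulation matters: without an observable result, a JIT compiler may remove the entire loop and the timing becomes meaningless.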
The latest bench was done on Windows/Intel with various updates to the libraries. The Java DSL version is much closer to the real limit. I am not sure how fast Utf8Json is, but if it's noticeably slower than that, there is still a lot of room for improvement. I was rather surprised, when I originally wrote this, at how many libraries fail to work with anything non-trivial. And people blamed me/this benchmark, insisting their favorite library couldn't be buggy, instead of accepting that maybe, just maybe, they were drinking some kool-aid and jumping to conclusions.
I was wondering why Revenj does so much better in your tests, especially since I know Jil is near the optimum of what can be done, based on my measurements here:
I added the Revenj 1.5.1 NuGet package and tried it against Jil.
According to my tests, Jil is at least 4 times faster. Then I took a look at your test suite and found that you use factory delegates, although at first glance everything points to the same factory.
This factory uses dynamic for some reason, which looks strange, but OK:
After pulling out the heavy stuff like Intel's VTune, I checked whether your micro-benchmark shows differences in cache-level behavior or other exotic effects. It turns out it is much simpler: you are creating entirely different objects which allocate different amounts of data. Models.Small.Message is the type containing the factory used for Jil and the others, but SmallObjects.Message is a different type, defined in Revenj.Serialization.dll, which contains generated code.
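The pitfall described here, two factories with identical signatures that silently construct different classes, can be illustrated with a small sketch. The Java code below is hypothetical: the class names merely mirror the roles of the hand-written Models.Small.Message and the generated SmallObjects.Message. It shows why two identical-looking factory call sites can still measure different work.

```java
import java.util.function.Supplier;

public class FactoryPitfall {
    // Plain POJO, playing the role of the hand-written model class.
    static final class PlainMessage {
        String message;
        int version;
    }

    // Stand-in for a DSL-generated class: same logical shape, but a
    // different type carrying extra precomputed state (e.g. a buffer).
    static final class GeneratedMessage {
        String message;
        int version;
        final byte[] scratch = new byte[64];
    }

    public static void main(String[] args) {
        // Both factories have the same Supplier<Object> signature, so a
        // benchmark harness cannot tell them apart at the call site...
        Supplier<Object> plainFactory = PlainMessage::new;
        Supplier<Object> generatedFactory = GeneratedMessage::new;

        // ...yet they allocate different classes with different footprints,
        // so two "identical" benchmark runs are measuring different work.
        System.out.println(plainFactory.get().getClass().getSimpleName());
        System.out.println(generatedFactory.get().getClass().getSimpleName());
    }
}
```

The fix in a benchmark is to make the constructed type explicit at the call site (or assert on it), so every contender is handed the exact same object.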
That is a little bit of cheating, because you will win every startup measurement with pregenerated code, which is not the fairest comparison. But anyway, you are faster; that is OK.
Now I took the liberty of adding Revenj 1.5 from NuGet to your test and measured your serializer with the same data object; for serialization, Jil is indeed nearly two times faster than Revenj.
If I include both Serialize and Deserialize, then Revenj is over two times slower when the same data object is used, rather than pregenerated code in conjunction with a serializer that has no NuGet package.
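A fairer round-trip measurement, where every contender serializes and then deserializes the same input object, might look like the following sketch. It is written in Java with a hypothetical Message type and a trivial stand-in codec; a real comparison would substitute each library's serialize/deserialize calls in the marked spots.

```java
public class RoundTrip {
    // Hypothetical payload, standing in for the shared test object.
    static final class Message {
        final String message;
        final int version;
        Message(String message, int version) { this.message = message; this.version = version; }
    }

    // Stand-in codec; a real comparison would call the same library's
    // serialize and deserialize here for each contender.
    static String serialize(Message m) {
        return m.message + "|" + m.version;
    }
    static Message deserialize(String s) {
        int sep = s.lastIndexOf('|');
        return new Message(s.substring(0, sep), Integer.parseInt(s.substring(sep + 1)));
    }

    // Time the full round trip on the SAME input object for every library,
    // so no contender benefits from pregenerated, type-specific code paths.
    static long timeRoundTrip(Message input, int iterations) {
        long checksum = 0;
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            Message back = deserialize(serialize(input));
            checksum += back.version; // keep the JIT from eliding the work
        }
        long ns = System.nanoTime() - start;
        if (checksum < 0) throw new IllegalStateException("overflow");
        return ns;
    }

    public static void main(String[] args) {
        long ns = timeRoundTrip(new Message("hello", 1), 100_000);
        System.out.println(ns + " ns for 100k round trips");
    }
}
```

Because the same instance feeds every library, differences in allocation size or generated code for a different type can no longer skew the comparison.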
I can fully support your claim on your main page https://github.com/ngs-doo/json-benchmark
This also includes you.
Please update your test suite with a fair comparison of the different serializers, one that leads to reproducible results. By the way, Utf8Json is even faster, also with your test suite.