Some questions about benchmarks #6
Actually, I used your benchmark's data and structures to test these libraries, and the results show that sonic's performance is better than your charts suggest:
I guess it is because I added some warm-up code before testing: sonic's JIT feature slows down the first serialization/deserialization since it needs some time to compile the codec.
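For reference, here is roughly what such a warm-up looks like in a Go benchmark. This is a minimal sketch; the `payload` type is a made-up stand-in rather than the actual benchmark data:

```go
package bench

import (
	"testing"

	"github.com/bytedance/sonic"
)

// payload is a hypothetical stand-in for the benchmark dataset.
type payload struct {
	Name  string `json:"name"`
	Count int    `json:"count"`
}

func BenchmarkSonicMarshalWarm(b *testing.B) {
	v := payload{Name: "example", Count: 42}

	// Warm-up: the first call triggers sonic's JIT compilation of the codec,
	// so run it once before the timer starts to measure steady-state cost only.
	if _, err := sonic.Marshal(&v); err != nil {
		b.Fatal(err)
	}
	b.ResetTimer()

	for i := 0; i < b.N; i++ {
		if _, err := sonic.Marshal(&v); err != nil {
			b.Fatal(err)
		}
	}
}
```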
Hi, I apologize for the delay. The benchmark code needed to be cleaned up first. You can see it here: https://github.com/go-json-experiment/jsonbench
Benchmark comparisons were performed based on the default behavior of each package.
That's somewhat unfair to the other implementations. Every package has some degree of "warm-up" logic. If we want to be more fair, we should perform a single warm-up call for every implementation before measuring.
I'm using an AMD Ryzen 9 5900X, and you can see my benchmark results here: https://raw.githubusercontent.com/go-json-experiment/jsonbench/master/results/results.log
I would argue that we don't want to hide or amortize the warm-up cost, because it will matter for many real applications.
Thanks for the reply. The warm-up is necessary for sonic, and I added it to every test, including the other libraries, for the sake of fairness. In practice, the JIT compilation is triggered only once (or within the first few minutes) for most JIT-based applications; after that, the remaining run time involves no JIT work, which means it has much less influence on long-running apps. However, your benchmarks run for a short time (no more than 10 seconds each), which isn't representative of a production environment.
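For long-running services, sonic also documents a `Pretouch` helper that compiles the codec for known types at startup, so the JIT cost is paid before any traffic is served. A minimal sketch, using a hypothetical `payload` type:

```go
package main

import (
	"fmt"
	"reflect"

	"github.com/bytedance/sonic"
)

// payload is a hypothetical application type; substitute your own schemas.
type payload struct {
	Name  string `json:"name"`
	Count int    `json:"count"`
}

func main() {
	// Compile sonic's codec for the type at startup so the JIT cost is paid
	// once, before the service starts handling requests.
	if err := sonic.Pretouch(reflect.TypeOf(payload{})); err != nil {
		panic(err)
	}

	out, err := sonic.Marshal(payload{Name: "example", Count: 42})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```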
Would it be fair to publish benchmarks for both scenarios so that people can consider them in the context of their use cases?
We can always split the benchmark charts across more dimensions to make them more accurate, but at the cost of making them harder to comprehend. At present, we already have 6 charts. Splitting across "startup performance" versus "steady-state performance" would turn this into 12 charts. Not to mention, we haven't thrown in other reasonable dimensions to split on (e.g., different architectures). So if we split on this dimension, we should arguably split on the others as well.
Given that, I don't think we need to make this distinction between startup cost and steady-state cost. And if we're not going to distinguish between the two, including startup cost in the benchmark for all implementations is the fairest thing to do.
I'm okay with that. I just don't want us to show only steady-state performance, because that is only half of the equation :)
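One way to publish both scenarios would be to keep two benchmark variants per implementation, one startup-inclusive and one steady-state. A sketch under that assumption; `marshalFunc` and `payload` are illustrative placeholders, with `encoding/json` standing in for whichever implementation is under test:

```go
package bench

import (
	"encoding/json"
	"testing"
)

// marshalFunc abstracts over the implementation under test (encoding/json,
// sonic, go-json, ...); encoding/json is used here only as a placeholder.
var marshalFunc = json.Marshal

type payload struct {
	Name  string `json:"name"`
	Count int    `json:"count"`
}

// Startup-inclusive: no warm-up, so any one-time cost (JIT, cache building)
// is charged to the measurement.
func BenchmarkMarshalStartupInclusive(b *testing.B) {
	v := payload{Name: "example", Count: 42}
	for i := 0; i < b.N; i++ {
		if _, err := marshalFunc(&v); err != nil {
			b.Fatal(err)
		}
	}
}

// Steady-state: pay the one-time cost before the timer starts.
func BenchmarkMarshalSteadyState(b *testing.B) {
	v := payload{Name: "example", Count: 42}
	if _, err := marshalFunc(&v); err != nil {
		b.Fatal(err)
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if _, err := marshalFunc(&v); err != nil {
			b.Fatal(err)
		}
	}
}
```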
I want to check the benchmark code referenced in https://github.com/go-json-experiment/json#performance, but the repo you mentioned (https://github.com/dsnet/jsonbench) is missing. Where can I find the code?
BTW, you mentioned that SonicJSON doesn't support sorting the keys of a map[string]any. In fact, there is an option, encoder.SortKeys or sonic.Config.SortKeys, that supports this, so maybe you can use it in your benchmarks.