usdc_reader benchmark #15303
base: develop
Conversation
AER Report: CI Core ran successfully ✅
AER Report: Operator UI CI ran successfully ✅
Looks good!
core/capabilities/ccip/ccip_integration_tests/usdcreader/usdcreader_test.go
@@ -201,7 +207,147 @@ func Test_USDCReader_MessageHashes(t *testing.T) {
		})
	}
}
func Benchmark_MessageHashes(b *testing.B) {
Should we add the gobench results above the test as a comment? (as a reference)
Ideally we want to track performance regressions somehow (maybe it's not in scope for the 1st iteration, but I wanted to call it out anyway). Some time ago I saw this tool https://pkg.go.dev/golang.org/x/perf/cmd/benchstat, but I'm not sure how well it fits our use case.
I agree, benchmark tests provide very little value if they aren't compared against a baseline. We could add a tool like benchstat to CI and fail the build if we notice a regression.
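As a CI-workflow sketch of what that comparison could look like (the package path is the one from this PR; `-count` and `-run '^$'` are standard `go test` flags, and the old.txt/new.txt file names are hypothetical):

```shell
# On the base branch: record a baseline. -run '^$' skips regular tests,
# -count 10 gives benchstat enough samples to estimate variance.
go test -run '^$' -bench 'Benchmark_MessageHashes' -count 10 \
  ./core/capabilities/ccip/ccip_integration_tests/usdcreader/ > old.txt

# On the PR branch: re-run with identical flags, then compare.
# benchstat reports the delta per benchmark and whether it is
# statistically significant; CI could fail on a large regression.
go test -run '^$' -bench 'Benchmark_MessageHashes' -count 10 \
  ./core/capabilities/ccip/ccip_integration_tests/usdcreader/ > new.txt
benchstat old.txt new.txt
```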
}{
	{"Small_Dataset", 100, 1, 5},
	{"Medium_Dataset", 10_000, 1, 10},
	{"Large_Dataset", 100_000, 1, 50},
Nice, was there any major difference between these 3 datasets?
// Benchmark_MessageHashes/Small_Dataset-14 3723 272421 ns/op 126949 B/op 2508 allocs/op
// Benchmark_MessageHashes/Medium_Dataset-14 196 6164706 ns/op 1501435 B/op 20274 allocs/op
// Benchmark_MessageHashes/Large_Dataset-14 7 163930268 ns/op 37193160 B/op 463954 allocs/op
It's growing linearly. I think the remappings we have (first to the messageSentEvents and then to out) are a big offender (apart from the GetQuery).
// Create log entry
logs = append(logs, logpoller.Log{
	EvmChainId: ubig.New(big.NewInt(int64(uint64(source)))),
hmm, is that right? We should use the chain_id, not the chain selector that is stored in source, right?
The simulated backend is created with the source chain. The only place we use the ChainID is for the transactor creation (we don't have any other choice there; geth complains about it otherwise).