Running benchmarks with sequencer locally #1629
Conversation
A few comments. Some are just for my own understanding.
sequencer/src/context.rs
Outdated
loop {
    match event_stream.next().await {
        None => {
            panic!("Error! Event stream completed before consensus ended.");
It seems strange that benchmarking logic would add panics. Should these errors be logged instead?
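A minimal sketch of the suggested alternative: log the condition and let the loop exit instead of panicking. This is illustrative only, with placeholder types, and uses `eprintln!` as a stand-in for the `tracing::error!` macro used in the surrounding code:

```rust
// Sketch: instead of panicking when the event stream ends, log the
// condition and signal the caller to stop, so the node can shut down
// gracefully. The event type and function name here are placeholders.
fn handle_next_event(next: Option<&str>) -> bool {
    match next {
        None => {
            // Real code would use tracing::error! here.
            eprintln!("error: event stream completed before consensus ended");
            false // tell the caller's loop to stop
        }
        Some(event) => {
            println!("handling event: {event}");
            true
        }
    }
}

fn main() {
    assert!(handle_next_event(Some("decide")));
    assert!(!handle_next_event(None));
}
```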
sequencer/src/context.rs
Outdated
}
tracing::warn!("starting consensus");
self.handle.read().await.hotshot.start_consensus().await;

#[cfg(feature = "benchmarking")]
if has_orchestrator_client {
Benchmarking logic is adding a lot of complexity; can we hide it in a function or method? I'm not sure of the best strategy, but maybe it could just be another method on `SequencerContext` that we call from here...
+1 Yeah I'm also thinking about that
I think you could just add a `benchmark()` method, gated by the `benchmarking` feature, to this same `impl`. Then just call that method on line 267 (instead of `has_orchestrator_client = true`). Then you wouldn't need the `has_orchestrator_client` variable or the following if statement that uses it.
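A rough sketch of that suggestion, assuming a `benchmarking` Cargo feature; the struct and method bodies are placeholders, not the actual sequencer code:

```rust
// Sketch: move benchmarking logic into a feature-gated method on
// SequencerContext so the main flow stays clean. Placeholder types.
struct SequencerContext;

impl SequencerContext {
    fn start_consensus(&self) -> &'static str {
        "consensus started"
    }

    // Only compiled when the `benchmarking` feature is enabled, so
    // non-benchmarking builds carry no extra code or state.
    #[cfg(feature = "benchmarking")]
    fn benchmark(&self) {
        // collect rounds, throughput, etc.
    }
}

fn main() {
    let ctx = SequencerContext;
    println!("{}", ctx.start_consensus());

    // The call site replaces the has_orchestrator_client flag and its if:
    #[cfg(feature = "benchmarking")]
    ctx.benchmark();
}
```

When the feature is off, `cfg` removes both the method and its call site at compile time, so no runtime flag is needed.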
After the restructure (motivation here: #1695), I moved all the benchmarking logic to `submit-transactions.rs`, since we already calculated latency there. So now benchmarking doesn't have its own function; it lives together with the latency calculation after subscribing to `availability/stream/blocks/{}`.
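A hedged sketch of the latency bookkeeping described above: record when each transaction is submitted, then compute its latency when it appears in a block from the availability stream. Names and types are illustrative, not the actual `submit-transactions.rs` code:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Sketch: track submit times by transaction id and record latency
// when the block stream delivers a block containing that transaction.
struct LatencyTracker {
    submitted: HashMap<u64, Instant>, // tx id -> submit time
    latencies: Vec<Duration>,
}

impl LatencyTracker {
    fn new() -> Self {
        Self { submitted: HashMap::new(), latencies: Vec::new() }
    }

    fn on_submit(&mut self, tx_id: u64) {
        self.submitted.insert(tx_id, Instant::now());
    }

    // Called when a streamed block is seen to include `tx_id`.
    fn on_included(&mut self, tx_id: u64) {
        if let Some(start) = self.submitted.remove(&tx_id) {
            self.latencies.push(start.elapsed());
        }
    }

    fn average(&self) -> Option<Duration> {
        let n = self.latencies.len() as u32;
        if n == 0 {
            return None;
        }
        Some(self.latencies.iter().sum::<Duration>() / n)
    }
}

fn main() {
    let mut tracker = LatencyTracker::new();
    tracker.on_submit(1);
    tracker.on_included(1);
    println!("avg latency: {:?}", tracker.average().unwrap());
}
```

The average could then be appended to the benchmark results file alongside the other metrics.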
sequencer/src/context.rs
Outdated
Some(Event { event, .. }) => {
    match event {
        EventType::Error { error } => {
            tracing::error!("Error in consensus: {:?}", error);
Not sure how this is handled elsewhere, but would it be useful for introspection to distinguish benchmarking log events with a specific prefix?
Also cc @babdor for future tooling with benchmarks
LGTM.
Closes #1628 #1695
This PR:
This PR does not:
- Design `start_round` and `end_round`; they're hard-coded now. This will be designed later.

Key places to review:
How to test this PR:
Create `results.csv` under `scripts/benchmarks_results`, then run `just demo-native-benchmark` to test it.