tracking issue (brain dump): unified scheduler in block-production #3832
Comments
Awesome. I'm excited to see this in mainnet, and generally more variability in scheduling is good for network health & security. Some initial thoughts about trade-offs between unified (unbatched) and central (batched) scheduler variants wrt block-production:
Thanks for the input.
Certainly, there will be some extra overhead left, no matter how much I optimize the runtime. However, I think this overhead isn't actually that large, as I demoed with a hacky bench in the past. In the future, I'll introduce a fast path for unbatched tx execution; it will have less CPU d-cache thrashing, fewer allocations, and fewer instructions. Also, in principle most of the CPU time should be spent spinning on JIT-compiled on-chain program code, which doesn't benefit much from batching. With all this (admittedly personal) analysis, I'm betting the overhead can be offset.
Thanks for working on TransactionView, by the way. :) I think it benefits both schedulers to roughly the same extent. Until the unified scheduler migration is complete, I can bench with some local changes once #3820 is merged.
I'm interested in seeing the central scheduler's improved perf numbers after SIMD-0083, which will lift the artificial limitation currently imposed only on the central scheduler. Regarding the shared accounts cache you mentioned, the unified scheduler will also bypass it as part of the unbatched fast-path impl.
Yeah, I hope this assumption will hold.
Problem
My working style is messy.
Proposed Solution
Create a sensible place for team collaboration.
And here it is.
Overall status
- design: 99% done
- impl: 95% done
- clean up: 90% done (CI is green)
- perf eval: 80% done
- write tests for new code: 20% done
- code review: 0% done
Code
The all-in-one messy PR: #2325
List of reviewed (and upcoming) PRs:
Proposition/justification
tldr: unbatched scheduler is a thing.
In its final state, simple tx throughput is roughly the same as the central scheduler's (see the early bench results below). I think it will remain competitive even after the central scheduler is improved, because it has been shown that the unbatched style of the unified scheduler can be optimized extensively, to the point that its inherent overhead compared to batching can well be offset by its advantages: low latency, maximum parallelism, and local-fee-market adherence.
All of these advantages contribute to profit maximization for leaders, which is the ultimate utility of any block-production method from the operator's standpoint. Low latency means higher-paying transactions arriving in the middle of a leader's slots can be included in blocks in a timely fashion. Maximum parallelism means buffered non-conflicting higher-paying transactions can be cleared as fast as possible. Finally, local-fee-market adherence means denser blocks simply by not idling worker threads. These advantages are reflected in the results of `simulate-block-production`, as charted in the Google Sheet. All that said, these claims actually need to be proven on mainnet-beta... Finally, sharing the scheduling logic between block-verification and block-production opens up the possibility of wall-time-based block metering instead of CUs, without introducing unacceptable variance in block-processing duration.
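For context on how operators would eventually exercise these claims, here is a hedged sketch of the validator invocation. `--block-production-method` is an existing agave-validator flag; the `unified-scheduler` value is an assumption based on this issue's naming and would only become selectable once this work lands:

```sh
# existing batched method (plus the usual identity/ledger/entrypoint args)
agave-validator --block-production-method central-scheduler ...

# assumed opt-in value once the unified scheduler is wired into block production
agave-validator --block-production-method unified-scheduler ...
```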
Perf eval
On the same machine (AMD EPYC 7513, 32 cores). Some numbers are taken from not-yet-pushed commits.
Note: there's an upcoming improvement for `central-scheduler` with the transaction view, so the numbers are tentative.

- `solana-banking-bench` (4 non-vote threads)
  - `--write-lock-contention none`:
  - `--write-lock-contention none` (large batch):
  - `--write-lock-contention full`:
- `solana-banking-bench` (16 non-vote threads)
  - `--write-lock-contention none`:
  - `--write-lock-contention none` (large batches):
  - `--write-lock-contention full`:
- `./multinode-demo/bench-tps.sh` (4 non-vote threads)
- `./multinode-demo/bench-tps.sh` (16 non-vote threads) (note: I commented out some code to disable block cost limits)
- `simulate-block-production`: google sheets
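For reference, here is a hedged sketch of how these benchmarks can be invoked from the agave repo. The `--write-lock-contention` values come straight from the labels above; the exact flags for choosing the non-vote worker-thread count and the block-production method are omitted and may differ, so check each tool's `--help`:

```sh
# solana-banking-bench rows: vary write-lock contention as in the labels above
cargo run --release -p solana-banking-bench -- --write-lock-contention none
cargo run --release -p solana-banking-bench -- --write-lock-contention full

# multinode-demo rows: end-to-end TPS via the bundled demo script
./multinode-demo/bench-tps.sh

# simulate-block-production row: ledger-tool subcommand replaying a captured
# ledger (required args such as the starting slot are omitted here)
agave-ledger-tool simulate-block-production
```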