perf(tx-pool): reuse write lock to insert txs batch #12806
base: main
Conversation
Force-pushed from c3bdc95 to feab580
thanks, I think this makes sense. Even inserting a large number of transactions shouldn't block for too long; I can't remember the exact numbers, but it should only be in the µs range.

pedantic doc nit
// If at least one transaction was added successfully, then we enforce the pool size limits.
// A micro-optimization is to reuse the `pool` write lock in the `added` scope above.
this comment seems a bit out of place here?
can we move this into the scope above?
Unlazied myself and implemented it right in this PR now 😅.
for the rpc part, this could be a dedicated task that receives all new txs, buffers them, and submits them to the pool in batches
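A minimal sketch of what such a dedicated batching task could look like, assuming a tokio mpsc channel feeds it new transactions; `PooledTx`, `batch_insert_task`, the batch size, and the flush interval are all hypothetical stand-ins, not the actual reth types:

```rust
use tokio::sync::mpsc;
use tokio::time::{interval, Duration};

// Hypothetical transaction type standing in for the pool's transaction type.
struct PooledTx;

// Sketch of a dedicated task: receive new txs, buffer them, and flush them to
// the pool in batches, either when the buffer fills up or on a timer tick.
// `flush` stands in for a batch insertion such as `pool.add_transactions`.
async fn batch_insert_task(
    mut rx: mpsc::Receiver<PooledTx>,
    flush: impl Fn(Vec<PooledTx>),
) {
    const MAX_BATCH: usize = 256; // hypothetical batch size
    let mut buf: Vec<PooledTx> = Vec::with_capacity(MAX_BATCH);
    let mut tick = interval(Duration::from_millis(5)); // hypothetical flush interval

    loop {
        tokio::select! {
            maybe_tx = rx.recv() => match maybe_tx {
                Some(tx) => {
                    buf.push(tx);
                    if buf.len() >= MAX_BATCH {
                        flush(std::mem::take(&mut buf));
                    }
                }
                None => {
                    // channel closed: flush whatever is left and exit
                    if !buf.is_empty() {
                        flush(std::mem::take(&mut buf));
                    }
                    break;
                }
            },
            _ = tick.tick() => {
                if !buf.is_empty() {
                    flush(std::mem::take(&mut buf));
                }
            }
        }
    }
}
```

This way the RPC handlers only push into the channel and never contend for the pool's write lock directly.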
Force-pushed from feab580 to cd971ae
sidecar: sidecar.clone(),
},
propagate: true,
test_pool.add_transactions(
I changed the test to use the pub `add_transactions` function to ensure the pool always enforces the size limit in "real usage". The previous test exercised two private functions in conjunction, which wasn't as strong a guarantee (removing the discard handling in `add_transactions` would still pass the test).
I cannot work on this immediately, so I created #12811. Maybe someone familiar with the RPC codebase can kill it better!
@mattsse Kind reminder that this is available for review again 🙏.
if discarded.is_empty() {
    return added
}
if !discarded.is_empty() {
why this change?
This is only a refactoring that doesn't change semantics (sketched below).

Previously:

- If `discarded` is empty, then early return `added`
- Else make some changes to `added`
- Return `added` at the end

Now:

- If `discarded` is not empty, then make changes to `added`
- Return `added` at the end

It also groups the discard handling in a self-explanatory `if` scope 🙏.
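For illustration, a minimal before/after sketch of that control flow, using a toy `Vec<u32>` in place of the real pool types (the `retain` call is just a placeholder for the actual discard handling):

```rust
// Before: early return when nothing was discarded.
fn handle_before(mut added: Vec<u32>, discarded: Vec<u32>) -> Vec<u32> {
    if discarded.is_empty() {
        return added;
    }
    // ... make some changes to `added` based on `discarded` ...
    added.retain(|tx| !discarded.contains(tx));
    added
}

// After: the same changes grouped in one `if !discarded.is_empty()` scope,
// with `added` returned at the end on both paths.
fn handle_after(mut added: Vec<u32>, discarded: Vec<u32>) -> Vec<u32> {
    if !discarded.is_empty() {
        // ... make some changes to `added` based on `discarded` ...
        added.retain(|tx| !discarded.contains(tx));
    }
    added
}
```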
// delete any blobs associated with discarded blob transactions
self.delete_discarded_blobs(discarded.iter());
this logic has been removed now, right?
It's moved to the only call site (`add_transactions`) now, inside the `if !discarded.is_empty()` check.
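Roughly, the shape after the move might look like this self-contained sketch (hypothetical `BlobStore` and hash types; the real `PoolInner` differs):

```rust
// Hypothetical blob store; the real implementation lives elsewhere in the pool.
struct BlobStore;

impl BlobStore {
    fn delete(&self, _tx_hash: &u64) { /* drop the blob sidecar for this tx */ }
}

struct PoolInner {
    blob_store: BlobStore,
}

impl PoolInner {
    // Stand-in for `delete_discarded_blobs`; now called only from
    // `add_transactions` instead of from a private helper.
    fn delete_discarded_blobs<'a>(&self, hashes: impl Iterator<Item = &'a u64>) {
        for hash in hashes {
            self.blob_store.delete(hash);
        }
    }

    fn add_transactions(&self, discarded: Vec<u64>) {
        if !discarded.is_empty() {
            // ... enforce size limits, notify listeners ...
            // delete any blobs associated with discarded blob transactions
            self.delete_discarded_blobs(discarded.iter());
        }
    }
}
```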
Currently, each transaction insertion competes for the pool's write lock. This can become heavily congested on high-performance chains that expect 100k+ new transactions every second. It is even more challenging when the block time is small: canonical state changes also compete for the write lock several times a second.
Ideally, the RPC server could buffer transactions and insert them as a batch when the pool write lock becomes available, instead of competing for the write lock (and blocking anyway) on every insertion request.
I guess this PR is a baby step towards that 😅, starting by reusing a single write lock in `PoolInner::add_transactions`.
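In other words, a minimal sketch of the locking change, using `std::sync::RwLock` and toy types (the actual `PoolInner` uses different types and does much more per insertion):

```rust
use std::sync::RwLock;

struct Tx;

struct TxPool {
    txs: Vec<Tx>,
}

struct PoolInner {
    pool: RwLock<TxPool>,
}

impl PoolInner {
    // Before: every transaction takes the write lock on its own, so a burst of
    // insertions contends with all other writers (including canonical state
    // updates) once per transaction.
    fn add_transaction(&self, tx: Tx) {
        let mut pool = self.pool.write().unwrap();
        pool.txs.push(tx);
    }

    // After: one write-lock acquisition covers the whole batch; size limits
    // can then be enforced under the same lock.
    fn add_transactions(&self, txs: impl IntoIterator<Item = Tx>) {
        let mut pool = self.pool.write().unwrap();
        for tx in txs {
            pool.txs.push(tx);
        }
    }
}
```

The win is that a burst of N insertions pays the lock acquisition (and the contention with canonical state updates) once instead of N times.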