
fix: reduce max packet receive time during leader window #2801

Merged
2 commits merged into anza-xyz:master on Oct 8, 2024

Conversation

cavemanloverboy

Problem

[screenshot]

In certain cases (if the transaction container is emptied during a leader's window), the scheduler controller may wait up to 100 milliseconds for incoming packets.

Summary of Changes

Reduce the constant max wait from 100 ms to 10 ms.
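
As a minimal sketch of what the change amounts to (the constant name here follows the PR title and is illustrative, not necessarily the exact identifier in the agave scheduler controller):

```rust
use std::time::Duration;

// Before this PR, the scheduler controller could block for up to 100 ms
// waiting on incoming packets when its transaction container was empty:
// const MAX_PACKET_RECEIVE_TIME: Duration = Duration::from_millis(100);

// After this PR, the blocking receive is capped at 10 ms, so the controller
// returns to scheduling work much sooner.
const MAX_PACKET_RECEIVE_TIME: Duration = Duration::from_millis(10);
```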

@mergify mergify bot requested a review from a team August 30, 2024 18:45
@apfitzge apfitzge added the CI Pull Request is ready to enter CI label Aug 30, 2024
@anza-team anza-team removed the CI Pull Request is ready to enter CI label Aug 30, 2024

@apfitzge apfitzge left a comment


@bw-solana Could you review this simple change? I have tested very similar changes previously.

As @cavemanloverboy mentioned, something that can happen is we schedule or drop all txs in our buffer and enter a 100ms receive while leader - this can be extremely slow.

Before we get a better type for deserialization I think this is a reasonable stop-gap solution.
The code change itself is straightforward, but since I've made previous changes here in the past, I think it's a bit sketch for me to approve this.
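
A rough sketch of the behavior being described, using hypothetical names (the real logic lives in the scheduler controller's receive path; this is not the exact agave code):

```rust
use std::time::Duration;

// Cap on how long the controller blocks waiting for packets
// (10 ms after this PR; previously 100 ms).
const MAX_PACKET_RECEIVE_TIME: Duration = Duration::from_millis(10);

/// Hypothetical helper: choose how long to block on the packet channel.
fn packet_recv_timeout(container_is_empty: bool) -> Duration {
    if container_is_empty {
        // Everything in the buffer was scheduled or dropped, so block briefly
        // for new packets rather than spinning.
        MAX_PACKET_RECEIVE_TIME
    } else {
        // Work is still pending; poll without blocking and go schedule it.
        Duration::ZERO
    }
}
```

During the leader window, time spent blocked in that receive is time taken directly away from scheduling, which is why the size of the cap matters.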

@apfitzge apfitzge requested a review from bw-solana August 30, 2024 21:40

@bw-solana bw-solana left a comment


LGTM.

"Okay, it's bad we'll wait up to 100ms, but surely we'll hit the packet limit first in the practical case"

checks packet limit

🤡

@cavemanloverboy
Author

LGTM.

"Okay, it's bad we'll wait up to 100ms, but surely we'll hit the packet limit first in the practical case"

checks packet limit

🤡

so then I said it's okay, we just need to get 700,000 packets
[screenshot]
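
(For scale, assuming the 700,000 figure is the receive loop's packet cap: exhausting it inside a single 100 ms window would take roughly 700,000 / 0.1 s = 7,000,000 packets per second, so in practice the timeout, not the packet limit, is what ends the wait.)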

@apfitzge apfitzge added the CI Pull Request is ready to enter CI label Oct 8, 2024
@anza-team anza-team removed the CI Pull Request is ready to enter CI label Oct 8, 2024
@apfitzge

apfitzge commented Oct 8, 2024

I never added the damn CI flag. Will watch this and merge.

@apfitzge apfitzge added the automerge automerge Merge this Pull Request automatically once CI passes label Oct 8, 2024
@mergify mergify bot merged commit 20e0df4 into anza-xyz:master Oct 8, 2024
42 checks passed
@Jac0xb

Jac0xb commented Oct 9, 2024

What are the implications of this change?

@cavemanloverboy
Author

What are the implications of this change?

from OP: "In certain cases (if the transaction container is emptied during a leader's window), the scheduler controller may wait up to 100 milliseconds for incoming packets."

There is a (somewhat rare) case where the scheduler will collect packets for 100 ms before it even begins scheduling. That's 1/4 of a slot... This change reduces that wait to 10 ms so that the scheduler is never left twiddling its thumbs waiting for packets.
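
For context, assuming the standard ~400 ms slot time: a 100 ms blocking receive is about 25% of the slot in the worst case, while 10 ms is about 2.5%.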
