
Run again with 480 engines, 20 tasks/engine (top is main, 3093386f, bottom is this PR, a98a4fa). Workload is load-balanced submission of random 0-1s tasks (same seed), 20 tasks/engine for a total of 9600 tasks. #756

Closed
Cathy131415 opened this issue Nov 21, 2022 · 2 comments


@Cathy131415

    Run again with 480 engines, 20 tasks/engine (top is main, 3093386f, bottom is this PR, a98a4fa). Workload is load-balanced submission of random 0-1s tasks (same seed), 20 tasks/engine for a total of 9600 tasks.
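For reference, the workload described above can be sketched as below. The `make_workload` helper is purely illustrative (not from the codebase); the commented submission call uses ipyparallel's load-balanced view, which is what "load-balanced submission" refers to.

```python
import random

def make_workload(n_engines=480, tasks_per_engine=20, seed=42):
    """Generate random 0-1s task durations, reproducible for a given
    seed (the "same seed" in the benchmark description)."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n_engines * tasks_per_engine)]

durations = make_workload()
print(len(durations))  # 480 engines * 20 tasks/engine = 9600 tasks

# With ipyparallel, submission would look roughly like:
#   view = client.load_balanced_view()
#   ar = view.map_async(time.sleep, durations)
```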

[Screenshot: Screen Shot 2021-07-21 at 14 25 20]

You can see that while the client is producing the tasks, there is still contention between serializing in the main thread and actually sending in the IO thread until the main thread is done (purple line). This completes 1s faster with this PR (7.6s vs 8.4s). The first result doesn't arrive for 2 more seconds, which is roughly when the last real send completes and receives start being processed.
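The serialize-in-the-main-thread / send-in-the-IO-thread split described above can be sketched with a plain queue and a background thread. This is a hypothetical stand-in for the actual zmq IO thread; none of the names here come from the codebase, and the list append stands in for the real socket send.

```python
import pickle
import queue
import threading

def io_send(q, sent):
    """Background "IO thread": drain the send queue until a sentinel."""
    while True:
        msg = q.get()
        if msg is None:  # sentinel: main thread finished serializing
            break
        sent.append(msg)  # stand-in for the actual socket send

send_queue = queue.Queue()
sent = []
io_thread = threading.Thread(target=io_send, args=(send_queue, sent))
io_thread.start()

# Main thread: serialize and enqueue. Serialization and sending overlap,
# but both threads contend (e.g. for the GIL) until the last task is
# enqueued -- the contention the comment describes.
for task_id in range(100):
    send_queue.put(pickle.dumps({"id": task_id}))
send_queue.put(None)
io_thread.join()
print(len(sent))  # all 100 serialized messages were handed off and sent
```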

A bubble can be seen around 11s in main, where sends and receives are both being processed; it is gone after this PR.

Originally posted by @minrk in #534 (comment)

@Cathy131415
Author

The problem is very difficult; I think it will be Python and Java that can.

@minrk
Member

minrk commented Nov 21, 2022

Looks like this issue was created by mistake, copying a comment out of PR #534. @Cathy131415 can I ask what your goal is here?

minrk closed this as completed Mar 17, 2023