Abnormal memory access latency when using multiple servers #8
Hi @charles-typ, @guowentian, @ooibc88, @cac2003, can you give some advice?
I suggest you open a separate issue for this. I'm unsure about your problem, but maybe you can refer to my forked repo. I made a significant number of fixes to get it working, so I hope the changes can be of some help.
Hi @ooibc88 @cac2003 @guowentian
I ran some performance benchmarks on GAM that yield unexpected latency numbers as I increase the number of servers, and I was hoping to get some insights from you. Details of the experimental setup, methodology, and results are below.
Experiment setup:
Three VMs run GAM; the shared data is allocated on (homed at) VM3, while the benchmark threads run on VM1 and VM2. Therefore VM1 and VM2 fetch data from VM3 and keep it in their local caches.
Method:
I replayed several memory traces captured from different applications against GAM, under the two scenarios listed below, and recorded the execution time for each. The memory footprint of the application (~1GB) is larger than the local cache size (512MB), so cache evictions occur in addition to invalidations. All memory accesses are 1 byte; a sketch of the replay loop follows the two scenarios.
Scenario 1: Replay the memory traces with 10 threads on VM1, keeping VM2 idle.
Scenario 2: Replay the memory traces with 10 threads on VM1 and 10 threads on VM2; this means there are invalidations between the VMs due to shared memory accesses.
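The per-thread replay loop looks roughly like the sketch below. It is a minimal sketch assuming GAM's `GAlloc` Read/Write interface from the public repo; the `TraceOp` record is my own simplification, not part of GAM:

```cpp
#include <vector>
#include "gallocator.h"  // GAM allocator interface (GAlloc, GAddr)

// One record of a captured memory trace: a 1-byte access at a GAM
// global address. This struct is my own simplification, not from GAM.
struct TraceOp {
  GAddr addr;
  bool is_write;
};

// Per-thread replay: issue each recorded access through GAM. Writes
// return asynchronously under PSO; reads either hit the local 512MB
// cache or go to the home node (VM3), possibly triggering evictions
// and invalidations along the way.
void replay(GAlloc* alloc, const std::vector<TraceOp>& trace) {
  char byte = 0;
  for (const TraceOp& op : trace) {
    if (op.is_write) {
      alloc->Write(op.addr, &byte, 1);
    } else {
      alloc->Read(op.addr, &byte, 1);
    }
  }
  alloc->MFence();  // drain outstanding asynchronous writes
}
```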
Results:
I expected Scenario 2 to be slower due to more invalidations between VM1 and VM2, but found Scenario 2 was actually faster than Scenario 1.
To understand the results better, I profiled the memory access latency in GAM, separating local from remote memory accesses (shown in the table below; latency was measured only for read operations, since write operations are always asynchronous under the PSO model).
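Concretely, I timed each read roughly as sketched below; `IsLocal` is a hypothetical helper standing in for how I classified an address by the node it was allocated on:

```cpp
#include <chrono>
#include <cstdint>
#include "gallocator.h"  // GAM allocator interface (GAlloc, GAddr)

// Hypothetical helper: true if addr is homed on this node (I
// classified addresses by the node they were allocated on).
bool IsLocal(GAddr addr);

// Time one 1-byte read and attribute it to the local or remote bucket.
// Only reads are timed; write completion is not observable at the call
// site because writes are asynchronous under PSO.
uint64_t TimedRead(GAlloc* alloc, GAddr addr,
                   uint64_t& local_ns, uint64_t& remote_ns) {
  char byte;
  auto t0 = std::chrono::steady_clock::now();
  alloc->Read(addr, &byte, 1);
  auto t1 = std::chrono::steady_clock::now();
  uint64_t ns = static_cast<uint64_t>(
      std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0)
          .count());
  (IsLocal(addr) ? local_ns : remote_ns) += ns;
  return ns;
}
```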
Even though there are invalidations in Scenario 2, its remote access latency is smaller than in Scenario 1, and there is also a slight speed-up in local memory accesses.
Despite extensive profiling, I have been unable to explain this strange behavior. Is this expected, and if so, why? Thank you for taking the time to read this issue; I would really appreciate any help!