It looks like traceroute-caller, at least with scamper-daemon, has limited capacity and, even with `-p 1000`, seems to get slower and slower if too many requests come in.
If we limit the number of concurrent traces, it will improve latency, and likely have little or no effect on throughput.
These dashboard panels for gru01 basically show that, for the current deployment, things work OK until about 60 to 70 concurrent traces, then rapidly get much worse. This happens at around 15 traces per minute, which is much too slow for our busier sites.
So, perhaps we should limit the number of concurrent traces we allow to start. We should evaluate the practical throughput with the pending deployment and set a corresponding threshold for rejecting new traceroute requests. It appears that the limit can be fairly conservative (perhaps 30 or 40), since throughput is quite insensitive to concurrency.
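For illustration, a minimal sketch of what a reject-above-threshold limiter could look like in Go, using a buffered channel as a counting semaphore. This is not traceroute-caller's actual code; the `TraceLimiter` type, `ErrTooManyTraces`, and the limit of 30 are hypothetical names and values chosen for the example.

```go
package main

import (
	"errors"
	"fmt"
)

// ErrTooManyTraces is returned when the concurrency limit is reached and a
// new trace request is rejected rather than queued (hypothetical name).
var ErrTooManyTraces = errors.New("too many concurrent traces; rejecting request")

// TraceLimiter caps the number of traces running at once. A buffered channel
// acts as a counting semaphore: a send occupies a slot, a receive frees it.
type TraceLimiter struct {
	slots chan struct{}
}

// NewTraceLimiter creates a limiter allowing at most max concurrent traces.
func NewTraceLimiter(max int) *TraceLimiter {
	return &TraceLimiter{slots: make(chan struct{}, max)}
}

// Trace runs the given trace function if a slot is free, and rejects the
// request immediately otherwise instead of letting work pile up.
func (l *TraceLimiter) Trace(run func() error) error {
	select {
	case l.slots <- struct{}{}: // acquired a slot
		defer func() { <-l.slots }() // release the slot when the trace finishes
		return run()
	default:
		return ErrTooManyTraces
	}
}

func main() {
	// A conservative limit, per the discussion above (illustrative value).
	limiter := NewTraceLimiter(30)
	err := limiter.Trace(func() error {
		fmt.Println("running trace...")
		return nil
	})
	if err != nil {
		fmt.Println(err)
	}
}
```

Rejecting immediately (rather than blocking) keeps latency bounded for the traces that do run, which matches the observation that throughput barely changes with concurrency.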