Multiple connections per destination #315
Comments
You can't at the moment. I wonder why you feel this is necessary though?
Sending above 1M points/sec is hitting a limit on the receiver. Both carbon-c-relay and go-carbon barely cope with it over a single connection. Multiple connections are the easiest way to fix it. As I understand it, each connection is processed in its own thread, and this imposes the limit.
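For illustration only, here is a minimal Go sketch of the feature being asked for: a per-destination pool of TCP connections, each drained by its own writer goroutine, so writes are spread over several sockets (and therefore over several receiver threads). The names destPool and newDestPool, the queue size, and the addresses are hypothetical; this is not code from carbon-c-relay or go-carbon.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// destPool spreads metric lines for one destination over several TCP
// connections, each drained by its own writer goroutine, so a single
// writer thread / receiver thread is no longer the bottleneck.
type destPool struct {
	queue chan string // plaintext metric lines waiting to be sent
}

// newDestPool dials conns connections to addr and starts one writer
// goroutine per connection; all writers consume the same queue.
func newDestPool(addr string, conns int) (*destPool, error) {
	p := &destPool{queue: make(chan string, 100000)}
	for i := 0; i < conns; i++ {
		c, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			return nil, err
		}
		go func(conn net.Conn) {
			defer conn.Close()
			for line := range p.queue {
				// one write per metric here; see the batching sketch
				// further down for reducing per-metric syscalls
				fmt.Fprintf(conn, "%s\n", line)
			}
		}(c)
	}
	return p, nil
}

func main() {
	// hypothetical usage: four connections to a local carbon receiver
	p, err := newDestPool("127.0.0.1:2003", 4)
	if err != nil {
		panic(err)
	}
	p.queue <- "foo.bar.baz 42 1609459200"
	time.Sleep(time.Second) // let the writers drain before exiting
}
```

The connection count would then be the per-destination config parameter suggested below, defaulting to 1 to keep today's behaviour.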
Hmmm, and how would you like to control the number of connections to use per destination?
A config parameter, defaulting to 1?
I am facing the same issue. Pushing millions of metrics per minute through a single TCP connection puts a lot of stress both on the receiving end (carbon-c-relays) and on the load balancer that sits in front of them (in my use case).
I face the same issue: having one connection to a destination means there is also only one thread doing the write()s to that connection; that thread saturates its CPU core and becomes a bottleneck. Since in my case both carbon-c-relay and the destination server run on the same machine, I worked around it by doing:
Would be nice to simply have an option to spawn multiple connections/threads writing to the same destination, like @azhiltsov suggested. I think the write() thread might also have lower than optimal throughput because it does a separate write() for each single metric.
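On that last point (a separate write() per metric), here is a hedged sketch of what batching could look like on the sending side, assuming nothing about carbon-c-relay's internals: lines accumulate in a bufio.Writer and are flushed either when the buffer fills or after a short interval, so a single syscall carries many metrics. batchWriter, the buffer size, and the flush interval are illustrative choices, not anything the project ships.

```go
package main

import (
	"bufio"
	"net"
	"time"
)

// batchWriter groups many small metric lines into larger write()s.
// Lines are appended to a 64 KiB buffer and flushed either when the
// buffer fills (bufio does this automatically) or on every tick, to
// bound how long a metric can sit in the buffer.
func batchWriter(conn net.Conn, lines <-chan string, flushEvery time.Duration) {
	defer conn.Close()
	w := bufio.NewWriterSize(conn, 64*1024)
	ticker := time.NewTicker(flushEvery)
	defer ticker.Stop()
	for {
		select {
		case line, ok := <-lines:
			if !ok { // channel closed: flush what is left and stop
				w.Flush()
				return
			}
			w.WriteString(line)
			w.WriteByte('\n')
		case <-ticker.C:
			w.Flush()
		}
	}
}

func main() {
	// hypothetical usage against a local carbon receiver
	conn, err := net.Dial("tcp", "127.0.0.1:2003")
	if err != nil {
		panic(err)
	}
	lines := make(chan string, 1024)
	go batchWriter(conn, lines, 500*time.Millisecond)
	lines <- "foo.bar.baz 42 1609459200"
	close(lines)
	time.Sleep(time.Second) // allow the final flush before exiting
}
```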
I am wondering: is it hard to implement multiple connections per destination, so it would not try to push everything through one TCP connection? When using hashing, can I use the same host:port:hash trio in order to do so?