
latency #16

Open
kojeff opened this issue May 25, 2016 · 4 comments

@kojeff

kojeff commented May 25, 2016

Using this C++ driver, it takes about 30ms to complete an insert of some simple data (even with durability=soft and noreply=true, it still takes about 30ms for the data to arrive in another process). Using node or python to insert and receive data on the same system, we see only about 1ms of delay. Do you have any idea why the delays are so much larger with the C++ driver? Thanks for any help you can provide.

Here is a snippet showing how we use the C++ driver:

conn_ = R::connect("localhost", 28015);

R::db("c2c").table(table_name).insert(R::json(json_message)).run(
    *conn_, {
      {"durability", R::Query("soft")},
      {"noreply", R::Query(true)}});
@AtnNn
Owner

AtnNn commented May 25, 2016

It is possible there is a bottleneck somewhere in this driver; I have done very little benchmarking.

How are you measuring your numbers? How long does it take for the C++ driver to construct the query? How long does the run take?

@kojeff
Author

kojeff commented May 26, 2016

We measure the latency by taking a system timestamp just before the insert, then taking another timestamp in a receiver application (both apps run on the same system) once changes() notifies us of the insert, and looking at the difference between the two. In python or node it takes about 1-2ms from insert to notification, but in C++ it averages roughly 30ms. We tested with the same receiver application (written in node) and only swapped the inserting application between node, python, and C++.

I will also do some more careful analysis and post the results. Thanks for your help.

@kojeff
Author

kojeff commented May 28, 2016

I was able to look into this further and saw that the 30+ms latency was occurring after the send() call was made. Upon further investigation I found that the delay was caused by TCP buffering. By adding the TCP_NODELAY socket option to the socket, I was able to reduce the latency to 1ms and get similar or better results than node/python. I made a fork with the changes I made to net.cc. Disabling buffering could potentially hurt performance if large JSON documents are being sent; if that becomes a problem we could detect it and use the TCP_CORK socket option for that case, but I am currently only worried about small writes.

https://github.com/kojeff/librethinkdbxx

@agauniyal

agauniyal commented Aug 20, 2016

TCP will wait for a certain amount of data to accumulate so that the number of packets sent stays low; read more about Nagle's algorithm.
