
Using receive(long timeout) to await the reply for RPC call limits throughput to 10/second #65

Open
wheezil opened this issue Oct 1, 2018 · 8 comments

wheezil commented Oct 1, 2018

To reproduce:
Unpack and build the attached archive.
On one command line, start the server:
java -cp example-1.0-shaded.jar JMSPassiveRpcServerTest
On a second command line, run the client:
java -cp example-1.0-shaded.jar JMSRpcClientTest
Expected result: throughput of at least several hundred calls per second, especially if the ITER constant is made larger than 100.
Actual result: throughput is less than 10 calls per second.

Note that a native RabbitMQ RPC client/server test is also included in this archive (classes RpcClientTest and RpcServerTest), which achieves good performance.

My analysis:
The MessageConsumer.receive(long timeout) method is implemented using DelayedReceiver. When DelayedReceiver.get() finds that no message is available, it sleeps for 100ms before trying again. Since it is almost always the case that the server has not replied before receive() is called by an RPC client, every call tends to incur this penalty.
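To make the cost concrete, the polling approach boils down to something like the following sketch (a simplified illustration, not the actual DelayedReceiver code; tryToGetMessage is a hypothetical stand-in for the underlying fetch):

// Simplified illustration of a poll-and-sleep receive loop.
Message pollingReceive(long timeout) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeout;
    do {
        Message msg = tryToGetMessage(); // hypothetical stand-in for the broker fetch
        if (msg != null) {
            return msg;
        }
        Thread.sleep(100); // fixed 100 ms pause: a reply arriving just after the
                           // attempt still waits out the full interval, capping a
                           // request/reply loop at roughly 10 calls per second
    } while (System.currentTimeMillis() < deadline);
    return null;
}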

Suggested solution:
Implement MessageConsumer.receive() using a background receiver to accept messages and a thread-safe queue to exchange messages between the background and foreground threads, similar to the implementation of the RabbitMQ RpcClient class. Note that this may add some complexity to the implementation, or make it impossible for a client to both call receive() and use a MessageListener on the same Connection (however, I think that combination is already disallowed).
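A minimal sketch of that idea, assuming a push-style consumer delivers replies into an internal queue that receive() blocks on (names are illustrative, not the library's actual internals):

// A background delivery callback fills a thread-safe queue...
private final BlockingQueue<Message> pending = new LinkedBlockingQueue<>();

void onDelivery(Message message) { // called from the background receiver
    pending.add(message);
}

// ...and receive() simply blocks on it, waking up as soon as the reply arrives.
public Message receive(long timeout) throws InterruptedException {
    return pending.poll(timeout, TimeUnit.MILLISECONDS);
}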
rabbitmq-rpc-example.zip

@acogoluegnes
Contributor

It's hard to compare AMQP RPC support with JMS RPC based on MessageConsumer#receive: the first is based on RabbitMQ direct reply-to and uses a long-running consumer, whereas the second is based on a temporary queue and uses polling.

A first improvement on the JMS side would be to use a MessageListener and a BlockingQueue:

MessageConsumer responseConsumer = session.createConsumer(replyQueue);
BlockingQueue<Message> queue = new ArrayBlockingQueue<>(1);
// deliver the reply into the queue as soon as it arrives
responseConsumer.setMessageListener(msg -> queue.add(msg));

message.setJMSReplyTo(replyQueue);
producer.send(message);
// block until the reply is available or the 5-second timeout expires
Message response = queue.poll(5, TimeUnit.SECONDS);

This implies registering a consumer though (one network roundtrip). This could perform better than polling in some cases.

#68 brings support for direct reply-to; combined with a MessageListener/BlockingQueue, this would bring the JMS RPC implementation closer to the AMQP one.

Lastly, a JMS RPC client using the same implementation as the AMQP client's RpcClient would be interesting to test. It's just a matter of registering a single consumer and then dispatching responses based on the correlation ID.
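A rough sketch of that last approach, reusing the session/producer/replyQueue setup from the snippet above and dispatching on the JMS correlation ID (the names here, such as pendingCalls and request, are only illustrative):

// One long-lived consumer on the reply queue; replies are routed to the
// waiting caller through a map keyed by correlation ID.
ConcurrentMap<String, BlockingQueue<Message>> pendingCalls = new ConcurrentHashMap<>();

MessageConsumer replyConsumer = session.createConsumer(replyQueue);
replyConsumer.setMessageListener(reply -> {
    try {
        BlockingQueue<Message> waiter = pendingCalls.remove(reply.getJMSCorrelationID());
        if (waiter != null) {
            waiter.add(reply);
        }
    } catch (JMSException e) {
        // log and drop: the reply has no readable correlation ID
    }
});

// Per call: register a slot, send the request, wait for the routed reply.
String correlationId = UUID.randomUUID().toString();
BlockingQueue<Message> slot = new ArrayBlockingQueue<>(1);
pendingCalls.put(correlationId, slot);
request.setJMSCorrelationID(correlationId);
request.setJMSReplyTo(replyQueue);
producer.send(request);
Message response = slot.poll(5, TimeUnit.SECONDS);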

So I'm not sure we should change the current implementation of MessageConsumer#receive, but all this has brought up interesting questions about how to properly implement RPC with the JMS client.


wheezil commented Oct 3, 2018

@acogoluegnes I can't tell whether you are suggesting that receive() should be "fixed", that JMS-based RPC clients should use different patterns, or that we should wait for #68 to be implemented and that will take care of things. The reason I opened this issue is that, as a new user of JMS, I naturally searched for "JMS RPC example" and came up with something very close to the attached example. When this example failed (first due to #47, and second due to this performance limitation), I thought it was a shame, because I had such good experiences with the RabbitMQ native API. Perhaps publishing a "better JMS RPC example" using the pattern you describe would be sufficient, without fixing anything.

@acogoluegnes
Contributor

I'm suggesting that we shouldn't draw conclusions too quickly from the numbers above or change anything in MessageConsumer#receive.

Once #68 is merged, we'll add some documentation and samples to deal with RPC with the JMS client, release 1.11.0.RC1, and see what comes out of it.


wheezil commented Oct 3, 2018

@acogoluegnes: Thanks. If it turns out that RabbitMQ JMS usage requires some special considerations, it would be nice to make that clear to first-time users.

@acogoluegnes
Contributor

@wheezil JMS client 1.11.0.RC2 with better RPC support is released.

The documentation isn't online yet because this is still a release candidate, but the PR is there: https://github.com/rabbitmq/rabbitmq-website/pull/598/files.

You can give it a try and let us know what you think.

@acogoluegnes
Contributor

The current implementation of MessageConsumer#receive is fairly complex, and changing it could have some impact, especially when it comes to closing resources.

This implementation has the advantage of relying only on the calling thread. Its main disadvantage is the constant delay incurred by the polling approach.

Here are some suggestions in order of complexity:

  • make the polling interval configurable. With low values, the initial constant delay would have less impact, but there would be more network roundtrips.
  • make the polling more configurable, e.g. with a strategy interface (see the sketch after this list). An exponential delay could pick up fast responses quickly and long-running responses with fewer roundtrips, but predictability would suffer :)
  • make the implementation pluggable/switchable and provide an implementation based on an asynchronous consumer.
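As an illustration of the second option, a strategy interface could look something like this (purely a hypothetical shape, not an existing API of the library):

// Hypothetical strategy: given the attempt number, how long to wait before polling again.
interface ReceiveDelayStrategy {
    long delayBeforeNextAttempt(int attempt); // milliseconds
}

// Fixed delay: the current behaviour, with the interval made configurable.
class FixedDelay implements ReceiveDelayStrategy {
    private final long delayMs;
    FixedDelay(long delayMs) { this.delayMs = delayMs; }
    public long delayBeforeNextAttempt(int attempt) { return delayMs; }
}

// Exponential delay: fast replies are picked up quickly, slow replies cost fewer
// roundtrips, at the expense of predictability.
class ExponentialDelay implements ReceiveDelayStrategy {
    private final long initialMs, maxMs;
    ExponentialDelay(long initialMs, long maxMs) { this.initialMs = initialMs; this.maxMs = maxMs; }
    public long delayBeforeNextAttempt(int attempt) {
        return Math.min(maxMs, initialMs << Math.min(attempt, 20));
    }
}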


wheezil commented Oct 9, 2018 via email

@acogoluegnes
Contributor

Thanks for the feedback. I was thinking of a delay strategy that could be set up on the RMQConnectionFactory. The default would stick to the current behavior, and the library would provide a fixed-delay strategy with a configurable delay (defaulting to the current 100 ms) and an exponential one with the behavior you're suggesting (configurable initial and maximum delay).

@acogoluegnes acogoluegnes modified the milestones: 1.11.0, 1.12.0 Nov 22, 2018
@acogoluegnes acogoluegnes modified the milestones: 1.12.0, 1.13.0 Jul 5, 2019
@acogoluegnes acogoluegnes modified the milestones: 1.13.0, 1.14.0 Aug 29, 2019
@acogoluegnes acogoluegnes removed this from the 1.14.0 milestone Oct 18, 2019