dispatchEventQueue #50

Open
iposter2016 opened this issue Feb 15, 2019 · 14 comments

@iposter2016

iposter2016 commented Feb 15, 2019

When market data volume peaks, dispatchEventQueue doesn't return for a very long time (at least a minute or more), and the memory footprint of the application keeps growing.
I tried giving no timeout and setting timeout=1, and both showed the same issue.
I'm testing with the US stock market, and it's been happening near market close when the data volume is high.
Has anyone experienced a similar problem?

@wiwat-tharateeraparb
Contributor

The timeout in dispatchEventQueue tells the function how long to wait for updates before returning. In this case, the process seems unable to dispatch events fast enough. Does your program do anything with the updates, or does it just dispatch the data? And do you monitor your system resources, e.g. CPU?
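
For example, something along these lines can log dispatch latency together with process CPU and memory (a rough sketch only; cns stands for your PyRFA session object, and psutil is just an assumed helper package for reading resource usage, not part of PyRFA):

    # Sketch: log dispatch latency, batch size, and process resource usage
    # together so a growing RSS or CPU saturation can be correlated with
    # large batches. `cns` and `psutil` are assumptions, as noted above.
    import time
    import psutil

    proc = psutil.Process()

    while True:
        t0 = time.time()
        updates = cns.dispatchEventQueue(1000)  # wait up to the timeout for updates
        elapsed = time.time() - t0

        print("dispatch %.3fs, %d updates, rss=%d MB, cpu=%.1f%%" % (
            elapsed,
            len(updates) if updates else 0,
            proc.memory_info().rss // (1024 * 1024),
            proc.cpu_percent(interval=None),
        ))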

@iposter2016
Author

iposter2016 commented Feb 16, 2019

CPU usage doesn't change, but the memory footprint keeps growing when data volume surges. I do additional low-latency processing for each update. However, I was measuring the time for dispatchEventQueue alone and it was taking a very long time.
I also tried running dispatchEventQueue on a separate thread that puts the updates into a local queue. In that case, I saw the same kind of latency from dispatchEventQueue, but my local queue was not exploding, which suggests it probably wasn't my local processing that was slowing things down.
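
In rough terms the setup looked like this (a sketch; the thread and queue names are illustrative, and cns is the PyRFA session object):

    # Sketch of the dispatcher-thread arrangement described above: one thread
    # only calls dispatchEventQueue and hands raw updates to a local queue,
    # and a separate worker drains that queue.
    import queue
    import threading
    import time

    local_queue = queue.Queue()

    def dispatcher():
        while True:
            updates = cns.dispatchEventQueue(5)
            if updates:
                local_queue.put(updates)
            else:
                time.sleep(0.001)  # brief pause when nothing was returned

    def worker():
        while True:
            batch = local_queue.get()
            for update in batch:
                pass  # per-update processing would go here

    threading.Thread(target=dispatcher, daemon=True).start()
    threading.Thread(target=worker, daemon=True).start()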

@wiwat-tharateeraparb
Contributor

wiwat-tharateeraparb commented Feb 17, 2019

Would you be able to try the PyRFA configuration below for higher data volume? Change Connection_RSSL and Session1 to match your own names.

\Connections\Connection_RSSL\numInputBuffers = "1000"
\Sessions\Session1\threadModel = "Dual"
\Sessions\Session1\responseQueueBias = "500"
  • numInputBuffers is 5 by default.
  • threadModel is 'Dual' by default. Setting it to 'Single' is suited to low-latency data.
  • responseQueueBias is 50 by default. This is the number of messages PyRFA attempts to dispatch on each call. This might help.
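
In case it helps, here is a rough sketch of where those settings sit relative to the consumer code, following the standard PyRFA consumer example (the config file name, session name, and symbol list are placeholders; only the three settings above come from this suggestion):

    # Rough sketch of a consumer that picks up the settings above from
    # pyrfa.cfg (which would also hold your \Connections\... and
    # \Sessions\... entries). Symbols and file names are placeholders.
    import pyrfa

    p = pyrfa.Pyrfa()
    p.createConfigDb("./pyrfa.cfg")
    p.acquireSession("Session1")
    p.createOMMConsumer()
    p.login()
    p.directoryRequest()
    p.dictionaryRequest()
    p.marketPriceRequest("IBM.N,MSFT.O")  # placeholder symbol list

    while True:
        updates = p.dispatchEventQueue(100)
        if updates:
            for u in updates:
                pass  # handle each update here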

@iposter2016
Author

iposter2016 commented Feb 17, 2019

Thank you. I'll try them when the market opens again.
By the way, I'm confused by your comment about "threadModel". Did you mean I should try "Single" or "Dual"?

@wiwat-tharateeraparb
Contributor

It should be “Dual” in this case.

@iposter2016
Author

iposter2016 commented Feb 19, 2019

I tried your suggestion. Unfortunately, I still see the same problem. I even removed all my local processing except counting the number of updates. The slowdown shows up less immediately, but after a couple of hours it came back and seems to be stuck. It is very strange.

@wiwat-tharateeraparb
Contributor

Are you able to share your code and your server specs, e.g. RAM and CPU?

@iposter2016
Author

iposter2016 commented Feb 19, 2019

I can't, but I can say that RAM and CPU should never be a limiting factor, especially because I do the same job with the traditional RFA API (C++, Java) without a problem. The number of symbols I subscribe to is between 3k and 5k.

@iposter2016
Author

Could there be any other connection configs I can play with? Thank you very much for the help.

@iposter2016
Author

iposter2016 commented Feb 20, 2019

Below is basically what I did, other than an occasional print that I didn't include. So I'm doing almost nothing other than dispatching.
It still got stuck and memory started blowing up. It happened after len(updates) was 110,862.

    import time

    # counters for timing the dispatch call and the (trivial) processing
    disptime = dispcnt = 0
    tickcnt = tickproctime = 0

    while True:
        t0 = time.time()
        updates = cns.dispatchEventQueue(5)
        disptime += time.time() - t0
        dispcnt += 1

        if updates:
            t0 = time.time()
            tickcnt += len(updates)
            tickproctime += time.time() - t0

@wiwat-tharateeraparb
Contributor

Can you change it to use time.sleep() instead:

while True:
    time.sleep(0.005)
    updates = cns.dispatchEventQueue()
...
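
For reference, the full loop might look roughly like this, keeping the timing counters from your earlier snippet (a sketch; cns as before):

    # Sketch of the sleep-based polling loop, keeping the timing counters
    # from the earlier snippet; `cns` is the PyRFA session object as above.
    import time

    disptime = dispcnt = 0
    tickcnt = tickproctime = 0

    while True:
        time.sleep(0.005)                   # let updates accumulate for ~5 ms
        t0 = time.time()
        updates = cns.dispatchEventQueue()  # called without a timeout, as suggested above
        disptime += time.time() - t0
        dispcnt += 1

        if updates:
            t0 = time.time()
            tickcnt += len(updates)
            tickproctime += time.time() - t0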

@iposter2016
Author

iposter2016 commented Feb 20, 2019

It's interesting. That actually reduced the dispatch time by an order of magnitude.
But I see the dispatch latency grow longer after a certain tipping point. It looks as if something is blocking the dispatch function from returning.

I'm wondering whether dispatchEventQueue is fast enough to handle the volume when tick data surges, especially because it creates a Python list of dictionaries from the field updates.
Sometimes one call of dispatchEventQueue returns 250k updates, and that takes about 6 seconds.
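
A rough way to check that hypothesis would be to log the batch size against the per-call dispatch time and watch the per-update cost (a sketch; cns as before):

    # Sketch: log how dispatch time scales with batch size, to see whether
    # the per-update cost stays roughly flat (suggesting the list-of-dicts
    # build dominates) or grows as the queue backs up.
    import time

    while True:
        t0 = time.time()
        updates = cns.dispatchEventQueue(5)
        elapsed = time.time() - t0

        n = len(updates) if updates else 0
        if n:
            print("batch=%d dispatch=%.3fs per-update=%.1fus"
                  % (n, elapsed, 1e6 * elapsed / n))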

@wiwat-tharateeraparb
Contributor

One thing that dispatchEventQueue does is keep dispatching until there are no more updates in the queue before returning the tuples to Python.
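
Roughly, a Python-level model of that behavior (a sketch only, not the actual C++ implementation):

    # Rough Python-level model of the behavior described above (not the real
    # C++ code): the call drains everything currently queued before handing
    # the whole batch back, so one call can return a very large number of
    # updates. `convert` is a placeholder for the per-event conversion.
    import queue

    def convert(event):
        return event  # placeholder for the conversion into Python data

    def dispatch_event_queue_model(internal_queue):
        results = []
        while True:
            try:
                event = internal_queue.get_nowait()
            except queue.Empty:
                break
            results.append(convert(event))
        return results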

We will look into it.

@iposter2016
Author

Thanks. That sounds like a very likely cause.
