Daphne is slowly leaking memory via channels-redis #7720
In addition to the general leak in Daphne described above, I've found another way to get Daphne to grow memory in an unbounded way. Internally, Daphne keeps a per-channel buffer of outgoing messages. Unfortunately, Daphne will continually grow this per-channel buffer in an unbounded way even if nothing seems to be reading from the other end (i.e., if the connection is closed, or if it's just reading too slowly). To illustrate why this can become a problem, I used netem to add 500 ms of latency and 50% packet loss to the link:
~ tc qdisc add dev eth0 root netem delay 500ms loss 50%
This is a fairly close approximation of the bug outlined at django/channels_redis#384 as it might affect AWX's busiest channel, the websocket backplane we use for broadcasting events to peers in a cluster. The messages in my testing are fairly small, so it takes a while for memory to grow, but you could imagine that large messages (like lots of fact collection) would cause quicker memory growth. related: django/channels_redis#384
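For illustration, here is a minimal asyncio sketch of the failure mode (this is not Daphne's actual code; the names, message size, and rates are made up). A fast producer fills an unbounded queue while a slow consumer drains it, so the queue depth, and therefore memory, grows without bound:

```python
import asyncio


async def producer(queue: asyncio.Queue) -> None:
    # Simulates the server pushing a ~1 KB message onto the per-channel buffer every 10 ms.
    while True:
        await queue.put(b"x" * 1024)
        await asyncio.sleep(0.01)


async def slow_consumer(queue: asyncio.Queue) -> None:
    # Simulates a client behind a high-latency, lossy link that drains far slower than messages arrive.
    while True:
        await queue.get()
        await asyncio.sleep(1.0)


async def main() -> None:
    # An unbounded queue, analogous to a per-channel send buffer with no maximum size.
    queue: asyncio.Queue = asyncio.Queue()
    tasks = [asyncio.create_task(producer(queue)), asyncio.create_task(slow_consumer(queue))]
    for _ in range(10):
        await asyncio.sleep(1)
        # Queue depth (and therefore resident memory) climbs steadily because puts outpace gets.
        print(f"buffered messages: {queue.qsize()}")
    for task in tasks:
        task.cancel()


asyncio.run(main())
```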
@ryanpetrello is this resolved via #8094?
Yes. Thanks, @kdelee.
The upstream patch is merged and released, and the patch has been verified in production by users experiencing the bug. We've bumped the versions we depend on to use the fix; closing.
see: django/channels#1181 (comment)
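For context on the general shape of such a fix (this sketch is illustrative only; it is not the upstream patch and the class name is hypothetical), bounding the per-channel buffer and evicting the oldest message when it fills keeps memory constant even when nothing is reading from the other end:

```python
import asyncio


class BoundedChannelBuffer:
    """Illustrative bounded per-channel buffer: drops the oldest message instead of growing forever."""

    def __init__(self, capacity: int = 100) -> None:
        self._queue: asyncio.Queue = asyncio.Queue(maxsize=capacity)

    def put_nowait(self, message: bytes) -> None:
        try:
            self._queue.put_nowait(message)
        except asyncio.QueueFull:
            # Evict the oldest buffered message so memory stays bounded even with no reader.
            self._queue.get_nowait()
            self._queue.put_nowait(message)

    async def get(self) -> bytes:
        return await self._queue.get()
```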