Keep alive? #5
@hunt3r thanks for your feedback. I will try to write a test that reproduces it. I think we should support this feature, something like pinging the connection over long idle periods.
It's more about inactivity. This is a web application that will run for days or weeks without a deploy, so we need a reliable way to recycle the connections. The issue seems to be that when we receive a ClientError, our Tornado process goes zombie; once the process is cleaned up by gunicorn we get a new connection and all is well, but the request that hit the issue is given a 504 that first time. This happens for each of the load-balanced processes (6 servers * 17 processes), so as you can see, that is a lot of 504 timeouts, and it is problematic when there is limited activity in our lower environments for testing. Employees come in in the morning and the servers are all unresponsive. If we have this issue in production we have much bigger problems 😆. It seems like on a busy server the regular traffic keeps the connections alive. A sample of what we sometimes see, like from our health check, is the following:
I've added a ping that we can call from the async callback; it does a time comparison on each run. I was going to run a test and get it into one of our lower environments to see if it helps. My concern with this approach is whether it is possible to clear the pool while connections are waiting under heavy load. I've subclassed the client:
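The subclass itself was not captured in this thread. A minimal sketch of the idea, assuming a hypothetical clear_pool callback and a touch() hook rather than the real asyncmc internals, might look like:

```python
import time

class KeepAliveClient:
    """Sketch only: recycle the pool after IDLE_LIMIT seconds of
    inactivity. clear_pool and touch are assumed names, not the
    real asyncmc API."""

    IDLE_LIMIT = 60.0  # assumed idle threshold in seconds

    def __init__(self, clear_pool):
        self._clear_pool = clear_pool          # disposes all pooled sockets
        self._last_used = time.monotonic()

    def touch(self):
        # Call this from every successful cache operation.
        self._last_used = time.monotonic()

    def ping(self):
        # Called periodically (e.g. from an async callback on the IOLoop):
        # compare the last-activity timestamp and recycle if idle too long.
        idle = time.monotonic() - self._last_used
        if idle > self.IDLE_LIMIT:
            self._clear_pool()
            self._last_used = time.monotonic()
            return True   # pool was recycled
        return False
```

The time comparison means a busy server never recycles (touch() keeps pushing the timestamp forward), which matches the observation that the problem only shows up in quiet environments.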
I added a ping method, and I'm calling it from my Application singleton:
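The singleton code was also lost in extraction. A self-contained sketch of the wiring, with the tornado.ioloop.PeriodicCallback scheduling shown only as a comment so the example stays runnable (CacheApp, _build_client, and recycle are hypothetical names):

```python
class CacheApp:
    """Sketch of a process-wide singleton that owns the cache client
    and periodically runs the ping/recycle check."""

    _instance = None

    @classmethod
    def instance(cls):
        # One client per process, created at application startup.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def __init__(self):
        self.client = self._build_client()
        # Real wiring (assumption, not shown runnable here):
        # tornado.ioloop.PeriodicCallback(self.ping, 30_000).start()

    def _build_client(self):
        # Placeholder for something like asyncmc.Client(servers=[...]);
        # a plain object keeps this sketch importable without asyncmc.
        return object()

    def recycle(self):
        # Called from the periodic ping when the pool is judged stale:
        # drop the old client so a fresh socket pool is created.
        self.client = self._build_client()
```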
We wrap the asyncmc.Client in a factory because we support other types of caching. That method does the following: it lets us trap client errors and clear the pool if one happens in the flow.
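The factory method itself was not preserved in this thread. A sketch of the pattern being described, with ClientError stood in by a local exception class and make_client as an assumed constructor hook:

```python
class ClientError(Exception):
    """Stand-in for the client error type raised by asyncmc."""

class CacheFactory:
    """Sketch: route every cache call through one place so a
    ClientError drops the (possibly stale) client instead of
    leaving a zombie connection behind."""

    def __init__(self, make_client):
        self._make_client = make_client   # e.g. lambda: asyncmc.Client(...)
        self._client = None

    @property
    def client(self):
        # Lazily (re)build the client; cleared after a trapped error.
        if self._client is None:
            self._client = self._make_client()
        return self._client

    def call(self, op, *args):
        try:
            return op(self.client, *args)
        except ClientError:
            # Trap the error and clear the pool: the next access
            # constructs a fresh client with fresh sockets.
            self._client = None
            raise
```

The request that hit the bad socket still fails (as described above, the first request gets the 504), but subsequent requests get a clean pool instead of a zombie process.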
It would be nice to have an option on the client for this. Glad to help in any way I can. Thanks for your work on this, by the way; it's definitely the most viable non-blocking memcached client for Tornado that I've found! 🍻
Just an FYI, this approach is working for us for now, but it feels a bit like a sledgehammer. I think the better way to handle this would be on a per-connection basis: iterate over the connection pool and mark connections for removal after some amount of time; on each request, if a connection is fetched that was marked for removal, dispose of the socket and instantiate a new one.
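The per-connection idea above can be sketched as a pool where each released connection carries a deadline, and an expired connection is disposed of at fetch time rather than handed out (new_conn and dispose are hypothetical hooks, not the asyncmc pool API):

```python
import collections
import time

class RecyclingPool:
    """Sketch: connections older than max_age seconds are treated as
    marked for removal; acquiring one closes its socket and falls
    through to creating a fresh connection."""

    def __init__(self, new_conn, dispose, max_age=300.0):
        self._new_conn = new_conn    # opens a fresh socket (assumption)
        self._dispose = dispose      # closes a stale socket (assumption)
        self._max_age = max_age
        self._pool = collections.deque()

    def release(self, conn):
        # Stamp the connection with its removal deadline.
        self._pool.append((conn, time.monotonic() + self._max_age))

    def acquire(self):
        while self._pool:
            conn, deadline = self._pool.popleft()
            if time.monotonic() < deadline:
                return conn          # still fresh, reuse it
            self._dispose(conn)      # marked for removal: drop the socket
        return self._new_conn()      # pool empty or all stale
```

Unlike clearing the whole pool, this recycles one connection at a time, so connections waiting under heavy load are never yanked away wholesale.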
Didn't mean to close.
For our application, we are trying to set up asyncmc as a singleton, instantiated at application startup. This way we can reuse the connection many times over without instantiating a new socket each time. We are running into issues in some lesser-used environments where memcached connections seem to be getting "stale" and we receive a timeout and a 504. The connection is then re-established and things go on working fine, but this is fairly disruptive.
I thought that using an async callback might work, but it doesn't seem to.
Is there a desired approach for working with asyncmc over longer periods?