
What's the connection lifetime in the pool? / When are idle connections released? #211

Closed
nvivo opened this issue Mar 30, 2017 · 15 comments

@nvivo

nvivo commented Mar 30, 2017

Hi,

I'm investigating why I'm getting more open connections with this driver than with the Oracle one, and I got stuck on this question: after a connection is returned to the pool, how long does it stay idle before being released?

I noticed that the original driver has a Connection Lifetime option; its documentation says:

When a connection is returned to the pool, its creation time is compared with the current time, and the connection is destroyed if that time span (in seconds) exceeds the value specified by Connection Lifetime. This is useful in clustered configurations to force load balancing between a running server and a server just brought online. A value of zero (0) causes pooled connections to have the maximum connection timeout.

How does it work with this driver?

@bgrainger
Member

Connection Lifetime is not currently supported; see #105.

Thus, the lifetime is controlled by the server, which should close idle connections after wait_timeout seconds (which defaults to 28800, i.e. eight hours).
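
(Not from this thread: a minimal sketch of checking the server's effective wait_timeout with MySqlConnector; the connection string is a placeholder.)

```csharp
// Minimal sketch: read the server's wait_timeout via MySqlConnector.
// The connection string is a placeholder; newer versions use the MySqlConnector namespace.
using System;
using System.Threading.Tasks;
using MySql.Data.MySqlClient;

class WaitTimeoutCheck
{
    static async Task Main()
    {
        using (var connection = new MySqlConnection("Server=localhost;User Id=app;Password=...;Database=appdb"))
        {
            await connection.OpenAsync();
            using (var command = new MySqlCommand("SELECT @@wait_timeout;", connection))
            {
                // wait_timeout is reported in seconds (28800 by default).
                var seconds = Convert.ToInt32(await command.ExecuteScalarAsync());
                Console.WriteLine($"wait_timeout = {seconds} seconds");
            }
        }
    }
}
```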

@bgrainger
Member

Opened #212 to implement this connection string option.

@nvivo
Author

nvivo commented Mar 30, 2017

That explains it. I have a cluster, and after a peak of queries during startup, each node retains 100+ connections, exhausting the server once a few nodes come up. I took a look at wait_timeout, but I got a little worried about TIME_WAITs as described here, and since I'm using RDS, I'm not sure how Amazon will handle this.

Right now I have a deadline to get a new cluster running and I'd rather not go back to the old driver, so let me ask this:

Is it possible to control this manually with ClearPool? Would there be a problem if I call ClearPoolAsync() every 10 minutes while around 30 connections are in use?

And is there any way to get the number of open sessions in the pool without reflection?

@bgrainger
Member

Would there be a problem if I call ClearPoolAsync() every 10 minutes while around 30 connections are in use?

This should not be a problem at all; it will iterate over all idle connections in the pool (or all pools, if you call MySqlConnector.ClearAllPools) and close them. Connections that are in use will not be affected.

And is there any way to get the number of open sessions in the pool without reflection?

There isn't a way to query MySqlConnector for how many connections it has open. (Do you know if Oracle's connector has a method to expose this? I haven't seen one.) However, you could run SELECT COUNT(*) FROM information_schema.PROCESSLIST; (adding WHERE USER = ? AND HOST = ? AND DB = ? as necessary). (I'm not sure whether this requires specific privileges on the server, though.)
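
(A rough sketch of both ideas; the connection string, user filter, and timer wiring are placeholders, not from this thread.)

```csharp
// Rough sketch: periodically drop idle pooled connections, and count this account's
// open sessions on the server. Call TrimIdleConnectionsAsync from a 10-minute timer.
using System.Threading.Tasks;
using MySql.Data.MySqlClient; // newer versions use the MySqlConnector namespace

static class PoolMaintenance
{
    const string ConnectionString = "Server=localhost;User Id=app;Password=...;Database=appdb";

    // Closes only idle connections in this connection string's pool; busy ones are untouched.
    public static async Task TrimIdleConnectionsAsync()
    {
        using (var connection = new MySqlConnection(ConnectionString))
        {
            await MySqlConnection.ClearPoolAsync(connection);
        }
    }

    // Server-side count of sessions for one user (may require the PROCESS privilege).
    public static async Task<long> CountOpenSessionsAsync(string user)
    {
        using (var connection = new MySqlConnection(ConnectionString))
        {
            await connection.OpenAsync();
            using (var command = new MySqlCommand(
                "SELECT COUNT(*) FROM information_schema.PROCESSLIST WHERE USER = @user;", connection))
            {
                command.Parameters.AddWithValue("@user", user);
                return (long)await command.ExecuteScalarAsync();
            }
        }
    }
}
```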

@caleblloyd
Contributor

caleblloyd commented Mar 30, 2017

That explains it. I have a cluster, and after a peak of queries during startup, each node retains 100+ connections, exhausting the server once a few nodes come up

Do you have a cluster of App Servers, or a cluster of MySQL Servers? If you have a cluster of App Servers and a single MySQL server, you should configure your Max Pool Size connection string parameter so that:

(# of App Servers) * (Max Pool Size) < (Connection Limit)

Otherwise if you get a lot of traffic on your app servers, you could spike over the Connection Limit at any time.
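
For example, with 5 app servers and a server-side connection limit of 600, Max Pool Size=100 keeps the worst case at 500 (illustrative numbers, not from this thread):

```csharp
// Hypothetical sizing: 5 app servers * Max Pool Size 100 = 500 < max_connections 600.
// Server name and credentials are placeholders.
const string ConnectionString =
    "Server=db.example.com;User Id=app;Password=...;Database=appdb;" +
    "Max Pool Size=100;Connect Timeout=30";
```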

In the MySQL documentation for Connection Lifetime, I believe this portion refers to a cluster of MySQL servers, since it would be beneficial for the client to periodically open new connections in order to reach different MySQL servers behind the same hostname in a cluster:

This is useful in clustered configurations to force load balancing between a running server and a server just brought online

@bgrainger
Member

A relevant quote from Using Connector/Net with Connection Pooling:

Starting with MySQL Connector/Net 6.2, there is a background job that runs every three minutes and removes connections from pool that have been idle (unused) for more than three minutes. The pool cleanup frees resources on both client and server side. This is because on the client side every connection uses a socket, and on the server side every connection uses a socket and a thread.

Prior to this change, connections were never removed from the pool, and the pool always contained the peak number of open connections. For example, a web application that peaked at 1000 concurrent database connections would consume 1000 threads and 1000 open sockets at the server, without ever freeing up those resources from the connection pool. Connections, no matter how old, will not be closed if the number of connections in the pool is less than or equal to the value set by the Min Pool Size connection string parameter.

MySqlConnector does not currently implement any logic like this; for now you could accomplish something similar by setting wait_timeout=180 on the server and letting it expire idle connections.
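
(A sketch of that server-side workaround; this assumes an account allowed to change global variables, and on RDS you would set wait_timeout in the DB parameter group instead.)

```csharp
// Sketch only (illustrative values): shorten the server's idle timeout so it drops
// connections idle for more than ~3 minutes. SET GLOBAL needs SUPER or equivalent and
// only affects sessions opened afterwards.
using System.Threading.Tasks;
using MySql.Data.MySqlClient; // newer versions use the MySqlConnector namespace

static class ServerIdleTimeout
{
    public static async Task ShortenAsync(string adminConnectionString)
    {
        using (var connection = new MySqlConnection(adminConnectionString))
        {
            await connection.OpenAsync();
            using (var command = new MySqlCommand("SET GLOBAL wait_timeout = 180;", connection))
            {
                await command.ExecuteNonQueryAsync();
            }
        }
    }
}
```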

@nvivo
Author

nvivo commented Mar 30, 2017

@caleblloyd, my issue is this: I have a cluster of app servers; they're not public, so I can control how much work they do. But there are usually a lot of queries to run during startup, and a node easily uses more than 100 connections; after a minute or so it stabilizes at ~20.

The problem is that those 80+ connections are kept open and never released.

@bgrainger My worry with wait_timeout is that closing connections on the server side may cause tons of TIME_WAITs as described. I've had this problem with TIME_WAITs before, and it usually requires restarting the process, so I'd rather not try this in production given that precedent. It would be much safer to close the connections from the client side.

@nvivo
Author

nvivo commented Mar 30, 2017

@caleblloyd Now that I've paid more attention to your answer, it makes sense. Max Pool Size is a good enough workaround. It still leaves the problem of those connections never being closed later, and it will cause lots of connections to be opened and closed all the time, but at least it won't keep them hanging on the server.

@nvivo
Author

nvivo commented Mar 30, 2017

Just to make sure: Max Pool Size means that once that limit is reached, new connections won't go back to the pool, but I'll still be able to open them, right?

@bgrainger
Member

My worry with wait_timeout is that closing connections on the server side may cause tons of TIME_WAITs as described.

@nvivo I just tried an experiment locally (with wait_timeout) and saw a lot of connections in the FIN_WAIT2 state (which were eventually cleaned up after about a minute). This needs further investigation, but initially it does seem like it would be better to close them on the client side (rather than having the server close them).

@bgrainger
Member

Just to make sure: Max Pool Size means that once that limit is reached, new connections won't go back to the pool, but I'll still be able to open them, right?

No, it does not mean that. The way Oracle implements it (which we have followed) is that it actually enforces a maximum number of concurrent connections (for connections opened with that exact same connection string). If MaximumPoolSize connections are currently open, the client will wait for Connect Timeout seconds to get an idle connection from the pool before throwing an exception.

This seems like a strange choice to me, but it's what Oracle's connector does and I don't yet have a compelling rationale to break compatibility. I'd personally prefer to let the connector spike above MaximumPoolSize up to the server's maximum connection limit, and let Maximum Pool Size solely control the number of idle connections that the connector holds on to. (In that scenario, users would have to be aware that MySqlConnection.Open would be more likely to throw, and would need to implement exception handling.)

If a consumer wants to limit the total number of simultaneous connections, they could implement that logic on top of MySqlConnector. And if they have multiple app servers (or even multiple processes on one server) talking to one DB, they have to implement the limit through some other mechanism anyway, because the MaximumPoolSize constraint only applies within one .NET process.
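
To illustrate that waiting behavior (illustrative values; as far as I can tell the pool-exhausted timeout surfaces as a MySqlException):

```csharp
// Illustrative only: with Max Pool Size=2, the third Open() blocks for up to
// Connect Timeout seconds waiting for an idle connection, then throws if none frees up.
using System;
using MySql.Data.MySqlClient; // newer versions use the MySqlConnector namespace

class PoolLimitDemo
{
    const string ConnectionString =
        "Server=localhost;User Id=app;Password=...;Database=appdb;" +
        "Max Pool Size=2;Connect Timeout=5";

    static void Main()
    {
        using (var first = new MySqlConnection(ConnectionString))
        using (var second = new MySqlConnection(ConnectionString))
        using (var third = new MySqlConnection(ConnectionString))
        {
            first.Open();
            second.Open();
            try
            {
                third.Open(); // waits up to 5 seconds for first/second to be returned
            }
            catch (MySqlException ex)
            {
                Console.WriteLine($"Pool exhausted: {ex.Message}");
            }
        }
    }
}
```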

@bgrainger
Member

As a workaround, your "startup" connections could use Pooling=false; then there's no limit on how many simultaneous connections can be opened, and they won't stay open.
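
Something like this, i.e. a second, unpooled connection string used only for the startup burst (server name and credentials are placeholders):

```csharp
// Sketch: startup work uses Pooling=false so its connections are physically closed
// on Dispose; normal traffic keeps a bounded pool.
const string StartupConnectionString =
    "Server=db.example.com;User Id=app;Password=...;Database=appdb;Pooling=false";
const string NormalConnectionString =
    "Server=db.example.com;User Id=app;Password=...;Database=appdb;Max Pool Size=30";
```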

@nvivo
Author

nvivo commented Mar 30, 2017

Oh my =) Disabling pooling should do the trick for a while, then.

But yeah, I agree, it seems odd to me that "max pool size" implies "max connections". It would make more sense to have a different setting to limit that. I'll see how that goes. Thanks.

@nvivo
Author

nvivo commented Mar 31, 2017

@bgrainger So, there is one thing I was missing yesterday that you certainly know but I didn't remember: Max Pool Size limits the number of connections, but if I reach the limit, it just waits for a connection to return to the pool. For some reason I was under the impression that once that limit was reached I would get an exception trying to open the connection.

So, in effect, Max Pool Size + Connection Lifetime looks like the perfect combination to both limit concurrent connections and clean up resources after a peak.

@caleblloyd kinda implied that; I just didn't pay attention at the time.
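
(Assuming #212 lands with the option name Oracle uses, that combination might look something like this; illustrative values only.)

```csharp
// Hypothetical: cap this process at 30 concurrent connections and recycle pooled
// connections once they are more than 10 minutes old (once Connection Lifetime exists).
const string ConnectionString =
    "Server=db.example.com;User Id=app;Password=...;Database=appdb;" +
    "Max Pool Size=30;Connection Lifetime=600";
```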

@bgrainger
Member

For some reason I was under the impression that once that limit was reached I would get an exception trying to open the connection.

You will get an exception if an idle connection is not returned to the pool within the Connect Timeout interval; until then, the connector will wait to see if a connection becomes available.
