We started running into the "Too many open files" issue on our client nodes due to increased IO usage by some of our services. The erroring process (a JVM) was holding a little over 4000 file descriptors. Given the default soft limit of 1024, we tried raising it to 10000 by updating /etc/security/limits.conf. Even though the new config seemed to be picked up by the system, we continued to experience the issue.
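For anyone following along, the limits.conf change we tried looks roughly like this (the user name and values are just examples):

    # /etc/security/limits.conf -- illustrative entries; "nomad" stands in for
    # whichever user runs the affected workload
    nomad    soft    nofile    10000
    nomad    hard    nofile    10000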
Then we saw that cat /proc/<pid_id>/limits was showing:

    Limit                     Soft Limit           Hard Limit           Units
    Max open files            4096                 4096                 files
which led us to trace the whole chain by which nomad starts an allocation and to realize that the process inherits its limits from its parent. Since the allocation is started by nomad, and nomad itself is started by supervisord, the limit that actually applies is the one set by supervisord.
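A quick way to confirm this is to walk up the parent chain and compare the limits at each level (the PIDs below are placeholders):

    # Find the parent of the erroring process; following PPIDs upwards should
    # eventually lead to supervisord
    ps -o ppid= -p <jvm_pid>
    # Compare the file-descriptor limit at each level of the chain
    grep "Max open files" /proc/<jvm_pid>/limits
    grep "Max open files" /proc/<supervisord_pid>/limits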
The solution for us was to add another config file in /etc/supervisor/conf.d that raises the minfds setting. See the docs for minfds here: http://supervisord.org/configuration.html#supervisord-section-settings
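A minimal sketch of such a drop-in, assuming a limit of 10000 (the file name and value are illustrative):

    ; /etc/supervisor/conf.d/minfds.conf -- example file name and value
    [supervisord]
    minfds=10000

supervisord needs to be restarted afterwards so the new limit takes effect and is inherited by the processes it manages.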
To prevent surprises for other users, I'd suggest that either the default supervisord config should be updated in this module (perhaps with an optional argument passed to the install script) or at least this fact should be mentioned in the documentation so users know how to deal with it.
> To prevent surprises for other users, I'd suggest that either the default supervisord config should be updated in this module (perhaps with an optional argument passed to the install script)
I'd vote to keep the default as-is, but to expose a parameter to make it easy to tweak this setting, and add docs explaining when you might want to do so. Would you be up for a PR to add that?
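As a purely hypothetical sketch of what that parameter could look like, the install script might take an optional value and write the drop-in itself (the variable name, file path, and default below are invented for illustration):

    # Hypothetical sketch -- not an existing option in this module.
    # Write a drop-in raising supervisord's file-descriptor limit; supervisord
    # must be restarted afterwards for the new limit to apply.
    supervisord_minfds="${SUPERVISORD_MINFDS:-10000}"
    printf '[supervisord]\nminfds=%s\n' "$supervisord_minfds" \
      > /etc/supervisor/conf.d/minfds.conf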