
Ping failures cause slaves to halt #8

Open
dinatale2 opened this issue Jun 28, 2016 · 2 comments

Comments

@dinatale2
Member

Buildbot pings slaves before initiating a build. If the ping fails, the build request is re-queued, but it appears that once a ping failure occurs, the slave that failed is not reused. This is a problem for the Lustre buildbot because the tarball builder has only one slave, which is expected to be stable and reliable, so this needs to be fixed. I will keep an eye out to see whether it continues to happen frequently. Log snippet below.

2016-06-27 18:23:42+0000 [-] added buildset 3847 to database
2016-06-27 18:23:42+0000 [-] prioritized 1 builder(s): ['CentOS 7.2 x86_64 (TARBALL)']
2016-06-27 18:23:42+0000 [-] starting build <Build CentOS 7.2 x86_64 (TARBALL)> using slave
2016-06-27 18:23:42+0000 [-] acquireLocks(slave <LustreEC2Slave 'CentOS-7.2-x86_64-tarballslave'>, locks [])
2016-06-27 18:23:42+0000 [-] substantiating slave
2016-06-27 18:23:42+0000 [-] starting build <Build CentOS 7.2 x86_64 (TARBALL)>.. pinging the slave
2016-06-27 18:23:42+0000 [-] sending ping
2016-06-27 18:23:42+0000 [-] while pinging slave before build:
Traceback (most recent call last):
  File "/usr/lib64/python2.7/site-packages/twisted/internet/defer.py", line 1128, in _inlineCallbacks
    result = g.send(result)
  File "/usr/lib/python2.7/site-packages/buildbot-0.8.13_pre_12_gfc4f555-py2.7.egg/buildbot/process/builder.py", line 522, in maybeStartBuild
    build_started = yield self._startBuildFor(slavebuilder, breqs)
  File "/usr/lib64/python2.7/site-packages/twisted/internet/defer.py", line 1274, in unwindGenerator
    return _inlineCallbacks(None, gen, Deferred())
  File "/usr/lib64/python2.7/site-packages/twisted/internet/defer.py", line 1128, in _inlineCallbacks
    result = g.send(result)
  --- <exception caught here> ---
  File "/usr/lib/python2.7/site-packages/buildbot-0.8.13_pre_12_gfc4f555-py2.7.egg/buildbot/process/builder.py", line 360, in _startBuildFor
    ping_success = yield slavebuilder.ping()
  File "/usr/lib/python2.7/site-packages/buildbot-0.8.13_pre_12_gfc4f555-py2.7.egg/buildbot/process/slavebuilder.py", line 309, in ping
    return AbstractSlaveBuilder.ping(self, status)
  File "/usr/lib/python2.7/site-packages/buildbot-0.8.13_pre_12_gfc4f555-py2.7.egg/buildbot/process/slavebuilder.py", line 157, in ping
    Ping().ping(self.remote).addCallback(self._pong)
  File "/usr/lib/python2.7/site-packages/buildbot-0.8.13_pre_12_gfc4f555-py2.7.egg/buildbot/process/slavebuilder.py", line 201, in ping
    remote.callRemote("print", "ping").addCallbacks(self._pong,
  File "/usr/lib64/python2.7/site-packages/twisted/spread/pb.py", line 343, in callRemote
    _name, args, kw)
  File "/usr/lib64/python2.7/site-packages/twisted/spread/pb.py", line 856, in _sendMessage
    raise DeadReferenceError("Calling Stale Broker")
twisted.spread.pb.DeadReferenceError: Calling Stale Broker
2016-06-27 18:23:42+0000 [-] slave ping failed; re-queueing the request
2016-06-27 18:23:42+0000 [-] releaseLocks(<LustreEC2Slave 'CentOS-7.2-x86_64-tarballslave'>): []
2016-06-27 18:23:42+0000 [-] prioritized 1 builder(s): ['CentOS 7.2 x86_64 (TARBALL)']
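
For context, the code path in that traceback looks roughly like the sketch below, paraphrased from the buildbot 0.8.x _startBuildFor frame shown above. Names are simplified and resubmit_buildreqs is a stand-in for whatever actually re-queues the requests, so treat this as an illustration rather than the real source.

from twisted.internet import defer

@defer.inlineCallbacks
def start_build_for(builder, slavebuilder, breqs):
    # The slave is pinged over its Perspective Broker connection before any
    # build steps run (builder.py:360 in the traceback).
    ping_success = yield slavebuilder.ping()
    if not ping_success:
        # "slave ping failed; re-queueing the request" in the log above: the
        # requests go back on the queue, but nothing here resets the slave's
        # connection, so the same dead reference is kept around.
        builder.resubmit_buildreqs(breqs)  # stand-in name for the re-queue step
        defer.returnValue(False)
    # Otherwise the build proceeds on this slave.
    defer.returnValue(True)

The re-queue branch clearly runs, which matches the log, but nothing on this path resets the slave's connection, which is presumably why the slave never gets picked up again.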

@morrone
Member

morrone commented Jun 28, 2016

Yes, this is not an ICMP ping. The Twisted Perspective Broker connection was lost at some point and buildbot didn't notice; it also fails to recover correctly from the error here. So this can be fixed without any particularly special AWS networking knowledge.
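
To make the "Calling Stale Broker" part concrete: once the PB connection is gone, callRemote on the old remote reference raises DeadReferenceError synchronously (see the pb.py frames in the traceback) instead of returning a Deferred that errbacks, so the addCallbacks attached by buildbot's Ping helper never sees the failure. A minimal illustration, not buildbot code:

from twisted.spread.pb import DeadReferenceError

def safe_ping(remote):
    try:
        # Same remote call that buildbot's Ping helper makes (slavebuilder.py:201).
        return remote.callRemote("print", "ping")
    except DeadReferenceError:
        # The broker is already dead; report failure so the caller can tear
        # down the slave connection and let the slave reattach.
        return None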

@dinatale2
Member Author

I deployed a patch to both Lustre's and ZFS's buildbot instances. The Lustre buildbot seems not to have hit this problem yet and has been rather stable lately. The ZFS buildbot, on the other hand, has hit it, and my fix, while it handles the error, doesn't prevent the slave from halting permanently. I will investigate some more.
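
For the record, the general shape of a fuller fix would presumably look like the sketch below: treat DeadReferenceError as a lost connection and force the slave to detach so it can reattach cleanly, rather than only swallowing the error. The slavebuilder.slave.disconnect() call is an assumption about the right recovery hook, not a description of the deployed patch.

from twisted.internet import defer
from twisted.spread.pb import DeadReferenceError

@defer.inlineCallbacks
def ping_before_build(slavebuilder):
    # Hypothetical sketch only. A stale broker means the slave connection is
    # gone, so drop it and let the slave reattach instead of leaving the
    # builder stuck with a dead reference.
    try:
        ok = yield slavebuilder.ping()
    except DeadReferenceError:
        ok = False
    if not ok:
        # Assumed recovery hook: force-disconnect the buildslave so its stale
        # PB reference is cleared and it can reconnect.
        yield defer.maybeDeferred(slavebuilder.slave.disconnect)
    defer.returnValue(ok)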
