interleave Pacemaker clones to minimise disruption (bsc#965886) #270
By default, Pacemaker clones are not interleaved. This means that if
Pacemaker wants to restart a dead clone instance on one node, and
there is an order constraint on that clone, it will perform the same
restart on every other node, even if all the others are healthy.
More details on interleaving are here:
https://www.hastexo.com/resources/hints-and-kinks/interleaving-pacemaker-clones/
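To illustrate the default behaviour, here is a hypothetical crmsh configuration (cl-neutron-server and the resource names underneath it are assumptions for the sketch; only cl-g-neutron-agents appears in this PR). Without interleaving, the order constraint is evaluated against the clones as a whole, not per node:

```
# Hypothetical crmsh sketch. Clones are NOT interleaved by default,
# so this order constraint applies to the clones as a whole: a
# stop/start of cl-neutron-server on any one node forces a
# stop/start of cl-g-neutron-agents on every node.
clone cl-neutron-server neutron-server
clone cl-g-neutron-agents g-neutron-agents
order o-server-before-agents inf: cl-neutron-server cl-g-neutron-agents
```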
This behaviour is far more disruptive than we want. For example, in
https://bugzilla.suse.com/show_bug.cgi?id=965886
we saw that when a network node dies and Pacemaker wants to stop the
instance of cl-g-neutron-agents on that node, it also stops and restarts
the same clone instances on the healthy nodes. This means there is a
small window in which there are no neutron agents running anywhere. If
neutron-ha-tool attempts a router migration during this window, it will
fail, at which point things start to go badly wrong.
In general, the cloned (i.e. active/active) services on our controller
and compute nodes should all behave like independent vertical stacks,
so that a failure on one node does not cause ripple effects on other
nodes. So we interleave all our clones.
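A minimal sketch of the fix, reusing the hypothetical clone definitions above: setting the interleave meta attribute makes each instance's ordering dependency local to its own node.

```
# With interleave=true, each node's instance of cl-g-neutron-agents
# waits only for the local instance of cl-neutron-server, so a
# failure on one node no longer restarts the agents everywhere.
clone cl-neutron-server neutron-server \
    meta interleave=true
clone cl-g-neutron-agents g-neutron-agents \
    meta interleave=true
```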
(There is a corresponding commit to crowbar-ha for the Apache clone.)
(cherry picked from commit bdde4b4dc2534e91bf1f2869a66491463134f8c1)