From a7043fe1a9c3bf028c45e8eb1826d8ae20a0f22b Mon Sep 17 00:00:00 2001
From: Adam Spiers
Date: Wed, 9 Mar 2016 17:06:34 +0000
Subject: [PATCH] interleave Pacemaker clones to minimise disruption
 (bsc#965886)

By default, Pacemaker clones aren't interleaved. This means that if
Pacemaker wants to restart a dead clone instance, and there is an order
constraint on that clone, it will do the same restart on all other
nodes, even if all the others are healthy. More details on interleaving
are here:

https://www.hastexo.com/resources/hints-and-kinks/interleaving-pacemaker-clones/

This behaviour is far more disruptive than we want. For example, in

https://bugzilla.suse.com/show_bug.cgi?id=965886

we saw that when a network node dies and Pacemaker wants to stop the
instance of cl-g-neutron-agents on that node, it also stops and
restarts the same clone instances on the healthy nodes. This means
there is a small window in which there are no neutron agents running
anywhere. If neutron-ha-tool attempts a router migration during this
window, it will fail, at which point things start to go badly wrong.

In general, the cloned (i.e. active/active) services on our controller
and compute nodes should all behave like independent vertical stacks,
so that a failure on one node should not cause ripple effects on other
nodes. So we interleave all our clones.

(There is a corresponding commit to crowbar-ha for the Apache clone.)
(cherry picked from commit bdde4b4dc2534e91bf1f2869a66491463134f8c1)
---
 chef/cookbooks/cinder/recipes/controller_ha.rb | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/chef/cookbooks/cinder/recipes/controller_ha.rb b/chef/cookbooks/cinder/recipes/controller_ha.rb
index 1cd0ed74..24f540c4 100644
--- a/chef/cookbooks/cinder/recipes/controller_ha.rb
+++ b/chef/cookbooks/cinder/recipes/controller_ha.rb
@@ -61,6 +61,9 @@
 pacemaker_clone "cl-#{group_name}" do
   rsc group_name
   action [:create, :start]
+  meta ({
+    "interleave" => "true",
+  })
   only_if { CrowbarPacemakerHelper.is_cluster_founder?(node) }
 end
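
Note: once this recipe runs, the clone definition in the cluster
configuration should carry the interleave meta attribute. A rough
sketch of what `crm configure show` would display (the group name
"g-cinder-controller" here is illustrative; the actual name comes from
group_name in the recipe, and other meta attributes may also be
present):

    clone cl-g-cinder-controller g-cinder-controller \
        meta interleave=true

With interleave=true, ordering constraints against the clone are
evaluated per-node between instance pairs, so restarting the instance
on one node no longer forces restarts of the healthy instances on the
other nodes.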