Rework Ceph RBD migration documentation
This patch represents a rework of the current RBD documentation to move it
from a POC to a procedure that we can test in CI. In particular:

- the procedure is split between Ceph Mgr and Ceph Mons migration
- Ceph Mgr and Mon docs are more similar to procedures that the user should follow
- the order is fixed as rbd should be last

Signed-off-by: Francesco Pantano <[email protected]>
Showing 5 changed files with 524 additions and 380 deletions.
100 changes: 100 additions & 0 deletions
docs_user/modules/proc_migrating-mgr-from-controller-nodes.adoc
[id="migrating-mgr-from-controller-nodes_{context}"]

= Migrating Ceph Mgr daemons to {Ceph} nodes

The following section describes how to move Ceph Mgr daemons from the
OpenStack controller nodes to a set of target nodes. Target nodes might be
pre-existing {Ceph} nodes, or OpenStack Compute nodes if Ceph is deployed by
{OpenStackPreviousInstaller} with an HCI topology.

.Prerequisites

Configure the target nodes (CephStorage or ComputeHCI) to have both `storage`
and `storage_mgmt` networks to ensure that you can use both the {Ceph} public
and cluster networks from the same node. This step requires you to interact
with {OpenStackPreviousInstaller}. From {rhos_prev_long} {rhos_prev_ver} and
later you do not have to run a stack update.

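As a quick sanity check before you begin, confirm that each target node has an
address on both networks. The following is a sketch that assumes the
`heat-admin` user and that the two networks are visible as distinct interfaces
on the node; adapt the names to your deployment:

[source,bash]
----
# List IPv4 addresses on the target node and confirm one address on the
# storage network and one on the storage_mgmt network (interface names
# vary by deployment).
ssh heat-admin@<target_node> ip -o -4 addr show
----
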
.Procedure

This procedure assumes that cephadm and the orchestrator are the tools that
drive the Ceph Mgr migration. As with the other Ceph daemons (MDS,
Monitoring and RGW), the procedure uses the Ceph spec to modify the placement
and reschedule the daemons. Ceph Mgr runs in an active/passive fashion, and
it is also responsible for providing many modules, including the orchestrator.

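Before you start, you can verify that the orchestrator backend is responsive
and see which Mgr is currently active. This is a minimal check, assuming you
can run `cephadm shell` on a node that holds the admin keyring:

[source,bash]
----
# Confirm the orchestrator module is available, then show the active Mgr.
sudo cephadm shell -- ceph orch status
sudo cephadm shell -- ceph mgr stat
----
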
. Before starting the migration, SSH into each target node and enable the
firewall rules required to reach a Mgr service:
+
[source,bash]
----
dports="6800:7300"
ssh heat-admin@<target_node> sudo iptables -I INPUT \
    -p tcp --match multiport --dports $dports -j ACCEPT;
----
+
[NOTE]
Repeat the previous action for each target node.
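+
If you have several target nodes, a short loop avoids repeating the command by
hand. This is a sketch; the hostnames are placeholders for your actual target
nodes:
+
[source,bash]
----
# Open the Ceph Mgr port range on every target node in one pass
# (example hostnames; replace with the hosts from `ceph orch host ls`).
dports="6800:7300"
for node in ceph-0 ceph-1 ceph-2; do
    ssh heat-admin@"$node" sudo iptables -I INPUT \
        -p tcp --match multiport --dports "$dports" -j ACCEPT
done
----
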
. Check that the rules are properly applied and persist them:
+
[source,bash]
----
sudo iptables-save
sudo systemctl restart iptables
----
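+
Note that `iptables-save` on its own only prints the current ruleset. On nodes
that use the `iptables-services` package, persisting the rules typically means
writing that output to `/etc/sysconfig/iptables` before restarting the
service; the following is a sketch under that assumption:
+
[source,bash]
----
# Write the in-memory ruleset to the file the iptables service loads at
# startup, so the new rule survives the restart (assumes iptables-services).
sudo sh -c 'iptables-save > /etc/sysconfig/iptables'
sudo systemctl restart iptables
----
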
. Prepare the target node to host the new Ceph Mgr daemon, and add the `mgr`
label to the target node:
+
[source,bash]
----
ceph orch host label add <target_node> mgr
----
+
Replace `<target_node>` with the hostname of the hosts listed in the {Ceph}
cluster through the `ceph orch host ls` command.
+
Repeat this action for each node that will host a Ceph Mgr daemon.
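+
To confirm that the label was applied, list the hosts known to the
orchestrator and check the labels column:
+
[source,bash]
----
# Each target node should now carry the mgr label.
ceph orch host ls
----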
+
Next, get the Ceph Mgr spec and update the `placement` section to use `label`
as the main scheduling strategy, as described in the following steps.

. Get the Ceph Mgr spec:
+
[source,bash]
----
sudo cephadm shell -- ceph orch ls --export mgr > mgr.yaml
----

. Edit the retrieved spec and add the `label: mgr` section to the `placement`
section:
+
[source,yaml]
----
service_type: mgr
service_id: mgr
placement:
  label: mgr
----

. Save the spec in `/tmp/mgr.yaml`.
. Apply the spec with cephadm using the orchestrator:
+
[source,bash]
----
sudo cephadm shell -m /tmp/mgr.yaml -- ceph orch apply -i /mnt/mgr.yaml
----
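+
The `-m` option mounts the local file into the cephadm container under
`/mnt`, which is why the `apply` command reads `/mnt/mgr.yaml`. To confirm
the updated placement was accepted, you can re-export the spec:
+
[source,bash]
----
# The exported spec should now show "label: mgr" under placement.
sudo cephadm shell -- ceph orch ls mgr --export
----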
+
According to the number of nodes where the `mgr` label is added, you will see
a Ceph Mgr daemon count that matches the number of hosts.

. Verify that the new Ceph Mgr daemons have been created in the target nodes:
+
[source,bash]
----
ceph orch ps | grep -i mgr
ceph -s
----
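+
For illustration, the `mgr:` line of `ceph -s` should resemble the following,
with one active daemon and the remaining daemons on stand-by (hostnames and
suffixes are examples only):
+
----
mgr: controller-0.bfiuwh(active, since 8m), standbys: ceph-0.ahexqd, ceph-1.pzgqwl
----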
+ | ||
[NOTE]
The procedure does not shrink the Ceph Mgr daemons: the count grows by the
number of target nodes, and the xref:migrating-mon-from-controller-nodes[Ceph Mon migration procedure]
decommissions the stand-by Ceph Mgr instances.