
Commit

Merge pull request openstack-k8s-operators#366 from gouthampacha/cephnfs-adoption

Add CephFS-via-NFS migration guide
jistr authored Mar 20, 2024
2 parents 247eaaf + 0e39da6 commit eb4e909
Showing 4 changed files with 403 additions and 22 deletions.
147 changes: 140 additions & 7 deletions docs_user/modules/openstack-ceph_backend_configuration.adoc
@@ -37,9 +37,13 @@ became far simpler and, hence, more secure with RHOSP 18.
* It is simpler to create a common ceph secret (keyring and ceph config
file) and propagate the secret to all services that need it.

TIP: To run `ceph` commands, you must use SSH to connect to a Ceph
storage node and run `sudo cephadm shell`. This brings up a Ceph orchestrator
container that allows you to run administrative commands against the Ceph
cluster. If Director deployed the Ceph cluster, you can launch the cephadm
shell from an OpenStack controller node.
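
The `$CEPH_SSH` variable used in the next block is simply a convenient way to
reach a node that can run `cephadm`. As an illustrative assumption (the user
and host below are placeholders, adjust them for your environment), it could be
set like this before running the commands that follow:

[source,bash]
----
# Hypothetical example: controller-0 is assumed to be a node that can run
# cephadm when the Ceph cluster was deployed by Director
CEPH_SSH="ssh heat-admin@controller-0"
----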

[source,bash]
----
$CEPH_SSH cephadm shell
# wait for shell to come up, then execute:
ceph auth caps client.openstack \
mgr 'allow *' \
mon 'allow r, profile rbd' \
@@ -66,10 +70,8 @@ EOF

The content of the file should look something like this:

[source,yaml]
----
---
apiVersion: v1
kind: Secret
metadata:
@@ -80,17 +82,17 @@ stringData:
[client.openstack]
key = <secret key>
caps mgr = "allow *"
caps mon = "allow r, profile rbd"
caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images, allow rw pool manila_data"
ceph.conf: |
[global]
fsid = 7a1719e8-9c59-49e2-ae2b-d7eb08c695d4
mon_host = 10.1.1.2,10.1.1.3,10.1.1.4
----
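
An optional sanity check (not part of the original steps) to confirm that the
secret was created and carries both the client keyring and the Ceph
configuration file:

[source,bash]
----
# Expect two entries: the client keyring and ceph.conf
oc get secret ceph-conf-files -o json | jq '.data | keys'
----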

Configure `extraMounts` within the `OpenStackControlPlane` CR:

[source,yaml]
----
oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
@@ -122,6 +124,137 @@ spec:
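
The patch body is truncated above. For orientation only, a hypothetical
`extraMounts` entry that mounts the `ceph-conf-files` secret into the storage
service pods might look like the sketch below; the service list and field
values are assumptions, so verify them against the full document:

[source,yaml]
----
spec:
  extraMounts:
    - name: v1
      region: r1
      extraVol:
        - propagation:
            - CinderVolume
            - GlanceAPI
            - ManilaShare
          extraVolType: Ceph
          volumes:
            - name: ceph
              secret:
                secretName: ceph-conf-files
          mounts:
            - name: ceph
              mountPath: /etc/ceph
              readOnly: true
----
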
Configuring some OpenStack services to use the Ceph backend may require
the FSID value. You can fetch the value from the `ceph.conf` stored in the
secret like so:

[source,bash]
----
CEPH_FSID=$(oc get secret ceph-conf-files -o json | jq -r '.data."ceph.conf"' | base64 -d | grep fsid | sed -e 's/fsid = //')
----
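
A quick, illustrative check (not from the original guide) that the value was
extracted correctly before you substitute it into service configuration:

[source,bash]
----
# A Ceph FSID is a UUID, for example 7a1719e8-9c59-49e2-ae2b-d7eb08c695d4
echo "${CEPH_FSID}"
[[ "${CEPH_FSID}" =~ ^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$ ]] && echo "FSID looks valid"
----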

[id="creating-a-ceph-nfs-cluster_{context}"]
== Creating a Ceph NFS cluster

If you use the CephFS-via-NFS backend with OpenStack Manila, you must create a
new clustered NFS service on the Ceph cluster before adoption. This service
replaces the standalone, Pacemaker-controlled `ceph-nfs` service that was used
in Red Hat OpenStack Platform 17.1.


=== Ceph node preparation

* Identify the Ceph nodes where the new clustered NFS service will be deployed.
* This service must be deployed on the `StorageNFS` isolated network so that
it is easier for clients to mount their existing shares through the new NFS
export locations.
* You must propagate the `StorageNFS` network to the target nodes
where the `ceph-nfs` service will be deployed. See link:https://docs.openstack.org/project-deploy-guide/tripleo-docs/wallaby/features/network_isolation.html#deploying-the-overcloud-with-network-isolation[Deploying
an Overcloud with Network Isolation with TripleO] and link:https://docs.openstack.org/project-deploy-guide/tripleo-docs/wallaby/post_deployment/updating_network_configuration_post_deployment.html[Applying
network configuration changes after deployment] for background on these
tasks. The following steps apply when the Ceph Storage nodes were
deployed via Director.
** Identify the node definition file used in the environment. This is
the input file associated with the `openstack overcloud node provision`
command. For example, this file may be called `overcloud-baremetal-deploy.yaml`.
** Edit the networks associated with the `CephStorage` nodes to include the
`StorageNFS` network:
+
[source,yaml]
----
- name: CephStorage
count: 3
hostname_format: cephstorage-%index%
instances:
- hostname: cephstorage-0
name: ceph-0
- hostname: cephstorage-1
name: ceph-1
- hostname: cephstorage-2
name: ceph-2
defaults:
profile: ceph-storage
network_config:
template: /home/stack/network/nic-configs/ceph-storage.j2
network_config_update: true
networks:
- network: ctlplane
vif: true
- network: storage
- network: storage_mgmt
- network: storage_nfs
----
** Edit the network configuration template file for the `CephStorage` nodes
to include an interface connecting to the `StorageNFS` network. In the
example above, the path to the network configuration template file is
`/home/stack/network/nic-configs/ceph-storage.j2`. This file is modified
to include the following NIC template:
+
[source,yaml]
----
- type: vlan
device: nic2
vlan_id: {{ storage_nfs_vlan_id }}
addresses:
- ip_netmask: {{ storage_nfs_ip }}/{{ storage_nfs_cidr }}
routes: {{ storage_nfs_host_routes }}
----
** Re-run the `openstack overcloud node provision` command to update the
`CephStorage` nodes.
+
[source,bash]
----
openstack overcloud node provision \
--stack overcloud \
--network-config -y \
-o overcloud-baremetal-deployed-storage_nfs.yaml \
--concurrency 2 \
/home/stack/network/baremetal_deployment.yaml
----
** When the update is complete, ensure that the `CephStorage` nodes have a
new interface created and tagged with the appropriate VLAN associated with
`StorageNFS`.
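
One way to verify this (an illustrative check, not part of the original
procedure) is to inspect the interfaces on a `CephStorage` node over SSH and
confirm that a VLAN device carrying a `StorageNFS` address is present:

[source,bash]
----
# The host name and user are assumptions; adjust them for your environment.
# Look for a VLAN interface whose id matches the StorageNFS VLAN and whose
# address falls within the StorageNFS subnet.
ssh heat-admin@cephstorage-0 ip -d addr show
----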

=== Ceph NFS cluster creation

* Identify an IP address from the `StorageNFS` network to use as the Virtual IP
address for the Ceph NFS service. This IP address must be provided in place of
the `{{ VIP }}` in the example below. You can query used IP addresses with:

[source,bash]
----
openstack port list -c "Fixed IP Addresses" --network storage_nfs
----
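
For example (an illustrative check with a placeholder address), you can confirm
that a candidate VIP is not already assigned before using it:

[source,bash]
----
# 172.17.5.47 is a made-up placeholder; pick an address from your StorageNFS subnet
openstack port list -c "Fixed IP Addresses" --network storage_nfs | grep -q "172.17.5.47" \
  && echo "address already in use" || echo "address appears free"
----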

* Pick an appropriate size for the NFS cluster. The NFS service provides
active/active high availability when the cluster size is more than
one node. It is recommended that ``{{ cluster_size }}`` be at least one
less than the number of hosts identified. This solution has been well tested
with a 3-node NFS cluster.
* The `ingress-mode` argument must be set to ``haproxy-protocol``. No other
ingress mode is supported. This ingress mode allows enforcement of client
restrictions through OpenStack Manila.
* For more information on deploying the clustered Ceph NFS service, see the
link:https://docs.ceph.com/en/latest/cephadm/services/nfs/[ceph orchestrator
documentation].
* The following commands are run inside a `cephadm shell` to create a clustered
Ceph NFS service.

[source,bash]
----
# wait for shell to come up, then execute:
ceph orch host ls
# Identify the hosts that can host the NFS service.
# Repeat the following command to label each host identified:
ceph orch host label add <HOST> nfs
# Set the appropriate {{ cluster_size }} and {{ VIP }}:
ceph nfs cluster create cephfs \
"{{ cluster_size }} label:nfs" \
--ingress \
--virtual-ip={{ VIP }} \
--ingress-mode=haproxy-protocol
# Check the status of the NFS cluster with these commands:
ceph nfs cluster ls
ceph nfs cluster info cephfs
----
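
As an optional follow-up (illustrative checks, not from the original guide),
you can confirm from within the `cephadm shell` that the new services landed on
the labelled hosts and that the cluster starts out with no exports:

[source,bash]
----
# The nfs and ingress services should be scheduled on the hosts labelled "nfs"
ceph orch ls | grep -E 'nfs|ingress'
# A newly created cluster has no exports yet; they are created through Manila later
ceph nfs export ls cephfs
----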

