Commit

Address review feedback and prettify o/p
gouthampacha committed Mar 18, 2024
1 parent b7ac5f4 commit 4024705
Showing 3 changed files with 99 additions and 15 deletions.
106 changes: 95 additions & 11 deletions docs_user/modules/openstack-ceph_backend_configuration.adoc
@@ -37,9 +37,13 @@ became far simpler and hence, more secure with RHOSP 18.
* It is simpler to create a common ceph secret (keyring and ceph config
file) and propagate the secret to all services that need it.

TIP: To run `ceph` commands, you must use SSH to connect to a Ceph Storage
node and run `sudo cephadm shell`. This brings up a Ceph orchestrator
container that allows you to run administrative commands against the Ceph
cluster. If Director deployed the Ceph cluster, you can launch the `cephadm`
shell from an OpenStack Controller node.
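
The `$CEPH_SSH` variable used in the examples below stands for whatever SSH
command reaches a host that can run `cephadm`. A minimal sketch, assuming a
Director-deployed Controller node and the default `heat-admin` user (both
are illustrative and vary per environment):

[source,bash]
----
# Illustrative only: point CEPH_SSH at any host that can run cephadm,
# for example a Director-deployed Controller node
CEPH_SSH="ssh heat-admin@controller-0"
----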

----
$CEPH_SSH cephadm shell
# wait for shell to come up, then execute:
ceph auth caps client.openstack \
mgr 'allow *' \
mon 'allow r, profile rbd' \
@@ -133,14 +137,93 @@ you must create a new clustered NFS service on the Ceph cluster. This service
will replace the standalone, pacemaker-controlled `ceph-nfs` service that was
used on Red Hat OpenStack Platform 17.1.
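
If the deployment still runs the legacy service, it appears as a pacemaker
resource on the Controller nodes. A quick, read-only check (a sketch; the
node you run it on and the grep pattern are assumptions):

[source,bash]
----
# On a Controller node, list pacemaker resources and filter for the
# standalone ceph-nfs resource
sudo pcs status | grep -i ceph-nfs
----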

=== Ceph node preparation

* You must identify the Ceph nodes on which to deploy the new clustered NFS
service.
* This service must be deployed on the `StorageNFS` isolated network so that
it is easier for clients to mount their existing shares through the new NFS
export locations.
* You must propagate the `StorageNFS` network to the target nodes
where the `ceph-nfs` service will be deployed. See link:https://docs.openstack.org/project-deploy-guide/tripleo-docs/wallaby/features/network_isolation.html#deploying-the-overcloud-with-network-isolation[Deploying
an Overcloud with Network Isolation with TripleO] and link:https://docs.openstack.org/project-deploy-guide/tripleo-docs/wallaby/post_deployment/updating_network_configuration_post_deployment.html[Applying
network configuration changes after deployment] for background on these
tasks. The following steps are relevant if the Ceph Storage nodes were
deployed by Director.
** Identify the node definition file used in the environment. This is
the input file associated with the `openstack overcloud node provision`
command. For example, this file may be called `overcloud-baremetal-deploy.yaml`.
** Edit the networks associated with the `CephStorage` nodes to include the
`StorageNFS` network:
+
[source,yaml]
----
- name: CephStorage
  count: 3
  hostname_format: cephstorage-%index%
  instances:
  - hostname: cephstorage-0
    name: ceph-0
  - hostname: cephstorage-1
    name: ceph-1
  - hostname: cephstorage-2
    name: ceph-2
  defaults:
    profile: ceph-storage
    network_config:
      template: /home/stack/network/nic-configs/ceph-storage.j2
      network_config_update: true
    networks:
    - network: ctlplane
      vif: true
    - network: storage
    - network: storage_mgmt
    - network: storage_nfs
----
** Edit the network configuration template file for the `CephStorage` nodes
to include an interface connecting to the `StorageNFS` network. In the
example above, the path to the network configuration template file is
`/home/stack/network/nic-configs/ceph-storage.j2`. This file is modified
to include the following NIC template:
+
[source,yaml]
----
- type: vlan
  device: nic2
  vlan_id: {{ storage_nfs_vlan_id }}
  addresses:
  - ip_netmask: {{ storage_nfs_ip }}/{{ storage_nfs_cidr }}
  routes: {{ storage_nfs_host_routes }}
----
** Re-run the `openstack overcloud node provision` command to update the
`CephStorage` nodes.
+
[source,bash]
----
openstack overcloud node provision \
    --stack overcloud \
    --network-config -y \
    -o overcloud-baremetal-deployed-storage_nfs.yaml \
    --concurrency 2 \
    /home/stack/network/baremetal_deployment.yaml
----
** When the update is complete, ensure that the `CephStorage` nodes have a
new interface created and tagged with the appropriate VLAN associated with
`StorageNFS`.
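
A quick way to confirm the new interface on one of the nodes (a sketch; the
hostname, SSH user, and VLAN ID are illustrative and depend on your
environment):

[source,bash]
----
# os-net-config typically names VLAN interfaces "vlan<id>"; replace 70 with
# your StorageNFS VLAN ID and adjust the host/user to match your deployment
ssh heat-admin@cephstorage-0 "ip -d addr show vlan70"
----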

=== Ceph NFS cluster creation

* Identify an IP address from the `StorageNFS` network to use as the Virtual IP
address for the Ceph NFS service. This IP address must be provided in place of
the `{{ VIP }}` in the example below. You can query used IP addresses with:

[source,bash]
----
openstack port list -c "Fixed IP Addresses" --network storage_nfs
----

* Pick an appropriate size for the NFS cluster. The NFS service provides
active/active high availability when the cluster size is more than
one node. It is recommended that ``{{ cluster_size }}`` be at least one
less than the number of hosts identified. This solution has been well tested
with a 3-node NFS cluster.
@@ -150,10 +233,11 @@ restrictions through OpenStack Manila.
* For more information on deploying the clustered Ceph NFS service, see the
link:https://docs.ceph.com/en/latest/cephadm/services/nfs/[Ceph orchestrator
documentation].
* The following commands are run inside a `cephadm shell` to create a clustered
Ceph NFS service.

[source,bash]
----
$CEPH_SSH cephadm shell
# wait for shell to come up, then execute:
ceph orch host ls
@@ -164,7 +248,7 @@ ceph orch host label add <HOST> nfs
# Set the appropriate {{ cluster_size }} and {{ VIP }}:
ceph nfs cluster create cephfs \
    "{{ cluster_size }} label:nfs" \
    --ingress \
    --virtual-ip={{ VIP }} \
    --ingress-mode=haproxy-protocol
4 changes: 2 additions & 2 deletions docs_user/modules/openstack-manila_adoption.adoc
@@ -60,7 +60,7 @@ that can be decommissioned at will after completing the OpenStack adoption.
Red Hat Ceph Storage 7.0 introduced a native `clustered Ceph NFS service`. This
service has to be deployed on the Ceph cluster using the Ceph orchestrator
prior to adopting Manila. This NFS service will eventually replace the
standalone NFS service from RHOSP 17.1 in your deployment. When Manila is
adopted into the RHOSP 18 environment, it will establish all the existing
exports and client restrictions on the new clustered Ceph NFS service. Clients
can continue to read and write data on their existing NFS shares, and are not
@@ -85,7 +85,7 @@ for instructions on setting up a clustered NFS service.

== Prerequisites

* Ensure that Manila systemd services (`api`, `cron`, `scheduler`) are
stopped. For more information, see xref:stopping-openstack-services_{context}[Stopping OpenStack services].
* If the deployment uses CephFS via NFS as a storage backend, ensure that
pacemaker ordering and colocation constraints are adjusted. For more
4 changes: 2 additions & 2 deletions docs_user/modules/openstack-rolling_back.adoc
@@ -140,12 +140,12 @@ If the Ceph NFS service is running on the deployment as an OpenStack Manila
backend, you must restore the pacemaker ordering and colocation constraints
involving the "openstack-manila-share" service:

----
sudo pcs constraint order start ceph-nfs then openstack-manila-share kind=Optional id=order-ceph-nfs-openstack-manila-share-Optional
sudo pcs constraint colocation add openstack-manila-share with ceph-nfs score=INFINITY id=colocation-openstack-manila-share-ceph-nfs-INFINITY
----

Now you can verify that the source cloud is operational again, e.g. by
running `openstack` CLI commands or using the Horizon Dashboard.
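
A few read-only CLI calls are enough for a basic smoke test (the exact
commands are illustrative; any service listing you normally rely on works):

[source,bash]
----
# Confirm that control plane services report as up on the source cloud
openstack compute service list
openstack network agent list
openstack volume service list
----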
