fixed merge conflicts and incorporated peer review comments
klgill committed Apr 17, 2024
1 parent fe94d5f commit a0564c1
Showing 6 changed files with 10 additions and 10 deletions.
Original file line number Diff line number Diff line change
@@ -8,4 +8,4 @@ Review information about your {bare_metal_first_ref} configuration and then adop

include::../modules/con_bare-metal-provisioning-service-configurations.adoc[leveloffset=+1]

-include::../modules/deploying-the-bare-metal-provisioning-service.adoc[leveloffset=+1]
+include::../modules/proc_deploying-the-bare-metal-provisioning-service.adoc[leveloffset=+1]
6 changes: 3 additions & 3 deletions docs_user/modules/con_changes-to-cephFS-via-NFS.adoc
@@ -1,8 +1,8 @@
-[id="changes-to-cephFS-via-NFS_{context}"]
+[id="changes-to-cephFS-through-NFS_{context}"]

-= Changes to CephFS via NFS
+= Changes to CephFS through NFS

-If the {rhos_prev_long} ({OpenStackShort}) {rhos_prev_ver} deployment uses CephFS via NFS as a backend for {rhos_component_storage_file_first_ref}, there's a `ceph-nfs` service on the {OpenStackShort} controller nodes deployed and managed by {OpenStackPreviousInstaller}. This service cannot be directly imported into {rhos_long} {rhos_curr_ver}. On {rhos_acro} {rhos_curr_ver}, the {rhos_component_storage_file} only supports using a "clustered" NFS service that is directly managed on the Ceph cluster. So, adoption with this service will involve a data path disruption to existing NFS clients. The timing of this disruption can be controlled by the deployer independent of this adoption procedure.
+If the {rhos_prev_long} ({OpenStackShort}) {rhos_prev_ver} deployment uses CephFS through NFS as a backend for {rhos_component_storage_file_first_ref}, there's a `ceph-nfs` service on the {OpenStackShort} controller nodes deployed and managed by {OpenStackPreviousInstaller}. This service cannot be directly imported into {rhos_long} {rhos_curr_ver}. On {rhos_acro} {rhos_curr_ver}, the {rhos_component_storage_file} only supports using a "clustered" NFS service that is directly managed on the Ceph cluster. So, adoption with this service will involve a data path disruption to existing NFS clients. The timing of this disruption can be controlled by the deployer independent of this adoption procedure.

On {OpenStackShort} {rhos_prev_ver}, pacemaker controls the high availability of the `ceph-nfs` service. This service is assigned a Virtual IP (VIP) address that is also managed by pacemaker. The VIP is typically created on an isolated `StorageNFS` network. There are ordering and collocation constraints established between this VIP, `ceph-nfs` and the Shared File Systems service's share manager service on the
controller nodes. Prior to adopting {rhos_component_storage_file}, pacemaker's ordering and collocation constraints must be adjusted to separate the share manager service. This establishes `ceph-nfs` with its VIP as an isolated, standalone NFS service that can be decommissioned at will after completing the {OpenStackShort} adoption.
Expand Up @@ -2,7 +2,7 @@

= Decommissioning the {rhos_prev_long} standalone Ceph NFS service

-If the deployment uses CephFS via NFS, you must inform your {rhos_prev_long}({OpenStackShort}) users
+If the deployment uses CephFS through NFS, you must inform your {rhos_prev_long}({OpenStackShort}) users
that the old, standalone NFS service will be decommissioned. Users can discover
the new export locations for their pre-existing shares by querying the {rhos_component_storage_file} API.
To stop using the old NFS server, they need to unmount and remount their
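The remount workflow described above can be sketched as follows. This is illustrative only: the share name, mount point, export path, and server address are hypothetical, and the export-location query assumes the Shared File Systems plugin for the `openstack` client is installed.

----
# Sketch only: share name, mount point, and export paths are hypothetical.
# Discover the new export location for a pre-existing share:
openstack share export location list my-share

# Unmount the export served by the old standalone ceph-nfs service, then
# remount it from the new clustered Ceph NFS service's address:
sudo umount /mnt/my-share
sudo mount -t nfs 172.17.5.47:/volumes/_nogroup/my-share /mnt/my-share
----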
@@ -8,15 +8,15 @@ Copy the {rhos_component_storage_file_first_ref} configuration from the {rhos_pr

* Ensure that {rhos_component_storage_file} systemd services (`api`, `cron`, `scheduler`) are
stopped. For more information, see xref:stopping-openstack-services_migrating-databases[Stopping {rhos_prev_long} services].
-* If the deployment uses CephFS via NFS as a storage backend, ensure that
+* If the deployment uses CephFS through NFS as a storage backend, ensure that
pacemaker ordering and collocation constraints are adjusted. For more
information, see xref:stopping-openstack-services_migrating-databases[Stopping {rhos_prev_long} services].
* Ensure that the {rhos_component_storage_file} pacemaker service (`openstack-manila-share`) is
stopped. For more information, see xref:stopping-openstack-services_migrating-databases[Stopping {rhos_prev_long} services].
* Ensure that the database migration has completed. For more information, see xref:migrating-databases-to-mariadb-instances_migrating-databases[Migrating databases to MariaDB instances].
* Ensure that {OpenShift} nodes where `manila-share` service will be deployed
can reach the management network that the storage system is in.
-* If the deployment uses CephFS via NFS as a storage backend, ensure that
+* If the deployment uses CephFS through NFS as a storage backend, ensure that
a new clustered Ceph NFS service is deployed on the Ceph cluster with the help
of Ceph orchestrator. For more information, see
xref:creating-a-ceph-nfs-cluster_migrating-databases[Creating a Ceph NFS cluster].
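Assuming the Ceph cluster is managed by cephadm, creating such a clustered NFS service with the Ceph orchestrator might look like the following sketch. The cluster id `cephfs`, the placement label, and the virtual IP are example values, not values from this commit.

----
# Sketch only: "cephfs" is a hypothetical cluster_id; the placement label
# and virtual IP are examples for an isolated StorageNFS network.
ceph nfs cluster create cephfs "label:nfs" --ingress --virtual_ip 172.17.5.47

# Verify that the orchestrator created the service:
ceph nfs cluster ls
ceph orch ls nfs
----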
@@ -206,7 +206,7 @@ count of the `manilaShares` service/s to 1.
* Ensure that the appropriate storage management network is specified in the
`manilaShares` section. The example below connects the `manilaShares`
instance with the CephFS backend driver to the `storage` network.
-* Prior to adopting the `manilaShares` service for CephFS via NFS, ensure that
+* Prior to adopting the `manilaShares` service for CephFS through NFS, ensure that
you have a clustered Ceph NFS service created. You will need to provide the
name of the service as ``cephfs_nfs_cluster_id``.

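As a minimal illustration of that last point, the backend stanza under `customServiceConfig` can reference the clustered NFS service by name through `cephfs_nfs_cluster_id`. The backend name and cluster id below are assumptions for illustration, not values taken from this commit.

----
# Sketch only: backend name and cluster id ("cephfs") are hypothetical.
          customServiceConfig: |
             [cephfsnfs]
             share_backend_name = cephfsnfs
             driver_handles_share_servers = False
             share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
             cephfs_protocol_helper_type = NFS
             cephfs_nfs_cluster_id = cephfs
----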
@@ -263,7 +263,7 @@ spec:
__EOF__
----
+
-Below is an example that uses CephFS via NFS. In this example:
+Below is an example that uses CephFS through NFS. In this example:

* The `cephfs_ganesha_server_ip` option is preserved from the configuration on
the old {OpenStackShort} {rhos_prev_ver} environment.
2 changes: 1 addition & 1 deletion docs_user/modules/proc_stopping-openstack-services.adoc
@@ -69,7 +69,7 @@ services.

The cinder-backup service on {OpenStackShort} {rhos_prev_ver} could be running as Active-Passive under pacemaker or as Active-Active, so you must check how it is running and stop it.

-If the deployment enables CephFS via NFS as a backend for {rhos_component_storage_file_first_ref}, there are pacemaker ordering and co-location
+If the deployment enables CephFS through NFS as a backend for {rhos_component_storage_file_first_ref}, there are pacemaker ordering and co-location
constraints that govern the Virtual IP address assigned to the `ceph-nfs`
service, the `ceph-nfs` service itself and `manila-share` service.
These constraints must be removed:
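The removal commands themselves are collapsed in this view. A hedged sketch of what they typically look like follows; the constraint IDs are hypothetical and must be read from the output of `pcs constraint list` on the actual deployment.

----
# Sketch only: list the existing constraints first; the IDs below are
# hypothetical examples.
sudo pcs constraint list --full

# Remove the ordering and colocation constraints that tie the ceph-nfs VIP,
# ceph-nfs, and manila-share resources together:
sudo pcs constraint remove colocation-openstack-manila-share-ceph-nfs-INFINITY
sudo pcs constraint remove order-ceph-nfs-openstack-manila-share-Mandatory
----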
