From 9ddfa4e10053b1052ef3fdd13015bff57426a7a2 Mon Sep 17 00:00:00 2001 From: Katie Gilligan Date: Mon, 6 May 2024 15:20:21 -0400 Subject: [PATCH] checking external links and variables --- ...ssembly_configuring-isolated-networks.adoc | 4 +-- ...figuring-network-for-RHOSO-deployment.adoc | 8 +++--- .../assembly_planning-the-new-deployment.adoc | 2 +- ...mbly_planning-your-ipam-configuration.adoc | 14 +++++----- .../assembly_storage-requirements.adoc | 8 +++--- .../modules/con_about-machine-configs.adoc | 6 ++-- .../modules/con_about-node-selector.adoc | 24 ++++++++-------- ...l-provisioning-service-configurations.adoc | 4 +-- ...on_block-storage-service-requirements.adoc | 6 ++-- ...nfiguration-files-between-deployments.adoc | 7 ++--- .../con_identity-service-authentication.adoc | 2 +- ...er-service-support-for-crypto-plugins.adoc | 2 +- docs_user/modules/con_node-roles.adoc | 28 +++++++++---------- ...reparation-for-block-storage-adoption.adoc | 8 ++++-- .../modules/con_service-configurations.adoc | 21 +++++++------- ...ng-compute-services-to-the-data-plane.adoc | 8 ++++-- ...pting-image-service-with-ceph-backend.adoc | 2 +- .../proc_configuring-data-plane-nodes.adoc | 8 +++--- ...networking-for-control-plane-services.adoc | 4 +-- ...oc_configuring-openshift-worker-nodes.adoc | 4 +-- .../proc_creating-a-ceph-nfs-cluster.adoc | 8 ++++-- ...ng-file-systems-service-control-plane.adoc | 22 ++++++++++----- ...rating-databases-to-mariadb-instances.adoc | 2 +- ...-service-by-customizing-configuration.adoc | 4 ++- .../proc_reusing-existing-subnet-ranges.adoc | 3 +- .../modules/proc_using-new-subnet-ranges.adoc | 8 +++--- 26 files changed, 118 insertions(+), 99 deletions(-) diff --git a/docs_user/assemblies/assembly_configuring-isolated-networks.adoc b/docs_user/assemblies/assembly_configuring-isolated-networks.adoc index d9e8dd8f8..00c8c22d2 100644 --- a/docs_user/assemblies/assembly_configuring-isolated-networks.adoc +++ b/docs_user/assemblies/assembly_configuring-isolated-networks.adoc @@ -10,7 +10,7 @@ you would like to replicate in the new environment. Before proceeding, you should have a list of the following IP address allocations to be used for the new control plane services: -* 1 IP address, per isolated network, per OpenShift worker node. (These +* 1 IP address, per isolated network, per {OpenShift} worker node. (These addresses will <<_configure_openshift_worker_nodes,translate>> to `NodeNetworkConfigurationPolicy` custom resources (CRs).) * IP range, per isolated network, for the data plane nodes. (These ranges will @@ -31,7 +31,7 @@ listed below should reflect the actual adopted environment. The number of isolated networks may differ from the example below. IPAM scheme may differ. Only relevant parts of the configuration are shown. Examples are incomplete and should be incorporated into the general configuration for the new deployment, -as described in the general OpenStack documentation. +as described in the general {rhos_prev_long} documentation. 
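As a rough illustration of such a plan, the following sketch carves a single `internalapi` subnet into non-overlapping blocks for each consumer listed above. The VLAN ID and all ranges are hypothetical placeholders, not values taken from this patch; replace them with the addressing of the adopted environment.

----
# Hypothetical allocation plan for one isolated network (internalapi)
internalapi:
  vlan: 20
  cidr: 172.17.0.0/24
  worker_node_ips: 172.17.0.10-172.17.0.19     # one per worker node, consumed by NodeNetworkConfigurationPolicy CRs
  service_pod_ips: 172.17.0.30-172.17.0.70     # IPAM range exposed through NetworkAttachmentDefinition CRs
  load_balancer_vips: 172.17.0.80-172.17.0.90  # MetalLB IPAddressPool range for service VIPs
  data_plane_range: 172.17.0.100-172.17.0.250  # allocation range carried into NetConfig CRs
----

Keeping these blocks disjoint, and outside any addresses already consumed by the adopted cloud, avoids conflicts when the control plane and data plane CRs are created later.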
include::../modules/proc_configuring-openshift-worker-nodes.adoc[leveloffset=+1] diff --git a/docs_user/assemblies/assembly_configuring-network-for-RHOSO-deployment.adoc b/docs_user/assemblies/assembly_configuring-network-for-RHOSO-deployment.adoc index a4b333f61..17d9f475c 100644 --- a/docs_user/assemblies/assembly_configuring-network-for-RHOSO-deployment.adoc +++ b/docs_user/assemblies/assembly_configuring-network-for-RHOSO-deployment.adoc @@ -4,9 +4,9 @@ = Configuring the network for the RHOSO deployment -With OpenShift, the network is a very important aspect of the deployment, and +With {OpenShift} ({OpenShiftShort}), the network is a very important aspect of the deployment, and it is important to plan it carefully. The general network requirements for the -OpenStack services are not much different from the ones in a {OpenStackPreviousInstaller} deployment, but the way you handle them is. +{rhos_prev_long} ({OpenStackShort}) services are not much different from the ones in a {OpenStackPreviousInstaller} deployment, but the way you handle them is. [NOTE] For more information about the network architecture and configuration, see @@ -17,14 +17,14 @@ networking] in _OpenShift Container Platform 4.15 Documentation_. This document // TODO: should we parametrize the version in the links somehow? -When adopting a new OpenStack deployment, it is important to align the network +When adopting a new {OpenStackShort} deployment, it is important to align the network configuration with the adopted cluster to maintain connectivity for existing workloads. The following logical configuration steps will incorporate the existing network configuration: -* configure **OpenShift worker nodes** to align VLAN tags and IPAM +* configure **{OpenShiftShort} worker nodes** to align VLAN tags and IPAM configuration with the existing deployment. * configure **Control Plane services** to use compatible IP ranges for service and load balancing IPs. diff --git a/docs_user/assemblies/assembly_planning-the-new-deployment.adoc b/docs_user/assemblies/assembly_planning-the-new-deployment.adoc index fb6d483d2..e95836524 100644 --- a/docs_user/assemblies/assembly_planning-the-new-deployment.adoc +++ b/docs_user/assemblies/assembly_planning-the-new-deployment.adoc @@ -4,7 +4,7 @@ = Planning the new deployment -Just like you did back when you installed your director-deployed {rhos_prev_long}, the +Just like you did back when you installed your {OpenStackPreviousInstaller}-deployed {rhos_prev_long}, the upgrade/migration to the control plane requires planning various aspects of the environment such as node roles, planning your network topology, and storage. diff --git a/docs_user/assemblies/assembly_planning-your-ipam-configuration.adoc b/docs_user/assemblies/assembly_planning-your-ipam-configuration.adoc index 5c89fb88f..7b9353e5a 100644 --- a/docs_user/assemblies/assembly_planning-your-ipam-configuration.adoc +++ b/docs_user/assemblies/assembly_planning-your-ipam-configuration.adoc @@ -5,8 +5,8 @@ = Planning your IPAM configuration The new deployment model puts additional burden on the size of IP allocation -pools available for OpenStack services. This is because each service deployed -on OpenShift worker nodes will now require an IP address from the IPAM pool (in +pools available for {rhos_prev_long} ({OpenStackShort}) services. 
This is because each service deployed +on {OpenShift} ({OpenShiftShort}) worker nodes will now require an IP address from the IPAM pool (in the previous deployment model, all services hosted on a controller node shared the same IP address.) @@ -19,7 +19,7 @@ your particular case. The total number of IP addresses required for the new control plane services, in each isolated network, is calculated as a sum of the following: -* The number of OpenShift worker nodes. (Each node will require 1 IP address in +* The number of {OpenShiftShort} worker nodes. (Each node will require 1 IP address in `NodeNetworkConfigurationPolicy` custom resources (CRs).) * The number of IP addresses required for the data plane nodes. (Each node will require an IP address from `NetConfig` CRs.) @@ -29,7 +29,7 @@ in each isolated network, is calculated as a sum of the following: * The number of IP addresses required for load balancer IP addresses. (Each service will require a VIP address from `IPAddressPool` CRs.) -As of the time of writing, the simplest single worker node OpenShift deployment +As of the time of writing, the simplest single worker node {OpenShiftShort} deployment (CRC) has the following IP ranges defined (for the `internalapi` network): * 1 IP address for the single worker node; @@ -44,11 +44,11 @@ allocation pools. // TODO: update the numbers above for a more realistic multinode cluster. -The exact requirements may differ depending on the list of OpenStack services -to be deployed, their replica numbers, as well as the number of OpenShift +The exact requirements may differ depending on the list of {OpenStackShort} services +to be deployed, their replica numbers, as well as the number of {OpenShiftShort} worker nodes and data plane nodes. -Additional IP addresses may be required in future OpenStack releases, so it is +Additional IP addresses may be required in future {OpenStackShort} releases, so it is advised to plan for some extra capacity, for each of the allocation pools used in the new environment. diff --git a/docs_user/assemblies/assembly_storage-requirements.adoc b/docs_user/assemblies/assembly_storage-requirements.adoc index dc9003e6c..068d8472b 100644 --- a/docs_user/assemblies/assembly_storage-requirements.adoc +++ b/docs_user/assemblies/assembly_storage-requirements.adoc @@ -4,12 +4,12 @@ = Storage requirements -When looking into the storage in an OpenStack deployment you can differentiate +When looking into the storage in an {rhos_prev_long} ({OpenStackShort}) deployment you can differentiate two kinds, the storage requirements of the services themselves and the -storage used for the OpenStack users that the services will manage. +storage used for the {OpenStackShort} users that the services will manage. -These requirements may drive your OpenShift node selection, as mentioned above, -and may require you to do some preparations on the OpenShift nodes before +These requirements may drive your {OpenShift} ({OpenShiftShort}) node selection, as mentioned above, +and may require you to do some preparations on the {OpenShiftShort} nodes before you can deploy the services. 
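One common preparation of this kind is loading a storage transport kernel module on the selected nodes through a `MachineConfig`, as described in the machine-config module that follows. A minimal sketch, assuming a custom `openstack` MachineConfigPool role and the `nvme-fabrics` module used as the example elsewhere in this document:

----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-openstack-load-nvme-fabrics
  labels:
    # must match the machineConfigSelector of the MachineConfigPool that targets the OpenStack nodes
    machineconfiguration.openshift.io/role: openstack
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - path: /etc/modules-load.d/nvme_fabrics.conf
          overwrite: false
          mode: 420
          contents:
            source: data:,nvme-fabrics
----

Note that, as the following module warns, applying a `MachineConfig` reboots the affected nodes, so plan such preparations before the control plane services are placed on them.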
//*TODO: Galera, RabbitMQ, Swift, Glance, etc.* diff --git a/docs_user/modules/con_about-machine-configs.adoc b/docs_user/modules/con_about-machine-configs.adoc index a6ac444b8..82a3a4ba5 100644 --- a/docs_user/modules/con_about-machine-configs.adoc +++ b/docs_user/modules/con_about-machine-configs.adoc @@ -6,13 +6,13 @@ Some services require you to have services or kernel modules running on the host `nvme-fabrics` kernel module. For those cases you use `MachineConfig` manifests, and if you are restricting -the nodes that you are placing the OpenStack services on using the `nodeSelector` then +the nodes that you are placing the {rhos_prev_long} services on using the `nodeSelector` then you also want to limit where the `MachineConfig` is applied. To define where the `MachineConfig` can be applied, you need to use a `MachineConfigPool` that links the `MachineConfig` to the nodes. -For example to be able to limit `MachineConfig` to the 3 OpenShift nodes that you +For example to be able to limit `MachineConfig` to the 3 {OpenShift} ({OpenShiftShort}) nodes that you marked with the `type: openstack` label, you create the `MachineConfigPool` like this: @@ -46,4 +46,4 @@ metadata: Refer to the link:https://docs.openshift.com/container-platform/4.15/post_installation_configuration/machine-configuration-tasks.html[Postinstallation machine configuration tasks] in _OpenShift Container Platform 4.15 Documentation_. [WARNING] -Applying a `MachineConfig` to an {OpenShift} node makes the node reboot. +Applying a `MachineConfig` to an {OpenShiftShort} node makes the node reboot. diff --git a/docs_user/modules/con_about-node-selector.adoc b/docs_user/modules/con_about-node-selector.adoc index 221437a92..258865437 100644 --- a/docs_user/modules/con_about-node-selector.adoc +++ b/docs_user/modules/con_about-node-selector.adoc @@ -3,36 +3,36 @@ = About node selector There are a variety of reasons why you might want to restrict the nodes where -OpenStack services can be placed: +{rhos_prev_long} ({OpenStackShort}) services can be placed: * Hardware requirements: System memory, Disk space, Cores, HBAs -* Limit the impact of the OpenStack services on other OpenShift workloads. -* Avoid collocating OpenStack services. +* Limit the impact of the {OpenStackShort} services on other {OpenShift} workloads. +* Avoid collocating {OpenStackShort} services. -The mechanism provided by the OpenStack operators to achieve this is through the +The mechanism provided by the {OpenStackShort} operators to achieve this is through the use of labels. -You either label the OpenShift nodes or use existing labels, and then use those labels in the OpenStack manifests in the +You either label the {OpenShiftShort} nodes or use existing labels, and then use those labels in the {OpenStackShort} manifests in the `nodeSelector` field. -The `nodeSelector` field in the OpenStack manifests follows the standard -OpenShift `nodeSelector` field. For more information, see link:https://docs.openshift.com/container-platform/4.15/nodes/scheduling/nodes-scheduler-node-selectors.html[About node selectors] in _OpenShift Container Platform 4.15 Documentation_. +The `nodeSelector` field in the {OpenStackShort} manifests follows the standard +{OpenShiftShort} `nodeSelector` field. For more information, see link:https://docs.openshift.com/container-platform/4.15/nodes/scheduling/nodes-scheduler-node-selectors.html[About node selectors] in _OpenShift Container Platform 4.15 Documentation_. 
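To make the shape of this field concrete, here is a minimal, illustrative `OpenStackControlPlane` fragment (most of the spec is elided; the `type: openstack` label matches the node-labelling example later in this module, and the backend name and `fc-host` label are hypothetical):

----
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  # deployment level: applies to every service unless overridden at a lower level
  nodeSelector:
    type: openstack
  cinder:
    template:
      cinderVolumes:
        fc-backend:
          # service level: for example, pin a backend to nodes with the required HBAs
          nodeSelector:
            fc-host: "true"
----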
-This field is present at all the different levels of the OpenStack manifests: +This field is present at all the different levels of the {OpenStackShort} manifests: * Deployment: The `OpenStackControlPlane` object. * Component: For example the `cinder` element in the `OpenStackControlPlane`. * Service: For example the `cinderVolume` element within the `cinder` element in the `OpenStackControlPlane`. -This allows a fine grained control of the placement of the OpenStack services +This allows a fine grained control of the placement of the {OpenStackShort} services with minimal repetition. Values of the `nodeSelector` are propagated to the next levels unless they are overwritten. This means that a `nodeSelector` value at the deployment level will -affect all the OpenStack services. +affect all the {OpenStackShort} services. -For example, you can add label `type: openstack` to any 3 OpenShift nodes: +For example, you can add label `type: openstack` to any 3 {OpenShiftShort} nodes: ---- $ oc label nodes worker0 type=openstack @@ -90,5 +90,5 @@ The Block Storage service operator does not currently have the possibility of de the `nodeSelector` in `cinderVolumes`, so you need to specify it on each of the backends. -It is possible to leverage labels added by the Node Feature Discovery (NFD) Operator to place OpenStack services. For more information, see link:https://docs.openshift.com/container-platform/4.13/hardware_enablement/psap-node-feature-discovery-operator.html[Node Feature Discovery Operator] in _OpenShift Container Platform 4.15 Documentation_. +It is possible to leverage labels added by the Node Feature Discovery (NFD) Operator to place {OpenStackShort} services. For more information, see link:https://docs.openshift.com/container-platform/4.13/hardware_enablement/psap-node-feature-discovery-operator.html[Node Feature Discovery Operator] in _OpenShift Container Platform 4.15 Documentation_. diff --git a/docs_user/modules/con_bare-metal-provisioning-service-configurations.adoc b/docs_user/modules/con_bare-metal-provisioning-service-configurations.adoc index 101e53e38..4fe5d81e9 100644 --- a/docs_user/modules/con_bare-metal-provisioning-service-configurations.adoc +++ b/docs_user/modules/con_bare-metal-provisioning-service-configurations.adoc @@ -42,5 +42,5 @@ Finally, a parameter which may be important based upon your configuration and ex As a warning, hardware types set via the `ironic.conf` `enabled_hardware_types` parameter and hardware type driver interfaces starting with `staging-` are not available to be migrated into an adopted configuration. -Furthermore, {OpenStackPreviousInstaller}-based deployments made architectural decisions based upon self-management of services. When adopting deployments, you don't necessarilly need multiple replicas of secondary services such as the Introspection service. Should the host the container is running upon fail, OpenShift will restart the container on another host. The short-term transitory loss -//kgilliga: This last sentence tails off. \ No newline at end of file +Furthermore, {OpenStackPreviousInstaller}-based deployments made architectural decisions based upon self-management of services. When adopting deployments, you don't necessarilly need multiple replicas of secondary services such as the Introspection service. Should the host the container is running upon fail, {OpenShift} will restart the container on another host. The short-term transitory loss +//kgilliga: This last sentence trails off. 
\ No newline at end of file diff --git a/docs_user/modules/con_block-storage-service-requirements.adoc index ded2362e8..ca562ca2c 100644 --- a/docs_user/modules/con_block-storage-service-requirements.adoc +++ b/docs_user/modules/con_block-storage-service-requirements.adoc @@ -2,12 +2,12 @@ = Block Storage service requirements -The Block Storage service (cinder) has both local storage used by the service and OpenStack user requirements. +The Block Storage service (cinder) has both local storage used by the service and {rhos_prev_long} ({OpenStackShort}) user requirements. Local storage is used for example when downloading a glance image for the create volume from image operation, which can become considerable when having concurrent operations and not using the Block Storage service volume cache. -In the Operator deployed OpenStack, there is a way to configure the +In the Operator deployed {OpenStackShort}, there is a way to configure the location of the conversion directory to be an NFS share (using the extra volumes feature), something that needed to be done manually before. @@ -19,7 +19,7 @@ First you need to check the transport protocol the Block Storage service backend RBD, iSCSI, FC, NFS, NVMe-oF, etc. Once you know all the transport protocols that you are using, you can make -sure that you are taking them into consideration when placing the Block Storage services (as mentioned above in the Node Roles section) and the right storage transport related binaries are running on the OpenShift nodes. +sure that you are taking them into consideration when placing the Block Storage services (as mentioned above in the Node Roles section) and the right storage transport related binaries are running on the {OpenShift} nodes. Detailed information about the specifics for each storage transport protocol can be found in the xref:adopting-the-block-storage-service_adopt-control-plane[Adopting the {block_storage}]. diff --git a/docs_user/modules/con_comparing-configuration-files-between-deployments.adoc index b196706d2..74111980b 100644 --- a/docs_user/modules/con_comparing-configuration-files-between-deployments.adoc +++ b/docs_user/modules/con_comparing-configuration-files-between-deployments.adoc @@ -2,12 +2,11 @@ = Comparing configuration files between deployments -In order to help users to handle the configuration for the TripleO and OpenStack +In order to help users handle the configuration for the {OpenStackPreviousInstaller} and {rhos_prev_long} services the tool: https://github.com/openstack-k8s-operators/os-diff has been -develop to compare the configuration files between the TripleO deployment and -the next gen cloud. +developed to compare the configuration files between the {OpenStackPreviousInstaller} deployment and the next gen cloud. Make sure Golang is installed and configured on your env: - +//kgilliga: Do we want to link to "https://github.com/openstack-k8s-operators/os-diff" downstream?
---- git clone https://github.com/openstack-k8s-operators/os-diff pushd os-diff diff --git a/docs_user/modules/con_identity-service-authentication.adoc b/docs_user/modules/con_identity-service-authentication.adoc index 9ef81a8ba..76c70ffc3 100644 --- a/docs_user/modules/con_identity-service-authentication.adoc +++ b/docs_user/modules/con_identity-service-authentication.adoc @@ -2,7 +2,7 @@ = Identity service authentication -When you adopt a director OpenStack deployment, users authenticate to the Identity service (keystone) by using Secure RBAC (SRBAC). There is no change to how you perform operations if SRBAC is enabled. If SRBAC is not enabled, then adopting a director OpenStack deployment changes how you perform operations, such as adding roles to users. If you have custom policies enabled, contact support before adopting a director OpenStack deployment. +When you adopt a {OpenStackPreviousInstaller} {rhos_prev_long} ({OpenStackShort}) deployment, users authenticate to the Identity service (keystone) by using Secure RBAC (SRBAC). There is no change to how you perform operations if SRBAC is enabled. If SRBAC is not enabled, then adopting a {OpenStackPreviousInstaller} {OpenStackShort} deployment changes how you perform operations, such as adding roles to users. If you have custom policies enabled, contact support before adopting a {OpenStackPreviousInstaller} {OpenStackShort} deployment. // For more information on SRBAC see [link]. diff --git a/docs_user/modules/con_key-manager-service-support-for-crypto-plugins.adoc b/docs_user/modules/con_key-manager-service-support-for-crypto-plugins.adoc index d63b7249b..ea3c44780 100644 --- a/docs_user/modules/con_key-manager-service-support-for-crypto-plugins.adoc +++ b/docs_user/modules/con_key-manager-service-support-for-crypto-plugins.adoc @@ -2,7 +2,7 @@ = Key Manager service support for crypto plug-ins -The Key Manager service (barbican) does not yet support all of the crypto plug-ins available in TripleO. +The Key Manager service (barbican) does not yet support all of the crypto plug-ins available in {OpenStackPreviousInstaller}. //**TODO: Right now Barbican only supports the simple crypto plugin. diff --git a/docs_user/modules/con_node-roles.adoc b/docs_user/modules/con_node-roles.adoc index 2f8e13ca5..6935a0187 100644 --- a/docs_user/modules/con_node-roles.adoc +++ b/docs_user/modules/con_node-roles.adoc @@ -2,26 +2,26 @@ = About node roles -In director deployments you had 4 different standard roles for the nodes: +In {OpenStackPreviousInstaller} deployments you had 4 different standard roles for the nodes: `Controller`, `Compute`, `Ceph Storage`, `Swift Storage`, but in the control plane you make a distinction based on where things are running, in -OpenShift or external to it. +{OpenShift} ({OpenShiftShort}) or external to it. -When adopting a director OpenStack your `Compute` nodes will directly become +When adopting a {OpenStackPreviousInstaller} {rhos_prev_long} ({OpenStackShort}) your `Compute` nodes will directly become external nodes, so there should not be much additional planning needed there. In many deployments being adopted the `Controller` nodes will require some -thought because you have many OpenShift nodes where the controller services +thought because you have many {OpenShiftShort} nodes where the Controller services could run, and you have to decide which ones you want to use, how you are going to use them, and make sure those nodes are ready to run the services. 
-In most deployments running OpenStack services on `master` nodes can have a -seriously adverse impact on the OpenShift cluster, so it is recommended that you place OpenStack services on non `master` nodes. +In most deployments running {OpenStackShort} services on `master` nodes can have a +seriously adverse impact on the {OpenShiftShort} cluster, so it is recommended that you place {OpenStackShort} services on non `master` nodes. -By default OpenStack Operators deploy OpenStack services on any worker node, but +By default {OpenStackShort} Operators deploy {OpenStackShort} services on any worker node, but that is not necessarily what's best for all deployments, and there may be even services that won't even work deployed like that. When planing a deployment it's good to remember that not all the services on an -OpenStack deployments are the same as they have very different requirements. +{OpenStackShort} deployments are the same as they have very different requirements. Looking at the Block Storage service (cinder) component you can clearly see different requirements for its services: the cinder-scheduler is a very light service with low @@ -35,17 +35,17 @@ data) requirements. The Glance and Swift components are in the data path, as well as RabbitMQ and Galera services. Given these requirements it may be preferable not to let these services wander -all over your OpenShift worker nodes with the possibility of impacting other +all over your {OpenShiftShort} worker nodes with the possibility of impacting other workloads, or maybe you don't mind the light services wandering around but you want to pin down the heavy ones to a set of infrastructure nodes. There are also hardware restrictions to take into consideration, because if you are using a Fibre Channel (FC) Block Storage service backend you need the cinder-volume, cinder-backup, and maybe even the glance (if it's using the Block Storage service as a backend) -services to run on a OpenShift host that has an HBA. +services to run on a {OpenShiftShort} host that has an HBA. -The OpenStack Operators allow a great deal of flexibility on where to run the -OpenStack services, as you can use node labels to define which OpenShift nodes -are eligible to run the different OpenStack services. Refer to the xref:about-node-selector_{context}[About node +The {OpenStackShort} Operators allow a great deal of flexibility on where to run the +{OpenStackShort} services, as you can use node labels to define which {OpenShiftShort} nodes +are eligible to run the different {OpenStackShort} services. Refer to the xref:about-node-selector_{context}[About node selector] to learn more about using labels to define -placement of the OpenStack services. \ No newline at end of file +placement of the {OpenStackShort} services. \ No newline at end of file diff --git a/docs_user/modules/con_openshift-preparation-for-block-storage-adoption.adoc b/docs_user/modules/con_openshift-preparation-for-block-storage-adoption.adoc index 09d6f3e19..d5a767142 100644 --- a/docs_user/modules/con_openshift-preparation-for-block-storage-adoption.adoc +++ b/docs_user/modules/con_openshift-preparation-for-block-storage-adoption.adoc @@ -1,7 +1,5 @@ [id="openshift-preparation-for-block-storage-adoption_{context}"] -//kgilliga: There are some external links that need to be revisited. 
- = {OpenShift} preparation for {block_storage} adoption As explained in xref:planning-the-new-deployment_planning[Planning the new deployment], before deploying {rhos_prev_long} {OpenStackShort} in {OpenShift}, you must ensure that the networks are ready, that you have decided the node selection, and also make sure any necessary changes to the {OpenShiftShort} nodes have been made. For {block_storage_first_ref} volume and backup services all these 3 must be carefully considered. @@ -184,9 +182,15 @@ If you are using a single node deployment to test the process,replace `worker` w You are only loading the `nvme-fabrics` module because it takes care of loading the transport specific modules (tcp, rdma, fc) as needed. + +ifeval::["{build}" != "downstream"] For production deployments using NVMe-oF volumes it is recommended that you use multipathing. For NVMe-oF volumes {OpenStackShort} uses native multipathing, called https://nvmexpress.org/faq-items/what-is-ana-nvme-multipathing/[ANA]. +endif::[] +ifeval::["{build}" != "upstream"] +For production deployments using NVMe-oF volumes it is recommended that you use +multipathing. For NVMe-oF volumes {OpenStackShort} uses native multipathing, called ANA. +endif::[] + Once the {OpenShiftShort} nodes have rebooted and are loading the `nvme-fabrics` module you can confirm that the Operating System is configured and supports ANA by diff --git a/docs_user/modules/con_service-configurations.adoc b/docs_user/modules/con_service-configurations.adoc index 2d485f337..e90c8306a 100644 --- a/docs_user/modules/con_service-configurations.adoc +++ b/docs_user/modules/con_service-configurations.adoc @@ -2,19 +2,18 @@ = Service configurations -There is a fundamental difference between the director and operator deployments +There is a fundamental difference between the {OpenStackPreviousInstaller} and operator deployments regarding the configuration of the services. -In director deployments many of the service configurations are abstracted by -director-specific configuration options. A single director option may trigger +In {OpenStackPreviousInstaller} deployments many of the service configurations are abstracted by +{OpenStackPreviousInstaller}-specific configuration options. A single {OpenStackPreviousInstaller} option may trigger changes for multiple services and support for drivers, for example, the Block Storage service (cinder), that -require patches to the director code base. +require patches to the {OpenStackPreviousInstaller} code base. -In operator deployments this approach has changed: reduce the installer specific knowledge and leverage OpenShift and -OpenStack service specific knowledge whenever possible. +In operator deployments this approach has changed: reduce the installer specific knowledge and leverage {OpenShift} ({OpenShiftShort}) and +{rhos_prev_long} ({OpenStackShort}) service specific knowledge whenever possible. -To this effect OpenStack services will have sensible defaults for OpenShift -deployments and human operators will provide configuration snippets to provide +To this effect {OpenStackShort} services will have sensible defaults for {OpenShiftShort} deployments and human operators will provide configuration snippets to provide necessary configuration, such as the Block Storage service backend configuration, or to override the defaults. @@ -62,9 +61,9 @@ spec: < . . . 
> ---- -In OpenShift it is not recommended to store sensitive information like the -credentials to the Block Storage service storage array in the CRs, so most OpenStack operators -have a mechanism to use OpenShift's `Secrets` for sensitive configuration +In {OpenShift} it is not recommended to store sensitive information like the +credentials to the Block Storage service storage array in the CRs, so most {OpenStackShort} operators +have a mechanism to use the {OpenShift} `Secrets` for sensitive configuration parameters of the services and then use them by reference in the `customServiceConfigSecrets` section which is analogous to the `customServiceConfig`. diff --git a/docs_user/modules/proc_adopting-compute-services-to-the-data-plane.adoc b/docs_user/modules/proc_adopting-compute-services-to-the-data-plane.adoc index b6bdb62ff..ff35303d9 100644 --- a/docs_user/modules/proc_adopting-compute-services-to-the-data-plane.adoc +++ b/docs_user/modules/proc_adopting-compute-services-to-the-data-plane.adoc @@ -120,9 +120,13 @@ endif::[] "grep -qF $uuid /var/lib/nova/compute_id || (echo $uuid | sudo tee /var/lib/nova/compute_id && sudo chown 42436:42436 /var/lib/nova/compute_id && sudo chcon -t container_file_t /var/lib/nova/compute_id)" done ---- - +ifeval::["{build}" != "downstream"] . Create a https://kubernetes.io/docs/concepts/configuration/secret/#ssh-authentication-secrets[ssh authentication secret] for the data plane nodes: //kgilliga: We probably shouldn't link to an external site. I need to check if we will document this in Red Hat docs. +endif::[] +ifeval::["{build}" != "upstream"] +. Create a ssh authentication secret for the data plane nodes: +endif::[] + [subs=+quotes] ---- @@ -260,7 +264,7 @@ EOF That service removes pre-FFU workarounds and configures Nova compute services for Ceph storage backend. Provided above resources should contain a cell-specific configurations. -For multi-cell, config maps and OpenStack dataplane services should be named like `nova-custom-ceph-cellX` and `nova-compute-extraconfig-cellX`. +For multi-cell, config maps and {rhos_prev_long} data plane services should be named like `nova-custom-ceph-cellX` and `nova-compute-extraconfig-cellX`. ifeval::["{build}" == "downstream"] . Create a secret for the subscription manager and a secret for the Red Hat registry: diff --git a/docs_user/modules/proc_adopting-image-service-with-ceph-backend.adoc b/docs_user/modules/proc_adopting-image-service-with-ceph-backend.adoc index 1bcdd14f2..aa3028fb1 100644 --- a/docs_user/modules/proc_adopting-image-service-with-ceph-backend.adoc +++ b/docs_user/modules/proc_adopting-image-service-with-ceph-backend.adoc @@ -52,7 +52,7 @@ EOF [NOTE] ==== If you have previously backed up your {OpenStackShort} services configuration file from the old environment, you can use os-diff to compare and make sure the configuration is correct. -For more information, see xref:reviewing-the-openstack-control-plane-configuration_adopt-control-plane[Reviewing the OpenStack control plane configuration]. +For more information, see xref:reviewing-the-openstack-control-plane-configuration_adopt-control-plane[Reviewing the {rhos_prev_long} control plane configuration]. 
---- pushd os-diff diff --git a/docs_user/modules/proc_configuring-data-plane-nodes.adoc b/docs_user/modules/proc_configuring-data-plane-nodes.adoc index 2f5d6b81e..91ba592b1 100644 --- a/docs_user/modules/proc_configuring-data-plane-nodes.adoc +++ b/docs_user/modules/proc_configuring-data-plane-nodes.adoc @@ -2,9 +2,9 @@ = Configuring data plane nodes -A complete OpenStack cluster consists of OpenShift nodes and data plane nodes. The +A complete {rhos_prev_long} ({OpenStackShort}) cluster consists of {OpenShift} ({OpenShiftShort}) nodes and data plane nodes. The former use `NodeNetworkConfigurationPolicy` custom resource (CR) to configure physical -interfaces. Since data plane nodes are not OpenShift nodes, a different approach to +interfaces. Since data plane nodes are not {OpenShiftShort} nodes, a different approach to configure their network connectivity is used. Instead, data plane nodes are configured by `dataplane-operator` and its CRs. The CRs @@ -22,8 +22,8 @@ To make sure the latest network configuration is used during the data plane adop should also set `edpm_network_config_update: true` in the `nodeTemplate`. You will proceed with <> once the OpenStack control plane is deployed in the -OpenShift cluster. When doing so, you will configure `NetConfig` and +process>> once the {OpenShiftShort} control plane is deployed in the +{OpenShiftShort} cluster. When doing so, you will configure `NetConfig` and `OpenstackDataplaneNodeSet` CRs, using the same VLAN tags and IPAM configuration as determined in the previous steps. diff --git a/docs_user/modules/proc_configuring-networking-for-control-plane-services.adoc b/docs_user/modules/proc_configuring-networking-for-control-plane-services.adoc index f6323e529..eab300a13 100644 --- a/docs_user/modules/proc_configuring-networking-for-control-plane-services.adoc +++ b/docs_user/modules/proc_configuring-networking-for-control-plane-services.adoc @@ -3,7 +3,7 @@ = Configuring the networking for control plane services Once NMState operator created the desired hypervisor network configuration for -isolated networks, we need to configure OpenStack services to use configured +isolated networks, we need to configure {rhos_prev_long} ({OpenStackShort}) services to use configured interfaces. This is achieved by defining `NetworkAttachmentDefinition` custom resources (CRs) for each isolated network. (In some clusters, these CRs are managed by the Cluster Network Operator in which case `Network` CRs should be used instead. For more information, see @@ -66,7 +66,7 @@ The example above would exclude addresses `172.17.0.24` as well as //== Load balancer IP addresses -Some OpenStack services require load balancer IP addresses. These IP addresses +Some {OpenStackShort} services require load balancer IP addresses. These IP addresses belong to the same IP range as the control plane services, and are managed by MetalLB. The IP address pool is defined by `IPAllocationPool` CRs. This pool should also be aligned with the adopted configuration. 
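For reference, a minimal sketch of such a pool for the `internalapi` network is shown below. Current MetalLB releases name this CR `IPAddressPool`; the address range is a placeholder and must cover the VIPs carried over from the adopted environment:

----
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: internalapi
  namespace: metallb-system
spec:
  addresses:
    - 172.17.0.80-172.17.0.90
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: internalapi
  namespace: metallb-system
spec:
  ipAddressPools:
    - internalapi
----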
diff --git a/docs_user/modules/proc_configuring-openshift-worker-nodes.adoc b/docs_user/modules/proc_configuring-openshift-worker-nodes.adoc index e29ea9ac0..a9f1e7e39 100644 --- a/docs_user/modules/proc_configuring-openshift-worker-nodes.adoc +++ b/docs_user/modules/proc_configuring-openshift-worker-nodes.adoc @@ -1,8 +1,8 @@ [id="configuring-openshift-worker-nodes_{context}"] -= Configuring OpenShift worker nodes += Configuring {OpenShift} worker nodes -OCP worker nodes that run OpenStack services need a way to connect the service +{OpenShift} worker nodes that run {rhos_prev_long} services need a way to connect the service pods to isolated networks. This requires physical network configuration on the hypervisor. diff --git a/docs_user/modules/proc_creating-a-ceph-nfs-cluster.adoc b/docs_user/modules/proc_creating-a-ceph-nfs-cluster.adoc index 2c7ab591b..c15d26f8d 100644 --- a/docs_user/modules/proc_creating-a-ceph-nfs-cluster.adoc +++ b/docs_user/modules/proc_creating-a-ceph-nfs-cluster.adoc @@ -10,10 +10,14 @@ If you use the Ceph via NFS backend with {rhos_component_storage_file_first_ref} it is easier for clients to mount their existing shares through the new NFS export locations. . You must propagate the `StorageNFS` network to the target nodes -where the `ceph-nfs` service will be deployed. See link:https://docs.openstack.org/project-deploy-guide/tripleo-docs/wallaby/features/network_isolation.html#deploying-the-overcloud-with-network-isolation[Deploying +where the `ceph-nfs` service will be deployed. +ifeval::["{build}" != "downstream"] +See link:https://docs.openstack.org/project-deploy-guide/tripleo-docs/wallaby/features/network_isolation.html#deploying-the-overcloud-with-network-isolation[Deploying an Overcloud with Network Isolation with TripleO] and link:https://docs.openstack.org/project-deploy-guide/tripleo-docs/wallaby/post_deployment/updating_network_configuration_post_deployment.html[Applying network configuration changes after deployment] for the background to these -tasks. The following steps will be relevant if the Ceph Storage nodes were +tasks. +endif::[] +The following steps will be relevant if the Ceph Storage nodes were deployed via {OpenStackPreviousInstaller}. .. Identify the node definition file used in the environment. This is the input file associated with the `openstack overcloud node provision` diff --git a/docs_user/modules/proc_deploying-file-systems-service-control-plane.adoc b/docs_user/modules/proc_deploying-file-systems-service-control-plane.adoc index a217c6f80..11db73eb0 100644 --- a/docs_user/modules/proc_deploying-file-systems-service-control-plane.adoc +++ b/docs_user/modules/proc_deploying-file-systems-service-control-plane.adoc @@ -26,10 +26,14 @@ adopting the Shared File Systems services. ensure that neutron has been deployed prior to adopting Shared File Systems services. .Procedure - +ifeval::["{build}" != "downstream"] . Define the `CONTROLLER1_SSH` environment variable, if it link:stop_openstack_services.md#variables[hasn't been -defined] already. Then copy the configuration file from {OpenStackShort} {rhos_prev_ver} for -reference. +defined] already. Then copy the configuration file from {OpenStackShort} {rhos_prev_ver} for reference. +endif::[] +ifeval::["{build}" != "upstream"] +. Define the `CONTROLLER1_SSH` environment variable, if it hasn't been +defined already. Then copy the configuration file from {OpenStackShort} {rhos_prev_ver} for reference. 
+endif::[] + ---- $CONTROLLER1_SSH cat /var/lib/config-data/puppet-generated/manila/etc/manila/manila.conf | awk '!/^ *#/ && NF' > ~/manila.conf @@ -48,9 +52,12 @@ all of these can be ignored. * Ignore the `osapi_share_listen` configuration. In {rhos_long} {rhos_curr_ver}, you rely on {OpenShift} routes and ingress. * Pay attention to policy overrides. In {rhos_acro} {rhos_curr_ver}, the {rhos_component_storage_file} ships with a secure -default RBAC, and overrides may not be necessary. Please review RBAC -defaults by using the https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-policy-generator.html[Oslo policy generator] -tool. If a custom policy is necessary, you must provide it as a +default RBAC, and overrides may not be necessary. +ifeval::["{build}" != "downstream"] +Please review RBAC defaults by using the https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-policy-generator.html[Oslo policy generator] +tool. +endif::[] +If a custom policy is necessary, you must provide it as a `ConfigMap`. The following sample spec illustrates how a `ConfigMap` called `manila-policy` can be set up with the contents of a file called `policy.yaml`. @@ -96,7 +103,8 @@ you will need to split them up when deploying {rhos_acro} {rhos_curr_ver}. Each backend driver needs to use its own instance of the `manila-share` service. * If a storage backend driver needs a custom container image, find it on the -https://catalog.redhat.com/software/containers/search?gs&q=manila[RHOSP Ecosystem Catalog] +https://catalog.redhat.com/software/containers/search?gs&q=manila[RHOSP Ecosystem Catalog] +//kgilliga: Should this link to the RH Ecosystem Catalog appear downstream only? and set `manila: template: manilaShares: : containerImage` value. The following example illustrates multiple storage backend drivers, using custom container images. diff --git a/docs_user/modules/proc_migrating-databases-to-mariadb-instances.adoc b/docs_user/modules/proc_migrating-databases-to-mariadb-instances.adoc index c61b8f913..4c54b714e 100644 --- a/docs_user/modules/proc_migrating-databases-to-mariadb-instances.adoc +++ b/docs_user/modules/proc_migrating-databases-to-mariadb-instances.adoc @@ -1,6 +1,5 @@ [id="migrating-databases-to-mariadb-instances_{context}"] -//Check xref contexts. //kgilliga: Find out if the steps in the Variables and pre-checks sections can go in the main procedure or if they have to be done before. = Migrating databases to MariaDB instances @@ -279,3 +278,4 @@ oc delete pod mariadb-copy-data oc delete pvc mariadb-data ---- For more information, see https://learn.redhat.com/t5/DO280-Red-Hat-OpenShift/About-pod-security-standards-and-warnings/m-p/32502[About pod security standards and warnings]. +//kgilliga: Should this link to "About pod security standards and warnings" appear downstream only? diff --git a/docs_user/modules/proc_preparing-block-storage-service-by-customizing-configuration.adoc b/docs_user/modules/proc_preparing-block-storage-service-by-customizing-configuration.adoc index b92cd9883..34504f1f2 100644 --- a/docs_user/modules/proc_preparing-block-storage-service-by-customizing-configuration.adoc +++ b/docs_user/modules/proc_preparing-block-storage-service-by-customizing-configuration.adoc @@ -30,8 +30,10 @@ configuration would go in `customServiceConfig` (or a `Secret` and then used in `customServiceConfigSecrets`). . Check if any of the {block_storage} volume drivers being used requires a custom vendor image. 
If they do, find the location of the image in the vendor's instruction -available in the w https://catalog.redhat.com/software/search?target_platforms=Red%20Hat%20OpenStack%20Platform&p=1&functionalCategories=Data%20storage[OpenStack Cinder ecosystem +available in the https://catalog.redhat.com/software/search?target_platforms=Red%20Hat%20OpenStack%20Platform&p=1&functionalCategories=Data%20storage[{rhos_prev_long} {block_storage} ecosystem page] +//kgilliga: Shouldn't this link be this instead? https://catalog.redhat.com/software/search?target_platforms=Red%20Hat%20OpenStack%20Platform&p=1&functionalCategories=Data%20storage&certified_plugin_types=Block%20Storage%20(Cinder) +//Also, should we link to the Red Hat catalog downstream only? and add it under the specific's driver section using the `containerImage` key. The following example shows a CRD for a Pure Storage array with a certified driver: + diff --git a/docs_user/modules/proc_reusing-existing-subnet-ranges.adoc b/docs_user/modules/proc_reusing-existing-subnet-ranges.adoc index 8f5c8df11..2f0645183 100644 --- a/docs_user/modules/proc_reusing-existing-subnet-ranges.adoc +++ b/docs_user/modules/proc_reusing-existing-subnet-ranges.adoc @@ -18,8 +18,7 @@ instead. For more information, see xref:planning-your-ipam-configuration_configu No special routing configuration is required in this scenario; the only thing to pay attention to is to make sure that already consumed IP addresses don't -overlap with the new allocation pools configured for OpenStack control plane -services. +overlap with the new allocation pools configured for {rhos_prev_long} control plane services. If you are especially constrained by the size of the existing subnet, you may have to apply elaborate exclusion rules when defining allocation pools for the diff --git a/docs_user/modules/proc_using-new-subnet-ranges.adoc b/docs_user/modules/proc_using-new-subnet-ranges.adoc index 9cc0e275a..ebe9b08ea 100644 --- a/docs_user/modules/proc_using-new-subnet-ranges.adoc +++ b/docs_user/modules/proc_using-new-subnet-ranges.adoc @@ -9,7 +9,7 @@ addresses for the new control plane services. The general idea here is to define new IP ranges for control plane services that belong to a different subnet that was not used in the existing cluster. Then, configure link local IP routing between the old and new subnets to allow -old and new service deployments to communicate. This involves using TripleO +old and new service deployments to communicate. This involves using {OpenStackPreviousInstaller} mechanism on pre-adopted cluster to configure additional link local routes there. This will allow EDP deployment to reach out to adopted nodes using their old subnet addresses. @@ -58,7 +58,7 @@ Once done, run `tripleo deploy` to apply the new configuration. Note that network configuration changes are not applied by default to avoid risk of network disruption. You will have to enforce the changes by setting the -`StandaloneNetworkConfigUpdate: true` in the TripleO configuration files. +`StandaloneNetworkConfigUpdate: true` in the {OpenStackPreviousInstaller} configuration files. Once `tripleo deploy` is complete, you should see new link local routes to the new subnet on each node. For example, @@ -78,7 +78,7 @@ The next step is to configure similar routes for the old subnet for control plan next-hop-interface: ospbr ``` -Once applied, you should eventually see the following route added to your OCP nodes. 
+Once applied, you should eventually see the following route added to your {OpenShift} ({OpenShiftShort}) nodes. ```bash # ip route | grep 192 @@ -87,7 +87,7 @@ Once applied, you should eventually see the following route added to your OCP no --- -At this point, you should be able to ping the adopted nodes from OCP nodes +At this point, you should be able to ping the adopted nodes from {OpenShiftShort} nodes using their old subnet addresses; and vice versa. ---