diff --git a/docs_user/assemblies/assembly_adopting-the-image-service.adoc b/docs_user/assemblies/assembly_adopting-the-image-service.adoc index d7aca4c3b..9e564f66d 100644 --- a/docs_user/assemblies/assembly_adopting-the-image-service.adoc +++ b/docs_user/assemblies/assembly_adopting-the-image-service.adoc @@ -1,7 +1,6 @@ [id="adopting-the-image-service_{context}"] :context: image-service -//Check xref context."Reviewing the OpenStack configuration" xref does not work. = Adopting the {image_service} @@ -14,9 +13,9 @@ up and running: the {identity_service} endpoints are updated and the same backen This guide also assumes that: -* A `TripleO` environment (the source Cloud) is running on one side. +* A {OpenStackPreviousInstaller} environment (the source Cloud) is running on one side. * A `SNO` / `CodeReadyContainers` is running on the other side. -* (optional) An internal/external `Ceph` cluster is reachable by both `crc` and `TripleO`. +* (optional) An internal/external `Ceph` cluster is reachable by both `crc` and {OpenStackPreviousInstaller}. ifeval::["{build}" != "downstream"] //This link goes to a 404. Do we need this text downstream? diff --git a/docs_user/assemblies/assembly_migrating-monitoring-stack-to-target-nodes.adoc b/docs_user/assemblies/assembly_migrating-monitoring-stack-to-target-nodes.adoc index c55cfff88..ba039e3f0 100644 --- a/docs_user/assemblies/assembly_migrating-monitoring-stack-to-target-nodes.adoc +++ b/docs_user/assemblies/assembly_migrating-monitoring-stack-to-target-nodes.adoc @@ -12,7 +12,7 @@ Before starting this process, a few considerations are required: * There’s no need to migrate node exporters: these daemons are deployed across the nodes that are part of the {CephCluster} cluster (placement is ‘*’), and we’re going to lose metrics as long as the Controller nodes are not part of the {CephCluster} cluster anymore -* Each monitoring stack component is bound to specific ports that TripleO is +* Each monitoring stack component is bound to specific ports that {OpenStackPreviousInstaller} is supposed to open beforehand; make sure to double check the firewall rules are in place and the ports are opened for a given monitoring stack service @@ -24,7 +24,7 @@ reducing the placement with `count: 1` is a reasonable solution and allows you to successfully migrate the existing daemons in an HCI (or HW limited) scenario without impacting other services. However, it is still possible to put in place a dedicated HA solution and -realize a component that is consistent with the TripleO model to reach HA. +realize a component that is consistent with the {OpenStackPreviousInstaller} model to reach HA. Building and deploying such an HA model is out of scope for this procedure. include::../modules/proc_migrating-existing-daemons-to-target-nodes.adoc[leveloffset=+1] diff --git a/docs_user/assemblies/assembly_preparing-the-block-storage-service-for-adoption.adoc b/docs_user/assemblies/assembly_preparing-the-block-storage-service-for-adoption.adoc index a40dbe2e5..1ce83e48d 100644 --- a/docs_user/assemblies/assembly_preparing-the-block-storage-service-for-adoption.adoc +++ b/docs_user/assemblies/assembly_preparing-the-block-storage-service-for-adoption.adoc @@ -6,8 +6,7 @@ The {block_storage_first_ref} is configured using configuration snippets instead of using configuration parameters -defined by the installer. For more information, see xref:planning-the-new-deployment_planning[Planning the new deployment]. -//kgilliga: Note to self: This xref does not work in the preview.
Need to revisit. +defined by the installer. For more information, see xref:service-configurations_planning[Service configurations]. The recommended way to deploy {block_storage} volume backends has changed to remove old limitations, add flexibility, and improve operations. diff --git a/docs_user/modules/con_bare-metal-provisioning-service-configurations.adoc b/docs_user/modules/con_bare-metal-provisioning-service-configurations.adoc index 4fe5d81e9..e7022632c 100644 --- a/docs_user/modules/con_bare-metal-provisioning-service-configurations.adoc +++ b/docs_user/modules/con_bare-metal-provisioning-service-configurations.adoc @@ -1,11 +1,8 @@ [id="con_bare-metal-provisioning-service-configurations_{context}"] -//Check xrefs - = Bare Metal Provisioning service configurations -The {bare_metal_first_ref} is configured by using configuration snippets. For more information about the configuration snippets, see xref:planning-the-new-deployment_planning[Planning the new deployment]. -//kgilliga: Note to self: This xref does not work in the preview. Need to revisit. +The {bare_metal_first_ref} is configured by using configuration snippets. For more information about the configuration snippets, see xref:service-configurations_planning[Service configurations]. {OpenStackPreviousInstaller} generally took care to not override the defaults of the {bare_metal}; however, as with any system of discrete configuration management attempting to provide a cross-version compatibility layer, some configuration was certainly defaulted in particular ways. For example, PXE Loader file names were often overridden at intermediate layers, and you will thus want to pay particular attention to the settings you choose to apply in your adopted deployment. The operator attempts to apply reasonable working default configuration, but if you override them with prior configuration, your experience may not be ideal or your new {bare_metal} will fail to operate. Similarly, additional configuration may be necessary, for example if your `ironic.conf` has additional hardware types enabled and in use. diff --git a/docs_user/modules/con_block-storage-service-requirements.adoc b/docs_user/modules/con_block-storage-service-requirements.adoc index ca562ca2c..9083a6d0d 100644 --- a/docs_user/modules/con_block-storage-service-requirements.adoc +++ b/docs_user/modules/con_block-storage-service-requirements.adoc @@ -4,7 +4,7 @@ The Block Storage service (cinder) has both local storage used by the service and {rhos_prev_long} ({OpenStackShort}) user requirements. -Local storage is used for example when downloading a glance image for the create volume from image operation, which can become considerable when having +Local storage is used, for example, when downloading a {image_service_first_ref} image for the create volume from image operation, which can become considerable when having concurrent operations and not using the Block Storage service volume cache. In the Operator deployed {OpenStackShort}, there is a way to configure the @@ -21,5 +21,5 @@ RBD, iSCSI, FC, NFS, NVMe-oF, etc. Once you know all the transport protocols that you are using, you can make sure that you are taking them into consideration when placing the Block Storage services (as mentioned above in the Node Roles section) and the right storage transport related binaries are running on the {OpenShift} nodes.
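A quick way to confirm this on a candidate node is to open a host shell and query the daemons and kernel modules that your transport protocols depend on. The node name and the exact units or modules to check are only examples here; they depend on your backends and on how the nodes were prepared:

----
# Open a host shell on one of the nodes you plan to run Block Storage services on
# (the node name is an example).
oc debug node/worker-0 -- chroot /host /bin/bash -c '
  systemctl is-active iscsid multipathd;        # iSCSI and multipath daemons
  lsmod | grep -E "nvme_fabrics|lpfc|qla2xxx"   # NVMe-oF or FC HBA drivers, if used
'
----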
-Detailed information about the specifics for each storage transport protocol can be found in the xref:adopting-the-block-storage-service_adopt-control-plane[Adopting the {block_storage}]. +Detailed information about the specifics for each storage transport protocol can be found in the xref:openshift-preparation-for-block-storage-adoption_adopting-block-storage[{OpenShift} preparation for {block_storage} adoption]. diff --git a/docs_user/modules/con_changes-to-cephFS-via-NFS.adoc b/docs_user/modules/con_changes-to-cephFS-via-NFS.adoc index 0e87c902f..a3820b883 100644 --- a/docs_user/modules/con_changes-to-cephFS-via-NFS.adoc +++ b/docs_user/modules/con_changes-to-cephFS-via-NFS.adoc @@ -2,12 +2,12 @@ = Changes to CephFS through NFS -If the {rhos_prev_long} ({OpenStackShort}) {rhos_prev_ver} deployment uses CephFS through NFS as a backend for {rhos_component_storage_file_first_ref}, there's a `ceph-nfs` service on the {OpenStackShort} controller nodes deployed and managed by {OpenStackPreviousInstaller}. This service cannot be directly imported into {rhos_long} {rhos_curr_ver}. On {rhos_acro} {rhos_curr_ver}, the {rhos_component_storage_file} only supports using a "clustered" NFS service that is directly managed on the Ceph cluster. So, adoption with this service will involve a data path disruption to existing NFS clients. The timing of this disruption can be controlled by the deployer independent of this adoption procedure. +If the {rhos_prev_long} ({OpenStackShort}) {rhos_prev_ver} deployment uses CephFS through NFS as a backend for {rhos_component_storage_file_first_ref}, there's a `ceph-nfs` service on the {OpenStackShort} controller nodes deployed and managed by {OpenStackPreviousInstaller}. This service cannot be directly imported into {rhos_long} {rhos_curr_ver}. On {rhos_acro} {rhos_curr_ver}, the {rhos_component_storage_file} only supports using a "clustered" NFS service that is directly managed on the {Ceph} cluster. So, adoption with this service will involve a data path disruption to existing NFS clients. The timing of this disruption can be controlled by the deployer independent of this adoption procedure. On {OpenStackShort} {rhos_prev_ver}, pacemaker controls the high availability of the `ceph-nfs` service. This service is assigned a Virtual IP (VIP) address that is also managed by pacemaker. The VIP is typically created on an isolated `StorageNFS` network. There are ordering and collocation constraints established between this VIP, `ceph-nfs` and the Shared File Systems service's share manager service on the controller nodes. Prior to adopting {rhos_component_storage_file}, pacemaker's ordering and collocation constraints must be adjusted to separate the share manager service. This establishes `ceph-nfs` with its VIP as an isolated, standalone NFS service that can be decommissioned at will after completing the {OpenStackShort} adoption. -Red Hat Ceph Storage 7.0 introduced a native `clustered Ceph NFS service`. This service has to be deployed on the Ceph cluster using the Ceph orchestrator prior to adopting the {rhos_component_storage_file}. This NFS service will eventually replace the standalone NFS service from {OpenStackShort} {rhos_prev_ver} in your deployment. When the {rhos_component_storage_file} is adopted into the {rhos_acro} {rhos_curr_ver} environment, it will establish all the existing +Red Hat Ceph Storage 7.0 introduced a native `clustered Ceph NFS service`. 
This service has to be deployed on the {Ceph} cluster using the Ceph orchestrator prior to adopting the {rhos_component_storage_file}. This NFS service will eventually replace the standalone NFS service from {OpenStackShort} {rhos_prev_ver} in your deployment. When the {rhos_component_storage_file} is adopted into the {rhos_acro} {rhos_curr_ver} environment, it will establish all the existing exports and client restrictions on the new clustered Ceph NFS service. Clients can continue to read and write data on their existing NFS shares, and are not affected until the old standalone NFS service is decommissioned. This switchover window allows clients to re-mount the same share from the new clustered Ceph NFS service during a scheduled downtime. diff --git a/docs_user/modules/con_comparing-configuration-files-between-deployments.adoc b/docs_user/modules/con_comparing-configuration-files-between-deployments.adoc index 74111980b..de3736e83 100644 --- a/docs_user/modules/con_comparing-configuration-files-between-deployments.adoc +++ b/docs_user/modules/con_comparing-configuration-files-between-deployments.adoc @@ -4,7 +4,7 @@ In order to help users to handle the configuration for the {OpenStackPreviousInstaller} and {rhos_prev_long} services the tool: https://github.com/openstack-k8s-operators/os-diff has been -develop to compare the configuration files between the {OpenStackPreviousInstaller} deployment and the next gen cloud. +developed to compare the configuration files between the {OpenStackPreviousInstaller} deployment and the {rhos_long} cloud. Make sure Golang is installed and configured on your env: //kgilliga: Do we want to link to "https://github.com/openstack-k8s-operators/os-diff" downstream? ---- diff --git a/docs_user/modules/con_node-roles.adoc b/docs_user/modules/con_node-roles.adoc index 6935a0187..ca6587459 100644 --- a/docs_user/modules/con_node-roles.adoc +++ b/docs_user/modules/con_node-roles.adoc @@ -32,7 +32,7 @@ high disk and network usage since many of its operations are in the data path the cinder-backup service which has high memory, network, and CPU (to compress data) requirements. -The Glance and Swift components are in the data path, as well as RabbitMQ and Galera services. +The {image_service_first_ref} and Swift components are in the data path, as well as RabbitMQ and Galera services. Given these requirements it may be preferable not to let these services wander all over your {OpenShiftShort} worker nodes with the possibility of impacting other @@ -41,7 +41,7 @@ want to pin down the heavy ones to a set of infrastructure nodes. There are also hardware restrictions to take into consideration, because if you are using a Fibre Channel (FC) Block Storage service backend you need the cinder-volume, -cinder-backup, and maybe even the glance (if it's using the Block Storage service as a backend) +cinder-backup, and maybe even the {image_service_first_ref} (if it's using the Block Storage service as a backend) services to run on a {OpenShiftShort} host that has an HBA.
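If you decide to pin the storage services to specific hosts, one possible way to express it, sketched here with illustrative node and label names rather than values taken from this guide, is to label the HBA-equipped nodes and reference that label from a `nodeSelector` in the relevant templates of the `OpenStackControlPlane` CR:

----
# Label the nodes that actually have the FC HBA hardware (names are examples).
oc label node worker-3 fc-hba.example.org/present=true

# Sketch: constrain a cinder-volume backend to those nodes.
oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  cinder:
    template:
      cinderVolumes:
        fc-backend:
          nodeSelector:
            fc-hba.example.org/present: "true"
'
----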
The {OpenStackShort} Operators allow a great deal of flexibility on where to run the diff --git a/docs_user/modules/con_openshift-preparation-for-block-storage-adoption.adoc b/docs_user/modules/con_openshift-preparation-for-block-storage-adoption.adoc index d5a767142..245e7e016 100644 --- a/docs_user/modules/con_openshift-preparation-for-block-storage-adoption.adoc +++ b/docs_user/modules/con_openshift-preparation-for-block-storage-adoption.adoc @@ -2,8 +2,7 @@ = {OpenShift} preparation for {block_storage} adoption -As explained in xref:planning-the-new-deployment_planning[Planning the new deployment], before deploying {rhos_prev_long} {OpenStackShort} in {OpenShift}, you must ensure that the networks are ready, that you have decided the node selection, and also make sure any necessary changes to the {OpenShiftShort} nodes have been made. For {block_storage_first_ref} volume and backup services all these 3 must be carefully considered. -//kgilliga: Note to self: xref for planning the new deployment does not work in preview. need to revisit. +Before deploying {rhos_prev_long} ({OpenStackShort}) in {OpenShift}, you must ensure that the networks are ready, that you have decided on the node selection, and that any necessary changes to the {OpenShiftShort} nodes have been made. For {block_storage_first_ref} volume and backup services, all three of these must be carefully considered. Node Selection:: You might need, or want, to restrict the {OpenShiftShort} nodes where {block_storage} volume and @@ -48,7 +47,7 @@ CPU intensive. This may be a concern for the {OpenShiftShort} human operators, and they may want to use the `nodeSelector` to prevent these services from interfering with their other {OpenShiftShort} workloads. For more information about node selection, see xref:about-node-selector_planning[About node selector]. + -When selecting the nodes where the {block_storage} volume is going to run remember that {block_storage}-volume may also use local storage when downloading a glance image for the create volume from image operation, and it can require a considerable +When selecting the nodes where the {block_storage} volume is going to run, remember that {block_storage}-volume may also use local storage when downloading a {image_service_first_ref} image for the create volume from image operation, and it can require a considerable amount of space when having concurrent operations and not using {block_storage} volume cache. + diff --git a/docs_user/modules/openstack-troubleshooting.adoc b/docs_user/modules/openstack-troubleshooting.adoc index 44bcabf6d..da3989853 100644 --- a/docs_user/modules/openstack-troubleshooting.adoc +++ b/docs_user/modules/openstack-troubleshooting.adoc @@ -1,7 +1,6 @@ [id="troubleshooting-adoption_{context}"] //:context: troubleshooting-adoption -//kgilliga: This module might be converted to an assembly. = Troubleshooting adoption diff --git a/docs_user/modules/proc_adopting-autoscaling.adoc b/docs_user/modules/proc_adopting-autoscaling.adoc index 3d3af667f..686320d39 100644 --- a/docs_user/modules/proc_adopting-autoscaling.adoc +++ b/docs_user/modules/proc_adopting-autoscaling.adoc @@ -1,14 +1,12 @@ [id="adopting-autoscaling_{context}"] -//Check xref contexts. - = Adopting autoscaling Adopting autoscaling means that an existing `OpenStackControlPlane` custom resource (CR), where Aodh services are supposed to be disabled, should be patched to start the service with the configuration parameters provided by the source environment. This guide also assumes that: -.
A `TripleO` environment (the source Cloud) is running on one side; +. A {OpenStackPreviousInstaller} environment (the source Cloud) is running on one side; . A `SNO` / `CodeReadyContainers` is running on the other side. .Prerequisites diff --git a/docs_user/modules/proc_adopting-compute-services-to-the-data-plane.adoc b/docs_user/modules/proc_adopting-compute-services-to-the-data-plane.adoc index ff35303d9..61bc5e062 100644 --- a/docs_user/modules/proc_adopting-compute-services-to-the-data-plane.adoc +++ b/docs_user/modules/proc_adopting-compute-services-to-the-data-plane.adoc @@ -75,7 +75,7 @@ spec: EOF ---- -* When `neutron-sriov-nic-agent` is running on the existing Compute nodes, check the physical device mappings and ensure that they match the values that are defined in the `OpenStackDataPlaneNodeSet` custom resource (CR). For more information, see xref:reviewing-the-openstack-control-plane-configuration_adopt-control-plane[Reviewing the {rhos_prev_long} control plane configuration]. +* When `neutron-sriov-nic-agent` is running on the existing Compute nodes, check the physical device mappings and ensure that they match the values that are defined in the `OpenStackDataPlaneNodeSet` custom resource (CR). For more information, see xref:pulling-configuration-from-tripleo-deployment_reviewing-configuration[Pulling the configuration from a {OpenStackPreviousInstaller} deployment]. * Define the shell variables necessary to run the script that runs the fast-forward upgrade. Omit setting `CEPH_FSID`, if the local storage backend is going to be configured by Nova for Libvirt. The storage backend cannot be changed during adoption, and must match the one used on the source cloud: ---- @@ -122,7 +122,7 @@ done ---- ifeval::["{build}" != "downstream"] . Create a https://kubernetes.io/docs/concepts/configuration/secret/#ssh-authentication-secrets[ssh authentication secret] for the data plane nodes: -//kgilliga: We probably shouldn't link to an external site. I need to check if we will document this in Red Hat docs. +//kgilliga:I need to check if we will document this in Red Hat docs. endif::[] ifeval::["{build}" != "upstream"] . Create a ssh authentication secret for the data plane nodes: diff --git a/docs_user/modules/proc_adopting-image-service-with-ceph-backend.adoc b/docs_user/modules/proc_adopting-image-service-with-ceph-backend.adoc index aa3028fb1..1470f76b4 100644 --- a/docs_user/modules/proc_adopting-image-service-with-ceph-backend.adoc +++ b/docs_user/modules/proc_adopting-image-service-with-ceph-backend.adoc @@ -52,7 +52,7 @@ EOF [NOTE] ==== If you have previously backed up your {OpenStackShort} services configuration file from the old environment, you can use os-diff to compare and make sure the configuration is correct. -For more information, see xref:reviewing-the-openstack-control-plane-configuration_adopt-control-plane[Reviewing the {rhos_prev_long} control plane configuration]. +For more information, see xref:pulling-configuration-from-tripleo-deployment_reviewing-configuration[Pulling the configuration from a {OpenStackPreviousInstaller} deployment]. 
---- pushd os-diff diff --git a/docs_user/modules/proc_adopting-image-service-with-nfs-ganesha-backend.adoc b/docs_user/modules/proc_adopting-image-service-with-nfs-ganesha-backend.adoc index 5328e0b77..42a2d271e 100644 --- a/docs_user/modules/proc_adopting-image-service-with-nfs-ganesha-backend.adoc +++ b/docs_user/modules/proc_adopting-image-service-with-nfs-ganesha-backend.adoc @@ -12,7 +12,7 @@ Adopt the {image_service_first_ref} that you deployed with an NFS Ganesha backen * Previous Adoption steps completed. Notably, MariaDB, Keystone and Barbican should be already adopted. * In the source cloud, verify the NFS Ganesha parameters used by the overcloud to configure the {image_service} backend. -In particular, find among the TripleO heat templates the following variables that are usually an override of the default content provided by +In particular, find among the {OpenStackPreviousInstaller} heat templates the following variables that are usually an override of the default content provided by `/usr/share/openstack-tripleo-heat-templates/environments/storage/glance-nfs.yaml`[glance-nfs.yaml]: + ---- diff --git a/docs_user/modules/proc_adopting-telemetry-services.adoc b/docs_user/modules/proc_adopting-telemetry-services.adoc index 3913dabfe..2e7189d8e 100644 --- a/docs_user/modules/proc_adopting-telemetry-services.adoc +++ b/docs_user/modules/proc_adopting-telemetry-services.adoc @@ -6,7 +6,7 @@ Adopting Telemetry means that an existing `OpenStackControlPlane` custom resourc This guide also assumes that: -. A `TripleO` environment (the source Cloud) is running on one side; +. A {OpenStackPreviousInstaller} environment (the source Cloud) is running on one side; . A `SNO` / `CodeReadyContainers` is running on the other side. .Prerequisites diff --git a/docs_user/modules/proc_adopting-the-compute-service.adoc b/docs_user/modules/proc_adopting-the-compute-service.adoc index b9d5f38f2..ad8bce155 100644 --- a/docs_user/modules/proc_adopting-the-compute-service.adoc +++ b/docs_user/modules/proc_adopting-the-compute-service.adoc @@ -1,7 +1,5 @@ [id="adopting-the-compute-service_{context}"] -//kgilliga: Note to self: "Adopting the data plane" xrefs do not work. Need to revisit. - = Adopting the {compute_service} [NOTE] @@ -21,8 +19,8 @@ must already be imported into the control plane MariaDB; ** the xref:adopting-the-image-service_adopt-control-plane[Adopting the Image service] needs to be imported; ** the xref:migrating-ovn-data_migrating-databases[Migrating OVN data] need to be imported; ** the xref:adopting-the-networking-service_adopt-control-plane[Adopting the Networking service] needs to be imported; - ** the xref:adopting-the-bare-metal-provisioning-service_{context}[Adopting the Openstack Baremetal service] needs to be imported; -//kgilliga:Need to revist this xref. Might rewrite this section anyway. +** the {bare_metal} needs to be imported; +//kgilliga:I removed the link because it did not work. I might rewrite this section anyway. ** Required services specific topology xref:proc_retrieving-services-topology-specific-configuration_adopt-control-plane[Retrieving services from a topology specific-configuration]. ** {rhos_prev_long} services have been stopped. For more information, see xref:stopping-openstack-services_migrating-databases[Stopping {rhos_prev_long} services]. @@ -132,7 +130,7 @@ oc wait --for condition=Ready --timeout=300s Nova/nova ---- + The local Conductor services will be started for each cell, while the superconductor runs in `cell0`. 
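If you want to watch the conductors come up, a simple illustrative check (the `openstack` namespace and the pod naming are assumptions based on the default deployment layout) is:

----
# One conductor per cell is expected, plus the superconductor for cell0.
oc get pods -n openstack | grep nova | grep conductor

# The Nova CR should report Ready, matching the oc wait command above.
oc get nova/nova -n openstack
----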
-Note that `disable_compute_service_check_for_ffu` is mandatory for all imported Nova services, until the external data plane is imported, and until Nova Compute services fast-forward upgraded. For more information, see xref:adopting-data-plane_data-plane[Adopting the data plane]. +Note that `disable_compute_service_check_for_ffu` is mandatory for all imported Nova services, until the external data plane is imported, and until the Nova Compute services are fast-forward upgraded. For more information, see xref:adopting-compute-services-to-the-data-plane_data-plane[Adopting Compute services to the {rhos_acro} data plane] and xref:performing-a-fast-forward-upgrade-on-compute-services_data-plane[Performing a fast-forward upgrade on Compute services]. .Verification @@ -161,4 +159,4 @@ The expected changes to happen: ** RabbitMQ transport URL no longer uses `guest`. [NOTE] -At this point, the {compute_service} control plane services do not control the existing {compute_service} Compute workloads. The control plane manages the data plane only after the data adoption process is successfully completed. For more information, see xref:adopting-data-plane_data-plane[Adopting the data plane]. +At this point, the {compute_service} control plane services do not control the existing {compute_service} Compute workloads. The control plane manages the data plane only after the data adoption process is successfully completed. For more information, see xref:adopting-compute-services-to-the-data-plane_data-plane[Adopting Compute services to the {rhos_acro} data plane]. diff --git a/docs_user/modules/proc_adopting-the-networking-service.adoc b/docs_user/modules/proc_adopting-the-networking-service.adoc index 5fecd3b26..ed90f5007 100644 --- a/docs_user/modules/proc_adopting-the-networking-service.adoc +++ b/docs_user/modules/proc_adopting-the-networking-service.adoc @@ -10,7 +10,7 @@ When the procedure is over, the expectation is to see the `NeutronAPI` service i This guide also assumes that: -. A `TripleO` environment (the source Cloud) is running on one side; +. A {OpenStackPreviousInstaller} environment (the source Cloud) is running on one side; . A `SNO` / `CodeReadyContainers` is running on the other side. .Prerequisites diff --git a/docs_user/modules/proc_adopting-the-object-storage-service.adoc b/docs_user/modules/proc_adopting-the-object-storage-service.adoc index ae0485614..6ab6e2b6f 100644 --- a/docs_user/modules/proc_adopting-the-object-storage-service.adoc +++ b/docs_user/modules/proc_adopting-the-object-storage-service.adoc @@ -1,5 +1,5 @@ [id="adopting-the-object-storage-service_{context}"] -//check xref + = Adopting the Object Storage service This section only applies if you are using OpenStack Swift as {object_storage_first_ref}. If you are using the Object Storage API of Ceph RGW this section can be skipped. @@ -135,4 +135,4 @@ Hello World! [NOTE] At this point data is still stored on the previously existing nodes. For more information about migrating the actual data from the old -to the new deployment, see xref:migrating-the-object-storage-service_migrate-object-storage-service[Migrating the {object_storage_first_ref} to {rhos_long} nodes]. +to the new deployment, see xref:migrating-object-storage-data-to-rhoso-nodes_migrate-object-storage-service[Migrating the {object_storage_first_ref} data from {OpenStackShort} to {rhos_long} nodes].
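As an optional extra check after the Object Storage adoption, purely illustrative and using standard client commands rather than anything specific to this procedure, you can confirm that account and container data created before the adoption is still visible through the adopted proxy:

----
# Account-level statistics served by the adopted swift-proxy.
openstack object store account show

# Containers and objects created before the adoption should still be listed.
openstack container list
openstack object list <container-name>
----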
diff --git a/docs_user/modules/proc_adopting-the-orchestration-service.adoc b/docs_user/modules/proc_adopting-the-orchestration-service.adoc index 67a953d98..d6f77b2ee 100644 --- a/docs_user/modules/proc_adopting-the-orchestration-service.adoc +++ b/docs_user/modules/proc_adopting-the-orchestration-service.adoc @@ -25,7 +25,6 @@ such as {networking_first_ref}, {compute_service_first_ref}, {object_storage_fir trying to adopt {orchestration}. .Procedure -//kgilliga: I get an error when I click this link. Do we need it in the downstream docs? ifeval::["{build}" != "downstream"] As already done for https://github.com/openstack-k8s-operators/data-plane-adoption/blob/main/keystone_adoption.md[Keystone], the Heat Adoption follows a similar pattern. endif::[] diff --git a/docs_user/modules/proc_adopting-the-placement-service.adoc b/docs_user/modules/proc_adopting-the-placement-service.adoc index f3a8614ad..4b3ac8d41 100644 --- a/docs_user/modules/proc_adopting-the-placement-service.adoc +++ b/docs_user/modules/proc_adopting-the-placement-service.adoc @@ -1,7 +1,5 @@ [id="adopting-the-placement-service_{context}"] -//Check xref contexts. - = Adopting the Placement service .Prerequisites diff --git a/docs_user/modules/proc_configuring-a-ceph-backend.adoc b/docs_user/modules/proc_configuring-a-ceph-backend.adoc index 484e72357..1ff7090b9 100644 --- a/docs_user/modules/proc_configuring-a-ceph-backend.adoc +++ b/docs_user/modules/proc_configuring-a-ceph-backend.adoc @@ -3,10 +3,10 @@ = Configuring a Ceph backend If the original deployment uses a Ceph storage backend for any service -(e.g. Glance, Cinder, Nova, Manila), the same backend must be used in the +(e.g. {image_service_first_ref}, {block_storage_first_ref}, {compute_service_first_ref}, {rhos_component_storage_file_first_ref}), the same backend must be used in the adopted deployment and custom resources (CRs) must be configured accordingly. -If you use {rhos_component_storage_file_first_ref}, on TripleO environments, the CephFS driver in {rhos_component_storage_file} is configured to use +If you use {rhos_component_storage_file_first_ref}, on {OpenStackPreviousInstaller} environments, the CephFS driver in {rhos_component_storage_file} is configured to use its own keypair. For convenience, modify the `openstack` user so that you can use it across all {rhos_prev_long} services. diff --git a/docs_user/modules/proc_deploying-backend-services.adoc b/docs_user/modules/proc_deploying-backend-services.adoc index 4bdea208e..ff662155a 100644 --- a/docs_user/modules/proc_deploying-backend-services.adoc +++ b/docs_user/modules/proc_deploying-backend-services.adoc @@ -51,7 +51,7 @@ ADMIN_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' AdminPassword: Database passwords can differ in the control plane environment, but synchronizing the service account passwords is a required step. + -For example, in developer environments with TripleO Standalone, the +For example, in developer environments with {OpenStackPreviousInstaller} Standalone, the passwords can be extracted like this: + ---- diff --git a/docs_user/modules/proc_deploying-file-systems-service-control-plane.adoc b/docs_user/modules/proc_deploying-file-systems-service-control-plane.adoc index 11db73eb0..c48262e76 100644 --- a/docs_user/modules/proc_deploying-file-systems-service-control-plane.adoc +++ b/docs_user/modules/proc_deploying-file-systems-service-control-plane.adoc @@ -104,7 +104,6 @@ backend driver needs to use its own instance of the `manila-share` service. 
* If a storage backend driver needs a custom container image, find it on the https://catalog.redhat.com/software/containers/search?gs&q=manila[RHOSP Ecosystem Catalog] -//kgilliga: Should this link to the RH Ecosystem Catalog appear downstream only? and set `manila: template: manilaShares: : containerImage` value. The following example illustrates multiple storage backend drivers, using custom container images. diff --git a/docs_user/modules/proc_deploying-the-bare-metal-provisioning-service.adoc b/docs_user/modules/proc_deploying-the-bare-metal-provisioning-service.adoc index bca6edbc4..81ef7c9e6 100644 --- a/docs_user/modules/proc_deploying-the-bare-metal-provisioning-service.adoc +++ b/docs_user/modules/proc_deploying-the-bare-metal-provisioning-service.adoc @@ -14,7 +14,7 @@ By default, newer versions of the {bare_metal} contain a more restrictive access * Previous Adoption steps completed. Notably, the service databases must already be imported into the control plane MariaDB, {identity_service_first_ref}, {networking_first_ref}, {image_service_first_ref}, and {block_storage_first_ref} should be in an operational state. Ideally, {compute_service_first_ref} has not been adopted yet if {bare_metal} is leveraged in a Bare Metal as a Service configuration. -* As explained in xref:planning-the-new-deployment_planning[Planning the new deployment], before deploying {rhos_prev_long} in {rhos_long}, you must ensure that the networks are ready, that you have decided the node selection, and also make sure any necessary changes to the {rhos_acro} nodes have been made. For {bare_metal} conductor services, it is necessary that the services be able to reach Baseboard Management Controllers of hardware which is configured to be managed by {bare_metal}. If this hardware is unreachable, the nodes may enter "maintenance" state and be unable to be acted upon until connectivity is restored at a later point in time. +* Before deploying {rhos_prev_long} in {rhos_long}, you must ensure that the networks are ready, that you have decided on the node selection, and that any necessary changes to the {rhos_acro} nodes have been made. For {bare_metal} conductor services, it is necessary that the services be able to reach the Baseboard Management Controllers of the hardware that is configured to be managed by {bare_metal}. If this hardware is unreachable, the nodes may enter "maintenance" state and be unable to be acted upon until connectivity is restored. * You need the contents of the `ironic.conf` file. Download the file so that you can access it locally: + diff --git a/docs_user/modules/proc_deploying-the-block-storage-services.adoc b/docs_user/modules/proc_deploying-the-block-storage-services.adoc index 0d2972e4b..27f62cd94 100644 --- a/docs_user/modules/proc_deploying-the-block-storage-services.adoc +++ b/docs_user/modules/proc_deploying-the-block-storage-services.adoc @@ -3,7 +3,7 @@ = Deploying the Block Storage services Assuming you have already stopped {block_storage_first_ref} services, prepared the {OpenShift} nodes, -deployed the {rhos_prev_long} {OpenStackShort} operators and a bare {OpenStackShort} manifest, and migrated the +deployed the {rhos_prev_long} ({OpenStackShort}) operators and a bare {OpenStackShort} manifest, and migrated the database, and prepared the patch manifest with the {block_storage} configuration, you must apply the patch and wait for the operator to apply the changes and deploy the Block Storage services.
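In practice, the apply-and-wait step that this paragraph describes usually looks like the following sketch; the patch file name, the timeout, and the `service=cinder` label selector are assumptions, not values mandated by this procedure:

----
# Apply the prepared Block Storage configuration patch to the control plane CR.
oc patch openstackcontrolplane openstack --type=merge --patch-file=cinder-adoption-patch.yaml

# Wait until the operator reports the control plane as Ready, then check the cinder pods.
oc wait --for=condition=Ready --timeout=600s openstackcontrolplane/openstack
oc get pods -l service=cinder
----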
@@ -185,8 +185,7 @@ openstack volume backup list To confirm that the configuration is working, the following basic operations are recommended: -. Create a volume from an image to check that the connection to glance is -working. +. Create a volume from an image to check that the connection to {image_service_first_ref} is working. + ---- openstack volume create --image cirros --bootable --size 1 disk_new diff --git a/docs_user/modules/proc_migrating-ceph-mds.adoc b/docs_user/modules/proc_migrating-ceph-mds.adoc index b31ef651d..cd35abed0 100644 --- a/docs_user/modules/proc_migrating-ceph-mds.adoc +++ b/docs_user/modules/proc_migrating-ceph-mds.adoc @@ -207,7 +207,6 @@ ifeval::["{build}" != "downstream"] [NOTE] It is possible to elect as "active" a dedicated MDS for a particular file system. To configure this preference, `CephFS` provides a configuration option for MDS called `mds_join_fs` which enforces this affinity. When failing over MDS daemons, a cluster’s monitors will prefer standby daemons with `mds_join_fs` equal to the file system name with the failed rank. If no standby exists with `mds_join_fs` equal to the file system name, it will choose an unqualified standby as a replacement. -//kgilliga: We might want to discuss what info is really necessary downstream. endif::[] + ---- diff --git a/docs_user/modules/proc_migrating-databases-to-mariadb-instances.adoc b/docs_user/modules/proc_migrating-databases-to-mariadb-instances.adoc index 4c54b714e..042183010 100644 --- a/docs_user/modules/proc_migrating-databases-to-mariadb-instances.adoc +++ b/docs_user/modules/proc_migrating-databases-to-mariadb-instances.adoc @@ -278,4 +278,4 @@ oc delete pod mariadb-copy-data oc delete pvc mariadb-data ---- For more information, see https://learn.redhat.com/t5/DO280-Red-Hat-OpenShift/About-pod-security-standards-and-warnings/m-p/32502[About pod security standards and warnings]. -//kgilliga: Should this link to "About pod security standards and warnings" appear downstream only? + diff --git a/docs_user/modules/proc_migrating-ovn-data.adoc b/docs_user/modules/proc_migrating-ovn-data.adoc index f2671a221..b9e1b70e4 100644 --- a/docs_user/modules/proc_migrating-ovn-data.adoc +++ b/docs_user/modules/proc_migrating-ovn-data.adoc @@ -188,7 +188,7 @@ oc exec ovn-copy-data -- bash -c "ovsdb-client restore tcp:$PODIFIED_OVSDB_NB_IP oc exec ovn-copy-data -- bash -c "ovsdb-client restore tcp:$PODIFIED_OVSDB_SB_IP:6642 < /backup/ovs-sb.db" ---- -. Restore database backup to podified OVN database servers on a TLS everywhere environment. +. Restore database backup to control plane OVN database servers on a TLS everywhere environment. + ---- oc exec ovn-copy-data -- bash -c "ovsdb-client restore --ca-cert=/etc/pki/tls/misc/ca.crt --private-key=/etc/pki/tls/misc/tls.key --certificate=/etc/pki/tls/misc/tls.crt ssl:$PODIFIED_OVSDB_NB_IP:6641 < /backup/ovs-nb.db" diff --git a/docs_user/modules/proc_migrating-the-rgw-backends.adoc b/docs_user/modules/proc_migrating-the-rgw-backends.adoc index 9ac5e8887..b0e6731e1 100644 --- a/docs_user/modules/proc_migrating-the-rgw-backends.adoc +++ b/docs_user/modules/proc_migrating-the-rgw-backends.adoc @@ -49,7 +49,7 @@ endif::[] ifeval::["{build}" != "upstream"] . During the overcloud deployment, RGW is applied at step 2 (external_deployment_steps), and a cephadm compatible spec is generated in -`/home/ceph-admin/specs/rgw` from director. Find the RGW spec: +`/home/ceph-admin/specs/rgw` from {OpenStackPreviousInstaller}. 
Find the RGW spec: endif::[] + ---- diff --git a/docs_user/modules/proc_migrating-tls-everywhere.adoc b/docs_user/modules/proc_migrating-tls-everywhere.adoc index c67d05670..8e2fd4bf6 100644 --- a/docs_user/modules/proc_migrating-tls-everywhere.adoc +++ b/docs_user/modules/proc_migrating-tls-everywhere.adoc @@ -78,7 +78,6 @@ The item you need to consider is the first one: `caSigningCert cert-pki-ca`. . Export the certificate and key from the `/etc/pki/pki-tomcat/alias` directory: -//kgilliga: SMEs, Please confirm that this step is accurate. ^ + ---- $IPA_SSH pk12util -o /tmp/freeipa.p12 -n 'caSigningCert\ cert-pki-ca' -d /etc/pki/pki-tomcat/alias -k /etc/pki/pki-tomcat/alias/pwdfile.txt -w /etc/pki/pki-tomcat/alias/pwdfile.txt diff --git a/docs_user/modules/proc_preparing-block-storage-service-by-customizing-configuration.adoc b/docs_user/modules/proc_preparing-block-storage-service-by-customizing-configuration.adoc index 34504f1f2..857c71796 100644 --- a/docs_user/modules/proc_preparing-block-storage-service-by-customizing-configuration.adoc +++ b/docs_user/modules/proc_preparing-block-storage-service-by-customizing-configuration.adoc @@ -33,7 +33,6 @@ image. If they do, find the location of the image in the vendor's instruction available in the https://catalog.redhat.com/software/search?target_platforms=Red%20Hat%20OpenStack%20Platform&p=1&functionalCategories=Data%20storage[{rhos_prev_long} {block_storage} ecosystem page] //kgilliga: Shouldn't this link be this instead? https://catalog.redhat.com/software/search?target_platforms=Red%20Hat%20OpenStack%20Platform&p=1&functionalCategories=Data%20storage&certified_plugin_types=Block%20Storage%20(Cinder) -//Also, should we link to the Red Hat catalog downstream only? and add it under the specific's driver section using the `containerImage` key. The following example shows a CRD for a Pure Storage array with a certified driver: + diff --git a/docs_user/modules/proc_relocating-one-instance-of-a-monitoring-stack-to-migrate-daemons-to-target-nodes.adoc b/docs_user/modules/proc_relocating-one-instance-of-a-monitoring-stack-to-migrate-daemons-to-target-nodes.adoc index 18c290426..861970ae3 100644 --- a/docs_user/modules/proc_relocating-one-instance-of-a-monitoring-stack-to-migrate-daemons-to-target-nodes.adoc +++ b/docs_user/modules/proc_relocating-one-instance-of-a-monitoring-stack-to-migrate-daemons-to-target-nodes.adoc @@ -104,6 +104,7 @@ With the procedure described above we lose High Availability: the monitoring stack daemons have no VIP and haproxy anymore; Node exporters are still running on all the nodes: instead of using labels we keep the current approach as we want to not reduce the monitoring space covered. + //kgilliga: What does "the procedure described above" refer to? . Update the Ceph Dashboard Manager configuration. An important aspect that should be considered at this point is to replace and diff --git a/docs_user/modules/proc_retrieving-services-topology-specific-configuration.adoc b/docs_user/modules/proc_retrieving-services-topology-specific-configuration.adoc index 894b4d28b..f6de69cb7 100644 --- a/docs_user/modules/proc_retrieving-services-topology-specific-configuration.adoc +++ b/docs_user/modules/proc_retrieving-services-topology-specific-configuration.adoc @@ -2,8 +2,6 @@ = Retrieving services from a topology specific-configuration -//Check xrefs - .Prerequisites * Define the following shell variables. The values that are used are examples. 
Replace these example values with values that are correct for your environment: @@ -98,4 +96,4 @@ podman run -i --rm --userns=keep-id -u $UID $MARIADB_IMAGE mysql \ "select host, configurations from agents where agents.binary='neutron-sriov-nic-agent';" ---- -This configuration will be required later, during the xref:adopting-dataplane_{context}[Data Plane Adoption]. +This configuration will be required later, during the data plane adoption. diff --git a/docs_user/modules/proc_stopping-openstack-services.adoc b/docs_user/modules/proc_stopping-openstack-services.adoc index 387cd60b1..37a8415b2 100644 --- a/docs_user/modules/proc_stopping-openstack-services.adoc +++ b/docs_user/modules/proc_stopping-openstack-services.adoc @@ -1,7 +1,5 @@ [id="stopping-openstack-services_{context}"] -//Check xref context. - = Stopping {rhos_prev_long} services Before you start the adoption, you must stop the {rhos_prev_long} ({OpenStackShort}) services. diff --git a/docs_user/modules/proc_verifying-the-image-service-adoption.adoc b/docs_user/modules/proc_verifying-the-image-service-adoption.adoc index 5b2db9acd..fa012a86c 100644 --- a/docs_user/modules/proc_verifying-the-image-service-adoption.adoc +++ b/docs_user/modules/proc_verifying-the-image-service-adoption.adoc @@ -6,7 +6,7 @@ Verify that you successfully adopted your {image_service_first_ref} to the {rhos .Procedure -. Test the glance service from the {rhos_prev_long} CLI. You can compare and make sure the configuration has been correctly applied to the glance pods: +. Test the {image_service_first_ref} from the {rhos_prev_long} CLI. You can compare and make sure the configuration has been correctly applied to the {image_service} pods: + ---- ./os-diff cdiff --service glance -c /etc/glance/glance.conf.d/02-config.conf -o glance_patch.yaml --frompod -p glance-api