Merge pull request #445 from klgill/BetaDocs-AttempttoFixXrefs
Beta docs attempt to fix xrefs
klgill authored May 8, 2024
2 parents abab6af + 3258ccd commit fbc2418
Showing 35 changed files with 40 additions and 62 deletions.
5 changes: 2 additions & 3 deletions docs_user/assemblies/assembly_adopting-the-image-service.adoc
@@ -1,7 +1,6 @@
[id="adopting-the-image-service_{context}"]

:context: image-service
-//Check xref context."Reviewing the OpenStack configuration" xref does not work.

= Adopting the {image_service}

@@ -14,9 +13,9 @@
up and running: the {identity_service} endpoints are updated and the same backend

This guide also assumes that:

-* A `TripleO` environment (the source Cloud) is running on one side.
+* A {OpenStackPreviousInstaller} environment (the source Cloud) is running on one side.
* A `SNO` / `CodeReadyContainers` is running on the other side.
-* (optional) An internal/external `Ceph` cluster is reachable by both `crc` and `TripleO`.
+* (optional) An internal/external `Ceph` cluster is reachable by both `crc` and {OpenStackPreviousInstaller}.

ifeval::["{build}" != "downstream"]
//This link goes to a 404. Do we need this text downstream?
@@ -12,7 +12,7 @@
Before starting this process, a few considerations are required:

* There’s no need to migrate node exporters: these daemons are deployed across
the nodes that are part of the {CephCluster} cluster (placement is ‘*’), and metrics for the Controller nodes will be lost once those nodes are no longer part of the {CephCluster} cluster.
-* Each monitoring stack component is bound to specific ports that TripleO is
+* Each monitoring stack component is bound to specific ports that {OpenStackPreviousInstaller} is
supposed to open beforehand; make sure to double-check that the firewall rules are
in place and the ports are open for a given monitoring stack service.

@@ -24,7 +24,7 @@
reducing the placement with `count: 1` is a reasonable solution and allows you to
successfully migrate the existing daemons in an HCI (or HW-limited) scenario
without impacting other services.
However, it is still possible to put in place a dedicated HA solution and
-realize a component that is consistent with the TripleO model to reach HA.
+realize a component that is consistent with the {OpenStackPreviousInstaller} model to reach HA.
Building and deploying such an HA model is out of scope for this procedure.
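
A minimal sketch of such a `count: 1` placement, expressed as a Ceph orchestrator spec (the daemon type and the apply command are illustrative, not taken from this procedure):

----
service_type: grafana
placement:
  count: 1
# applied with: ceph orch apply -i <spec-file>
----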

include::../modules/proc_migrating-existing-daemons-to-target-nodes.adoc[leveloffset=+1]
@@ -6,8 +6,7 @@

The {block_storage_first_ref} is configured using
configuration snippets instead of using configuration parameters
-defined by the installer. For more information, see xref:planning-the-new-deployment_planning[Planning the new deployment].
-//kgilliga: Note to self: This xref does not work in the preview. Need to revisit.
+defined by the installer. For more information, see xref:service-configurations_planning[Service configurations].

The recommended way to deploy {block_storage} volume backends has changed to remove old
limitations, add flexibility, and improve operations.
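
As a minimal sketch of what such a snippet can look like, assuming the `customServiceConfig` field of the `OpenStackControlPlane` CR (the backend name and options are placeholders):

----
spec:
  cinder:
    template:
      cinderVolumes:
        lvm-backend:
          customServiceConfig: |
            [lvm]
            volume_backend_name = lvm
            volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
----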
@@ -1,11 +1,8 @@
[id="con_bare-metal-provisioning-service-configurations_{context}"]

-//Check xrefs
-
= Bare Metal Provisioning service configurations

-The {bare_metal_first_ref} is configured by using configuration snippets. For more information about the configuration snippets, see xref:planning-the-new-deployment_planning[Planning the new deployment].
-//kgilliga: Note to self: This xref does not work in the preview. Need to revisit.
+The {bare_metal_first_ref} is configured by using configuration snippets. For more information about the configuration snippets, see xref:service-configurations_planning[Service configurations].

{OpenStackPreviousInstaller} generally took care not to override the defaults of the {bare_metal}; however, as with any system of discrete configuration management attempting to provide a cross-version compatibility layer, some configuration was certainly defaulted in particular ways. For example, PXE Loader file names were often overridden at intermediate layers, and you will thus want to pay particular attention to the settings you choose to apply in your adopted deployment. The operator attempts to apply reasonable working defaults, but if you override them with prior configuration, your experience may not be ideal or your new {bare_metal} will fail to operate. Similarly, additional configuration may be necessary, for example
if your `ironic.conf` has additional hardware types enabled and in use.
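
For illustration only, such a snippet might carry settings like the following (the values are placeholders, not recommendations from this document):

----
[DEFAULT]
enabled_hardware_types = ipmi,redfish
enabled_boot_interfaces = ipxe,pxe
----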
4 changes: 2 additions & 2 deletions docs_user/modules/con_block-storage-service-requirements.adoc
@@ -4,7 +4,7 @@

The Block Storage service (cinder) has both local storage used by the service and {rhos_prev_long} ({OpenStackShort}) user requirements.

-Local storage is used for example when downloading a glance image for the create volume from image operation, which can become considerable when having
+Local storage is used, for example, when downloading a {image_service_first_ref} image for the create volume from image operation, which can become considerable when having
concurrent operations and not using the Block Storage service volume cache.

In the Operator-deployed {OpenStackShort}, there is a way to configure the
@@ -21,5 +21,5 @@
RBD, iSCSI, FC, NFS, NVMe-oF, etc.
Once you know all the transport protocols that you are using, you can make
sure that you take them into consideration when placing the Block Storage services (as mentioned above in the Node Roles section) and that the right storage transport-related binaries are running on the {OpenShiftShort} nodes.
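
As a sketch of what preparing those binaries can involve, assuming iSCSI is one of the protocols in use (the `MachineConfig` name is a placeholder):

----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-enable-iscsid
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
      - enabled: true
        name: iscsid.service
----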

-Detailed information about the specifics for each storage transport protocol can be found in the xref:adopting-the-block-storage-service_adopt-control-plane[Adopting the {block_storage}].
+Detailed information about the specifics for each storage transport protocol can be found in the xref:openshift-preparation-for-block-storage-adoption_adopting-block-storage[{OpenShift} preparation for {block_storage} adoption].

4 changes: 2 additions & 2 deletions docs_user/modules/con_changes-to-cephFS-via-NFS.adoc
@@ -2,12 +2,12 @@

= Changes to CephFS through NFS

-If the {rhos_prev_long} ({OpenStackShort}) {rhos_prev_ver} deployment uses CephFS through NFS as a backend for {rhos_component_storage_file_first_ref}, there's a `ceph-nfs` service on the {OpenStackShort} controller nodes deployed and managed by {OpenStackPreviousInstaller}. This service cannot be directly imported into {rhos_long} {rhos_curr_ver}. On {rhos_acro} {rhos_curr_ver}, the {rhos_component_storage_file} only supports using a "clustered" NFS service that is directly managed on the Ceph cluster. So, adoption with this service will involve a data path disruption to existing NFS clients. The timing of this disruption can be controlled by the deployer independent of this adoption procedure.
+If the {rhos_prev_long} ({OpenStackShort}) {rhos_prev_ver} deployment uses CephFS through NFS as a backend for {rhos_component_storage_file_first_ref}, there's a `ceph-nfs` service on the {OpenStackShort} controller nodes deployed and managed by {OpenStackPreviousInstaller}. This service cannot be directly imported into {rhos_long} {rhos_curr_ver}. On {rhos_acro} {rhos_curr_ver}, the {rhos_component_storage_file} only supports using a "clustered" NFS service that is directly managed on the {Ceph} cluster. So, adoption with this service will involve a data path disruption to existing NFS clients. The timing of this disruption can be controlled by the deployer independent of this adoption procedure.

On {OpenStackShort} {rhos_prev_ver}, pacemaker controls the high availability of the `ceph-nfs` service. This service is assigned a Virtual IP (VIP) address that is also managed by pacemaker. The VIP is typically created on an isolated `StorageNFS` network. There are ordering and colocation constraints established between this VIP, `ceph-nfs`, and the Shared File Systems service's share manager service on the
controller nodes. Prior to adopting the {rhos_component_storage_file}, pacemaker's ordering and colocation constraints must be adjusted to separate the share manager service. This establishes `ceph-nfs` with its VIP as an isolated, standalone NFS service that can be decommissioned at will after completing the {OpenStackShort} adoption.

-Red Hat Ceph Storage 7.0 introduced a native `clustered Ceph NFS service`. This service has to be deployed on the Ceph cluster using the Ceph orchestrator prior to adopting the {rhos_component_storage_file}. This NFS service will eventually replace the standalone NFS service from {OpenStackShort} {rhos_prev_ver} in your deployment. When the {rhos_component_storage_file} is adopted into the {rhos_acro} {rhos_curr_ver} environment, it will establish all the existing
+Red Hat Ceph Storage 7.0 introduced a native `clustered Ceph NFS service`. This service has to be deployed on the {Ceph} cluster using the Ceph orchestrator prior to adopting the {rhos_component_storage_file}. This NFS service will eventually replace the standalone NFS service from {OpenStackShort} {rhos_prev_ver} in your deployment. When the {rhos_component_storage_file} is adopted into the {rhos_acro} {rhos_curr_ver} environment, it will establish all the existing
exports and client restrictions on the new clustered Ceph NFS service. Clients can continue to read and write data on their existing NFS shares, and are not affected until the old standalone NFS service is decommissioned. This switchover window allows clients to re-mount the same share from the new
clustered Ceph NFS service during a scheduled downtime.
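
As a sketch of deploying that service with the Ceph orchestrator (the cluster name, placement label, and virtual IP are placeholders):

----
ceph nfs cluster create cephfs-nfs "label:nfs" --ingress --virtual_ip 192.168.10.10/24
----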

@@ -4,7 +4,7 @@

To help users handle the configuration for the {OpenStackPreviousInstaller} and {rhos_prev_long}
services, the tool https://github.com/openstack-k8s-operators/os-diff has been
-developed to compare the configuration files between the {OpenStackPreviousInstaller} deployment and the next gen cloud.
+developed to compare the configuration files between the {OpenStackPreviousInstaller} deployment and the {rhos_long} cloud.
Make sure Golang is installed and configured in your environment:
//kgilliga: Do we want to link to "https://github.com/openstack-k8s-operators/os-diff" downstream?
----
4 changes: 2 additions & 2 deletions docs_user/modules/con_node-roles.adoc
@@ -32,7 +32,7 @@
high disk and network usage since many of its operations are in the data path
the cinder-backup service, which has high memory, network, and CPU (to compress
data) requirements.

-The Glance and Swift components are in the data path, as well as RabbitMQ and Galera services.
+The {image_service_first_ref} and Swift components are in the data path, as well as RabbitMQ and Galera services.

Given these requirements, it may be preferable not to let these services wander
all over your {OpenShiftShort} worker nodes with the possibility of impacting other
@@ -41,7 +41,7 @@
want to pin down the heavy ones to a set of infrastructure nodes.

There are also hardware restrictions to take into consideration, because if you
are using a Fibre Channel (FC) Block Storage service backend, you need the cinder-volume,
-cinder-backup, and maybe even the glance (if it's using the Block Storage service as a backend)
+cinder-backup, and maybe even the {image_service_first_ref} (if it's using the Block Storage service as a backend)
services to run on a {OpenShiftShort} host that has an HBA.
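
A minimal sketch of such pinning with a `nodeSelector` (the backend name and the node label are hypothetical):

----
spec:
  cinder:
    template:
      cinderVolumes:
        fc-backend:
          nodeSelector:
            fc-hba: "true"
----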

The {OpenStackShort} Operators allow a great deal of flexibility on where to run the
@@ -2,8 +2,7 @@

= {OpenShift} preparation for {block_storage} adoption

-As explained in xref:planning-the-new-deployment_planning[Planning the new deployment], before deploying {rhos_prev_long} {OpenStackShort} in {OpenShift}, you must ensure that the networks are ready, that you have decided the node selection, and also make sure any necessary changes to the {OpenShiftShort} nodes have been made. For {block_storage_first_ref} volume and backup services all these 3 must be carefully considered.
-//kgilliga: Note to self: xref for planning the new deployment does not work in preview. need to revisit.
+Before deploying {rhos_prev_long} ({OpenStackShort}) in {OpenShift}, you must ensure that the networks are ready, that you have decided on the node selection, and that any necessary changes to the {OpenShiftShort} nodes have been made. For {block_storage_first_ref} volume and backup services, all three of these must be carefully considered.

Node Selection::
You might need, or want, to restrict the {OpenShiftShort} nodes where {block_storage} volume and
@@ -48,7 +47,7 @@
CPU intensive. This may be a concern for the {OpenShiftShort} human operators, as
they may want to use the `nodeSelector` to prevent these services from
interfering with their other {OpenShiftShort} workloads. For more information about node selection, see xref:about-node-selector_planning[About node selector].
+
-When selecting the nodes where the {block_storage} volume is going to run remember that {block_storage}-volume may also use local storage when downloading a glance image for the create volume from image operation, and it can require a considerable
+When selecting the nodes where the {block_storage} volume is going to run, remember that {block_storage}-volume may also use local storage when downloading a {image_service_first_ref} image for the create volume from image operation, and it can require a considerable
amount of space when having concurrent operations and not using {block_storage} volume
cache.
+
1 change: 0 additions & 1 deletion docs_user/modules/openstack-troubleshooting.adoc
@@ -1,7 +1,6 @@
[id="troubleshooting-adoption_{context}"]

//:context: troubleshooting-adoption
-//kgilliga: This module might be converted to an assembly.

= Troubleshooting adoption

4 changes: 1 addition & 3 deletions docs_user/modules/proc_adopting-autoscaling.adoc
@@ -1,14 +1,12 @@
[id="adopting-autoscaling_{context}"]

-//Check xref contexts.
-
= Adopting autoscaling

Adopting autoscaling means that an existing `OpenStackControlPlane` custom resource (CR), where Aodh services are supposed to be disabled, should be patched to start the service with the configuration parameters provided by the source environment.
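
A minimal sketch of such a patch (the control plane name and the exact field layout under `autoscaling` are assumptions, not taken from this guide):

----
oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  telemetry:
    enabled: true
    template:
      autoscaling:
        enabled: true
        aodh:
          secret: osp-secret
          databaseInstance: openstack
'
----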

This guide also assumes that:

-. A `TripleO` environment (the source Cloud) is running on one side;
+. A {OpenStackPreviousInstaller} environment (the source Cloud) is running on one side;
. A `SNO` / `CodeReadyContainers` is running on the other side.

.Prerequisites
@@ -75,7 +75,7 @@
spec:
EOF
----

-* When `neutron-sriov-nic-agent` is running on the existing Compute nodes, check the physical device mappings and ensure that they match the values that are defined in the `OpenStackDataPlaneNodeSet` custom resource (CR). For more information, see xref:reviewing-the-openstack-control-plane-configuration_adopt-control-plane[Reviewing the {rhos_prev_long} control plane configuration].
+* When `neutron-sriov-nic-agent` is running on the existing Compute nodes, check the physical device mappings and ensure that they match the values that are defined in the `OpenStackDataPlaneNodeSet` custom resource (CR). For more information, see xref:pulling-configuration-from-tripleo-deployment_reviewing-configuration[Pulling the configuration from a {OpenStackPreviousInstaller} deployment].

* Define the shell variables necessary to run the script that runs the fast-forward upgrade. Omit setting `CEPH_FSID` if the local storage backend is going to be configured by Nova for Libvirt. The storage backend cannot be changed during adoption, and must match the one used on the source cloud:
----
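# Sketch only (assumed, not part of this change): with a Ceph backend, the FSID
# can typically be read from the ceph-conf-files secret, for example:
CEPH_FSID=$(oc get secret ceph-conf-files -o json | jq -r '.data."ceph.conf"' | base64 -d | grep fsid | sed -e 's/fsid = //')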
@@ -122,7 +122,7 @@
done
----
ifeval::["{build}" != "downstream"]
. Create an https://kubernetes.io/docs/concepts/configuration/secret/#ssh-authentication-secrets[ssh authentication secret] for the data plane nodes:
-//kgilliga: We probably shouldn't link to an external site. I need to check if we will document this in Red Hat docs.
+//kgilliga: I need to check if we will document this in Red Hat docs.
endif::[]
ifeval::["{build}" != "upstream"]
. Create an ssh authentication secret for the data plane nodes:
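+
For example (a sketch; the secret name and key paths are placeholders):
+
----
oc create secret generic dataplane-ansible-ssh-private-key-secret \
  --from-file=ssh-privatekey=<path-to-private-key> \
  --from-file=ssh-publickey=<path-to-public-key> \
  -n openstack
----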
@@ -52,7 +52,7 @@
EOF
[NOTE]
====
If you have previously backed up your {OpenStackShort} services configuration file from the old environment, you can use os-diff to compare and make sure the configuration is correct.
-For more information, see xref:reviewing-the-openstack-control-plane-configuration_adopt-control-plane[Reviewing the {rhos_prev_long} control plane configuration].
+For more information, see xref:pulling-configuration-from-tripleo-deployment_reviewing-configuration[Pulling the configuration from a {OpenStackPreviousInstaller} deployment].
----
pushd os-diff
@@ -12,7 +12,7 @@
Adopt the {image_service_first_ref} that you deployed with an NFS Ganesha backend
* Previous Adoption steps completed. Notably, MariaDB, Keystone, and Barbican
should already be adopted.
* In the source cloud, verify the NFS Ganesha parameters used by the overcloud to configure the {image_service} backend.
-In particular, find among the TripleO heat templates the following variables that are usually an override of the default content provided by
+In particular, find among the {OpenStackPreviousInstaller} heat templates the following variables that are usually an override of the default content provided by
`/usr/share/openstack-tripleo-heat-templates/environments/storage/glance-nfs.yaml`[glance-nfs.yaml]:
+
----
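# Typical overrides to look for (a sketch; the values are placeholders):
GlanceBackend: file
GlanceNfsEnabled: true
GlanceNfsShare: <nfs-host>:/<export-path>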
2 changes: 1 addition & 1 deletion docs_user/modules/proc_adopting-telemetry-services.adoc
@@ -6,7 +6,7 @@
Adopting Telemetry means that an existing `OpenStackControlPlane` custom resource

This guide also assumes that:

-. A `TripleO` environment (the source Cloud) is running on one side;
+. A {OpenStackPreviousInstaller} environment (the source Cloud) is running on one side;
. A `SNO` / `CodeReadyContainers` is running on the other side.

.Prerequisites
10 changes: 4 additions & 6 deletions docs_user/modules/proc_adopting-the-compute-service.adoc
@@ -1,7 +1,5 @@
[id="adopting-the-compute-service_{context}"]

-//kgilliga: Note to self: "Adopting the data plane" xrefs do not work. Need to revisit.
-
= Adopting the {compute_service}

[NOTE]
@@ -21,8 +19,8 @@
must already be imported into the control plane MariaDB;
** the xref:adopting-the-image-service_adopt-control-plane[Adopting the Image service] needs to be imported;
** the xref:migrating-ovn-data_migrating-databases[Migrating OVN data] needs to be imported;
** the xref:adopting-the-networking-service_adopt-control-plane[Adopting the Networking service] needs to be imported;
-** the xref:adopting-the-bare-metal-provisioning-service_{context}[Adopting the Openstack Baremetal service] needs to be imported;
-//kgilliga:Need to revist this xref. Might rewrite this section anyway.
+** the {bare_metal} needs to be imported;
+//kgilliga: I removed the link because it did not work. I might rewrite this section anyway.
** Required services' topology-specific configuration has been retrieved. For more information, see
xref:proc_retrieving-services-topology-specific-configuration_adopt-control-plane[Retrieving services from a topology specific-configuration].
** {rhos_prev_long} services have been stopped. For more information, see xref:stopping-openstack-services_migrating-databases[Stopping {rhos_prev_long} services].
@@ -132,7 +130,7 @@
oc wait --for condition=Ready --timeout=300s Nova/nova
----
+
The local Conductor services will be started for each cell, while the superconductor runs in `cell0`.
-Note that `disable_compute_service_check_for_ffu` is mandatory for all imported Nova services, until the external data plane is imported, and until Nova Compute services fast-forward upgraded. For more information, see xref:adopting-data-plane_data-plane[Adopting the data plane].
+Note that `disable_compute_service_check_for_ffu` is mandatory for all imported Nova services until the external data plane is imported and Nova Compute services are fast-forward upgraded. For more information, see xref:adopting-compute-services-to-the-data-plane_data-plane[Adopting Compute services to the {rhos_acro} data plane] and xref:performing-a-fast-forward-upgrade-on-compute-services_data-plane[Performing a fast-forward upgrade on Compute services].
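
For reference, a minimal sketch of how that flag can appear in the {compute_service} configuration (assuming it is delivered through a configuration snippet; not taken from this diff):

----
[workarounds]
disable_compute_service_check_for_ffu = true
----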

.Verification

@@ -161,4 +159,4 @@
The expected changes:
** RabbitMQ transport URL no longer uses `guest`.

[NOTE]
-At this point, the {compute_service} control plane services do not control the existing {compute_service} Compute workloads. The control plane manages the data plane only after the data adoption process is successfully completed. For more information, see xref:adopting-data-plane_data-plane[Adopting the data plane].
+At this point, the {compute_service} control plane services do not control the existing {compute_service} Compute workloads. The control plane manages the data plane only after the data adoption process is successfully completed. For more information, see xref:adopting-compute-services-to-the-data-plane_data-plane[Adopting Compute services to the {rhos_acro} data plane].