Beta docs checking cross references and links #437

Merged
@@ -23,6 +23,7 @@ include::../modules/proc_adopting-the-placement-service.adoc[leveloffset=+1]
include::../modules/proc_adopting-the-compute-service.adoc[leveloffset=+1]

include::../assemblies/assembly_adopting-the-block-storage-service.adoc[leveloffset=+1]

include::../modules/proc_adopting-the-openstack-dashboard.adoc[leveloffset=+1]

include::../assemblies/assembly_adopting-the-shared-file-systems-service.adoc[leveloffset=+1]
@@ -9,12 +9,9 @@ it is important to plan it carefully. The general network requirements for the
OpenStack services are not much different from the ones in a {OpenStackPreviousInstaller} deployment, but the way you handle them is.

[NOTE]
-More details about the network architecture and configuration can be
-found in the
-https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/18.0-dev-preview/html/deploying_red_hat_openstack_platform_18.0_development_preview_3_on_red_hat_openshift_container_platform/assembly_preparing-rhocp-for-rhosp#doc-wrapper[general
-OpenStack documentation] as well as
-https://docs.openshift.com/container-platform/4.14/networking/about-networking.html[OpenShift
-Networking guide]. This document will address concerns specific to adoption.
+For more information about the network architecture and configuration, see
+link:https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/18.0-dev-preview/html/deploying_red_hat_openstack_platform_18.0_development_preview_3_on_red_hat_openshift_container_platform/assembly_preparing-rhocp-for-rhosp[_Deploying Red Hat OpenStack Platform 18.0 Development Preview 3 on Red Hat OpenShift Container Platform_] and link:https://docs.openshift.com/container-platform/4.15/networking/about-networking.html[About
+networking] in _OpenShift Container Platform 4.15 Documentation_. This document addresses concerns specific to adoption.

// TODO: update the openstack link with the final documentation

2 changes: 1 addition & 1 deletion docs_user/modules/con_about-machine-configs.adoc
@@ -43,7 +43,7 @@ metadata:
< . . . >
----

-Refer to the https://docs.openshift.com/container-platform/4.13/post_installation_configuration/machine-configuration-tasks.html[OpenShift documentation for additional information on `MachineConfig` and `MachineConfigPools`]
+Refer to the link:https://docs.openshift.com/container-platform/4.15/post_installation_configuration/machine-configuration-tasks.html[Postinstallation machine configuration tasks] in _OpenShift Container Platform 4.15 Documentation_.

*WARNING:* Applying a `MachineConfig` to an OpenShift node will make the node
reboot.
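The truncated manifest earlier in this hunk (`metadata: < . . . >`) can be fleshed out along these lines. This is a hypothetical sketch only: the resource name, role label, and the module being loaded are assumptions for illustration, not values from the original document.

```yaml
# Hypothetical MachineConfig sketch: load a kernel module on worker nodes.
# The name, the worker role label, and the iscsi_tcp module are examples.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-load-iscsi-module
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        # Ask systemd-modules-load to load iscsi_tcp at boot.
        - path: /etc/modules-load.d/iscsi_tcp.conf
          overwrite: true
          mode: 420
          contents:
            source: data:,iscsi_tcp
```

Because the Machine Config Operator applies such a change by rebooting each affected node (per the warning above), this kind of manifest is usually applied before the adoption maintenance window.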
9 changes: 3 additions & 6 deletions docs_user/modules/con_about-node-selector.adoc
@@ -16,9 +16,7 @@ You either label the OpenShift nodes or use existing labels, and then use those
`nodeSelector` field.

The `nodeSelector` field in the OpenStack manifests follows the standard
-OpenShift `nodeSelector` field, please refer to https://docs.openshift.com/container-platform/4.13/nodes/scheduling/nodes-scheduler-node-selectors.html[the OpenShift documentation on
-the matter]
-additional information.
+OpenShift `nodeSelector` field. For more information, see link:https://docs.openshift.com/container-platform/4.15/nodes/scheduling/nodes-scheduler-node-selectors.html[About node selectors] in _OpenShift Container Platform 4.15 Documentation_.

This field is present at all the different levels of the OpenStack manifests:

@@ -92,6 +90,5 @@ The Block Storage service operator does not currently have the possibility of defining
the `nodeSelector` in `cinderVolumes`, so you need to specify it on each of the
backends.

-It's possible to leverage labels added by https://docs.openshift.com/container-platform/4.13/hardware_enablement/psap-node-feature-discovery-operator.html[the node feature discovery
-operator]
-to place OpenStack services.
+It is possible to leverage labels added by the Node Feature Discovery (NFD) Operator to place OpenStack services. For more information, see link:https://docs.openshift.com/container-platform/4.15/hardware_enablement/psap-node-feature-discovery-operator.html[Node Feature Discovery Operator] in _OpenShift Container Platform 4.15 Documentation_.
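As a sketch of how a label (whether applied manually or by NFD) is consumed, a per-backend `nodeSelector` could look like the following. The CR layout, backend name, and label key are assumptions for illustration, not taken from the original document.

```yaml
# Hypothetical OpenStackControlPlane fragment: pin one Block Storage
# backend to nodes carrying an example label. The backend name
# "iscsi-backend" and the label key/value are illustrative only.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderVolumes:
        iscsi-backend:
          # nodeSelector must be set per backend, as noted above.
          nodeSelector:
            type: openstack-storage
```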

@@ -21,5 +21,5 @@ RBD, iSCSI, FC, NFS, NVMe-oF, etc.
Once you know all the transport protocols that you are using, you can make
sure that you are taking them into consideration when placing the Block Storage services (as mentioned above in the Node Roles section) and the right storage transport related binaries are running on the OpenShift nodes.

-Detailed information about the specifics for each storage transport protocol can be found in the xref:adopting-the-block-storage-service_adopt-control-plane[Adopting the Block Storage service].
+Detailed information about the specifics for each storage transport protocol can be found in the xref:adopting-the-block-storage-service_adopt-control-plane[Adopting the {block_storage}].
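An illustrative (not documented) way to check which transport tooling is present on a host is to probe for the client binaries; on OpenShift you would run the same check on each node, for example via `oc debug node/<name> -- chroot /host ...`. The tool list below is an assumption covering the protocols named above.

```shell
# Illustrative only: report which storage transport tools are present
# on this host (iSCSI, multipath, RBD, NFS). Tool names are examples.
for tool in iscsiadm multipath rbd mount.nfs; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: present"
  else
    echo "$tool: missing"
  fi
done
```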

@@ -74,8 +74,9 @@ spec:
vlan: 22
EOF
----
+

* When `neutron-sriov-nic-agent` is running on the existing Compute nodes, check the physical device mappings and ensure that they match the values that are defined in the `OpenStackDataPlaneNodeSet` custom resource (CR). For more information, see xref:reviewing-the-openstack-control-plane-configuration_adopt-control-plane[Reviewing the {rhos_prev_long} control plane configuration].

* Define the shell variables necessary to run the script that runs the fast-forward upgrade. Omit setting `CEPH_FSID` if the local storage backend is going to be configured by Nova for Libvirt. The storage backend cannot be changed during adoption, and must match the one used on the source cloud:
----
PODIFIED_DB_ROOT_PASSWORD=$(oc get -o json secret/osp-secret | jq -r .data.DbRootPassword | base64 -d)
10 changes: 5 additions & 5 deletions docs_user/modules/proc_adopting-the-compute-service.adoc
@@ -15,16 +15,16 @@ here this time).
* Previous Adoption steps completed. Notably,
** the xref:migrating-databases-to-mariadb-instances_migrating-databases[Migrating databases to MariaDB instances]
must already be imported into the control plane MariaDB;
-** the xref:adopting-the-identity-service_{context}[Adopting the Identity service] needs to be imported;
-** the xref:adopting-the-key-manager-service_{context}[Adopting the Key Manager service] needs to be imported;
+** the xref:adopting-the-identity-service_adopt-control-plane[Adopting the Identity service] needs to be imported;
+** the xref:adopting-the-key-manager-service_adopt-control-plane[Adopting the Key Manager service] needs to be imported;
** the xref:adopting-the-placement-service_{context}[Adopting the Placement service] needs to be imported;
-** the xref:adopting-the-image-service_{context}[Adopting the Image service] needs to be imported;
+** the xref:adopting-the-image-service_adopt-control-plane[Adopting the Image service] needs to be imported;
** the xref:migrating-ovn-data_migrating-databases[Migrating OVN data] need to be imported;
-** the xref:adopting-the-networking-service_{context}[Adopting the Networking service] needs to be imported;
+** the xref:adopting-the-networking-service_adopt-control-plane[Adopting the Networking service] needs to be imported;
** the xref:adopting-the-bare-metal-provisioning-service_{context}[Adopting the Openstack Baremetal service] needs to be imported;
//kgilliga:Need to revist this xref. Might rewrite this section anyway.
** Required services specific topology
-xref:proc_retrieving-services-topology-specific-configuration_{context}[Retrieving services from a topology specific-configuration].
+xref:proc_retrieving-services-topology-specific-configuration_adopt-control-plane[Retrieving services from a topology specific-configuration].
** {rhos_prev_long} services have been stopped. For more information, see xref:stopping-openstack-services_migrating-databases[Stopping {rhos_prev_long} services].
* Define the following shell variables. The values that are used are examples. Replace these example values with values that are correct for your environment:
----
2 changes: 2 additions & 0 deletions docs_user/modules/proc_adopting-the-networking-service.adoc
@@ -20,7 +20,9 @@ should be already adopted.

.Procedure
//The following link takes me to a 404. Do we need this text? I think we should start the procedure at "Patch OpenStackControlPlane..."
+ifeval::["{build}" != "downstream"]
As already done for https://github.com/openstack-k8s-operators/data-plane-adoption/blob/main/keystone_adoption.md[Keystone], the Neutron Adoption follows the same pattern.
+endif::[]

* Patch `OpenStackControlPlane` to deploy {networking}:
+
@@ -135,4 +135,4 @@ Hello World!

[NOTE]
At this point data is still stored on the previously existing nodes. For more information about migrating the actual data from the old
-to the new deployment, see xref:migrating-the-object-storage-service_migrate-object-storage-service[Object Storage service migration].
+to the new deployment, see xref:migrating-the-object-storage-service_migrate-object-storage-service[Migrating the {object_storage_first_ref} to {rhos_long} nodes].
@@ -26,7 +26,9 @@ trying to adopt {orchestration}.

.Procedure
//kgilliga: I get an error when I click this link. Do we need it in the downstream docs?
+ifeval::["{build}" != "downstream"]
As already done for https://github.com/openstack-k8s-operators/data-plane-adoption/blob/main/keystone_adoption.md[Keystone], the Heat Adoption follows a similar pattern.
+endif::[]

. Patch the `osp-secret` to update the `HeatAuthEncryptionKey` and `HeatPassword`. This needs to match what you have configured in the existing {OpenStackPreviousInstaller} {orchestration} configuration.
You can retrieve and verify the existing `auth_encryption_key` and `service` passwords via:
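The retrieval command itself is elided in this hunk. Separately, a hypothetical sketch of preparing the replacement secret data is shown below; the key name `HeatPassword` comes from the step above, while the placeholder value and the commented `oc patch` invocation are illustrative assumptions only.

```shell
# Hypothetical sketch: base64-encode the existing Heat password so it can
# be patched into osp-secret. The value is an example placeholder.
HEAT_PASSWORD='example-heat-password'
ENCODED=$(printf '%s' "$HEAT_PASSWORD" | base64)
echo "$ENCODED"
# With a cluster available you would then run something like (illustrative):
#   oc patch secret osp-secret --type merge \
#     -p "{\"data\":{\"HeatPassword\":\"$ENCODED\"}}"
```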
2 changes: 1 addition & 1 deletion docs_user/modules/proc_adopting-the-placement-service.adoc
@@ -9,7 +9,7 @@
* Previous Adoption steps completed. Notably,
** the xref:migrating-databases-to-mariadb-instances_migrating-databases[Migrating databases to MariaDB instances]
must already be imported into the control plane MariaDB.
-** the xref:adopting-the-identity-service_{context}[Adopting the Identity service] needs to be imported.
+** the xref:adopting-the-identity-service_adopt-control-plane[Adopting the Identity service] needs to be imported.
** the Memcached operator needs to be deployed (nothing to import for it from
the source environment).

@@ -2,15 +2,13 @@

= Configuring the networking for control plane services

-//== Pod connectivity to isolated networks
-//Heading 2s are commented out until I rewrite this file for GA.

Once the NMState Operator has created the desired hypervisor network configuration for
isolated networks, you need to configure the OpenStack services to use the configured
interfaces. This is achieved by defining `NetworkAttachmentDefinition` custom resources (CRs) for
-each isolated network. (In some clusters, these CRs are managed by
-https://docs.openshift.com/container-platform/4.14/networking/cluster-network-operator.html[Cluster
-Network Operator], in which case `Network` CRs should be used instead.)
+each isolated network. (In some clusters, these CRs are managed by the Cluster
+Network Operator, in which case `Network` CRs should be used instead. For more information, see
+link:https://docs.openshift.com/container-platform/4.15/networking/cluster-network-operator.html[Cluster
+Network Operator] in _OpenShift Container Platform 4.15 Documentation_.)

For example,

9 changes: 8 additions & 1 deletion docs_user/modules/proc_creating-a-ceph-nfs-cluster.adoc
@@ -92,9 +92,16 @@ with a 3-node NFS cluster.
* The `ingress-mode` argument must be set to ``haproxy-protocol``. No other
ingress-mode will be supported. This ingress mode will allow enforcing client
restrictions through {rhos_component_storage_file}.
+ifeval::["{build}" != "downstream"]
* For more information on deploying the clustered Ceph NFS service, see the
link:https://docs.ceph.com/en/latest/cephadm/services/nfs/[ceph orchestrator
-documentation]
+documentation].
+endif::[]
+ifeval::["{build}" != "upstream"]
+* For more information on deploying the clustered Ceph NFS service, see
+link:https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/7/html-single/operations_guide/index#management-of-nfs-ganesha-gateway-using-the-ceph-orchestrator[Management of NFS-Ganesha gateway using the Ceph Orchestrator (Limited Availability)] in the _Red Hat Ceph Storage 7 Operations Guide_.
+endif::[]
//kgilliga: Confirm that we should link to the Ceph Operations Guide downstream.
* The following commands are run inside a `cephadm shell` to create a clustered
Ceph NFS service.
+
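The exact commands are elided in this hunk; a hedged sketch of one such command, based on the upstream `ceph nfs` orchestrator CLI, could look like the following. The cluster name, host placement, and virtual IP are assumptions for illustration.

```shell
# Illustrative only: inside a cephadm shell, create a 3-node clustered NFS
# service with the haproxy-protocol ingress mode required above.
# "cephfs-nfs", the host list, and the virtual IP are example values.
ceph nfs cluster create cephfs-nfs \
  "3 host1,host2,host3" \
  --ingress --virtual-ip 192.168.122.200 \
  --ingress-mode haproxy-protocol
```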
@@ -19,8 +19,7 @@ here this time).
* Make sure the previous Adoption steps have been performed successfully.
** The `OpenStackControlPlane` resource must be already created.
** The control plane MariaDB and RabbitMQ are running. No other control plane services are running.
-** Required services specific topology. For more information, see xref:pulling-the-openstack-configuration_{context}[Pulling the {rhos_prev_long} configuration].
-//kgilliga: this xref should specifically point to the Get services topology specific configuration module when it's ready.
+** Required services specific topology. For more information, see xref:proc_retrieving-services-topology-specific-configuration_adopt-control-plane[Retrieving services from a topology specific-configuration].
** {OpenStackShort} services have been stopped. For more information, see xref:stopping-openstack-services_{context}[Stopping {rhos_prev_long} services].
** There must be network routability between the original MariaDB and the MariaDB for the control plane.
* Define the following shell variables. The values that are used are examples. Replace these example values with values that are correct for your environment:
@@ -228,8 +227,7 @@ EOF
.Verification

Compare the following outputs with the topology specific configuration.
-For more information, see xref:pulling-the-openstack-configuration_{context}[Pulling the {rhos_prev_long} configuration].
-//kgilliga: this xref should specifically point to the Get services topology specific configuration module when it's ready.:
+For more information, see xref:proc_retrieving-services-topology-specific-configuration_adopt-control-plane[Retrieving services from a topology specific-configuration].

. Check that the databases were imported correctly:
+
2 changes: 1 addition & 1 deletion docs_user/modules/proc_reusing-existing-subnet-ranges.adoc
@@ -14,7 +14,7 @@ allocated to existing cluster nodes.
This scenario implies that the remaining IP addresses in the existing subnet are
enough for the new control plane services. If not,
xref:using-new-subnet-ranges_{context}[Scenario 1: Using new subnet ranges] should be used
-instead. For more information, see xref:planning-your-ipam-configuration_network-requirements[Planning your IPAM configuration].
+instead. For more information, see xref:planning-your-ipam-configuration_configuring-network[Planning your IPAM configuration].

No special routing configuration is required in this scenario; the only thing
to pay attention to is to make sure that already consumed IP addresses don't