Merge pull request #442 from klgill/BetaDocs-HideExternalLinksDownstream
checking external links and variables
klgill authored May 7, 2024
2 parents 4f17ff0 + cc5d44c commit abab6af
Showing 26 changed files with 118 additions and 99 deletions.
@@ -10,7 +10,7 @@ you would like to replicate in the new environment.
Before proceeding, you should have a list of the following IP address
allocations to be used for the new control plane services:

* 1 IP address, per isolated network, per OpenShift worker node. (These
* 1 IP address, per isolated network, per {OpenShift} worker node. (These
addresses will <<_configure_openshift_worker_nodes,translate>> to
`NodeNetworkConfigurationPolicy` custom resources (CRs).)
* IP range, per isolated network, for the data plane nodes. (These ranges will
@@ -31,7 +31,7 @@ listed below should reflect the actual adopted environment. The number of
isolated networks may differ from the example below. IPAM scheme may differ.
Only relevant parts of the configuration are shown. Examples are incomplete and
should be incorporated into the general configuration for the new deployment,
as described in the general OpenStack documentation.
as described in the general {rhos_prev_long} documentation.

include::../modules/proc_configuring-openshift-worker-nodes.adoc[leveloffset=+1]

@@ -4,9 +4,9 @@

= Configuring the network for the RHOSO deployment

With OpenShift, the network is a very important aspect of the deployment, and
With {OpenShift} ({OpenShiftShort}), the network is a very important aspect of the deployment, and
it is important to plan it carefully. The general network requirements for the
OpenStack services are not much different from the ones in a {OpenStackPreviousInstaller} deployment, but the way you handle them is.
{rhos_prev_long} ({OpenStackShort}) services are not much different from the ones in a {OpenStackPreviousInstaller} deployment, but the way you handle them is.

[NOTE]
For more information about the network architecture and configuration, see
@@ -17,14 +17,14 @@ networking] in _OpenShift Container Platform 4.15 Documentation_. This document

// TODO: should we parametrize the version in the links somehow?

When adopting a new OpenStack deployment, it is important to align the network
When adopting a new {OpenStackShort} deployment, it is important to align the network
configuration with the adopted cluster to maintain connectivity for existing
workloads.

The following logical configuration steps will incorporate the existing network
configuration:

* configure **OpenShift worker nodes** to align VLAN tags and IPAM
* configure **{OpenShiftShort} worker nodes** to align VLAN tags and IPAM
configuration with the existing deployment.
* configure **Control Plane services** to use compatible IP ranges for
service and load balancing IPs.
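
The first step above can be expressed as a `NodeNetworkConfigurationPolicy` CR
per worker node. The following is only an illustrative sketch: the interface
name, VLAN tag, and IP address are placeholders that must be replaced with
values from the adopted environment.

----
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: osp-vlan20-worker-0          # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/hostname: worker-0
  desiredState:
    interfaces:
    - name: enp6s0.20                # placeholder NIC and VLAN tag from the adopted cloud
      type: vlan
      state: up
      vlan:
        base-iface: enp6s0
        id: 20
      ipv4:
        enabled: true
        dhcp: false
        address:
        - ip: 172.17.0.10            # placeholder internalapi address for this worker
          prefix-length: 24
----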
@@ -4,7 +4,7 @@

= Planning the new deployment

Just like you did back when you installed your director-deployed {rhos_prev_long}, the
Just like you did back when you installed your {OpenStackPreviousInstaller}-deployed {rhos_prev_long}, the
upgrade/migration to the control plane requires planning various aspects
of the environment such as node roles, planning your network topology, and
storage.
@@ -5,8 +5,8 @@
= Planning your IPAM configuration

The new deployment model puts additional burden on the size of IP allocation
pools available for OpenStack services. This is because each service deployed
on OpenShift worker nodes will now require an IP address from the IPAM pool (in
pools available for {rhos_prev_long} ({OpenStackShort}) services. This is because each service deployed
on {OpenShift} ({OpenShiftShort}) worker nodes will now require an IP address from the IPAM pool (in
the previous deployment model, all services hosted on a controller node shared
the same IP address.)

@@ -19,7 +19,7 @@ your particular case.
The total number of IP addresses required for the new control plane services,
in each isolated network, is calculated as a sum of the following:

* The number of OpenShift worker nodes. (Each node will require 1 IP address in
* The number of {OpenShiftShort} worker nodes. (Each node will require 1 IP address in
`NodeNetworkConfigurationPolicy` custom resources (CRs).)
* The number of IP addresses required for the data plane nodes. (Each node will require
an IP address from `NetConfig` CRs.)
@@ -29,7 +29,7 @@ in each isolated network, is calculated as a sum of the following:
* The number of IP addresses required for load balancer IP addresses. (Each
service will require a VIP address from `IPAddressPool` CRs.)

As of the time of writing, the simplest single worker node OpenShift deployment
As of the time of writing, the simplest single worker node {OpenShiftShort} deployment
(CRC) has the following IP ranges defined (for the `internalapi` network):

* 1 IP address for the single worker node;
@@ -44,11 +44,11 @@ allocation pools.

// TODO: update the numbers above for a more realistic multinode cluster.

The exact requirements may differ depending on the list of OpenStack services
to be deployed, their replica numbers, as well as the number of OpenShift
The exact requirements may differ depending on the list of {OpenStackShort} services
to be deployed, their replica numbers, as well as the number of {OpenShiftShort}
worker nodes and data plane nodes.

Additional IP addresses may be required in future OpenStack releases, so it is
Additional IP addresses may be required in future {OpenStackShort} releases, so it is
advised to plan for some extra capacity, for each of the allocation pools used
in the new environment.
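
For the load balancer part of that calculation, each allocation pool is
expressed as a MetalLB `IPAddressPool` CR. A minimal sketch for the
`internalapi` network follows; the address range is a placeholder and should be
sized for the expected number of load-balanced services plus some spare
capacity.

----
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: internalapi
  namespace: metallb-system
spec:
  addresses:
  - 172.17.0.80-172.17.0.90          # placeholder range with headroom
----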

8 changes: 4 additions & 4 deletions docs_user/assemblies/assembly_storage-requirements.adoc
@@ -4,12 +4,12 @@

= Storage requirements

When looking into the storage in an OpenStack deployment you can differentiate
When looking into the storage in an {rhos_prev_long} ({OpenStackShort}) deployment you can differentiate
two kinds: the storage requirements of the services themselves and the
storage used for the OpenStack users that the services will manage.
storage used for the {OpenStackShort} users that the services will manage.

These requirements may drive your OpenShift node selection, as mentioned above,
and may require you to do some preparations on the OpenShift nodes before
These requirements may drive your {OpenShift} ({OpenShiftShort}) node selection, as mentioned above,
and may require you to do some preparations on the {OpenShiftShort} nodes before
you can deploy the services.

//*TODO: Galera, RabbitMQ, Swift, Glance, etc.*
6 changes: 3 additions & 3 deletions docs_user/modules/con_about-machine-configs.adoc
@@ -6,13 +6,13 @@ Some services require you to have services or kernel modules running on the host
`nvme-fabrics` kernel module.

For those cases you use `MachineConfig` manifests, and if you are restricting
the nodes that you are placing the OpenStack services on using the `nodeSelector` then
the nodes that you are placing the {rhos_prev_long} services on using the `nodeSelector` then
you also want to limit where the `MachineConfig` is applied.
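
As an illustration, a `MachineConfig` that loads the `nvme-fabrics` kernel
module on those nodes might look like the following sketch. The name, the
`openstack` role label (which ties it to the `MachineConfigPool` described
next), and the file contents are assumptions.

----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-openstack-nvme-fabrics    # hypothetical name
  labels:
    machineconfiguration.openshift.io/role: openstack
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /etc/modules-load.d/nvme-fabrics.conf
        mode: 420                    # 0644
        overwrite: false
        contents:
          source: data:,nvme-fabrics # load the nvme-fabrics module at boot
----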

To define where the `MachineConfig` can be applied, you need to use a
`MachineConfigPool` that links the `MachineConfig` to the nodes.

For example to be able to limit `MachineConfig` to the 3 OpenShift nodes that you
For example to be able to limit `MachineConfig` to the 3 {OpenShift} ({OpenShiftShort}) nodes that you
marked with the `type: openstack` label, you create the
`MachineConfigPool` like this:
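(An illustrative sketch only; it assumes the `type: openstack` node label and
an `openstack` MachineConfig role, and the exact manifest may differ.)

----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: openstack
spec:
  machineConfigSelector:
    matchLabels:
      machineconfiguration.openshift.io/role: openstack
  nodeSelector:
    matchLabels:
      type: openstack
----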

@@ -46,4 +46,4 @@ metadata:
Refer to the link:https://docs.openshift.com/container-platform/4.15/post_installation_configuration/machine-configuration-tasks.html[Postinstallation machine configuration tasks] in _OpenShift Container Platform 4.15 Documentation_.

[WARNING]
Applying a `MachineConfig` to an {OpenShift} node makes the node reboot.
Applying a `MachineConfig` to an {OpenShiftShort} node makes the node reboot.
24 changes: 12 additions & 12 deletions docs_user/modules/con_about-node-selector.adoc
@@ -3,36 +3,36 @@
= About node selector

There are a variety of reasons why you might want to restrict the nodes where
OpenStack services can be placed:
{rhos_prev_long} ({OpenStackShort}) services can be placed:

* Hardware requirements: System memory, Disk space, Cores, HBAs
* Limit the impact of the OpenStack services on other OpenShift workloads.
* Avoid collocating OpenStack services.
* Limit the impact of the {OpenStackShort} services on other {OpenShift} workloads.
* Avoid collocating {OpenStackShort} services.

The mechanism provided by the OpenStack operators to achieve this is through the
The mechanism provided by the {OpenStackShort} operators to achieve this is through the
use of labels.

You either label the OpenShift nodes or use existing labels, and then use those labels in the OpenStack manifests in the
You either label the {OpenShiftShort} nodes or use existing labels, and then use those labels in the {OpenStackShort} manifests in the
`nodeSelector` field.

The `nodeSelector` field in the OpenStack manifests follows the standard
OpenShift `nodeSelector` field. For more information, see link:https://docs.openshift.com/container-platform/4.15/nodes/scheduling/nodes-scheduler-node-selectors.html[About node selectors] in _OpenShift Container Platform 4.15 Documentation_.
The `nodeSelector` field in the {OpenStackShort} manifests follows the standard
{OpenShiftShort} `nodeSelector` field. For more information, see link:https://docs.openshift.com/container-platform/4.15/nodes/scheduling/nodes-scheduler-node-selectors.html[About node selectors] in _OpenShift Container Platform 4.15 Documentation_.

This field is present at all the different levels of the OpenStack manifests:
This field is present at all the different levels of the {OpenStackShort} manifests:

* Deployment: The `OpenStackControlPlane` object.
* Component: For example the `cinder` element in the `OpenStackControlPlane`.
* Service: For example the `cinderVolume` element within the `cinder` element
in the `OpenStackControlPlane`.

This allows fine-grained control of the placement of the OpenStack services
This allows fine-grained control of the placement of the {OpenStackShort} services
with minimal repetition.

Values of the `nodeSelector` are propagated to the next levels unless they are
overwritten. This means that a `nodeSelector` value at the deployment level will
affect all the OpenStack services.
affect all the {OpenStackShort} services.
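
A sketch of how these levels combine is shown below: a deployment-level
`nodeSelector` that is overridden for a single Block Storage backend. The
backend name, labels, and the exact nesting under `cinder` are illustrative
assumptions and may differ from the operator's current schema.

----
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  nodeSelector:                      # deployment level: propagated to all services
    type: openstack
  cinder:
    template:
      cinderVolumes:
        nfs:                         # hypothetical backend name
          nodeSelector:              # service level: overrides the value above
            type: openstack-storage  # hypothetical label
----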

For example, you can add label `type: openstack` to any 3 OpenShift nodes:
For example, you can add label `type: openstack` to any 3 {OpenShiftShort} nodes:

----
$ oc label nodes worker0 type=openstack
@@ -90,5 +90,5 @@ The Block Storage service operator does not currently have the possibility of de
the `nodeSelector` in `cinderVolumes`, so you need to specify it on each of the
backends.

It is possible to leverage labels added by the Node Feature Discovery (NFD) Operator to place OpenStack services. For more information, see link:https://docs.openshift.com/container-platform/4.13/hardware_enablement/psap-node-feature-discovery-operator.html[Node Feature Discovery Operator] in _OpenShift Container Platform 4.15 Documentation_.
It is possible to leverage labels added by the Node Feature Discovery (NFD) Operator to place {OpenStackShort} services. For more information, see link:https://docs.openshift.com/container-platform/4.13/hardware_enablement/psap-node-feature-discovery-operator.html[Node Feature Discovery Operator] in _OpenShift Container Platform 4.15 Documentation_.

@@ -42,5 +42,5 @@ Finally, a parameter which may be important based upon your configuration and ex

As a warning, hardware types set via the `ironic.conf` `enabled_hardware_types` parameter and hardware type driver interfaces starting with `staging-` are not available to be migrated into an adopted configuration.

Furthermore, {OpenStackPreviousInstaller}-based deployments made architectural decisions based upon self-management of services. When adopting deployments, you don't necessarily need multiple replicas of secondary services such as the Introspection service. Should the host the container is running upon fail, OpenShift will restart the container on another host. The short-term transitory loss
//kgilliga: This last sentence tails off.
Furthermore, {OpenStackPreviousInstaller}-based deployments made architectural decisions based upon self-management of services. When adopting deployments, you don't necessarily need multiple replicas of secondary services such as the Introspection service. Should the host the container is running upon fail, {OpenShift} will restart the container on another host. The short-term transitory loss
//kgilliga: This last sentence trails off.
6 changes: 3 additions & 3 deletions docs_user/modules/con_block-storage-service-requirements.adoc
@@ -2,12 +2,12 @@

= Block Storage service requirements

The Block Storage service (cinder) has both local storage used by the service and OpenStack user requirements.
The Block Storage service (cinder) has both local storage used by the service and {rhos_prev_long} ({OpenStackShort}) user requirements.

Local storage is used, for example, when downloading a glance image for the create volume from image operation; this usage can become considerable when there are
concurrent operations and the Block Storage service volume cache is not used.

In the Operator deployed OpenStack, there is a way to configure the
In the Operator deployed {OpenStackShort}, there is a way to configure the
location of the conversion directory to be an NFS share (using the extra
volumes feature), something that needed to be done manually before.
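
A sketch of that extra volumes mechanism is shown below: an `extraMounts` entry
that propagates an NFS share to the `CinderVolume` pods as the conversion
directory. The field layout is an assumption based on the operator's extra
volumes feature, and the server, export path, and mount point are placeholders;
check the operator's API reference for the exact schema.

----
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  extraMounts:
  - name: cinder-conversion          # hypothetical name
    region: r1
    extraVol:
    - propagation:
      - CinderVolume                 # mount only into cinder-volume pods
      volumes:
      - name: conversion
        nfs:
          server: 192.168.0.10       # placeholder NFS server
          path: /export/conversion   # placeholder export
      mounts:
      - name: conversion
        mountPath: /var/lib/cinder/conversion
----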

@@ -19,7 +19,7 @@ First you need to check the transport protocol the Block Storage service backend
RBD, iSCSI, FC, NFS, NVMe-oF, etc.

Once you know all the transport protocols that you are using, you can make
sure that you are taking them into consideration when placing the Block Storage services (as mentioned above in the Node Roles section) and the right storage transport related binaries are running on the OpenShift nodes.
sure that you are taking them into consideration when placing the Block Storage services (as mentioned above in the Node Roles section) and the right storage transport related binaries are running on the {OpenShift} nodes.

Detailed information about the specifics for each storage transport protocol can be found in the xref:adopting-the-block-storage-service_adopt-control-plane[Adopting the {block_storage}].

@@ -2,12 +2,11 @@

= Comparing configuration files between deployments

In order to help users to handle the configuration for the TripleO and OpenStack
In order to help users to handle the configuration for the {OpenStackPreviousInstaller} and {rhos_prev_long}
services the tool: https://github.com/openstack-k8s-operators/os-diff has been
developed to compare the configuration files between the TripleO deployment and
the next gen cloud.
developed to compare the configuration files between the {OpenStackPreviousInstaller} deployment and the next gen cloud.
Make sure Golang is installed and configured in your environment:

//kgilliga: Do we want to link to "https://github.com/openstack-k8s-operators/os-diff" downstream?
----
git clone https://github.com/openstack-k8s-operators/os-diff
pushd os-diff
2 changes: 1 addition & 1 deletion docs_user/modules/con_identity-service-authentication.adoc
@@ -2,7 +2,7 @@

= Identity service authentication

When you adopt a director OpenStack deployment, users authenticate to the Identity service (keystone) by using Secure RBAC (SRBAC). There is no change to how you perform operations if SRBAC is enabled. If SRBAC is not enabled, then adopting a director OpenStack deployment changes how you perform operations, such as adding roles to users. If you have custom policies enabled, contact support before adopting a director OpenStack deployment.
When you adopt a {OpenStackPreviousInstaller} {rhos_prev_long} ({OpenStackShort}) deployment, users authenticate to the Identity service (keystone) by using Secure RBAC (SRBAC). There is no change to how you perform operations if SRBAC is enabled. If SRBAC is not enabled, then adopting a {OpenStackPreviousInstaller} {OpenStackShort} deployment changes how you perform operations, such as adding roles to users. If you have custom policies enabled, contact support before adopting a {OpenStackPreviousInstaller} {OpenStackShort} deployment.

// For more information on SRBAC see [link].

@@ -2,7 +2,7 @@

= Key Manager service support for crypto plug-ins

The Key Manager service (barbican) does not yet support all of the crypto plug-ins available in TripleO.
The Key Manager service (barbican) does not yet support all of the crypto plug-ins available in {OpenStackPreviousInstaller}.

//**TODO: Right now Barbican only supports the simple crypto plugin.

28 changes: 14 additions & 14 deletions docs_user/modules/con_node-roles.adoc
@@ -2,26 +2,26 @@

= About node roles

In director deployments you had 4 different standard roles for the nodes:
In {OpenStackPreviousInstaller} deployments you had 4 different standard roles for the nodes:
`Controller`, `Compute`, `Ceph Storage`, `Swift Storage`, but in the control plane you make a distinction based on where things are running, in
OpenShift or external to it.
{OpenShift} ({OpenShiftShort}) or external to it.

When adopting a director OpenStack deployment, your `Compute` nodes will directly become
When adopting a {OpenStackPreviousInstaller} {rhos_prev_long} ({OpenStackShort}) deployment, your `Compute` nodes will directly become
external nodes, so there should not be much additional planning needed there.

In many deployments being adopted the `Controller` nodes will require some
thought because you have many OpenShift nodes where the controller services
thought because you have many {OpenShiftShort} nodes where the Controller services
could run, and you have to decide which ones you want to use, how you are going to use them, and make sure those nodes are ready to run the services.

In most deployments running OpenStack services on `master` nodes can have a
seriously adverse impact on the OpenShift cluster, so it is recommended that you place OpenStack services on non `master` nodes.
In most deployments, running {OpenStackShort} services on `master` nodes can have a
seriously adverse impact on the {OpenShiftShort} cluster, so it is recommended that you place {OpenStackShort} services on non `master` nodes.

By default OpenStack Operators deploy OpenStack services on any worker node, but
By default {OpenStackShort} Operators deploy {OpenStackShort} services on any worker node, but
that is not necessarily what's best for all deployments, and some services
might not even work when deployed that way.

When planning a deployment it's good to remember that not all the services in an
OpenStack deployment are the same, as they have very different requirements.
{OpenStackShort} deployment are the same, as they have very different requirements.

Looking at the Block Storage service (cinder) component you can clearly see different requirements for
its services: the cinder-scheduler is a very light service with low
@@ -35,17 +35,17 @@ data) requirements.
The Glance and Swift components are in the data path, as well as RabbitMQ and Galera services.

Given these requirements it may be preferable not to let these services wander
all over your OpenShift worker nodes with the possibility of impacting other
all over your {OpenShiftShort} worker nodes with the possibility of impacting other
workloads, or maybe you don't mind the light services wandering around but you
want to pin down the heavy ones to a set of infrastructure nodes.

There are also hardware restrictions to take into consideration, because if you
are using a Fibre Channel (FC) Block Storage service backend you need the cinder-volume,
cinder-backup, and maybe even the glance (if it's using the Block Storage service as a backend)
services to run on a OpenShift host that has an HBA.
services to run on a {OpenShiftShort} host that has an HBA.

The OpenStack Operators allow a great deal of flexibility on where to run the
OpenStack services, as you can use node labels to define which OpenShift nodes
are eligible to run the different OpenStack services. Refer to the xref:about-node-selector_{context}[About node
The {OpenStackShort} Operators allow a great deal of flexibility on where to run the
{OpenStackShort} services, as you can use node labels to define which {OpenShiftShort} nodes
are eligible to run the different {OpenStackShort} services. Refer to the xref:about-node-selector_{context}[About node
selector] to learn more about using labels to define
placement of the OpenStack services.
placement of the {OpenStackShort} services.