Beta docs restructuring ctlplane services pt3 #402

Merged
1 change: 1 addition & 0 deletions docs_user/adoption-attributes.adoc
@@ -4,6 +4,7 @@

ifeval::["{build}" == "upstream"]
:OpenShift: OpenShift
:rhos_long: OpenStack
:rhos_prev_long: OpenStack
:rhos_acro: OSP
:OpenStackShort: OSP
@@ -20,4 +20,16 @@ include::../modules/proc_adopting-the-placement-service.adoc[leveloffset=+1]

include::../modules/proc_adopting-the-compute-service.adoc[leveloffset=+1]

include::../assemblies/assembly_adopting-the-block-storage-service.adoc[leveloffset=+1]

include::../assemblies/assembly_adopting-the-shared-file-systems-service.adoc[leveloffset=+1]

include::../assemblies/assembly_adopting-the-bare-metal-provisioning-service.adoc[leveloffset=+1]

include::../modules/proc_adopting-telemetry-services.adoc[leveloffset=+1]

include::../modules/proc_adopting-autoscaling.adoc[leveloffset=+1]

include::assembly_reviewing-the-openstack-control-plane-configuration.adoc[leveloffset=+1]

include::../modules/proc_rolling-back-the-control-plane-adoption.adoc[leveloffset=+1]
@@ -0,0 +1,11 @@
[id="adopting-the-bare-metal-provisioning-service_{context}"]

:context: adopting-bare-metal-provisioning

= Adopting the Bare Metal Provisioning service

Review information about your {bare_metal_first_ref} configuration and then adopt your {bare_metal} to the {rhos_long} control plane.

include::../modules/con_bare-metal-provisioning-service-configurations.adoc[leveloffset=+1]

include::../modules/proc_deploying-the-bare-metal-provisioning-service.adoc[leveloffset=+1]
@@ -0,0 +1,27 @@
[id="adopting-the-shared-file-systems-service_{context}"]

:context: adopting-shared-file-systems

= Adopting the {rhos_component_storage_file}

The {rhos_component_storage_file_first_ref} provides {rhos_prev_long} ({OpenStackShort})
users with a self-service API to create and manage file shares. File
shares (or simply, "shares") are built for concurrent read/write access by
any number of clients. This, coupled with the inherent elasticity of the
underlying storage, makes the {rhos_component_storage_file} essential in
cloud environments that require RWX ("read write many") persistent storage.

File shares in {OpenStackShort} are accessed directly over a network. Hence, it is essential to plan the networking of the cloud to create a successful and sustainable orchestration layer for shared file systems.

The {rhos_component_storage_file} supports two levels of storage networking abstractions: one where users can directly control the networking for their respective file shares, and another where the storage networking is configured by the {OpenStackShort} administrator. It is important to ensure that the networking in the {OpenStackShort} {rhos_prev_ver} environment matches the network plans for your new cloud after adoption. This ensures that tenant workloads remain connected to storage throughout the adoption process, even as the control plane undergoes a minor interruption. The {rhos_component_storage_file} control plane services are not in the data path; shutting down the API, scheduler, and share manager services does not impact access to existing shared file systems.

Typically, storage and storage device management networks are separate.
The Shared File Systems service only needs access to the storage device management network.
For example, if a Ceph cluster was used in the deployment, the "storage"
network refers to the Ceph cluster's public network, and the Shared File Systems service's share manager service needs to be able to reach it.
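
For instance, before adopting the {rhos_component_storage_file}, you might want to confirm that the host running the share manager service can reach the Ceph public network. The following minimal Python sketch checks TCP reachability of Ceph monitor endpoints; the addresses are placeholders for your environment, and the ports shown (3300 and 6789) are the usual Ceph monitor ports.

[source,python]
----
import socket

# Hypothetical Ceph monitor endpoints on the storage (Ceph public) network.
# Replace these with the monitor addresses from your own Ceph cluster.
CEPH_MONITORS = [("172.18.0.10", 3300), ("172.18.0.11", 3300), ("172.18.0.12", 6789)]

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in CEPH_MONITORS:
    status = "ok" if reachable(host, port) else "UNREACHABLE"
    print(f"{host}:{port} -> {status}")
----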

include::../modules/con_changes-to-cephFS-via-NFS.adoc[leveloffset=+1]

include::../modules/proc_deploying-file-systems-service-control-plane.adoc[leveloffset=+1]

include::../modules/proc_decommissioning-rhosp-standalone-ceph-NFS-service.adoc[leveloffset=+1]
@@ -0,0 +1,15 @@
[id="reviewing-the-openstack-control-plane-configuration_{context}"]

:context: reviewing-configuration

= Reviewing the {rhos_prev_long} control plane configuration

Before starting the adoption workflow, pull the configuration from the {rhos_prev_long} services and {OpenStackPreviousInstaller} to your file system to back up the configuration files. You can then use the files later, during the configuration of the adopted services, and as a record to compare against and make sure nothing has been missed or misconfigured.

Make sure you have pulled the os-diff repository and configured it according to your environment:
link:planning.md#Configuration tooling[Configure os-diff]
Does this need to be commented out?

@matbu Should this "planning.md" link be included in the downstream docs?
And should this link be in the downstream docs? https://github.com/openstack-k8s-operators/os-diff/blob/main/config.yaml

CC: @jistr

//kgilliga: Should we use this link in the downstream guide?
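
As an illustration of the backup step described above (not a replacement for os-diff), the following Python sketch archives a set of service configuration directories into a timestamped tarball. The directory list is an assumption; adjust it to the services deployed in your environment and run it wherever the configuration files have been gathered.

[source,python]
----
import tarfile
import time
from pathlib import Path

# Hypothetical configuration directories to snapshot before adoption.
# Adjust this list to match the services you are adopting.
CONFIG_DIRS = ["/etc/nova", "/etc/cinder", "/etc/glance", "/etc/neutron"]

archive = Path(f"controlplane-config-backup-{time.strftime('%Y%m%d-%H%M%S')}.tar.gz")

with tarfile.open(archive, "w:gz") as tar:
    for config_dir in CONFIG_DIRS:
        path = Path(config_dir)
        if path.is_dir():
            # Store each directory under its full path inside the archive.
            tar.add(path, arcname=str(path))
        else:
            print(f"skipping {path}: not present on this host")

print(f"wrote {archive}")
----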

include::../modules/proc_pulling-configuration-from-a-tripleo-deployment.adoc[leveloffset=+1]

include::../modules/proc_retrieving-services-topology-specific-configuration.adoc[leveloffset=+1]
8 changes: 0 additions & 8 deletions docs_user/assemblies/openstack_adoption.adoc
@@ -9,16 +9,8 @@ ifdef::context[:parent-context: {context}]
:toc: left
:toclevels: 3

include::../modules/openstack-pull_openstack_configuration.adoc[leveloffset=+1]
include::../modules/proc_adopting-the-openstack-dashboard.adoc[leveloffset=+1]
include::../modules/openstack-manila_adoption.adoc[leveloffset=+1]
include::../modules/openstack-ironic_adoption.adoc[leveloffset=+1]
include::../modules/proc_adopting-the-orchestration-service.adoc[leveloffset=+1]
include::../modules/openstack-telemetry_adoption.adoc[leveloffset=+1]
include::../modules/openstack-autoscaling_adoption.adoc[leveloffset=+1]
include::../modules/openstack-stop_remaining_services.adoc[leveloffset=+1]
include::../modules/openstack-dataplane_adoption.adoc[leveloffset=+1]
include::../modules/openstack-rolling_back.adoc[leveloffset=+1]

ifdef::parent-context[:context: {parent-context}]
ifndef::parent-context[:!context:]
@@ -0,0 +1,45 @@
[id="con_bare-metal-provisioning-service-configurations_{context}"]

//Check xrefs

= Bare Metal Provisioning service configurations

The {bare_metal_first_ref} is configured by using configuration snippets. For more information about the configuration snippets, see xref:planning-the-new-deployment_planning[Planning the new deployment].

{OpenStackPreviousInstaller} generally took care not to override the defaults of the {bare_metal}; however, as with any system of discrete configuration management attempting to provide a cross-version compatibility layer, some configuration was certainly defaulted in particular ways. For example, PXE Loader file names were often overridden at intermediate layers, so pay particular attention to the settings you choose to apply in your adopted deployment. The operator attempts to apply a reasonable working default configuration, but if you override it with your prior configuration, your experience may not be ideal or your new {bare_metal} may fail to operate. Similarly, additional configuration may be necessary, for example
if your `ironic.conf` has additional hardware types enabled and in use.

Furthermore, the model of reasonable defaults includes commonly used hardware types and driver interfaces. For example, if you previously needed to enable the `redfish-virtual-media` boot interface and the `ramdisk` deploy interface, you no longer need to; they are enabled by default. One aspect to watch for after completing adoption is that, when you add new bare metal nodes, the driver interface selection occurs based on the order of precedence in the configuration if it is not explicitly set on the node creation request or as an established default in `ironic.conf`.

That being said, some configuration parameters are provided either as a convenience to the operator, so that precise values such as network UUIDs do not need to be set on each individual node, or they are centrally configured in `ironic.conf` because the setting controls behavior as a security control.

The following settings, formatted as [section] and parameter name, are critical to carry over from the prior deployment to the new deployment if they were configured there, because they govern much of the underlying behavior and the previous configuration would have relied on the specific values that were set:

* [neutron]cleaning_network
* [neutron]provisioning_network
* [neutron]rescuing_network
* [neutron]inspection_network
* [conductor]automated_clean
* [deploy]erase_devices_priority
* [deploy]erase_devices_metadata_priority
* [conductor]force_power_state_during_sync
// FIXME: The setting above likely should be True by default in deployments, but would have been *false* by default on prior underclouds.

The following parameters *can* be set individually on a node; however, some operators choose to use embedded configuration options to avoid having to set them individually when creating or managing bare metal nodes. We recommend that you check your prior `ironic.conf` file for these parameters, and if they are set, apply them as specific override configuration.

* [conductor]bootloader
* [conductor]rescue_ramdisk
* [conductor]rescue_kernel
* [conductor]deploy_kernel
* [conductor]deploy_ramdisk

Finally, a parameter that may be important based upon your configuration and experience is `kernel_append_params`, formerly `pxe_append_params`, in the `[pxe]` and `[redfish]` configuration sections. This parameter is largely used to apply boot time options like "console" for the deployment ramdisk, and as such is often changed.
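
To see which of these values your existing deployment relies on, you could read the prior `ironic.conf` with Python's standard `configparser`. This is a minimal sketch under the assumption that you have copied the old `ironic.conf` locally; the file path is a placeholder, and the list of options mirrors the settings discussed above.

[source,python]
----
import configparser

# Placeholder path to the ironic.conf copied from the prior deployment.
OLD_IRONIC_CONF = "./ironic.conf"

# (section, option) pairs discussed above: critical settings, per-node
# convenience defaults, and the kernel_append_params instances.
SETTINGS = [
    ("neutron", "cleaning_network"),
    ("neutron", "provisioning_network"),
    ("neutron", "rescuing_network"),
    ("neutron", "inspection_network"),
    ("conductor", "automated_clean"),
    ("deploy", "erase_devices_priority"),
    ("deploy", "erase_devices_metadata_priority"),
    ("conductor", "force_power_state_during_sync"),
    ("conductor", "bootloader"),
    ("conductor", "rescue_ramdisk"),
    ("conductor", "rescue_kernel"),
    ("conductor", "deploy_kernel"),
    ("conductor", "deploy_ramdisk"),
    ("pxe", "kernel_append_params"),
    ("redfish", "kernel_append_params"),
]

# Disable interpolation and strict mode so oslo-style values (which may
# contain '%' characters or repeated keys) do not raise parsing errors.
config = configparser.ConfigParser(interpolation=None, strict=False)
config.read(OLD_IRONIC_CONF)

# Print only the options that were explicitly set, so you know what to carry over.
for section, option in SETTINGS:
    if config.has_option(section, option):
        print(f"[{section}] {option} = {config.get(section, option)}")
----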

// TODO:
// Conductor Groups?!

As a warning, hardware types set via the `ironic.conf` `enabled_hardware_types` parameter and hardware type driver interfaces starting with `staging-` are not available to be migrated into an adopted configuration.

Furthermore, {OpenStackPreviousInstaller}-based deployments made architectural decisions based upon self-management of services. When adopting deployments, you do not necessarily need multiple replicas of secondary services such as the Introspection service. Should the host the container is running upon fail, OpenShift restarts the container on another host. The short-term transitory loss
//kgilliga: This last sentence tails off.
22 changes: 22 additions & 0 deletions docs_user/modules/con_changes-to-cephFS-via-NFS.adoc
@@ -0,0 +1,22 @@
[id="changes-to-cephFS-through-NFS_{context}"]

= Changes to CephFS through NFS

If the {rhos_prev_long} ({OpenStackShort}) {rhos_prev_ver} deployment uses CephFS through NFS as a backend for {rhos_component_storage_file_first_ref}, there is a `ceph-nfs` service on the {OpenStackShort} controller nodes that is deployed and managed by {OpenStackPreviousInstaller}. This service cannot be directly imported into {rhos_long} {rhos_curr_ver}. On {rhos_acro} {rhos_curr_ver}, the {rhos_component_storage_file} only supports using a "clustered" NFS service that is directly managed on the Ceph cluster. Therefore, adoption with this service involves a data path disruption to existing NFS clients. The timing of this disruption can be controlled by the deployer, independently of this adoption procedure.

On {OpenStackShort} {rhos_prev_ver}, Pacemaker controls the high availability of the `ceph-nfs` service. This service is assigned a Virtual IP (VIP) address that is also managed by Pacemaker. The VIP is typically created on an isolated `StorageNFS` network. There are ordering and colocation constraints established between this VIP, `ceph-nfs`, and the Shared File Systems service's share manager service on the
controller nodes. Prior to adopting the {rhos_component_storage_file}, the Pacemaker ordering and colocation constraints must be adjusted to separate the share manager service. This establishes `ceph-nfs` with its VIP as an isolated, standalone NFS service that can be decommissioned at will after completing the {OpenStackShort} adoption.

Red Hat Ceph Storage 7.0 introduced a native `clustered Ceph NFS service`. This service has to be deployed on the Ceph cluster using the Ceph orchestrator prior to adopting the {rhos_component_storage_file}. This NFS service will eventually replace the standalone NFS service from {OpenStackShort} {rhos_prev_ver} in your deployment. When the {rhos_component_storage_file} is adopted into the {rhos_acro} {rhos_curr_ver} environment, it will establish all the existing
exports and client restrictions on the new clustered Ceph NFS service. Clients can continue to read and write data on their existing NFS shares, and are not affected until the old standalone NFS service is decommissioned. This switchover window allows clients to re-mount the same share from the new
clustered Ceph NFS service during a scheduled downtime.

To ensure that existing clients can easily switch over to the new NFS
service, the clustered Ceph NFS service must be assigned an
IP address from the same isolated `StorageNFS` network. This ensures that NFS users are not expected to make any networking changes to their
existing workloads. These users only need to discover and re-mount their shares by using the new export paths. When the adoption procedure is complete, {OpenStackShort} users can query the {rhos_component_storage_file} API to list the export locations on existing shares to identify the `preferred` paths to mount these shares. These `preferred` paths
will correspond to the new clustered Ceph NFS service, in contrast to other
non-preferred export paths that continue to be displayed until the old
isolated, standalone NFS service is decommissioned.
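
As a rough sketch of how users might identify the preferred export paths once adoption is complete, the following example uses python-manilaclient with a keystoneauth1 session. The credentials, endpoint, and requested microversion are assumptions to adjust for your cloud, and the `preferred` field on export locations requires a sufficiently recent API microversion (2.14 or later).

[source,python]
----
from keystoneauth1.identity import v3
from keystoneauth1 import session
from manilaclient import client

# Assumed credentials and endpoint: replace with values for your cloud.
auth = v3.Password(
    auth_url="https://keystone.example.com:5000/v3",
    username="demo",
    password="secret",
    project_name="demo",
    user_domain_name="Default",
    project_domain_name="Default",
)
sess = session.Session(auth=auth)

# Request a microversion that exposes the "preferred" field on export
# locations; adjust the version string to what your deployment supports.
manila = client.Client("2.51", session=sess)

for share in manila.shares.list():
    for location in manila.share_export_locations.list(share):
        # Export locations flagged as preferred point at the new
        # clustered Ceph NFS service after adoption.
        if getattr(location, "preferred", False):
            print(f"{share.name or share.id}: mount from {location.path}")
----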

See xref:creating-a-ceph-nfs-cluster_migrating-databases[Creating a Ceph NFS cluster] for instructions on setting up a clustered NFS service.