Development environment
The Adoption development environment uses install_yamls for CRC VM creation and for creating the VM that hosts the source Wallaby (or OSP 17.1) OpenStack in Standalone configuration.
Environment prep
Get the dataplane adoption repo:
git clone https://github.com/openstack-k8s-operators/data-plane-adoption.git ~/data-plane-adoption
Get install_yamls:
git clone https://github.com/openstack-k8s-operators/install_yamls.git ~/install_yamls
Install tools for operator development:
cd ~/install_yamls/devsetup
make download_tools
CRC deployment
CRC environment with network isolation
cd ~/install_yamls/devsetup
PULL_SECRET=$HOME/pull-secret.txt CPUS=12 MEMORY=40000 DISK=100 make crc

eval $(crc oc-env)
oc login -u kubeadmin -p 12345678 https://api.crc.testing:6443

make crc_attach_default_interface
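If you want a quick sanity check before continuing, verify that the CRC cluster is up and reachable (both commands are standard crc/oc usage):
crc status
oc get nodes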
CRC environment with OpenStack Ironic
Note: This section is specific to deploying Nova with the Ironic backend. Skip it if you want to deploy Nova normally.
Create the BMaaS network (crc-bmaas) and virtual baremetal nodes controlled by a RedFish BMC emulator.
cd ~/install_yamls
make nmstate
make namespace
cd devsetup # back to install_yamls/devsetup
make bmaas BMAAS_NODE_COUNT=2
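The virtual baremetal nodes are created as libvirt VMs on the hypervisor, so they should appear in the VM list (an optional check; the exact VM names depend on the devsetup defaults):
sudo virsh list --all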
A node definition YAML file to use with the openstack baremetal create <file>.yaml command can be generated for the virtual baremetal nodes by running the bmaas_generate_nodes_yaml make target. Store it in a temp file for later.
make bmaas_generate_nodes_yaml | tail -n +2 | tee /tmp/ironic_nodes.yaml
Set variables to deploy EDPM Standalone with an additional network (baremetal) and compute driver ironic.
cat << EOF > /tmp/additional_nets.json
[
  {
    "type": "network",
    "name": "crc-bmaas",
    "standalone_config": {
      "type": "ovs_bridge",
      "name": "baremetal",
      "mtu": 1500,
      "vip": true,
      "ip_subnet": "172.20.1.0/24",
      "allocation_pools": [
        {
          "start": "172.20.1.100",
          "end": "172.20.1.150"
        }
      ],
      "host_routes": [
        {
          "destination": "192.168.130.0/24",
          "nexthop": "172.20.1.1"
        }
      ]
    }
  }
]
EOF
export EDPM_COMPUTE_ADDITIONAL_NETWORKS=$(jq -c . /tmp/additional_nets.json)
export STANDALONE_COMPUTE_DRIVER=ironic
export NTP_SERVER=pool.ntp.org # Only necessary if not on the Red Hat network ...
export EDPM_COMPUTE_CEPH_ENABLED=false # Optional
export EDPM_COMPUTE_CEPH_NOVA=false # Optional
export EDPM_COMPUTE_SRIOV_ENABLED=false # Without this, the standalone deploy fails when the compute driver is ironic.
Note: If EDPM_COMPUTE_CEPH_ENABLED=false is set, TripleO configures Glance with Swift as a backend. If EDPM_COMPUTE_CEPH_NOVA=false is set, TripleO configures Nova/Libvirt with a local storage backend.
Standalone deployment
Use the install_yamls devsetup to create a virtual machine (edpm-compute-0) connected to the isolated networks.
Note: To use OSP 17.1 content to deploy TripleO Standalone, follow the guide for setting up downstream content for make standalone.
To use Wallaby content instead, run the following:
cd ~/install_yamls/devsetup
make standalone
To deploy with TLS everywhere enabled, run instead:
cd ~/install_yamls/devsetup
TLS_ENABLED=true make standalone
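Either way, once the deployment finishes, a light sanity check of the standalone cloud can be run over SSH. This reuses the devsetup SSH key and VM address used elsewhere in this guide; adjust if your environment differs:
ssh -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa root@192.168.122.100 OS_CLOUD=standalone openstack endpoint list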
Install the openstack-k8s-operators (openstack-operator)
cd .. # back to install_yamls
make crc_storage
make input
make openstack
Route networks
Route VLAN20 to have access to the MariaDB cluster:
EDPM_BRIDGE=$(sudo virsh dumpxml edpm-compute-0 | grep -oP "(?<=bridge=').*(?=')")
sudo ip link add link $EDPM_BRIDGE name vlan20 type vlan id 20
sudo ip addr add dev vlan20 172.17.0.222/24
sudo ip link set up dev vlan20
To adopt the Swift service as well, route VLAN23 to have access to the storage backend services:
EDPM_BRIDGE=$(sudo virsh dumpxml edpm-compute-0 | grep -oP "(?<=bridge=').*(?=')")
sudo ip link add link $EDPM_BRIDGE name vlan23 type vlan id 23
sudo ip addr add dev vlan23 172.20.0.222/24
sudo ip link set up dev vlan23
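To confirm the VLAN interfaces are up with the expected addresses (optional):
ip -br addr show vlan20
ip -br addr show vlan23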
Snapshot/revert
When the deployment of the Standalone OpenStack is finished, it's a good time to snapshot the machine, so that multiple Adoption attempts can be made without having to deploy from scratch.
cd ~/install_yamls/devsetup
make standalone_snapshot
And when you wish to revert the Standalone deployment to the snapshotted state:
cd ~/install_yamls/devsetup
make standalone_revert
A similar snapshot could be taken of the CRC virtual machine, but the developer environment reset on the CRC side can be done sufficiently via the install_yamls *_cleanup targets. This is further detailed in the section: Resetting the environment to pre-adoption state
Creating a workload to adopt
To run openstack commands from the host without installing the package and copying the configuration file from the virtual machine, create an alias:
alias openstack="ssh -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa root@192.168.122.100 OS_CLOUD=standalone openstack"
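With the alias in place, openstack commands typed on the host are executed on the standalone VM transparently, for example:
openstack endpoint list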
Ironic Steps
# Enroll baremetal nodes
make bmaas_generate_nodes_yaml | tail -n +2 | tee /tmp/ironic_nodes.yaml
scp -i $HOME/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa /tmp/ironic_nodes.yaml root@192.168.122.100:
ssh -i $HOME/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa root@192.168.122.100

export OS_CLOUD=standalone
openstack baremetal create /root/ironic_nodes.yaml
export IRONIC_PYTHON_AGENT_RAMDISK_ID=$(openstack image show deploy-ramdisk -c id -f value)
export IRONIC_PYTHON_AGENT_KERNEL_ID=$(openstack image show deploy-kernel -c id -f value)
for node in $(openstack baremetal node list -c UUID -f value); do
  openstack baremetal node set $node \
    --driver-info deploy_ramdisk=${IRONIC_PYTHON_AGENT_RAMDISK_ID} \
    --driver-info deploy_kernel=${IRONIC_PYTHON_AGENT_KERNEL_ID} \
    --resource-class baremetal \
    --property capabilities='boot_mode:uefi'
done

# Create a baremetal flavor
openstack flavor create baremetal --ram 1024 --vcpus 1 --disk 15 \
  --property resources:VCPU=0 \
  --property resources:MEMORY_MB=0 \
  --property resources:DISK_GB=0 \
  --property resources:CUSTOM_BAREMETAL=1 \
  --property capabilities:boot_mode="uefi"

# Create image
IMG=Fedora-Cloud-Base-38-1.6.x86_64.qcow2
URL=https://download.fedoraproject.org/pub/fedora/linux/releases/38/Cloud/x86_64/images/$IMG
curl -o /tmp/${IMG} -L $URL
DISK_FORMAT=$(qemu-img info /tmp/${IMG} | grep "file format:" | awk '{print $NF}')
openstack image create --container-format bare --disk-format ${DISK_FORMAT} Fedora-Cloud-Base-38 < /tmp/${IMG}

export BAREMETAL_NODES=$(openstack baremetal node list -c UUID -f value)
# Manage nodes
for node in $BAREMETAL_NODES; do
  openstack baremetal node manage $node
done

# Wait for nodes to reach "manageable" state
watch openstack baremetal node list

# Inspect baremetal nodes
for node in $BAREMETAL_NODES; do
  openstack baremetal introspection start $node
done

# Wait for inspection to complete
watch openstack baremetal introspection list

# Provide nodes
for node in $BAREMETAL_NODES; do
  openstack baremetal node provide $node
done

# Wait for nodes to reach "available" state
watch openstack baremetal node list

# Create an instance on baremetal
openstack server show baremetal-test || {
    openstack server create baremetal-test --flavor baremetal --image Fedora-Cloud-Base-38 --nic net-id=provisioning --wait
}

# Check instance status and network connectivity
openstack server show baremetal-test
ping -c 4 $(openstack server show baremetal-test -f json -c addresses | jq -r .addresses.provisioning[0])
Virtual Machine Steps
Create a test VM instance with a test volume attachment:
cd ~/data-plane-adoption
bash tests/roles/development_environment/files/pre_launch.bash
This also creates a test Cinder volume, a backup of it, and a snapshot of it.
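If you want to confirm those resources exist before moving on, the openstack alias defined earlier can be used (an optional check):
openstack volume list
openstack volume backup list
openstack volume snapshot list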
Ceph Storage Steps
Confirm the image UUID can be seen in Ceph's images pool.
ssh -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa root@192.168.122.100 sudo cephadm shell -- rbd -p images ls -l
Create a Barbican secret
openstack secret store --name testSecret --payload 'TestPayload'
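To verify the secret was stored (optional):
openstack secret list | grep testSecret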
Performing the adoption procedure
To simplify the adoption procedure, copy the deployment passwords that you use in the backend services deployment phase of the data plane adoption.
scp -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa root@192.168.122.100:/root/tripleo-standalone-passwords.yaml ~/
The development environment is now set up. You can go to the Adoption documentation and perform the adoption manually, or run the test suite against your environment.
Resetting the environment to pre-adoption state
The development environment must be rolled back before executing another Adoption run.
Delete the data plane and control plane resources from the CRC VM:
oc delete --ignore-not-found=true --wait=false openstackdataplanedeployment/openstack
oc delete --ignore-not-found=true --wait=false openstackdataplanedeployment/openstack-nova-compute-ffu
oc delete --ignore-not-found=true --wait=false openstackcontrolplane/openstack
oc patch openstackcontrolplane openstack --type=merge --patch '
metadata:
  finalizers: []
' || true

while oc get pod | grep rabbitmq-server-0; do
  sleep 2
done
while oc get pod | grep openstack-galera-0; do
  sleep 2
done

oc delete --wait=false pod ovn-copy-data || true
oc delete secret osp-secret || true
Revert the standalone VM to the snapshotted state:
cd ~/install_yamls/devsetup
make standalone_revert
Clean up and initialize the storage PVs in the CRC VM:
cd ..
for i in {1..3}; do make crc_storage_cleanup crc_storage && break || sleep 5; done
Experimenting with an additional compute node
The following is not on the critical path of preparing the development environment for Adoption, but it shows how to make the environment work with an additional compute node VM.
The remaining steps should be completed on the hypervisor hosting CRC and edpm-compute-0.
Deploy NG Control Plane with Ceph
Export the Ceph configuration from edpm-compute-0 into a secret.
SSH="ssh -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa root@192.168.122.100"
KEY=$($SSH "cat /etc/ceph/ceph.client.openstack.keyring | base64 -w 0")
CONF=$($SSH "cat /etc/ceph/ceph.conf | base64 -w 0")

cat <<EOF > ceph_secret.yaml
apiVersion: v1
data:
  ceph.client.openstack.keyring: $KEY
  ceph.conf: $CONF
kind: Secret
metadata:
  name: ceph-conf-files
  namespace: openstack
type: Opaque
EOF

oc create -f ceph_secret.yaml
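A quick check that the secret landed in the openstack namespace (optional):
oc get secret ceph-conf-files -n openstack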
Deploy the NG control plane with Ceph as backend for Glance and Cinder. As described in the install_yamls README, use the sample config located at https://github.com/openstack-k8s-operators/openstack-operator/blob/main/config/samples/core_v1beta1_openstackcontrolplane_network_isolation_ceph.yaml but make sure to replace the _FSID_ in the sample with the one from the secret created in the previous step.
curl -o /tmp/core_v1beta1_openstackcontrolplane_network_isolation_ceph.yaml https://raw.githubusercontent.com/openstack-k8s-operators/openstack-operator/main/config/samples/core_v1beta1_openstackcontrolplane_network_isolation_ceph.yaml
FSID=$(oc get secret ceph-conf-files -o json | jq -r '.data."ceph.conf"' | base64 -d | grep fsid | sed -e 's/fsid = //') && echo $FSID
sed -i "s/_FSID_/${FSID}/" /tmp/core_v1beta1_openstackcontrolplane_network_isolation_ceph.yaml
oc apply -f /tmp/core_v1beta1_openstackcontrolplane_network_isolation_ceph.yaml
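To watch the control plane come up, the following is a minimal sketch (the exact set of resources and conditions varies by operator version):
oc get openstackcontrolplane -n openstack
oc get pods -n openstack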
An NG control plane which uses the same Ceph backend should now be functional. If you create a test image on the NG system to confirm it works from the configuration above, be sure to read the warning in the next section.
Before beginning adoption testing or development, you may wish to deploy an EDPM node as described in the following section.
Warning about two OpenStacks and one Ceph
Though workloads can be created in the NG deployment to test, be careful not to confuse them with workloads from the Wallaby cluster to be migrated. The following scenario is now possible.
A Glance image exists on the Wallaby OpenStack to be adopted.
[stack@standalone standalone]$ export OS_CLOUD=standalone
[stack@standalone standalone]$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 33a43519-a960-4cd0-a593-eca56ee553aa | cirros | active |
+--------------------------------------+--------+--------+
[stack@standalone standalone]$
If you now create an image on the NG cluster, then a Glance image will exist on the NG OpenStack, which will adopt the workloads of the Wallaby cluster.
[fultonj@hamfast ng]$ export OS_CLOUD=default
[fultonj@hamfast ng]$ export OS_PASSWORD=12345678
[fultonj@hamfast ng]$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 4ebccb29-193b-4d52-9ffd-034d440e073c | cirros | active |
+--------------------------------------+--------+--------+
[fultonj@hamfast ng]$
Both Glance images are stored in the same Ceph pool.
ssh -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa root@192.168.122.100 sudo cephadm shell -- rbd -p images ls -l
Inferring fsid 7133115f-7751-5c2f-88bd-fbff2f140791
Using recent ceph image quay.rdoproject.org/tripleowallabycentos9/daemon@sha256:aa259dd2439dfaa60b27c9ebb4fb310cdf1e8e62aa7467df350baf22c5d992d8
NAME                                       SIZE     PARENT  FMT  PROT  LOCK
33a43519-a960-4cd0-a593-eca56ee553aa       273 B            2
33a43519-a960-4cd0-a593-eca56ee553aa@snap  273 B            2    yes
4ebccb29-193b-4d52-9ffd-034d440e073c       112 MiB          2
4ebccb29-193b-4d52-9ffd-034d440e073c@snap  112 MiB          2    yes
However, as far as each Glance service is concerned, each has one image. Thus, in order to avoid confusion during adoption, the test Glance image on the NG OpenStack should be deleted.
openstack image delete 4ebccb29-193b-4d52-9ffd-034d440e073c
Connecting the NG OpenStack to the existing Ceph cluster is part of the adoption procedure so that the data migration can be minimized, but understand the implications of the above example.
Deploy edpm-compute-1
edpm-compute-0 is not available as a standard EDPM system to be managed by edpm-ansible or dataplane-operator because it hosts the Wallaby deployment which will be adopted; after adoption it will only host the Ceph server.
Use the install_yamls devsetup to create additional virtual machines, and be sure that EDPM_COMPUTE_SUFFIX is set to 1 or greater. Do not set EDPM_COMPUTE_SUFFIX to 0, or you could delete the Wallaby system created in the previous section.
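A minimal sketch of creating the additional VM, assuming the devsetup edpm_compute make target; check your install_yamls version for the exact target and variables:
cd ~/install_yamls/devsetup
EDPM_COMPUTE_SUFFIX=1 make edpm_compute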
When deploying EDPM nodes, add an extraMounts like the following in the OpenStackDataPlaneNodeSet CR nodeTemplate so that they will be configured to use the same Ceph cluster.
edpm-compute:
  nodeTemplate:
    extraMounts:
    - extraVolType: Ceph
      volumes:
      - name: ceph
        secret:
          secretName: ceph-conf-files
      mounts:
      - name: ceph
        mountPath: "/etc/ceph"
        readOnly: true
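After the node set is deployed, you can spot-check that the Ceph configuration files were mounted on the new node. The compute address below is an assumption based on the devsetup numbering; adjust it to your environment:
ssh -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa root@192.168.122.101 ls /etc/ceph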
An NG data plane which uses the same Ceph backend should now be functional. Be careful not to confuse new workloads used to test the NG OpenStack with the Wallaby OpenStack workloads, as described in the previous section.
Begin Adoption Testing or Development
We should now have:
- An NG Glance service based on Antelope running on CRC
- A TripleO-deployed Glance service running on edpm-compute-0
- Both services have the same Ceph backend
- Each service has its own independent database
The environment above is what the Glance Adoption documentation assumes to be available. You may now follow other Data Plane Adoption procedures described in the documentation. The same pattern can be applied to other services.
Contributing to documentation
Rendering documentation locally
Install docs build requirements:
make docs-dependencies
To render the user-facing documentation site locally:
make docs-user
To render the contributor documentation site locally:
make docs-dev
The built HTML files are in the docs_build/adoption-user and docs_build/adoption-dev directories respectively.
There are some additional make targets for convenience. The following targets, in addition to rendering the docs, will also open the resulting HTML in your browser so that you don't have to look for it:
make docs-user-open
# or
make docs-dev-open
The following targets set up an inotify watch on the documentation sources, and when it detects a modification, the HTML is re-rendered. This is so that you can use an "edit source - save source - refresh browser page" loop when working on the docs, without having to run make docs-* repeatedly.
make docs-user-watch
# or
make docs-dev-watch
Preview of downstream documentation
To render a preview of what should serve as the base for downstream docs (e.g. with downstream container image URLs), prepend BUILD=downstream to your make targets. For example:
BUILD=downstream make docs-user
Patterns and tips for contributing to documentation

- Pages concerning individual components/services should make sense in the context of the broader adoption procedure. While adopting a service in isolation is an option for developers, let's write the documentation with the assumption that the adoption procedure is being done in full, going step by step (one doc after another).

- The procedure should be written with production use in mind. This repository could be used as a starting point for product technical documentation. We should not tie the documentation to something that wouldn't translate well from dev envs to production.

  - This includes not assuming that the source environment is Standalone and the destination is CRC. We can provide examples for Standalone/CRC, but it should be possible to use the procedure with fuller environments in a way that is obvious from the docs.

- If possible, try to make code snippets copy-pastable. Use shell variables if the snippets should be parametrized. Use oc rather than kubectl in snippets.

- Focus on the "happy path" in the docs as much as possible; troubleshooting info can go into the Troubleshooting page, or alternatively a troubleshooting section at the end of the document, visibly separated from the main procedure.

- The full procedure will inevitably be quite long, so let's try to be concise in writing to keep the docs consumable (but not to the point of making things difficult to understand or omitting important things).

- A bash alias can be created for long commands; however, when implementing them in the test roles you should transform them to avoid "command not found" errors (see the note after this list).

  From:

  alias openstack="oc exec -t openstackclient -- openstack"

  openstack endpoint list | grep network

  To:

  alias openstack="oc exec -t openstackclient -- openstack"

  ${BASH_ALIASES[openstack]} endpoint list | grep network
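The transformation above is needed because bash only expands aliases in interactive shells; in a non-interactive shell (such as the one Ansible's shell module spawns) the bare openstack invocation fails with "command not found", while ${BASH_ALIASES[openstack]} expands the alias body explicitly. A small illustration of the mechanism (plain bash semantics, not specific to this repository):

alias openstack="oc exec -t openstackclient -- openstack"
# BASH_ALIASES is an associative array maintained by bash; this prints the alias body:
echo "${BASH_ALIASES[openstack]}"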
Tests
Test suite information
The adoption docs repository also includes a test suite for Adoption. There are targets in the Makefile which can be used to execute the test suite:
- test-minimal - a minimal test scenario; the eventual set of services in this scenario should be the "core" services needed to launch a VM. This scenario assumes a local storage backend for services like Glance and Cinder.

- test-with-ceph - like minimal, but with a Ceph storage backend for Glance and Cinder.
Configuring the test suite
- Create tests/vars.yaml and tests/secrets.yaml by copying the included samples (tests/vars.sample.yaml, tests/secrets.sample.yaml).

- Walk through the tests/vars.yaml and tests/secrets.yaml files and see if you need to edit any values. If you are using the documented development environment, the majority of the defaults should work out of the box. The comments in the YAML files will guide you regarding the expected values. You may want to double check that these variables suit your environment:

  - install_yamls_path
  - tripleo_passwords
  - controller*_ssh
  - edpm_privatekey_path
  - timesync_ntp_servers
Running the tests
The interface between the execution infrastructure and the test suite is a set of Ansible inventory and variables files. Inventory and variable samples are provided. To run the tests, follow this procedure:
- Install dependencies and create a venv:

  sudo dnf -y install python-devel
  python3 -m venv venv
  source venv/bin/activate
  pip install openstackclient osc_placement jmespath
  ansible-galaxy collection install community.general

- Run make test-with-ceph (the documented development environment does include Ceph). If you are using a Ceph-less environment, you should run make test-minimal instead.
Making patches to the test suite
Please be aware of the following when changing the test suite:
- The test suite should follow the docs as much as possible.

  The purpose of the test suite is to verify what the user would run if they were following the docs. We don't want to loosely rewrite the docs into Ansible code following Ansible best practices. We want to test the exact same bash commands/snippets that are written in the docs. This often means that we should be using the shell module and doing a verbatim copy/paste from the docs, instead of using the best Ansible module for the task at hand.