Merge pull request #440 from jpodivin/pre-commit
Adding basic pre-commit to the repo
jistr authored May 15, 2024
2 parents aeb5441 + 1a96168 commit 59cb6ed
Showing 68 changed files with 155 additions and 150 deletions.
19 changes: 0 additions & 19 deletions .github/workflows/ansible-lint.yaml

This file was deleted.

15 changes: 15 additions & 0 deletions .github/workflows/lint.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,15 @@
name: Linting
on:
push:
branches:
- main
pull_request:
branches:
- main
jobs:
pre-commit:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v3
- uses: pre-commit/[email protected]
36 changes: 36 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,36 @@
---
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
- id: end-of-file-fixer
- id: trailing-whitespace
- id: mixed-line-ending
- id: fix-byte-order-marker
- id: check-executables-have-shebangs
exclude: ".*.bash" # TODO Enable when scripts are consistent
- id: check-merge-conflict
- id: check-symlinks
- id: debug-statements
- id: check-yaml
files: .*\.(yaml|yml)$
args: [--allow-multiple-documents]
- repo: https://github.com/ansible/ansible-lint
rev: v6.22.1
hooks:
- id: ansible-lint
entry: env ANSIBLE_ROLES_PATH=./tests/roles:$ANSIBLE_ROLES_PATH ansible-lint
- repo: https://github.com/openstack-dev/bashate.git
rev: 2.1.1
hooks:
- id: bashate
entry: bashate --error . --ignore=E006,E040
verbose: false
exclude: ".*.bash" # TODO Enable when scripts are consistent
# Run bashate check for all bash scripts
# Ignores the following rules:
# E006: Line longer than 79 columns (as many scripts use jinja
# templating, this is very difficult)
# E040: Syntax error determined using `bash -n` (as many scripts
# use jinja templating, this will often fail and the syntax
# error will be discovered in execution anyway)
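The two bashate exemptions above are driven by Jinja templating. As a quick illustration (the templated script below is hypothetical, not taken from this repository), `bash -n`, the check behind E040, rejects a Jinja filter expression:

```shell
# Hypothetical templated script: the Jinja filter call is not valid bash
# syntax, so `bash -n` (the check behind bashate's E040) reports an error.
cat > /tmp/templated_example.sh <<'EOF'
#!/bin/bash
nodes={{ groups['all'] | join(' ') }}
echo "$nodes"
EOF

if ! bash -n /tmp/templated_example.sh 2>/dev/null; then
  echo "bash -n rejects the templated script"
fi
```

Once the template is rendered and the Jinja expressions are replaced by concrete values, the same script passes `bash -n`, which is why such errors surface at rendering or execution time rather than in lint.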
2 changes: 1 addition & 1 deletion Makefile
@@ -21,7 +21,7 @@ TEST_ARGS ?=
help: ## Display this help.
@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m<target>\033[0m\n"} /^[a-zA-Z_0-9-]+:.*?##/ { printf " \033[36m%-15s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST)
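For illustration, the parsing idea behind that awk one-liner can be exercised standalone (the sample Makefile content and the simplified regex below are assumptions for demonstration, not the project's exact rule):

```shell
# Build a tiny sample Makefile with a `##@` section header and annotated targets.
cat > /tmp/sample.mk <<'EOF'
##@ TESTS
test-minimal: ## Launch minimal test suite
test-with-ceph: ## Launch tests including Ceph
EOF

# Split each `target: ## description` line on the ":...##" separator, and
# print section headers (lines starting with `##@`) without the marker.
awk 'BEGIN {FS = ":.*##"}
/^[a-zA-Z_0-9-]+:.*##/ { printf "  %-15s %s\n", $1, $2 }
/^##@/ { print substr($0, 5) }' /tmp/sample.mk
```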

##@ TESTS

test-minimal: TEST_OUTFILE := tests/logs/test_minimal_out_$(shell date +%FT%T%Z).log
test-minimal: ## Launch minimal test suite
9 changes: 8 additions & 1 deletion docs_dev/assemblies/development_environment.adoc
@@ -7,6 +7,13 @@ Wallaby (or OSP 17.1) OpenStack in Standalone configuration.

== Environment prep

Install https://pre-commit.com/[pre-commit hooks] before contributing:
[,bash]
----
pip install pre-commit
pre-commit install
----

Get dataplane adoption repo:
[,bash]
----
@@ -208,7 +215,7 @@ https://openstack-k8s-operators.github.io/data-plane-adoption/dev/#_reset_the_en

=== Creating a workload to adopt

To run `openstack` commands from the host without
installing the package and copying the configuration file from the virtual machine, create an alias:

[,bash]
4 changes: 2 additions & 2 deletions docs_user/adoption-attributes.adoc
@@ -81,8 +81,8 @@ ifeval::["{build}" == "downstream"]
:OpenShift: Red Hat OpenShift Container Platform
:OpenShiftShort: RHOCP
:OpenStackPreviousInstaller: director
:Ceph: Red Hat Ceph Storage
:CephCluster: Red Hat Ceph Storage
:CephRelease: 7

//Components and services
2 changes: 1 addition & 1 deletion docs_user/assemblies/assembly_adopting-the-data-plane.adoc
@@ -17,4 +17,4 @@ include::../modules/proc_stopping-infrastructure-management-and-compute-services

include::../modules/proc_adopting-compute-services-to-the-data-plane.adoc[leveloffset=+1]

include::../modules/proc_performing-a-fast-forward-upgrade-on-compute-services.adoc[leveloffset=+1]
6 changes: 3 additions & 3 deletions docs_user/assemblies/assembly_adopting-the-image-service.adoc
@@ -11,14 +11,14 @@ configuration parameters provided by the source environment.
When the procedure is over, the expectation is to see the `GlanceAPI` service
up and running: the {identity_service} endpoints are updated and the same backend as the source Cloud is available. When these conditions are met, the adoption is considered complete.

This guide also assumes that:

* A {OpenStackPreviousInstaller} environment (the source Cloud) is running on one side.
* A `SNO` / `CodeReadyContainers` is running on the other side.
* (optional) An internal/external `Ceph` cluster is reachable by both `crc` and {OpenStackPreviousInstaller}.

ifeval::["{build}" != "downstream"]
//This link goes to a 404. Do we need this text downstream?
As already done for https://github.com/openstack-k8s-operators/data-plane-adoption/blob/main/keystone_adoption.md[Keystone], the Glance Adoption follows the same pattern.
endif::[]

@@ -30,4 +30,4 @@ include::../modules/proc_adopting-image-service-with-nfs-ganesha-backend.adoc[le

include::../modules/proc_adopting-image-service-with-ceph-backend.adoc[leveloffset=+1]

include::../modules/proc_verifying-the-image-service-adoption.adoc[leveloffset=+1]
Original file line number Diff line number Diff line change
@@ -23,7 +23,7 @@ allocations to be used for the new control plane services:
[IMPORTANT]
Make sure you have the information listed above before proceeding with the next steps.

[NOTE]
The exact list and configuration of isolated networks in the examples
listed below should reflect the actual adopted environment. The number of
isolated networks may differ from the example below. IPAM scheme may differ.
@@ -35,4 +35,4 @@ include::../modules/proc_configuring-openshift-worker-nodes.adoc[leveloffset=+1]

include::../modules/proc_configuring-networking-for-control-plane-services.adoc[leveloffset=+1]

include::../modules/proc_configuring-data-plane-nodes.adoc[leveloffset=+1]
Original file line number Diff line number Diff line change
@@ -44,4 +44,4 @@ include::../modules/proc_retrieving-network-information-from-your-existing-deplo

include::../assemblies/assembly_planning-your-ipam-configuration.adoc[leveloffset=+1]

include::../assemblies/assembly_configuring-isolated-networks.adoc[leveloffset=+1]
Original file line number Diff line number Diff line change
@@ -2,7 +2,7 @@

:context: migrating-ceph-monitoring

= Migrating the monitoring stack component to new nodes within an existing {Ceph} cluster

In the context of data plane adoption, where the {rhos_prev_long} ({OpenStackShort}) services are
redeployed in {OpenShift}, a {OpenStackPreviousInstaller}-deployed {CephCluster} cluster will undergo a migration in a process we are calling “externalizing” the {CephCluster} cluster.
@@ -32,5 +32,3 @@ We assume that:
include::../modules/proc_completing-prerequisites-for-migrating-ceph-monitoring-stack.adoc[leveloffset=+1]

include::../assemblies/assembly_migrating-monitoring-stack-to-target-nodes.adoc[leveloffset=+1]
3 changes: 1 addition & 2 deletions docs_user/assemblies/assembly_migrating-ceph-rbd.adoc
@@ -9,9 +9,8 @@ For hyperconverged infrastructure (HCI) or dedicated Storage nodes that are runn
To migrate Red Hat Ceph Storage Rados Block Device (RBD), your environment must meet the following requirements:

* {Ceph} is running version 6 or later and is managed by cephadm/orchestrator.
* NFS (ganesha) is migrated from a {OpenStackPreviousInstaller}-based deployment to cephadm. For more information, see xref:creating-a-ceph-nfs-cluster_migrating-databases[Creating a NFS Ganesha cluster].
* Both the {Ceph} public and cluster networks are propagated, with {OpenStackPreviousInstaller}, to the target nodes.
* Ceph Monitors need to keep their IPs to avoid cold migration.

include::../modules/proc_migrating-mon-and-mgr-from-controller-nodes.adoc[leveloffset=+1]

3 changes: 1 addition & 2 deletions docs_user/assemblies/assembly_migrating-ceph-rgw.adoc
@@ -4,7 +4,7 @@

= Migrating {Ceph} RGW to external RHEL nodes

For hyperconverged infrastructure (HCI) or dedicated Storage nodes that are running {Ceph} version 6 or later, you must migrate the RGW daemons that are included in the {rhos_prev_long} Controller nodes into the existing external Red Hat Enterprise Linux (RHEL) nodes. The existing external RHEL nodes typically include the Compute nodes for an HCI environment or {Ceph} nodes.

To migrate Ceph Object Gateway (RGW), your environment must meet the following requirements:

@@ -20,4 +20,3 @@ include::../modules/proc_migrating-the-rgw-backends.adoc[leveloffset=+1]
include::../modules/proc_deploying-a-ceph-ingress-daemon.adoc[leveloffset=+1]

include::../modules/proc_updating-the-object-storage-endpoints.adoc[leveloffset=+1]

Original file line number Diff line number Diff line change
@@ -31,4 +31,4 @@ include::../modules/proc_migrating-existing-daemons-to-target-nodes.adoc[levelof

ifeval::["{build}" != "downstream"]
include::../modules/proc_relocating-one-instance-of-a-monitoring-stack-to-migrate-daemons-to-target-nodes.adoc[leveloffset=+1]
endif::[]
Original file line number Diff line number Diff line change
@@ -12,4 +12,4 @@ Migration of the data happens replica by replica. Assuming you start with 3 repl

include::../modules/proc_migrating-object-storage-data-to-rhoso-nodes.adoc[leveloffset=+1]

include::../modules/con_troubleshooting-object-storage-migration.adoc[leveloffset=+1]
Original file line number Diff line number Diff line change
@@ -67,4 +67,3 @@ Regardless of the IPAM scenario, the VLAN tags used in the existing deployment w
include::../modules/proc_using-new-subnet-ranges.adoc[leveloffset=+1]

include::../modules/proc_reusing-existing-subnet-ranges.adoc[leveloffset=+1]

Original file line number Diff line number Diff line change
@@ -10,4 +10,3 @@ Make sure you installed and configured the os-diff tool. For more information, s
xref:comparing-configuration-files-between-deployments_storage-requirements[Comparing configuration files between deployments].

include::../modules/proc_pulling-configuration-from-a-tripleo-deployment.adoc[leveloffset=+1]

2 changes: 1 addition & 1 deletion docs_user/modules/con_about-machine-configs.adoc
@@ -1,6 +1,6 @@
[id="about-machine-configs_{context}"]

= About machine configs

Some services require you to have services or kernel modules running on the hosts where they run, for example `iscsid` or `multipathd` daemons, or the
`nvme-fabrics` kernel module.
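To make the mechanism concrete, here is a sketch of a `MachineConfig` that loads the `nvme-fabrics` module at boot via `/etc/modules-load.d`; the resource name, role label, and Ignition version below are illustrative assumptions, not values taken from this repository:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    # Targets the worker pool; adjust the role to match your node layout.
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-load-nvme-fabrics
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        # systemd-modules-load reads this file at boot and loads nvme-fabrics.
        - path: /etc/modules-load.d/nvme-fabrics.conf
          overwrite: true
          mode: 420
          contents:
            source: data:,nvme-fabrics
```

The `data:,nvme-fabrics` URL inlines the one-line file content; `mode: 420` is the decimal form of `0644`.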
1 change: 0 additions & 1 deletion docs_user/modules/con_about-node-selector.adoc
@@ -91,4 +91,3 @@ the `nodeSelector` in `cinderVolumes`, so you need to specify it on each of the
backends.

It is possible to leverage labels added by the Node Feature Discovery (NFD) Operator to place {OpenStackShort} services. For more information, see link:https://docs.openshift.com/container-platform/4.13/hardware_enablement/psap-node-feature-discovery-operator.html[Node Feature Discovery Operator] in _OpenShift Container Platform 4.15 Documentation_.

Original file line number Diff line number Diff line change
@@ -2,7 +2,7 @@

= Bare Metal Provisioning service configurations

The {bare_metal_first_ref} is configured by using configuration snippets. For more information about the configuration snippets, see xref:service-configurations_planning[Service configurations].

{OpenStackPreviousInstaller} generally took care not to override the defaults of the {bare_metal}; however, as with any system of discrete configuration management attempting to provide a cross-version compatibility layer, some configuration was certainly defaulted in particular ways. For example, PXE Loader file names were often overridden at intermediate layers, so pay particular attention to the settings you choose to apply in your adopted deployment. The operator attempts to apply reasonable working default configuration, but if you override it with prior configuration, your experience may not be ideal or your new {bare_metal} will fail to operate. Similarly, additional configuration may be necessary, for example
if your `ironic.conf` has additional hardware types enabled and in use.
@@ -39,5 +39,5 @@ Finally, a parameter which may be important based upon your configuration and ex

As a warning, hardware types set via the `ironic.conf` `enabled_hardware_types` parameter and hardware type driver interfaces starting with `staging-` are not available to be migrated into an adopted configuration.

Furthermore, {OpenStackPreviousInstaller}-based deployments made architectural decisions based upon self-management of services. When adopting deployments, you don't necessarily need multiple replicas of secondary services such as the Introspection service. Should the host the container is running upon fail, {OpenShift} will restart the container on another host. The short-term transitory loss
//kgilliga: This last sentence trails off.
Original file line number Diff line number Diff line change
@@ -9,7 +9,7 @@ helper tool that can create a draft of the files from a `cinder.conf` file.
This tool is not meant to be an automation tool. It is mostly there to help you get the
gist of the conversion and to point out potential pitfalls and reminders.

[IMPORTANT]
The tool requires the `PyYAML` Python package to be installed (`pip
install PyYAML`).

@@ -91,4 +91,3 @@ configuration because it has sensitive information (credentials). The
customServiceConfigSecrets:
- openstackcinder-volumes-hpe_fc
----

Original file line number Diff line number Diff line change
@@ -14,4 +14,3 @@ nodes, is not currently being documented in this process.
* Support for {block_storage} backends that require kernel modules not included in RHEL
has not been tested in Operator deployed {rhos_prev_long}.
* Adoption of DCN/Edge deployment is not currently described in this guide.

Original file line number Diff line number Diff line change
@@ -22,4 +22,3 @@ Once you know all the transport protocols that you are using, you can make
sure that you are taking them into consideration when placing the Block Storage services (as mentioned above in the Node Roles section) and the right storage transport related binaries are running on the {OpenShift} nodes.

Detailed information about the specifics for each storage transport protocol can be found in the xref:openshift-preparation-for-block-storage-adoption_adopting-block-storage[{OpenShift} preparation for {block_storage} adoption].

14 changes: 7 additions & 7 deletions docs_user/modules/con_ceph-daemon-cardinality.adoc
@@ -2,18 +2,18 @@

= {Ceph} daemon cardinality

{Ceph} 6 and later applies strict constraints in the way daemons can be colocated within the same node.
ifeval::["{build}" != "upstream"]
For more information, see link:https://access.redhat.com/articles/1548993[Red Hat Ceph Storage: Supported configurations].
endif::[]
The resulting topology depends on the available hardware, as well as the number of {Ceph} services present in the Controller nodes which are going to be retired.
ifeval::["{build}" != "upstream"]
For more information about the procedure that is required to migrate the RGW component and keep an HA model using the Ceph ingress daemon, see link:{defaultCephURL}/object_gateway_guide/index#high-availability-for-the-ceph-object-gateway[High availability for the Ceph Object Gateway] in _Object Gateway Guide_.
endif::[]
ifeval::["{build}" != "downstream"]
The following document describes the procedure required to migrate the RGW component (and keep an HA model using the https://docs.ceph.com/en/latest/cephadm/services/rgw/#high-availability-service-for-rgw[Ceph Ingress daemon] in a common {OpenStackPreviousInstaller} scenario where Controller nodes represent the
https://github.com/openstack/tripleo-ansible/blob/master/tripleo_ansible/roles/tripleo_cephadm/tasks/rgw.yaml#L26-L30[spec placement] where the service is deployed.
endif::[]
As a general rule, the number of services that can be migrated depends on the number of available nodes in the cluster. The following diagrams cover the distribution of the {Ceph} daemons on the {Ceph} nodes where at least three nodes are required in a scenario that sees only RGW and RBD, without the {dashboard_first_ref}:

----
@@ -45,4 +45,4 @@ With the {dashboard} and the {rhos_component_storage_file}, 5 nodes minimum are
| osd | mon/mgr/crash | mds/ganesha/ingress |
| osd | rgw/ingress | mds/ganesha/ingress |
| osd | mds/ganesha/ingress | dashboard/grafana |
----
2 changes: 1 addition & 1 deletion docs_user/modules/con_changes-to-cephFS-via-NFS.adoc
@@ -19,4 +19,4 @@ will correspond to the new clustered Ceph NFS service in contrast to other
non-preferred export paths that continue to be displayed until the old
isolated, standalone NFS service is decommissioned.

See xref:creating-a-ceph-nfs-cluster_migrating-databases[Creating a NFS Ganesha cluster] for instructions on setting up a clustered NFS service.
Original file line number Diff line number Diff line change
@@ -133,4 +133,4 @@ And test your connection:

----
ssh -F ssh.config standalone
----
1 change: 0 additions & 1 deletion docs_user/modules/con_identity-service-authentication.adoc
Original file line number Diff line number Diff line change
@@ -5,4 +5,3 @@
When you adopt a {OpenStackPreviousInstaller} {rhos_prev_long} ({OpenStackShort}) deployment, users authenticate to the Identity service (keystone) by using Secure RBAC (SRBAC). There is no change to how you perform operations if SRBAC is enabled. If SRBAC is not enabled, then adopting a {OpenStackPreviousInstaller} {OpenStackShort} deployment changes how you perform operations, such as adding roles to users. If you have custom policies enabled, contact support before adopting a {OpenStackPreviousInstaller} {OpenStackShort} deployment.

// For more information on SRBAC see [link].

Original file line number Diff line number Diff line change
@@ -7,4 +7,4 @@ The Key Manager service (barbican) does not yet support all of the crypto plug-i
//**TODO: Right now Barbican only supports the simple crypto plugin.

//*TODO: Talk about Ceph Storage and Swift Storage nodes, HCI deployments,
//etc.*
2 changes: 1 addition & 1 deletion docs_user/modules/con_node-roles.adoc
@@ -48,4 +48,4 @@ The {OpenStackShort} Operators allow a great deal of flexibility on where to run
{OpenStackShort} services, as you can use node labels to define which {OpenShiftShort} nodes
are eligible to run the different {OpenStackShort} services. Refer to the xref:about-node-selector_{context}[About node
selector] to learn more about using labels to define
placement of the {OpenStackShort} services.