diff --git a/contributing/common-pitfalls/index.html b/contributing/common-pitfalls/index.html
index 73b4fb29c..377ae1f3c 100644
--- a/contributing/common-pitfalls/index.html
+++ b/contributing/common-pitfalls/index.html
@@ -665,55 +665,68 @@
-This document lists common pitfalls that have been observed in the process of creating
-and modifying VAs and DTs.
+This document lists common pitfalls that have been observed in the process of
+creating and modifying VAs and DTs.
-In general, it is a best practice to keep all kustomizations (patches, replacements, etc)
-for a particular resource in one kustomization.yaml file. Sometimes, however, it is
-necessary to only perform a subset of OpenStackControlPlane kustomizations at a certain stage
-of the deployment process. For instance, you might not want to kustomize an OpenStackControlPlane
-CR with certain data during its initial creation stage because that data is not yet available
-for use. Thus it would make sense to have a later stage and kustomization.yaml file to
-add those kustomzations once the requisite data is available (perhaps during the data plane
-deployment stage).
-What is crucial to keep in mind is that any kustomizations to a resource in an earlier stage
-will be lost/overwritten in later stages where that same resource is modified if those stages
-do not reference the same kustomization.yaml that the earlier stage utilized. Thus it is
-best to have a base kustomization.yaml for a given resource for all kustomizations common to
-all stages -- and all those stages should thus reference that kustomization.yaml. Then, if
-later stages need specific changes for that resource, a separate kustomization.yaml can be also
-used to apply those additional kustomizations beyond the base ones. This approach is also
-preferred to creating two somewhat-or-mostly duplicate kustomization.yamls, one for the earlier
-stage and one for a later stage. Keeping things DRY by using a common base will make future
-potential changes to the kustomization.yamls less prone to error, as changes to the common file
-will automatically be picked up by all deployment stages.
-As an illustrative example of the best practice mentioned above, consider the following directory
-structure:
+In general, it is a best practice to keep all kustomizations (patches,
+replacements, etc) for a particular resource in one kustomization.yaml
+file.
+In some cases it is necessary to only perform a subset of
+OpenStackControlPlane kustomizations at a certain stage of the deployment
+process. For instance, you might not want to kustomize an
+OpenStackControlPlane CR with certain data during its initial creation stage
+because that data is not yet available for use. In the case of a multi-stage
+deployment, it would make sense to have a separate kustomization.yaml file to
+add those kustomizations once the requisite data is available (perhaps during
+the data plane deployment stage).
+What is crucial to keep in mind is that any kustomizations to a resource in
+an earlier stage will be lost/overwritten in later stages where that same
+resource is modified if those stages do not reference the same
+kustomization.yaml that the earlier stage utilized.
+It is best to have a base kustomization.yaml for a given resource for all
+kustomizations common to all stages -- and all those stages should reference
+that kustomization.yaml. If later stages need specific changes for that
+resource, a separate kustomization.yaml can be used to apply those additional
+kustomizations beyond the base ones.
+The use of common base files is preferred to creating two nearly-identical
+kustomization.yaml files: one for the earlier stage and one for a later
+stage. Keeping things DRY by using a common base will make future potential
+changes to the kustomization.yaml files less prone to error, as changes to
+the common file will automatically be picked up by all deployment stages.
+As an illustrative example of the best practice mentioned above, consider the
+following directory structure:
 some_dt_or_va/control_plane/kustomization.yaml
 some_dt_or_va/data_plane/kustomization.yaml
-If the data_plane/kustomization.yaml needs to modify the OpenStackControlPlane, then it should
-reference ../control_plane/kustomization.yaml as a Component and then add additional replacements
-and/or patches as needed. If it were to instead reference this repo's lib/control-plane
-directory as its base OpenStackControlPlane Component, then the ../control_plane/kustomization.yaml
-kustomizations would be lost, since the OpenStackControlPlane CR would be generated and applied
-without them.
-It also follows in this scenario that, as mentioned above, the OpenStackControlPlane kustomizations for
-its initial creation stage should be located in one and only one kustomization.yaml. Thus you would
-want to avoid something like this...
+If the data_plane/kustomization.yaml needs to modify the
+OpenStackControlPlane, then it should reference
+../control_plane/kustomization.yaml as a Component and then add additional
+replacements and/or patches as needed.
+If it were to instead reference this repository's
+lib/control-plane directory as its base
+OpenStackControlPlane Component, then the
+../control_plane/kustomization.yaml kustomizations would be lost, since the
+OpenStackControlPlane CR would be generated and applied without them.
+The kustomizations for an OpenStackControlPlane resource should be within a
+single kustomization.yaml that contains the kustomizations for the initial
+creation stage. You want to avoid the use of multiple files, such as creating
+an additional sub-directory within the same base directory containing the
+configuration. The following would be an example to avoid:
 some_dt_or_va/control_plane/kustomization.yaml
 some_dt_or_va/control_plane/some_subdir/kustomization.yaml
 some_dt_or_va/data_plane/kustomization.yaml
-...if some_dt_or_va/control_plane/some_subdir/kustomization.yaml has further kustomizations to the
-OpenStackControlPlane beyond some_dt_or_va/control_plane/kustomization.yaml. (It would be fine, for
-instance, if that subdirectory was modifying some other resource like NodeNetworkConfigurationPolicy).
-The reason for this is again that, if later stages do not want to accidentally overwrite earlier
-OpenStackControlPlane kustomizations, those later stages will need to reference both
-../control_plane/kustomization.yaml and ../control_plane/some_subdir/kustomization.yaml in the case
-that those stages are modifying the OpenStackControlPlane. It would be better for the two directories
-to be collapsed into one, such that a single kustomization.yaml can be referenced as a Component to
-include all the previous stage's kustomizations and not inadvertently overwrite them.
+In some cases an additional nested directory may be valid, for instance when
+the subdirectory modifies some other resource like
+NodeNetworkConfigurationPolicy.
+If later stages are not to accidentally overwrite earlier
+OpenStackControlPlane kustomizations, those later stages need to reference
+both ../control_plane/kustomization.yaml and
+../control_plane/some_subdir/kustomization.yaml whenever those stages
+modify the OpenStackControlPlane.
+It would be better for the two directories to be collapsed into one, such that
+a single kustomization.yaml can be referenced as a Component to include all
+the previous stage's kustomizations and not inadvertently overwrite them.
The Architectures repository may be used to create validated architectures (VAs), represented as custom resources (CRs) for openstack-k8s-operators. It may also be used to create deployed topologies (DTs), which should only be used for testing.
"},{"location":"dt/","title":"Deployed Topologies","text":"All validated architectures (VAs) are deployed topologies (DTs), but not all DTs are VAs.
DTs represent CI optimizations. We design them to test lots of things together so we can have as few of them as possible. Before proposing a new DT to test something, consider if an update to an existing DT will achieve the same result.
"},{"location":"contributing/cherry-picking/","title":"Getting your PR into a stable branch","text":"After your PR merges into the main branch you should open a PR to cherry pick it into a stable branch if appropriate. For example, you may want your patch to be in the 18.0.0-proposed branch.
If you anticipate that your PR should be cherry picked then please tag it accordingly. For example we have a needs-18.0.0-proposed-cherry-pick tag. If we think a patch should be in a stable branch, then we will apply that tag to your PR to remind you to follow up to send in a cherry pick.
Please do not ask core maintainers to cherry pick your patch, though we will be happy to review it and help merge it once the cherry pick PR has been submitted using one of the methods below.
"},{"location":"contributing/cherry-picking/#method-1","title":"Method 1","text":"Add a comment like the following to the PR you wish to cherry pick.
/cherrypick 18.0.0-proposed\n
The openshift-cherrypick-robot will then attempt to create a new PR of the original PR (after it merges) with a cherry-pick of the same patch into the desired branch. There are examples of this in previous PRs in this repository. If there is a merge conflict, then method 1 will not work and you will need to use method 2.
"},{"location":"contributing/cherry-picking/#method-2","title":"Method 2","text":"If you can send a PR to this repository then you can create a cherry pick using the git
command line tools without requiring any additional privileges. For example, the following produces a cherry-pick within a personal fork of the architecture repository.
git remote add upstream git@github.com:openstack-k8s-operators/architecture.git \ngit fetch upstream\ngit checkout -b 18.0.0-proposed upstream/18.0.0-proposed\ngit push origin 18.0.0-proposed\ngit log origin/main\ngit cherry-pick <commit hash>\ngit push origin 18.0.0-proposed\n
You should then be able to use the github web interface to create the PR. Please add (cherry picked from <commit hash>)
to the bottom of your commit message."},{"location":"contributing/common-pitfalls/","title":"Common Design Pitfalls","text":"This document lists common pitfalls that have been observed in the process of creating and modifying VAs and DTs.
"},{"location":"contributing/common-pitfalls/#accidental-openstackcontrolplane-overwrites","title":"Accidental OpenStackControlPlane Overwrites","text":"In general, it is a best practice to keep all kustomizations (patches, replacements, etc) for a particular resource in one kustomization.yaml file. In some cases, however, it is necessary to perform only a subset of OpenStackControlPlane kustomizations at a certain stage of the deployment process. For instance, you might not want to kustomize an OpenStackControlPlane CR with certain data during its initial creation stage because that data is not yet available for use. In a multi-stage deployment, it would make sense to have a separate kustomization.yaml file to add those kustomizations once the requisite data is available (perhaps during the data plane deployment stage).
What is crucial to keep in mind is that any kustomizations to a resource in an earlier stage will be lost/overwritten in later stages where that same resource is modified if those stages do not reference the same kustomization.yaml that the earlier stage utilized. It is therefore best to have a base kustomization.yaml for a given resource holding all kustomizations common to all stages -- and all those stages should reference that kustomization.yaml. If later stages need specific changes for that resource, a separate kustomization.yaml can be used to apply those additional kustomizations beyond the base ones. This approach is preferred to creating two nearly-identical kustomization.yaml files, one for the earlier stage and one for a later stage. Keeping things DRY by using a common base will make future changes to the kustomization.yaml files less prone to error, as changes to the common file will automatically be picked up by all deployment stages.
As an illustrative example of the best practice mentioned above, consider the following directory structure:
some_dt_or_va/control_plane/kustomization.yaml\nsome_dt_or_va/data_plane/kustomization.yaml\n
If the data_plane/kustomization.yaml needs to modify the OpenStackControlPlane, then it should reference ../control_plane/kustomization.yaml as a Component and then add additional replacements and/or patches as needed. If it were to instead reference this repository's lib/control-plane directory as its base OpenStackControlPlane Component, then the ../control_plane/kustomization.yaml kustomizations would be lost, since the OpenStackControlPlane CR would be generated and applied without them.
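As a sketch of the recommended reference (the patch target and annotation below are illustrative placeholders, not taken from this repository), the data plane stage's kustomization.yaml might look like:

```yaml
# some_dt_or_va/data_plane/kustomization.yaml (illustrative sketch)
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
components:
  # Pull in every kustomization the control plane stage already applied,
  # so that none of them are lost when this stage regenerates the CR.
  - ../control_plane
patches:
  # Hypothetical extra patch layered on top of the base kustomizations.
  - target:
      kind: OpenStackControlPlane
    patch: |-
      - op: add
        path: /metadata/annotations/example.org~1data-plane-stage
        value: "true"
```

Because the components list points at ../control_plane rather than at lib/control-plane directly, the earlier stage's replacements and patches are replayed before the additional patch is applied.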
It also follows in this scenario that, as mentioned above, the OpenStackControlPlane kustomizations for its initial creation stage should be located in one and only one kustomization.yaml. Thus you would want to avoid something like this...
some_dt_or_va/control_plane/kustomization.yaml\nsome_dt_or_va/control_plane/some_subdir/kustomization.yaml\nsome_dt_or_va/data_plane/kustomization.yaml\n
...if some_dt_or_va/control_plane/some_subdir/kustomization.yaml has further kustomizations to the OpenStackControlPlane beyond some_dt_or_va/control_plane/kustomization.yaml. (It would be fine, for instance, if that subdirectory were modifying some other resource like NodeNetworkConfigurationPolicy.) The reason for this is again that, if later stages are not to accidentally overwrite earlier OpenStackControlPlane kustomizations, they will need to reference both ../control_plane/kustomization.yaml and ../control_plane/some_subdir/kustomization.yaml whenever they modify the OpenStackControlPlane. It would be better for the two directories to be collapsed into one, such that a single kustomization.yaml can be referenced as a Component to include all the previous stage's kustomizations and not inadvertently overwrite them.
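The collapsed layout could be sketched as follows (the lib path and patch content are illustrative assumptions, not this repository's actual files):

```yaml
# some_dt_or_va/control_plane/kustomization.yaml (illustrative sketch)
# Everything that some_subdir used to hold now lives here, so later stages
# only ever need to reference this single Component.
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
components:
  - ../../lib/control-plane   # assumed base OpenStackControlPlane component
patches:
  # Kustomizations formerly split across some_subdir are consolidated here,
  # so a later stage referencing this one Component picks up all of them.
  - target:
      kind: OpenStackControlPlane
    patch: |-
      - op: add
        path: /metadata/labels/example.org~1stage
        value: control-plane
```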
Install docs build requirements into virtualenv:
python3 -m venv local/docs-venv\nsource local/docs-venv/bin/activate\npip install -r docs/doc_requirements.txt\n
Serve docs site on localhost:
mkdocs serve\n
Click the link it outputs. As you save changes to files modified in your editor, the browser will automatically show the new content.
"},{"location":"contributing/documentation/#structure-and-content","title":"Structure and Content","text":"The MkDocs
output generates nice-looking HTML pages that link to the content generated by github.com.
This is because the authors believe it's more valuable to have github.com/openstack-k8s-operators/architecture be navigable relative to the github pages which contain the CRs, than to have all of the documentation isolated in the docs
directory. Thus, there are non-relative links in the MkDocs
content to the pages hosted on github.
Though it's possible to create symbolic links to README files or link to a directory above the docs
directory, the resulting HTML will contain invalid links unless all READMEs are moved out of the directories that they describe. However, this would make reading the CRs more complicated as they wouldn't have a corresponding README.
Thus, if you add a new VA or DT, then please just link it in the mkdocs.yml
file, similar to the way the HCI VA is linked, to keep the MkDocs
output up to date.
Contributions to the architecture
repository are always welcomed and encouraged. In order to avoid causing regressions to the repository and to prove that the contributions are working as intended, all pull requests are expected to provide proof of validation.
The simplest way is to use the reproducer functionality in the CI-Framework.
"},{"location":"contributing/pull-request-testing/#using-the-reproducer-role","title":"Using the reproducer role","text":"Additional parameters can be passed to the reproducer role of the CI-Framework, allowing you to validate that changes to the architecture repository remain functional within the contexts of kustomize and the CI-Framework itself (which consumes the contents of the architecture repository).
Use the reproducer.yml
playbook within the CI-Framework to deploy the HCI validated architecture (aka VA1), or any other validated architecture or deployment topology that might be affected, with an environment file containing parameters denoting which branch and repository to deploy. The custom parameter filename is not important, as long as it is valid and passed to Ansible.
ansible-playbook reproducer.yml \\\n -i custom/inventory.yml \\\n -e cifmw_target_host=hypervisor-1 \\\n -e @scenarios/reproducers/va-hci.yml \\\n -e @scenarios/reproducers/networking-definition.yml \\\n -e @custom/default-vars.yaml \\\n -e @custom/secrets.yml \\\n -e @custom/test-my_pr_branch.yml\n
The test-my_pr_branch.yml
file contains parameters that identify the remote git repository and branch name to deploy.
test-my_pr_branch.yml
remote_base_dir: \"/home/zuul/src/github.com/openstack-k8s-operators\"\ncifmw_reproducer_repositories:\n- src: \"https://github.com/<FORKED_ORGANIZATION>/architecture\"\n dest: \"{{ remote_base_dir }}/architecture\"\n version: <BRANCH_TO_DEPLOY>\n
Once your environment has been deployed, provide any relevant output showing that the deployment was successful, and that the environment continues to operate nominally. Provide any additional output showing that the changes to the architecture repository have been deployed and are functioning as intended by the pull request. You can SSH into the controller-0 machine and review the contents of /home/zuul/src/github.com/openstack-k8s-operators/architecture
which contains the content as configured by the test-<NAME>.yml
parameter file.
The kustomize build command produces the OpenStack control plane definition and its dependent Custom Resources (CRs).
kustomize build architecture/examples/va/hci > control-plane.yaml\n
The control-plane.yaml
file contains CRs for the NodeNetworkConfigurationPolicy
(NNCP), the NetworkAttachmentDefinition
, MetalLB resources and OpenStack resources. Is it possible to create a CR file with fewer custom resources?"},{"location":"faq/cr_by_components/#answer","title":"Answer","text":"Yes, it's possible to create CR files with fewer components and to wait before applying each CR file. E.g. the file nncp.yaml
would contain only NodeNetworkConfigurationPolicy
CRs, while NetworkAttachmentDefinition
and other CRs could exist in another file like networking.yaml
. The following process may be used to generate these files using kustomize.
components:\n- ../../lib/nncp\n
kustomize build architecture/examples/va/hci > nncp.yaml\n
components:\n- ../../lib/networking\n
kustomize build architecture/examples/va/hci > networking.yaml\n
The above process may be continued for each component.
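Putting the steps above together, the components list in the example's kustomization.yaml is edited between builds. A sketch (assuming the example file is a top-level Kustomization; enable exactly one entry, build, apply, then move to the next):

```yaml
# architecture/examples/va/hci/kustomization.yaml (illustrative sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
components:
  - ../../lib/nncp           # step 1: kustomize build ... > nncp.yaml
  # - ../../lib/networking   # step 2: re-enable, build > networking.yaml
```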
Note that va/hci/kustomization.yaml is not the same file as examples/va/hci/kustomization.yaml. /examples/va/hci
is a specific example of a given VA, whereas /va/hci
is a generic HCI VA that may be customised and shared in multiple examples or composed to make a larger VA.
This process will work for VAs (and DTs) besides HCI, but the paths may be different. E.g. examples/va/nfv/sriov/kustomization.yaml differs from va/nfv/sriov/kustomization.yaml and the latter is in an nfv
subdirectory so each component is referred to using - ../../../lib/
instead of - ../../lib/
.