docs(openstack): add instructions for OpenStack provider on HMC
    - clarify provider setup, environment variables, and recommended image/flavor requirements
    - cosmetic changes

Signed-off-by: Satyam Bhardwaj <[email protected]>
ramessesii2 committed Dec 31, 2024
1 parent 18f53fc commit 96c8a1a
Showing 1 changed file with 83 additions and 15 deletions.

docs/dev.md
@@ -80,15 +80,41 @@ To properly deploy a dev cluster you need to have the following variables set:

- The rest of the deployment procedure is the same as for other providers.

### OpenStack Provider Setup

To deploy a development cluster on OpenStack, first set:

- `DEV_PROVIDER` - should be "openstack"

We recommend using OpenStack Application Credentials, which enhance security by allowing
applications to authenticate with limited, specific permissions without exposing the user's password. Export the following environment variables:

- `OS_AUTH_URL`
- `OS_AUTH_TYPE`
- `OS_APPLICATION_CREDENTIAL_ID`
- `OS_APPLICATION_CREDENTIAL_SECRET`
- `OS_REGION_NAME`
- `OS_INTERFACE`
- `OS_IDENTITY_API_VERSION`
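
For example, a typical set of exports might look like the following (all values are illustrative placeholders; the auth type and API version mirror the `clouds.yaml` example later in this document):

```bash
export DEV_PROVIDER=openstack
export OS_AUTH_URL=<your_auth_url>
export OS_AUTH_TYPE=v3applicationcredential
export OS_APPLICATION_CREDENTIAL_ID=<your_credential_id>
export OS_APPLICATION_CREDENTIAL_SECRET=<your_credential_secret>
export OS_REGION_NAME=<your_region>
export OS_INTERFACE=public
export OS_IDENTITY_API_VERSION=3
```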

You will also need to specify additional parameters related to machine sizes and images:

- `OPENSTACK_CONTROL_PLANE_MACHINE_FLAVOR`
- `OPENSTACK_NODE_MACHINE_FLAVOR`
- `OPENSTACK_IMAGE_NAME`
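
For example (placeholder flavor and image names; see the note below for minimum flavor sizing):

```bash
export OPENSTACK_CONTROL_PLANE_MACHINE_FLAVOR=<your_control_plane_flavor>
export OPENSTACK_NODE_MACHINE_FLAVOR=<your_worker_flavor>
export OPENSTACK_IMAGE_NAME=<your_image_name>
```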

> [!NOTE]
> The recommended minimum vCPU value for the control plane flavor is 2, while for the worker node flavor, it is 1. For detailed information, refer to the [machine-flavor CAPI docs](https://github.com/kubernetes-sigs/cluster-api-provider-openstack/blob/main/docs/book/src/clusteropenstack/configuration.md#machine-flavor).

The rest of the deployment procedure is the same as for other providers.

### Adopted Cluster Setup

To "adopt" an existing cluster first obtain the kubeconfig file for the cluster.
To "adopt" an existing cluster first obtain the kubeconfig file for the cluster.
Then set the `DEV_PROVIDER` to "adopted". Export the kubeconfig file as a variable by running the following:

`export KUBECONFIG_DATA=$(cat kubeconfig | base64 -w 0)`

The rest of the deployment procedure is the same as for other providers.

## Deploy HMC

The default provider used to deploy a cluster is AWS; if you want to use
@@ -97,10 +123,9 @@ running make (e.g. `export DEV_PROVIDER=azure`).

1. Configure your cluster parameters in the provider-specific file
(for example `config/dev/aws-clusterdeployment.yaml` in the case of AWS):

   - Configure the `name` of the ClusterDeployment
   - Change the instance type or size for control plane and worker machines
   - Specify the number of control plane and worker machines, etc.

2. Run `make dev-apply` to deploy and configure the management cluster.

@@ -114,11 +139,11 @@ running make (e.g. `export DEV_PROVIDER=azure`).
6. Wait for infrastructure to be provisioned and the cluster to be deployed. You
may watch the process with the `./bin/clusterctl describe` command. Example:

   ```bash
   export KUBECONFIG=~/.kube/config

   ./bin/clusterctl describe cluster <clusterdeployment-name> -n hmc-system --show-conditions all
   ```

> [!NOTE]
> If you encounter any errors in the output of `clusterctl describe cluster` inspect the logs of the
@@ -130,17 +155,18 @@ export KUBECONFIG=~/.kube/config
7. Retrieve the `kubeconfig` of your managed cluster:
   ```bash
   kubectl --kubeconfig ~/.kube/config get secret -n hmc-system <clusterdeployment-name>-kubeconfig -o=jsonpath={.data.value} | base64 -d > kubeconfig
   ```

## Running E2E tests locally

E2E tests can be run locally via the `make test-e2e` target. For CI to deploy
properly, a non-local registry will need to be used, and the Helm charts
and hmc-controller image will need to exist in that registry, for example, using
GHCR:

```bash
IMG="ghcr.io/mirantis/hmc/controller-ci:v0.0.1-179-ga5bdf29" \
REGISTRY_REPO="oci://ghcr.io/mirantis/hmc/charts-ci" \
make test-e2e
@@ -159,6 +185,7 @@ pass `CLUSTER_DEPLOYMENT_NAME=` from the get-go to customize the name used by the
test.

### Filtering test runs

Provider tests are broken into two types, `onprem` and `cloud`. For CI,
`provider:onprem` tests run on self-hosted runners provided by Mirantis.
`provider:cloud` tests run on GitHub actions runners and interact with cloud
@@ -179,6 +206,7 @@ ginkgo labels ./test/e2e
```
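
As a sketch of filtering a run by label, ginkgo's standard `--label-filter` flag can be used (whether the project's Makefile wires this flag through is not shown in this excerpt):

```bash
ginkgo --label-filter="provider:cloud" ./test/e2e
```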

### Nuke created resources

In CI we run `make dev-aws-nuke` to clean up test resources; you can do so
manually with:

@@ -238,7 +266,47 @@ CSI expects a single Secret with configuration in `ini` format
([documented here](https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/2.0/vmware-vsphere-csp-getting-started/GUID-BFF39F1D-F70A-4360-ABC9-85BDAFBE8864.html)).
Options are similar to CCM, and the same defaults/considerations apply.

### OpenStack

CAPO relies on a `clouds.yaml` file to manage OpenStack resources. This should be supplied as a Kubernetes Secret.

```yaml
clouds:
  my-openstack-cloud:
    auth:
      auth_url: <your_auth_url>
      application_credential_id: <your_credential_id>
      application_credential_secret: <your_credential_secret>
    region_name: <your_region>
    interface: <public|internal|admin>
    identity_api_version: 3
    auth_type: v3applicationcredential
```

One would typically create a Secret (for example, `openstack-cloud-config`) in the `hmc-system` namespace containing the `clouds.yaml`. A Credential object references this Secret, and the CAPO controllers reference the Credential to provision resources.
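
A minimal sketch of creating such a Secret from a local `clouds.yaml` file (the Secret name and namespace follow the example above; the `clouds.yaml` key name is an assumption, so adjust to whatever your setup expects):

```bash
kubectl create secret generic openstack-cloud-config \
  --namespace hmc-system \
  --from-file=clouds.yaml=./clouds.yaml
```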

When you deploy a new cluster, HMC automatically parses the previously created Kubernetes Secret's data to build a `cloud.conf`. This cloud-config is mounted into the CCM and/or CSI pods, enabling them to manage load balancers, floating IPs, etc.

Refer to [configuring OpenStack CCM](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/openstack-cloud-controller-manager/using-openstack-cloud-controller-manager.md#config-openstack-cloud-controller-manager) for more details.

Here's an example of the generated `cloud.conf`:
```ini
[Global]
auth-url=<your_auth_url>
application-credential-id=<your_credential_id>
application-credential-secret=<your_credential_secret>
region=<your_region>
domain-name=<your_domain_name>

[LoadBalancer]
floating-network-id=<your_floating_network_id>

[Network]
public-network-name=<your_network_name>
```

## Generating the airgap bundle

Use the `make airgap-package` target to manually generate the airgap bundle.
To ensure the correctly tagged HMC controller image is present in the bundle,
prefix the command with the `IMG` env var set to the desired image, for example:
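
A sketch following the pattern of the e2e example above (the image tag here is purely illustrative):

```bash
IMG="ghcr.io/mirantis/hmc/controller-ci:v0.0.1-179-ga5bdf29" \
make airgap-package
```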
