This README is a modified version of the original README from the KubeOne repository. The original README can be found here.
The full documentation for KubeOne can be found here. The version of KubeOne used in this repository is v1.8.0.
The OpenStack Quickstart Terraform configs can be used to create the needed infrastructure for a Kubernetes HA cluster. Check out the following Creating Infrastructure guide to learn more about how to use the configs and how to provision a Kubernetes cluster using KubeOne.
See the Terraform loadbalancers in examples document.
## Requirements

Name | Version |
---|---|
terraform | >= 1.0.0 |
openstack | ~> 1.52.0 |

## Providers

Name | Version |
---|---|
openstack | ~> 1.52.0 |

## Modules

No modules.
## Inputs

Name | Description | Type | Default | Required |
---|---|---|---|---|
apiserver_alternative_names | subject alternative names for the API Server signing cert. | list(string) | [] | no |
bastion_flavor | OpenStack instance flavor for the LoadBalancer node | string | "m1.tiny" | no |
bastion_host_key | Bastion SSH host public key | string | null | no |
bastion_port | Bastion SSH port | number | 22 | no |
bastion_user | Bastion SSH username | string | "ubuntu" | no |
cluster_autoscaler_max_replicas | maximum number of replicas per MachineDeployment (requires cluster-autoscaler) | number | 0 | no |
cluster_autoscaler_min_replicas | minimum number of replicas per MachineDeployment (requires cluster-autoscaler) | number | 0 | no |
cluster_name | Name of the cluster | string | n/a | yes |
control_plane_flavor | OpenStack instance flavor for the control plane nodes | string | "m1.small" | no |
control_plane_vm_count | number of control plane instances | number | 3 | no |
external_network_name | OpenStack external network name | string | n/a | yes |
image | image name to use | string | "" | no |
image_properties_query | in absence of var.image, this will be used to query API for the image | map(any) | { ... } | no |
initial_machinedeployment_operating_system_profile | Name of operating system profile for MachineDeployments, only applicable if operating-system-manager addon is enabled. If not specified, the default value will be added by machine-controller addon. | string | "" | no |
initial_machinedeployment_replicas | Number of replicas per MachineDeployment | number | 2 | no |
ssh_agent_socket | SSH Agent socket, default to grab from $SSH_AUTH_SOCK | string | "env:SSH_AUTH_SOCK" | no |
ssh_hosts_keys | A list of SSH hosts public keys to verify | list(string) | null | no |
ssh_port | SSH port to be used to provision instances | number | 22 | no |
ssh_private_key_file | SSH private key file used to access instances | string | "" | no |
ssh_public_key_file | SSH public key file | string | "~/.ssh/id_rsa.pub" | no |
ssh_username | SSH user, used only in output | string | "ubuntu" | no |
subnet_cidr | OpenStack subnet cidr | string | "192.168.1.0/24" | no |
subnet_dns_servers | n/a | list(string) | [ ... ] | no |
worker_flavor | OpenStack instance flavor for the worker nodes | string | "m1.small" | no |
worker_os | OS to run on worker machines | string | "ubuntu" | no |
## Outputs

Name | Description |
---|---|
kubeone_api | kube-apiserver LB endpoint |
kubeone_hosts | Control plane endpoints to SSH to |
kubeone_workers | Worker definitions that will be transformed into MachineDeployment objects |
ssh_commands | n/a |
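For example, once `terraform apply` has run, you can print a single output on its own, such as the SSH commands for reaching the nodes:

```bash
# print the generated SSH commands (available after `terraform apply`)
terraform output ssh_commands
```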
The original KubeOne Terraform configuration files have been adapted to work with the de.NBI Cloud Bielefeld. The following changes have been made:

- The OpenStack provider has been configured to use the de.NBI Cloud Bielefeld Keystone authentication endpoint.
- The router definition has been adapted to use the de.NBI Cloud Bielefeld fixed-router-per-project convention. The fixed router is created by the de.NBI Cloud Bielefeld team and is available in every project. The router name can be configured in the `terraform.tfvars` file using the `router_name` variable.
- Due to the de.NBI Cloud Bielefeld network setup, the `subnet_cidr` variable has been set to a value in the `192.168.` range.
- The `external_network_name` variable has been set to the de.NBI Cloud Bielefeld external network name `external`.
- The `image` variable has been set to the latest de.NBI Cloud Bielefeld Ubuntu 24.04 image name `Ubuntu 24.04 LTS (2024-07-03)`. Please check the de.NBI Cloud Bielefeld Horizon dashboard for the latest image name.
- The `bastion_flavor`, `control_plane_flavor`, and `worker_flavor` variables have been set to the de.NBI Cloud Bielefeld flavors `de.NBI tiny`, `de.NBI default`, and `de.NBI mini`, respectively. Check the de.NBI Cloud Bielefeld Horizon dashboard for the current flavor names and adapt them to your requirements.
- The `control_plane_vm_count` variable has been set to `3` to create a three-node control plane in HA mode.
- The load balancer configuration has been adapted to route traffic to the control plane nodes: the load balancer listens on port `6443` and forwards traffic to the control plane nodes on port `6443`; for SSH access to the Kubernetes cluster, it listens on port `22` and forwards traffic to the bastion node on port `22`. Please adapt the load balancer configuration according to your requirements. If you plan to deploy externally available services, you should add additional listeners and pools to the load balancer configuration, e.g. for port `443` (HTTPS); see the sketch after this list.
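As a minimal sketch of such an extra HTTPS listener, the OpenStack CLI can add a listener, pool, and members to an existing Octavia load balancer. `<LB-NAME>` and `<NODE-IP>` are placeholders you must look up in your project first (e.g. with `openstack loadbalancer list`); in a Terraform-managed setup, prefer adding the equivalent `openstack_lb_listener_v2`/`openstack_lb_pool_v2` resources to the configs instead, so the change survives the next `terraform apply`:

```bash
# Sketch: expose port 443 on an existing load balancer (names are placeholders).
openstack loadbalancer listener create --name https --protocol TCP --protocol-port 443 <LB-NAME>
openstack loadbalancer pool create --name https-pool --listener https \
  --protocol TCP --lb-algorithm ROUND_ROBIN
# repeat for every node that should receive HTTPS traffic
openstack loadbalancer member create --address <NODE-IP> --protocol-port 443 https-pool
```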
To authenticate with the de.NBI Cloud Bielefeld OpenStack API, you need to log in to the Horizon Dashboard and create new API credentials. The following steps describe how to create them:

- Log in to the de.NBI Cloud Bielefeld Horizon Dashboard.
- Select your project.
- Go to `Identity` -> `Application Credentials`.
- Click on `Create Application Credential` and enter your project name in the `Name` field. You can leave the `Description` field empty. If you want to limit the lifetime of the credentials, you can set an expiration date.
- Click on `Create Application Credential` and download the `clouds.yaml` file. You will need the values contained in it to configure access to the de.NBI Cloud Bielefeld OpenStack API.
- Create the `env.sh` file in this directory for use with Terraform and copy the values from the `clouds.yaml` file into it, replacing the placeholders. The `env.sh` file should look like this:
```bash
#!/bin/bash
# export the variables so child processes (terraform, the openstack CLI) can read them
export OS_AUTH_URL=https://openstack.cebitec.uni-bielefeld.de:5000
export OS_APPLICATION_CREDENTIAL_ID=<REPLACE-WITH-YOUR-CREDENTIAL-ID>
export OS_APPLICATION_CREDENTIAL_SECRET=<REPLACE-WITH-YOUR-SECRET>
export OS_AUTH_TYPE=v3applicationcredential
export OS_REGION_NAME=Bielefeld
```
- Source the `env.sh` file to set the environment variables:

```bash
source env.sh
```
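You can verify that the variables are now visible in your environment:

```bash
# should list OS_AUTH_URL, OS_APPLICATION_CREDENTIAL_ID, and the other OS_* variables
env | grep ^OS_
```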
- Create the `credentials.yaml` file for use with the `kubeone` command in this directory and copy the values from the `clouds.yaml` file into it, replacing the placeholders. The `credentials.yaml` file should look like this:
```yaml
OS_AUTH_URL: https://openstack.cebitec.uni-bielefeld.de:5000
OS_APPLICATION_CREDENTIAL_ID: <REPLACE-WITH-YOUR-CREDENTIAL-ID>
OS_APPLICATION_CREDENTIAL_SECRET: <REPLACE-WITH-YOUR-SECRET>
OS_AUTH_TYPE: v3applicationcredential
OS_REGION_NAME: Bielefeld
```
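Optionally, you can verify the application credentials before provisioning anything. This assumes the `openstack` CLI (python-openstackclient) is installed and `env.sh` has been sourced:

```bash
# succeeds only if the application credentials are valid
openstack token issue
```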
- Install KubeOne, Terraform and the OpenStack provider. You can find the installation instructions here.
- Run `terraform init` to initialize the Terraform configuration.
- Create the `terraform.tfvars` file in this directory and set the required variables. The `terraform.tfvars` file should look like this:
```hcl
# set the Kubernetes cluster name (alphanumerical, lowercase and - separated)
cluster_name = "<CLUSTER-NAME>"
# this needs to be a valid OpenStack router name
# the Bielefeld cloud uses the following naming convention: <project_name>_router
# the router is created by the cloud operator and can not be created by the user
router_name = "<OPENSTACK_PROJECT_NAME>_router"
# replace with your SSH public key file
ssh_public_key_file = "~/.ssh/id.pub"
# leave as is
external_network_name = "external"
# adapt to your requirements or leave as is
subnet_cidr = "192.168.33.0/24"
# adapt to your requirements or leave as is
image = "Ubuntu 24.04 LTS (2024-07-03)"
# adapt to your requirements or leave as is
bastion_flavor = "de.NBI tiny"
# adapt to your requirements or leave as is
control_plane_flavor = "de.NBI default"
# adapt to your requirements or leave as is
control_plane_vm_count = 3
# adapt to your requirements or leave as is
worker_flavor = "de.NBI mini"
```
- Run `terraform plan` to check the Terraform configuration.
- Run `terraform apply` to create the infrastructure if the plan shows no errors and corresponds to your planned changes.
- Save the Terraform output in JSON format for use with KubeOne (and keep it, together with the Terraform state, in a secure location):

```bash
terraform output -json > tf.json
```
- Create the KubeOne configuration file `kubeone.yaml` in this directory and set the required variables. Adapt as necessary; check the Terraform output or the OpenStack dashboard to retrieve the load balancer subnet ID and replace the placeholder. Additionally, we enable the default-storage-class addon, which creates a `standard` storage class backed by cinder-csi for OpenStack. The `kubeone.yaml` file should look like this:
```yaml
apiVersion: kubeone.k8c.io/v1beta2
kind: KubeOneCluster
versions:
  kubernetes: '1.29.4'
cloudProvider:
  openstack: {}
  external: true
  cloudConfig: |
    [Global]
    auth-url=https://openstack.cebitec.uni-bielefeld.de:5000
    application-credential-id=<REPLACE-WITH-YOUR-CREDENTIAL-ID>
    application-credential-secret=<REPLACE-WITH-YOUR-SECRET>
    [LoadBalancer]
    subnet-id=<REPLACE-WITH-YOUR-LOAD-BALANCER-SUBNET-ID>
addons:
  enable: true
  addons:
    # - name: unattended-upgrades
    # default-storage-class adds cloud provider specific storage drivers and classes
    - name: default-storage-class
```
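One way to look up the value for the `subnet-id` placeholder is the OpenStack CLI; the Terraform configs create a subnet for the cluster, so pick the entry whose CIDR matches your `subnet_cidr` (verify in Horizon or the Terraform output if unsure):

```bash
# list all subnets in the project with their IDs and CIDRs
openstack subnet list
```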
- Run the following command to provision the Kubernetes cluster:

```bash
kubeone apply -m kubeone.yaml -t tf.json -c credentials.yaml
```

- The command will show you the steps it is going to perform. Enter `yes` to confirm the changes and proceed.
- After the command has finished, you can access the Kubernetes cluster using the `kubeconfig` file that has been created in the current directory. If `kubectl` is installed, you can use it to interact with the cluster:

```bash
export KUBECONFIG=$PWD/<cluster_name>-kubeconfig
kubectl get nodes
```
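Since the default-storage-class addon was enabled in `kubeone.yaml`, you can also check that the storage class exists:

```bash
# should include the standard (cinder-csi) class created by the default-storage-class addon
kubectl get storageclass
```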
For the next steps and for deploying applications to the Kubernetes cluster, we will use Helm. You can find the installation instructions for Helm here. We will use Helm to install the Kubernetes Dashboard.

- Add the Kubernetes Dashboard Helm chart repository to our local Helm installation using the following command:

```bash
helm --kubeconfig=eoc2024-cluster-kubeconfig repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
```

- Then, install the Kubernetes Dashboard using the following command:

```bash
helm --kubeconfig=eoc2024-cluster-kubeconfig upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
```
- The Kubernetes Dashboard will be installed in the `kubernetes-dashboard` namespace. You can check the status of the installation using the following command:

```bash
kubectl --kubeconfig=eoc2024-cluster-kubeconfig get all -n kubernetes-dashboard
```

- To access the dashboard locally, add a port-forward to the Kubernetes Dashboard service using the following command:

```bash
kubectl --kubeconfig=eoc2024-cluster-kubeconfig -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443
```
- We now need to add a service account and a cluster role binding to access the Kubernetes Dashboard. You can create the `dashboard-user` service account and the cluster-admin role binding using the following commands:

```bash
kubectl --kubeconfig=eoc2024-cluster-kubeconfig -n kubernetes-dashboard create serviceaccount dashboard-user
kubectl --kubeconfig=eoc2024-cluster-kubeconfig -n kubernetes-dashboard create clusterrolebinding dashboard-user --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-user
```
- In order to access the Dashboard, you will need to generate a token for the `dashboard-user` service account to authenticate. You can create the token using the following command:

```bash
kubectl --kubeconfig=eoc2024-cluster-kubeconfig -n kubernetes-dashboard create token dashboard-user
```

Take note of the token generated for the `dashboard-user` service account. You can use it to authenticate with the Kubernetes Dashboard. Please note that the token has a limited lifetime and will expire after a certain period; you can create a new token using the same command, or request a longer-lived one as shown after this list.
- You can now access the Kubernetes Dashboard in your browser at `https://localhost:8443` using the token for the `dashboard-user` service account.
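If the default token lifetime is too short for your session, you can request a longer-lived one; the effective maximum is capped by the API server, so treat the duration below as an example:

```bash
# request a token valid for 24 hours (subject to the API server's configured maximum)
kubectl --kubeconfig=eoc2024-cluster-kubeconfig -n kubernetes-dashboard create token dashboard-user --duration=24h
```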
Follow the steps outlined in this tutorial: https://docs.kubermatic.com/kubeone/v1.9/tutorials/creating-clusters-oidc/

Make sure to always pass the `--kubeconfig` parameter:

```bash
helm --kubeconfig=eoc2024-cluster-kubeconfig --namespace kube-system upgrade --create-namespace --install dex ./charts/oauth
```
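To confirm that the release was deployed, you can check its status with Helm (the release name `dex` and namespace `kube-system` match the command above):

```bash
# show the deployment status of the dex release
helm --kubeconfig=eoc2024-cluster-kubeconfig --namespace kube-system status dex
```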