KubeOne OpenStack Setup - de.NBI Cloud Bielefeld

This README is a modified version of the original README from the KubeOne repository. The original README can be found here.

The full documentation for KubeOne can be found here. The version of KubeOne used in this repository is v1.8.0.

The OpenStack Quickstart Terraform configs can be used to create the infrastructure needed for a highly available (HA) Kubernetes cluster. Check out the Creating Infrastructure guide to learn more about how to use the configs and how to provision a Kubernetes cluster using KubeOne.
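At a high level, the workflow consists of two parts: Terraform creates the OpenStack infrastructure, and KubeOne provisions Kubernetes on top of it. The following sketch summarizes the commands; the de.NBI-specific configuration and the kubeone.yaml and credentials.yaml files referenced here are described step by step further below.

# create the infrastructure
terraform init
terraform apply
# export the Terraform output for KubeOne
terraform output -json > tf.json
# provision the Kubernetes cluster
kubeone apply -m kubeone.yaml -t tf.json -c credentials.yaml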

Kubernetes API Server Load Balancing

See the Terraform loadbalancers in examples document.

Requirements

| Name | Version |
|------|---------|
| terraform | >= 1.0.0 |
| openstack | ~> 1.52.0 |

Providers

| Name | Version |
|------|---------|
| openstack | ~> 1.52.0 |

Modules

No modules.

Resources

| Name | Type |
|------|------|
| openstack_compute_instance_v2.bastion | resource |
| openstack_compute_instance_v2.control_plane | resource |
| openstack_compute_keypair_v2.deployer | resource |
| openstack_lb_listener_v2.kube_apiserver | resource |
| openstack_lb_loadbalancer_v2.kube_apiserver | resource |
| openstack_lb_member_v2.kube_apiserver | resource |
| openstack_lb_monitor_v2.lb_monitor_tcp | resource |
| openstack_lb_pool_v2.kube_apiservers | resource |
| openstack_networking_floatingip_associate_v2.bastion | resource |
| openstack_networking_floatingip_associate_v2.kube_apiserver | resource |
| openstack_networking_floatingip_v2.bastion | resource |
| openstack_networking_floatingip_v2.kube_apiserver | resource |
| openstack_networking_network_v2.network | resource |
| openstack_networking_port_v2.bastion | resource |
| openstack_networking_port_v2.control_plane | resource |
| openstack_networking_router_interface_v2.router_subnet_link | resource |
| openstack_networking_router_v2.router | resource |
| openstack_networking_secgroup_rule_v2.nodeports | resource |
| openstack_networking_secgroup_rule_v2.secgroup_allow_internal_ipv4 | resource |
| openstack_networking_secgroup_rule_v2.secgroup_apiserver | resource |
| openstack_networking_secgroup_rule_v2.secgroup_ssh | resource |
| openstack_networking_secgroup_v2.securitygroup | resource |
| openstack_networking_subnet_v2.subnet | resource |
| openstack_images_image_v2.image | data source |
| openstack_networking_network_v2.external_network | data source |

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| apiserver_alternative_names | subject alternative names for the API Server signing cert. | list(string) | [] | no |
| bastion_flavor | OpenStack instance flavor for the LoadBalancer node | string | "m1.tiny" | no |
| bastion_host_key | Bastion SSH host public key | string | null | no |
| bastion_port | Bastion SSH port | number | 22 | no |
| bastion_user | Bastion SSH username | string | "ubuntu" | no |
| cluster_autoscaler_max_replicas | maximum number of replicas per MachineDeployment (requires cluster-autoscaler) | number | 0 | no |
| cluster_autoscaler_min_replicas | minimum number of replicas per MachineDeployment (requires cluster-autoscaler) | number | 0 | no |
| cluster_name | Name of the cluster | string | n/a | yes |
| control_plane_flavor | OpenStack instance flavor for the control plane nodes | string | "m1.small" | no |
| control_plane_vm_count | number of control plane instances | number | 3 | no |
| external_network_name | OpenStack external network name | string | n/a | yes |
| image | image name to use | string | "" | no |
| image_properties_query | in absence of var.image, this will be used to query API for the image | map(any) | {"os_distro": "ubuntu", "os_version": "22.04"} | no |
| initial_machinedeployment_operating_system_profile | Name of operating system profile for MachineDeployments, only applicable if operating-system-manager addon is enabled. If not specified, the default value will be added by machine-controller addon. | string | "" | no |
| initial_machinedeployment_replicas | Number of replicas per MachineDeployment | number | 2 | no |
| ssh_agent_socket | SSH Agent socket, default to grab from $SSH_AUTH_SOCK | string | "env:SSH_AUTH_SOCK" | no |
| ssh_hosts_keys | A list of SSH hosts public keys to verify | list(string) | null | no |
| ssh_port | SSH port to be used to provision instances | number | 22 | no |
| ssh_private_key_file | SSH private key file used to access instances | string | "" | no |
| ssh_public_key_file | SSH public key file | string | "~/.ssh/id_rsa.pub" | no |
| ssh_username | SSH user, used only in output | string | "ubuntu" | no |
| subnet_cidr | OpenStack subnet cidr | string | "192.168.1.0/24" | no |
| subnet_dns_servers | n/a | list(string) | ["8.8.8.8", "8.8.4.4"] | no |
| worker_flavor | OpenStack instance flavor for the worker nodes | string | "m1.small" | no |
| worker_os | OS to run on worker machines | string | "ubuntu" | no |

Outputs

| Name | Description |
|------|-------------|
| kubeone_api | kube-apiserver LB endpoint |
| kubeone_hosts | Control plane endpoints to SSH to |
| kubeone_workers | Workers definitions, that will be transformed into MachineDeployment object |
| ssh_commands | n/a |
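After terraform apply has finished, individual outputs can be inspected with terraform output, for example the generated SSH commands:

terraform output ssh_commands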

Adaptations for the de.NBI Cloud Bielefeld

The original KubeOne Terraform configuration files have been adapted to work with the de.NBI Cloud Bielefeld. The following changes have been made:

  1. The OpenStack provider has been configured to use the de.NBI Cloud Bielefeld Keystone authentication endpoint.
  2. The Router definition has been adapted to use the de.NBI Cloud Bielefeld fixed router per project convention. The fixed router is created by the de.NBI Cloud Bielefeld team and is available in every project. The router name can be configured in the terraform.tfvars file using the router_name variable.
  3. Due to the de.NBI Cloud Bielefeld network setup, the subnet_cidr variable has been set to a value in the 192.168.0.0/16 range.
  4. The external_network_name variable has been set to the de.NBI Cloud Bielefeld external network name external.
  5. The image variable has been set to the latest de.NBI Cloud Bielefeld Ubuntu 24.04 image name Ubuntu 24.04 LTS (2024-07-03). Please check the de.NBI Cloud Bielefeld Horizon dashboard for the latest image name.
  6. The bastion_flavor, control_plane_flavor, and worker_flavor variables have been set to the de.NBI Cloud Bielefeld flavors de.NBI tiny, de.NBI default, and de.NBI mini respectively. You can check the de.NBI Cloud Bielefeld Horizon dashboard for the latest flavor names. Please adapt the flavor names according to your requirements.
  7. The control_plane_vm_count variable has been set to 3 to create a 3-node control plane in HA mode.
  8. The load balancer configuration has been adapted to route traffic to the control plane nodes: the load balancer listens on port 6443 and forwards traffic to the control plane nodes on port 6443. For SSH access to the Kubernetes cluster, the load balancer listens on port 22 and forwards traffic to the bastion node on port 22. Please adapt the load balancer configuration to your requirements. If you plan to deploy externally available services, you should add additional listeners and pools, e.g. for port 443 (HTTPS); a sketch is shown after this list.
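The following is a minimal, untested sketch of how an additional HTTPS listener, pool, and member could be declared in the Terraform configuration, based on the load balancer resources listed above. The resource names (https) and the member address are illustrative assumptions; adapt them to the nodes that actually serve the traffic.

# hypothetical additional listener for HTTPS traffic on the existing load balancer
resource "openstack_lb_listener_v2" "https" {
  name            = "https"
  protocol        = "TCP"
  protocol_port   = 443
  loadbalancer_id = openstack_lb_loadbalancer_v2.kube_apiserver.id
}

# backend pool attached to the HTTPS listener
resource "openstack_lb_pool_v2" "https" {
  name        = "https"
  protocol    = "TCP"
  lb_method   = "ROUND_ROBIN"
  listener_id = openstack_lb_listener_v2.https.id
}

# example member; address and port are assumptions, use the node(s) running your ingress
resource "openstack_lb_member_v2" "https" {
  pool_id       = openstack_lb_pool_v2.https.id
  address       = "192.168.33.10"
  protocol_port = 443
  subnet_id     = openstack_networking_subnet_v2.subnet.id
}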

Setting up authentication

To authenticate with the de.NBI Cloud Bielefeld OpenStack API, you need to log in to the Horizon Dashboard and create new API credentials. The following steps describe how to create them:

  1. Log in to the de.NBI Cloud Bielefeld Horizon Dashboard.
  2. Select your project.
  3. Go to Identity -> Application Credentials.
  4. Click on Create Application Credential and enter your project name for the Name field. You can leave the Description field empty. If you want to limit the lifetime of the credentials, you can set an expiration date.
  5. Click on Create Application Credential and download the clouds.yaml file. You will need the values contained in it to configure access to the de.NBI Cloud Bielefeld OpenStack API.
  6. Create the env.sh file in this directory for use with Terraform and copy the values from the clouds.yaml file into it, replacing the placeholders. The env.sh file should look like this:
#!/bin/bash
export OS_AUTH_URL=https://openstack.cebitec.uni-bielefeld.de:5000
export OS_APPLICATION_CREDENTIAL_ID=<REPLACE-WITH-YOUR-CREDENTIAL-ID>
export OS_APPLICATION_CREDENTIAL_SECRET=<REPLACE-WITH-YOUR-SECRET>
export OS_AUTH_TYPE=v3applicationcredential
export OS_REGION_NAME=Bielefeld
  7. Source the env.sh file to set the environment variables:
source env.sh
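Optionally, if the OpenStack command line client (python-openstackclient) is installed, you can check that the credentials work before running Terraform, for example by requesting a token:

openstack token issue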
  8. Create the credentials.yaml file in this directory for use with the kubeone command and copy the values from the clouds.yaml file into it, replacing the placeholders. The credentials.yaml file should look like this:
OS_AUTH_URL: https://openstack.cebitec.uni-bielefeld.de:5000
OS_APPLICATION_CREDENTIAL_ID: <REPLACE-WITH-YOUR-CREDENTIAL-ID>
OS_APPLICATION_CREDENTIAL_SECRET: <REPLACE-WITH-YOUR-SECRET>
OS_AUTH_TYPE: v3applicationcredential
OS_REGION_NAME: Bielefeld

Setting up the Terraform configuration

  1. Install KubeOne, Terraform and the OpenStack provider. You can find the installation instructions here.
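As a rough sketch (please follow the linked installation instructions for the authoritative procedure), KubeOne can be installed with its official install script, Terraform via your package manager or the binaries from the Terraform website, and the OpenStack provider is downloaded automatically by terraform init:

# install the latest kubeone release using the official install script
curl -sfL https://get.kubeone.io | sh
# verify the installations
kubeone version
terraform version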
  2. Run terraform init to initialize the Terraform configuration.
  3. Create the terraform.tfvars file in this directory and set the required variables. The terraform.tfvars file should look like this:
# set the Kubernetes cluster name (alphanumerical, lowercase and - separated)
cluster_name = "<CLUSTER-NAME>"
# this needs to be a valid OpenStack router name
# the Bielefeld cloud uses the following naming convention: <project_name>_router
# the Router is created by the cloud operator and can not be created by the user
router_name = "<OPENSTACK_PROJECT_NAME>_router"
# replace with your SSH public key file
ssh_public_key_file = "~/.ssh/id.pub"
# leave as is
external_network_name = "external"
# adapt to your requirements or leave as is
subnet_cidr = "192.168.33.0/24"
# adapt to your requirements or leave as is
image = "Ubuntu 24.04 LTS (2024-07-03)"
# adapt to your requirements or leave as is
bastion_flavor = "de.NBI tiny"
# adapt to your requirements or leave as is
control_plane_flavor = "de.NBI default"
# adapt to your requirements or leave as is
control_plane_vm_count = 3
# adapt to your requirements or leave as is
worker_flavor = "de.NBI mini"
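If the image or flavor names have changed in the meantime, you can list the currently available ones with the OpenStack CLI (assuming env.sh has been sourced and the client is installed) instead of using the Horizon dashboard:

openstack image list
openstack flavor list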
  4. Run terraform plan to check the Terraform configuration.
  5. Run terraform apply to create the infrastructure if the plan shows no errors and corresponds to your intended changes.
  6. Export the Terraform output in JSON format for KubeOne (and store it in a secure location): terraform output -json > tf.json
  7. Create the KubeOne configuration file kubeone.yaml in this directory and set the required values. Adapt it as necessary: check the Terraform output or the OpenStack Dashboard to retrieve the load balancer subnet ID and replace the placeholder (a lookup example follows the configuration below). In addition, we enable the default-storage-class addon, which creates standard and cinder-csi storage classes for OpenStack. The kubeone.yaml file should look like this:
apiVersion: kubeone.k8c.io/v1beta2
kind: KubeOneCluster
versions:
  kubernetes: '1.29.4'
cloudProvider:
  openstack: {}
  external: true
  cloudConfig: |
    [Global]
    auth-url=https://openstack.cebitec.uni-bielefeld.de:5000
    application-credential-id=<REPLACE-WITH-YOUR-CREDENTIAL-ID>
    application-credential-secret=<REPLACE-WITH-YOUR-SECRET>

    [LoadBalancer]
    subnet-id=<REPLACE-WITH-YOUR-LOAD-BALANCER-SUBNET-ID>

addons:
  enable: true
  addons:
  #- name: unattended-upgrades
  # default-storage-class adds cloud provider specific storage drivers and classes  
  - name: default-storage-class
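One way to look up the load balancer subnet ID, assuming the OpenStack CLI is available and env.sh has been sourced, is to list the subnets of the project and pick the one created by this Terraform configuration:

openstack subnet list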
  8. Run kubeone apply -m kubeone.yaml -t tf.json -c credentials.yaml to provision the Kubernetes cluster.
  9. The command will show you the steps it will perform to provision the cluster. Enter yes to confirm and proceed.
  10. After the command has finished, you can access the Kubernetes cluster using the kubeconfig file that has been created in the current directory. If kubectl is installed, you can use it to interact with the cluster:
export KUBECONFIG=$PWD/<cluster_name>-kubeconfig
kubectl get nodes
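Since the default-storage-class addon was enabled in kubeone.yaml, you can also verify that the storage classes have been created:

kubectl get storageclass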

Installing the Kubernetes Dashboard

For the next steps and for deploying applications to the Kubernetes cluster, we will use Helm. You can find the installation instructions for Helm here. We will use Helm to install the Kubernetes Dashboard; setting up Ingress is covered in the next section.

  1. Add the Kubernetes Dashboard Helm chart repository to your local Helm installation using the following command:
helm --kubeconfig=eoc2024-cluster-kubeconfig repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
  2. Then, install the Kubernetes Dashboard chart using the following command:
helm --kubeconfig=eoc2024-cluster-kubeconfig upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
  3. The Kubernetes Dashboard will be installed in the kubernetes-dashboard namespace. You can check the status of the installation using the following command:
kubectl --kubeconfig=eoc2024-cluster-kubeconfig get all -n kubernetes-dashboard
  4. To access the dashboard locally, set up a port-forward to the Kubernetes Dashboard service using the following command:
kubectl --kubeconfig=eoc2024-cluster-kubeconfig -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443
  5. We now need to add a service account and a cluster role binding to access the Kubernetes Dashboard. You can create the dashboard-user service account and the cluster-admin role binding using the following commands:
kubectl --kubeconfig=eoc2024-cluster-kubeconfig -n kubernetes-dashboard create serviceaccount dashboard-user
kubectl --kubeconfig=eoc2024-cluster-kubeconfig -n kubernetes-dashboard create clusterrolebinding dashboard-user --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-user
  6. To access the Dashboard, you need to generate a token for the dashboard-user service account to authenticate. You can create the token using the following command:
kubectl --kubeconfig=eoc2024-cluster-kubeconfig -n kubernetes-dashboard create token dashboard-user

Take note of the token generated for the dashboard-user service account. You can use the token to authenticate with the Kubernetes Dashboard. Please note that the token has a limited lifetime and will expire after a certain period. You can create a new token using the same command.
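If the default lifetime is too short for your session, kubectl can request a longer-lived token (subject to the maximum configured in the cluster), for example:

kubectl --kubeconfig=eoc2024-cluster-kubeconfig -n kubernetes-dashboard create token dashboard-user --duration=24h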

  7. You can now access the Kubernetes Dashboard in your browser at https://localhost:8443 using the token for the dashboard-user service account.

Setting up Ingress and DEX authentication

Follow the steps outlined in this tutorial:

https://docs.kubermatic.com/kubeone/v1.9/tutorials/creating-clusters-oidc/

Make sure to always pass the --kubeconfig parameter:

helm --kubeconfig=eoc2024-cluster-kubeconfig --namespace kube-system upgrade --create-namespace --install dex ./charts/oauth