In this tutorial, we'll create a Kubernetes v1.23.4 cluster on DigitalOcean with Flatcar Linux.
We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create controller droplets, worker droplets, DNS records, tags, and TLS assets.
Controller hosts are provisioned to run an `etcd-member` peer and a `kubelet` service. Worker hosts run a `kubelet` service. Controller nodes run `kube-apiserver`, `kube-scheduler`, `kube-controller-manager`, and `coredns`, while `kube-proxy` and `calico` (or `flannel`) run on every node. A generated `kubeconfig` provides `kubectl` access to the cluster.
- Digital Ocean Account and Token
- Digital Ocean Domain (registered Domain Name or delegated subdomain)
- Terraform v0.13.0+
Install Terraform v0.13.0+ on your system.
$ terraform version
Terraform v1.0.0
Read concepts to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. `infra`).
cd infra/clusters
Log in to DigitalOcean. Or if you don't have an account, create one with our referral link to get free credits.
Generate a Personal Access Token with read/write scope from the API tab. Write the token to a file that can be referenced in configs.
mkdir -p ~/.config/digital-ocean
echo "TOKEN" > ~/.config/digital-ocean/token
Configure the DigitalOcean provider to use your token in a `providers.tf` file.
provider "digitalocean" {
token = "${chomp(file("~/.config/digital-ocean/token"))}"
}
provider "ct" {}
terraform {
required_providers {
ct = {
source = "poseidon/ct"
version = "0.9.1"
}
digitalocean = {
source = "digitalocean/digitalocean"
version = "2.17.0"
}
}
}
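As an aside, instead of reading the token from a file, the token could be supplied as a Terraform input variable. A minimal sketch (the variable name is an assumption; `sensitive` requires Terraform 0.14+):

```tf
# Sketch: supply the DigitalOcean token as an input variable
# rather than reading ~/.config/digital-ocean/token from disk.
variable "digitalocean_token" {
  type        = string
  description = "DigitalOcean personal access token (read/write)"
  sensitive   = true # Terraform 0.14+
}

provider "digitalocean" {
  token = var.digitalocean_token
}
```

Set it via `TF_VAR_digitalocean_token` in the environment or a `.tfvars` file kept out of version control.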
Flatcar Linux publishes DigitalOcean images, but does not yet upload them. DigitalOcean allows custom images to be uploaded via URL or file.
Download the Flatcar Linux DigitalOcean bin image. Rename the image with the channel and version (to refer to these images over time) and upload it as a custom image.
data "digitalocean_image" "flatcar-stable-2303-4-0" {
name = "flatcar-stable-2303.4.0.bin.bz2"
}
Set the `os_image` in the next step.
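If you'd rather manage the upload with Terraform too, the DigitalOcean provider offers a `digitalocean_custom_image` resource. A minimal sketch, assuming the image is fetched from a Flatcar stable release URL into `nyc3` (the URL and version are placeholders; match them to the image you actually verified):

```tf
# Sketch: upload the Flatcar DigitalOcean bin image as a custom image.
# URL and version are illustrative placeholders.
resource "digitalocean_custom_image" "flatcar-stable" {
  name    = "flatcar-stable-2303.4.0.bin.bz2"
  url     = "https://stable.release.flatcar-linux.net/amd64-usr/2303.4.0/flatcar_production_digitalocean_image.bin.bz2"
  regions = ["nyc3"]
}
```

Its `id` attribute can then be passed to `os_image` in place of the `digitalocean_image` data source lookup.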
Define a Kubernetes cluster using the module `digital-ocean/flatcar-linux/kubernetes`.
module "nemo" {
source = "git::https://github.com/poseidon/typhoon//digital-ocean/flatcar-linux/kubernetes?ref=v1.23.4"
# Digital Ocean
cluster_name = "nemo"
region = "nyc3"
dns_zone = "digital-ocean.example.com"
# configuration
os_image = data.digitalocean_image.flatcar-stable-2303-4-0.id
ssh_fingerprints = ["d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7"]
# optional
worker_count = 2
}
Reference the variables docs or the variables.tf source.
Initial bootstrapping requires `bootstrap.service` be started on one controller node. Terraform uses `ssh-agent` to automate this step. Add your SSH private key to `ssh-agent`.
ssh-add ~/.ssh/id_rsa
ssh-add -L
Initialize the config directory if this is the first use with Terraform.
terraform init
Plan the resources to be created.
$ terraform plan
Plan: 54 to add, 0 to change, 0 to destroy.
Apply the changes to create the cluster.
$ terraform apply
module.nemo.null_resource.bootstrap: Still creating... (30s elapsed)
module.nemo.null_resource.bootstrap: Provisioning with 'remote-exec'...
...
module.nemo.null_resource.bootstrap: Still creating... (6m20s elapsed)
module.nemo.null_resource.bootstrap: Creation complete (ID: 7599298447329218468)
Apply complete! Resources: 42 added, 0 changed, 0 destroyed.
In 3-6 minutes, the Kubernetes cluster will be ready.
Install kubectl on your system. Obtain the generated cluster `kubeconfig` from module outputs (e.g. write to a local file).
resource "local_file" "kubeconfig-nemo" {
content = module.nemo.kubeconfig-admin
filename = "/home/user/.kube/configs/nemo-config"
}
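Alternatively, the kubeconfig can be exposed as a Terraform output rather than written by a `local_file` resource. A sketch (the output name is arbitrary; `-raw` requires Terraform 0.14+):

```tf
# Sketch: expose the admin kubeconfig as a sensitive output
output "kubeconfig-nemo" {
  value     = module.nemo.kubeconfig-admin
  sensitive = true
}
```

Then `terraform output -raw kubeconfig-nemo > /home/user/.kube/configs/nemo-config` writes it to a file.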
List nodes in the cluster.
$ export KUBECONFIG=/home/user/.kube/configs/nemo-config
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
10.132.110.130 Ready <none> 10m v1.23.4
10.132.115.81 Ready <none> 10m v1.23.4
10.132.124.107 Ready <none> 10m v1.23.4
List the pods.
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-1187388186-ld1j7 1/1 Running 0 11m
kube-system coredns-1187388186-rdhf7 1/1 Running 0 11m
kube-system calico-node-1m5bf 2/2 Running 0 11m
kube-system calico-node-7jmr1 2/2 Running 0 11m
kube-system calico-node-bknc8 2/2 Running 0 11m
kube-system kube-apiserver-ip-10.132.115.81 1/1 Running 0 11m
kube-system kube-controller-manager-ip-10.132.115.81 1/1 Running 0 11m
kube-system kube-proxy-6kxjf 1/1 Running 0 11m
kube-system kube-proxy-fh3td 1/1 Running 0 11m
kube-system kube-proxy-k35rc 1/1 Running 0 11m
kube-system kube-scheduler-ip-10.132.115.81 1/1 Running 0 11m
Learn about maintenance and addons.
Check the variables.tf source.
| Name | Description | Example |
|:-----|:------------|:--------|
| cluster_name | Unique cluster name (prepended to dns_zone) | "nemo" |
| region | Digital Ocean region | "nyc1", "sfo2", "fra1", "tor1" |
| dns_zone | Digital Ocean domain (i.e. DNS zone) | "do.example.com" |
| os_image | Flatcar Linux image for instances | "uploaded-flatcar-image-id" |
| ssh_fingerprints | SSH public key fingerprints | ["d7:9d..."] |
Clusters create DNS A records `${cluster_name}.${dns_zone}` to resolve to controller droplets (round robin). This FQDN is used by workers and `kubectl` to access the apiserver(s). In this example, the cluster's apiserver would be accessible at `nemo.do.example.com`.
You'll need a registered domain name or delegated subdomain in DigitalOcean Domains (i.e. DNS zones). You can set this up once and create many clusters with unique names.
# Declare a DigitalOcean record to also create a zone file
resource "digitalocean_domain" "zone-for-clusters" {
name = "do.example.com"
ip_address = "8.8.8.8"
}
!!! tip "" If you have an existing domain name with a zone file elsewhere, just delegate a subdomain that can be managed on DigitalOcean (e.g. do.mydomain.com) and update nameservers.
DigitalOcean droplets are created with your SSH public key "fingerprint" (i.e. MD5 hash) to allow access. If your SSH public key is at `~/.ssh/id_rsa.pub`, find the fingerprint with,
ssh-keygen -E md5 -lf ~/.ssh/id_rsa.pub | awk '{print $2}'
MD5:d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7
If you use `ssh-agent` (e.g. Yubikey for SSH), find the fingerprint with,
ssh-add -l -E md5
2048 MD5:d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7 cardno:000603633110 (RSA)
Digital Ocean requires the SSH public key be uploaded to your account, so you may also find the fingerprint under Settings -> Security. Finally, if you don't have an SSH key, create one now.
| Name | Description | Default | Example |
|:-----|:------------|:--------|:--------|
| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | Droplet type for controllers | "s-2vcpu-2gb" | s-2vcpu-2gb, s-2vcpu-4gb, s-4vcpu-8gb, ... |
| worker_type | Droplet type for workers | "s-1vcpu-2gb" | s-1vcpu-2gb, s-2vcpu-2gb, ... |
| controller_snippets | Controller Container Linux Config snippets | [] | example |
| worker_snippets | Worker Container Linux Config snippets | [] | example |
| networking | Choice of networking provider | "cilium" | "calico" or "cilium" or "flannel" |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "10.2.0.0/16" | "10.22.0.0/16" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "10.3.0.0/16" | "10.3.0.0/24" |
Check the list of valid droplet types or use `doctl compute size list`.
!!! warning
    Do not choose a `controller_type` smaller than 2GB. Smaller droplets are not sufficient for running a controller and bootstrapping will fail.
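For reference, a sketch of how a few of these optional variables might be combined with the required ones in the earlier module definition (values are illustrative, drawn from the table above, not recommendations):

```tf
module "nemo" {
  source = "git::https://github.com/poseidon/typhoon//digital-ocean/flatcar-linux/kubernetes?ref=v1.23.4"

  # Digital Ocean
  cluster_name = "nemo"
  region       = "nyc3"
  dns_zone     = "digital-ocean.example.com"

  # configuration
  os_image         = data.digitalocean_image.flatcar-stable-2303-4-0.id
  ssh_fingerprints = ["d7:9d:79:ae:56:32:73:79:95:88:e3:a2:ab:5d:45:e7"]

  # optional (illustrative values)
  controller_type = "s-2vcpu-4gb"
  worker_type     = "s-1vcpu-2gb"
  worker_count    = 3
  networking      = "calico"
  pod_cidr        = "10.22.0.0/16"
}
```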