Add a terraform deployment for a VM
j-kali committed Nov 12, 2024
1 parent d945237 commit 18d37b2
Showing 10 changed files with 620 additions and 0 deletions.
2 changes: 2 additions & 0 deletions .gitattributes
@@ -2,3 +2,5 @@ client/container_preparation/input_logic/age filter=lfs diff=lfs merge=lfs -text
client/container_preparation/input_logic/curl filter=lfs diff=lfs merge=lfs -text
client/container_preparation/input_logic/jq filter=lfs diff=lfs merge=lfs -text
client/container_preparation/input_logic/tar filter=lfs diff=lfs merge=lfs -text
# encrypted terraform secrets
terraform/secrets/** filter=git-crypt diff=git-crypt
9 changes: 9 additions & 0 deletions .gitignore
@@ -10,3 +10,12 @@

# Undo-tree save-files
*.~undo-tree

# openrc configs
*-openrc.sh

# terraform
.terraform*
## user specific secrets
terraform/secrets/public_keys
terraform/secrets/tunnel_keys
97 changes: 97 additions & 0 deletions terraform/README.md
@@ -0,0 +1,97 @@
# kind VM recipe

Recipe to deploy a simple VM running [kind](https://kind.sigs.k8s.io/) in Pouta.

## VM deployment

The VM is defined in [Terraform](https://www.terraform.io/), with state stored in the `<project name>-terraform-state` bucket under your project in Allas.

To deploy or update, download a config file from Pouta for authentication (the `<project name>-openrc.sh`).
You will also need S3 credentials for accessing the bucket; the recipe below assumes you have them stored in [pass](https://www.passwordstore.org/).
Currently the VM also needs two secrets:
- host SSH private key
- host SSH public key (not really secret, but we classify it as such)

The code looks for them in the following locations:
- `secrets/ssh_host_ed25519_key`
- `secrets/ssh_host_ed25519_key.pub`
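
If you are bootstrapping these secrets from scratch, the host key pair can be generated locally with `ssh-keygen`; a minimal sketch, assuming OpenSSH is installed (the paths match the layout above, the comment string is arbitrary):

```shell
# Generate the VM host key pair once; git-crypt encrypts it in the repo.
# Skipped if the key already exists, to avoid overwriting a live host identity.
mkdir -p secrets
if [ ! -f secrets/ssh_host_ed25519_key ]; then
    ssh-keygen -t ed25519 -N '' -C 'hpcs-vm host key' -f secrets/ssh_host_ed25519_key
fi
```

Keeping the host key in the repository means the VM keeps the same SSH identity across rebuilds, so clients do not get host-key-changed warnings.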

After cloning the repository, unlock the secrets with

-> git-crypt unlock

Put the public SSH keys that should have admin access in the `secrets/public_keys` file.
If some users should only have access to tunnel ports on the VM, add their keys to the `secrets/tunnel_keys` file; otherwise just `touch secrets/tunnel_keys`.
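
For example, a sketch of populating both files (the admin key path is hypothetical; point it at your own key):

```shell
# Collect admin keys into secrets/public_keys, one key per line,
# and make sure secrets/tunnel_keys exists even if it stays empty.
ADMIN_PUBKEY="$HOME/.ssh/id_ed25519.pub"   # hypothetical path to your key
mkdir -p secrets
if [ -f "$ADMIN_PUBKEY" ]; then
    cat "$ADMIN_PUBKEY" >> secrets/public_keys
fi
touch secrets/public_keys secrets/tunnel_keys
```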
After both of those files are present, you should be able to deploy the VM:

# authenticate
-> source project_2007468-openrc.sh
# for simplicity of this example we just export the S3 credentials
-> export AWS_ACCESS_KEY_ID=$(pass fancy_project/aws_key)
-> export AWS_SECRET_ACCESS_KEY=$(pass fancy_project/aws_secret)
# init
-> terraform init
# apply
-> terraform apply

And wait for things to finish, including package updates and installations on the VM.
As one of the outputs you should see the address of your VM, e.g.:

Outputs:

address = "128.214.254.127"

## Connecting to kind

It takes a few moments for everything to finish setting up on the VM.
Once it finishes, the VM should be running a configured `kind` cluster with the dashboard deployed.
You can download your config file and access the cluster; note that access to the API is restricted to trusted networks only:

-> scp ubuntu@128.214.254.127:.kube/remote-config .
-> export KUBECONFIG=$(pwd)/remote-config
-> kubectl auth whoami
ATTRIBUTE VALUE
Username kubernetes-admin
Groups [kubeadm:cluster-admins system:authenticated]

To check, for example, whether the dashboard is ready:

-> kubectl get all --namespace kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
pod/kubernetes-dashboard-api-5cd64dbc99-xjbj8 1/1 Running 0 2m54s
pod/kubernetes-dashboard-auth-5c8859fcbd-zt2lm 1/1 Running 0 2m54s
pod/kubernetes-dashboard-kong-57d45c4f69-5gv2d 1/1 Running 0 2m54s
pod/kubernetes-dashboard-metrics-scraper-df869c886-chxx4 1/1 Running 0 2m54s
pod/kubernetes-dashboard-web-6ccf8d967-fsctp 1/1 Running 0 2m54s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes-dashboard-api ClusterIP 10.96.149.208 <none> 8000/TCP 2m55s
service/kubernetes-dashboard-auth ClusterIP 10.96.140.195 <none> 8000/TCP 2m55s
service/kubernetes-dashboard-kong-proxy ClusterIP 10.96.35.136 <none> 443/TCP 2m55s
service/kubernetes-dashboard-metrics-scraper ClusterIP 10.96.222.176 <none> 8000/TCP 2m55s
service/kubernetes-dashboard-web ClusterIP 10.96.139.1 <none> 8000/TCP 2m55s

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kubernetes-dashboard-api 1/1 1 1 2m54s
deployment.apps/kubernetes-dashboard-auth 1/1 1 1 2m54s
deployment.apps/kubernetes-dashboard-kong 1/1 1 1 2m54s
deployment.apps/kubernetes-dashboard-metrics-scraper 1/1 1 1 2m54s
deployment.apps/kubernetes-dashboard-web 1/1 1 1 2m54s

NAME DESIRED CURRENT READY AGE
replicaset.apps/kubernetes-dashboard-api-5cd64dbc99 1 1 1 2m54s
replicaset.apps/kubernetes-dashboard-auth-5c8859fcbd 1 1 1 2m54s
replicaset.apps/kubernetes-dashboard-kong-57d45c4f69 1 1 1 2m54s
replicaset.apps/kubernetes-dashboard-metrics-scraper-df869c886 1 1 1 2m54s
replicaset.apps/kubernetes-dashboard-web-6ccf8d967 1 1 1 2m54s

In this setup the dashboard is not particularly secure by default, so no external route is set up. To access it:

# Generate a token to login to the dashboard with
-> kubectl -n kubernetes-dashboard create token admin-user
# Forward the dashboard to your machine
-> kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443
Forwarding from 127.0.0.1:8443 -> 8443
Forwarding from [::1]:8443 -> 8443

Then view the dashboard in your browser at `https://localhost:8443`, using the generated token to log in.
Note that the cluster and the dashboard use a self-signed certificate, so your browser will complain about it.
109 changes: 109 additions & 0 deletions terraform/cloud-config.yaml
@@ -0,0 +1,109 @@
#cloud-config
package_update: true
package_upgrade: true
package_reboot_if_required: true
apt:
sources:
docker.list:
source: deb [arch=amd64] https://download.docker.com/linux/ubuntu $RELEASE stable
keyid: 9DC858229FC7DD38854AE2D88D81803C0EBFCD88
helm.list:
source: deb [arch=amd64] https://baltocdn.com/helm/stable/debian/ all main
keyid: 81BF832E2F19CD2AA0471959294AC4827C1A168A # https://baltocdn.com/helm/signing.asc
packages:
- ca-certificates
- containerd.io
- curl
- docker-ce
- docker-ce-cli
- gnupg
- helm
- lsb-release
- uidmap
- net-tools
- yq
# fun utils
- git
- tmux
- wget
groups:
- docker
users:
- name: ubuntu
lock_passwd: true
shell: /bin/bash
ssh_authorized_keys:
%{ for key in public_keys ~}
- ${key}
%{ endfor ~}
groups:
- docker
- sudo
sudo:
- ALL=(ALL) NOPASSWD:ALL
- name: k8s-api
lock_passwd: true
shell: /usr/sbin/nologin
ssh_authorized_keys:
%{ for key in public_keys ~}
- ${key}
%{ endfor ~}
%{ for key in tunnel_keys ~}
- ${key}
%{ endfor ~}
ssh_genkeytypes:
- ed25519
ssh_keys:
ed25519_private: |
${ed25519_private}
ed25519_public: ${ed25519_public}
runcmd:
- systemctl disable --now docker.service docker.socket
- rm -f /var/run/docker.sock
- loginctl enable-linger ubuntu
- chown ubuntu:root /home/ubuntu # in some versions docker setup has problems without it
- su - ubuntu -c '/usr/local/sbin/setup.sh'
write_files:
- encoding: b64
content: ${setup_sha512}
owner: root:root
path: /etc/setup-sha512
- content: net.ipv4.ip_unprivileged_port_start=80
path: /etc/sysctl.d/unprivileged_port_start.conf
- encoding: b64
content: ${setup_sh}
owner: root:root
path: /usr/local/sbin/setup.sh
permissions: '0755'
- encoding: b64
content: ${hpcs_cluster_yaml}
owner: root:root
path: /etc/hpcs/hpcs-cluster.yaml
permissions: '0644'
- encoding: b64
content: ${kind_dashboard_admin_yaml}
owner: root:root
path: /etc/hpcs/admin-user.yaml
permissions: '0644'
- source:
uri: https://kind.sigs.k8s.io/dl/v0.24.0/kind-Linux-amd64
owner: root:root
path: /usr/bin/kind
permissions: '0755'
- source:
uri: https://dl.k8s.io/v1.31.2/bin/linux/amd64/kubectl
owner: root:root
path: /usr/bin/kubectl
permissions: '0755'
fs_setup:
- label: data
filesystem: 'ext4'
device: /dev/vdb
overwrite: false
- label: docker
filesystem: 'ext4'
device: /dev/vdc
overwrite: false
mounts:
- ['LABEL=data', /var/lib/data, "ext4", "defaults"]
- ['LABEL=docker', /var/lib/docker, "ext4", "defaults"]
18 changes: 18 additions & 0 deletions terraform/files/admin-user.yaml
@@ -0,0 +1,18 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kubernetes-dashboard
39 changes: 39 additions & 0 deletions terraform/files/hpcs-cluster.yaml
@@ -0,0 +1,39 @@
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: hpcs
networking:
apiServerAddress: 0.0.0.0
apiServerPort: 6444
nodes:
- role: control-plane
kubeadmConfigPatches:
- |
kind: InitConfiguration
nodeRegistration:
kubeletExtraArgs:
node-labels: "ingress-ready=true"
authorization-mode: "AlwaysAllow"
extraPortMappings:
- containerPort: 80
hostPort: 80
- containerPort: 443
hostPort: 443
- containerPort: 30001
hostPort: 30001
- containerPort: 30002
hostPort: 30002
- containerPort: 30003
hostPort: 30003
- containerPort: 30004
hostPort: 30004
kubeadmConfigPatchesJSON6902:
- group: kubeadm.k8s.io
version: v1beta3
kind: ClusterConfiguration
patch: |
- op: add
path: /apiServer/certSANs/-
value: MY_PUBLIC_IP
- op: add
path: /apiServer/certSANs/-
value: MY_PUBLIC_HOSTNAME
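
The `MY_PUBLIC_IP` and `MY_PUBLIC_HOSTNAME` placeholders above are filled in on the VM by `setup.sh`; a self-contained sketch of that substitution, with hypothetical example values standing in for the detected address and hostname:

```shell
# Stand-ins for the values setup.sh detects at boot (hypothetical).
MY_PUBLIC_IP="128.214.254.127"
MY_PUBLIC_HOSTNAME="vm0000.kaj.pouta.csc.fi"
# A tiny template using the same placeholder tokens as hpcs-cluster.yaml.
printf 'certSAN: MY_PUBLIC_IP\ncertSAN: MY_PUBLIC_HOSTNAME\n' > cluster-template.yaml
# Replace both tokens, as setup.sh does with its sed invocations.
sed -e "s/MY_PUBLIC_IP/${MY_PUBLIC_IP}/" \
    -e "s/MY_PUBLIC_HOSTNAME/${MY_PUBLIC_HOSTNAME}/" \
    cluster-template.yaml > cluster.yaml
```

With the real values substituted, the API server's certificate covers the VM's public address, which is what lets the downloaded `remote-config` work from outside.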
18 changes: 18 additions & 0 deletions terraform/files/setup.sh
@@ -0,0 +1,18 @@
#!/bin/bash -eu
export XDG_RUNTIME_DIR=/run/user/1000

/usr/bin/dockerd-rootless-setuptool.sh install -f

# detect the VM's public IP address
MY_PUBLIC_IP=$(curl ifconfig.io 2> /dev/null)
export MY_PUBLIC_IP=${MY_PUBLIC_IP}
# reverse DNS: rev/cut/rev picks the last field of the `host` output, and tail -c +2 strips the trailing dot
MY_PUBLIC_HOSTNAME=$(host "${MY_PUBLIC_IP}" | rev | cut -d " " -f 1 | tail -c +2 | rev)
export MY_PUBLIC_HOSTNAME=${MY_PUBLIC_HOSTNAME}
# fill in the placeholders in the kind cluster template
sed -e "s/MY_PUBLIC_IP/${MY_PUBLIC_IP}/" /etc/hpcs/hpcs-cluster.yaml > "${HOME}/hpcs-cluster.yaml"
sed -i -e "s/MY_PUBLIC_HOSTNAME/${MY_PUBLIC_HOSTNAME}/" "${HOME}/hpcs-cluster.yaml"
/usr/bin/kind create cluster --config "${HOME}/hpcs-cluster.yaml"

yq --yaml-output ".clusters[0].cluster.server = \"https://${MY_PUBLIC_HOSTNAME}:6444\"" "${HOME}/.kube/config" > "${HOME}/.kube/remote-config"

helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
kubectl apply -f /etc/hpcs/admin-user.yaml
Binary file added terraform/secrets/ssh_host_ed25519_key
Binary file not shown.
Binary file added terraform/secrets/ssh_host_ed25519_key.pub
Binary file not shown.