Rancher Kubernetes Engine v2 with HashiCorp Vault
The following are required:
- a HashiCorp Vault instance (Community or Enterprise)
- a HashiCorp Vault token
- SSH access to the control plane nodes as an admin
- the necessary user permissions to handle files in /etc and restart services (root is best, sudo is better ;))
- the vault CLI tool
- the kubectl CLI tool
Export environment variables to reach the HashiCorp Vault instance:
export VAULT_ADDR="https://addresss:8200"
export VAULT_TOKEN="s.oYpiOmnWL0PFDPS2ImJTdhRf.CxT2N"
NOTE: when using HashiCorp Vault Enterprise, the concept of namespace is introduced.
This requires an additional environment variable to target the base root namespace:
export VAULT_NAMESPACE=admin
or a sub namespace like admin/gke01
export VAULT_NAMESPACE=admin/gke01
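Before going further, a quick sanity check with the vault CLI confirms that the exported variables actually reach the instance (a minimal sketch; the output will differ per deployment):
# confirm connectivity to the Vault instance
vault status
# confirm the exported VAULT_TOKEN (and VAULT_NAMESPACE, if any) is valid
vault token lookup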
Make sure to have a Transit engine enabled within Vault:
vault secrets enable transit
Success! Enabled the transit secrets engine at: transit/
List the secret engines:
vault secrets list
Path Type Accessor Description
---- ---- -------- -----------
cubbyhole/ ns_cubbyhole ns_cubbyhole_491a549d per-token private secret storage
identity/ ns_identity ns_identity_01d57d96 identity store
sys/ ns_system ns_system_d0f157ca system endpoints used for control, policy and debugging
transit/ transit transit_3a41addc n/a
NOTE about a missing VAULT_NAMESPACE
Not exporting VAULT_NAMESPACE will result in an error message similar to the following when enabling the transit engine or even listing the secret engines:
vault secrets enable transit
Error enabling: Error making API request.
URL: POST https://vault-dev.vault.3c414da7-6890-49b8-b635-e3808a5f4fee.aws.hashicorp.cloud:8200/v1/sys/mounts/transit
Code: 403. Errors:
* 1 error occurred:
* permission denied
Finally, create a transit key:
vault write -f transit/keys/vault-kms-demo
Success! Data written to: transit/keys/vault-kms-demo
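Optionally, read the key back to confirm its type and latest version:
vault read transit/keys/vault-kms-demo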
Building a Kubernetes RKE2 cluster takes a different approach than RKE from a configuration perspective, as there is no longer a cluster.yml configuration file. Use /etc/rancher/rke2/config.yaml instead.
The best course of action is to:
- deploy a fresh RKE2 control plane with the /var/lib/rancher/rke2/server/cred/encryption-config.json configured with an empty key
- when the entire control plane has converged and is stable AND all the vault-kms-provider daemonset pods are up and running (with potential expected crashes), modify the /etc/rancher/rke2/config.yaml, changing the --encryption-provider-config to point to vault-kms-encryption-config.yaml
Warning
Do not proceed with the below steps if you already have an existing RKE2 cluster deployed: its etcd datastore is already encrypted at rest by default with a key configured in the file /var/lib/rancher/rke2/server/cred/encryption-config.json.
Removing that key or reconfiguring it with the below steps will prevent access to previously created secrets!
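To check whether an existing cluster already relies on such a key, the file can be inspected on a control plane node (root access assumed):
sudo cat /var/lib/rancher/rke2/server/cred/encryption-config.json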
- prepare the following directory structure:
├── etc
│   └── rancher
│       └── rke2
│           └── config.yaml
├── opt
│   └── vault-kms
│       └── config.yaml
└── var
    └── lib
        └── rancher
            └── rke2
                └── server
                    ├── cred
                    │   └── vault-kms-encryption-config.yaml
                    └── manifests
                        └── trousseau-hcvault.yaml
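A minimal sketch to pre-create this structure on each control plane node (paths taken from the tree above):
sudo mkdir -p /etc/rancher/rke2 /opt/vault-kms /var/lib/rancher/rke2/server/cred /var/lib/rancher/rke2/server/manifests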
Here is the content of each file:
- /etc/rancher/rke2/config.yaml:
# server: https://<address>:9345 #to edit/uncomment for second and third control plane node
# token: <rke2_server_token> #to edit/uncomment for second and third control plane node
kube-apiserver-arg:
- "--encryption-provider-config=/var/lib/rancher/rke2/server/cred/vault-kms-encryption-config.yaml"
kube-apiserver-extra-mount:
- "/opt/vault-kms:/opt/vault-kms"
- /opt/vault-kms/config.yaml:
provider: vault
vault:
keynames:
- <transit-secret-key-name>
address: https://<vault_address>:8200
token: <vault_token>
- /var/lib/rancher/rke2/server/cred/vault-kms-encryption-config.yaml:
kind: EncryptionConfiguration
apiVersion: apiserver.config.k8s.io/v1
resources:
- resources:
- secrets
providers:
- identity: {}
# - kms:
# name: vaultprovider
# endpoint: unix:///opt/vault-kms/vaultkms.socket
# cachesize: 1000
# - identity: {}
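For reference, once the vault-kms-provider daemonset is up and the kube-apiserver has been switched to this file (a later step), the kms provider is expected to be uncommented and listed first, along the lines of this sketch:
kind: EncryptionConfiguration
apiVersion: apiserver.config.k8s.io/v1
resources:
  - resources:
      - secrets
    providers:
      - kms:
          name: vaultprovider
          endpoint: unix:///opt/vault-kms/vaultkms.socket
          cachesize: 1000
      - identity: {}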
- /var/lib/rancher/rke2/server/manifests/trousseau-hcvault.yaml: see DaemonSet file here
- run the following command line on the first control plane/master/server node:
curl -sfL https://get.rke2.io | sh -
systemctl enable --now rke2-server.service
- or run the command generated by the Rancher UI with roles "etcd,control plane"
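Once the service is active, the node status can be verified with the kubectl binary and kubeconfig shipped by RKE2 (default install locations assumed):
sudo /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get nodes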
- get the server node token from /var/lib/rancher/rke2/server/node-token and uncomment/edit the server and token parameters in the /etc/rancher/rke2/config.yaml for control plane/master/server nodes 2 and 3:
- /etc/rancher/rke2/config.yaml:
server: https://myfirstnode:9345 #to edit/uncomment for second and third control plane node
token: <rke2_server_token> #to edit/uncomment for second and third control plane node
kube-apiserver-arg:
- "--encryption-provider-config=/var/lib/rancher/rke2/server/cred/vault-kms-encryption-config.yaml"
kube-apiserver-extra-mount:
- "/opt/vault-kms:/opt/vault-kms"
- run on control plane/master/server nodes 2 and 3:
curl -sfL https://get.rke2.io | sh -
systemctl enable --now rke2-server.service
- or run the command generated by the Rancher UI with roles "etcd,control plane"
At this stage, the control plane will be deployed and converged to a nominal state with no encryption at-rest and no vault-kms-provider. This process might take 5-15 minutes depending on the level of customization.
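One way to confirm convergence is to check the nodes and the vault-kms-provider daemonset (the namespace is deployment-specific, hence the cluster-wide listing):
kubectl get nodes
kubectl get daemonset -A | grep vault-kms-provider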
- modify the file /etc/rancher/rke2/config.yaml on every node of the control plane by commenting out --encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json and uncommenting --encryption-provider-config=/var/lib/rancher/rke2/server/cred/vault-kms-encryption-config.yaml
- run the following command on every node of the control plane:
systemctl restart rke2-server.service
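After the restart, a quick way to verify encryption at rest is to create a test secret and read it straight from etcd: the stored value should carry a k8s:enc:kms:v1 prefix instead of plaintext. This sketch assumes etcdctl is available on the node and uses the default RKE2 certificate locations:
kubectl create secret generic kms-test --from-literal=foo=bar
sudo ETCDCTL_API=3 etcdctl --cacert /var/lib/rancher/rke2/server/tls/etcd/server-ca.crt --cert /var/lib/rancher/rke2/server/tls/etcd/server-client.crt --key /var/lib/rancher/rke2/server/tls/etcd/server-client.key get /registry/secrets/default/kms-test | hexdump -C | head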
Trousseau comes with a Prometheus endpoint for monitoring along with a basic Grafana dashboard.
An example configuration for the Prometheus endpoint access is available within the folder scripts/templates/monitoring under the name prometheus.yaml.
An example configuration for the Grafana dashboard is available within the same folder under the name grafana-dashboard.yaml.
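Assuming a repository checkout and that these templates are standard Kubernetes manifests consumed by an existing Prometheus/Grafana stack, both examples can be applied directly:
kubectl apply -f scripts/templates/monitoring/prometheus.yaml
kubectl apply -f scripts/templates/monitoring/grafana-dashboard.yaml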