In this guide, we will go step-by-step through the process of using Crossplane Managed Resources
to provision and manage a GKE cluster from a local KIND management cluster. Here's what we'll do:
- Set up a local KIND cluster as our management environment and install Crossplane into it.
- Configure Google Cloud to allow Crossplane to create and manage GKE clusters.
- Add Crossplane-managed custom resources that define and manage remote GKE clusters.
- Declaratively provision a GKE cluster from within your KIND environment using Crossplane!!!
Confession: this is not how it is usually done. In real life, Crossplane Managed Resources are not created directly, as we do in this demo. They are typically created as part of a Crossplane Composition. However, this exercise is worthwhile because it will help you understand the first principles of how Crossplane works.
Okay! Let's do this thing!!!
- Docker Desktop
- KIND
- Helm
- gcloud CLI, authorized to access Google Cloud
- yq
When you create your KIND cluster, a kubeconfig
file gets created that holds your cluster connection credentials (among other things).
Tell KIND to store its configuration in a dedicated file -- kubeconfig-kind.yaml
. That file has already been added to .gitignore
.
export KUBECONFIG=$PWD/kubeconfig-kind.yaml
Next, create a KIND cluster.
kind create cluster
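If you want a quick sanity check before moving on, confirm the cluster is up (this assumes the default cluster name kind, which gives you a kubectl context named kind-kind):
kind get clusters
kubectl cluster-info --context kind-kind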
Install Crossplane into the KIND cluster using Helm.
First, enable the Crossplane Helm Chart repository.
helm repo add \
crossplane-stable https://charts.crossplane.io/stable
helm repo update
Then install the Crossplane components into a crossplane-system
namespace.
helm install crossplane \
crossplane-stable/crossplane \
--namespace crossplane-system \
--create-namespace
Verify your Crossplane installation.
kubectl get pods -n crossplane-system
NAME READY STATUS RESTARTS AGE
crossplane-7b47779878-jv9kk 1/1 Running 0 115s
crossplane-rbac-manager-7f8b68c844-4lz2f 1/1 Running 0 115s
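If you would rather wait on the deployments than eyeball pod names, something like this should block until Crossplane reports itself Available (deployment names inferred from the pod names above):
kubectl wait deployment crossplane crossplane-rbac-manager \
  --namespace crossplane-system \
  --for condition=Available \
  --timeout 300s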
If you like, check out all of the custom resources that got added to your cluster as part of the Crossplane installation:
kubectl api-resources | grep crossplane
compositeresourcedefinitions xrd,xrds apiextensions.crossplane.io/v1 false CompositeResourceDefinition
compositionrevisions comprev apiextensions.crossplane.io/v1 false CompositionRevision
compositions comp apiextensions.crossplane.io/v1 false Composition
environmentconfigs envcfg apiextensions.crossplane.io/v1alpha1 false EnvironmentConfig
usages apiextensions.crossplane.io/v1alpha1 false Usage
configurationrevisions pkg.crossplane.io/v1 false ConfigurationRevision
configurations pkg.crossplane.io/v1 false Configuration
controllerconfigs pkg.crossplane.io/v1alpha1 false ControllerConfig
deploymentruntimeconfigs pkg.crossplane.io/v1beta1 false DeploymentRuntimeConfig
functionrevisions pkg.crossplane.io/v1 false FunctionRevision
functions pkg.crossplane.io/v1 false Function
locks pkg.crossplane.io/v1beta1 false Lock
providerrevisions pkg.crossplane.io/v1 false ProviderRevision
providers pkg.crossplane.io/v1 false Provider
storeconfigs secrets.crossplane.io/v1alpha1 false StoreConfig
Create a Google Cloud project.
export PROJECT_ID=wiggity-$(date +%Y%m%d%H%M%S)
gcloud projects create $PROJECT_ID
Enable Google Cloud Kubernetes API.
echo "https://console.cloud.google.com/marketplace/product/google/container.googleapis.com?project=$PROJECT_ID"
# Open the URL from the output and enable the Kubernetes API
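If your project already has a billing account linked and you would rather stay in the terminal, enabling the API with gcloud should work too (the console route above is the one this guide uses, so treat this as an alternative):
gcloud services enable container.googleapis.com \
  --project $PROJECT_ID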
Create a Google Cloud Service Account named wiggitywhitney
.
export SA_NAME=wiggitywhitney
export SA="${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
gcloud iam service-accounts create $SA_NAME \
--project $PROJECT_ID
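To double-check that the service account exists before moving on, list it:
gcloud iam service-accounts list \
  --project $PROJECT_ID \
  --filter "email:$SA"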
Bind the wiggitywhitney
service account to an admin role.
export ROLE=roles/admin
gcloud projects add-iam-policy-binding \
--role $ROLE $PROJECT_ID \
--member serviceAccount:$SA
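If you want to confirm that the binding took effect, one way to do it is to filter the project's IAM policy down to our service account:
gcloud projects get-iam-policy $PROJECT_ID \
  --flatten "bindings[].members" \
  --filter "bindings.members:serviceAccount:$SA" \
  --format "table(bindings.role)"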
Create credentials in a gcp-creds.json
file (already added to .gitignore
).
gcloud iam service-accounts keys create gcp-creds.json \
--project $PROJECT_ID \
--iam-account $SA
Create a secret named gcp-secret
that contains the GCP credentials that we just created and add it to the crossplane-system
namespace.
kubectl --namespace crossplane-system \
create secret generic gcp-secret \
--from-file creds=./gcp-creds.json
To see your new secret, run the following:
kubectl --namespace crossplane-system \
get secret gcp-secret \
--output yaml
View the provider-gcp-container
manifest. When applied, it will install the Crossplane infrastructure provider for GCP.
cat crossplane-config/provider-gcp-container.yaml
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-gcp-container
spec:
  package: xpkg.upbound.io/upbound/provider-gcp-container:v1.7.0
Providers extend Crossplane by installing controllers for new kinds of managed resources.
Apply provider-gcp-container
to your cluster to add three new custom resource definitions. Each of these CRDs defines a Managed Resource
, and each one is Crossplane's representation of a GCP resource.
Once this Provider is installed, you will have the ability to manage external cloud resources via the Kubernetes API.
kubectl apply -f crossplane-config/provider-gcp-container.yaml
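The provider package takes a little while to pull and install. If you want to block until it reports healthy rather than polling, a wait along these lines should do it:
kubectl wait provider.pkg.crossplane.io/provider-gcp-container \
  --for condition=Healthy \
  --timeout 10m
kubectl get providers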
To see your three new Crossplane custom resource definitions, run the following:
kubectl api-resources | grep "container.gcp.upbound.io"
clusters container.gcp.upbound.io/v1beta2 false Cluster
nodepools container.gcp.upbound.io/v1beta2 false NodePool
registries container.gcp.upbound.io/v1beta1 false Registry
(Spoiler alert: later we're going to create Cluster
and NodePool
resources!)
Next we need to teach Crossplane how to connect to our Google Cloud project with the permissions that we created in the last step. We do that using a Crossplane ProviderConfig
resource.
Run this command to add your project name to the providerconfig.yaml
file that is already in this repo:
yq --inplace ".spec.projectID = \"$PROJECT_ID\"" crossplane-config/providerconfig.yaml
As you can see, our ProviderConfig
references both our GCP project name and the gcp-secret
Kubernetes secret that we created earlier.
cat crossplane-config/providerconfig.yaml
apiVersion: gcp.upbound.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  projectID: <your $PROJECT_ID>
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: gcp-secret
      key: creds
Let's apply it to the cluster.
kubectl apply -f crossplane-config/providerconfig.yaml
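ProviderConfig is cluster-scoped, so you can confirm it landed with a plain get (the full resource name avoids any ambiguity with other providers' configs):
kubectl get providerconfigs.gcp.upbound.io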
Great! Now we can use Crossplane and our local KIND cluster to create a GKE cluster!
- API Documentation for the Crossplane GCP Cluster Managed Resource
- API Documentation for the Crossplane GCP NodePool Managed Resource
Apply this minimal clusterandnodepool
resource definition to make a GKE cluster!
kubectl apply -f cluster-definitions/clusterandnodepool.yaml
To see the Crossplane resources that got created, run the following:
kubectl get managed
This is creating an external Google Cloud Kubernetes cluster, so it may take some minutes for the resources to become Ready
. Mine took about 12 minutes.
NAME SYNCED READY EXTERNAL-NAME AGE
cluster.container.gcp.upbound.io/newclusterwhodis True True newclusterwhodis 12m
NAME SYNCED READY EXTERNAL-NAME AGE
nodepool.container.gcp.upbound.io/newnodepoolwhodis True True newnodepoolwhodis 12m
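If you would rather block until the resources are ready instead of re-running kubectl get managed, waits along these lines should work (the timeout is a guess; GKE provisioning times vary):
kubectl wait cluster.container.gcp.upbound.io/newclusterwhodis \
  --for condition=Ready \
  --timeout 20m
kubectl wait nodepool.container.gcp.upbound.io/newnodepoolwhodis \
  --for condition=Ready \
  --timeout 20m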
You made a Crossplane Cluster
resource and a Crossplane NodePool
resource, which in turn made an external GKE Cluster and a GKE Node Pool! Let's view the manifests.
cat cluster-definitions/clusterandnodepool.yaml
apiVersion: container.gcp.upbound.io/v1beta1
kind: Cluster
metadata:
  name: newclusterwhodis
  labels:
    cluster-name: newclusterwhodis
spec:
  forProvider:
    deletionProtection: false
    removeDefaultNodePool: true
    initialNodeCount: 1
    location: us-central1-b
---
apiVersion: container.gcp.upbound.io/v1beta1
kind: NodePool
metadata:
  name: newnodepoolwhodis
  labels:
    cluster-name: newclusterwhodis
spec:
  forProvider:
    clusterSelector:
      matchLabels:
        cluster-name: newclusterwhodis
    nodeCount: 1
    nodeConfig:
      - preemptible: true
        machineType: e2-medium
        oauthScopes:
          - https://www.googleapis.com/auth/cloud-platform
You can get
and describe
your Crossplane Cluster
and NodePool
resources just like any other Kubernetes resource.
kubectl get cluster newclusterwhodis
NAME SYNCED READY EXTERNAL-NAME AGE
newclusterwhodis True True newclusterwhodis 14m
kubectl describe nodepool newnodepoolwhodis
# This has too long of an output to display here. But you should run it!
Let's view our newly created external resources! Which method do you prefer?
echo "https://console.cloud.google.com/kubernetes/list/overview?project=$PROJECT_ID"
# Open the URL from the output
Click around and try and find the Easter egg label set on the machine that our NodePool
resource created!
Do you give up? Find more detailed instructions here.
If needed, authorize gcloud
to access the Google Cloud Platform:
gcloud auth login
Set the project
property in your gcloud configuration
gcloud config set project $PROJECT_ID
See the cluster that you and Crossplane made!
gcloud container clusters list
NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
newclusterwhodis us-central1-b 1.30.3-gke.1639000 104.155.132.170 e2-medium 1.30.3-gke.1639000 1 RUNNING
Describe the cluster that you and Crossplane made!
gcloud container clusters describe newclusterwhodis \
--region us-central1-b
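If you want to poke at the new GKE cluster with kubectl, you can pull its credentials into a separate kubeconfig so you don't clobber the KIND one (the kubeconfig-gke.yaml file name here is just an example):
KUBECONFIG=$PWD/kubeconfig-gke.yaml gcloud container clusters get-credentials newclusterwhodis \
  --zone us-central1-b \
  --project $PROJECT_ID
KUBECONFIG=$PWD/kubeconfig-gke.yaml kubectl get nodes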
Explore! Try and find the Easter egg label set on the machine that our NodePool
resource created!
Do you give up? Find more detailed instructions here.
Because your GKE cluster is being managed by the instance of Crossplane that is running in your KIND cluster, deleting your local newclusterwhodis
Crossplane Cluster
resource and your newnodepoolwhodis
Crossplane NodePool
resource will also delete the associated GKE resources that are running in Google Cloud.
But you don't have to take my word for it! Let's do it!
kubectl delete cluster newclusterwhodis
kubectl delete nodepool newnodepoolwhodis
In a few minutes, once the commands resolve, use your preferred method (web UI or CLI) to see that the GKE resources have been deleted.
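Both of these should come back empty once Crossplane has finished tearing everything down:
kubectl get managed
gcloud container clusters list --project $PROJECT_ID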
To summarize, we just did the following:
- We created a KIND cluster on our local machine
- We installed Crossplane
- We configured Google Cloud to allow Crossplane to create and manage GKE resources by creating a Service Account, giving it admin permissions, and saving credentials in a secret accessible by Crossplane
- We enabled Crossplane to create and manage GKE clusters by adding Crossplane-managed Cluster and NodePool custom resources to the cluster via a Crossplane Provider.
- We created instances of those resources, which provisioned a remote GKE cluster that is managed by Crossplane! Huzzah!
- We deleted the remote GKE resources simply by deleting the Crossplane resources
This exercise is an important part of understanding how Crossplane works. However, in real life, resources like Cluster
and NodePool
are not created directly, like we just did. They're usually created as part of a Crossplane Composition
. We'll learn about those in Part 2!
TO BE CONTINUED IN PART 2 - Compositions
Do not run these if you are continuing to PART 2 - Compositions
!
Destroy KIND cluster
kind delete cluster
Delete the GCP project
gcloud projects delete $PROJECT_ID --quiet
Delete the kubeconfig file
echo $KUBECONFIG
## MAKE SURE YOU ARE DELETING THE RIGHT FILE
rm -rf -i $PWD/kubeconfig-kind.yaml