# clusterctl

`clusterctl` is the SIG Cluster Lifecycle-sponsored tool that implements the Cluster API.

Read the experience doc here. To gain viewing permissions, please join either the kubernetes-dev or kubernetes-sig-cluster-lifecycle Google Group.
Due to the limitations described below, you must currently compile and run a `clusterctl` binary from your chosen provider implementation rather than using the binary from this repository.
1. Cluster API runs its operations in Kubernetes, so a pre-existing or temporary bootstrap cluster is required. Currently, we support multiple methods to bootstrap Cluster API: `kind` (preferred), `minikube`, or any pre-existing cluster.
2. If you are using `kind` or an existing Kubernetes cluster, go to step 3. If you are using `minikube`, install a driver. For Linux, we recommend `kvm2`; for macOS, we recommend VirtualBox.
3. Build the `clusterctl` tool:

   ```shell
   $ git clone https://github.com/kubernetes-sigs/cluster-api $GOPATH/src/sigs.k8s.io/cluster-api
   $ cd $GOPATH/src/sigs.k8s.io/cluster-api/cmd/clusterctl/
   $ go build
   ```
`clusterctl` can only use a provider that is compiled in. As provider-specific code has been moved out of this repository, running a `clusterctl` binary compiled from this repository isn't particularly useful.

There is ongoing work to rectify this issue, which centers around removing the `ProviderDeployer` interface from the `clusterdeployer` package. The two tracking issues for removing the two functions in the interface are https://github.com/kubernetes-sigs/cluster-api/issues/158 and https://github.com/kubernetes-sigs/cluster-api/issues/160.
4. Create the `cluster.yaml`, `machines.yaml`, `provider-components.yaml`, and `addons.yaml` files configured for your cluster. See the provider-specific templates and generation tools for your chosen provider implementation.
5. Create a cluster:
   - Bootstrap cluster: use `--bootstrap-type`; currently only `kind` and `minikube` are supported.

     ```shell
     ./clusterctl create cluster --provider <provider> --bootstrap-type <bootstrap-type> \
       -c cluster.yaml -m machines.yaml -p provider-components.yaml -a addons.yaml
     ```

     If you are using minikube, choose a specific minikube driver with the `--bootstrap-flags vm-driver=xxx` command-line parameter. For example, to use the kvm2 driver with clusterctl you would add `--bootstrap-flags vm-driver=kvm2`.
   - Existing cluster: use `--bootstrap-cluster-kubeconfig` when you have an existing Kubernetes cluster.

     ```shell
     ./clusterctl create cluster --provider <provider> --bootstrap-cluster-kubeconfig <kubeconfig> \
       -c cluster.yaml -m machines.yaml -p provider-components.yaml -a addons.yaml
     ```
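As an illustration only, a `cluster.yaml` for the `cluster.k8s.io/v1alpha1` API might be sketched as follows. The name and CIDR ranges here are placeholders, and the `providerSpec` contents are entirely provider-specific, so rely on your provider's templates and generation tools rather than this sketch:

```yaml
apiVersion: cluster.k8s.io/v1alpha1
kind: Cluster
metadata:
  name: my-cluster                    # placeholder name
spec:
  clusterNetwork:
    services:
      cidrBlocks: ["10.96.0.0/12"]    # example service CIDR
    pods:
      cidrBlocks: ["192.168.0.0/16"]  # example pod CIDR
    serviceDomain: "cluster.local"
  providerSpec: {}                    # filled in by your provider's generation tools
```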
Additional advanced flags can be found via help:

```shell
./clusterctl create cluster --help
```

Some environment variables are also supported:

- `CLUSTER_API_MACHINE_READY_TIMEOUT`: set this value to adjust the timeout, in minutes, for a machine to become ready. The default timeout is currently 30 minutes; for example, `export CLUSTER_API_MACHINE_READY_TIMEOUT=45` will extend the timeout to 45 minutes.
- `CLUSTER_API_KUBECONFIG_READY_TIMEOUT`: set this value to adjust the timeout, in minutes, to wait for the kubeconfig to be ready. Defaults to 20 minutes.
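For example, to extend both timeouts in the shell before invoking clusterctl (the values here are arbitrary):

```shell
# Extend the machine-ready timeout to 45 minutes (default is 30)
export CLUSTER_API_MACHINE_READY_TIMEOUT=45
# Extend the kubeconfig-ready timeout to 25 minutes (default is 20)
export CLUSTER_API_KUBECONFIG_READY_TIMEOUT=25
# Confirm the values clusterctl will pick up from the environment
echo "machine ready timeout: ${CLUSTER_API_MACHINE_READY_TIMEOUT} minutes"
echo "kubeconfig ready timeout: ${CLUSTER_API_KUBECONFIG_READY_TIMEOUT} minutes"
```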
If you are using kind, set the `KUBECONFIG` environment variable before using kubectl:

```shell
export KUBECONFIG="$(kind get kubeconfig-path --name="clusterapi")"
```
Once you have created a cluster, you can interact with the cluster and machine resources using kubectl:

```shell
$ kubectl --kubeconfig kubeconfig get clusters
$ kubectl --kubeconfig kubeconfig get machines
$ kubectl --kubeconfig kubeconfig get machines -o yaml
```
NOTE: There is no need to specify `--kubeconfig` if your kubeconfig is located in the default location, `$HOME/.kube/config`, or if you have already set the `KUBECONFIG` environment variable.
You can scale your cluster by adding additional individual Machines, or by adding a MachineSet or MachineDeployment and changing the number of replicas.
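As a hedged sketch of the MachineSet/MachineDeployment route, a `cluster.k8s.io/v1alpha1` MachineDeployment might look like the following; the name, labels, and kubelet version are placeholders, and the machine `providerSpec` comes from your provider's templates. Changing `replicas` and re-applying the manifest scales the worker count:

```yaml
apiVersion: cluster.k8s.io/v1alpha1
kind: MachineDeployment
metadata:
  name: my-workers              # placeholder name
spec:
  replicas: 3                   # scale the cluster by changing this count
  selector:
    matchLabels:
      set: my-workers
  template:
    metadata:
      labels:
        set: my-workers
    spec:
      versions:
        kubelet: 1.13.0         # example version
      providerSpec: {}          # provider-specific machine configuration
```

An updated replica count could then be applied with `kubectl --kubeconfig kubeconfig apply -f` against this file.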
NOT YET SUPPORTED!
When you are ready to remove your cluster, you can use clusterctl to delete it:

```shell
./clusterctl delete cluster --kubeconfig kubeconfig
```
Please also check the documentation for your provider implementation to determine if any additional steps need to be taken to completely clean up your cluster.
If you are interested in adding to this project, see the contributing guide for information on how you can get involved.