en.search-data.min.cca5c9273a609378fa482e2c35e7d6bc341cc4ab64e6f4e19bcfed76e2e58aec.js
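// This file appears to be the FlexSearch index data generated for the documentation
// site's client-side search (exposed below as window.bookSearchIndex).
// A minimal usage sketch, kept commented out so the generated file is unchanged;
// it assumes the FlexSearch 0.6.x API used below and that this script is loaded by
// the documentation pages. The query string and result limit are illustrative only.
//
//   window.addEventListener('load', function () {
//     // By the time 'load' fires, the IIFE below has populated window.bookSearchIndex.
//     var results = window.bookSearchIndex.search('etcd cluster', 10);
//     // Each result exposes the fields declared in the index's store: { title, href }.
//     results.forEach(function (r) { console.log(r.title, '->', r.href); });
//   });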
'use strict';(function(){const b={cache:!0};b.doc={id:'id',field:['title','content'],store:['title','href']};const a=FlexSearch.create('balance',b);window.bookSearchIndex=a,a.add({id:0,href:'/documentation/',title:"Documentation",content:"Documentation Welcome to the Flexkube project documentation. Here you can find all the information about how to use Flexkube.\nFor new users If you are a new user and want to try things out, head to the Getting started section to learn the requirements and how to install the tools provided by Flexkube.\nFollow the guide If you want to see step-by-step how to use Flexkube to create a Kubernetes or etcd cluster, see the Guides section. There you can find various usage scenarios.\nSee available resources To find out what resources (e.g. kubelets, etcd cluster) Flexkube can manage, see the Resources section. It includes a description of each resource and the features it provides.\nExample configurations To see configuration examples, go to the Examples section, which contains various snippets for CLI, Terraform and Go users.\nExplore Helm charts If you are interested in the Helm charts maintained by Flexkube, see the Helm charts section.\nDive into Flexkube internals If you would like to learn more about how Flexkube works, see the Concepts section. There you can find descriptions of various aspects of how Flexkube functions.\nReference documentation If you want to know more about available configuration options, see the Reference section.\n"}),a.add({id:1,href:'/documentation/concepts/',title:"Concepts",content:"Concepts "}),a.add({id:2,href:'/documentation/concepts/managing-certificates/',title:"Managing Certificates",content:""}),a.add({id:3,href:'/documentation/concepts/managing-containers/',title:"Managing Containers",content:"Managing containers This document should explain how libflexkube manages the containers.\n"}),a.add({id:4,href:'/documentation/concepts/self-hosted-kubernetes-controlplane/',title:"Self Hosted Kubernetes Controlplane",content:"Self-hosted Kubernetes controlplane This document should describe why Flexkube uses and recommends using a self-hosted Kubernetes controlplane, how it works, etc.\n"}),a.add({id:5,href:'/documentation/concepts/supported-container-runtimes/',title:"Supported Container Runtimes",content:"Supported container runtimes This document should explain how libflexkube utilizes different container runtimes and link to existing implementations. 
Also, it should describe what other possible container runtimes could be added.\n"}),a.add({id:6,href:'/documentation/concepts/supported-container-runtimes/docker/',title:"Docker",content:""}),a.add({id:7,href:'/documentation/concepts/supported-transport-protocols/',title:"Supported Transport Protocols",content:"Supported transport protocols This document should explain what transport protocols are in libflexkube, how they are used, and should link to all implementations.\n"}),a.add({id:8,href:'/documentation/concepts/supported-transport-protocols/direct/',title:"Direct",content:"Direct transport protocol This document should explain how the direct transport protocol works.\n"}),a.add({id:9,href:'/documentation/concepts/supported-transport-protocols/ssh/',title:"SSH",content:"SSH transport protocol This document should explain how the SSH transport protocol works.\n"}),a.add({id:10,href:'/documentation/examples/',title:"Examples",content:"Examples This section contains various configuration examples for CLI, Terraform and Go users.\n"}),a.add({id:11,href:'/documentation/examples/cli/',title:"CLI",content:"CLI This section contains various configuration snippets for the flexkube CLI.\n"}),a.add({id:12,href:'/documentation/examples/go/',title:"Go",content:"Go This section contains various sample Go programs, which show how to use the libflexkube library.\nFor more examples, see libflexkube unit tests and integration tests, as well as the flexkube CLI implementation and Flexkube Terraform provider code.\n"}),a.add({id:13,href:'/documentation/examples/terraform/',title:"Terraform",content:"Terraform This section contains various sample Terraform code, which shows how to use terraform-provider-flexkube.\nFor more examples, see the libflexkube e2e tests code.\n"}),a.add({id:14,href:'/documentation/getting-started/',title:"Getting Started",content:"Getting started This section includes some basic information about how to start using the project, how to download release binaries, what the requirements are, etc.\nTo see the requirements for the machines to deploy to, and other requirements, see the Requirements section.\nTo see how to install Flexkube tooling, see the Installing section.\n"}),a.add({id:15,href:'/documentation/getting-started/installing/',title:"Installing",content:"Installing Depending on how you want to use Flexkube, see the appropriate installing section:\n CLI for flexkube CLI users Terraform for Terraform users Go for Go module users "}),a.add({id:16,href:'/documentation/getting-started/installing/cli/',title:"CLI",content:"Flexkube CLI Download the pre-built binary The easiest way to get the Flexkube CLI is to use one of the pre-built release binaries, which are available for macOS and Linux.\nSee the GitHub Releases page to find the latest available release.\nFor example, to download version v0.4.0 on Linux, execute the following command:\nVERSION=v0.4.0 It will download the flexkube binary into your current directory. It is recommended to move this binary into one of the directories mentioned in your $PATH environment variable, e.g. to ~/.local/bin or /usr/local/bin, to make it easy to access.\nBuilding from source For building from source, make sure you have go and git binaries available in your system.\nUsing go get You can install the Flexkube CLI from source using the following command:\ngo get github.com/flexkube/libflexkube/cmd/flexkube Once done, make sure your Go binary path is included in $PATH, so the binary is accessible for execution.\nUsing git and go build To build the Flexkube CLI from source, first clone the libflexkube repository. 
This can be done using the following command:\ngit clone https://github.com/flexkube/libflexkube.git \u0026amp;\u0026amp; cd libflexkube Then, to build the binary, run the following command:\ngo build ./cmd/flexkube When the build is finished, the binary should be in the current directory. It is recommended to move this binary into one of the directories mentioned in your $PATH environment variable, e.g. to ~/.local/bin or /usr/local/bin, to make it easy to access.\n"}),a.add({id:17,href:'/documentation/getting-started/installing/go/',title:"Go",content:"Go module The recommended way of using Flexkube in your Go project is via the libflexkube library. libflexkube uses Go modules to manage its dependencies, so it is also recommended for your project to use them.\nTo add the libflexkube module to your project, simply run the following command:\ngo get github.com/flexkube/libflexkube It will import the latest release of libflexkube into your project.\nWith the module added, go to Go examples to see how to use it in your code, or see the reference documentation to see all available packages.\n"}),a.add({id:18,href:'/documentation/getting-started/installing/terraform/',title:"Terraform",content:"Terraform provider Using Terraform 0.13 and Terraform Registry The easiest way to get the Flexkube Terraform provider is to pull it from the Terraform Registry. You can do that by adding the following snippet to the required_providers block in the terraform block in your module configuration:\nflexkube = { source = \u0026#34;flexkube/flexkube\u0026#34; version = \u0026#34;0.4.0\u0026#34; } So an example versions.tf file would look like the following:\nterraform { required_providers { flexkube = { source = \u0026#34;flexkube/flexkube\u0026#34; version = \u0026#34;0.4.0\u0026#34; } } } Building from source For building from source, make sure you have go and git binaries available in your system.\nUsing go get You can install the Flexkube Terraform Provider from source using the following command:\ngo get github.com/flexkube/libflexkube/cmd/terraform-provider-flexkube Once done, it is recommended to move the binary into the ~/.local/share/terraform/plugins/registry.terraform.io/flexkube/flexkube/0.4.0/linux_amd64/ directory to make it available for all Terraform environments:\nmkdir -p ~/.local/share/terraform/plugins/registry.terraform.io/flexkube/flexkube/0.4.0/linux_amd64 \u0026amp;\u0026amp; mv $(go env GOPATH)/bin/terraform-provider-flexkube ~/.local/share/terraform/plugins/registry.terraform.io/flexkube/flexkube/0.4.0/linux_amd64/terraform-provider-flexkube Using git and go build To build the Terraform provider from source, first clone the terraform-provider-flexkube repository. 
This can be done using the following command:\ngit clone https://github.com/flexkube/terraform-provider-flexkube.git \u0026amp;\u0026amp; cd terraform-provider-flexkube Then, to build Terraform Provider binary, run the following command:\ngo build Once done, it is recommended to move the binary into ~/.local/share/terraform/plugins/registry.terraform.io/flexkube/flexkube/0.4.0/linux_amd64/ directory to make it available for all Terraform environments:\nmkdir -p ~/.local/share/terraform/plugins/registry.terraform.io/flexkube/flexkube/0.4.0/linux_amd64 \u0026amp;\u0026amp; mv ./terraform-provider-flexkube ~/.local/share/terraform/plugins/registry.terraform.io/flexkube/flexkube/0.4.0/linux_amd64/terraform-provider-flexkube "}),a.add({id:19,href:'/documentation/getting-started/requirements/',title:"Requirements",content:"Requirements This section describes various requirements of Flexkube.\nIt is recommended to deploy Flexkube resources (e.g. etcd, kubelet) into dedicated machine, not into local host, as resources will write to some hosts directories like /etc/kubernetes, /var/lib/kubelet or /var/lib/etcd to persist the cluster state across updates. Summary Short summary of the requirements for each machine where Kubernetes will be deployed:\n Minimum 2 GB of RAM SSH server configured (if deploying to remote machines) Internet access Docker daemon installed and running Hardware requirements To create Kubernetes cluster using Flexkube, you need a machine with at least 2 GB of RAM for controller node and at least 1 GB of RAM for worker nodes.\nConnectivity Containers registry Machines which will be part of the cluster must have access to container registry from where the cluster component images will be pulled. By default public registries are used, so machines must have internet access.\nIf you re-configure the cluster to use images from private repository, internet access should not be required.\nSSH For deploying on remote machines, Flexkube use SSH tunnels to talk to container runtime on remote machine, so make sure SSH daemon is configured on them and is accessible from the host you will be deploying.\nIf you deploy only on local machine or your container runtime on remote machines is directly reachable over network SSH is not required.\nUser used for SSH connection should have permissions to talk to local container runtime.\nNetwork It is recommended, that all machines which are part of the cluster are connected using private network, to avoid exposing your cluster components to the internet.\nContainer runtime Flexkube runs all of Kubernetes controlplane components as containers, so container runtime must be installed and configured on the machines before deploying.\nAt the moment only Docker runtime is supported. In the future, support for more container runtime might be added.\nCreating virtual machines locally for testing. If you don\u0026rsquo;t have a suitable machine available for testing, see Creating virtual machines for testing page, which explains how to create one on your local machine.\n"}),a.add({id:20,href:'/documentation/getting-started/requirements/creating-virtual-machines-for-testing/',title:"Creating Virtual Machines for Testing",content:"Creating virtual machines for testing If you don\u0026rsquo;t have a spare machine available for testing, you can create it locally, using VirtualBox and Vagrant. 
Make sure you have both tools installed by following respective guides:\n Installing VirtualBox Installing Vagrant Single node If you need just one machine, create file named Vagrantfile with the following content:\nVagrant.configure(\u0026#34;2\u0026#34;) do |config| config.vm.box = \u0026#34;flatcar-stable\u0026#34; config.vm.box_url = \u0026#34;https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_vagrant.box\u0026#34; config.ssh.username = \u0026#39;core\u0026#39; config.vm.provider :virtualbox do |v| v.memory = 1024 end end Then, run the following commands to create and connect to the machine:\nvagrant up \u0026amp;\u0026amp; vagrant ssh Multiple nodes If you need more than one machine, create file named Vagrantfile with the following content:\nVagrant.configure(\u0026#34;2\u0026#34;) do |config| config.vm.box = \u0026#34;flatcar-stable\u0026#34; config.vm.box_url = \u0026#34;https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_vagrant.box\u0026#34; config.ssh.username = \u0026#39;core\u0026#39; config.vm.provider :virtualbox do |v| v.memory = 1024 end config.vm.define \u0026#34;member1\u0026#34; do |config| config.vm.hostname = \u0026#34;member1\u0026#34; config.vm.network \u0026#34;private_network\u0026#34;, ip: \u0026#34;192.168.52.10\u0026#34; end config.vm.define \u0026#34;member2\u0026#34; do |config| config.vm.hostname = \u0026#34;member2\u0026#34; config.vm.network \u0026#34;private_network\u0026#34;, ip: \u0026#34;192.168.52.11\u0026#34; end end Then, run the following commands to create the machines:\nvagrant up "}),a.add({id:21,href:'/documentation/guides/',title:"Guides",content:"Guides This section contains user guides describing step-by-step how to execute specific scenarios using specific tools. The guides are grouped by the topics:\n Etcd describes creating and maintenance of etcd clusters Kubernetes describes creating and maintenance of Kubernetes clusters "}),a.add({id:22,href:'/documentation/guides/etcd/',title:"Etcd",content:"etcd guides In this section you can find all guides related to managing etcd clusters using Flexkube. Depending on tools you want to use and setup you need, see the appropriate guide:\n Creating single-member cluster on local machine using Terraform Creating single-member cluster on local machine using Flexkube CLI Creating multi-node cluster over SSH using Terraform "}),a.add({id:23,href:'/documentation/guides/etcd/creating-multi-member-cluster-over-ssh-using-terraform/',title:"Creating Multi Member Cluster Over SSH Using Terraform",content:"Creating multi-member cluster over SSH using Terraform This guide describes how to create multi member etcd cluster using Terraform and Flexkube provider. The process is very simple and requires just a few steps. If you have at least 3 members, your cluster will be able to tolerate loss on one member, so it will be highly available.\nRequirements For this guide, it is required to have at least 2 Linux machines, with Docker daemon installed and running.\nIt is recommended that machines has at least 1 GB of RAM and are fresh machines, as in tutorial the tools will write to directories like /var/lib/etcd or /etc/kubernetes without notice.\nThe Docker version should be 18.06+. 
You can follow Docker documentation to see how to install Docker on your machine.\nNetwork interfaces setup is not important, however having a private IP address is recommended from security perspective.\nThe machines must be able to communicate with each other.\nIf you don\u0026rsquo;t have such machines, visit Creating virtual machines for testing to see how to create them locally.\nPreparation Before we start creating a cluster, we need to gather some information and download required binaries.\nIP addresses for etcd members and SSH IP addresses of members must be known ahead of cluster creation time.\nYou can find available IP addresses on your machines using e.g. ifconfig tool.\nYou can try getting the IP address automatically using the following command:\nip addr show dev $(ip r | grep default | tr \u0026#39; \u0026#39; \\\\n | grep -A1 dev | tail -n1) | grep \u0026#39;inet \u0026#39; | awk \u0026#39;{print $2}\u0026#39; | cut -d/ -f1 Save the IP addresses of your machines, as they will be needed later on for configuration.\nIf you plan to use different IP addresses for connecting over SSH to your machines and different for members to communicate, note both of them. Downloading terraform binary For this guide, you must have terraform binary available. You can download it using the following command:\nexport TERRAFORM_VERSION=0.15.4 wget https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip \u0026amp;\u0026amp; \\ unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip \u0026amp;\u0026amp; \\ rm terraform_${TERRAFORM_VERSION}_linux_amd64.zip Downloading etcdctl binary (optional) To test cluster functionality, you can download etcdctl binary, however, this is optional. Also, if you use Flatcar Container Linux, the binary should be available on the system already.\nYou can download it using the following command:\nexport ETCD_VERSION=v3.4.13 wget https://storage.googleapis.com/etcd/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz -O- | tar zxvf - etcd-${ETCD_VERSION}-linux-amd64/etcdctl \u0026amp;\u0026amp; mv etcd-${ETCD_VERSION}-linux-amd64/etcdctl ./ \u0026amp;\u0026amp; rmdir etcd-${ETCD_VERSION}-linux-amd64 Make downloaded binaries available in $PATH For compatibility with rest of the tutorial, you should make sure that downloaded binaries are in one of the directories in the $PATH environment variable.\nYou can also add working directory to the $PATH using the following command:\nexport PATH=\u0026#34;$(pwd):${PATH}\u0026#34; Creating the cluster Now that you have all required binaries and information, we can start creating the cluster.\nTerraform configuration First, create main.tf file with the following content:\nterraform { required_providers { flexkube = { source = \u0026#34;flexkube/flexkube\u0026#34; version = \u0026#34;0.5.1\u0026#34; } local = { source = \u0026#34;hashicorp/local\u0026#34; version = \u0026#34;1.4.0\u0026#34; } } required_version = \u0026#34;\u0026gt;= 0.15\u0026#34; } variable \u0026#34;members\u0026#34; { type = map(object({ peer_address = string ssh_address = string })) } variable \u0026#34;ssh_user\u0026#34; { default = \u0026#34;\u0026#34; } variable \u0026#34;ssh_password\u0026#34; { default = \u0026#34;\u0026#34; } variable \u0026#34;ssh_private_key\u0026#34; { default = \u0026#34;\u0026#34; } resource \u0026#34;flexkube_pki\u0026#34; \u0026#34;pki\u0026#34; { etcd { peers = { for name, member in var.members : name =\u0026gt; member.peer_address } servers = { for name, member in var.members 
: name =\u0026gt; member.peer_address } client_cns = [\u0026#34;root\u0026#34;] } } resource \u0026#34;flexkube_etcd_cluster\u0026#34; \u0026#34;etcd\u0026#34; { pki_yaml = flexkube_pki.pki.state_yaml dynamic \u0026#34;member\u0026#34; { for_each = var.members content { name = member.key peer_address = member.value.peer_address server_address = member.value.peer_address host { ssh { user = var.ssh_user password = var.ssh_password private_key = var.ssh_private_key address = member.value.ssh_address } } } } } locals { ca_cert = \u0026#34;./ca.pem\u0026#34; cert = \u0026#34;./client.pem\u0026#34; key = \u0026#34;./client.key\u0026#34; } resource \u0026#34;local_file\u0026#34; \u0026#34;etcd_ca_certificate\u0026#34; { content = flexkube_pki.pki.etcd[0].ca[0].x509_certificate filename = local.ca_cert } resource \u0026#34;local_file\u0026#34; \u0026#34;etcd_root_user_certificate\u0026#34; { content = flexkube_pki.pki.etcd[0].client_certificates[index(flexkube_pki.pki.etcd[0].client_cns, \u0026#34;root\u0026#34;)].x509_certificate filename = local.cert } resource \u0026#34;local_file\u0026#34; \u0026#34;etcd_root_user_private_key\u0026#34; { sensitive_content = flexkube_pki.pki.etcd[0].client_certificates[index(flexkube_pki.pki.etcd[0].client_cns, \u0026#34;root\u0026#34;)].private_key filename = local.key } resource \u0026#34;local_file\u0026#34; \u0026#34;etcd_environment\u0026#34; { filename = \u0026#34;./etcd.env\u0026#34; content = \u0026lt;\u0026lt;EOF#!/bin/bash export ETCDCTL_API=3 export ETCDCTL_CACERT=${abspath(local.ca_cert)} export ETCDCTL_CERT=${abspath(local.cert)} export ETCDCTL_KEY=${abspath(local.key)} export ETCDCTL_ENDPOINTS=\u0026#34;${join(\u0026#34;,\u0026#34;, formatlist(\u0026#34;https://%s:2379\u0026#34;, [for name, member in var.members : member.peer_address]))}\u0026#34; EOF depends_on = [ flexkube_etcd_cluster.etcd, ] } Terraform values Next, create file named values.auto.tfvars, which will store the values required by the Terraform configuration. The file should look like following:\nmembers = { \u0026#34;member1\u0026#34; = { peer_address = \u0026#34;192.168.52.10\u0026#34; ssh_address = \u0026#34;192.168.52.10\u0026#34; }, \u0026#34;member2\u0026#34; = { peer_address = \u0026#34;192.168.52.11\u0026#34; ssh_address = \u0026#34;192.168.52.11\u0026#34; }, } ssh_user = \u0026#34;core\u0026#34; ssh_password = \u0026#34;\u0026#34; ssh_port = 22 ssh_private_key = \u0026lt;\u0026lt;EOF EOF First, it has defined map of members, where they key is the member name and then each member has peer address and SSH address defined. peer_address will be used for etcd and ssh_address will be used to SSH into the machines.\nNext, make sure that SSH settings are correct. If the SSH key, which is authorized to log in into the machines, is loaded in your ssh-agent, you don\u0026rsquo;t need to specify any credentials. Flexkube will automatically pick it up and use it. 
If not, you can specify content of private key in ssh_private_key field or use password authentication using ssh_password field.\nUsing bastion host is currently not supported, though it will be in the future.\nRunning Terraform Now, to create the cluster run following commands:\nterraform init \u0026amp;\u0026amp; terraform apply If everything went successfully, you should see now running etcd container, when you execute docker ps on the machines.\nVerifying cluster functionality Now that the cluster is running, we can verify that it is functional.\nInspect created files After creating the cluster, you can find following files in the working directory, created by Terraform:\n ca.pem containing etcd CA X.509 certificate in PEM format. client.pem containing etcd client X.509 certificate in PEM format, with root Common Name. client.key RSA private key in PEM format for certificate in client.pemfile. etcd.env containing environment variables needed for etcdctl. Certificates and private key files are required to access the cluster. The etcd.env file is just a helper file for this tutorial.\nThe files can also be safely removed, as all the certificates are stored in Terraform state anyway.\nUsing etcdctl etcdctl can be used to verify that the cluster is functional and to perform some basic operations as well as administrative tasks.\nTo be able to use it, it is recommended to set environment variables, pointing to the certificates and cluster members, so they don\u0026rsquo;t have to be repeated for each command.\nWith this guide, you get etcd.env helper file created, from which you can load the environment variables, using following command:\nsource etcd.env Now etcdctl is ready to use.\nTo check if cluster is healthy, execute the following command:\netcdctl endpoint health What\u0026rsquo;s next With cluster running, you can now start using it, e.g. to deploy Kubernetes cluster. To do that using Flexkube and Terraform, you can follow Creating multi node Kubernetes cluster using Terraform.\nTo clean up created resources, see the section below.\nCleaning up First step of removing the cluster is running Terraform, to remove all containers. To perform that, run this command:\nterraform destroy Once finished, you can remove the directories created by the cluster, using the following command on the machines:\nsudo rm -rf /var/lib/etcd/ /etc/kubernetes/ "}),a.add({id:24,href:'/documentation/guides/etcd/creating-single-member-cluster-on-local-machine-using-flexkube-cli/',title:"Creating Single Member Cluster on Local Machine Using Flexkube CLI",content:"Creating single-node etcd cluster on local machine using \u0026ldquo;flexkube\u0026rdquo; CLI This guide describes how to create single member etcd cluster using flexkube CLI. It will explain cluster creation process step by step to explain the configuration and provide some insights.\nFor fully automated creation, see Creating single-member etcd cluster on local machine using Terraform.\nRequirements For this guide, it is required to have one Linux machine, with Docker daemon installed and running.\nIt is recommended that machine has at least 1 GB of RAM and is a fresh machine, as in tutorial the tools will write to directories like /var/lib/etcd or /etc/kubernetes without notice.\nThe Docker version should be 18.06+. 
You can follow Docker documentation to see how to install Docker on your machine.\nNetwork interfaces setup is not important, however having a private IP address is recommended from security perspective.\nIf you don\u0026rsquo;t have such machine, visit Creating virtual machines for testing to see how to create one locally.\nPreparation Before we start creating a cluster, we need to gather some information and download required binaries.\nLog in into the machine where you want to deploy etcd before proceeding.\nIP address for etcd member IP addresses of members must be known ahead of cluster creation time.\nYou can find available IP addresses on your machine using e.g. ifconfig tool.\nYou can try getting the IP address automatically using the following command:\nexport IP=$(ip addr show dev $(ip r | grep default | tr \u0026#39; \u0026#39; \\\\n | grep -A1 dev | tail -n1) | grep \u0026#39;inet \u0026#39; | awk \u0026#39;{print $2}\u0026#39; | cut -d/ -f1); echo $IP On VirtualBox, we can use 10.0.2.15 IP.\nSave the IP address for future use using the following command:\nexport IP=10.0.2.15 Downloading flexkube binary Once logged in, execute the following command to download flexkube CLI binary into working directory. This is the binary, which will be used to create a cluster components.\nexport FLEXKUBE_VERSION=v0.6.0 wget -O- https://github.com/flexkube/libflexkube/releases/download/${FLEXKUBE_VERSION}/flexkube_${FLEXKUBE_VERSION}_linux_amd64.tar.gz | tar zxvf - Downloading etcdctl binary (optional) To test cluster functionality, you can download etcdctl binary, however, this is optional. Also, if you use Flatcar Container Linux, the binary should be available on the system already.\nYou can download it using the following command:\nexport ETCD_VERSION=v3.4.16 wget https://storage.googleapis.com/etcd/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz -O- | tar zxvf - etcd-${ETCD_VERSION}-linux-amd64/etcdctl \u0026amp;\u0026amp; mv etcd-${ETCD_VERSION}-linux-amd64/etcdctl ./ \u0026amp;\u0026amp; rmdir etcd-${ETCD_VERSION}-linux-amd64 Make downloaded binaries available in $PATH For compatibility with rest of the tutorial, you should make sure that downloaded binaries are in one of the directories in the $PATH environment variable.\nYou can also add working directory to the $PATH using the following command:\nexport PATH=\u0026#34;$(pwd):${PATH}\u0026#34; Checking Docker availability To avoid runtime issues while running flexkube, run the following command to ensure, that Docker is running and is accessible on your machine:\ndocker ps Creating the cluster Now that you have all required binaries and information, we can start creating the cluster.\nCreating certificates First step to create a cluster is to generate all certificates required by etcd. For that, we will use Flexkube PKI resource.\nBefore we create the certificates, we need to provide some configuration to tell PKI resource to create etcd certificates, as by default it only creates Root CA certificate.\nFor this guide, you can create configuration using the following command:\ncat \u0026lt;\u0026lt;EOF | sed \u0026#39;/^$/d\u0026#39; \u0026gt; config.yaml pki: etcd: peers: member1: ${IP} EOF See PKI configuration reference to see all available configuration options. In the following example, we use member1 as a etcd member name. There is no strict convention about the names. etcd documentation suggests to use the hostname or machine-id, which might be a good choice, if you plan to run only one member on a single machine. 
Please note that changing the member name here must also be reflected in the next steps of the tutorial.\nOnce created, run the following command to generate the certificates:\nflexkube pki If everything succeeded, you should find many certificates in the newly created state.yaml file.\nYou can inspect the state.yaml file using the following command:\nless state.yaml In there, you should find the etcd CA certificate and private key, peer and server certificates and private keys for all members we defined (in this tutorial only member1) and the root CA certificate with its private key.\nThe certificate properties are generated in accordance with Kubernetes PKI certificates and requirements.\n Creating etcd cluster With the certificates ready, we can now create the etcd cluster using the etcd resource.\nTo create the etcd cluster, we need to configure its members in the config.yaml file. This can be done using the following command:\ncat \u0026lt;\u0026lt;EOF \u0026gt;\u0026gt; config.yaml etcd: members: member1: peerAddress: ${IP} EOF See the etcd configuration reference to see all available configuration options. Now, you can run the following command to create the etcd cluster:\nflexkube etcd When you execute this command, it will print the list of containers which will be created and ask you to confirm it.\nYou can also run it with the --yes flag, to skip the confirmation.\nAfter confirmation, the flexkube binary will by default talk to the Docker runtime over a UNIX socket on the local host and create the desired containers.\nOn consecutive runs, flexkube will first check the state of the created containers and then, if there is any update pending (e.g. a container image update), it will again show you the diff which will be applied and ask you for confirmation.\nYou can also run it with --noop to only see if there are some updates pending.\n Once finished, if you run docker ps, you should see the etcd container running.\nInspecting state.yaml file (optional) With the etcd cluster created, state about the running containers will be stored in the state.yaml file. Storing state is needed to calculate configuration updates in future runs and also to allow cleaning up the created containers.\nYou can have a look into the state file using the following command:\nless state.yaml In there, you can find a list of all containers which have been created, their configuration files, flags, and on which host and using which container runtime they have been created. This is useful if you want to inspect the configuration of the created containers.\nVerifying cluster functionality To verify that the cluster is healthy, we will use the etcd member certificates themselves and the previously downloaded etcdctl binary.\nFirst, we need to prepare the environment variables used by etcdctl to define how to authenticate to the cluster. This can be done using the following commands:\nexport ETCDCTL_API=3 export ETCDCTL_CACERT=/etc/kubernetes/etcd/ca.crt export ETCDCTL_CERT=/etc/kubernetes/etcd/peer.crt export ETCDCTL_KEY=/etc/kubernetes/etcd/peer.key export ETCDCTL_ENDPOINTS=\u0026#34;https://10.0.2.15:2379\u0026#34; Now, we check if all endpoints are healthy, using this command:\nsudo -E etcdctl endpoint health We use sudo, as the created certificate files are only readable by the root user. If you are using the root user already or you don\u0026rsquo;t want to use sudo, you can extract the client certificates from the state.yaml file. What\u0026rsquo;s next With the cluster running, you can now start using it, e.g. to deploy a Kubernetes cluster. 
To do that using Flexkube and Terraform, you can follow Creating single node Kubernetes cluster on local machine using Terraform.\nTo clean up created resources, see the section below.\nCleaning up To clean up the host, first, rename or remove config.yaml file, so CLI will be able to clean up the resources. For example, execute:\nmv config.yaml config.yaml.old Now you can remove all containers managed by flexkube using following commands:\nflexkube etcd Finally, following directories can be removed as well:\nsudo rm -rf /etc/kubernetes/ /var/lib/etcd/ "}),a.add({id:25,href:'/documentation/guides/etcd/creating-single-member-cluster-on-local-machine-using-terraform/',title:"Creating Single Member Cluster on Local Machine Using Terraform",content:"Creating single-member cluster on local machine using Terraform This guide describes how to create single member etcd cluster using Terraform and Flexkube provider. The process is very simple and requires just a few steps.\nFor more detailed guide, see Creating single member etcd cluster on local machine using flexkube CLI.\nRequirements For this guide, it is required to have one Linux machine, with Docker daemon installed and running.\nIt is recommended that machine has at least 1 GB of RAM and is a fresh machine, as in tutorial the tools will write to directories like /var/lib/etcd or /etc/kubernetes without notice.\nThe Docker version should be 18.06+. You can follow Docker documentation to see how to install Docker on your machine.\nNetwork interfaces setup is not important, however having a private IP address is recommended from security perspective.\nIf you don\u0026rsquo;t have such machine, visit Creating virtual machines for testing to see how to create one locally.\nPreparation Before we start creating a cluster, we need to gather some information and download required binaries.\nLog in into the machine where you want to deploy etcd before proceeding.\nIP address for etcd member IP addresses of members must be known ahead of cluster creation time.\nYou can find available IP addresses on your machine using e.g. ifconfig tool.\nYou can try getting the IP address automatically using the following command:\nexport TF_VAR_ip=$(ip addr show dev $(ip r | grep default | tr \u0026#39; \u0026#39; \\\\n | grep -A1 dev | tail -n1) | grep \u0026#39;inet \u0026#39; | awk \u0026#39;{print $2}\u0026#39; | cut -d/ -f1); echo $TF_VAR_ip On VirtualBox, we can use 10.0.2.15 IP.\nSave the IP address for future use using the following command:\nexport TF_VAR_ip=10.0.2.15 Downloading terraform binary For this guide, you must have terraform binary available. You can download it using the following command:\nexport TERRAFORM_VERSION=0.13.1 wget https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip \u0026amp;\u0026amp; \\ unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip \u0026amp;\u0026amp; \\ rm terraform_${TERRAFORM_VERSION}_linux_amd64.zip Downloading etcdctl binary (optional) To test cluster functionality, you can download etcdctl binary, however, this is optional. 
Also, if you use Flatcar Container Linux, the binary should be available on the system already.\nYou can download it using the following command:\nexport ETCD_VERSION=v3.4.16 wget https://storage.googleapis.com/etcd/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz -O- | tar zxvf - etcd-${ETCD_VERSION}-linux-amd64/etcdctl \u0026amp;\u0026amp; mv etcd-${ETCD_VERSION}-linux-amd64/etcdctl ./ \u0026amp;\u0026amp; rmdir etcd-${ETCD_VERSION}-linux-amd64 Make downloaded binaries available in $PATH For compatibility with rest of the tutorial, you should make sure that downloaded binaries are in one of the directories in the $PATH environment variable.\nYou can also add working directory to the $PATH using the following command:\nexport PATH=\u0026#34;$(pwd):${PATH}\u0026#34; Creating the cluster Now that you have all required binaries and information, we can start creating the cluster.\nCreate main.tf file with the following content:\nterraform { required_providers { flexkube = { source = \u0026#34;flexkube/flexkube\u0026#34; version = \u0026#34;0.5.1\u0026#34; } local = { source = \u0026#34;hashicorp/local\u0026#34; version = \u0026#34;1.4.0\u0026#34; } } required_version = \u0026#34;\u0026gt;= 0.15\u0026#34; } variable \u0026#34;ip\u0026#34; {} variable \u0026#34;name\u0026#34; { default = \u0026#34;member01\u0026#34; } resource \u0026#34;flexkube_pki\u0026#34; \u0026#34;pki\u0026#34; { etcd { peers = { \u0026#34;${var.name}\u0026#34; = var.ip } servers = { \u0026#34;${var.name}\u0026#34; = var.ip } client_cns = [\u0026#34;root\u0026#34;] } } resource \u0026#34;flexkube_etcd_cluster\u0026#34; \u0026#34;etcd\u0026#34; { pki_yaml = flexkube_pki.pki.state_yaml member { name = var.name peer_address = var.ip server_address = var.ip } } locals { ca_cert = \u0026#34;./ca.pem\u0026#34; cert = \u0026#34;./client.pem\u0026#34; key = \u0026#34;./client.key\u0026#34; } resource \u0026#34;local_file\u0026#34; \u0026#34;etcd_ca_certificate\u0026#34; { content = flexkube_pki.pki.etcd[0].ca[0].x509_certificate filename = local.ca_cert } resource \u0026#34;local_file\u0026#34; \u0026#34;etcd_root_user_certificate\u0026#34; { content = flexkube_pki.pki.etcd[0].client_certificates[index(flexkube_pki.pki.etcd[0].client_cns, \u0026#34;root\u0026#34;)].x509_certificate filename = local.cert } resource \u0026#34;local_file\u0026#34; \u0026#34;etcd_root_user_private_key\u0026#34; { sensitive_content = flexkube_pki.pki.etcd[0].client_certificates[index(flexkube_pki.pki.etcd[0].client_cns, \u0026#34;root\u0026#34;)].private_key filename = local.key } resource \u0026#34;local_file\u0026#34; \u0026#34;etcd_environment\u0026#34; { filename = \u0026#34;./etcd.env\u0026#34; content = \u0026lt;\u0026lt;EOF#!/bin/bash export ETCDCTL_API=3 export ETCDCTL_CACERT=${abspath(local.ca_cert)} export ETCDCTL_CERT=${abspath(local.cert)} export ETCDCTL_KEY=${abspath(local.key)} export ETCDCTL_ENDPOINTS=\u0026#34;https://${var.ip}:2379\u0026#34; EOF depends_on = [ flexkube_etcd_cluster.etcd, ] } Now, to create the cluster run following commands:\nterraform init \u0026amp;\u0026amp; terraform apply Terraform should pick up the IP address automatically, if you exported it to TF_VAR_ip environment variable.\nIf everything went successfully, you should see now running etcd container, when you execute docker ps.\nVerifying cluster functionality Now that the cluster is running, we can verify that it is functional.\nInspect created files After creating the cluster, you can find following files in the working directory, created by 
Terraform:\n ca.pem containing etcd CA X.509 certificate in PEM format. client.pem containing etcd client X.509 certificate in PEM format, with root Common Name. client.key RSA private key in PEM format for certificate in client.pemfile. etcd.env containing environment variables needed for etcdctl. Certificates and private key files are required to access the cluster. The etcd.env file is just a helper file for this tutorial.\nThe files can also be safely removed, as all the certificates are stored in Terraform state anyway.\nUsing etcdctl etcdctl can be used to verify that the cluster is functional and to perform some basic operations as well as administrative tasks.\nTo be able to use it, it is recommended to set environment variables, pointing to the certificates and cluster members, so they don\u0026rsquo;t have to be repeated for each command.\nWith this guide, you get etcd.env helper file created, from which you can load the environment variables, using following command:\nsource etcd.env Now etcdctl is ready to use.\nTo check if cluster is healthy, execute the following command:\netcdctl endpoint health What\u0026rsquo;s next With cluster running, you can now start using it, e.g. to deploy Kubernetes cluster. To do that using Flexkube and Terraform, you can follow Creating single node Kubernetes cluster on local machine using Terraform.\nTo clean up created resources, see the section below.\nCleaning up First step of removing the cluster is running Terraform, to remove all containers. To perform that, run this command:\nterraform destroy Once finished, you can remove the directories created by the cluster, using the following command:\nsudo rm -rf /var/lib/etcd/ /etc/kubernetes/ "}),a.add({id:26,href:'/documentation/guides/kubernetes/',title:"Kubernetes",content:"Kubernetes guides In this section you can find all guides related to managing Kubernetes clusters using Flexkube. Depending on tools you want to use and setup you need, see the appropriate guide:\n Creating single-node cluster on local machine using \u0026ldquo;flexkube\u0026rdquo; CLI Creating single-node cluster on local machine using Terraform Creating multi-node cluster using Terraform "}),a.add({id:27,href:'/documentation/guides/kubernetes/creating-multi-node-cluster-using-terraform/',title:"Creating Multi Node Cluster Using Terraform",content:"Creating multi-node cluster using Terraform "}),a.add({id:28,href:'/documentation/guides/kubernetes/creating-single-node-cluster-on-local-machine-using-flexkube-cli/',title:"Creating Single Node Cluster on Local Machine Using Flexkube CLI",content:"Creating single-node cluster on local machine using \u0026ldquo;flexkube\u0026rdquo; CLI This guide describes how to create single node Kubernetes cluster using flexkube CLI. 
It will explain cluster creation process step by step to explain the configuration and provide some insights.\nFor fully automated creation, see Creating single-node Kubernetes cluster on local machine using Terraform.\nRequirements For this guide, it is required to have one Linux machine, with Docker daemon installed and running.\nIt is recommended that machine has at least 2 GB of RAM and is a fresh machine, as in tutorial the tools will write to directories like /etc/kubernetes or /var/lib/kubelet without notice.\nThe Docker version should be 18.06+.\nNetwork interfaces setup is not important, however having a private IP address is recommended from security perspective.\nIf you don\u0026rsquo;t have such machine, visit Creating virtual machines for testing to see how to create one locally.\nTL;DR If this guide is too long for you, you can try just running the script below, which summarizes all the steps from this guide.\nexport IP=$(ip addr show dev $(ip r | grep default | tr \u0026#39; \u0026#39; \\\\n | grep -A1 dev | tail -n1) | grep \u0026#39;inet \u0026#39; | awk \u0026#39;{print $2}\u0026#39; | cut -d/ -f1); echo $IP export POD_CIDR=10.0.0.0/24 export SERVICE_CIDR=11.0.0.0/24 export KUBERNETES_SERVICE_IP=11.0.0.1 export DNS_SERVICE_IP=11.0.0.10 export FLEXKUBE_VERSION=v0.6.0 export PATH=\u0026#34;$(pwd):${PATH}\u0026#34; export TOKEN_ID=$(cat /dev/urandom | tr -dc \u0026#39;a-z0-9\u0026#39; | fold -w 6 | head -n 1) export TOKEN_SECRET=$(cat /dev/urandom | tr -dc \u0026#39;a-z0-9\u0026#39; | fold -w 16 | head -n 1) export KUBECONFIG=$(pwd)/kubeconfig export API_SERVER_PORT=6443 umask 077 [ ! -f flexkube ] \u0026amp;\u0026amp; wget -O- https://github.com/flexkube/libflexkube/releases/download/${FLEXKUBE_VERSION}/flexkube_${FLEXKUBE_VERSION}_linux_amd64.tar.gz | tar zxvf - [ ! -f kubectl ] \u0026amp;\u0026amp; curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl \u0026amp;\u0026amp; chmod +x kubectl [ ! 
-f helm ] \u0026amp;\u0026amp; wget -O- https://get.helm.sh/helm-v3.5.4-linux-amd64.tar.gz | tar -zxvf - linux-amd64/helm \u0026amp;\u0026amp; mv linux-amd64/helm ./ \u0026amp;\u0026amp; rmdir linux-amd64 cat \u0026lt;\u0026lt;EOF | sed \u0026#39;/^$/d\u0026#39; \u0026gt; config.yaml pki: etcd: clientCNs: - kube-apiserver peers: testing: ${IP} kubernetes: kubeAPIServer: serverIPs: - ${IP} - ${KUBERNETES_SERVICE_IP} etcd: members: testing: peerAddress: ${IP} controlplane: apiServerAddress: ${IP} apiServerPort: ${API_SERVER_PORT} kubeAPIServer: serviceCIDR: ${SERVICE_CIDR} etcdServers: - https://${IP}:2379 kubeControllerManager: flexVolumePluginDir: /var/lib/kubelet/volumeplugins kubeletPools: default: bootstrapConfig: token: ${TOKEN_ID}.${TOKEN_SECRET} server: ${IP}:${API_SERVER_PORT} adminConfig: server: ${IP}:${API_SERVER_PORT} privilegedLabels: node-role.kubernetes.io/master: \u0026#34;\u0026#34; volumePluginDir: /var/lib/kubelet/volumeplugins kubelets: - name: testing address: ${IP} EOF flexkube --yes pki flexkube --yes etcd flexkube --yes controlplane flexkube --yes kubeconfig | grep -v \u0026#34;Trying to read\u0026#34; \u0026gt; ${KUBECONFIG} helm repo add flexkube https://flexkube.github.io/charts/ helm upgrade --install -n kube-system tls-bootstrapping flexkube/tls-bootstrapping --set tokens[0].token-id=$TOKEN_ID --set tokens[0].token-secret=$TOKEN_SECRET flexkube --yes kubelet-pool default helm upgrade --install --wait -n kube-system kube-proxy flexkube/kube-proxy --set \u0026#34;podCIDR=${POD_CIDR}\u0026#34; --set apiServers=\u0026#34;{${IP}:${API_SERVER_PORT}}\u0026#34; helm upgrade --install --wait -n kube-system calico flexkube/calico --set flexVolumePluginDir=/var/lib/kubelet/volumeplugins --set podCIDR=$POD_CIDR helm upgrade --install --wait -n kube-system coredns flexkube/coredns --set rbac.pspEnable=true --set service.ClusterIP=$DNS_SERVICE_IP helm upgrade --install --wait -n kube-system kubelet-rubber-stamp flexkube/kubelet-rubber-stamp If something fails, head down to specific section of this guide for more information.\nPreparation Before we start creating a cluster, we need to gather some information and download required binaries.\nLog in into the machine where you want to deploy Kubernetes before proceeding.\nIP address for deployment To configure cluster components, you need to provide the IP address, which will be used by the cluster. You can find available IP addresses using e.g. ifconfig command.\nYou can try getting the IP address automatically using the following command:\nexport IP=$(ip addr show dev $(ip r | grep default | tr \u0026#39; \u0026#39; \\\\n | grep -A1 dev | tail -n1) | grep \u0026#39;inet \u0026#39; | awk \u0026#39;{print $2}\u0026#39; | cut -d/ -f1); echo $IP On VirtualBox, we can use 10.0.2.15 IP.\nSave the IP address for future use using the following command:\nexport IP=10.0.2.15 Selecting service CIDR and pod CIDR Kubernetes requires 2 network CIDRs to operate, one from each pod will receive the IP address and one for Service objects with type ClusterIP. While selecting the CIDRs, make sure they don\u0026rsquo;t overlap with each other and other networks your machine is connected to.\nOnce decided on CIDRs, we should also save 2 special IP addresses:\n kubernetes Service - This IP address will be used by pods which talk to Kubernetes API. It must be included in kube-apiserver server certificate IP addresses list. This must be first address of Service CIDR. So if your service CIDR is 11.0.0.0/24, it should be 11.0.0.1. 
DNS Service - This IP address will be used by cluster\u0026rsquo;s DNS service. This IP is usually 10th address of Service CIDR. So if your service CIDR is 11.0.0.0/24, it should be 11.0.0.10. With all this information gathered, you command like this to save this information for later use:\nexport POD_CIDR=10.0.0.0/24 export SERVICE_CIDR=11.0.0.0/24 export KUBERNETES_SERVICE_IP=11.0.0.1 export DNS_SERVICE_IP=11.0.0.10 Downloading flexkube binary Once logged in, execute the following command to download flexkube CLI binary into working directory. This is the binary, which will be used to create a cluster components.\nexport FLEXKUBE_VERSION=v0.6.0 wget -O- https://github.com/flexkube/libflexkube/releases/download/${FLEXKUBE_VERSION}/flexkube_${FLEXKUBE_VERSION}_linux_amd64.tar.gz | tar zxvf - Downloading kubectl binary To verify that cluster is operational it is recommended to have kubectl binary available. You can install it using the following command:\ncurl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl \u0026amp;\u0026amp; chmod +x kubectl Downloading helm binary Parts of cluster provisioning is done using Helm 3 binary, when deploying the cluster using the flexkube CLI. You can install it using the following command:\nexport HELM_VERSION=3.5.4 wget -O- https://get.helm.sh/helm-v${HELM_VERSION}-linux-amd64.tar.gz | tar -zxvf - linux-amd64/helm \u0026amp;\u0026amp; mv linux-amd64/helm ./ \u0026amp;\u0026amp; rmdir linux-amd64 Make downloaded binaries available in $PATH For compatibility with rest of the tutorial, you should make sure that downloaded binaries are in one of the directories in the $PATH environment variable.\nYou can also add working directory to the $PATH using the following command:\nexport PATH=\u0026#34;$(pwd):${PATH}\u0026#34; Creating the cluster Now that you have all required binaries and information, we can start creating the cluster.\nCreating certificates First step to create a cluster is to generate all certificates required by Kubernetes. As this is not a trivial task to create and manage those certificates, Flexkube provides PKI resource, which does exactly that.\nBefore we create the certificates, we need to provide some configuration to tell PKI resource to create for you both etcd and Kubernetes certificates, as by default it only creates Root CA certificate.\nFor this guide, you can create configuration using the following command:\ncat \u0026lt;\u0026lt;EOF | sed \u0026#39;/^$/d\u0026#39; \u0026gt; config.yaml pki: etcd: clientCNs: - kube-apiserver peers: testing: ${IP} kubernetes: kubeAPIServer: serverIPs: - ${IP} - ${KUBERNETES_SERVICE_IP} EOF See PKI configuration reference to see all available configuration options. Once created, run the following command to generate the certificates:\nflexkube pki If everything succeeded, you should find many certificates in newly created state.yaml file.\nCreating etcd cluster Before we start Kubernetes containers, we need etcd cluster. Flexkube provides etcd resource to manage such clusters.\nTo create etcd cluster, we need to configure it\u0026rsquo;s members in config.yaml file. This can be done using the following command:\ncat \u0026lt;\u0026lt;EOF \u0026gt;\u0026gt; config.yaml etcd: members: testing: peerAddress: ${IP} EOF See etcd configuration reference to see all available configuration options. 
Now, you can run the following command to create etcd cluster:\nflexkube etcd Once finished, you should see etcd container running, if you run docker ps.\nCreating static Kubernetes controlplane With etcd running, you can now create static Kubernetes controlplane. Static, as Flexkube recommends to run Kubernetes controlplane self-hosted, so managed using Kubernetes itself. However, before this can be done, temporary, or static controlplane is needed. And this is exactly what Controlplane resource provides.\nYou can configure it by running the following command:\nexport API_SERVER_PORT=6443 cat \u0026lt;\u0026lt;EOF \u0026gt;\u0026gt; config.yaml controlplane: apiServerAddress: ${IP} apiServerPort: ${API_SERVER_PORT} kubeAPIServer: serviceCIDR: ${SERVICE_CIDR} etcdServers: - https://${IP}:2379 kubeControllerManager: flexVolumePluginDir: /var/lib/kubelet/volumeplugins EOF See Controlplane configuration reference to see all available configuration options. Now, you can create Kubernetes controlplane using the following command:\nflexkube controlplane Execution can take a while, as Kubernetes docker images must be now pulled.\nOnce finished, you should see 3 new containers running when you run docker ps.\nGetting kubeconfig file Even though the cluster has no objects or deployments yet, you should be able to access it already. For that, you need kubeconfig file. flexkube CLI provides flexkube kubeconfig command, which will read information about the cluster from configuration and state files and print it to you.\nTo generate kubeconfig file, run the following command:\nflexkube kubeconfig | grep -v \u0026#34;Trying to read\u0026#34; \u0026gt; kubeconfig kubeconfig file should be created.\nNow, you need to configure Kubernetes clients to use this file. This can be done using the following command:\nexport KUBECONFIG=$(pwd)/kubeconfig You can now run kubectl version to verify, that the cluster is accessible.\nAdding Flexkube Helm charts repository Before proceeding, make sure you have flexkube Helm repositories configured, as it is the recommended source for installing the charts mentioned in next sections. You can add required repository by running the following command:\nhelm repo add flexkube https://flexkube.github.io/charts/ Adding nodes to the cluster Having a cluster without nodes is not very useful. This section describes how to add nodes to your cluster.\nCreating TLS bootstrapping RBAC rules and bootstrap tokens Flexkube requires TLS bootstrapping process to be used while adding new nodes to the cluster. To enable that, extra RBAC rules must be created before nodes tries to join the cluster.\nThis step is handled by tls-bootstrapping helm chart, which creates RBAC rules and allows to create bootstrap tokens.\nFirst, we need to generate bootstrap token, which will be used in next steps. You can do it by running the following commands:\nexport TOKEN_ID=$(cat /dev/urandom | tr -dc \u0026#39;a-z0-9\u0026#39; | fold -w 6 | head -n 1) export TOKEN_SECRET=$(cat /dev/urandom | tr -dc \u0026#39;a-z0-9\u0026#39; | fold -w 16 | head -n 1) Then, install the chart to create RBAC rules and bootstrap token, by running this command:\nhelm upgrade --install -n kube-system tls-bootstrapping flexkube/tls-bootstrapping --set tokens[0].token-id=$TOKEN_ID --set tokens[0].token-secret=$TOKEN_SECRET Creating kubelet pool With Flexkube, kubelets are managed in pools by Kubelet Pool resource. This allows to group them to share the configuration. 
Usually clusters have one group called controllers which runs controlplane components and one or more worker pools, which might characterize with e.g. different hardware.\nFor this tutorial, we will just create single pool default.\nYou can configure this pool by running the following command:\ncat \u0026lt;\u0026lt;EOF \u0026gt;\u0026gt; config.yaml kubeletPools: default: bootstrapConfig: token: ${TOKEN_ID}.${TOKEN_SECRET} server: ${IP}:${API_SERVER_PORT} adminConfig: server: ${IP}:${API_SERVER_PORT} privilegedLabels: node-role.kubernetes.io/master: \u0026#34;\u0026#34; volumePluginDir: /var/lib/kubelet/volumeplugins kubelets: - name: testing address: ${IP} EOF See Kubelet pool configuration reference to see all available configuration options. Now, to create default pool, run the following command:\nflexkube kubelet-pool default Once finished, you should see that node testing has been added to the cluster by running kubectl get nodes.\nInstalling CNI, CoreDNS and other packages Now that you have cluster running with nodes, you need to install some extra packages to make the cluster fully functional.\nInstalling kube-proxy kube-proxy is not required for bare Kubernetes cluster, so it can be fully managed using Kubernetes itself.\nkube-proxy handles load balancing traffic to service CIDR in the cluster.\nTo install it, run the following command:\nhelm upgrade --install -n kube-system kube-proxy flexkube/kube-proxy --set \u0026#34;podCIDR=${POD_CIDR}\u0026#34; --set apiServers=\u0026#34;{${IP}:${API_SERVER_PORT}}\u0026#34; Installing Calico chart as CNI plugin While not necessarily required for this guide, as we only run one node, it is recommended to install some CNI plugin on the cluster, as without that, kubelet will stay in NotReady state.\nFlexkube recommends using Calico as a CNI plugin, as it works on variety of platforms and provides both IPAM and NetworkPolicies implementation. Flexkube also provides calico helm chart, so Calico installation can be easily configured and managed.\nTo install it, run the following command:\nhelm upgrade --install -n kube-system calico flexkube/calico --set flexVolumePluginDir=/var/lib/kubelet/volumeplugins --set podCIDR=$POD_CIDR We specify flexVolumePluginDir, as default path is on /usr partition, which is read-only in Flatcar Container Linux. Installing CoreDNS as Cluster DNS To provide DNS resolving for pods and service names it is recommended to run CoreDNS on your cluster. It can be installed from upstream Helm chart.\nTo install it, run the following command:\nhelm upgrade --install -n kube-system coredns stable/coredns --set rbac.pspEnable=true --set service.ClusterIP=$DNS_SERVICE_IP Installing kubelet-rubber-stamp As part of kubelet TLS bootstrapping process, kubelet requests serving certificate from Kubernetes API, to be able to use it for serving logs and metrics securely to kube-apiserver.\nAt the time of writing, kube-controller-manager does not approve those certificates and 3rd party controller needs to be used to automate this process. This is what kubelet-rubber-stamp does.\nIt can be installed by running the following command:\nhelm upgrade --install -n kube-system kubelet-rubber-stamp flexkube/kubelet-rubber-stamp Verifying cluster functionality Now your cluster is ready to use. Go ahead and try deploying some application on it. Please keep following things in mind, while using the cluster:\n Service of type LoadBalancer won\u0026rsquo;t get the IP address, as there is no controller, which could assign it. 
The cluster has Pod Security Policies enabled by default. Make sure your deployment ships the PSP. There is no storage provider on the cluster, so pods requesting PVCs will be stuck in Pending state. Cleaning up To clean up the host, first uninstall all helm releases, so the kubelet removes all the pods. This can be done using the following command:\nhelm uninstall -n kube-system calico coredns kube-proxy kubelet-rubber-stamp tls-bootstrapping Then, rename or remove the config.yaml file, so the CLI will be able to clean up the resources. For example, execute:\nmv config.yaml config.yaml.old Now you can remove all containers managed by flexkube using the following commands:\nflexkube kubelet-pool default flexkube controlplane flexkube etcd Finally, the following directories can be removed as well:\nsudo rm -rf /etc/kubernetes/ /var/lib/etcd/ /var/lib/kubelet/ /var/lib/calico/ What\u0026rsquo;s next This guide explained how to create a cluster using the flexkube CLI, walking through every step and providing insights, but the process might be time consuming and error-prone. For a fully automated installation, see \u0026ldquo;Creating single-node Kubernetes cluster on local machine using Terraform\u0026rdquo;.\nIf you want to deploy the cluster to remote machine(s), which also supports an HA controlplane, see \u0026ldquo;Creating multi-node cluster using Terraform\u0026rdquo;.\n"}),a.add({id:29,href:'/documentation/guides/kubernetes/creating-single-node-cluster-on-local-machine-using-terraform/',title:"Creating Single Node Cluster on Local Machine Using Terraform",content:"Creating single-member cluster on local machine using Terraform This guide describes how to create a single node Kubernetes cluster using Terraform and the Flexkube provider. The process is very simple and requires just a few steps.\nFor a more detailed guide, see Creating single node Kubernetes cluster on local machine using flexkube CLI.\nRequirements For this guide, it is required to have one Linux machine with the Docker daemon installed and running.\nIt is recommended that the machine has at least 2 GB of RAM and is a fresh machine, as in this tutorial the tools will write to directories like /var/lib/etcd, /etc/kubernetes or /var/lib/kubelet without notice.\nThe Docker version should be 18.06+. You can follow the Docker documentation to see how to install Docker on your machine.\nNetwork interface setup is not important; however, having a private IP address is recommended from a security perspective.\nIf you don\u0026rsquo;t have such a machine, visit Creating virtual machines for testing to see how to create one locally.\nPreparation Before we start creating a cluster, we need to gather some information and download the required binaries.\nLog in to the machine where you want to deploy Kubernetes before proceeding.\nIP address for deployment To configure cluster components, you need to provide the IP address which will be used by the cluster. You can find available IP addresses using e.g. 
the ifconfig command.\nYou can try getting the IP address automatically using the following command:\nexport TF_VAR_ip=$(ip addr show dev $(ip r | grep default | tr \u0026#39; \u0026#39; \\\\n | grep -A1 dev | tail -n1) | grep \u0026#39;inet \u0026#39; | awk \u0026#39;{print $2}\u0026#39; | cut -d/ -f1); echo $TF_VAR_ip On VirtualBox, we can use the 10.0.2.15 IP address.\nSave the IP address for future use using the following command:\nexport TF_VAR_ip=10.0.2.15 Selecting service CIDR and pod CIDR Kubernetes requires 2 network CIDRs to operate: one from which each pod will receive an IP address, and one for Service objects with type ClusterIP. While selecting the CIDRs, make sure they don\u0026rsquo;t overlap with each other or with other networks your machine is connected to.\nOnce decided on the CIDRs, we should also save 2 special IP addresses:\n kubernetes Service - This IP address will be used by pods which talk to the Kubernetes API. It must be included in the kube-apiserver server certificate IP addresses list. This must be the first address of the Service CIDR. So if your service CIDR is 11.0.0.0/24, it should be 11.0.0.1. DNS Service - This IP address will be used by the cluster\u0026rsquo;s DNS service. This IP is usually the 10th address of the Service CIDR. So if your service CIDR is 11.0.0.0/24, it should be 11.0.0.10. With all this information gathered, run commands like these to save it for later use:\nexport TF_VAR_pod_cidr=10.0.0.0/24 export TF_VAR_service_cidr=11.0.0.0/24 export TF_VAR_kubernetes_service_ip=11.0.0.1 export TF_VAR_dns_service_ip=11.0.0.10 Selecting node name To ensure consistency between deployed components, it is recommended to select some identifier for the node, which will be used as an etcd member name and Kubernetes Node name. In this guide, we can just use the hostname of the machine. This can be done by executing the following command:\nexport TF_VAR_node_name=$(hostname) Downloading terraform binary For this guide, you must have the terraform binary available. You can download it using the following command:\nexport TERRAFORM_VERSION=0.15.6 wget https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip \u0026amp;\u0026amp; \\ unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip \u0026amp;\u0026amp; \\ rm terraform_${TERRAFORM_VERSION}_linux_amd64.zip Downloading kubectl binary To verify that the cluster is operational, it is recommended to have the kubectl binary available. 
You can install it using the following command:\ncurl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl \u0026amp;\u0026amp; chmod +x kubectl Make downloaded binaries available in $PATH For compatibility with the rest of the tutorial, you should make sure that the downloaded binaries are in one of the directories listed in the $PATH environment variable.\nYou can also add the working directory to $PATH using the following command:\nexport PATH=\u0026#34;$(pwd):${PATH}\u0026#34; Creating the cluster Now that you have all the required binaries and information, we can start creating the cluster.\nCreate a main.tf file with the following content:\nterraform { required_providers { flexkube = { source = \u0026#34;flexkube/flexkube\u0026#34; version = \u0026#34;0.5.1\u0026#34; } local = { source = \u0026#34;hashicorp/local\u0026#34; version = \u0026#34;1.4.0\u0026#34; } random = { source = \u0026#34;hashicorp/random\u0026#34; version = \u0026#34;2.2.1\u0026#34; } } required_version = \u0026#34;\u0026gt;= 0.15\u0026#34; } variable \u0026#34;ip\u0026#34; {} variable \u0026#34;pod_cidr\u0026#34; {} variable \u0026#34;service_cidr\u0026#34; {} variable \u0026#34;kubernetes_service_ip\u0026#34; {} variable \u0026#34;dns_service_ip\u0026#34; {} variable \u0026#34;node_name\u0026#34; {} resource \u0026#34;flexkube_pki\u0026#34; \u0026#34;pki\u0026#34; { etcd { peers = { \u0026#34;${var.node_name}\u0026#34; = var.ip } servers = { \u0026#34;${var.node_name}\u0026#34; = var.ip } client_cns = [\u0026#34;root\u0026#34;] } } resource \u0026#34;flexkube_etcd_cluster\u0026#34; \u0026#34;etcd\u0026#34; { pki_yaml = flexkube_pki.pki.state_yaml member { name = var.node_name peer_address = var.ip server_address = var.ip } } locals { ca_cert = \u0026#34;./ca.pem\u0026#34; cert = \u0026#34;./client.pem\u0026#34; key = \u0026#34;./client.key\u0026#34; } resource \u0026#34;local_file\u0026#34; \u0026#34;etcd_ca_certificate\u0026#34; { content = flexkube_pki.pki.etcd[0].ca[0].x509_certificate filename = local.ca_cert } resource \u0026#34;local_file\u0026#34; \u0026#34;etcd_root_user_certificate\u0026#34; { content = flexkube_pki.pki.etcd[0].client_certificates[index(flexkube_pki.pki.etcd[0].client_cns, \u0026#34;root\u0026#34;)].x509_certificate filename = local.cert } resource \u0026#34;local_file\u0026#34; \u0026#34;etcd_root_user_private_key\u0026#34; { sensitive_content = flexkube_pki.pki.etcd[0].client_certificates[index(flexkube_pki.pki.etcd[0].client_cns, \u0026#34;root\u0026#34;)].private_key filename = local.key } resource \u0026#34;local_file\u0026#34; \u0026#34;etcd_environment\u0026#34; { filename = \u0026#34;./etcd.env\u0026#34; content = \u0026lt;\u0026lt;EOF#!/bin/bash export ETCDCTL_API=3 export ETCDCTL_CACERT=${abspath(local.ca_cert)} export ETCDCTL_CERT=${abspath(local.cert)} export ETCDCTL_KEY=${abspath(local.key)} export ETCDCTL_ENDPOINTS=\u0026#34;https://${var.ip}:2379\u0026#34; EOF depends_on = [ flexkube_etcd_cluster.etcd, ] } Now, to create the cluster, run the following commands:\nterraform init \u0026amp;\u0026amp; terraform apply Terraform should pick up the IP address automatically if you exported it to the TF_VAR_ip environment variable.\nIf everything went successfully, you should now see a running etcd container when you execute docker ps.\nVerifying cluster functionality Now that the cluster is running, we can verify that it is functional.\nInspect created files After creating the cluster, you can find the following files 
in the working directory, created by Terraform:\n ca.pem containing the etcd CA X.509 certificate in PEM format. client.pem containing the etcd client X.509 certificate in PEM format, with the root Common Name. client.key containing the RSA private key in PEM format for the certificate in the client.pem file. etcd.env containing environment variables needed for etcdctl. The certificate and private key files are required to access the cluster. The etcd.env file is just a helper file for this tutorial.\nThe files can also be safely removed, as all the certificates are stored in the Terraform state anyway.\nUsing etcdctl etcdctl can be used to verify that the cluster is functional and to perform some basic operations as well as administrative tasks.\nTo be able to use it, it is recommended to set environment variables pointing to the certificates and cluster members, so they don\u0026rsquo;t have to be repeated for each command.\nWith this guide, the etcd.env helper file got created, from which you can load the environment variables using the following command:\nsource etcd.env Now etcdctl is ready to use.\nTo check if the cluster is healthy, execute the following command:\netcdctl endpoint health What\u0026rsquo;s next With the cluster running, you can now start using it, e.g. to deploy a Kubernetes cluster. To do that using Flexkube and Terraform, you can follow Creating single node Kubernetes cluster on local machine using Terraform.\nTo clean up the created resources, see the section below.\nCleaning up The first step of removing the cluster is running Terraform to remove all containers. To perform that, run this command:\nterraform destroy Once finished, you can remove the directories created by the cluster using the following command:\nsudo rm -rf /var/lib/etcd/ /etc/kubernetes/ "}),a.add({id:30,href:'/documentation/helm-charts/',title:"Helm Charts",content:"Helm Charts Resources provided by Flexkube only allow running a minimal Kubernetes cluster, without many essential services like kube-proxy, CoreDNS or a network plugin. However, those processes can be easily managed using Kubernetes itself, which allows managing them like any other Kubernetes workload.\nIt is also recommended to run the Kubernetes control plane components (kube-apiserver, kube-scheduler etc.) as Kubernetes workloads, as this allows easy integration with metrics collection, centralized logging, auto-scaling etc.\nThe recommended way of installing the remaining components is through Helm 3.x, which no longer requires Tiller to operate. This allows installing Helm charts directly into the temporary Kubernetes control plane.\nUpstream charts The following charts can be used directly from upstream, and it is recommended to install them on every cluster:\n coredns - provides the Cluster DNS service metrics-server - provides the API for Pod and Node metrics, which is required by the kubectl top command and auto-scaling Those charts can be installed from the stable repository e.g. using the following commands:\nhelm repo add stable https://kubernetes-charts.storage.googleapis.com/ \u0026amp;\u0026amp; \\ helm install -n kube-system coredns stable/coredns Flexkube charts For the charts which are not available from upstream projects, Flexkube maintains its own charts and provides a repository from which the charts can be deployed. Here is the list of charts provided by Flexkube:\n kubernetes - provides kube-proxy, kube-scheduler, kube-controller-manager, extra roles etc. 
kube-apiserver - provides kube-apiserver, separately from other Kubernetes components, to be able to enforce the Kubernetes version skew policy calico - provides Calico CNI kubelet-rubber-stamp - provides a daemon which approves kubelet serving certificates, as this is not done by kube-controller-manager, unlike for other kubelet certificates Those charts can be installed from the flexkube repository e.g. using the following commands:\nhelm repo add flexkube https://flexkube.github.io/charts/ \u0026amp;\u0026amp; \\ helm install -n kube-system calico flexkube/calico "}),a.add({id:31,href:'/documentation/helm-charts/maintained/',title:"Maintained",content:""}),a.add({id:32,href:'/documentation/helm-charts/maintained/calico/',title:"Calico",content:""}),a.add({id:33,href:'/documentation/helm-charts/maintained/kube-apiserver/',title:"Kube Apiserver",content:""}),a.add({id:34,href:'/documentation/helm-charts/maintained/kube-proxy/',title:"Kube Proxy",content:""}),a.add({id:35,href:'/documentation/helm-charts/maintained/kubelet-rubber-stamp/',title:"Kubelet Rubber Stamp",content:""}),a.add({id:36,href:'/documentation/helm-charts/maintained/kubernetes/',title:"Kubernetes",content:""}),a.add({id:37,href:'/documentation/helm-charts/maintained/tls-bootstrapping/',title:"Tls Bootstrapping",content:""}),a.add({id:38,href:'/documentation/helm-charts/upstream/',title:"Upstream",content:""}),a.add({id:39,href:'/documentation/helm-charts/upstream/stable-coredns/',title:"Stable Coredns",content:""}),a.add({id:40,href:'/documentation/helm-charts/upstream/stable-metrics-server/',title:"Stable Metrics Server",content:""}),a.add({id:41,href:'/documentation/project-status/',title:"Project Status",content:""}),a.add({id:42,href:'/documentation/reference/',title:"Reference",content:"Reference This section includes the reference documentation for the Flexkube Go API, Terraform provider, CLI and configuration options:\n CLI - For people interested in using the flexkube CLI Terraform - For people interested in using Flexkube via Terraform. Go - For people interested in using libflexkube in other Go projects. "}),a.add({id:43,href:'/documentation/reference/cli/',title:"CLI",content:"Flexkube CLI (flexkube) This section includes the reference documentation for the Flexkube CLI (flexkube), its subcommands and flags, and the configuration syntax and options.\nTo learn about available subcommands, see the Commands section.\nAll commands consume the config.yaml file from the current directory and may produce a state.yaml file, which will also be used on next runs.\nConfiguration The config.yaml file contains configuration for all subcommands. Its syntax is described in the Configuration section.\nState The state.yaml file contains information about created resources, like container IDs, generated certificates etc. 
It is important to keep this file if you want to be able to update or remove your resources using the flexkube CLI.\nYou can examine this file to find out how configuration options got translated into container flags, environment variables and configuration files.\n"}),a.add({id:44,href:'/documentation/reference/cli/commands/',title:"Commands",content:"Commands The flexkube CLI provides several commands, each of which is used to manage a different kind of resource related to Kubernetes.\n"}),a.add({id:45,href:'/documentation/reference/cli/commands/apiloadbalancer-pool/',title:"Apiloadbalancer Pool",content:""}),a.add({id:46,href:'/documentation/reference/cli/commands/containers/',title:"Containers",content:""}),a.add({id:47,href:'/documentation/reference/cli/commands/controlplane/',title:"Controlplane",content:""}),a.add({id:48,href:'/documentation/reference/cli/commands/etcd/',title:"Etcd",content:""}),a.add({id:49,href:'/documentation/reference/cli/commands/kubeconfig/',title:"Kubeconfig",content:""}),a.add({id:50,href:'/documentation/reference/cli/commands/kubelet-pool/',title:"Kubelet Pool",content:""}),a.add({id:51,href:'/documentation/reference/cli/commands/pki/',title:"Pki",content:""}),a.add({id:52,href:'/documentation/reference/cli/configuration/',title:"Configuration",content:""}),a.add({id:53,href:'/documentation/reference/cli/configuration/apiloadbalancer-pool/',title:"Apiloadbalancer Pool",content:""}),a.add({id:54,href:'/documentation/reference/cli/configuration/containers/',title:"Containers",content:""}),a.add({id:55,href:'/documentation/reference/cli/configuration/controlplane/',title:"Controlplane",content:""}),a.add({id:56,href:'/documentation/reference/cli/configuration/etcd/',title:"Etcd",content:""}),a.add({id:57,href:'/documentation/reference/cli/configuration/kubelet-pool/',title:"Kubelet Pool",content:"kubelet-pool "}),a.add({id:58,href:'/documentation/reference/cli/configuration/pki/',title:"Pki",content:""}),a.add({id:59,href:'/documentation/reference/go/',title:"Go",content:"Go For Go language reference documentation, see https://pkg.go.dev/github.com/flexkube/libflexkube.\n"}),a.add({id:60,href:'/documentation/reference/helm-charts/',title:"Helm Charts",content:""}),a.add({id:61,href:'/documentation/reference/helm-charts/calico/',title:"Calico",content:""}),a.add({id:62,href:'/documentation/reference/helm-charts/kube-apiserver/',title:"Kube Apiserver",content:""}),a.add({id:63,href:'/documentation/reference/helm-charts/kubelet-rubber-stamp/',title:"Kubelet Rubber Stamp",content:""}),a.add({id:64,href:'/documentation/reference/helm-charts/kubernetes/',title:"Kubernetes",content:""}),a.add({id:65,href:'/documentation/reference/helm-charts/tls-bootstrapping/',title:"Tls Bootstrapping",content:""}),a.add({id:66,href:'/documentation/reference/terraform/',title:"Terraform",content:"Terraform For Terraform provider reference documentation, see https://registry.terraform.io/providers/flexkube/flexkube/latest/docs.\n"}),a.add({id:67,href:'/documentation/resources/',title:"Resources",content:"Resources This section describes all resources which can be managed using Flexkube.\n"}),a.add({id:68,href:'/documentation/resources/api-loadbalancer/',title:"API Loadbalancer",content:""}),a.add({id:69,href:'/documentation/resources/containers/',title:"Containers",content:"Containers resource This document should describe when the containers resource is 
useful.\n"}),a.add({id:70,href:'/documentation/resources/controlplane/',title:"Controlplane",content:""}),a.add({id:71,href:'/documentation/resources/etcd/',title:"Etcd",content:"etcd "}),a.add({id:72,href:'/documentation/resources/kubelet-pool/',title:"Kubelet Pool",content:""}),a.add({id:73,href:'/documentation/resources/pki/',title:"Pki",content:"PKI The PKI (Public Key Infrastructure) resource is responsible for generating all X.509 certificates and RSA key pairs which are required by a Kubernetes cluster. Kubernetes requires several certificates to be generated, with specific CNs, different CAs etc., which is difficult to manage manually, so Flexkube provides a configurable and convenient interface to manage them.\nAll certificates are generated following the Kubernetes PKI certificates and requirements best practices.\nThe current implementation of PKI is experimental and only supports generating the certificates. Renewing the certificates or changing certificate properties is currently not implemented. Example configuration: CLI To generate the certificates using the flexkube CLI, create the following config.yaml file:\npki: certificate: organization: \u0026#34;example\u0026#34; etcd: peers: controller01: \u0026#34;192.168.1.10\u0026#34; clientCNs: - \u0026#34;root\u0026#34; - \u0026#34;kube-apiserver\u0026#34; - \u0026#34;prometheus\u0026#34; kubernetes: kubeAPIServer: externalNames: - \u0026#34;kube-apiserver.example.com\u0026#34; serverIPs: - \u0026#34;192.168.1.10\u0026#34; Then, run the following command:\nflexkube pki If the configuration is correct, the PKI will be created in the state.yaml file.\nGo To generate the Kubernetes PKI using Go, for example create a main.go file with the following content:\npackage main import ( \u0026#34;fmt\u0026#34; \u0026#34;github.com/flexkube/libflexkube/pkg/pki\u0026#34; ) func main() { p := \u0026amp;pki.PKI{ Certificate: pki.Certificate{ Organization: \u0026#34;example\u0026#34;, }, Etcd: \u0026amp;pki.Etcd{ Peers: map[string]string{ \u0026#34;controller01\u0026#34;: \u0026#34;192.168.1.10\u0026#34;, }, ClientCNs: []string{ \u0026#34;root\u0026#34;, \u0026#34;kube-apiserver\u0026#34;, \u0026#34;prometheus\u0026#34;, }, }, Kubernetes: \u0026amp;pki.Kubernetes{ KubeAPIServer: \u0026amp;pki.KubeAPIServer{ ExternalNames: []string{\u0026#34;kube-apiserver.example.com\u0026#34;}, ServerIPs: []string{\u0026#34;192.168.1.10\u0026#34;}, }, }, } p.Generate() fmt.Printf(\u0026#34;%+v\u0026#34;, p) } Then run the following command:\ngo run main.go If everything went successfully, you should get all generated certificates with their properties printed. 
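Please note that it is up to the user to persist the generated certificates when using the Go interface. As a minimal sketch of one way to do so (an illustration only, not a libflexkube API; it assumes the p variable from the program above and the gopkg.in/yaml.v3 package), the populated structure could be written to a file at the end of main: out, err := yaml.Marshal(p) if err != nil { panic(err) } if err := os.WriteFile(\u0026#34;pki.yaml\u0026#34;, out, 0o600); err != nil { panic(err) } The 0o600 mode keeps the file readable only by its owner, since it contains private keys, and \u0026#34;os\u0026#34; and \u0026#34;gopkg.in/yaml.v3\u0026#34; would need to be added to the import list. 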
Terraform To create the Kubernetes PKI using Terraform, create a main.tf file with the following content:\nresource \u0026#34;flexkube_pki\u0026#34; \u0026#34;pki\u0026#34; { certificate { organization = \u0026#34;example\u0026#34; } etcd { peers = { \u0026#34;controller01\u0026#34; = \u0026#34;192.168.1.10\u0026#34; } client_cns = [ \u0026#34;root\u0026#34;, \u0026#34;kube-apiserver\u0026#34;, \u0026#34;prometheus\u0026#34;, ] } kubernetes { kube_api_server { external_names = [\u0026#34;kube-apiserver.example.com\u0026#34;] server_ips = [\u0026#34;192.168.1.10\u0026#34;] } } } output \u0026#34;kubernetes_ca\u0026#34; { value = flexkube_pki.pki.kubernetes[0].ca[0].x509_certificate } Then, run the following commands:\nterraform init \u0026amp;\u0026amp; terraform apply If everything went successfully, you should see the Kubernetes CA certificate in PEM format printed as a Terraform output.\nTo see all available parameters, see the flexkube_pki page in the Terraform Registry documentation.\n "})})()