This document covers the basics needed to work with the Kyverno codebase.
It contains instructions to build, run, and test Kyverno.
- Open project in devcontainer
- Tools
- Building local binaries
- Building local images
- Pushing images
- Deploying a local build
- Code generation
- Debugging local code
- Profiling
- Other Topics
- Selecting Issues
- Clone the project to your local machine.
- Make sure that you have the Visual Studio Code editor installed on your system.
- Make sure that you have WSL (Ubuntu preferred) and Docker installed on your system and on WSL too (the docker.sock UNIX socket file is necessary so the devcontainer can communicate with Docker running on the host machine).
- Open the project in Visual Studio Code. Once the project is open, press F1, type "wsl", and click on "Reopen in WSL".
- If you haven't already done so, install the Dev Containers extension in Visual Studio Code.
- Once the extension is installed, you should see a green icon in the bottom left corner of the window.
- After you have installed the Dev Containers extension, it should automatically detect the .devcontainer folder inside the project opened in WSL and suggest reopening the project in a container.
- If it doesn't suggest this, press Ctrl + Shift + P, search for "Reopen in Container", and click on it.
- If everything goes well, the project should be opened in your devcontainer.
- Then follow the steps below to configure the project.
Building and/or testing Kyverno requires additional tooling.
We use make to simplify installing the tools we use.
Tools will be installed in the .tools folder when possible; this keeps installed tools local to the Kyverno repository.
The .tools folder is ignored by git and binaries should not be committed.
Note: If you don't install tools, they will be downloaded/installed as necessary when running make targets.
You can manually install tools by running:
make install-tools
To remove installed tools, run:
make clean-tools
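For example, a quick way to confirm where the tools landed (the exact set of binaries depends on the Makefile version you are on):
# the pinned tool binaries live in the repository-local .tools folder
ls .tools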
The Kyverno repository contains code for three different binaries:
- kyvernopre: Binary to update/cleanup existing resources in clusters. This is typically run as an init container before the Kyverno controller starts.
- kyverno: The Kyverno controller binary.
- cli: The Kyverno command line interface.
Note: You can build all binaries at once by running make build-all.
To build the kyvernopre binary on your local system, run:
make build-kyverno-init
The binary should be created at ./cmd/kyverno-init/kyvernopre.
To build the kyverno binary on your local system, run:
make build-kyverno
The binary should be created at ./cmd/kyverno/kyverno.
To build the cli binary on your local system, run:
make build-cli
The binary should be created at ./cmd/cli/kubectl-kyverno/kubectl-kyverno.
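As a quick smoke test of a local build (assuming the build above succeeded), you can invoke the freshly built CLI binary directly, for example to print its version:
# run the locally built CLI from the path produced by make build-cli
./cmd/cli/kubectl-kyverno/kubectl-kyverno version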
In the same spirit as building local binaries, you can build local docker images instead of local binaries.
ko is used to build images; please refer to Building local images with ko.
Building images uses repository tags. To fetch repository tags into your fork, run the following commands:
git remote add upstream https://github.com/kyverno/kyverno
git fetch upstream --tags
When building local images with ko, you can't specify the registry used to create the image names. It will always be ko.local.
Note: You can build all local images at once by running make ko-build-all.
To build the kyvernopre image on your local system, run:
make ko-build-kyverno-init
The resulting image should be available locally, named ko.local/github.com/kyverno/kyverno/cmd/initcontainer.
To build the kyverno image on your local system, run:
make ko-build-kyverno
The resulting image should be available locally, named ko.local/github.com/kyverno/kyverno/cmd/kyverno.
To build the cli image on your local system, run:
make ko-build-cli
The resulting image should be available locally, named ko.local/github.com/kyverno/kyverno/cmd/cli/kubectl-kyverno.
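To confirm the images were produced (assuming ko published them to your local Docker daemon), you can list them with docker:
# list locally built Kyverno images tagged under ko.local
docker images | grep ko.local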
Pushing images is very similar to building local images, except that built images will be published on a remote image registry.
ko is used to build and publish images; please refer to Pushing images with ko.
When pushing images, you can specify the registry you want to publish images to by setting the REGISTRY environment variable (default value is ghcr.io).
When publishing images, we are using the following strategy:
- All published images are tagged with latest. Images tagged with latest should not be considered stable and can come from multiple release branches or main.
- In addition to latest, dev images are tagged with the following pattern: <major>.<minor>-dev-N-<git hash>, where N is a two-digit number beginning at one for the major-minor combination and incremented by one on each subsequent tagged image.
- In addition to latest, release images are tagged with the following pattern: <major>.<minor>.<patch>-<pre release>. The pre release part is optional and only applies to pre releases (-beta.1, -rc.2, ...).
Authenticating to the remote registry is done automatically in the Makefile with ko login.
To allow authentication you will need to set the REGISTRY_USERNAME and REGISTRY_PASSWORD environment variables before invoking targets responsible for pushing images.
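For example, to publish a dev-tagged kyverno image (the credentials and registry values below are placeholders, substitute your own):
# credentials consumed by ko login in the Makefile
export REGISTRY_USERNAME=<your username>
export REGISTRY_PASSWORD=<your token>
# optionally override the default ghcr.io registry
export REGISTRY=ghcr.io
# build and push the dev image
make ko-publish-kyverno-dev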
Note: You can push all images at once by running make ko-publish-all or make ko-publish-all-dev.
To push the kyvernopre image to a remote registry, run:
# push stable image
make ko-publish-kyverno-init
or
# push dev image
make ko-publish-kyverno-init-dev
The resulting image should be available remotely, named ghcr.io/kyverno/kyvernopre (by default, if the REGISTRY environment variable was not set).
To push the kyverno image to a remote registry, run:
# push stable image
make ko-publish-kyverno
or
# push dev image
make ko-publish-kyverno-dev
The resulting image should be available remotely, named ghcr.io/kyverno/kyverno (by default, if the REGISTRY environment variable was not set).
To push the cli image to a remote registry, run:
# push stable image
make ko-publish-cli
or
# push dev image
make ko-publish-cli-dev
The resulting image should be available remotely, named ghcr.io/kyverno/kyverno-cli (by default, if the REGISTRY environment variable was not set).
After building local images, it is often useful to deploy those images in a local cluster.
We use KinD to create local clusters easily, and have targets to create a local cluster, load locally built images into it, and deploy the helm charts.
If you already have a local KinD cluster running, you can skip this step.
To create a local KinD cluster, run:
make kind-create-cluster
You can override the k8s version by setting the KIND_IMAGE environment variable (default value is kindest/node:v1.29.1).
You can also override the KinD cluster name by setting the KIND_NAME environment variable (default value is kind).
To build local images and load them on a local KinD cluster, run:
# build kyvernopre image and load it in KinD cluster
make kind-load-kyverno-init
or
# build kyverno image and load it in KinD cluster
make kind-load-kyverno
or
# build kyvernopre and kyverno images and load them in KinD cluster
make kind-load-all
You can override the KinD cluster name by setting the KIND_NAME environment variable (default value is kind).
To build local images, load them on a local KinD cluster, and deploy helm charts, run:
# build images, load them in KinD cluster and deploy kyverno helm chart
make kind-deploy-kyverno
or
# deploy kyverno-policies helm chart
make kind-deploy-kyverno-policies
or
# build images, load them in KinD cluster and deploy helm charts
make kind-deploy-all
This will build local images, load built images in every node of the KinD cluster, and deploy kyverno and/or kyverno-policies helm charts in the cluster (overriding image repositories and tags).
You can override the KinD cluster name by setting the KIND_NAME environment variable (default value is kind).
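Once the charts are deployed, a quick sanity check (plain kubectl, not a Makefile target) is to confirm the pods are running and using the locally built images:
# check that the Kyverno pods are up in the kyverno namespace
kubectl -n kyverno get pods
# print the image used by each pod to confirm the ko.local images were picked up
kubectl -n kyverno get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'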
We are using code generation tools to create the following portions of code:
- Generating kubernetes API client
- Generating API deep copy functions
- Generating CRD definitions
- Generating API docs
Note: You can run make codegen-all to build all generated code at once.
Based on the APIs golang code definitions, you can generate the corresponding Kubernetes client by running:
# generate clientset, listers and informers
make codegen-client-all
or
# generate clientset
make codegen-client-clientset
or
# generate listers
make codegen-client-listers
or
# generate informers
make codegen-client-informers
This will output generated files in the /pkg/client package.
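Since generated files are committed alongside the rest of the code, a plain git check (not a Kyverno-specific target) is a convenient way to review what a codegen run changed:
# show generated client files touched by the codegen run
git status --short pkg/client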
Based on the APIs golang code definitions, you can generate the corresponding deep copy functions by running:
# generate all deep copy functions
make codegen-deepcopy-all
or
# generate kyverno deep copy functions
make codegen-deepcopy-kyverno
or
# generate policy reports deep copy functions
make codegen-deepcopy-report
This will output files named zz_generated.deepcopy.go in every API package.
Based on the APIs golang code definitions, you can generate the corresponding CRDs manifests by running:
# generate all CRDs
make codegen-crds-all
or
# generate Kyverno CRDs
make codegen-crds-kyverno
or
# generate policy reports CRDs
make codegen-crds-report
This will output CRDs manifests in /config/crds.
Based on the APIs golang code definitions, you can generate the corresponding API reference docs by running:
# generate API docs
make codegen-api-docs
This will output API docs in /docs/crd.
Based on the APIs golang code definitions, you can generate the corresponding CRD definitions for helm charts by running:
# generate helm CRDs
make codegen-helm-crds
This will output CRDs templates in /charts/kyverno/templates/crds.yaml.
Note: You can run make codegen-helm-all to generate CRDs and docs at once.
Based on the helm charts default values, you can generate the corresponding helm chart docs by running:
# generate helm docs
make codegen-helm-docs
This will output docs in each helm chart's respective README.md.
Note: You can run make codegen-helm-all to generate CRDs and docs at once.
Running Kyverno on a local machine without deploying it in a remote cluster can be useful, especially for debugging purposes. You can run Kyverno locally or in your IDE of choice with a few steps:
- Create a local cluster
  - You can create a simple cluster with KinD with make kind-create-cluster
- Deploy Kyverno manifests except the Kyverno Deployment
  - Kyverno is going to run on your local machine, so it should not run in the cluster at the same time
  - You can deploy the manifests by running make debug-deploy
- There are multiple environment variables that need to be configured. The variables can be found here. Their values can be set using the command export NAME=value
- To run Kyverno locally against the remote cluster you will need to provide the following arguments:
  - --kubeconfig must point to your kubeconfig file (usually ~/.kube/config)
  - --serverIP must be set to <local ip>:9443 (<local ip> is the private IP address of your local machine)
  - --backgroundServiceAccountName must be set to system:serviceaccount:kyverno:kyverno-background-controller
  - --caSecretName must be set to kyverno-svc.kyverno.svc.kyverno-tls-ca
  - --tlsSecretName must be set to kyverno-svc.kyverno.svc.kyverno-tls-pair
Once you are ready with the steps above, Kyverno can be started locally with:
go run ./cmd/kyverno/ --kubeconfig ~/.kube/config --serverIP=<local-ip>:9443 --backgroundServiceAccountName=system:serviceaccount:kyverno:kyverno-background-controller --caSecretName=kyverno-svc.kyverno.svc.kyverno-tls-ca --tlsSecretName=kyverno-svc.kyverno.svc.kyverno-tls-pair
You will need to adapt those steps to run debug sessions in your IDE of choice, but the general idea remains the same.
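If you need a quick way to fill in <local-ip>, one option on Linux is to read the first address reported by hostname (this is just a convenience sketch; any method that yields your machine's private IP works):
# use the machine's first reported IP as --serverIP
LOCAL_IP=$(hostname -I | awk '{print $1}')
go run ./cmd/kyverno/ --kubeconfig ~/.kube/config --serverIP=${LOCAL_IP}:9443 --backgroundServiceAccountName=system:serviceaccount:kyverno:kyverno-background-controller --caSecretName=kyverno-svc.kyverno.svc.kyverno-tls-ca --tlsSecretName=kyverno-svc.kyverno.svc.kyverno-tls-pair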
To profile the Kyverno application running inside a Kubernetes pod, set the --profile flag to true in install.yaml. The default profiling port is 6060, and it can be configured via profile-port.
--profile
Set this flag to 'true', to enable profiling.
--profile-port string
Enable profiling at given port, defaults to 6060. (default "6060")
You can get at the application in the pod by port forwarding with kubectl, for example:
$ kubectl -n kyverno get pod
NAME READY STATUS RESTARTS AGE
kyverno-7d67c967c6-slbpr 1/1 Running 0 19s
$ kubectl -n kyverno port-forward kyverno-7d67c967c6-slbpr 6060
Forwarding from 127.0.0.1:6060 -> 6060
Forwarding from [::1]:6060 -> 6060
The HTTP endpoint will now be available as a local port.
Alternatively, use a Service of type LoadBalancer to expose Kyverno. An example Service manifest is given below:
apiVersion: v1
kind: Service
metadata:
  name: pproc-service
  namespace: kyverno
spec:
  selector:
    app: kyverno
  ports:
  - protocol: TCP
    port: 6060
    targetPort: 6060
  type: LoadBalancer
You can then generate the file for the memory profile with curl and pipe the data to a file:
$ curl http://localhost:6060/debug/pprof/heap > heap.pprof
Generate the file for the CPU profile with curl and pipe the data to a file:
curl "http://localhost:6060/debug/pprof/profile?seconds=60" > cpu.pprof
To analyze the data:
go tool pprof heap.pprof
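You can also explore a captured profile interactively in the browser (assuming Go is installed locally; the listen port is arbitrary):
# open an interactive web UI for the CPU profile on port 8080
go tool pprof -http=:8080 cpu.pprof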
Additional advanced developer docs are available in the developer docs folder.
When you are ready to contribute, you can select an issue from Good First Issues.