- Install the latest `kubectl`.
- Install `krew`.
- Install `kuttl`: `kubectl krew install kuttl`
- Install `kubebuilder`.
- Normally kubebuilder is installed to `/usr/local/kubebuilder/bin`. You should see the following executables in that directory: `etcd kube-apiserver kubebuilder kubectl`. You may want to append this directory to your `PATH` in your `.bashrc`, `.zshrc`, or similar, as shown below.
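For example, a minimal `PATH` addition, assuming the default install location:

```sh
# Make the kubebuilder bundle (etcd, kube-apiserver, etc.) discoverable
export PATH="$PATH:/usr/local/kubebuilder/bin"
```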
Note: If you choose to place the kubebuilder executables in a different path, make sure to use the `KUBEBUILDER_ASSETS` env var when running tests (mentioned in the Unit Tests section below).

Note: If you want or need to use OpenShift, you can install Code Ready Containers; just be aware that it consumes a much larger amount of resources, and our test helper scripts are designed to work with minikube.
- We haven’t had much success using the docker/podman drivers, and would recommend the kvm2 or virtualbox drivers.
- If you don’t have virtualization enabled, follow the guide in the minikube docs.
- Note that `virt-host-validate` may throw errors related to cgroups on Fedora 33; these can be ignored.
- If you don’t want to enter a root password when minikube needs to modify its VM, add your user to the `libvirt` group:

  ```sh
  sudo usermod -a -G libvirt $(whoami)
  newgrp libvirt
  ```
- You can set minikube to default to kvm2 with `minikube config set driver kvm2`; see the sketch after this list for a quick check.
- Move to the Testing section below to see if you can successfully run unit tests and E2E tests.
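As referenced above, a quick way to check that the kvm2 driver is working once it has been set as the default:

```sh
# Start a cluster with the configured driver and confirm it reports Running
minikube start
minikube status
```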
- `make install` will deploy the CRDs to the cluster configured in your kubeconfig.
- `make run` will build the binary and run it locally, connecting the manager to the OpenShift cluster configured in your kubeconfig.
- `make deploy` will try to run the manager in the cluster configured in your kubeconfig. You will likely need to push the image to a docker registry that your Kubernetes cluster can access. See E2E Testing below.
- `make genconfig` (optionally) needs to be run if the specification for the config has been altered.
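A typical local loop, assuming your kubeconfig already points at a disposable development cluster, might look like:

```sh
# Apply the CRDs, then build and run the manager locally
make install
make run
```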
The tests rely on the test environment set up by controller-runtime. This enables the operator to be initialized against a control plane just like it would be against a real OpenShift cluster.

To run the tests:

```sh
make test
```
If kubebuilder is installed somewhere other than `/usr/local/kubebuilder/bin`, then:

```sh
KUBEBUILDER_ASSETS=/path/to/kubebuilder/bin make test
```
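To iterate on a single package rather than the whole suite, something like the following should work; the package path and test name here are placeholders, not part of the repo:

```sh
# Run one package's tests; adjust ./controllers/... and TestMyChange to your change
KUBEBUILDER_ASSETS=/path/to/kubebuilder/bin go test ./controllers/... -run TestMyChange -v
```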
If you’re just getting started with writing tests in Go, or getting started with Go in general, take a look at https://quii.gitbook.io/learn-go-with-tests/
There are two e2e testing scripts which:

- build your code changes into a docker image (both `podman` and `docker` are supported)
- push the image to a registry
- deploy the operator onto a kubernetes cluster
- run `kuttl` tests
The scripts are:

- `e2e-test.sh`: pushes images to quay.io; used mainly for this repo’s CI/CD jobs, or in cases where you have access to a remote cluster on which to test the operator.
- `e2e-test-local.sh`: pushes images to a local docker registry; meant for local testing with minikube.
You will usually want to run:

```sh
minikube start --addons=registry --addons=ingress --addons=metrics-server --disable-optimizations
./e2e-test-local.sh
```
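The kuttl tests can also be run on their own against an already-deployed operator via the krew plugin installed earlier; the test directory below is an assumption, so check the repo layout first:

```sh
# Run the kuttl suite directly (directory path is a placeholder)
kubectl kuttl test ./tests/kuttl/
```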
If using podman to build the operator’s docker image, ensure subordinate IDs for rootless mode are configured. Test with:

```sh
podman system migrate
podman unshare cat /proc/self/uid_map
```
If those commands throw an error, you need to add entries to `/etc/subuid` and `/etc/subgid` for your user. The subuid range must not contain your user ID, and the subgid range must not contain your group ID. Example:
```sh
❯ id -u
112898
❯ id -g
112898
# Use 200000-265535 since 112898 is not found in this range
❯ sudo usermod --add-subuids 200000-265535 --add-subgids 200000-265535 $(whoami)
# Run migrate again:
❯ podman system migrate
❯ podman unshare cat /proc/self/uid_map
```
The API docs are generated using the crd-ref-docs tool by Elastic. You will need to install `asciidoctor`.

On Fedora, use:

```sh
sudo dnf install -y asciidoctor
```

For other platforms, see: https://docs.asciidoctor.org/asciidoctor/latest/install/
Generate the docs source using:

```sh
make api-docs
```
Then be sure to add doc changes before committing, e.g.:

```sh
git add docs/antora/modules/ROOT/pages/api_reference.adoc
```
The build docs stage only generates the asciidoc files. To actually view them, you need to install Antora:

```sh
npm install @antora/cli @antora/site-generator
```
A `playbook.yaml` is required at the root of the repo, which refers to certain directories:
```yaml
site:
  title: Clowder Documentation
  url: https://redhatinsights.github.io/clowder/
  start_page: clowder::index.adoc
content:
  sources:
  - url: /path/to/clowder/
    branches: master
    start_path: docs/antora
ui:
  bundle:
    url: https://gitlab.com/antora/antora-ui-default/-/jobs/artifacts/master/raw/build/ui-bundle.zip?job=bundle-stable
    snapshot: true
  output_dir: ui
runtime:
  fetch: true
output:
  dir: docs
```
After this, the antora build can be invoked:

```sh
./node_modules/.bin/antora generate playbook.yaml
```
The generated site can then be viewed from the `docs/clowder/dev/index.html` entrypoint.
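To browse the generated site, any static file server pointed at the output directory works, for example (assuming Python 3 is available):

```sh
# Serve the docs, then open http://localhost:8080/clowder/dev/index.html
python3 -m http.server 8080 --directory docs
```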
Clowder can read a configuration file in order to turn on certain debug options, toggle feature flags, and perform profiling. By default, Clowder will read from the file `/config/clowder_config.json` to configure itself. When deployed as a pod, an optional volume is configured to look for a `ConfigMap` in the same namespace, called `clowder-config`, which looks similar to the following:
```yaml
apiVersion: v1
data:
  clowder_config.json: |-
    {
      "debugOptions": {
        "trigger": {
          "diff": false
        },
        "cache": {
          "create": false,
          "update": false,
          "apply": false
        },
        "pprof": {
          "enable": true,
          "cpuFile": "testcpu"
        }
      },
      "features": {
        "createServiceMonitor": false,
        "watchStrimziResources": true
      }
    }
kind: ConfigMap
metadata:
  name: clowder-config
```
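As a sketch, applying this ConfigMap to a deployed operator might look like the following; the namespace and deployment name are assumptions that depend on how Clowder was installed:

```sh
# Namespace and deployment name below are placeholders
kubectl apply -n clowder-system -f clowder-config.yaml
kubectl rollout restart -n clowder-system deployment/clowder-controller-manager
```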
To run Clowder with `make run` (or to debug it in VSCode) and apply configuration, it is required to either create the `/config/clowder_config.json` file on the filesystem of the machine running the Clowder process, or to use the environment variable `CLOWDER_CONFIG_PATH` to point to an alternative file.
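For example, to run locally against a config file kept elsewhere (the path below is just an example):

```sh
# Point Clowder at an alternative config file, then run the manager locally
CLOWDER_CONFIG_PATH=/tmp/clowder_config.json make run
```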
At startup, Clowder will print the configuration that was read in the logs:

```
[2021-06-16 11:10:44] INFO Loaded config config:{'debugOptions': {'trigger': {'diff': True}, 'cache': {'create': True, 'update': True, 'apply': True}, 'pprof': {'enable': True, 'cpuFile': 'testcpu'}}, 'features': {'createServiceMonitor': False, 'disableWebhooks': True, 'watchStrimziResources': False, 'useComplexStrimziTopicNames': False}}
```
Clowder has several debug flags which can aid in troubleshooting difficult situations. These are described below.
- `debugOptions.trigger.diff`: When a resource is responsible for triggering a reconciliation of either a `ClowdApp` or a `ClowdEnvironment`, this option will print out a diff of the old and new resource, allowing an inspection of what actually triggered the reconciliation.

  ```
  [2021-06-16 11:24:49] INFO APP Reconciliation trigger name:puptoo-processor namespace:test-basic-app resType:Deployment type:update
  [2021-06-16 11:24:49] INFO APP Trigger diff diff:--- old
  +++ new
  @@ -3,8 +3,8 @@
     "name": "puptoo-processor",
     "namespace": "test-basic-app",
     "uid": "de492af3-be26-4a2c-b959-54b674c9e34f",
  -  "resourceVersion": "43162",
  -  "generation": 1,
  +  "resourceVersion": "44111",
  +  "generation": 2,
     "creationTimestamp": "2021-06-16T10:19:20Z",
     "labels": {
       "app": "puptoo",
  @@ -69,7 +69,7 @@
     "manager": "manager",
     "operation": "Update",
     "apiVersion": "apps/v1",
  -  "time": "2021-06-16T10:19:20Z",
  +  "time": "2021-06-16T10:24:49Z",
     "fieldsType": "FieldsV1",
     "fieldsV1": {
       "f:metadata": {
  ```
- `debugOptions.cache.create`: When an item is created in Clowder’s resource cache, this option enables printing of the resource that came from the k8s cache. If the resource exists in k8s, this will be the starting resource that Clowder will update.

  ```
  [2021-06-16 11:20:23] INFO [test-basic-app:puptoo] CREATE resource app:test-basic-app:puptoo diff:{
    "kind": "Deployment",
    "apiVersion": "apps/v1",
    "metadata": {
      "name": "puptoo-processor",
      "namespace": "test-basic-app",
      "uid": "de492af3-be26-4a2c-b959-54b674c9e34f",
      "resourceVersion": "43162",
      "generation": 1,
      "creationTimestamp": "2021-06-16T10:19:20Z",
      "labels": {
        "app": "puptoo",
        "pod": "puptoo-processor"
  ...
  ...
  ```
- `debugOptions.cache.update`: When enabled and an item is updated in Clowder’s resource cache, this option will print the new version of the item in the cache.

  ```
  [2021-06-16 11:20:23] INFO [test-basic-app:puptoo] UPDATE resource app:test-basic-app:puptoo diff:{
    "kind": "ServiceAccount",
    "apiVersion": "v1",
    "metadata": {
      "name": "iqe-test-basic-app",
      "namespace": "test-basic-app",
      "uid": "3d89ab16-dcb2-4dbb-b0e6-685009878175",
      "resourceVersion": "43135",
      "creationTimestamp": "2021-06-16T10:19:20Z",
      "labels": {
        "app": "test-basic-app"
      },
      "ownerReferences": [
        {
          "apiVersion": "cloud.redhat.com/v1alpha1",
          "kind": "ClowdEnvironment",
          "name": "test-basic-app",
          "uid": "25e121df-5b12-4c34-b8f3-a49b0f20afcf",
          "controller": true
        }
      ],
  ```
- `debugOptions.cache.apply`: This option prints out a diff showing what the resource was when it was first read into Clowder’s cache via the create, and what is being applied via the k8s client.

  ```
  [2021-06-16 11:20:23] INFO [test-basic-app:puptoo] Update diff app:test-basic-app:puptoo diff:--- old
  +++ new
  @@ -84,14 +84,14 @@
         "protocol": "TCP",
         "appProtocol": "http",
         "port": 8000,
  -      "targetPort": 0
  +      "targetPort": 8000
       },
       {
         "name": "metrics",
         "protocol": "TCP",
         "appProtocol": "http",
         "port": 9000,
  -      "targetPort": 0
  +      "targetPort": 9000
       }
     ],
  ```
- `debugOptions.pprof.enable`: To aid in profiling, this option enables the CPU profiler.
- `debugOptions.pprof.cpuFile`: This option sets where the CPU profiler saves the collected pprof data.
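Once a profile has been collected, it can be inspected with the standard Go tooling, using the `cpuFile` value from the example configuration above:

```sh
# Open an interactive pprof session on the collected CPU profile
go tool pprof testcpu
```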
Clowder currently supports several feature flags which are intended to enable or disable certain behaviour. They are detailed as follows:
| Flag Name | Description | Permanent |
|---|---|---|
| `createServiceMonitor` | Enables the creation of prometheus `ServiceMonitor` resources. | No |
| `disableWebhooks` | While testing locally and for unit tests, webhooks cannot be served, so this flag disables them. | Yes |
| `watchStrimziResources` | When enabled, Clowder will assume ownership of the Strimzi resources it creates. | No |
| `useComplexStrimziTopicNames` | This flag switches Clowder to use non-colliding names for Strimzi resources. This is important if using a singular Strimzi server for multiple environments. | Yes |
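For example, to enable `watchStrimziResources` when running locally, a minimal config can be written to the default path described in the configuration section above:

```sh
# Minimal local config toggling a single feature flag (requires write access to /config)
sudo mkdir -p /config
cat <<'EOF' | sudo tee /config/clowder_config.json
{
  "features": {
    "watchStrimziResources": true
  }
}
EOF
```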