Drop of the 10.1SP1 changes
Contains all of the helm charts and tests from the 10.1SP1 release.
spilchen committed Apr 29, 2021
1 parent 3947906 commit 360e178
Showing 71 changed files with 4,541 additions and 2 deletions.
8 changes: 8 additions & 0 deletions .gitignore
@@ -0,0 +1,8 @@
test_output.*
*.rpm
cert/
logs.log
.vscode/
unit-tests.xml
int-tests-output/
stern.pid
23 changes: 23 additions & 0 deletions CONTRIBUTING.md
@@ -0,0 +1,23 @@
First off, thank you for considering contributing to *vertica-kubernetes* and helping make it even better than it is today!

This document will guide you through the contribution process. There are a number of ways you can help:

- [Bug Reports](#bug-reports)
- [Feature Requests](#feature-requests)
- [Code Contributions](#code-contributions)

# Bug Reports

If you find a bug, submit an [issue](https://github.com/vertica/vertica-kubernetes/issues) with a complete and reproducible bug report. If the issue can't be reproduced, it will be closed. If you opened an issue, but figured out the answer later on your own, comment on the issue to let people know, then close the issue.

For issues (e.g., security-related issues) that are **not suitable** for public reporting on the GitHub issue system, report them to the [Vertica open source team](mailto:[email protected]) directly, or file a case with Vertica support if you have a support account.

# Feature Requests

Feel free to share your ideas for how to improve *vertica-kubernetes*. We’re always open to suggestions.
You can open an [issue](https://github.com/vertica/vertica-kubernetes/issues)
with details describing what feature(s) you'd like added or changed.

# Code Contributions

At this time we are not accepting any code contributions. We are in the process of converting the helm chart to the [operator framework](https://operatorframework.io/). When that is done, we will commit the operator to this repository and open it up for outside contributions.
174 changes: 174 additions & 0 deletions DEVELOPER.md
@@ -0,0 +1,174 @@
# Introduction

This guide explains how to set up an environment to develop and test Vertica in Kubernetes.

# Software Setup
Using this repo requires a working Kubernetes cluster. In addition, the following software must be installed in order to run the integration tests (a quick check of the core tools follows the list):

- [go](https://golang.org/doc/install) (version 1.13.8)
- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) (version 1.20.1). If you are using a real Kubernetes cluster this will already be installed.
- [helm](https://helm.sh/docs/intro/install/) (version 3.5.0)
- [kubectx](https://github.com/ahmetb/kubectx/releases/download/v0.9.1/kubectx) (version 0.9.1)
- [kubens](https://github.com/ahmetb/kubectx/releases/download/v0.9.1/kubens) (version 0.9.1)
- [daemonize](https://software.clapper.org/daemonize/)
- [stern](https://github.com/wercker/stern) (version 1.11.0)
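
To confirm the core tools are installed and on your PATH, you can run a quick version check (kubectx, kubens, and daemonize report their versions differently, so they are omitted here):

```
go version
kubectl version --client
helm version --short
stern --version
```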

# Kind Setup
[Kind](https://kind.sigs.k8s.io/) is a way to set up a multi-node Kubernetes cluster using Docker. It mimics a multi-node setup by starting a separate container for each node. The requirements for running Kind are quite low - it is possible to set this up on your own laptop. This is the intended deployment for running the tests in an automated fashion.

We have a wrapper script that sets up Kind and creates a cluster suitable for testing Vertica. The following command creates a cluster named cluster1 that has one master node and two worker nodes. It takes only a few minutes to complete:

```
scripts/kind.sh init cluster1
```
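
If you prefer to run Kind by hand instead of through the wrapper, the script boils down to something like the following sketch (the exact node image and options are chosen by `scripts/kind.sh`):

```
cat <<EOF > kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF
kind create cluster --name cluster1 --config kind-config.yaml
```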

After it returns, change the context to use the cluster. The cluster has its own kubectl context named kind-cluster1:

```
kubectx kind-cluster1
```
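
If you do not have kubectx installed, the plain kubectl equivalent works as well:

```
kubectl config use-context kind-cluster1
```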

Test the cluster by checking the status of the nodes:

```
kubectl get nodes
```

After kind is up, you need to configure it to run the integration tests. The `setup-int-tests.sh` script encompasses all of the setup:

```
scripts/setup-int-tests.sh
```



# Kind Cleanup

After you are done with the cluster, you can delete it with our helper script. Substitute `cluster1` with the name of your cluster:

```
scripts/kind.sh term cluster1
```

If you forget the cluster name, run kind directly to list the installed clusters:

```
$ PATH=$PATH:$HOME/go/bin
$ kind get clusters
cluster1
```

# Developer Workflow

## Make Changes

The structure of the repo is as follows:
- **docker-\***: Contains the Dockerfiles that we depend on.
- **helm-charts/vertica**: Contains the helm charts to deploy Vertica. This chart manages all of the required Kubernetes objects.
- **helm-charts/vertica-int-tests**: Contains the helm charts to run integration tests against Vertica.

## Build and Push Containers

We currently make use of a few containers:
- **vertica**: This container is used as an initContainer to bootstrap the config directory (/opt/vertica/config) and as the long-running container that runs the Vertica daemon. The files for this container are in the `docker-vertica/` directory.
- **python-tools**: This is a container we use for integration tests. It is a minimal python base with vertica_python installed. It has a helper class that creates a connection object using information from Kubernetes. We use this to write integration tests.

In order to run Vertica in Kubernetes, we need to package Vertica inside a container. This container is then referenced in the YAML file when we install the helm chart.

Run this make target to build the necessary containers:

```
make docker-build
```

By default, this creates images that are stored in the local Docker daemon. The tag is `<namespace>-1`.
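
You can confirm the images landed in the local daemon before pushing them anywhere; the image names below follow the defaults in the Makefile:

```
docker images | grep -E 'vertica-k8s|python-tools'
```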

You need to make these containers available to the Kubernetes cluster. With kind, you need to push them into the cluster so that they appear as local containers. Use the `kind load docker-image` command for this. The following script handles this for all images:

```
scripts/push-to-kind.sh -t <your-tag> <cluster-name>
```
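
The script is a convenience wrapper; loading a single image by hand looks roughly like this (a sketch, assuming the default image name from the Makefile):

```
kind load docker-image vertica-k8s:<your-tag> --name <cluster-name>
```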

Due to the size of the vertica image, this step can take in excess of 10 minutes when run on a laptop.

If your image builds fail silently, confirm that there is enough disk space in your Docker repository to store the built images.
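
A quick way to check how much space Docker is using, and to reclaim some if needed:

```
docker system df
# Removes stopped containers, dangling images, and unused networks. Use with care.
docker system prune
```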

## Run Unit Tests

Unit testing verifies that the YAML files generated by `helm install` are in a valid format. Because of the various config knobs we provide, there are many variations of the actual YAML files that helm installs. We have two flavors of unit testing:

1. **Helm lint**: This uses the chart verification test that is built into Helm. You can run this with the following make target:

```
make lint
```

2. **Helm unittest**: Run it with the following make target:

```
make run-unit-tests
```

Unit tests are stored in `helm-charts/vertica/tests`. They use the [unittest plugin for helm](https://github.com/quintush/helm-unittest). Some samples that you can use to write your own tests can be found on the [unittest GitHub page](https://github.com/quintush/helm-unittest/tree/master/test/data/v3/basic). [This document](https://github.com/quintush/helm-unittest/blob/master/DOCUMENT.md) describes the format of the tests.
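
As a rough illustration of the format only - the template name and asserted document path below are hypothetical and must be adjusted to match the actual chart files - a test that checks the server image tag override might look like this:

```
cat > helm-charts/vertica/tests/image_tag_test.yaml <<'EOF'
# Hypothetical example; adjust the template name and asserted path to the real chart.
suite: server image tag
templates:
  - statefulset.yaml
tests:
  - it: renders the overridden server image tag
    set:
      image:
        server:
          tag: test-tag
    asserts:
      - matchRegex:
          path: spec.template.spec.containers[0].image
          pattern: test-tag$
EOF
make run-unit-tests
```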


## Deploy Vertica

To deploy Vertica, use the Helm charts from helm-charts/vertica. Override the default configuration settings with values that are specific to your Kubernetes cluster. We have a make target to deploy using the kind cluster that was set up in the previous section. This make target waits synchronously until all of the pods are in the ready state, and cleans up any leftover deployment that might exist:

```
make deploy-kind
```
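
Under the covers, `make deploy-kind` boils down to a `helm install` with kind-specific overrides (plus TLS secret creation and a wait script - see the Makefile). You can run the equivalent install by hand if you need to tweak values:

```
helm install cluster \
  -f helm-charts/vertica/kind-overrides.yaml \
  --set image.server.tag=<your-tag> \
  --set db.storage.communal.path=<s3://bucket/path/db> \
  helm-charts/vertica
```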

If the pods never reach the ready state, this step will time out. You can debug this step by describing any of the pods (if any exist) and looking at the events. Use the following selector to get the pods that Vertica creates:
```
$ kubectl get pods -l vertica.com/database=verticadb
NAME READY STATUS RESTARTS AGE
cluster-vertica-defaultsubcluster-0 1/1 Running 0 30m
cluster-vertica-defaultsubcluster-1 1/1 Running 0 30m
cluster-vertica-defaultsubcluster-2 1/1 Running 0 30m
```
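
For example, to inspect a pod that is stuck and review recent events in the namespace:

```
kubectl describe pod cluster-vertica-defaultsubcluster-0
kubectl get events --sort-by=.metadata.creationTimestamp
```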

## Run Integration Tests

The integration tests are run through Kubernetes itself. We use [octopus as the testing framework](https://github.com/kyma-incubator/octopus), which lets you define tests and package them up to run in a test suite. This framework was chosen because it allows you to selectively run tests, automatically retry failed tests, and run a test multiple times.

Before running the integration tests for the first time, you must set up Kubernetes with some required objects. We have encapsulated everything that you need in the following script:

```
scripts/setup-int-tests.sh
```

You only need to run this once - you do not need to run it after each change.

We have a make target to run the integration tests against the currently deployed Vertica cluster:
```
make run-int-tests
```

This command waits synchronously until the tests succeed or fail. The tests run with an S3 backend, so this creates a MinIO tenant to store the communal data.
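
The communal path defaults to an S3 URL built from your hostname and user name (see `COMMUNAL_PATH` in the Makefile); you can point the tests at a different bucket by overriding it:

```
make run-int-tests COMMUNAL_PATH="s3://<bucket>/<path>/db"
```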

There are a few ways to monitor the progress of the test:

- Each test runs in its own pod. To view the output of the test, issue the kubectl logs command.

```
kubectl logs oct-tp-testsuite-sanity-test-install-0
```

- Or you can tail the currently running test automatically using this script:

```
scripts/cur-oct-logs.sh
```

- Or you can use stern. We run stern automatically to collect the output for each test. The output is saved in the `int-tests-output/` directory. This output is overwritten each time we run the integration tests.

## Cleanup

The following make target cleans up the integration tests and deployment:

```
make clean-deploy clean-int-tests
```

2 changes: 1 addition & 1 deletion LICENSE
@@ -186,7 +186,7 @@
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright [yyyy] [name of copyright owner]
Copyright [2021] Microfocus

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
156 changes: 156 additions & 0 deletions Makefile
@@ -0,0 +1,156 @@
GOPATH?=${HOME}/go
CONTAINER_REPO?=
GET_NAMESPACE_SH=kubectl config view --minify --output 'jsonpath={..namespace}'
ifeq (, $(shell ${GET_NAMESPACE_SH}))
NAMESPACE?=default
else
NAMESPACE?=$(shell ${GET_NAMESPACE_SH})
endif
REVISION?=1
TAG?=${NAMESPACE}-${REVISION}
VIMAGE_NAME=vertica-k8s
INT_TEST_TIMEOUT=30m
# Setting the communal path here is for the integration tests. It must be an
# s3 endpoint and will trigger a new tenant in the minIO operator.
COMMUNAL_PATH?="s3://nimbusdb/${HOSTNAME}-${USER}/db"
INT_TEST_OUTPUT_DIR?=$(PWD)/int-tests-output
TMPDIR?=$(PWD)
HELM_UNITTEST_PLUGIN_INSTALLED=$(shell helm plugin list | grep -c '^unittest')
ifeq ($(VERBOSE), FALSE)
WAIT_FOR_INT_TESTS_ARGS?=-q
endif

CERT_DIR=cert/
REQ_CONF=$(CERT_DIR)openssl_req.conf
ROOT_KEY=$(CERT_DIR)root.key
ROOT_CRT=$(CERT_DIR)root.crt
SERVER_KEY=$(CERT_DIR)server.key
SERVER_CSR=$(CERT_DIR)server.csr
SERVER_CRT=$(CERT_DIR)server.crt
CLIENT_KEY=$(CERT_DIR)client.key
CLIENT_CSR=$(CERT_DIR)client_cert.csr
CLIENT_CRT=$(CERT_DIR)client.crt
CLIENT_TLS_SECRET=vertica-client-tls
SERVER_TLS_SECRET=vertica-server-tls

.PHONY: lint
lint:
	helm lint helm-charts/vertica helm-charts/vertica-int-tests

.PHONY: install-unittest-plugin
install-unittest-plugin:
ifeq ($(HELM_UNITTEST_PLUGIN_INSTALLED), 0)
	helm plugin install https://github.com/quintush/helm-unittest
endif

.PHONY: run-unit-tests
run-unit-tests: install-unittest-plugin
	helm unittest --helm3 --output-type JUnit --output-file $(TMPDIR)/unit-tests.xml helm-charts/vertica

.PHONY: stop-stern
stop-stern:
ifneq (,$(wildcard stern.pid))
	kill -INT $(shell cat stern.pid) || :
	rm stern.pid 2> /dev/null || :
endif

.PHONY: clean-int-tests
clean-int-tests: stop-stern
	helm uninstall tests || :

.PHONY: run-int-tests
run-int-tests: clean-int-tests
	mkdir -p $(INT_TEST_OUTPUT_DIR)
	daemonize -c $(PWD) -p stern.pid -l stern.pid -v \
	  -e $(INT_TEST_OUTPUT_DIR)/int-tests.stderr \
	  -o $(INT_TEST_OUTPUT_DIR)/int-tests.stdout \
	  $(shell which stern) --timestamps oct-.\*
	helm install tests \
	  --set communalStorage.path=${COMMUNAL_PATH} \
	  --set pythonToolsTag=${TAG} \
	  --set tls.serverSecret=${SERVER_TLS_SECRET} \
	  --set tls.clientSecret=${CLIENT_TLS_SECRET} \
	  --set pythonToolsRepo=${CONTAINER_REPO}python-tools \
	  ${EXTRA_HELM_ARGS} \
	  helm-charts/vertica-int-tests
	timeout --foreground ${INT_TEST_TIMEOUT} scripts/wait-for-int-tests.sh $(WAIT_FOR_INT_TESTS_ARGS)
	$(MAKE) stop-stern

.PHONY: clean-deploy
clean-deploy: clean-tls-secrets
	helm uninstall cluster 2> /dev/null || :
	scripts/blastdb.sh 2> /dev/null || :

.PHONY: deploy-kind
deploy-kind: clean-deploy create-tls-secrets
	helm install cluster \
	  -f helm-charts/vertica/kind-overrides.yaml \
	  --set image.server.tag=${TAG} \
	  --set db.storage.communal.path=${COMMUNAL_PATH} \
	  helm-charts/vertica
	timeout --foreground 20m scripts/wait-for-deploy.sh

docker-build: docker-build-vertica docker-build-python-tools

.PHONY: docker-build-vertica
docker-build-vertica: docker-vertica/Dockerfile
	cd docker-vertica \
	  && make CONTAINER_REPO=${CONTAINER_REPO} TAG=${TAG}

.PHONY: docker-build-python-tools
docker-build-python-tools:
	cd docker-python-tools \
	  && docker build -t ${CONTAINER_REPO}python-tools:${TAG} .

.PHONY: docker-push
docker-push:
	docker push ${CONTAINER_REPO}${VIMAGE_NAME}:${TAG}
	docker push ${CONTAINER_REPO}python-tools:${TAG}

.PHONY: tls_config
tls_config:
	@echo "[req] " > $(REQ_CONF)
	@echo "prompt = no" >> $(REQ_CONF)
	@echo "distinguished_name = CStore4Ever" >> $(REQ_CONF)
	@echo "[CStore4Ever]" >> $(REQ_CONF)
	@echo "C = US" >> $(REQ_CONF)
	@echo "ST = Massacussetts" >> $(REQ_CONF)
	@echo "O = $(ONAME)" >> $(REQ_CONF)
	@echo "CN = INVALIDHOST" >> $(REQ_CONF)
	@echo "emailAddress = [email protected]" >> $(REQ_CONF)

.PHONY: create-tls-keys
create-tls-keys:
	mkdir -p $(CERT_DIR)
	@echo "Generating SSL certificates"
	@# Generate CA files (so we can sign server and client keys)
	@$(MAKE) ONAME="Certificate Authority" tls_config
	@openssl genrsa -out $(ROOT_KEY)
	@openssl req -config $(REQ_CONF) -new -x509 -key $(ROOT_KEY) -out $(ROOT_CRT)
	@# Make server private and public keys
	@$(MAKE) ONAME="Vertica Server" tls_config
	@openssl genrsa -out $(SERVER_KEY)
	@openssl req -config $(REQ_CONF) -new -key $(SERVER_KEY) -out $(SERVER_CSR)
	@openssl x509 -req -in $(SERVER_CSR) \
	  -days 3650 -sha1 -CAcreateserial -CA $(ROOT_CRT) -CAkey $(ROOT_KEY) \
	  -out $(SERVER_CRT)
	@# Make client private and public keys
	@$(MAKE) ONAME="Vertica Client" tls_config
	@openssl genrsa -out $(CLIENT_KEY)
	@openssl req -config $(REQ_CONF) -new -key $(CLIENT_KEY) -out $(CLIENT_CSR)
	@openssl x509 -req -in $(CLIENT_CSR) \
	  -days 3650 -sha1 -CAcreateserial -CA $(ROOT_CRT) -CAkey $(ROOT_KEY) \
	  -out $(CLIENT_CRT)

.PHONY: clean-tls-secrets
clean-tls-secrets:
	kubectl delete secret $(CLIENT_TLS_SECRET) || :
	kubectl delete secret $(SERVER_TLS_SECRET) || :

.PHONY: create-tls-secrets
create-tls-secrets: clean-tls-secrets create-tls-keys
	kubectl create secret tls $(CLIENT_TLS_SECRET) --cert=$(CLIENT_CRT) --key=$(CLIENT_KEY)
	kubectl create secret generic $(SERVER_TLS_SECRET) \
	  --from-file=tls.crt=$(SERVER_CRT) \
	  --from-file=tls.key=$(SERVER_KEY) \
	  --from-file=tls.rootca=$(ROOT_CRT)
