
# Contributing

These guidelines will help you get started with the Trivy-operator project.

## Table of Contents

- [Contribution Workflow](#contribution-workflow)
  - [Issues and Discussions](#issues-and-discussions)
  - [Pull Requests](#pull-requests)
  - [Conventional Commits](#conventional-commits)
- [Set up your Development Environment](#set-up-your-development-environment)
- [Build Binaries](#build-binaries)
- [Testing](#testing)
- [Custom Resource Definitions](#custom-resource-definitions)
- [Test Trivy Operator](#test-trivy-operator)
- [Update Static YAML Manifests](#update-static-yaml-manifests)
- [Operator Lifecycle Manager (OLM)](#operator-lifecycle-manager-olm)

## Contribution Workflow

### Issues and Discussions

- Feel free to open issues for any reason, as long as you make it clear what the issue is about: bug, feature, proposal, or comment.
- For questions and general discussions, please do not open an issue; instead, create a discussion in the "Discussions" tab.
- Please take a moment to check existing issues and discussions first. Your topic might be a duplicate; if it is, please add your comment to the existing one.
- Please give your issue or discussion a meaningful title that will be clear to future users.
- The issue should clearly explain the reason for opening it, your proposal if you have one, and any relevant technical information.
- For technical questions, please explain in detail what you were trying to do, provide the error message if applicable, and include the versions of Trivy-Operator and your environment.

### Pull Requests

- Every pull request should have an associated issue, unless it is a trivial fix.
- Your PR is more likely to be accepted if it focuses on just one change.
- Describe what the PR does. There's no enforced convention, but please try to be concise and descriptive. Treat the PR description like a commit message; titles that start with "fix", "add", "improve", or "remove" are good examples.
- There's no need to add or tag reviewers. If your PR is left unattended for too long, you can add a comment to bring it to attention, and optionally "@" mention one of the maintainers who was involved with the issue.
- If a reviewer commented on your code or asked for changes, please remember to mark the discussion as resolved after you address it and then re-request a review.
- When addressing review comments, try to fix each suggestion in a separate commit.
- Tests are not strictly required at this point, as Trivy-Operator is evolving fast, but including tests is appreciated if you can.

### Conventional Commits

We loosely follow the Conventional Commits convention in this repository. Each individual commit message doesn't have to follow the convention, as long as it is clear and descriptive, since commits will be squashed and merged.
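For illustration, Conventional Commits messages look like the following (these examples are made up, not taken from this repository's history):

```
feat: add scan job affinity configuration
fix(operator): requeue reconcile on transient API errors
docs: clarify KIND setup in the contributing guide
chore(deps): bump controller-runtime
```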

## Set up your Development Environment

1. Install Go.

   The project requires Go 1.19 or later. We also assume that you're familiar with Go's GOPATH workspace convention and have the appropriate environment variables set.

2. Get the source code:

   ```
   git clone [email protected]:aquasecurity/trivy-operator.git
   cd trivy-operator
   ```

3. Get access to a Kubernetes cluster. We assume that you're using a KIND cluster. To create a single-node KIND cluster, run:

   ```
   kind create cluster
   ```

Note: Some of our tests perform integration testing by starting a local control plane using envtest. If you only run tests through the Makefile (`make test`), no additional installation is required. But if you want to run some of these integration tests using `go test` or from your IDE, you'll have to install kubebuilder-tools.
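As a sketch, the kubebuilder-tools binaries can be fetched with controller-runtime's `setup-envtest` helper and exposed via the `KUBEBUILDER_ASSETS` variable that envtest reads (the Kubernetes version and package path below are assumptions; adjust them to match the project):

```
# Install the setup-envtest helper from controller-runtime
go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest

# Download envtest binaries (etcd, kube-apiserver, kubectl) for a given
# Kubernetes version and point KUBEBUILDER_ASSETS at them
export KUBEBUILDER_ASSETS="$(setup-envtest use 1.26.x -p path)"

# Individual integration tests can now be run directly or from an IDE
go test ./pkg/...
```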

## Build Binaries

| Binary | Image | Description |
| --- | --- | --- |
| `trivy-operator` | `ghcr.io/aquasecurity/trivy-operator:dev` | Trivy Operator |

To build the trivy-operator binary, run:

```
make
```

This uses the `go build` command and builds binaries in the `./bin` directory.

To build the trivy-operator binary into a Docker image, run:

```
make docker-build
```

To load the Docker image into your KIND cluster, run:

```
kind load docker-image aquasecurity/trivy-operator:dev
```
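To double-check that the image actually landed on a node, you can list the images inside the KIND node container (the node name `kind-control-plane` is the default for a single-node cluster created with `kind create cluster`; adjust it if yours differs):

```
docker exec kind-control-plane crictl images | grep trivy-operator
```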

## Testing

We generally require tests to be added for all but the most trivial of changes. However, unit tests alone don't provide guarantees about the behaviour of Trivy-operator. To verify that each Go module correctly interacts with its collaborators, coarser-grained integration tests might be required.

### Run Tests

To run all tests with code coverage enabled, run:

```
make test
```

To open the test coverage report in your web browser, run:

```
go tool cover -html=coverage.txt
```

### Run Integration Tests

The integration tests assume that you have a working Kubernetes cluster (e.g., a KIND cluster) and that the KUBECONFIG environment variable points to that cluster's configuration file. For example:

```
export KUBECONFIG=~/.kube/config
```

To run the integration tests for the Trivy-operator, first complete the prerequisite steps above, and then run:

```
OPERATOR_NAMESPACE=trivy-system \
  OPERATOR_TARGET_NAMESPACES=default \
  OPERATOR_LOG_DEV_MODE=true \
  make itests-trivy-operator
```

To open the test coverage report in your web browser, run:

```
go tool cover -html=itest/trivy-operator/coverage.txt
```

### Code Coverage

In the CI workflow, after running all tests, we upload the code coverage reports to Codecov. Codecov merges the reports automatically while maintaining the original upload context, as explained here.

## Custom Resource Definitions

### Generating code and manifests

This project uses controller-gen to generate code and Kubernetes manifests from source code and code markers. We currently generate:

- Custom Resource Definitions (CRDs) for the CRDs defined in trivy-operator
- The ClusterRole that must be bound to the trivy-operator ServiceAccount to allow it to function
- The mandatory DeepCopy functions for the Go structs representing CRDs

This means that you should not modify any of these generated files directly; instead, change the code and code markers. Our Makefile contains a target to ensure that all generated files are up-to-date, so after making code changes that affect the CRDs or ClusterRole, run `make generate-all` to regenerate everything.

Our CI verifies that all generated files are up-to-date by running `make verify-generated`.

Any change to the CRD structs, including nested structs, will probably modify the CRDs. This is also true for Go docs, as field and type documentation becomes descriptions in the CRDs.

For the code markers added to the code, run `controller-gen -h` for a detailed reference (add more h's to the command to get more details), or see the markers documentation for an overview.

We try to place the RBAC markers close to the code that drives the requirement for the permissions. This can lead to the same, or similar, RBAC markers appearing in multiple places in the code. This is how we want it, since it allows us to trace RBAC changes back to code changes. Any permission granted multiple times by markers is deduplicated by controller-gen.
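As an illustration of such a marker (the API group, resource, and function below are hypothetical, not taken from the trivy-operator code base), an RBAC marker sits in a comment directly above the code that needs the permission:

```go
// The marker below asks controller-gen to grant get/list/watch on ReplicaSets
// in the generated ClusterRole. Placing it next to the function that needs the
// permission keeps the requirement traceable to the code.
// +kubebuilder:rbac:groups=apps,resources=replicasets,verbs=get;list;watch
func reconcileReplicaSets() {
	// hypothetical reconcile logic
}
```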

## Test Trivy Operator

You can deploy the operator in the trivy-system namespace and configure it to watch the default namespace. In OLM terms, such an install mode is called SingleNamespace. The SingleNamespace mode is good for getting started with a basic development workflow. For other install modes, see Operator Multitenancy with OperatorGroups.

### In cluster

1. Build the operator binary into a Docker image and load it from your host into the KIND cluster nodes:

   ```
   make docker-build-trivy-operator && kind load docker-image aquasecurity/trivy-operator:dev
   ```

2. Create the trivy-operator Deployment in the trivy-system namespace to run the operator's container:

   ```
   kubectl create -k deploy/static
   ```

You can uninstall the operator with:

```
kubectl delete -k deploy/static
```

### Out of cluster

1. Deploy the operator in the cluster:

   ```
   kubectl apply -f deploy/static/trivy-operator.yaml
   ```

2. Scale the operator down to zero replicas:

   ```
   kubectl scale deployment trivy-operator \
     -n trivy-system \
     --replicas 0
   ```

3. Delete any pending scan jobs:

   ```
   kubectl delete jobs -n trivy-system --all
   ```

4. Run the main method of the operator program:

   ```
   OPERATOR_NAMESPACE=trivy-system \
     OPERATOR_TARGET_NAMESPACES=default \
     OPERATOR_LOG_DEV_MODE=true \
     OPERATOR_VULNERABILITY_SCANNER_ENABLED=true \
     OPERATOR_VULNERABILITY_SCANNER_SCAN_ONLY_CURRENT_REVISIONS=false \
     OPERATOR_CONFIG_AUDIT_SCANNER_ENABLED=true \
     OPERATOR_RBAC_ASSESSMENT_SCANNER_ENABLED=true \
     OPERATOR_CONFIG_AUDIT_SCANNER_SCAN_ONLY_CURRENT_REVISIONS=false \
     OPERATOR_VULNERABILITY_SCANNER_REPORT_TTL="" \
     OPERATOR_BATCH_DELETE_LIMIT=3 \
     OPERATOR_BATCH_DELETE_DELAY="30s" \
     go run cmd/trivy-operator/main.go
   ```

You can uninstall the operator with:

```
kubectl delete -f deploy/static/trivy-operator.yaml
```

## Update Static YAML Manifests

We consider the Helm chart to be the source of truth for deploying trivy-operator. Since some users prefer not to use Helm, we also provide static resources to install the operator.

To avoid maintaining resources in multiple places, we have created a script to (re)generate the static resources from the Helm chart.

So if you need to modify the operator resources, please do so by modifying the Helm chart, and then run `make manifests` to ensure the static resources are up-to-date.

## Operator Lifecycle Manager (OLM)

### Install OLM

To install Operator Lifecycle Manager (OLM), run:

```
kubectl apply -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.20.0/crds.yaml
kubectl apply -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.20.0/olm.yaml
```

or:

```
curl -L https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.20.0/install.sh -o install.sh
chmod +x install.sh
./install.sh v0.20.0
```

### Build the Catalog Image

The Starboard Operator metadata is formatted in the packagemanifest layout, so you need to place it in the directory structure of the community-operators repository:

```
git clone [email protected]:k8s-operatorhub/community-operators.git
cd community-operators
```

Build the catalog image for OLM, containing just the Starboard Operator, with a Dockerfile like this:

```
cat << EOF > starboard.Dockerfile
FROM quay.io/operator-framework/upstream-registry-builder as builder

COPY operators/starboard-operator manifests
RUN /bin/initializer -o ./bundles.db

FROM scratch
COPY --from=builder /etc/nsswitch.conf /etc/nsswitch.conf
COPY --from=builder /bundles.db /bundles.db
COPY --from=builder /bin/registry-server /registry-server
COPY --from=builder /bin/grpc_health_probe /bin/grpc_health_probe
EXPOSE 50051
ENTRYPOINT ["/registry-server"]
CMD ["--database", "bundles.db"]
EOF
```

Place starboard.Dockerfile in the top-level directory of your cloned copy of the community-operators repository, then build it and push it to a registry from which your Kubernetes cluster can pull it:

```
docker image build -f starboard.Dockerfile -t docker.io/<your account>/starboard-catalog:dev .
docker image push docker.io/<your account>/starboard-catalog:dev
```

### Register the Catalog Image

Create a CatalogSource instance in the olm namespace that references the Operator catalog image containing the Starboard Operator:

```
cat << EOF | kubectl apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: starboard-catalog
  namespace: olm
spec:
  publisher: Starboard Maintainers
  displayName: Starboard Catalog
  sourceType: grpc
  image: docker.io/<your account>/starboard-catalog:dev
EOF
```

You can delete the default catalog that OLM ships with to avoid duplicate entries:

```
kubectl delete catalogsource operatorhubio-catalog -n olm
```

Inspect the list of loaded package manifests with the following command and look for the Starboard Operator:

```
$ kubectl get packagemanifests
NAME                 CATALOG             AGE
starboard-operator   Starboard Catalog   97s
```

If the Starboard Operator appears in this list, the catalog was successfully parsed and the operator is now available to install. Follow the installation instructions for OLM, making sure that the Subscription's spec.source property refers to the starboard-catalog source instead of operatorhubio-catalog.
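For illustration, a Subscription pointing at the custom catalog might look like the following sketch (the target namespace and channel name below are assumptions; check the OLM installation instructions for the exact values):

```
cat << EOF | kubectl apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: starboard-operator
  namespace: operators
spec:
  channel: alpha                 # assumed channel name
  name: starboard-operator
  source: starboard-catalog      # note: not operatorhubio-catalog
  sourceNamespace: olm
EOF
```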

You can find more details about testing Operators with Operator Framework here.