From e8fcc056d66f39d992c09f60c26236db8caa31c7 Mon Sep 17 00:00:00 2001 From: Rolfe Dlugy-Hegwer Date: Wed, 25 May 2022 10:15:09 -0400 Subject: [PATCH] Generate adoc copies in /content/en/docs to #78 --- Gemfile.lock | 3 - content/en/docs/{_index.md => _index.adoc} | 76 +- content/en/docs/api/{build.md => build.adoc} | 301 ++++---- .../docs/api/{buildrun.md => buildrun.adoc} | 314 +++++--- content/en/docs/api/buildstrategies.adoc | 698 ++++++++++++++++++ content/en/docs/api/buildstrategies.md | 633 ---------------- ...{authentication.md => authentication.adoc} | 94 +-- content/en/docs/configuration.adoc | 63 ++ content/en/docs/configuration.md | 28 - content/en/docs/metrics.adoc | 159 ++++ content/en/docs/metrics.md | 82 -- .../en/docs/{profiling.md => profiling.adoc} | 24 +- 12 files changed, 1407 insertions(+), 1068 deletions(-) rename content/en/docs/{_index.md => _index.adoc} (65%) mode change 100755 => 100644 rename content/en/docs/api/{build.md => build.adoc} (63%) rename content/en/docs/api/{buildrun.md => buildrun.adoc} (53%) create mode 100644 content/en/docs/api/buildstrategies.adoc delete mode 100644 content/en/docs/api/buildstrategies.md rename content/en/docs/{authentication.md => authentication.adoc} (75%) create mode 100644 content/en/docs/configuration.adoc delete mode 100644 content/en/docs/configuration.md create mode 100644 content/en/docs/metrics.adoc delete mode 100644 content/en/docs/metrics.md rename content/en/docs/{profiling.md => profiling.adoc} (89%) diff --git a/Gemfile.lock b/Gemfile.lock index a335cc9f..b6e86f9f 100644 --- a/Gemfile.lock +++ b/Gemfile.lock @@ -8,6 +8,3 @@ PLATFORMS DEPENDENCIES asciidoctor (~> 2.0, >= 2.0.17) - -BUNDLED WITH - 2.3.7 diff --git a/content/en/docs/_index.md b/content/en/docs/_index.adoc old mode 100755 new mode 100644 similarity index 65% rename from content/en/docs/_index.md rename to content/en/docs/_index.adoc index ace4943d..a6d91dc4 --- a/content/en/docs/_index.md +++ b/content/en/docs/_index.adoc @@ 
-9,54 +9,55 @@ menu: weight: 20 --- - - Shipwright is an extensible framework for building container images on Kubernetes. Shipwright supports popular tools such as Kaniko, Cloud Native Buildpacks, Buildah, and more! -Shipwright is based around four elements for each build: +In Shipwright, each build is based on the following elements: -1. Source code - the "what" you are trying to build -1. Output image - "where" you are trying to deliver your application -1. Build strategy - "how" your application is assembled -1. Invocation - "when" you want to build your application +* Source code - the "what" you are trying to build +* Output image - "where" you are trying to deliver your application +* Build strategy - "how" your application is assembled +* Invocation - "when" you want to build your application -## Comparison with local image builds +== Comparison with local image builds Developers who use Docker are familiar with this process: -1. Clone source from a git-based repository ("what") -2. Build the container image ("when" and "how") - - ```bash +. Clone source from a git-based repository ("what") +. Build the container image ("when" and "how") ++ +[source,terminal] +---- docker build -t registry.mycompany.com/myorg/myapp:latest . - ``` - -3. Push the container image to your registry ("where") +---- - ```bash +. Push the container image to your registry ("where") ++ +[source,terminal] +---- docker push registry.mycompany.com/myorg/myapp:latest - ``` +---- -## Shipwright Build APIs +== Shipwright Build APIs Shipwright's Build API consists of four core -[CustomResourceDefinitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions) +https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions[CustomResourceDefinitions] (CRDs): -1. [`Build`](/docs/api/build/) - defines what to build, and where the application should be delivered. -1.
[`BuildStrategy` and `ClusterBuildStrategy`](/docs/api/buildstrategies/) - defines how to build an application for an image - building tool. -1. [`BuildRun`](/docs/api/buildrun/) - invokes the build. - You create a `BuildRun` to tell Shipwright to start building your application. +* link:/docs/api/build/[`Build`] - defines what to build, and where the application should be delivered. +* link:/docs/api/buildstrategies/[`BuildStrategy` and `ClusterBuildStrategy`] - defines how to build an application for an image +building tool. +* link:/docs/api/buildrun/[`BuildRun`] - invokes the build. +You create a `BuildRun` to tell Shipwright to start building your application. -### Build +=== Build The `Build` object provides a playbook on how to assemble your specific application. The simplest build consists of a git source, a build strategy, and an output image: -```yaml +[source,yaml] +---- apiVersion: build.dev/v1alpha1 kind: Build metadata: @@ -71,11 +72,11 @@ spec: kind: ClusterBuildStrategy output: image: registry.mycompany.com/my-org/taxi-app:latest -``` +---- Builds can be extended to push to private registries, use a different Dockerfile, and more. -### BuildStrategy and ClusterBuildStrategy +=== BuildStrategy and ClusterBuildStrategy `BuildStrategy` and `ClusterBuildStrategy` are related APIs to define how a given tool should be used to assemble an application. They are distinguished by their scope - `BuildStrategy` objects @@ -85,7 +86,8 @@ The spec of a `BuildStrategy` or `ClusterBuildStrategy` consists of a `buildStep specifications. Below is an example spec for Kaniko, which can build an image from a Dockerfile within a container: -```yaml +[source,yaml] +---- # this is a fragment of a manifest spec: buildSteps: @@ -121,17 +123,17 @@ spec: requests: cpu: 250m memory: 65Mi -``` +---- -### BuildRun +=== BuildRun Each `BuildRun` object invokes a build on your cluster. 
You can think of these as Kubernetes `Jobs` or Tekton `TaskRuns` - they represent a workload on your cluster, ultimately resulting in a -running `Pod`. See [`BuildRun`](/docs/api/buildrun/) for more details. +running `Pod`. See link:/docs/api/buildrun/[`BuildRun`] for more details. -## Further reading +== Further reading -- [Configuration](/docs/configuration/) -- Build controller observability - - [Metrics](/docs/metrics/) - - [Profiling](/docs/profiling/) +* link:/docs/configuration/[Configuration] +* Build controller observability + ** link:/docs/metrics/[Metrics] + ** link:/docs/profiling/[Profiling] diff --git a/content/en/docs/api/build.md b/content/en/docs/api/build.adoc similarity index 63% rename from content/en/docs/api/build.md rename to content/en/docs/api/build.adoc index 1b0863e2..c9a6a833 100644 --- a/content/en/docs/api/build.md +++ b/content/en/docs/api/build.adoc @@ -3,102 +3,129 @@ title: Build weight: 10 --- - - [Overview](#overview) - - [Build Controller](#build-controller) - - [Build Validations](#build-validations) - - [Configuring a Build](#configuring-a-build) - - [Defining the Source](#defining-the-source) - - [Defining the Strategy](#defining-the-strategy) - - [Defining ParamValues](#defining-paramvalues) - - [Defining the Builder or Dockerfile](#defining-the-builder-or-dockerfile) - - [Defining the Output](#defining-the-output) - - [BuildRun deletion](#BuildRun-deletion) - -## Overview +* <> +* <> +* <> +* <> + ** <> + ** <> + ** <> + ** <> + ** <> +* <> + +== Overview A `Build` resource allows the user to define: -- source -- sources -- strategy -- params -- builder -- dockerfile -- output -- env +* source +* sources +* strategy +* params +* builder +* dockerfile +* output +* env A `Build` is available within a namespace.
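For orientation, the fields listed above come together in a single manifest. Below is a minimal sketch of a `Build` that wires a git source, a cluster-scoped strategy, and an output image; the repository, strategy, and registry names here are illustrative placeholders drawn from the examples later in this document:

[source,yaml]
----
apiVersion: shipwright.io/v1alpha1
kind: Build
metadata:
  name: example-build
spec:
  source:
    url: https://github.com/shipwright-io/sample-go  # "what" to build
    contextDir: docker-build
  strategy:
    name: kaniko                 # "how" the image is assembled
    kind: ClusterBuildStrategy
  dockerfile: Dockerfile
  output:
    image: registry.mycompany.com/my-org/sample-go:latest  # "where" to push
----

Each of these fields is described in detail in the sections that follow.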
-## Build Controller +== Build Controller The controller watches for: -- Updates on the `Build` resource (_CRD instance_) +* Updates on the `Build` resource (_CRD instance_) When the controller reconciles it: -- Validates if the referenced `StrategyRef` exists. -- Validates if the specified `params` exists on the referenced strategy parameters. It also validates if the `params` names collide with the Shipwright reserved names. -- Validates if the container `registry` output secret exists. -- Validates if the referenced `spec.source.url` endpoint exists. +* Validates if the referenced `StrategyRef` exists. +* Validates if the specified `params` exist on the referenced strategy parameters. It also validates if the `params` names collide with the Shipwright reserved names. +* Validates if the container `registry` output secret exists. +* Validates if the referenced `spec.source.url` endpoint exists. -## Build Validations +== Build Validations In order to prevent users from triggering `BuildRuns` (_execution of a Build_) that will eventually fail because of wrong or missing dependencies or configuration settings, the Build controller will validate them in advance. If all validations are successful, users can expect a `Succeeded` `Status.Reason`; however, if any of the validations fail, users can rely on the `Status.Reason` and `Status.Message` fields, in order to understand the root cause. -| Status.Reason | Description | -| --- | --- | -| BuildStrategyNotFound | The referenced namespace-scope strategy doesn't exist. | -| ClusterBuildStrategyNotFound | The referenced cluster-scope strategy doesn't exist. | -| SetOwnerReferenceFailed | Setting ownerreferences between a Build and a BuildRun failed. This is triggered when making use of the `build.shipwright.io/build-run-deletion` annotation in a Build. | -| SpecSourceSecretRefNotFound | The secret used to authenticate to git doesn't exist.
| -| SpecOutputSecretRefNotFound | The secret used to authenticate to the container registry doesn't exist. | -| SpecBuilderSecretRefNotFound | The secret used to authenticate to the container registry doesn't exist.| -| MultipleSecretRefNotFound | More than one secret is missing. At the moment, only three paths on a Build can specify a secret. | -| RestrictedParametersInUse | One or many defined `params` are colliding with Shipwright reserved parameters. See [Defining Params](#defining-params) for more information. | -| UndefinedParameter | One or many defined `params` are not defined in the referenced strategy. Please ensure that the strategy defines them under its `spec.parameters` list. | -| RemoteRepositoryUnreachable | The defined `spec.source.url` was not found. This validation only take place for http/https protocols. | -| BuildNameInvalid | The defined `Build` name (`metadata.name`) is invalid. The `Build` name should be a [valid label value](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set). | -| SpecEnvNameCanNotBeBlank | Indicates that the name for a user provided environment variable is blank. | -| SpecEnvValueCanNotBeBlank | Indicates that the value for a user provided environment variable is blank. | - -## Configuring a Build +|=== +| Status.Reason | Description + +| BuildStrategyNotFound +| The referenced namespace-scope strategy doesn't exist. + +| ClusterBuildStrategyNotFound +| The referenced cluster-scope strategy doesn't exist. + +| SetOwnerReferenceFailed +| Setting ownerreferences between a Build and a BuildRun failed. This is triggered when making use of the `build.shipwright.io/build-run-deletion` annotation in a Build. + +| SpecSourceSecretRefNotFound +| The secret used to authenticate to git doesn't exist. + +| SpecOutputSecretRefNotFound +| The secret used to authenticate to the container registry doesn't exist. 
+ +| SpecBuilderSecretRefNotFound +| The secret used to authenticate to the container registry doesn't exist. + +| MultipleSecretRefNotFound +| More than one secret is missing. At the moment, only three paths on a Build can specify a secret. + +| RestrictedParametersInUse +| One or more defined `params` collide with Shipwright reserved parameters. See <> for more information. + +| UndefinedParameter +| One or more defined `params` are not defined in the referenced strategy. Please ensure that the strategy defines them under its `spec.parameters` list. + +| RemoteRepositoryUnreachable +| The defined `spec.source.url` was not found. This validation only takes place for http/https protocols. + +| BuildNameInvalid +| The defined `Build` name (`metadata.name`) is invalid. The `Build` name should be a https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set[valid label value]. + +| SpecEnvNameCanNotBeBlank +| Indicates that the name for a user-provided environment variable is blank. + +| SpecEnvValueCanNotBeBlank +| Indicates that the value for a user-provided environment variable is blank. +|=== + +== Configuring a Build The `Build` definition supports the following fields: -- Required: - - [`apiVersion`](https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields) - Specifies the API version, for example `shipwright.io/v1alpha1`. - - [`kind`](https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields) - Specifies the Kind type, for example `Build`. - - [`metadata`](https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields) - Metadata that identify the CRD instance, for example the name of the `Build`. - - `spec.source.URL` - Refers to the Git repository containing the source code.
- - `spec.strategy` - Refers to the `BuildStrategy` to be used, see the [examples](../samples/buildstrategy) - - `spec.builder.image` - Refers to the image containing the build tools to build the source code. (_Use this path for Dockerless strategies, this is just required for `source-to-image` buildStrategy_) - - `spec.output`- Refers to the location where the generated image would be pushed. - - `spec.output.credentials.name`- Reference an existing secret to get access to the container registry. - -- Optional: - - `spec.paramValues` - Refers to a list of `key/value` that could be used to loosely type `parameters` in the `BuildStrategy`. - - `spec.dockerfile` - Path to a Dockerfile to be used for building an image. (_Use this path for strategies that require a Dockerfile_) - - `spec.sources` - [Sources](#Sources) describes a slice of artifacts that will be imported into project context, before the actual build process starts. - - `spec.timeout` - Defines a custom timeout. The value needs to be parsable by [ParseDuration](https://golang.org/pkg/time/#ParseDuration), for example `5m`. The default is ten minutes. The value can be overwritten in the `BuildRun`. - - `metadata.annotations[build.shipwright.io/build-run-deletion]` - Defines if delete all related BuildRuns when deleting the Build. The default is `false`. - - `spec.output.annotations` - Refers to a list of `key/value` that could be used to [annotate](https://github.com/opencontainers/image-spec/blob/main/annotations.md) the output image. - - `spec.output.labels` - Refers to a list of `key/value` that could be used to label the output image. - - `spec.env` - Specifies additional environment variables that should be passed to the build container. The available variables depend on the tool that is being used by the chosen build strategy. 
- -### Defining the Source +* Required: + ** https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields[`apiVersion`] - Specifies the API version, for example `shipwright.io/v1alpha1`. + ** https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields[`kind`] - Specifies the Kind type, for example `Build`. + ** https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields[`metadata`] - Metadata that identifies the CRD instance, for example the name of the `Build`. + ** `spec.source.URL` - Refers to the Git repository containing the source code. + ** `spec.strategy` - Refers to the `BuildStrategy` to be used, see the link:../samples/buildstrategy[examples]. + ** `spec.builder.image` - Refers to the image containing the build tools to build the source code. (_Use this path for Dockerless strategies; this is only required for the `source-to-image` buildStrategy_) + ** `spec.output` - Refers to the location where the generated image would be pushed. + ** `spec.output.credentials.name` - References an existing secret to get access to the container registry. +* Optional: + ** `spec.paramValues` - Refers to a list of `key/value` that could be used to loosely type `parameters` in the `BuildStrategy`. + ** `spec.dockerfile` - Path to a Dockerfile to be used for building an image. (_Use this path for strategies that require a Dockerfile_) + ** `spec.sources` - <> describes a slice of artifacts that will be imported into the project context, before the actual build process starts. + ** `spec.timeout` - Defines a custom timeout. The value needs to be parsable by https://golang.org/pkg/time/#ParseDuration[ParseDuration], for example `5m`. The default is ten minutes. The value can be overwritten in the `BuildRun`. + ** `metadata.annotations[build.shipwright.io/build-run-deletion]` - Defines whether to delete all related BuildRuns when deleting the Build. The default is `false`.
+ ** `spec.output.annotations` - Refers to a list of `key/value` that could be used to https://github.com/opencontainers/image-spec/blob/main/annotations.md[annotate] the output image. + ** `spec.output.labels` - Refers to a list of `key/value` that could be used to label the output image. + ** `spec.env` - Specifies additional environment variables that should be passed to the build container. The available variables depend on the tool that is being used by the chosen build strategy. + +=== Defining the Source A `Build` resource can specify a Git source, together with other parameters like: -- `source.credentials.name` - For private repositories, the name is a reference to an existing secret on the same namespace containing the `ssh` data. -- `source.revision` - An specific revision to select from the source repository, this can be a commit, tag or branch name. If not defined, it will fallback to the git repository default branch. -- `source.contextDir` - For repositories where the source code is not located at the root folder, you can specify this path here. +* `source.credentials.name` - For private repositories, the name is a reference to an existing secret in the same namespace containing the `ssh` data. +* `source.revision` - A specific revision to select from the source repository; this can be a commit, tag, or branch name. If not defined, it will fall back to the git repository default branch. +* `source.contextDir` - For repositories where the source code is not located at the root folder, you can specify this path here. By default, the Build controller won't validate that the Git repository exists. If the validation is desired, users can define the `build.shipwright.io/verify.repository` annotation with `true` explicitly. For example: -Example of a `Build` with the **build.shipwright.io/verify.repository** annotation, in order to enable the `spec.source.url` validation.
+Example of a `Build` with the *build.shipwright.io/verify.repository* annotation, in order to enable the `spec.source.url` validation. -```yaml +[source,yaml] +---- apiVersion: shipwright.io/v1alpha1 kind: Build metadata: @@ -109,13 +136,14 @@ spec: source: url: https://github.com/shipwright-io/sample-go contextDir: docker-build -``` +---- -_Note_: The Build controller only validates two scenarios. The first one where the endpoint uses an `http/https` protocol, the second one when a `ssh` protocol (_e.g. `git@`_) is defined and none referenced secret was provided(_e.g. source.credentials.name_). +NOTE: The Build controller only validates two scenarios: the first one where the endpoint uses an `http/https` protocol, and the second one when an `ssh` protocol (_e.g. `git@`_) is defined and no referenced secret was provided (_e.g. `source.credentials.name`_). -Example of a `Build` with a source with **credentials** defined by the user. +Example of a `Build` with a source with *credentials* defined by the user. -```yaml +[source,yaml] +---- apiVersion: shipwright.io/v1alpha1 kind: Build metadata: @@ -125,11 +153,12 @@ spec: url: https://github.com/sclorg/nodejs-ex credentials: name: source-repository-credentials -``` +---- Example of a `Build` with a source that specifies a specific subfolder on the repository.
-```yaml +[source,yaml] +---- apiVersion: shipwright.io/v1alpha1 kind: Build metadata: @@ -138,11 +167,12 @@ spec: source: url: https://github.com/SaschaSchwarze0/npm-simple contextDir: renamed -``` +---- Example of a `Build` that specifies the tag `v.0.1.0` for the git repository: -```yaml +[source,yaml] +---- apiVersion: shipwright.io/v1alpha1 kind: Build metadata: @@ -152,11 +182,12 @@ spec: url: https://github.com/shipwright-io/sample-go contextDir: docker-build revision: v0.1.0 -``` +---- Example of a `Build` that specifies environment variables: -```yaml +[source,yaml] +---- apiVersion: shipwright.io/v1alpha1 kind: Build metadata: @@ -170,12 +201,13 @@ spec: value: "example-value-1" - name: EXAMPLE_VAR_2 value: "example-value-2" -``` +---- Example of a `Build` that uses the Kubernetes Downward API to expose a `Pod` field as an environment variable: -```yaml +[source,yaml] +---- apiVersion: shipwright.io/v1alpha1 kind: Build metadata: @@ -189,12 +221,13 @@ spec: valueFrom: fieldRef: fieldPath: metadata.name -``` +---- Example of a `Build` that uses the Kubernetes Downward API to expose a `Container` field as an environment variable: -```yaml +[source,yaml] +---- apiVersion: shipwright.io/v1alpha1 kind: Build metadata: @@ -209,22 +242,23 @@ spec: resourceFieldRef: containerName: my-container resource: limits.memory -``` +---- -### Defining the Strategy +=== Defining the Strategy A `Build` resource can specify the `BuildStrategy` to use, these are: -- [Buildah](/docs/api/buildstrategies/#buildah) -- [Buildpacks-v3](/docs/api/buildstrategies/#buildpacks-v3) -- [BuildKit](/docs/api/buildstrategies/#buildkit) -- [Kaniko](/docs/api/buildstrategies/#kaniko) -- [ko](/docs/api/buildstrategies/#ko) -- [Source-to-Image](/docs/api/buildstrategies/#source-to-image) +* link:/docs/api/buildstrategies/#buildah[Buildah] +* link:/docs/api/buildstrategies/#buildpacks-v3[Buildpacks-v3] +* link:/docs/api/buildstrategies/#buildkit[BuildKit] +* 
link:/docs/api/buildstrategies/#kaniko[Kaniko] +* link:/docs/api/buildstrategies/#ko[ko] +* link:/docs/api/buildstrategies/#source-to-image[Source-to-Image] Defining the strategy is straightforward: you need to define the `name` and the `kind`. For example: -```yaml +[source,yaml] +---- apiVersion: shipwright.io/v1alpha1 kind: Build metadata: @@ -233,24 +267,25 @@ spec: strategy: name: buildpacks-v3 kind: ClusterBuildStrategy -``` +---- -### Defining ParamValues +=== Defining ParamValues A `Build` resource can specify _params_; these allow users to modify the behaviour of the referenced `BuildStrategy` steps. When using _params_, users should avoid: -- Defining a `spec.paramValues` name that doesn't match one of the `spec.parameters` defined in the `BuildStrategy`. -- Defining a `spec.paramValues` name that collides with the Shipwright reserved parameters. These are _BUILDER_IMAGE_,_DOCKERFILE_,_CONTEXT_DIR_ and any name starting with _shp-_. +* Defining a `spec.paramValues` name that doesn't match one of the `spec.parameters` defined in the `BuildStrategy`. +* Defining a `spec.paramValues` name that collides with the Shipwright reserved parameters. These are _BUILDER_IMAGE_, _DOCKERFILE_, _CONTEXT_DIR_, and any name starting with _shp-_. -In general, _params_ are tighly bound to Strategy _parameters_, please make sure you understand the contents of your strategy of choice, before defining _params_ in the _Build_. `BuildRun` resources allow users to override `Build` _params_, see the related [docs](./buildrun.md#defining-params) for more information. +In general, _params_ are tightly bound to Strategy _parameters_; please make sure you understand the contents of your strategy of choice before defining _params_ in the _Build_. `BuildRun` resources allow users to override `Build` _params_; see the related link:./buildrun.md#defining-params[docs] for more information.
-#### Example +==== Example The following `BuildStrategy` contains a single step ( _a-strategy-step_ ) with a command and arguments. The strategy defines a parameter ( _sleep-time_ ) with a reasonable default that is used in the step arguments; see _$(params.sleep-time)_. -```yaml +[source,yaml] +---- --- apiVersion: shipwright.io/v1alpha1 kind: BuildStrategy @@ -268,11 +303,12 @@ spec: - sleep args: - $(params.sleep-time) -``` +---- If users would like the above strategy to change its behaviour, e.g. _allow the step to trigger a sleep cmd longer than 1 second_, then they can modify the default behaviour via their `Build` `spec.paramValues` definition. For example: -```yaml +[source,yaml] +---- --- apiVersion: shipwright.io/v1alpha1 kind: Build @@ -288,15 +324,16 @@ spec: strategy: name: sleepy-strategy kind: BuildStrategy -``` +---- The above `Build` definition uses the _sleep-time_ param, a well-defined _parameter_ under its referenced `BuildStrategy`. By doing this, the user signals to the referenced sleepy-strategy that a different value should be used for its _sleep-time_ parameter. -### Defining the Builder or Dockerfile +=== Defining the Builder or Dockerfile A `Build` resource can specify an image containing the tools to build the final image. Users can do this via the `spec.builder` or the `spec.dockerfile`. For example, the user chooses the `Dockerfile` file under the source repository. -```yaml +[source,yaml] +---- apiVersion: shipwright.io/v1alpha1 kind: Build metadata: @@ -309,11 +346,12 @@ spec: name: buildah kind: ClusterBuildStrategy dockerfile: Dockerfile -``` +---- Another example is when the user chooses to use a `builder` image ( this is required for the `source-to-image` buildStrategy, because different code languages have different builders.
): -```yaml +[source,yaml] +---- apiVersion: shipwright.io/v1alpha1 kind: Build metadata: @@ -327,17 +365,18 @@ spec: kind: ClusterBuildStrategy builder: image: docker.io/centos/nodejs-10-centos7 -``` +---- -### Defining the Output +=== Defining the Output A `Build` resource can specify the output where the image should be pushed. For external private registries it is recommended to specify a secret with the related data to access it. There is an option available to specify the annotation and labels for the output image (annotations and labels mentioned here are specific to the container image and do not have any relation with the `Build` annotations). -**NOTE**: When you specify annotations or labels, the output image will get pushed twice. The first push comes from the build strategy. A follow-on update will then change the image configuration to add the annotations and labels. If you have automation in place based on push events in your container registry, be aware of this behavior. +*NOTE*: When you specify annotations or labels, the output image will get pushed twice. The first push comes from the build strategy. A follow-on update will then change the image configuration to add the annotations and labels. If you have automation in place based on push events in your container registry, be aware of this behavior. 
For example, the user specifies a public registry: -```yaml +[source,yaml] +---- apiVersion: shipwright.io/v1alpha1 kind: Build metadata: @@ -353,11 +392,12 @@ spec: image: docker.io/centos/nodejs-10-centos7 output: image: image-registry.openshift-image-registry.svc:5000/build-examples/nodejs-ex -``` +---- Another example is when the user specifies a private registry: -```yaml +[source,yaml] +---- apiVersion: shipwright.io/v1alpha1 kind: Build metadata: @@ -375,11 +415,12 @@ spec: image: us.icr.io/source-to-image-build/nodejs-ex credentials: name: icr-knbuild -``` +---- Example where the user specifies image annotations and labels: -```yaml +[source,yaml] +---- apiVersion: shipwright.io/v1alpha1 kind: Build metadata: @@ -403,25 +444,28 @@ spec: labels: "maintainer": "team@my-company.com" "description": "This is my cool image" -``` +---- Annotations added to the output image can be verified by running the command: -```sh +[source,terminal] +---- docker manifest inspect us.icr.io/source-to-image-build/nodejs-ex | jq ".annotations" -``` +---- Labels added to the output image can be verified by running the command (the image should be available on the host machine): -```sh +[source,terminal] +---- docker inspect us.icr.io/source-to-image-build/nodejs-ex | jq ".[].Config.Labels" -``` +---- -### Sources +=== Sources Represents remote artifacts, as in external entities that will be added to the build context before the actual build starts. Therefore, you may employ `.spec.sources` to download artifacts from external repositories. -```yaml +[source,yaml] +---- apiVersion: shipwright.io/v1alpha1 kind: Build metadata: @@ -430,12 +474,12 @@ spec: sources: - name: project-logo url: https://gist.github.com/project/image.png -``` +---- Under `.spec.sources` we have the following attributes: -- `.name`: represents the name of the resource, required attribute.
+* `.url`: uniform resource locator (URL), required attribute. When downloading artifacts, the process is executed in the same directory where the application source code is located, by default `/workspace/source`. @@ -443,15 +487,16 @@ Additionally, we have plan to keep evolving `.spec.sources` by adding more types At this initial stage, authentication is not supported; therefore, you can only download from sources without this mechanism in place. -## BuildRun deletion +== BuildRun deletion A `Build` can automatically delete a related `BuildRun`. To enable this feature, set the `build.shipwright.io/build-run-deletion` annotation to `true` in the `Build` instance. By default, the annotation is never present in a `Build` definition. See an example of how to define this annotation: -```yaml +[source,yaml] +---- apiVersion: shipwright.io/v1alpha1 kind: Build metadata: name: kaniko-golang-build annotations: build.shipwright.io/build-run-deletion: "true" -``` +---- diff --git a/content/en/docs/api/buildrun.md b/content/en/docs/api/buildrun.adoc similarity index 53% rename from content/en/docs/api/buildrun.md rename to content/en/docs/api/buildrun.adoc index adb90251..025c2d6a 100644 --- a/content/en/docs/api/buildrun.md +++ b/content/en/docs/api/buildrun.adoc @@ -4,69 +4,69 @@ draft: false weight: 40 --- -- [Overview](#overview) -- [BuildRun Controller](#buildrun-controller) -- [Configuring a BuildRun](#configuring-a-buildrun) - - [Defining the BuildRef](#defining-the-buildref) - - [Defining ParamValues](#defining-paramvalues) - - [Defining the ServiceAccount](#defining-the-serviceaccount) -- [Canceling a `BuildRun`](#canceling-a-buildrun) -- [BuildRun Status](#buildrun-status) - - [Understanding the state of a BuildRun](#understanding-the-state-of-a-buildrun) - - [Understanding failed BuildRuns](#understanding-failed-buildruns) - - [Step Results in BuildRun Status](#step-results-in-buildrun-status) - - [Build Snapshot](#build-snapshot) -- [Relationship with Tekton
Tasks](#relationship-with-tekton-tasks) - -## Overview +* <> +* <> +* <> + ** <> + ** <> + ** <> +* <> +* <> + ** <> + ** <> + ** <> + ** <> +* <> + +== Overview The resource `BuildRun` (`buildruns.shipwright.io/v1alpha1`) is the build process of a `Build` resource definition which is executed in Kubernetes. A `BuildRun` resource allows the user to define: -- The `BuildRun` name, through which the user can monitor the status of the image construction. -- A referenced `Build` instance to use during the build construction. -- A service account for hosting all related secrets in order to build the image. +* The `BuildRun` name, through which the user can monitor the status of the image construction. +* A referenced `Build` instance to use during the build construction. +* A service account for hosting all related secrets in order to build the image. A `BuildRun` is available within a namespace. -## BuildRun Controller +== BuildRun Controller The controller watches for: -- Updates on a `Build` resource (_CRD instance_) -- Updates on a `TaskRun` resource (_CRD instance_) +* Updates on a `Build` resource (_CRD instance_) +* Updates on a `TaskRun` resource (_CRD instance_) When the controller reconciles it: -- Looks for any existing owned `TaskRuns` and update its parent `BuildRun` status. -- Retrieves the specified `SA` and sets this with the specify output secret on the `Build` resource. -- Generates a new tekton `TaskRun` if it does not exist, and set a reference to this resource(_as a child of the controller_). -- On any subsequent updates on the `TaskRun`, the parent `BuildRun` resource instance will be updated. +* Looks for any existing owned `TaskRuns` and updates its parent `BuildRun` status. +* Retrieves the specified `SA` and sets this with the specified output secret on the `Build` resource. +* Generates a new Tekton `TaskRun` if it does not exist, and sets a reference to this resource (_as a child of the controller_).
+* On any subsequent updates on the `TaskRun`, the parent `BuildRun` resource instance will be updated. -## Configuring a BuildRun +== Configuring a BuildRun The `BuildRun` definition supports the following fields: -- Required: - - [`apiVersion`](https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields) - Specifies the API version, for example `shipwright.io/v1alpha1`. - - [`kind`](https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields) - Specifies the Kind type, for example `BuildRun`. - - [`metadata`](https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields) - Metadata that identify the CRD instance, for example the name of the `BuildRun`. - - `spec.buildRef` - Specifies an existing `Build` resource instance to use. - -- Optional: - - `spec.serviceAccount` - Refers to the SA to use when building the image. (_defaults to the `default` SA_) - - `spec.timeout` - Defines a custom timeout. The value needs to be parsable by [ParseDuration](https://golang.org/pkg/time/#ParseDuration), for example `5m`. The value overwrites the value that is defined in the `Build`. - - `spec.paramValues` - Override any _params_ defined in the referenced `Build`, as long as their name matches. - - `spec.output.image` - Refers to a custom location where the generated image would be pushed. The value will overwrite the `output.image` value which is defined in `Build`. ( Note: other properties of the output, for example, the credentials cannot be specified in the buildRun spec. ) - - `spec.output.credentials.name` - Reference an existing secret to get access to the container registry. This secret will be added to the service account along with the ones requested by the `Build`. - - `spec.env` - Specifies additional environment variables that should be passed to the build container. 
Overrides any environment variables that are specified in the `Build` resource. The available variables depend on the tool that is being used by the chosen build strategy. - -### Defining the BuildRef +* Required: + ** https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields[`apiVersion`] - Specifies the API version, for example `shipwright.io/v1alpha1`. + ** https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields[`kind`] - Specifies the Kind type, for example `BuildRun`. + ** https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#required-fields[`metadata`] - Metadata that identifies the CRD instance, for example the name of the `BuildRun`. + ** `spec.buildRef` - Specifies an existing `Build` resource instance to use. +* Optional: + ** `spec.serviceAccount` - Refers to the SA to use when building the image. (_defaults to the `default` SA_) + ** `spec.timeout` - Defines a custom timeout. The value needs to be parsable by https://golang.org/pkg/time/#ParseDuration[ParseDuration], for example `5m`. The value overwrites the value that is defined in the `Build`. + ** `spec.paramValues` - Overrides any _params_ defined in the referenced `Build`, as long as their name matches. + ** `spec.output.image` - Refers to a custom location where the generated image would be pushed. The value will overwrite the `output.image` value which is defined in `Build`. (Note: other properties of the output, for example the credentials, cannot be specified in the BuildRun spec.) + ** `spec.output.credentials.name` - References an existing secret to get access to the container registry. This secret will be added to the service account along with the ones requested by the `Build`. + ** `spec.env` - Specifies additional environment variables that should be passed to the build container. Overrides any environment variables that are specified in the `Build` resource. 
The available variables depend on the tool that is being used by the chosen build strategy. + +=== Defining the BuildRef A `BuildRun` resource can reference a `Build` resource that indicates what image to build. For example: -```yaml +[source,yaml] +---- apiVersion: shipwright.io/v1alpha1 kind: BuildRun metadata: @@ -74,15 +74,16 @@ metadata: spec: buildRef: name: buildpack-nodejs-build-namespaced -``` +---- -### Defining ParamValues +=== Defining ParamValues A `BuildRun` resource can override _paramValues_ defined in its referenced `Build`, as long as the `Build` defines the same _params_ name. For example, the following `BuildRun` overrides the value for the _sleep-time_ param, which is defined in the _a-build_ `Build` resource. -```yaml +[source,yaml] +---- --- apiVersion: shipwright.io/v1alpha1 kind: BuildRun @@ -110,15 +111,16 @@ spec: strategy: name: sleepy-strategy kind: BuildStrategy -``` +---- -See more about `paramValues` usage in the related [Build](./build.md#defining-paramvalues) resource docs. +See more about `paramValues` usage in the related link:./build.adoc#defining-paramvalues[Build] resource docs. -### Defining the ServiceAccount +=== Defining the ServiceAccount A `BuildRun` resource can define a service account to use. Usually this SA will host all related secrets referenced on the `Build` resource, for example: -```yaml +[source,yaml] +---- apiVersion: shipwright.io/v1alpha1 kind: BuildRun metadata: @@ -128,21 +130,22 @@ spec: name: buildpack-nodejs-build-namespaced serviceAccount: name: pipeline -``` +---- You can also set the `spec.serviceAccount.generate` path to `true`. This will generate the service account during runtime for you. -_**Note**_: When the SA is not defined, the `BuildRun` will default to the `default` SA in the namespace. +NOTE: When the SA is not defined, the `BuildRun` will default to the `default` SA in the namespace. 
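+The generated service account option mentioned above can be sketched as follows (a minimal example; the `BuildRun` and `Build` names are illustrative):
+
+[source,yaml]
+----
+apiVersion: shipwright.io/v1alpha1
+kind: BuildRun
+metadata:
+  name: buildpack-nodejs-buildrun
+spec:
+  buildRef:
+    name: buildpack-nodejs-build-namespaced
+  serviceAccount:
+    generate: true
+----
+
+With `generate: true`, a service account is created for the run at runtime instead of reusing an existing one.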
-## Canceling a `BuildRun` +== Canceling a `BuildRun` To cancel a `BuildRun` that's currently executing, update its status to mark it as canceled. -When you cancel a `BuildRun`, the underlying `TaskRun` is marked as canceled per the [Tekton cancel `TaskRun` feature](https://github.com/tektoncd/pipeline/blob/main/docs/taskruns.md). +When you cancel a `BuildRun`, the underlying `TaskRun` is marked as canceled per the https://github.com/tektoncd/pipeline/blob/main/docs/taskruns.md[Tekton cancel `TaskRun` feature]. Example of canceling a `BuildRun`: -```yaml +[source,yaml] +---- apiVersion: shipwright.io/v1alpha1 kind: BuildRun metadata: @@ -150,13 +153,14 @@ metadata: spec: # [...] state: "BuildRunCanceled" -``` +---- -### Specifying Environment Variables +=== Specifying Environment Variables An example of a `BuildRun` that specifies environment variables: -```yaml +[source,yaml] +---- apiVersion: shipwright.io/v1alpha1 kind: BuildRun metadata: @@ -169,12 +173,13 @@ spec: value: "example-value-1" - name: EXAMPLE_VAR_2 value: "example-value-2" -``` +---- Example of a `BuildRun` that uses the Kubernetes Downward API to expose a `Pod` field as an environment variable: -```yaml +[source,yaml] +---- apiVersion: shipwright.io/v1alpha1 kind: BuildRun metadata: @@ -187,12 +192,13 @@ spec: valueFrom: fieldRef: fieldPath: metadata.name -``` +---- Example of a `BuildRun` that uses the Kubernetes Downward API to expose a `Container` field as an environment variable: -```yaml +[source,yaml] +---- apiVersion: shipwright.io/v1alpha1 kind: BuildRun metadata: @@ -206,33 +212,35 @@ spec: resourceFieldRef: containerName: my-container resource: limits.memory -``` +---- -## BuildRun Status +== BuildRun Status The `BuildRun` resource is updated as soon as the current image building status changes: -```sh +[source,terminal] +---- $ kubectl get buildrun buildpacks-v3-buildrun NAME SUCCEEDED REASON MESSAGE STARTTIME COMPLETIONTIME buildpacks-v3-buildrun Unknown Pending Pending 1s -``` +---- 
And finally: -```sh +[source,terminal] +---- $ kubectl get buildrun buildpacks-v3-buildrun NAME SUCCEEDED REASON MESSAGE STARTTIME COMPLETIONTIME buildpacks-v3-buildrun True Succeeded All Steps have completed executing 4m28s 16s -``` +---- The above allows users to get an overview of the building mechanism state. -### Understanding the state of a BuildRun +=== Understanding the state of a BuildRun A `BuildRun` resource stores the relevant information regarding the state of the object under `Status.Conditions`. -[Conditions](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties) allow users to easily understand the resource state, without needing to understand resource-specific details. +https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties[Conditions] allow users to easily understand the resource state, without needing to understand resource-specific details. For the `BuildRun` we use a Condition of the type `Succeeded`, which is a well-known type for resources that run to completion. @@ -240,45 +248,142 @@ The `Status.Conditions` hosts different fields, like `Status`, `Reason` and `Mes The following table illustrates the different states a BuildRun can have under its `Status.Conditions`: +|=== | Status | Reason | CompletionTime is set | Description | -| --- | --- | --- | --- | -| Unknown | Pending | No | The BuildRun is waiting on a Pod in status Pending. | -| Unknown | Running | No | The BuildRun has been validate and started to perform its work. |l -| Unknown | Running | No | The BuildRun has been validate and started to perform its work. | -| Unknown | BuildRunCanceled | No | The user requested the BuildRun to be canceled. This results in the BuildRun controller requesting the TaskRun be canceled. Cancellation has not been done yet. | -| True | Succeeded | Yes | The BuildRun Pod is done. 
| -| False | Failed | Yes | The BuildRun failed in one of the steps. | -| False | BuildRunTimeout | Yes | The BuildRun timed out. | -| False | UnknownStrategyKind | Yes | The Build specified strategy Kind is unknown. (_options: ClusterBuildStrategy or BuildStrategy_) | -| False | ClusterBuildStrategyNotFound | Yes | The referenced cluster strategy was not found in the cluster. | -| False | BuildStrategyNotFound | Yes | The referenced namespaced strategy was not found in the cluster. | -| False | SetOwnerReferenceFailed | Yes | Setting ownerreferences from the BuildRun to the related TaskRun failed. | -| False | TaskRunIsMissing | Yes | The BuildRun related TaskRun was not found. | -| False | TaskRunGenerationFailed | Yes | The generation of a TaskRun spec failed. | -| False | ServiceAccountNotFound | Yes | The referenced service account was not found in the cluster. | -| False | BuildRegistrationFailed | Yes | The related Build in the BuildRun is on a Failed state. | -| False | BuildNotFound | Yes | The related Build in the BuildRun was not found. | -| False | BuildRunCanceled | Yes | The BuildRun and underlying TaskRun were canceled successfully. | -| False | BuildRunNameInvalid | Yes | The defined `BuildRun` name (`metadata.name`) is invalid. The `BuildRun` name should be a [valid label value](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set). | -| False | PodEvicted | Yes | The BuildRun Pod was evicted from the node it was running on. See [API-initiated Eviction](https://kubernetes.io/docs/concepts/scheduling-eviction/api-eviction/) and [Node-pressure Eviction](https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/) for more information. | - -_Note_: We heavily rely on the Tekton TaskRun [Conditions](https://github.com/tektoncd/pipeline/blob/main/docs/taskruns.md#monitoring-execution-status) for populating the BuildRun ones, with some exceptions. 
- -### Understanding failed BuildRuns + +| Unknown +| Pending +| No +| The BuildRun is waiting on a Pod in status Pending. +| + +| Unknown +| Running +| No +| The BuildRun has been validated and started to perform its work. +| + +| Unknown +| BuildRunCanceled +| No +| The user requested the BuildRun to be canceled. This results in the BuildRun controller requesting the TaskRun be canceled. Cancellation has not been done yet. +| + +| True +| Succeeded +| Yes +| The BuildRun Pod is done. +| + +| False +| Failed +| Yes +| The BuildRun failed in one of the steps. +| + +| False +| BuildRunTimeout +| Yes +| The BuildRun timed out. +| + +| False +| UnknownStrategyKind +| Yes +| The Build specified strategy Kind is unknown. (_options: ClusterBuildStrategy or BuildStrategy_) +| + +| False +| ClusterBuildStrategyNotFound +| Yes +| The referenced cluster strategy was not found in the cluster. +| + +| False +| BuildStrategyNotFound +| Yes +| The referenced namespaced strategy was not found in the cluster. +| + +| False +| SetOwnerReferenceFailed +| Yes +| Setting owner references from the BuildRun to the related TaskRun failed. +| + +| False +| TaskRunIsMissing +| Yes +| The BuildRun related TaskRun was not found. +| + +| False +| TaskRunGenerationFailed +| Yes +| The generation of a TaskRun spec failed. +| + +| False +| ServiceAccountNotFound +| Yes +| The referenced service account was not found in the cluster. +| + +| False +| BuildRegistrationFailed +| Yes +| The related Build in the BuildRun is in a Failed state. +| + +| False +| BuildNotFound +| Yes +| The related Build in the BuildRun was not found. +| + +| False +| BuildRunCanceled +| Yes +| The BuildRun and underlying TaskRun were canceled successfully. +| + +| False +| BuildRunNameInvalid +| Yes +| The defined `BuildRun` name (`metadata.name`) is invalid. 
The `BuildRun` name should be a https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set[valid label value]. +| + +| False +| PodEvicted +| Yes +| The BuildRun Pod was evicted from the node it was running on. See https://kubernetes.io/docs/concepts/scheduling-eviction/api-eviction/[API-initiated Eviction] and https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/[Node-pressure Eviction] for more information. +| +|=== + +NOTE: We heavily rely on the Tekton TaskRun https://github.com/tektoncd/pipeline/blob/main/docs/taskruns.md#monitoring-execution-status[Conditions] for populating the BuildRun ones, with some exceptions. + +=== Understanding failed BuildRuns To make it easier for users to understand why a BuildRun failed, users can infer from the `Status.FailedAt` field the pod and container where the failure took place. In addition, the `Status.Conditions` will host under the `Message` field a compacted message containing the `kubectl` command to run in order to retrieve the logs. -### Step Results in BuildRun Status +=== Step Results in BuildRun Status After the successful completion of a `BuildRun`, the `.status` field contains the results (`.status.taskResults`) emitted from the `TaskRun` steps generated by the `BuildRun` controller as part of processing the `BuildRun`. These results contain valuable metadata for users, like the _image digest_ or the _commit sha_ of the source code used for building. The results from the source step will be surfaced to the `.status.sources` and the results from -the [output step](buildstrategies.md#system-results) will be surfaced to the `.status.output` field of a `BuildRun`. +The results from the source step will be surfaced to the `.status.sources` and the results from +the link:buildstrategies.adoc#system-results[output step] will be surfaced to the `.status.output` field of a `BuildRun`. 
Example of a `BuildRun` with surfaced results for `git` source: -```yaml +[source,yaml] +---- # [...] status: buildSpec: @@ -291,11 +396,12 @@ status: git: commitAuthor: xxx xxxxxx commitSha: f25822b85021d02059c9ac8a211ef3804ea8fdde -``` +---- Another example of a `BuildRun` with surfaced results for local source code (`bundle`) source: -```yaml +[source,yaml] +---- # [...] status: buildSpec: @@ -307,16 +413,16 @@ status: - name: default bundle: digest: sha256:0f5e2070b534f9b880ed093a537626e3c7fdd28d5328a8d6df8d29cd3da760c7 -``` +---- -**Note**: The digest and size of the output image are only included if the build strategy provides them. See [System results](buildstrategies.md#system-results). +NOTE: The digest and size of the output image are only included if the build strategy provides them. See link:buildstrategies.adoc#system-results[System results]. -### Build Snapshot +=== Build Snapshot For every BuildRun controller reconciliation, the `buildSpec` in the Status of the `BuildRun` is updated if an existing owned `TaskRun` is present. During this update, a `Build` resource snapshot is generated and embedded into the `status.buildSpec` path of the `BuildRun`. A `buildSpec` is just a copy of the original `Build` spec, from where the `BuildRun` executed a particular image build. The snapshot approach allows developers to see the original `Build` configuration. -## Relationship with Tekton Tasks +== Relationship with Tekton Tasks -The `BuildRun` resource abstracts the image construction by delegating this work to the Tekton Pipeline [TaskRun](https://github.com/tektoncd/pipeline/blob/main/docs/taskruns.md). Compared to a Tekton Pipeline [Task](https://github.com/tektoncd/pipeline/blob/main/docs/tasks.md), a `TaskRun` runs all `steps` until completion of the `Task` or until a failure occurs in the `Task`. 
+The `BuildRun` resource abstracts the image construction by delegating this work to the Tekton Pipeline https://github.com/tektoncd/pipeline/blob/main/docs/taskruns.md[TaskRun]. Compared to a Tekton Pipeline https://github.com/tektoncd/pipeline/blob/main/docs/tasks.md[Task], a `TaskRun` runs all `steps` until completion of the `Task` or until a failure occurs in the `Task`. During the Reconcile, the `BuildRun` controller will generate a new `TaskRun`. During the execution, the controller will embed in the `TaskRun` `Task` definition the required `steps` to execute. These `steps` are defined in the strategy referenced by the `Build` resource, either a `ClusterBuildStrategy` or a `BuildStrategy`. diff --git a/content/en/docs/api/buildstrategies.adoc b/content/en/docs/api/buildstrategies.adoc new file mode 100644 index 00000000..35fb33fa --- /dev/null +++ b/content/en/docs/api/buildstrategies.adoc @@ -0,0 +1,698 @@ +--- +title: BuildStrategy and ClusterBuildStrategy +weight: 20 +--- + +* <<_overview,Overview>> +* <<_available_clusterbuildstrategies,Available ClusterBuildStrategies>> +* <<_available_buildstrategies,Available BuildStrategies>> +* <<_buildah,Buildah>> + ** <<_installing_buildah_strategy,Installing Buildah Strategy>> +* <<_buildpacks_v3,Buildpacks v3>> + ** <<_installing_buildpacks_v3_strategy,Installing Buildpacks v3 Strategy>> +* <<_kaniko,Kaniko>> + ** <<_installing_kaniko_strategy,Installing Kaniko Strategy>> + ** <<_scanning_with_trivy,Scanning with Trivy>> +* <<_buildkit,BuildKit>> + ** <<_cache_exporters,Cache Exporters>> + ** <<_known_limitations,Known Limitations>> + ** <<_usage_in_clusters_with_pod_security_standards,Usage in Clusters with Pod Security Standards>> + ** <<_installing_buildkit_strategy,Installing BuildKit Strategy>> +* <<_ko,ko>> + ** <<_installing_ko_strategy,Installing ko Strategy>> + ** <<_parameters,Parameters>> +* <<_source_to_image,Source to Image>> + ** <<_installing_source_to_image_strategy,Installing Source to Image Strategy>> + ** <<_build_steps,Build Steps>> +* <<_strategy_parameters,Strategy parameters>> +* <<_system_parameters,System parameters>> +* <<_system_parameters_vs_strategy_parameters_comparison,System parameters vs Strategy Parameters Comparison>> +* <<_system_results,System results>> +* <<_steps_resource_definition,Steps Resource Definition>> + ** <<_strategies_with_different_resources,Strategies with different resources>> + ** <> + ** <> +* <> + +== Overview + +There are two types of strategies, the `ClusterBuildStrategy` (`clusterbuildstrategies.shipwright.io/v1alpha1`) and the `BuildStrategy` (`buildstrategies.shipwright.io/v1alpha1`). Both strategies define a shared group of steps needed to fulfill the application build. + +A `ClusterBuildStrategy` is available cluster-wide, while a `BuildStrategy` is available within a namespace. + +== Available ClusterBuildStrategies + +Well-known strategies can be bootstrapped from link:../samples/buildstrategy[here]. 
The currently supported cluster build strategies are: + +|=== +| Name | Supported platforms + +| link:../samples/buildstrategy/buildah/buildstrategy_buildah_cr.yaml[buildah] +| linux/amd64 only + +| link:../samples/buildstrategy/buildkit/buildstrategy_buildkit_cr.yaml[BuildKit] +| all + +| link:../samples/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3-heroku_cr.yaml[buildpacks-v3-heroku] +| linux/amd64 only + +| link:../samples/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3_cr.yaml[buildpacks-v3] +| linux/amd64 only + +| link:../samples/buildstrategy/kaniko/buildstrategy_kaniko_cr.yaml[kaniko] +| all + +| link:../samples/buildstrategy/ko/buildstrategy_ko_cr.yaml[ko] +| all + +| link:../samples/buildstrategy/source-to-image/buildstrategy_source-to-image_cr.yaml[source-to-image] +| linux/amd64 only +|=== + +== Available BuildStrategies + +The currently supported namespaced build strategies are: + +|=== +| Name | Supported platforms + +| link:../samples/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3-heroku_namespaced_cr.yaml[buildpacks-v3-heroku] +| linux/amd64 only + +| link:../samples/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3_namespaced_cr.yaml[buildpacks-v3] +| linux/amd64 only +|=== + +''' + +== Buildah + +The `buildah` ClusterBuildStrategy uses https://github.com/containers/buildah[`buildah`] to build and push a container image from a `Dockerfile`. The `Dockerfile` should be specified on the `Build` resource. + +=== Installing Buildah Strategy + +To install the strategy, use: + +[source,terminal] +---- +kubectl apply -f samples/buildstrategy/buildah/buildstrategy_buildah_cr.yaml +---- + +''' + +== Buildpacks v3 + +The https://buildpacks.io/[buildpacks-v3] BuildStrategy/ClusterBuildStrategy uses a Cloud Native Builder (https://buildpacks.io/docs/concepts/components/builder/[CNB]) container image, and is able to implement https://buildpacks.io/docs/concepts/components/lifecycle/[lifecycle commands]. 
The following CNB images are the most common options: + +* https://hub.docker.com/r/heroku/buildpacks/[`heroku/buildpacks:18`] +* https://hub.docker.com/r/cloudfoundry/cnb[`cloudfoundry/cnb:bionic`] +* https://hub.docker.com/r/paketobuildpacks/builder/tags[`docker.io/paketobuildpacks/builder:full`] + +=== Installing Buildpacks v3 Strategy + +You can install the `BuildStrategy` in your namespace or install the `ClusterBuildStrategy` at cluster scope so that it can be shared across namespaces. + +To install the cluster scope strategy, use (below is a Heroku example; you can also use the Paketo sample): + +[source,terminal] +---- +kubectl apply -f samples/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3-heroku_cr.yaml +---- + +To install the namespaced scope strategy, use: + +[source,terminal] +---- +kubectl apply -f samples/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3-heroku_namespaced_cr.yaml +---- + +''' + +== Kaniko + +The `kaniko` ClusterBuildStrategy uses the Kaniko https://github.com/GoogleContainerTools/kaniko[`executor`] to build a container image from a `Dockerfile` and a context directory. The `kaniko-trivy` ClusterBuildStrategy adds https://github.com/aquasecurity/trivy[trivy] scanning and refuses to push images with critical vulnerabilities. + +=== Installing Kaniko Strategy + +To install the cluster scope strategy, use: + +[source,terminal] +---- +kubectl apply -f samples/buildstrategy/kaniko/buildstrategy_kaniko_cr.yaml +---- + +==== Scanning with Trivy + +You can also incorporate scanning into the ClusterBuildStrategy. The `kaniko-trivy` ClusterBuildStrategy builds the image with `kaniko`, then scans with https://github.com/aquasecurity/trivy[trivy]. The BuildRun will then exit with an error if there is a critical vulnerability, instead of pushing the vulnerable image into the container registry. 
+ +To install the cluster scope strategy, use: + +[source,terminal] +---- +kubectl apply -f samples/buildstrategy/kaniko/buildstrategy_kaniko-trivy_cr.yaml +---- + +NOTE: Image scanning is not a substitute for trusting the Dockerfile you are building. The build process itself is also susceptible if the Dockerfile has a vulnerability. Frameworks/strategies such as buildpacks or source-to-image (which avoid directly building a Dockerfile) should be considered if you need guardrails around the code you want to build. + +''' + +== BuildKit + +https://github.com/moby/buildkit[BuildKit] is composed of the `buildctl` client and the `buildkitd` daemon. The `buildkit` ClusterBuildStrategy runs in https://github.com/moby/buildkit#daemonless[daemonless] mode, where both the client and an ephemeral daemon run in a single container. In addition, it runs without privileges (_https://github.com/moby/buildkit/blob/master/docs/rootless.md[rootless]_). + +=== Cache Exporters + +By default, the `buildkit` ClusterBuildStrategy will use caching to optimize the build times. When pushing an image to a registry, it will use the `inline` export cache, which pushes the image and cache together. Please refer to https://github.com/moby/buildkit#export-cache[export-cache docs] for more information. + +=== Known Limitations + +The `buildkit` ClusterBuildStrategy currently locks the following parameters: + +* The `Dockerfile` name needs to be `Dockerfile`; this is currently not configurable. +* Exporter caches are enabled by default; this is currently not configurable. +* To allow running rootless, it requires both https://kubernetes.io/docs/tutorials/clusters/apparmor/[AppArmor] and https://kubernetes.io/docs/tutorials/clusters/seccomp/[seccomp] to be disabled using the `unconfined` profile. + +=== Usage in Clusters with Pod Security Standards + +The BuildKit strategy contains fields with regard to security settings. 
It therefore depends on the respective cluster setup and administrative configuration. These settings are: + +* Defining the `unconfined` profile for both AppArmor and seccomp as required by the underlying `rootlesskit`. +* The `allowPrivilegeEscalation` setting is set to `true` to be able to use binaries that have the `setuid` bit set in order to run with "root" level privileges. In the case of BuildKit, this is required by `rootlesskit` in order to set the user namespace mapping file `/proc/<pid>/uid_map`. +* Use of a non-root user with UID 1000/GID 1000 as the `runAsUser`. + +These settings have no effect if Pod Security Standards are not used. + +NOTE: At this point in time, there is no way to run `rootlesskit` to start the BuildKit daemon without the `allowPrivilegeEscalation` flag set to `true`. Clusters with the `Restricted` security standard in place will not be able to use this build strategy. + +=== Installing BuildKit Strategy + +To install the cluster scope strategy, use: + +[source,terminal] +---- +kubectl apply -f samples/buildstrategy/buildkit/buildstrategy_buildkit_cr.yaml +---- + +''' + +== ko + +The `ko` ClusterBuildStrategy uses https://github.com/google/ko[ko]'s `publish` command to build an image from a Golang main package. + +=== Installing ko Strategy + +To install the cluster scope strategy, use: + +[source,terminal] +---- +kubectl apply -f samples/buildstrategy/ko/buildstrategy_ko_cr.yaml +---- + +=== Parameters + +The build strategy provides the following parameters that you can set in a Build or BuildRun to control its behavior: + +|=== +| Parameter | Description | Default + +| `go-flags` +| Value for the GOFLAGS environment variable. 
+| Empty + +| `go-version` +| Version of Go, must match a tag from https://hub.docker.com/_/golang?tab=tags[the golang image] +| `1.16` + +| `ko-version` +| Version of ko, must be either `latest` for the newest release, or a https://github.com/google/ko/releases[ko release name] +| `latest` + +| `package-directory` +| The directory inside the context directory containing the main package. +| `.` + +| `target-platform` +| Target platform to be built. For example: `linux/arm64`. Multiple platforms can be provided separated by comma, for example: `linux/arm64,linux/amd64`. The value `all` will build all platforms supported by the base image. The value `current` will build the platform on which the build runs. +| `current` +|=== + +== Source to Image + +This BuildStrategy combines https://github.com/openshift/source-to-image[`source-to-image`] and https://github.com/GoogleContainerTools/kaniko[`kaniko`] to generate a `Dockerfile` and prepare the application to be built with a builder. + +`s2i` requires a specially crafted image, which can be provided via the `builderImage` parameter on the `Build` resource. + +=== Installing Source to Image Strategy + +To install the cluster scope strategy, use: + +[source,terminal] +---- +kubectl apply -f samples/buildstrategy/source-to-image/buildstrategy_source-to-image_cr.yaml +---- + +=== Build Steps + +. `s2i` generates a `Dockerfile` and prepares the source code for the image build. +. `kaniko` creates and pushes the container image to what is defined as `output.image`. + +== Strategy parameters + +Strategy parameters allow users to parameterize their strategy definition and to control the _parameter_ values via the `Build` or `BuildRun` resources. + +Users defining _parameters_ under their strategies need to understand the following: + +* *Definition*: A list of parameters should be defined under `spec.parameters`. 
Each list item should consist of a _name_, a _description_ and a reasonable _default_ value (_type string_). Note that a default value is not mandatory. +* *Usage*: In order to use a parameter in the strategy steps, use the following syntax: `$(params.your-parameter-name)` +* *Parameterize*: Any `Build` or `BuildRun` referencing your strategy can set a value for the _your-parameter-name_ parameter if needed. + +The following is an example of a strategy that defines and uses the `sleep-time` parameter: + +[source,yaml] +---- +--- +apiVersion: shipwright.io/v1alpha1 +kind: BuildStrategy +metadata: + name: sleepy-strategy +spec: + parameters: + - name: sleep-time + description: "time in seconds for sleeping" + default: "1" + buildSteps: + - name: a-strategy-step + image: alpine:latest + command: + - sleep + args: + - $(params.sleep-time) +---- + +See more information on how to use this parameter in a `Build` or `BuildRun` in the related link:./build.adoc#defining-paramvalues[docs]. + +== System parameters + +In contrast to the strategy `spec.parameters`, system parameters and their values are defined at runtime. You can use them when defining the steps of a build strategy to access system information as well as information provided by the user in their Build or BuildRun. The following parameters are available: + +|=== +| Parameter | Description + +| `$(params.shp-source-root)` +| The absolute path to the directory that contains the user's sources. + +| `$(params.shp-source-context)` +| The absolute path to the context directory of the user's sources. If the user specified no value for `spec.source.contextDir` in their `Build`, then this value will equal the value for `$(params.shp-source-root)`. Note that this directory is not guaranteed to exist at the time the container for your step is started; you can therefore not use this parameter as a step's working directory. 
+ +| `$(params.shp-output-image)` +| The URL of the image that the user wants to push as specified in the Build's `spec.output.image`, or the override from the BuildRun's `spec.output.image`. +|=== + +== System parameters vs Strategy Parameters Comparison + +|=== +| Parameter Type | User Configurable | Definition + +| System Parameter +| No +| At run-time, by the `BuildRun` controller. + +| Strategy Parameter +| Yes +| At build-time, during the `BuildStrategy` creation. +|=== + +== System results + +You can optionally store the size and digest of the image your build strategy created in a set of files. + +|=== +| Result file | Description + +| `$(results.shp-image-digest.path)` +| File to store the digest of the image. + +| `$(results.shp-image-size.path)` +| File to store the compressed size of the image. +|=== + +You can look at sample build strategies, such as link:../samples/buildstrategy/kaniko/buildstrategy_kaniko_cr.yaml[Kaniko], or link:../samples/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3_cr.yaml[Buildpacks], to see how they fill some or all of the results files. + +This information will be available in the `.status.output` field of the BuildRun. + +[source,yaml] +---- +apiVersion: shipwright.io/v1alpha1 +kind: BuildRun +# [...] +status: + # [...] + output: + digest: sha256:07626e3c7fdd28d5328a8d6df8d29cd3da760c7f5e2070b534f9b880ed093a53 + size: "1989004" + # [...] +---- + +== Steps Resource Definition + +All strategy steps can include a definition of resources (_limits and requests_) for CPU, memory and disk. For strategies with more than one step, each step (_container_) could require more resources than others. Strategy admins are free to define the values that they consider the best fit for each step. Also, identical strategies with the same steps that are only different in their name and step resources can be installed on the cluster to allow users to create a build with smaller or larger resource requirements. 
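+Users can then pick the flavour that matches their workload by referencing it from a `Build`. A minimal sketch, assuming a `kaniko-small` ClusterBuildStrategy (such as the one defined in the next section) is installed; the source repository and output image references are illustrative:
+
+[source,yaml]
+----
+apiVersion: shipwright.io/v1alpha1
+kind: Build
+metadata:
+  name: kaniko-golang-build-small
+spec:
+  source:
+    url: https://github.com/shipwright-io/sample-go  # illustrative repository
+    contextDir: docker-build
+  strategy:
+    name: kaniko-small
+    kind: ClusterBuildStrategy
+  output:
+    image: registry.example.org/namespace/sample-go:latest  # illustrative image
+----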
+
+=== Strategies with different resources
+
+If strategy admins require multiple flavours of the same strategy, where one flavour has more resources than the other, then multiple strategies for the same type should be defined on the cluster. In the following example, we use Kaniko as the type:
+
+[source,yaml]
+----
+---
+apiVersion: shipwright.io/v1alpha1
+kind: ClusterBuildStrategy
+metadata:
+  name: kaniko-small
+spec:
+  buildSteps:
+    - name: build-and-push
+      image: gcr.io/kaniko-project/executor:v1.6.0
+      workingDir: $(params.shp-source-root)
+      securityContext:
+        runAsUser: 0
+        capabilities:
+          add:
+            - CHOWN
+            - DAC_OVERRIDE
+            - FOWNER
+            - SETGID
+            - SETUID
+            - SETFCAP
+            - KILL
+      env:
+        - name: DOCKER_CONFIG
+          value: /tekton/home/.docker
+        - name: AWS_ACCESS_KEY_ID
+          value: NOT_SET
+        - name: AWS_SECRET_KEY
+          value: NOT_SET
+      command:
+        - /kaniko/executor
+      args:
+        - --skip-tls-verify=true
+        - --dockerfile=$(build.dockerfile)
+        - --context=$(params.shp-source-context)
+        - --destination=$(params.shp-output-image)
+        - --snapshotMode=redo
+        - --push-retry=3
+      resources:
+        limits:
+          cpu: 250m
+          memory: 65Mi
+        requests:
+          cpu: 250m
+          memory: 65Mi
+---
+apiVersion: shipwright.io/v1alpha1
+kind: ClusterBuildStrategy
+metadata:
+  name: kaniko-medium
+spec:
+  buildSteps:
+    - name: build-and-push
+      image: gcr.io/kaniko-project/executor:v1.6.0
+      workingDir: $(params.shp-source-root)
+      securityContext:
+        runAsUser: 0
+        capabilities:
+          add:
+            - CHOWN
+            - DAC_OVERRIDE
+            - FOWNER
+            - SETGID
+            - SETUID
+            - SETFCAP
+            - KILL
+      env:
+        - name: DOCKER_CONFIG
+          value: /tekton/home/.docker
+        - name: AWS_ACCESS_KEY_ID
+          value: NOT_SET
+        - name: AWS_SECRET_KEY
+          value: NOT_SET
+      command:
+        - /kaniko/executor
+      args:
+        - --skip-tls-verify=true
+        - --dockerfile=$(build.dockerfile)
+        - --context=$(params.shp-source-context)
+        - --destination=$(params.shp-output-image)
+        - --snapshotMode=redo
+        - --push-retry=3
+      resources:
+        limits:
+          cpu: 500m
+          memory: 1Gi
+        requests:
+          cpu: 500m
+          memory: 1Gi
+----
+
+The above provides more control and flexibility for strategy admins. End users only need to reference the proper strategy. For example:
+
+[source,yaml]
+----
+---
+apiVersion: shipwright.io/v1alpha1
+kind: Build
+metadata:
+  name: kaniko-medium
+spec:
+  source:
+    url: https://github.com/shipwright-io/sample-go
+    contextDir: docker-build
+  strategy:
+    name: kaniko
+    kind: ClusterBuildStrategy
+  dockerfile: Dockerfile
+----
+
+=== How does Tekton Pipelines handle resources
+
+The *Build* controller relies on the Tekton https://github.com/tektoncd/pipeline[pipeline controller] to schedule the `pods` that execute the above strategy steps. In a nutshell, the *Build* controller creates a Tekton *TaskRun* at runtime, and the *TaskRun* generates a new pod in the particular namespace. In order to build an image, the pod executes all the strategy steps one-by-one.
+
+Tekton manages each step's resource *requests* in a very particular way; the https://github.com/tektoncd/pipeline/blob/main/docs/tasks.md#defining-steps[docs] mention the following:
+
+____
+The CPU, memory, and ephemeral storage resource requests will be set to zero, or, if specified, the minimums set through LimitRanges in that Namespace, if the container image does not have the largest resource request out of all container images in the Task. This ensures that the Pod that executes the Task only requests enough resources to run a single container image in the Task rather than hoard resources for all container images in the Task at once.
+____
+
+=== Examples of Tekton resources management
+
+For a more concrete example, let's take a look at the following scenarios:
+
+'''
+
+*Scenario 1.* Namespace without `LimitRange`, both steps with the same resource values.
+
+If we apply the following resources:
+
+* link:../samples/build/build_buildah_cr.yaml[buildahBuild]
+* link:../samples/buildrun/buildrun_buildah_cr.yaml[buildahBuildRun]
+* link:../samples/buildstrategy/buildah/buildstrategy_buildah_cr.yaml[buildahClusterBuildStrategy]
+
+We will see some differences between the `TaskRun` definition and the `pod` definition.
+
+For the `TaskRun`, as expected, we can see the resources on each `step`, as previously defined in our link:../samples/buildstrategy/buildah/buildstrategy_buildah_cr.yaml[strategy].
+
+[source,terminal]
+----
+$ kubectl -n test-build get tr buildah-golang-buildrun-9gmcx-pod-lhzbc -o json | jq '.spec.taskSpec.steps[] | select(.name == "step-buildah-bud" ) | .resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "250m",
+    "memory": "65Mi"
+  }
+}
+
+$ kubectl -n test-build get tr buildah-golang-buildrun-9gmcx-pod-lhzbc -o json | jq '.spec.taskSpec.steps[] | select(.name == "step-buildah-push" ) | .resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "250m",
+    "memory": "65Mi"
+  }
+}
+----
+
+The pod definition is different: Tekton only keeps the *highest* request values for one container and sets the rest (the lowest) to zero:
+
+[source,terminal]
+----
+$ kubectl -n test-build get pods buildah-golang-buildrun-9gmcx-pod-lhzbc -o json | jq '.spec.containers[] | select(.name == "step-step-buildah-bud" ) | .resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "250m",
+    "ephemeral-storage": "0",
+    "memory": "65Mi"
+  }
+}
+
+$ kubectl -n test-build get pods buildah-golang-buildrun-9gmcx-pod-lhzbc -o json | jq '.spec.containers[] | select(.name == "step-step-buildah-push" ) | .resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "0", <------------------- See how the request is set to ZERO.
+    "ephemeral-storage": "0", <------------------- See how the request is set to ZERO.
+    "memory": "0" <------------------- See how the request is set to ZERO.
+  }
+}
+----
+
+In this scenario, only one container can have the `spec.resources.requests` definition. Even when both steps have the same values, only one container gets them; the requests of the other are set to zero.
+
+'''
+
+*Scenario 2.* Namespace without `LimitRange`, steps with different resources:
+
+If we apply the following resources:
+
+* link:../samples/build/build_buildah_cr.yaml[buildahBuild]
+* link:../samples/buildrun/buildrun_buildah_cr.yaml[buildahBuildRun]
+* We will use a modified buildah strategy, with the following step resources:
++
+[source,yaml]
+----
+  - name: buildah-bud
+    image: quay.io/containers/buildah:v1.20.1
+    workingDir: $(params.shp-source-root)
+    securityContext:
+      privileged: true
+    command:
+      - /usr/bin/buildah
+    args:
+      - bud
+      - --tag=$(params.shp-output-image)
+      - --file=$(build.dockerfile)
+      - $(build.source.contextDir)
+    resources:
+      limits:
+        cpu: 500m
+        memory: 1Gi
+      requests:
+        cpu: 250m
+        memory: 65Mi
+    volumeMounts:
+      - name: buildah-images
+        mountPath: /var/lib/containers/storage
+  - name: buildah-push
+    image: quay.io/containers/buildah:v1.20.1
+    securityContext:
+      privileged: true
+    command:
+      - /usr/bin/buildah
+    args:
+      - push
+      - --tls-verify=false
+      - docker://$(params.shp-output-image)
+    resources:
+      limits:
+        cpu: 500m
+        memory: 1Gi
+      requests:
+        cpu: 250m
+        memory: 100Mi <------ See how we provide more memory to step-buildah-push, compared to the 65Mi of the other step
+----
+
+For the `TaskRun`, as expected, we can see the resources on each `step`.
+
+[source,terminal]
+----
+$ kubectl -n test-build get tr buildah-golang-buildrun-skgrp -o json | jq '.spec.taskSpec.steps[] | select(.name == "step-buildah-bud" ) | .resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "250m",
+    "memory": "65Mi"
+  }
+}
+
+$ kubectl -n test-build get tr buildah-golang-buildrun-skgrp -o json | jq '.spec.taskSpec.steps[] | select(.name == "step-buildah-push" ) | .resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "250m",
+    "memory": "100Mi"
+  }
+}
+----
+
+The pod definition is different: Tekton only keeps the *highest* request value for each resource and sets the rest (the lowest) to zero:
+
+[source,terminal]
+----
+$ kubectl -n test-build get pods buildah-golang-buildrun-95xq8-pod-mww8d -o json | jq '.spec.containers[] | select(.name == "step-step-buildah-bud" ) | .resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "250m", <------------------- See how the CPU is preserved
+    "ephemeral-storage": "0",
+    "memory": "0" <------------------- See how the memory is set to ZERO
+  }
+}
+$ kubectl -n test-build get pods buildah-golang-buildrun-95xq8-pod-mww8d -o json | jq '.spec.containers[] | select(.name == "step-step-buildah-push" ) | .resources'
+{
+  "limits": {
+    "cpu": "500m",
+    "memory": "1Gi"
+  },
+  "requests": {
+    "cpu": "0", <------------------- See how the CPU is set to zero.
+    "ephemeral-storage": "0",
+    "memory": "100Mi" <------------------- See how the memory is preserved on this container
+  }
+}
+----
+
+In the above scenario, we can see how the maximum resource requests are distributed between containers. The container `step-buildah-push` gets the `100Mi` memory request because it defined the highest value. At the same time, the container `step-buildah-bud` is assigned `0` for its memory request.
+
+'''
+
+*Scenario 3.* Namespace *with* a `LimitRange`.
+
+When a `LimitRange` exists in the namespace, the `Tekton Pipeline` controller follows the same approach as in the above two scenarios. The difference is that containers with lower values get the minimum values of the `LimitRange` instead of zero.
+
+== Annotations
+
+Annotations can be defined for a BuildStrategy/ClusterBuildStrategy as for any other Kubernetes object. Annotations are propagated to the TaskRun, and from there Tekton propagates them to the Pod. Example use cases include:
+
+* The Kubernetes https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#support-traffic-shaping[Network Traffic Shaping] feature looks for the `kubernetes.io/ingress-bandwidth` and `kubernetes.io/egress-bandwidth` annotations to limit the network bandwidth the `Pod` is allowed to use.
+* The https://kubernetes.io/docs/tutorials/clusters/apparmor/[AppArmor profile of a container] is defined using the `container.apparmor.security.beta.kubernetes.io/` annotation.
+
+The following annotations are not propagated:
+
+* `kubectl.kubernetes.io/last-applied-configuration`
+* `clusterbuildstrategy.shipwright.io/*`
+* `buildstrategy.shipwright.io/*`
+* `build.shipwright.io/*`
+* `buildrun.shipwright.io/*`
+
+A Kubernetes administrator can further restrict the usage of annotations by using policy engines like https://www.openpolicyagent.org/[Open Policy Agent].
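+
+To make the propagation concrete, here is a minimal sketch of a strategy that carries one of the annotations mentioned above. The strategy name, step, and bandwidth value are illustrative placeholders, not a shipped sample:
+
+[source,yaml]
+----
+apiVersion: shipwright.io/v1alpha1
+kind: ClusterBuildStrategy
+metadata:
+  name: annotated-strategy            # hypothetical name
+  annotations:
+    # Illustrative limit; propagated to the TaskRun and from there to the Pod.
+    kubernetes.io/egress-bandwidth: "1M"
+spec:
+  buildSteps:
+    - name: a-strategy-step
+      image: alpine:latest
+      command:
+        - sh
+----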
diff --git a/content/en/docs/api/buildstrategies.md b/content/en/docs/api/buildstrategies.md deleted file mode 100644 index 6926798e..00000000 --- a/content/en/docs/api/buildstrategies.md +++ /dev/null @@ -1,633 +0,0 @@ ---- -title: BuildStrategy and ClusterBuildStrategy -weight: 20 ---- - -- [Overview](#overview) -- [Available ClusterBuildStrategies](#available-clusterbuildstrategies) -- [Available BuildStrategies](#available-buildstrategies) -- [Buildah](#buildah) - - [Installing Buildah Strategy](#installing-buildah-strategy) -- [Buildpacks v3](#buildpacks-v3) - - [Installing Buildpacks v3 Strategy](#installing-buildpacks-v3-strategy) -- [Kaniko](#kaniko) - - [Installing Kaniko Strategy](#installing-kaniko-strategy) - - [Scanning with Trivy](#scanning-with-trivy) -- [BuildKit](#buildkit) - - [Cache Exporters](#cache-exporters) - - [Known Limitations](#known-limitations) - - [Usage in Clusters with Pod Security Standards](#usage-in-clusters-with-pod-security-standards) - - [Installing BuildKit Strategy](#installing-buildkit-strategy) -- [ko](#ko) - - [Installing ko Strategy](#installing-ko-strategy) - - [Parameters](#parameters) -- [Source to Image](#source-to-image) - - [Installing Source to Image Strategy](#installing-source-to-image-strategy) - - [Build Steps](#build-steps) -- [Strategy parameters](#strategy-parameters) -- [System parameters](#system-parameters) -- [System parameters vs Strategy Parameters Comparison](#system-parameters-vs-strategy-parameters-comparison) -- [System results](#system-results) -- [Steps Resource Definition](#steps-resource-definition) - - [Strategies with different resources](#strategies-with-different-resources) - - [How does Tekton Pipelines handle resources](#how-does-tekton-pipelines-handle-resources) - - [Examples of Tekton resources management](#examples-of-tekton-resources-management) -- [Annotations](#annotations) - -## Overview - -There are two types of strategies, the `ClusterBuildStrategy` 
(`clusterbuildstrategies.shipwright.io/v1alpha1`) and the `BuildStrategy` (`buildstrategies.shipwright.io/v1alpha1`). Both strategies define a shared group of steps, needed to fullfil the application build. - -A `ClusterBuildStrategy` is available cluster-wide, while a `BuildStrategy` is available within a namespace. - -## Available ClusterBuildStrategies - -Well-known strategies can be bootstrapped from [here](../samples/buildstrategy). The currently supported Cluster BuildStrategy are: - -| Name | Supported platforms | -| ---- | ------------------- | -| [buildah](../samples/buildstrategy/buildah/buildstrategy_buildah_cr.yaml) | linux/amd64 only | -| [BuildKit](../samples/buildstrategy/buildkit/buildstrategy_buildkit_cr.yaml) | all | -| [buildpacks-v3-heroku](../samples/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3-heroku_cr.yaml) | linux/amd64 only | -| [buildpacks-v3](../samples/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3_cr.yaml) | linux/amd64 only | -| [kaniko](../samples/buildstrategy/kaniko/buildstrategy_kaniko_cr.yaml) | all | -| [ko](../samples/buildstrategy/ko/buildstrategy_ko_cr.yaml) | all | -| [source-to-image](../samples/buildstrategy/source-to-image/buildstrategy_source-to-image_cr.yaml) | linux/amd64 only | - -## Available BuildStrategies - -The current supported namespaces BuildStrategy are: - -| Name | Supported platforms | -| ---- | ------------------- | -| [buildpacks-v3-heroku](../samples/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3-heroku_namespaced_cr.yaml) | linux/amd64 only | -| [buildpacks-v3](../samples/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3_namespaced_cr.yaml) | linux/amd64 only | - ---- - -## Buildah - -The `buildah` ClusterBuildStrategy consists of using [`buildah`](https://github.com/containers/buildah) to build and push a container image, out of a `Dockerfile`. The `Dockerfile` should be specified on the `Build` resource. 
- -### Installing Buildah Strategy - -To install use: - -```sh -kubectl apply -f samples/buildstrategy/buildah/buildstrategy_buildah_cr.yaml -``` - ---- - -## Buildpacks v3 - -The [buildpacks-v3][buildpacks] BuildStrategy/ClusterBuildStrategy uses a Cloud Native Builder ([CNB][cnb]) container image, and is able to implement [lifecycle commands][lifecycle]. The following CNB images are the most common options: - -- [`heroku/buildpacks:18`][hubheroku] -- [`cloudfoundry/cnb:bionic`][hubcloudfoundry] -- [`docker.io/paketobuildpacks/builder:full`](https://hub.docker.com/r/paketobuildpacks/builder/tags) - -### Installing Buildpacks v3 Strategy - -You can install the `BuildStrategy` in your namespace or install the `ClusterBuildStrategy` at cluster scope so that it can be shared across namespaces. - -To install the cluster scope strategy, use (below is a heroku example, you can also use paketo sample): - -```sh -kubectl apply -f samples/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3-heroku_cr.yaml -``` - -To install the namespaced scope strategy, use: - -```sh -kubectl apply -f samples/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3-heroku_namespaced_cr.yaml -``` - ---- - -## Kaniko - -The `kaniko` ClusterBuildStrategy is composed by Kaniko's `executor` [kaniko], with the objective of building a container-image, out of a `Dockerfile` and context directory. The `kaniko-trivy` ClusterBuildStrategy adds [trivy](https://github.com/aquasecurity/trivy) scanning and refuses to push images with critical vulnerabilities. - -### Installing Kaniko Strategy - -To install the cluster scope strategy, use: - -```sh -kubectl apply -f samples/buildstrategy/kaniko/buildstrategy_kaniko_cr.yaml -``` - -#### Scanning with Trivy - -You can also incorporate scanning into the ClusterBuildStrategy. The `kaniko-trivy` ClusterBuildStrategy builds the image with `kaniko`, then scans with [trivy](https://github.com/aquasecurity/trivy). 
The BuildRun will then exit with an error if there is a critical vulnerability, instead of pushing the vulnerable image into the container registry. - -To install the cluster scope strategy, use: - -```sh -kubectl apply -f samples/buildstrategy/kaniko/buildstrategy_kaniko-trivy_cr.yaml -``` - -*Note: doing image scanning is not a substitute for trusting the Dockerfile you are building. The build process itself is also susceptible if the Dockerfile has a vulnerability. Frameworks/strategies such as build-packs or source-to-image (which avoid directly building a Dockerfile) should be considered if you need guardrails around the code you want to build.* - ---- - -## BuildKit - -[BuildKit](https://github.com/moby/buildkit) is composed of the `buildctl` client and the `buildkitd` daemon. For the `buildkit` ClusterBuildStrategy, it runs on a [daemonless](https://github.com/moby/buildkit#daemonless) mode, where both client and ephemeral daemon run in a single container. In addition, it runs without privileges (_[rootless](https://github.com/moby/buildkit/blob/master/docs/rootless.md)_). - -### Cache Exporters - -By default, the `buildkit` ClusterBuildStrategy will use caching to optimize the build times. When pushing an image to a registry, it will use the `inline` export cache, which pushes the image and cache together. Please refer to [export-cache docs](https://github.com/moby/buildkit#export-cache) for more information. - -### Known Limitations - -The `buildkit` ClusterBuildStrategy currently locks the following parameters: - -- A `Dockerfile` name needs to be `Dockerfile`, this is currently not configurable. -- Exporter caches are enabled by default, this is currently not configurable. -- To allow running rootless, it requires both [AppArmor](https://kubernetes.io/docs/tutorials/clusters/apparmor/) as well as [SecComp](https://kubernetes.io/docs/tutorials/clusters/seccomp/) to be disabled using the `unconfined` profile. 
- -### Usage in Clusters with Pod Security Standards - -The BuildKit strategy contains fields with regards to security settings. It therefore depends on the respective cluster setup and administrative configuration. These settings are: - -- Defining the `unconfined` profile for both AppArmor and seccomp as required by the underlying `rootlesskit`. -- The `allowPrivilegeEscalation` settings is set to `true` to be able to use binaries that have the `setuid` bit set in order to run with "root" level privileges. In case of BuildKit, this is required by `rootlesskit` in order to set the user namespace mapping file `/proc//uid_map`. -- Use of non-root user with UID 1000/GID 1000 as the `runAsUser`. - -These settings have no effect in case Pod Security Standards are not used. - -_Please note:_ At this point in time, there is no way to run `rootlesskit` to start the BuildKit daemon without the `allowPrivilegeEscalation` flag set to `true`. Clusters with the `Restricted` security standard in place will not be able to use this build strategy. - -### Installing BuildKit Strategy - -To install the cluster scope strategy, use: - -```sh -kubectl apply -f samples/buildstrategy/buildkit/buildstrategy_buildkit_cr.yaml -``` - ---- - -## ko - -The `ko` ClusterBuilderStrategy is using [ko](https://github.com/google/ko)'s publish command to build an image from a Golang main package. - -### Installing ko Strategy - -To install the cluster scope strategy, use: - -```sh -kubectl apply -f samples/buildstrategy/ko/buildstrategy_ko_cr.yaml -``` - -### Parameters - -The build strategy provides the following parameters that you can set in a Build or BuildRun to control its behavior: - -| Parameter | Description | Default | -| -- | -- | -- | -| `go-flags` | Value for the GOFLAGS environment variable. 
| Empty | -| `go-version` | Version of Go, must match a tag from [the golang image](https://hub.docker.com/_/golang?tab=tags) | `1.16` | -| `ko-version` | Version of ko, must be either `latest` for the newest release, or a [ko release name](https://github.com/google/ko/releases) | `latest` | -| `package-directory` | The directory inside the context directory containing the main package. | `.` | -| `target-platform` | Target platform to be built. For example: `linux/arm64`. Multiple platforms can be provided separated by comma, for example: `linux/arm64,linux/amd64`. The value `all` will build all platforms supported by the base image. The value `current` will build the platform on which the build runs. | `current` | - -## Source to Image - -This BuildStrategy is composed by [`source-to-image`][s2i] and [`kaniko`][kaniko] in order to generate a `Dockerfile` and prepare the application to be built later on with a builder. - -`s2i` requires a specially crafted image, which can be informed as `builderImage` parameter on the `Build` resource. - -### Installing Source to Image Strategy - -To install the cluster scope strategy use: - -```sh -kubectl apply -f samples/buildstrategy/source-to-image/buildstrategy_source-to-image_cr.yaml -``` - -### Build Steps - -1. `s2i` in order to generate a `Dockerfile` and prepare source-code for image build; -2. 
`kaniko` to create and push the container image to what is defined as `output.image`; - -[buildpacks]: https://buildpacks.io/ -[cnb]: https://buildpacks.io/docs/concepts/components/builder/ -[lifecycle]: https://buildpacks.io/docs/concepts/components/lifecycle/ -[hubheroku]: https://hub.docker.com/r/heroku/buildpacks/ -[hubcloudfoundry]: https://hub.docker.com/r/cloudfoundry/cnb -[kaniko]: https://github.com/GoogleContainerTools/kaniko -[s2i]: https://github.com/openshift/source-to-image -[buildah]: https://github.com/containers/buildah - -## Strategy parameters - -Strategy parameters allow users to parameterize their strategy definition, by allowing users to control the _parameters_ values via the `Build` or `BuildRun` resources. - -Users defining _parameters_ under their strategies require to understand the following: - -- **Definition**: A list of parameters should be defined under `spec.parameters`. Each list item should consist of a _name_, a _description_ and a reasonable _default_ value (_type string_). Note that a default value is not mandatory. -- **Usage**: In order to use a parameter in the strategy steps, users should follow the following syntax: `$(params.your-parameter-name)` -- **Parameterize**: Any `Build` or `BuildRun` referencing your strategy, can set a value for _your-parameter-name_ parameter if needed. - -The following is an example of a strategy that defines and uses the `sleep-time` parameter: - -```yaml ---- -apiVersion: shipwright.io/v1alpha1 -kind: BuildStrategy -metadata: - name: sleepy-strategy -spec: - parameters: - - name: sleep-time - description: "time in seconds for sleeping" - default: "1" - buildSteps: - - name: a-strategy-step - image: alpine:latest - command: - - sleep - args: - - $(params.sleep-time) -``` - -See more information on how to use this parameter in a `Build` or `BuildRun` in the related [docs](./build.md#defining-paramvalues). 
- -## System parameters - -Contrary to the strategy `spec.parameters`, you can use system parameters and their values defined at runtime when defining the steps of a build strategy to access system information as well as information provided by the user in their Build or BuildRun. The following parameters are available: - -| Parameter | Description | -| ------------------------------ | ----------- | -| `$(params.shp-source-root)` | The absolute path to the directory that contains the user's sources. | -| `$(params.shp-source-context)` | The absolute path to the context directory of the user's sources. If the user specified no value for `spec.source.contextDir` in their `Build`, then this value will equal the value for `$(params.shp-source-root)`. Note that this directory is not guaranteed to exist at the time the container for your step is started, you can therefore not use this parameter as a step's working directory. | -| `$(params.shp-output-image)` | The URL of the image that the user wants to push as specified in the Build's `spec.output.image`, or the override from the BuildRun's `spec.output.image`. | - -## System parameters vs Strategy Parameters Comparison - -| Parameter Type | User Configurable | Definition | -| ------------------ | ------------ | ------------- | -| System Parameter | No | At run-time, by the `BuildRun` controller. | -| Strategy Parameter | Yes | At build-time, during the `BuildStrategy` creation. | - -## System results - -You can optionally store the size and digest of the image your build strategy created to a set of files. - -| Result file | Description | -| ---------------------------------- | ----------------------------------------------- | -| `$(results.shp-image-digest.path)` | File to store the digest of the image. | -| `$(results.shp-image-size.path)` | File to store the compressed size of the image. 
| - -You can look at sample build strategies, such as [Kaniko](../samples/buildstrategy/kaniko/buildstrategy_kaniko_cr.yaml), or [Buildpacks](../samples/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3_cr.yaml), to see how they fill some or all of the results files. - -This information will be available in the `.status.output` field of the BuildRun. - -```yaml -apiVersion: shipwright.io/v1alpha1 -kind: BuildRun -# [...] -status: - # [...] - output: - digest: sha256:07626e3c7fdd28d5328a8d6df8d29cd3da760c7f5e2070b534f9b880ed093a53 - size: "1989004" - # [...] -``` - -## Steps Resource Definition - -All strategies steps can include a definition of resources(_limits and requests_) for CPU, memory and disk. For strategies with more than one step, each step(_container_) could require more resources than others. Strategy admins are free to define the values that they consider the best fit for each step. Also, identical strategies with the same steps that are only different in their name and step resources can be installed on the cluster to allow users to create a build with smaller and larger resource requirements. - -### Strategies with different resources - -If the strategy admins would require to have multiple flavours of the same strategy, where one strategy has more resources that the other. Then, multiple strategies for the same type should be defined on the cluster. 
In the following example, we use Kaniko as the type: - -```yaml ---- -apiVersion: shipwright.io/v1alpha1 -kind: ClusterBuildStrategy -metadata: - name: kaniko-small -spec: - buildSteps: - - name: build-and-push - image: gcr.io/kaniko-project/executor:v1.6.0 - workingDir: $(params.shp-source-root) - securityContext: - runAsUser: 0 - capabilities: - add: - - CHOWN - - DAC_OVERRIDE - - FOWNER - - SETGID - - SETUID - - SETFCAP - - KILL - env: - - name: DOCKER_CONFIG - value: /tekton/home/.docker - - name: AWS_ACCESS_KEY_ID - value: NOT_SET - - name: AWS_SECRET_KEY - value: NOT_SET - command: - - /kaniko/executor - args: - - --skip-tls-verify=true - - --dockerfile=$(build.dockerfile) - - --context=$(params.shp-source-context) - - --destination=$(params.shp-output-image) - - --snapshotMode=redo - - --push-retry=3 - resources: - limits: - cpu: 250m - memory: 65Mi - requests: - cpu: 250m - memory: 65Mi ---- -apiVersion: shipwright.io/v1alpha1 -kind: ClusterBuildStrategy -metadata: - name: kaniko-medium -spec: - buildSteps: - - name: build-and-push - image: gcr.io/kaniko-project/executor:v1.6.0 - workingDir: $(params.shp-source-root) - securityContext: - runAsUser: 0 - capabilities: - add: - - CHOWN - - DAC_OVERRIDE - - FOWNER - - SETGID - - SETUID - - SETFCAP - - KILL - env: - - name: DOCKER_CONFIG - value: /tekton/home/.docker - - name: AWS_ACCESS_KEY_ID - value: NOT_SET - - name: AWS_SECRET_KEY - value: NOT_SET - command: - - /kaniko/executor - args: - - --skip-tls-verify=true - - --dockerfile=$(build.dockerfile) - - --context=$(params.shp-source-context) - - --destination=$(params.shp-output-image) - - --snapshotMode=redo - - --push-retry=3 - resources: - limits: - cpu: 500m - memory: 1Gi - requests: - cpu: 500m - memory: 1Gi -``` - -The above provides more control and flexibility for the strategy admins. For `end-users`, all they need to do, is to reference the proper strategy. 
For example: - -```yaml ---- -apiVersion: shipwright.io/v1alpha1 -kind: Build -metadata: - name: kaniko-medium -spec: - source: - url: https://github.com/shipwright-io/sample-go - contextDir: docker-build - strategy: - name: kaniko - kind: ClusterBuildStrategy - dockerfile: Dockerfile -``` - -### How does Tekton Pipelines handle resources - -The **Build** controller relies on the Tekton [pipeline controller](https://github.com/tektoncd/pipeline) to schedule the `pods` that execute the above strategy steps. In a nutshell, the **Build** controller creates on run-time a Tekton **TaskRun**, and the **TaskRun** generates a new pod in the particular namespace. In order to build an image, the pod executes all the strategy steps one-by-one. - -Tekton manage each step resources **request** in a very particular way, see the [docs](https://github.com/tektoncd/pipeline/blob/main/docs/tasks.md#defining-steps). From this document, it mentions the following: - -> The CPU, memory, and ephemeral storage resource requests will be set to zero, or, if specified, the minimums set through LimitRanges in that Namespace, if the container image does not have the largest resource request out of all container images in the Task. This ensures that the Pod that executes the Task only requests enough resources to run a single container image in the Task rather than hoard resources for all container images in the Task at once. - -### Examples of Tekton resources management - -For a more concrete example, let´s take a look on the following scenarios: - ---- - -**Scenario 1.** Namespace without `LimitRange`, both steps with the same resource values. 
- -If we will apply the following resources: - -- [buildahBuild](../samples/build/build_buildah_cr.yaml) -- [buildahBuildRun](../samples/buildrun/buildrun_buildah_cr.yaml) -- [buildahClusterBuildStrategy](../samples/buildstrategy/buildah/buildstrategy_buildah_cr.yaml) - -We will see some differences between the `TaskRun` definition and the `pod` definition. - -For the `TaskRun`, as expected we can see the resources on each `step`, as we previously define on our [strategy](../samples/buildstrategy/buildah/buildstrategy_buildah_cr.yaml). - -```sh -$ kubectl -n test-build get tr buildah-golang-buildrun-9gmcx-pod-lhzbc -o json | jq '.spec.taskSpec.steps[] | select(.name == "step-buildah-bud" ) | .resources' -{ - "limits": { - "cpu": "500m", - "memory": "1Gi" - }, - "requests": { - "cpu": "250m", - "memory": "65Mi" - } -} - -$ kubectl -n test-build get tr buildah-golang-buildrun-9gmcx-pod-lhzbc -o json | jq '.spec.taskSpec.steps[] | select(.name == "step-buildah-push" ) | .resources' -{ - "limits": { - "cpu": "500m", - "memory": "1Gi" - }, - "requests": { - "cpu": "250m", - "memory": "65Mi" - } -} -``` - -The pod definition is different, while Tekton will only use the **highest** values of one container, and set the rest(lowest) to zero: - -```sh -$ kubectl -n test-build get pods buildah-golang-buildrun-9gmcx-pod-lhzbc -o json | jq '.spec.containers[] | select(.name == "step-step-buildah-bud" ) | .resources' -{ - "limits": { - "cpu": "500m", - "memory": "1Gi" - }, - "requests": { - "cpu": "250m", - "ephemeral-storage": "0", - "memory": "65Mi" - } -} - -$ kubectl -n test-build get pods buildah-golang-buildrun-9gmcx-pod-lhzbc -o json | jq '.spec.containers[] | select(.name == "step-step-buildah-push" ) | .resources' -{ - "limits": { - "cpu": "500m", - "memory": "1Gi" - }, - "requests": { - "cpu": "0", <------------------- See how the request is set to ZERO. - "ephemeral-storage": "0", <------------------- See how the request is set to ZERO. 
-    "memory": "0" <------------------- See how the request is set to ZERO.
-  }
-}
-```
-
-In this scenario, only one container can have the `spec.resources.requests` definition. Even when both steps have the same values, only one container will get them; the others will be set to zero.
-
----
-
-**Scenario 2.** Namespace without `LimitRange`, steps with different resources:
-
-If we apply the following resources:
-
-- [buildahBuild](../samples/build/build_buildah_cr.yaml)
-- [buildahBuildRun](../samples/buildrun/buildrun_buildah_cr.yaml)
-- We will use a modified buildah strategy, with the following step resources:
-
-  ```yaml
-  - name: buildah-bud
-    image: quay.io/containers/buildah:v1.20.1
-    workingDir: $(params.shp-source-root)
-    securityContext:
-      privileged: true
-    command:
-      - /usr/bin/buildah
-    args:
-      - bud
-      - --tag=$(params.shp-output-image)
-      - --file=$(build.dockerfile)
-      - $(build.source.contextDir)
-    resources:
-      limits:
-        cpu: 500m
-        memory: 1Gi
-      requests:
-        cpu: 250m
-        memory: 65Mi
-    volumeMounts:
-      - name: buildah-images
-        mountPath: /var/lib/containers/storage
-  - name: buildah-push
-    image: quay.io/containers/buildah:v1.20.1
-    securityContext:
-      privileged: true
-    command:
-      - /usr/bin/buildah
-    args:
-      - push
-      - --tls-verify=false
-      - docker://$(params.shp-output-image)
-    resources:
-      limits:
-        cpu: 500m
-        memory: 1Gi
-      requests:
-        cpu: 250m
-        memory: 100Mi <------ See how we provide more memory to step-buildah-push, compared to the 65Mi of the other step
-  ```
-
-For the `TaskRun`, as expected, we can see the resources on each `step`.
-
-```sh
-$ kubectl -n test-build get tr buildah-golang-buildrun-skgrp -o json | jq '.spec.taskSpec.steps[] | select(.name == "step-buildah-bud" ) | .resources'
-{
-  "limits": {
-    "cpu": "500m",
-    "memory": "1Gi"
-  },
-  "requests": {
-    "cpu": "250m",
-    "memory": "65Mi"
-  }
-}
-
-$ kubectl -n test-build get tr buildah-golang-buildrun-skgrp -o json | jq '.spec.taskSpec.steps[] | select(.name == "step-buildah-push" ) | .resources'
-{
-  "limits": {
-    "cpu": "500m",
-    "memory": "1Gi"
-  },
-  "requests": {
-    "cpu": "250m",
-    "memory": "100Mi"
-  }
-}
-```
-
-The pod definition is different: Tekton only uses the **highest** values for one container and sets the rest (the lowest) to zero:
-
-```sh
-$ kubectl -n test-build get pods buildah-golang-buildrun-95xq8-pod-mww8d -o json | jq '.spec.containers[] | select(.name == "step-step-buildah-bud" ) | .resources'
-{
-  "limits": {
-    "cpu": "500m",
-    "memory": "1Gi"
-  },
-  "requests": {
-    "cpu": "250m", <------------------- See how the CPU is preserved
-    "ephemeral-storage": "0",
-    "memory": "0" <------------------- See how the memory is set to ZERO
-  }
-}
-$ kubectl -n test-build get pods buildah-golang-buildrun-95xq8-pod-mww8d -o json | jq '.spec.containers[] | select(.name == "step-step-buildah-push" ) | .resources'
-{
-  "limits": {
-    "cpu": "500m",
-    "memory": "1Gi"
-  },
-  "requests": {
-    "cpu": "0", <------------------- See how the CPU is set to zero.
-    "ephemeral-storage": "0",
-    "memory": "100Mi" <------------------- See how the memory is preserved on this container
-  }
-}
-```
-
-In the above scenario, we can see how the maximum resource request values are distributed between containers. The container `step-buildah-push` gets the `100Mi` memory request because it defined the highest value. At the same time, the container `step-buildah-bud` is assigned `0` for its memory request.
-
----
-
-**Scenario 3.** Namespace **with** a `LimitRange`.
-
-When a `LimitRange` exists on the namespace, the `Tekton Pipeline` controller takes the same approach as in the two scenarios above. The difference is that the containers with the lower values get the minimum values of the `LimitRange` instead of zero.
-
-## Annotations
-
-Annotations can be defined for a BuildStrategy/ClusterBuildStrategy as for any other Kubernetes object. Annotations are propagated to the TaskRun and, from there, Tekton propagates them to the Pod. Example use cases are:
-
-- The Kubernetes [Network Traffic Shaping](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#support-traffic-shaping) feature looks for the `kubernetes.io/ingress-bandwidth` and `kubernetes.io/egress-bandwidth` annotations to limit the network bandwidth the `Pod` is allowed to use.
-- The [AppArmor profile of a container](https://kubernetes.io/docs/tutorials/clusters/apparmor/) is defined using the `container.apparmor.security.beta.kubernetes.io/<container_name>` annotation.
-
-The following annotations are not propagated:
-
-- `kubectl.kubernetes.io/last-applied-configuration`
-- `clusterbuildstrategy.shipwright.io/*`
-- `buildstrategy.shipwright.io/*`
-- `build.shipwright.io/*`
-- `buildrun.shipwright.io/*`
-
-A Kubernetes administrator can further restrict the usage of annotations by using policy engines like [Open Policy Agent](https://www.openpolicyagent.org/).
diff --git a/content/en/docs/authentication.md b/content/en/docs/authentication.adoc
similarity index 75%
rename from content/en/docs/authentication.md
rename to content/en/docs/authentication.adoc
index 01fc47a9..8ba24d7b 100644
--- a/content/en/docs/authentication.md
+++ b/content/en/docs/authentication.adoc
@@ -4,26 +4,27 @@ title: Authentication during builds
 
The following document provides an introduction to the different authentication methods that can take place during an image build when using the Build controller.
-- [Overview](#overview)
-- [Build Secrets Annotation](#build-secrets-annotation)
-- [Authentication for Git](#authentication-for-git)
-  - [Basic authentication](#basic-authentication)
-  - [SSH authentication](#ssh-authentication)
-  - [Usage of git secret](#usage-of-git-secret)
-- [Authentication to container registries](#authentication-to-container-registries)
-  - [Docker Hub](#docker-hub)
-  - [Usage of registry secret](#usage-of-registry-secret)
-- [References](#references)
+* <<overview>>
+* <<build-secrets-annotation>>
+* <<authentication-for-git>>
+** <<basic-authentication>>
+** <<ssh-authentication>>
+** <<usage-of-git-secret>>
+* <<authentication-to-container-registries>>
+** <<docker-hub>>
+** <<usage-of-registry-secret>>
+* <<references>>
+
-## Overview
+== Overview
 
-There are two places where users might need to define authentication when building images. Authentication to a container registry is the most common one, but also users might have the need to define authentications for pulling source-code from Git. Overall, the authentication is done via the definition of [secrets](https://kubernetes.io/docs/concepts/configuration/secret/) in which the require sensitive data will be stored.
+There are two places where users might need to define authentication when building images. Authentication to a container registry is the most common one, but users might also need to define authentication for pulling source code from Git. Overall, the authentication is done via the definition of https://kubernetes.io/docs/concepts/configuration/secret/[secrets] in which the required sensitive data will be stored.
 
-## Build Secrets Annotation
+== Build Secrets Annotation
 
Users need to add an annotation `build.shipwright.io/referenced.secret: "true"` to a build secret so that the build controller can decide to take a reconcile action when a secret event (`create`, `update` and `delete`) happens.
Below is a secret example with build annotation:
 
-```yaml
+[source,yaml]
+----
apiVersion: v1
data:
  .dockerconfigjson: xxxxx
@@ -33,31 +34,33 @@ metadata:
    build.shipwright.io/referenced.secret: "true"
  name: secret-docker
type: kubernetes.io/dockerconfigjson
-```
+----
 
This annotation will help us filter secrets which are not referenced on a Build instance. That means that if a secret does not have this annotation, then even if an event happens on it, the Build controller will not reconcile. Being able to reconcile on secret events allows the Build controller to re-trigger validations on the Build configuration, allowing users to understand if a dependency is missing.
 
If you are using the `kubectl` command to create secrets, you can first create the build secret using the `kubectl create secret` command and then annotate it using `kubectl annotate secrets`. Below is an example:
 
-```sh
+[source,terminal]
+----
kubectl -n ${namespace} create secret docker-registry example-secret --docker-server=${docker-server} --docker-username="${username}" --docker-password="${password}" --docker-email=me@here.com
kubectl -n ${namespace} annotate secrets example-secret build.shipwright.io/referenced.secret='true'
-```
+----
 
-## Authentication for Git
+== Authentication for Git
 
There are two ways of authenticating to Git (_applies to both GitLab and GitHub_): SSH and basic authentication.
 
-### SSH authentication
+=== SSH authentication
 
For the SSH authentication you must use the Tekton annotations to specify the hostname(s) of the Git repository providers that you use. This is github.com for GitHub, or gitlab.com for GitLab.
 
As seen in the following example, there are a few things to notice:
 
-- The Kubernetes secret should be of the type `kubernetes.io/ssh-auth`
-- The `data.ssh-privatekey` can be generated by following the command example `base64 <~/.ssh/id_rsa`, where `~/.ssh/id_rsa` is the key used to authenticate into Git.
+* The Kubernetes secret should be of the type `kubernetes.io/ssh-auth`
+* The `data.ssh-privatekey` can be generated by following the command example `base64 <~/.ssh/id_rsa`, where `~/.ssh/id_rsa` is the key used to authenticate into Git.
 
-```yaml
+[source,yaml]
+----
apiVersion: v1
kind: Secret
metadata:
@@ -67,16 +70,17 @@ metadata:
type: kubernetes.io/ssh-auth
data:
  ssh-privatekey: 
-```
+----
 
-### Basic authentication
+=== Basic authentication
 
The Basic authentication is very similar to the SSH one, but with the following differences:
 
-- The Kubernetes secret should be of the type `kubernetes.io/basic-auth`
-- The `stringData` should host your user and password in clear text.
+* The Kubernetes secret should be of the type `kubernetes.io/basic-auth`
+* The `stringData` should host your user and password in clear text.
 
-```yaml
+[source,yaml]
+----
apiVersion: v1
kind: Secret
metadata:
@@ -87,9 +91,9 @@ type: kubernetes.io/basic-auth
stringData:
  username: 
  password: 
-```
+----
 
-### Usage of git secret
+=== Usage of git secret
 
With the right secret in place (_note: Ensure creation of secret in the proper Kubernetes namespace_), users should reference it in their Build YAML definitions.
 
@@ -97,7 +101,8 @@ Depending on the secret type, there are two ways of doing this:
 
When using SSH auth, users should follow:
 
-```yaml
+[source,yaml]
+----
apiVersion: shipwright.io/v1alpha1
kind: Build
metadata:
@@ -107,11 +112,12 @@ spec:
    url: git@gitlab.com:eduardooli/newtaxi.git
    credentials:
      name: secret-git-ssh-auth
-```
+----
 
When using basic auth, users should follow:
 
-```yaml
+[source,yaml]
+----
apiVersion: shipwright.io/v1alpha1
kind: Build
metadata:
@@ -121,34 +127,36 @@ spec:
    url: https://gitlab.com/eduardooli/newtaxi.git
    credentials:
      name: secret-git-basic-auth
-```
+----
 
-## Authentication to container registries
+== Authentication to container registries
 
For pushing images to private registries, users need to define a secret in their respective namespace.
-### Docker Hub
+=== Docker Hub
 
Use the following command to generate your secret (the `<...>` values are placeholders to be replaced):
 
-```sh
+[source,terminal]
+----
kubectl --namespace <NAMESPACE> create secret docker-registry <CONTAINER_REGISTRY_SECRET_NAME> \
  --docker-server=<REGISTRY_HOST> \
  --docker-username=<USERNAME> \
  --docker-password=<PASSWORD> \
  --docker-email=me@here.com
kubectl --namespace <NAMESPACE> annotate secrets <CONTAINER_REGISTRY_SECRET_NAME> build.shipwright.io/referenced.secret='true'
-```
+----
 
-_Notes:_ When generating a secret to access docker hub, the `REGISTRY_HOST` value should be `https://index.docker.io/v1/`, the username is the Docker ID.
+_Notes:_ When generating a secret to access Docker Hub, the `REGISTRY_HOST` value should be `+https://index.docker.io/v1/+`, and the username is the Docker ID.
 
_Notes:_ The value of `PASSWORD` can be your Docker Hub password or an access token. A Docker access token can be created via _Account Settings_, then _Security_ in the sidebar, and the _New Access Token_ button.
 
-### Usage of registry secret
+=== Usage of registry secret
 
With the right secret in place (_note: Ensure creation of secret in the proper Kubernetes namespace_), users should reference it in their Build YAML definitions.
 
For container registries, the secret should be placed under the `spec.output.credentials` path.
 
-```yaml
+[source,yaml]
+----
apiVersion: shipwright.io/v1alpha1
kind: Build
metadata:
@@ -158,8 +166,8 @@ metadata:
    image: docker.io/foobar/sample:latest
    credentials:
      name: <CONTAINER_REGISTRY_SECRET_NAME>
-```
+----
 
-## References
+== References
 
-See more information in the official Tekton [documentation](https://github.com/tektoncd/pipeline/blob/main/docs/auth.md#configuring-ssh-auth-authentication-for-git) for authentication.
+See more information in the official Tekton https://github.com/tektoncd/pipeline/blob/main/docs/auth.md#configuring-ssh-auth-authentication-for-git[documentation] for authentication.
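Putting the pieces together, a single `Build` can reference both a Git credential and a registry push credential. The following sketch is illustrative only: the `buildah` strategy and the secret names are assumptions based on the examples in this document, not a prescribed configuration.

```yaml
# Illustrative Build combining the authentication examples above.
# Secret names and the strategy are placeholders.
apiVersion: shipwright.io/v1alpha1
kind: Build
metadata:
  name: buildah-golang-build
spec:
  source:
    url: git@gitlab.com:eduardooli/newtaxi.git
    credentials:
      name: secret-git-ssh-auth        # kubernetes.io/ssh-auth secret for the clone
  strategy:
    name: buildah
    kind: ClusterBuildStrategy
  output:
    image: docker.io/foobar/sample:latest
    credentials:
      name: registry-secret            # kubernetes.io/dockerconfigjson secret for the push
```

Both secrets must live in the same namespace as the `Build`, and both should carry the `build.shipwright.io/referenced.secret: "true"` annotation so that the controller reconciles on their changes.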
diff --git a/content/en/docs/configuration.adoc b/content/en/docs/configuration.adoc
new file mode 100644
index 00000000..55f0cd50
--- /dev/null
+++ b/content/en/docs/configuration.adoc
@@ -0,0 +1,63 @@
+---
+title: "Configuration"
+draft: false
+---
+
+The controller is installed into Kubernetes with reasonable defaults. However, there are some settings that can be overridden using environment variables in link:../deploy/500-controller.yaml[`controller.yaml`].
+
+The following environment variables are available:
+
+|===
+| Environment Variable | Description
+
+| `CTX_TIMEOUT`
+| Override the default context timeout used for all Custom Resource Definition reconciliation operations.
+
+| `REMOTE_ARTIFACTS_CONTAINER_IMAGE`
+| Specify the container image used for the `.spec.sources` remote artifacts download; by default it uses `busybox:latest`.
+
+| `GIT_CONTAINER_TEMPLATE`
+| JSON representation of a https://pkg.go.dev/k8s.io/api/core/v1#Container[Container] template that is used for steps that clone a Git repository. Default is `{"image":"quay.io/shipwright/git:latest", "command":["/ko-app/git"], "securityContext":{"runAsUser":1000,"runAsGroup":1000}}`. The following properties are ignored as they are set by the controller: `args`, `name`.
+
+| `GIT_CONTAINER_IMAGE`
+| Custom container image for Git clone steps. If `GIT_CONTAINER_TEMPLATE` is also specifying an image, then the value for `GIT_CONTAINER_IMAGE` has precedence.
+
+| `MUTATE_IMAGE_CONTAINER_TEMPLATE`
+| JSON representation of a https://pkg.go.dev/k8s.io/api/core/v1#Container[Container] template that is used for steps that mutate an image if a `Build` has annotations or labels defined in the output. Default is `{"image": "quay.io/shipwright/mutate-image:latest", "command": ["/ko-app/mutate-image"], "env": [{"name": "HOME","value": "/tekton/home"}], "securityContext": {"runAsUser": 0, "capabilities": {"add": ["DAC_OVERRIDE"]}}}`.
The following properties are ignored as they are set by the controller: `args`, `name`.
+
+| `MUTATE_IMAGE_CONTAINER_IMAGE`
+| Custom container image that is used for steps that mutate an image if a `Build` has annotations or labels defined in the output. If `MUTATE_IMAGE_CONTAINER_TEMPLATE` is also specifying an image, then the value for `MUTATE_IMAGE_CONTAINER_IMAGE` has precedence.
+
+| `BUILD_CONTROLLER_LEADER_ELECTION_NAMESPACE`
+| Set the namespace to be used to store the `shipwright-build-controller` lock; by default it is in the same namespace as the controller itself.
+
+| `BUILD_CONTROLLER_LEASE_DURATION`
+| Override the `LeaseDuration`, which is the duration that non-leader candidates will wait to force acquire leadership.
+
+| `BUILD_CONTROLLER_RENEW_DEADLINE`
+| Override the `RenewDeadline`, which is the duration that the acting leader will retry refreshing leadership before giving up.
+
+| `BUILD_CONTROLLER_RETRY_PERIOD`
+| Override the `RetryPeriod`, which is the duration the LeaderElector clients should wait between tries of actions.
+
+| `BUILD_MAX_CONCURRENT_RECONCILES`
+| The number of concurrent reconciles by the build controller. A value of 0 or lower will use the default from the https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/controller#Options[controller-runtime controller Options]. Default is 0.
+
+| `BUILDRUN_MAX_CONCURRENT_RECONCILES`
+| The number of concurrent reconciles by the buildrun controller. A value of 0 or lower will use the default from the https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/controller#Options[controller-runtime controller Options]. Default is 0.
+
+| `BUILDSTRATEGY_MAX_CONCURRENT_RECONCILES`
+| The number of concurrent reconciles by the buildstrategy controller. A value of 0 or lower will use the default from the https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/controller#Options[controller-runtime controller Options]. Default is 0.
+
+| `CLUSTERBUILDSTRATEGY_MAX_CONCURRENT_RECONCILES`
+| The number of concurrent reconciles by the clusterbuildstrategy controller. A value of 0 or lower will use the default from the https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/controller#Options[controller-runtime controller Options]. Default is 0.
+
+| `KUBE_API_BURST`
+| Burst to use for the Kubernetes API client. See https://pkg.go.dev/k8s.io/client-go/rest#Config.Burst[Config.Burst]. A value of 0 or lower will use the default from client-go, which currently is 10. Default is 0.
+
+| `KUBE_API_QPS`
+| QPS to use for the Kubernetes API client. See https://pkg.go.dev/k8s.io/client-go/rest#Config.QPS[Config.QPS]. A value of 0 or lower will use the default from client-go, which currently is 5. Default is 0.
+
+| `TERMINATION_LOG_PATH`
+| Path of the termination log. This is where the controller application will write the reason for its termination. Default value is `/dev/termination-log`.
+|===
diff --git a/content/en/docs/configuration.md b/content/en/docs/configuration.md
deleted file mode 100644
index af8aec5a..00000000
--- a/content/en/docs/configuration.md
+++ /dev/null
@@ -1,28 +0,0 @@
----
-title: "Configuration"
-draft: false
----
-
-The controller is installed into Kubernetes with reasonable defaults. However, there are some settings that can be overridden using environment variables in [`controller.yaml`](../deploy/500-controller.yaml).
-
-The following environment variables are available:
-
-| Environment Variable | Description |
-| --- | --- |
-| `CTX_TIMEOUT` | Override the default context timeout used for all Custom Resource Definition reconciliation operations. |
-| `REMOTE_ARTIFACTS_CONTAINER_IMAGE` | Specify the container image used for the `.spec.sources` remote artifacts download, by default it uses `busybox:latest`.
 |
-| `GIT_CONTAINER_TEMPLATE` | JSON representation of a [Container](https://pkg.go.dev/k8s.io/api/core/v1#Container) template that is used for steps that clone a Git repository. Default is `{"image":"quay.io/shipwright/git:latest", "command":["/ko-app/git"], "securityContext":{"runAsUser":1000,"runAsGroup":1000}}`. The following properties are ignored as they are set by the controller: `args`, `name`. |
-| `GIT_CONTAINER_IMAGE` | Custom container image for Git clone steps. If `GIT_CONTAINER_TEMPLATE` is also specifying an image, then the value for `GIT_CONTAINER_IMAGE` has precedence. |
-| `MUTATE_IMAGE_CONTAINER_TEMPLATE` | JSON representation of a [Container](https://pkg.go.dev/k8s.io/api/core/v1#Container) template that is used for steps that mutates an image if a `Build` has annotations or labels defined in the output. Default is `{"image": "quay.io/shipwright/mutate-image:latest", "command": ["/ko-app/mutate-image"], "env": [{"name": "HOME","value": "/tekton/home"}], "securityContext": {"runAsUser": 0, "capabilities": {"add": ["DAC_OVERRIDE"]}}}`. The following properties are ignored as they are set by the controller: `args`, `name`. |
-| `MUTATE_IMAGE_CONTAINER_IMAGE` | Custom container image that is used for steps that mutates an image if a `Build` has annotations or labels defined in the output. If `MUTATE_IMAGE_CONTAINER_TEMPLATE` is also specifying an image, then the value for `MUTATE_IMAGE_CONTAINER_IMAGE` has precedence. |
-| `BUILD_CONTROLLER_LEADER_ELECTION_NAMESPACE` | Set the namespace to be used to store the `shipwright-build-controller` lock, by default it is in the same namespace as the controller itself. |
-| `BUILD_CONTROLLER_LEASE_DURATION` | Override the `LeaseDuration`, which is the duration that non-leader candidates will wait to force acquire leadership. |
-| `BUILD_CONTROLLER_RENEW_DEADLINE` | Override the `RenewDeadline`, which is the duration that the acting leader will retry refreshing leadership before giving up.
 |
-| `BUILD_CONTROLLER_RETRY_PERIOD` | Override the `RetryPeriod`, which is the duration the LeaderElector clients should wait between tries of actions. |
-| `BUILD_MAX_CONCURRENT_RECONCILES` | The number of concurrent reconciles by the build controller. A value of 0 or lower will use the default from the [controller-runtime controller Options](https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/controller#Options). Default is 0. |
-| `BUILDRUN_MAX_CONCURRENT_RECONCILES` | The number of concurrent reconciles by the buildrun controller. A value of 0 or lower will use the default from the [controller-runtime controller Options](https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/controller#Options). Default is 0. |
-| `BUILDSTRATEGY_MAX_CONCURRENT_RECONCILES` | The number of concurrent reconciles by the buildstrategy controller. A value of 0 or lower will use the default from the [controller-runtime controller Options](https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/controller#Options). Default is 0. |
-| `CLUSTERBUILDSTRATEGY_MAX_CONCURRENT_RECONCILES` | The number of concurrent reconciles by the clusterbuildstrategy controller. A value of 0 or lower will use the default from the [controller-runtime controller Options](https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/controller#Options). Default is 0. |
-| `KUBE_API_BURST` | Burst to use for the Kubernetes API client. See [Config.Burst](https://pkg.go.dev/k8s.io/client-go/rest#Config.Burst). A value of 0 or lower will use the default from client-go, which currently is 10. Default is 0. |
-| `KUBE_API_QPS` | QPS to use for the Kubernetes API client. See [Config.QPS](https://pkg.go.dev/k8s.io/client-go/rest#Config.QPS). A value of 0 or lower will use the default from client-go, which currently is 5. Default is 0. |
-| `TERMINATION_LOG_PATH` | Path of the termination log. This is where controller application will write the reason of its termination. Default value is `/dev/termination-log`.
 |
diff --git a/content/en/docs/metrics.adoc b/content/en/docs/metrics.adoc
new file mode 100644
index 00000000..b4443211
--- /dev/null
+++ b/content/en/docs/metrics.adoc
@@ -0,0 +1,159 @@
+---
+title: Build Controller Metrics
+linkTitle: Metrics
+---
+
+The Build component exposes several metrics to help you monitor the health and behavior of your build resources.
+
+The following build metrics are exposed on port `8383`.
+
+|===
+| Name | Type | Description | Labels | Status
+
+| `build_builds_registered_total`
+| Counter
+| Number of total registered Builds.
+| `buildstrategy=`^1^ +
+`namespace=`^1^ +
+`build=`^1^
+| experimental
+
+| `build_buildruns_completed_total`
+| Counter
+| Number of total completed BuildRuns.
+| `buildstrategy=`^1^ +
+`namespace=`^1^ +
+`build=`^1^ +
+`buildrun=`^1^
+| experimental
+
+| `build_buildrun_establish_duration_seconds`
+| Histogram
+| BuildRun establish duration in seconds.
+| `buildstrategy=`^1^ +
+`namespace=`^1^ +
+`build=`^1^ +
+`buildrun=`^1^
+| experimental
+
+| `build_buildrun_completion_duration_seconds`
+| Histogram
+| BuildRun completion duration in seconds.
+| `buildstrategy=`^1^ +
+`namespace=`^1^ +
+`build=`^1^ +
+`buildrun=`^1^
+| experimental
+
+| `build_buildrun_rampup_duration_seconds`
+| Histogram
+| BuildRun ramp-up duration in seconds.
+| `buildstrategy=`^1^ +
+`namespace=`^1^ +
+`build=`^1^ +
+`buildrun=`^1^
+| experimental
+
+| `build_buildrun_taskrun_rampup_duration_seconds`
+| Histogram
+| BuildRun taskrun ramp-up duration in seconds.
+| `buildstrategy=`^1^ +
+`namespace=`^1^ +
+`build=`^1^ +
+`buildrun=`^1^
+| experimental
+
+| `build_buildrun_taskrun_pod_rampup_duration_seconds`
+| Histogram
+| BuildRun taskrun pod ramp-up duration in seconds.
+| `buildstrategy=`^1^ +
+`namespace=`^1^ +
+`build=`^1^ +
+`buildrun=`^1^
+| experimental
+|===
+
+^1^ Labels for metric are disabled by default. See <<configuration-of-metric-labels>> to enable them.
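To sanity-check what the endpoint on port `8383` returns, a small client-side sketch can parse the Prometheus text exposition format. This is not part of the controller, and the sample scrape text below is illustrative:

```python
# Parse simple Prometheus text-exposition samples into a dict.
# Note: this minimal sketch ignores label sets beyond stripping them
# from the metric name; use a real Prometheus client for production.
def parse_samples(text):
    """Return {metric_name: value} for 'name value' sample lines."""
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip HELP/TYPE/comment lines
            continue
        name, _, value = line.partition(" ")
        # drop a label block such as {namespace="default"} if present
        name = name.split("{", 1)[0]
        samples[name] = float(value)
    return samples

# Sample scrape output as it might look with all labels disabled.
scrape = """
# HELP build_builds_registered_total Number of total registered Builds.
# TYPE build_builds_registered_total counter
build_builds_registered_total 3
build_buildruns_completed_total 2
"""

metrics = parse_samples(scrape)
print(metrics["build_builds_registered_total"])  # -> 3.0
```

A sketch like this is handy when verifying that the enabled labels and bucket settings described below actually show up in the scrape output.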
+
+== Configuration of histogram buckets
+
+Environment variables can be set to use custom buckets for the histogram metrics:
+
+|===
+| Metric | Environment variable | Default
+
+| `build_buildrun_establish_duration_seconds`
+| `PROMETHEUS_BR_EST_DUR_BUCKETS`
+| `0,1,2,3,5,7,10,15,20,30`
+
+| `build_buildrun_completion_duration_seconds`
+| `PROMETHEUS_BR_COMP_DUR_BUCKETS`
+| `50,100,150,200,250,300,350,400,450,500`
+
+| `build_buildrun_rampup_duration_seconds`
+| `PROMETHEUS_BR_RAMPUP_DUR_BUCKETS`
+| `0,1,2,3,4,5,6,7,8,9,10`
+
+| `build_buildrun_taskrun_rampup_duration_seconds`
+| `PROMETHEUS_BR_RAMPUP_DUR_BUCKETS`
+| `0,1,2,3,4,5,6,7,8,9,10`
+
+| `build_buildrun_taskrun_pod_rampup_duration_seconds`
+| `PROMETHEUS_BR_RAMPUP_DUR_BUCKETS`
+| `0,1,2,3,4,5,6,7,8,9,10`
+|===
+
+The values have to be a comma-separated list of numbers. You need to set the environment variable for the build controller for your customization to become active. When running locally, set the variable right before starting the controller:
+
+[source,terminal]
+----
+export PROMETHEUS_BR_COMP_DUR_BUCKETS=30,60,90,120,180,240,300,360,420,480
+make local
+----
+
+When you deploy the build controller in a Kubernetes cluster, you need to extend the `spec.containers[0].env` section of the sample deployment file, link:../deploy/500-controller.yaml[controller.yaml]. Add an additional entry:
+
+[source,yaml]
+----
+[...]
+  env:
+    - name: PROMETHEUS_BR_COMP_DUR_BUCKETS
+      value: "30,60,90,120,180,240,300,360,420,480"
+[...]
+----
+
+== Configuration of metric labels
+
+As the number of buckets and labels has a direct impact on the number of Prometheus time series, you can selectively enable the labels that you are interested in using the `PROMETHEUS_ENABLED_LABELS` environment variable. The supported labels are:
+
+* buildstrategy
+* namespace
+* build
+* buildrun
+
+Use a comma-separated value to enable multiple labels.
For example:
+
+[source,terminal]
+----
+export PROMETHEUS_ENABLED_LABELS=namespace
+make local
+----
+
+or
+
+[source,terminal]
+----
+export PROMETHEUS_ENABLED_LABELS=buildstrategy,namespace,build
+make local
+----
+
+When you deploy the build controller in a Kubernetes cluster, you need to extend the `spec.containers[0].env` section of the sample deployment file, link:../deploy/controller.yaml[controller.yaml]. Add an additional entry:
+
+[source,yaml]
+----
+[...]
+  env:
+    - name: PROMETHEUS_ENABLED_LABELS
+      value: namespace
+[...]
+----
diff --git a/content/en/docs/metrics.md b/content/en/docs/metrics.md
deleted file mode 100644
index ef09fb8d..00000000
--- a/content/en/docs/metrics.md
+++ /dev/null
@@ -1,82 +0,0 @@
----
-title: Build Controller Metrics
-linkTitle: Metrics
----
-
-The Build component exposes several metrics to help you monitor the health and behavior of your build resources.
-
-Following build metrics are exposed on port `8383`.
-
-| Name | Type | Description | Labels | Status |
-|:------|:------|:------|:------|:------|
-| `build_builds_registered_total` | Counter | Number of total registered Builds. | buildstrategy= <sup>1</sup> <br> namespace= <sup>1</sup> <br> build= <sup>1</sup> | experimental |
-| `build_buildruns_completed_total` | Counter | Number of total completed BuildRuns. | buildstrategy= <sup>1</sup> <br> namespace= <sup>1</sup> <br> build= <sup>1</sup> <br> buildrun= <sup>1</sup> | experimental |
-| `build_buildrun_establish_duration_seconds` | Histogram | BuildRun establish duration in seconds. | buildstrategy= <sup>1</sup> <br> namespace= <sup>1</sup> <br> build= <sup>1</sup> <br> buildrun= <sup>1</sup> | experimental |
-| `build_buildrun_completion_duration_seconds` | Histogram | BuildRun completion duration in seconds. | buildstrategy= <sup>1</sup> <br> namespace= <sup>1</sup> <br> build= <sup>1</sup> <br> buildrun= <sup>1</sup> | experimental |
-| `build_buildrun_rampup_duration_seconds` | Histogram | BuildRun ramp-up duration in seconds | buildstrategy= <sup>1</sup> <br> namespace= <sup>1</sup> <br> build= <sup>1</sup> <br> buildrun= <sup>1</sup> | experimental |
-| `build_buildrun_taskrun_rampup_duration_seconds` | Histogram | BuildRun taskrun ramp-up duration in seconds. | buildstrategy= <sup>1</sup> <br> namespace= <sup>1</sup> <br> build= <sup>1</sup> <br> buildrun= <sup>1</sup> | experimental |
-| `build_buildrun_taskrun_pod_rampup_duration_seconds` | Histogram | BuildRun taskrun pod ramp-up duration in seconds. | buildstrategy= <sup>1</sup> <br> namespace= <sup>1</sup> <br> build= <sup>1</sup> <br> buildrun= <sup>1</sup> | experimental |
-
-<sup>1</sup> Labels for metric are disabled by default. See [Configuration of metric labels](#configuration-of-metric-labels) to enable them.
-
-## Configuration of histogram buckets
-
-Environment variables can be set to use custom buckets for the histogram metrics:
-
-| Metric | Environment variable | Default |
-| ------ | ------ | ------ |
-| `build_buildrun_establish_duration_seconds` | `PROMETHEUS_BR_EST_DUR_BUCKETS` | `0,1,2,3,5,7,10,15,20,30` |
-| `build_buildrun_completion_duration_seconds` | `PROMETHEUS_BR_COMP_DUR_BUCKETS` | `50,100,150,200,250,300,350,400,450,500` |
-| `build_buildrun_rampup_duration_seconds` | `PROMETHEUS_BR_RAMPUP_DUR_BUCKETS` | `0,1,2,3,4,5,6,7,8,9,10` |
-| `build_buildrun_taskrun_rampup_duration_seconds` | `PROMETHEUS_BR_RAMPUP_DUR_BUCKETS` | `0,1,2,3,4,5,6,7,8,9,10` |
-| `build_buildrun_taskrun_pod_rampup_duration_seconds` | `PROMETHEUS_BR_RAMPUP_DUR_BUCKETS` | `0,1,2,3,4,5,6,7,8,9,10` |
-
-The values have to be a comma-separated list of numbers. You need to set the environment variable for the build controller for your customization to become active. When running locally, set the variable right before starting the controller:
-
-```bash
-export PROMETHEUS_BR_COMP_DUR_BUCKETS=30,60,90,120,180,240,300,360,420,480
-make local
-```
-
-When you deploy the build controller in a Kubernetes cluster, you need to extend the `spec.containers[0].spec.env` section of the sample deployment file, [controller.yaml](../deploy/500-controller.yaml). Add an additional entry:
-
-```yaml
-[...]
-  env:
-    - name: PROMETHEUS_BR_COMP_DUR_BUCKETS
-      value: "30,60,90,120,180,240,300,360,420,480"
-[...]
-```
-
-## Configuration of metric labels
-
-As the amount of buckets and labels has a direct impact on the number of Prometheus time series, you can selectively enable labels that you are interested in using the `PROMETHEUS_ENABLED_LABELS` environment variable. The supported labels are:
-
-* buildstrategy
-* namespace
-* build
-* buildrun
-
-Use a comma-separated value to enable multiple labels. For example:
-
-```bash
-export PROMETHEUS_ENABLED_LABELS=namespace
-make local
-```
-
-or
-
-```bash
-export PROMETHEUS_ENABLED_LABELS=buildstrategy,namespace,build
-make local
-```
-
-When you deploy the build controller in a Kubernetes cluster, you need to extend the `spec.containers[0].spec.env` section of the sample deployment file, [controller.yaml](../deploy/controller.yaml). Add an additional entry:
-
-```yaml
-[...]
-  env:
-    - name: PROMETHEUS_ENABLED_LABELS
-      value: namespace
-[...]
-```
diff --git a/content/en/docs/profiling.md b/content/en/docs/profiling.adoc
similarity index 89%
rename from content/en/docs/profiling.md
rename to content/en/docs/profiling.adoc
index e0bd6e76..3be04e64 100644
--- a/content/en/docs/profiling.md
+++ b/content/en/docs/profiling.adoc
@@ -5,36 +5,40 @@ linkTitle: Profiling
 
The build controller supports a `pprof` profiling mode, which is omitted from the binary by default. To use profiling, use the controller image that was built with `pprof` enabled.
 
-## Enable `pprof` in the build controller
+== Enable `pprof` in the build controller
 
In the Kubernetes cluster, edit the `shipwright-build-controller` deployment to use the container tag with the `debug` suffix.
-```sh
+[source,terminal]
+----
kubectl --namespace <namespace> set image \
  deployment/shipwright-build-controller \
  shipwright-build-controller="$(kubectl --namespace <namespace> get deployment shipwright-build-controller --output jsonpath='{.spec.template.spec.containers[].image}')-debug"
-```
+----
 
-## Connect `go pprof` to build controller
+== Connect `go pprof` to build controller
 
Depending on the respective setup, there could be multiple build controller pods for high availability reasons. In this case, you have to look up the current leader first. The following command can be used to verify the currently active leader:
 
-```sh
+[source,terminal]
+----
kubectl --namespace <namespace> get configmap shipwright-build-controller-lock --output json \
  | jq --raw-output '.metadata.annotations["control-plane.alpha.kubernetes.io/leader"]' \
  | jq --raw-output .holderIdentity
-```
+----
 
The `pprof` endpoint is not exposed in the cluster and can only be used from inside the container. Therefore, set up port forwarding to make the `pprof` port available locally.
 
-```sh
+[source,terminal]
+----
kubectl --namespace <namespace> port-forward <pod-name> 8383:8383
-```
+----
 
Now, you can set up a local web server to browse through the profiling data.
 
-```sh
+[source,terminal]
+----
go tool pprof -http localhost:8080 http://localhost:8383/debug/pprof/heap
-```
+----
 
_Please note:_ For it to work, you have to have `graphviz` installed on your system, for example using `brew install graphviz`, `apt-get install graphviz`, `yum install graphviz`, or similar.