This document is intended to be a guide for developers who would like to add a chart component to multiclusterhub-repo. Charts deployed via multiclusterhub-repo are expected to meet a certain standard. This includes being properly validated and being able to accept certain values and overrides from the multiclusterhub-controller. To properly onboard a component, developers should ensure their associated images, CRDs, and charts are properly integrated.
For help onboarding images into the pipeline, please see the following doc.
We do not allow CRDs to be installed or managed via Helm. All CRDs must be removed from a chart before onboarding.
To enable CRDs to be installed, upgraded, and uninstalled properly, ensure your CRD(s) are added to the hub-crds repository.
See Contributing.md for help contributing to this repository.
In order for a chart to be onboarded properly, ensure your chart meets the specifications below.
The chart repository must be public and must be inside the github.com/stolostron organization. If creating a new repo from scratch, this repository must be approved by the organization.
The chart must be valid and must pass `helm lint`.
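For example, linting and rendering can be run locally before opening a pull request (the chart path below is illustrative):

```bash
# Lint the chart locally; "./nginx" is a placeholder chart directory
helm lint ./nginx

# Optionally render the templates to verify the chart produces valid manifests
helm template ./nginx
```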
A chart's GitHub repository must have fast-forwarding to release branches enabled, to ensure that our automation is able to pick up new updates to the chart across multiple versioned branches. See the following doc to ensure your chart's GitHub repository is onboarded properly.
Before beginning to onboard your chart, please see our contributing.md. Follow the steps there to ensure that the repo owners are aware of the desired change and can handle it in a standard fashion. An issue must be created so the work can be tracked, as changes are also required in the MultiClusterHub Operator to create an Application Subscription when installing the MCH.
A chart must not use the default service account; instead, it must create its own. This ensures that each chart properly manages its own permissions and privileges.
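As a rough sketch, a chart might ship its own service account template and reference it from the deployment's pod spec; the names here are hypothetical:

```yaml
# Hypothetical templates/serviceaccount.yaml creating a dedicated service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-sa

# The deployment's pod spec would then reference it instead of the default:
#   spec:
#     serviceAccountName: nginx-sa
```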
Images must be added under `global.imageOverrides` in the values.yaml file of a chart. Each image referenced in the chart must have an accompanying image key. A chart must also accept a `global.pullPolicy` value, which should default to a preset of `Always`. The image key will be available after the image has been successfully onboarded. We do not allow static image pinning in charts; images must be overrideable.
```yaml
# Source: nginx/values.yaml
global:
  imageOverrides:
    ## Image Key/Value pairs
    nginx: "repository.to/nginx-image:latest"
  pullPolicy: Always
```
Images can be referenced in a deployment like so -
```yaml
# Source: nginx/templates/deployment.yaml
containers:
- name: nginx
  image: {{ .Values.global.imageOverrides.nginx }}
  imagePullPolicy: {{ .Values.global.pullPolicy }}
```
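Because static pinning is not allowed, any image key can be swapped at render time without modifying the chart; for example (the image reference is illustrative):

```bash
# Override the nginx image key when rendering the chart
helm template ./nginx --set global.imageOverrides.nginx=quay.io/example/nginx:2.0
```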
Unless a replicaCount of 1 is always desirable, it is necessary to add `hubconfig.replicaCount` to the values.yaml file of the chart. This allows the MCH CR to toggle between basic and high-availability installation modes.
```yaml
# Source: nginx/values.yaml
hubconfig:
  replicaCount: 1
```
replicaCount can be referenced in a deployment like so -
```yaml
# Source: nginx/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "nginx.fullname" . }}-console-v2
spec:
  replicas: {{ .Values.hubconfig.replicaCount }}
  ...
```
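For context, the toggle originates in the MultiClusterHub CR; a minimal sketch of what that looks like, assuming the `availabilityConfig` field of the MultiClusterHub API (treat this snippet as illustrative):

```yaml
# Sketch of a MultiClusterHub CR selecting the basic (single-replica) mode;
# setting availabilityConfig to "High" raises hubconfig.replicaCount in the charts
apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub
spec:
  availabilityConfig: Basic
```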
Affinity must be set in a chart's deployment(s) as shown below. Ensure the `ocm-antiaffinity-selector` label is set and that the value of this label is unique to your chart. We require affinity to be specified as follows so that pods are preferentially scheduled onto the correct nodes when deploying in High Availability (HA) mode.
```yaml
# Source: nginx/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  template:
    metadata:
      labels:
        ocm-antiaffinity-selector: <ANTIAFFINITY-SELECTOR> # Add this label
        ...
    spec:
      ...
      affinity:
        nodeAffinity:
          ...
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 70
            podAffinityTerm:
              topologyKey: topology.kubernetes.io/zone
              labelSelector:
                matchExpressions:
                - key: ocm-antiaffinity-selector
                  operator: In
                  values:
                  - <ANTIAFFINITY-SELECTOR> # Add this label
                - key: component
                  operator: In
                  values:
                  - <ANTIAFFINITY-SELECTOR> # Add this label
          - weight: 35
            podAffinityTerm:
              topologyKey: kubernetes.io/hostname
              labelSelector:
                matchExpressions:
                - key: ocm-antiaffinity-selector
                  operator: In
                  values:
                  - <ANTIAFFINITY-SELECTOR> # Add this label
                - key: component
                  operator: In
                  values:
                  - <ANTIAFFINITY-SELECTOR> # Add this label
      ...
```
Tolerations must be set in a chart's deployment(s). These values do not need to be changed or altered. We require these tolerations to ensure pods are deployed properly onto infrastructure nodes.
```yaml
# Source: nginx/templates/deployment.yaml
tolerations:
- key: dedicated
  operator: Exists
  effect: NoSchedule
- effect: NoSchedule
  key: node-role.kubernetes.io/infra
  operator: Exists
```
In the values.yaml file of a chart, there must be a `hubconfig.nodeSelector` key. This should be given a value of `null` to start. Overrides to nodeSelector are applied from the MCH CR and passed down to each chart through this key, allowing a user to select which nodes the pods will be deployed upon.
```yaml
# Source: nginx/values.yaml
org: open-cluster-management
hubconfig:
  nodeSelector: null
```
```yaml
# Source: nginx/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      ...
      containers:
        ...
      {{- with .Values.hubconfig.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
      {{- end }}
```
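For context, a user-supplied nodeSelector in the MultiClusterHub CR is what ends up in this key; a sketch, assuming the `nodeSelector` field of the MultiClusterHub API (the label is illustrative):

```yaml
# Sketch of a MultiClusterHub CR whose nodeSelector is passed down to each chart
apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub
spec:
  nodeSelector:
    node-role.kubernetes.io/infra: ""
```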
Clusterroles and clusterrolebindings installed as part of open-cluster-management require a prefix specifying their hierarchy/ownership, to avoid conflicts and standardize naming. They should be formatted as `<org-name>:<release-name>:<clusterrole/clusterrolebinding-name>`. A full clusterrole name should resemble the following: `open-cluster-management:nginx-chart-72fa7:clusterrole`.
```yaml
# Source: nginx/values.yaml
org: open-cluster-management
```
See clusterrole.yaml and clusterrolebinding.yaml
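As a minimal sketch of the naming convention in a template (the rules shown are hypothetical):

```yaml
# Hypothetical templates/clusterrole.yaml following the
# <org-name>:<release-name>:<clusterrole-name> convention
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ .Values.org }}:{{ .Release.Name }}:clusterrole
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "watch"]
```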
The following security policies must be specified in the deployments, unless an exemption is approved. This is done to minimize security risks and attack vectors.
```yaml
# Source: nginx/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      hostNetwork: false
      hostPID: false
      hostIPC: false
      securityContext:
        runAsNonRoot: true
      containers:
      - name: nginx
        ...
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: true
      ...
```
Global HTTP proxy configurations must be read into each component if they are passed down from the hub. This ensures that proxy support is respected when a proxy is configured. If the proxyConfigs map is empty, no environment variables are passed down to the deployments. When a proxy is configured, the hub passes the three preconfigured proxy environment variables (HTTP_PROXY, HTTPS_PROXY, NO_PROXY) down to the appsub so they can be utilized. Adding this is the first step in ensuring proxy configuration is supported; components are expected to do their due diligence to ensure they function properly with the global configuration if they utilize a proxy.
```yaml
# Source: nginx/values.yaml
hubconfig:
  proxyConfigs: {}
```
In each deployment, the following HTTP Proxy environment variables can be read in as shown below.
```yaml
# Source: nginx/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - name: nginx
        env:
        - name: DEBUG_LEVEL
          value: "info"
        {{- if .Values.hubconfig.proxyConfigs }}
        - name: HTTP_PROXY
          value: {{ .Values.hubconfig.proxyConfigs.HTTP_PROXY }}
        - name: HTTPS_PROXY
          value: {{ .Values.hubconfig.proxyConfigs.HTTPS_PROXY }}
        - name: NO_PROXY
          value: {{ .Values.hubconfig.proxyConfigs.NO_PROXY }}
        {{- end }}
        ...
```
In some specific cases, a chart may require an override to be set in the MultiClusterHub CR so that it can be passed down to the chart. In these cases, please open an issue against the installer team describing the desired capability, for feedback.