diff --git a/helm-charts/falcon-image-analyzer/README.md b/helm-charts/falcon-image-analyzer/README.md
new file mode 100644
index 00000000..75b08cd5
--- /dev/null
+++ b/helm-charts/falcon-image-analyzer/README.md
@@ -0,0 +1,113 @@
+# CrowdStrike Falcon Helm Chart
+
+[Falcon](https://www.crowdstrike.com/) is the [CrowdStrike](https://www.crowdstrike.com/)
+platform purpose-built to stop breaches via a unified set of cloud-delivered
+technologies that prevent all types of attacks — including malware and much
+more.
+
+# Kubernetes Cluster Compatibility
+
+The Falcon Helm chart has been tested to deploy on the following Kubernetes distributions:
+
+* Amazon Elastic Kubernetes Service (EKS)
+  * Daemonset (node) sensor support for EKS nodes
+  * Container sensor support for EKS Fargate nodes
+* Azure Kubernetes Service (AKS)
+* Google Kubernetes Engine (GKE)
+* Rancher K3s
+* OpenShift Kubernetes
+
+# Dependencies
+
+1. Requires an x86_64 Kubernetes cluster
+1. Before deploying the Helm chart, you should have a Falcon Linux Sensor and/or Falcon Container sensor in your own container registry, or use CrowdStrike's registry. See the Deployment Considerations for more.
+1. Helm 3.x is installed and supported by the Kubernetes vendor.
+
+# Installation
+
+### Add the CrowdStrike Falcon Helm repository
+
+```
+helm repo add crowdstrike https://crowdstrike.github.io/falcon-helm
+```
+
+### Update the local Helm repository cache
+
+```
+helm repo update
+```
+
+# Falcon Configuration Options
+
+The following table lists the Falcon Image Analyzer (IAR) configurable parameters and their default values.
+
+| Parameter                              | Description                                                                                                                                      | Default                                                                            |
+|:---------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
+| `daemonset.enabled`                    | Set to `true` if running in socket mode, i.e. `crowdstrikeConfig.agentRunmode` is `socket`                                                         | false                                                                              |
+| `deployment.enabled`                   | Set to `true` if running in watcher mode, i.e. `crowdstrikeConfig.agentRunmode` is `watcher`                                                       | false                                                                              |
+| `image.repository`                     | IAR image repository name                                                                                                                          | `registry.crowdstrike.com/falcon-imageanalyzer/us-1/release/falcon-imageanalyzer` |
+| `image.tag`                            | Image tag version                                                                                                                                  | None                                                                               |
+| `azure.enabled`                        | Set to `true` if the cluster is Azure AKS or self-managed on Azure nodes                                                                           | false                                                                              |
+| `azure.azureConfig`                    | Azure config file path                                                                                                                             | `/etc/kubernetes/azure.json`                                                       |
+| `gcp.enabled`                          | Set to `true` if the cluster is Google GKE or self-managed on GCP nodes                                                                            | false                                                                              |
+| `crowdstrikeConfig.clusterName`        | Cluster name                                                                                                                                       | None                                                                               |
+| `crowdstrikeConfig.enableDebug`        | Set to `true` for debug-level log verbosity                                                                                                        | false                                                                              |
+| `crowdstrikeConfig.clientID`           | CrowdStrike Falcon OAuth API client ID                                                                                                             | None                                                                               |
+| `crowdstrikeConfig.clientSecret`       | CrowdStrike Falcon OAuth API client secret                                                                                                         | None                                                                               |
+| `crowdstrikeConfig.cid`                | Customer ID (CID)                                                                                                                                  | None                                                                               |
+| `crowdstrikeConfig.dockerAPIToken`     | CrowdStrike Artifactory image pull token for pulling the IAR image directly from `registry.crowdstrike.com`                                        | None                                                                               |
+| `crowdstrikeConfig.existingSecret`     | Name of an existing secret in the customer's Kubernetes cluster                                                                                    | None                                                                               |
+| `crowdstrikeConfig.agentRunmode`       | Agent run mode, `watcher` or `socket`. Set this along with `deployment.enabled` or `daemonset.enabled`, respectively                               | None                                                                               |
+| `crowdstrikeConfig.agentRegion`        | Region of the CrowdStrike API to connect to: `us-1`, `us-2`, or `eu-1`                                                                             | None                                                                               |
+| `crowdstrikeConfig.agentRuntime`       | The underlying container runtime: `docker`, `containerd`, `podman`, or `crio`. ONLY TO BE USED with `crowdstrikeConfig.agentRunmode` = `socket`    | None                                                                               |
+| `crowdstrikeConfig.agentRuntimeSocket` | The Unix socket path for the runtime socket, e.g. `unix:///var/run/docker.sock`. ONLY TO BE USED with `crowdstrikeConfig.agentRunmode` = `socket`  | None                                                                               |
+
+
+
+
+## Installing on Kubernetes Cluster Nodes
+
+### Deployment Considerations
+
+To ensure a successful deployment, keep the following in mind:
+1. By default, the Helm Chart installs in the `default` namespace. Best practice for deploying to Kubernetes is to create a new namespace. This can be done by adding `--create-namespace -n falcon-image-analyzer` to your `helm install` command. The namespace can be any name that you wish to use.
+1. You must be a cluster administrator to deploy Helm Charts to the cluster.
+1. CrowdStrike's Helm Chart is a project, not a product, and is released to the community as a way to automate sensor deployment to Kubernetes clusters. The upstream repository for this project is [https://github.com/CrowdStrike/falcon-helm](https://github.com/CrowdStrike/falcon-helm).
+
+### Pod Security Standards
+
+Starting with Kubernetes 1.25, Pod Security Standards are enforced. Set the appropriate Pod Security Standards policy by adding a label to the namespace. Run the following command, replacing `my-existing-namespace` with the namespace where you installed the Falcon Image Analyzer, e.g. `falcon-image-analyzer`.
+```
+kubectl label --overwrite ns my-existing-namespace \
+  pod-security.kubernetes.io/enforce=privileged
+```
+
+To silence the warning and change the auditing level for the Pod Security Standard, add the following labels:
+```
+kubectl label ns --overwrite my-existing-namespace pod-security.kubernetes.io/audit=privileged
+kubectl label ns --overwrite my-existing-namespace pod-security.kubernetes.io/warn=privileged
+```
+
+### Install CrowdStrike Falcon Helm Chart on Kubernetes Nodes
+
+Before installing IAR, set the Helm chart values and save them to a YAML file.
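+
+As an example, a minimal values file for running IAR as a daemonset (`daemonset.enabled: true`) in `socket` mode on containerd nodes could look like the sketch below. The cluster name, CID, OAuth API credentials, and socket path are placeholders; replace them with the values for your environment, and add `image.tag` (or `image.digest`) and `crowdstrikeConfig.dockerAPIToken` if your setup requires them.
+
+```
+daemonset:
+  enabled: true
+
+crowdstrikeConfig:
+  clusterName: my-test-cluster                  # placeholder: your cluster name
+  cid: 1234567890ABCDEF1234567890ABCDEF-12      # placeholder: your CrowdStrike customer ID (CID)
+  clientID: <YOUR-FALCON-API-CLIENT-ID>         # placeholder: Falcon OAuth API client ID
+  clientSecret: <YOUR-FALCON-API-CLIENT-SECRET> # placeholder: Falcon OAuth API client secret
+  agentRegion: us-1                             # us-1 / us-2 / eu-1
+  agentRunmode: socket                          # socket mode pairs with daemonset.enabled=true
+  agentRuntime: containerd                      # docker / containerd / podman / crio
+  agentRuntimeSocket: unix:///run/containerd/containerd.sock  # adjust to your runtime's socket path
+```
+
+Then pass the file to `helm upgrade --install` as shown below.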
+ +``` +helm upgrade --install -f path-to-my-values.yaml \ + --create-namespace -n falcon-image-analyzer imageanalyzer falcon-helm crowdstrike/falcon-image-analyzer +``` + + +For more details please see the [falcon-helm](https://github.com/CrowdStrike/falcon-helm) repository. + +``` +helm show values crowdstrike/falcon-sensor +``` + + +### Uninstall Helm Chart +To uninstall, run the following command: +``` +helm uninstall imageanalyzer -n falcon-image-analyzer && kubectl delete namespace falcon-image-analyzer +``` + diff --git a/helm-charts/falcon-image-analyzer/templates/_helpers.tpl b/helm-charts/falcon-image-analyzer/templates/_helpers.tpl index 3e57c944..d5cdc68a 100644 --- a/helm-charts/falcon-image-analyzer/templates/_helpers.tpl +++ b/helm-charts/falcon-image-analyzer/templates/_helpers.tpl @@ -88,13 +88,13 @@ runAsGroup: {{ .Values.securityContext.runAsGroup | default 0 }} {{- end }} {{- end }} {{- else -}} -{{- .Values.volumeMounts | toYaml -}} +{{- .Values.volumeMounts | toYaml }} {{- end }} {{- end }} {{- define "falcon-image-analyzer.volumes" -}} {{- if lt (len .Values.volumes) 2 -}} -{{- .Values.volumes | toYaml -}} +{{- .Values.volumes | toYaml }} {{- if eq .Values.crowdstrikeConfig.agentRunmode "socket" }} - name: var-run hostPath: @@ -120,7 +120,7 @@ runAsGroup: {{ .Values.securityContext.runAsGroup | default 0 }} {{- end }} {{- end }} {{- else -}} -{{- .Values.volumes | toYaml -}} +{{- .Values.volumes | toYaml }} {{- end }} {{- end }} diff --git a/helm-charts/falcon-image-analyzer/templates/daemonset.yaml b/helm-charts/falcon-image-analyzer/templates/daemonset.yaml index e6db71eb..35ac07d6 100644 --- a/helm-charts/falcon-image-analyzer/templates/daemonset.yaml +++ b/helm-charts/falcon-image-analyzer/templates/daemonset.yaml @@ -34,6 +34,21 @@ spec: {{- if .Values.podSecurityContext }} {{- toYaml .Values.podSecurityContext | nindent 8 }} {{- end }} + {{- if .Values.gcp.enabled }} + initContainers: + - name: {{ .Chart.Name }}-init-container + image: "gcr.io/google.com/cloudsdktool/cloud-sdk:alpine" + imagePullPolicy: "Always" + command: + - '/bin/bash' + - '-c' + - | + curl -sS -H 'Metadata-Flavor: Google' 'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token' --retry 30 --retry-connrefused --retry-max-time 60 --connect-timeout 3 --fail --retry-all-errors > /dev/null && exit 0 || echo 'Retry limit exceeded. Failed to wait for metadata server to be available. Check if the gke-metadata-server Pod in the kube-system namespace is healthy.' >&2; exit 1 + securityContext: + runAsUser: 0 + runAsNonRoot: false + privileged: false + {{- end }} containers: - name: {{ .Chart.Name }} securityContext: @@ -56,12 +71,22 @@ spec: {{- end }} volumeMounts: {{- (include "falcon-image-analyzer.volumeMounts" .) | nindent 12 }} + {{- if .Values.azure.enabled }} + - name: azure-config + mountPath: /etc/kubernetes/azure.json + {{- end }} {{- with .Values.nodeSelector }} nodeSelector: {{- toYaml . | nindent 8 }} {{- end }} volumes: {{- include "falcon-image-analyzer.volumes" . | nindent 8 }} + {{- if .Values.azure.enabled }} + - name: azure-config + hostPath: + path: {{ .Values.azure.azureConfig }} + type: File + {{- end }} {{- with .Values.affinity }} affinity: {{- toYaml . 
| nindent 8 }} diff --git a/helm-charts/falcon-image-analyzer/templates/deployment.yaml b/helm-charts/falcon-image-analyzer/templates/deployment.yaml index 66b3c7c1..ad3149cd 100644 --- a/helm-charts/falcon-image-analyzer/templates/deployment.yaml +++ b/helm-charts/falcon-image-analyzer/templates/deployment.yaml @@ -35,12 +35,26 @@ spec: {{- if .Values.podSecurityContext }} {{- toYaml .Values.podSecurityContext | nindent 8 }} {{- end }} + {{- if .Values.gcp.enabled }} + initContainers: + - name: {{ .Chart.Name }}-init-container + image: "gcr.io/google.com/cloudsdktool/cloud-sdk:alpine" + imagePullPolicy: "Always" + command: + - '/bin/bash' + - '-c' + - | + curl -sS -H 'Metadata-Flavor: Google' 'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token' --retry 30 --retry-connrefused --retry-max-time 60 --connect-timeout 3 --fail --retry-all-errors > /dev/null && exit 0 || echo 'Retry limit exceeded. Failed to wait for metadata server to be available. Check if the gke-metadata-server Pod in the kube-system namespace is healthy.' >&2; exit 1 + securityContext: + runAsUser: 0 + runAsNonRoot: false + privileged: false + {{- end }} containers: - name: {{ .Chart.Name }} securityContext: - {{- if .Values.securityContext }} - {{- toYaml .Values.securityContext | nindent 12 }} - {{- end }} + runAsUser: 0 + privileged: false resources: {{- if .Values.resources }} {{- toYaml .Values.resources | nindent 12 }} @@ -58,15 +72,23 @@ spec: name: {{ include "falcon-image-analyzer.fullname" . }} {{- end }} volumeMounts: - {{- toYaml .Values.volumeMounts | default "" | nindent 12 }} + {{- toYaml .Values.volumeMounts | default "" | nindent 12 }} + {{- if .Values.azure.enabled }} + - name: azure-config + mountPath: /etc/kubernetes/azure.json + {{- end }} {{- with .Values.nodeSelector }} nodeSelector: {{- toYaml . | nindent 8 }} {{- end }} - {{- with .Values.volumes }} volumes: - {{- toYaml . | default "" | nindent 8 }} - {{- end}} + {{- toYaml .Values.volumes | default "" | nindent 8 }} + {{- if .Values.azure.enabled }} + - name: azure-config + hostPath: + path: {{ .Values.azure.azureConfig }} + type: File + {{- end }} {{- with .Values.affinity }} affinity: {{- toYaml . | nindent 8 }} diff --git a/helm-charts/falcon-image-analyzer/test.yaml b/helm-charts/falcon-image-analyzer/test.yaml new file mode 100644 index 00000000..2430c045 --- /dev/null +++ b/helm-charts/falcon-image-analyzer/test.yaml @@ -0,0 +1,15 @@ +daemonset: + enabled: true + +image: + repository: "registry-dodo.viper.eyrie.cloud:5000/cloud/cs-imageanalyzer" + tag: 0.25.0-pre.pr-125-build-1 +crowdstrikeConfig: + clientID: "49e175c2d5b94ed088fba11189f1108d" + clientSecret: "aAXpW3fBrm27wZVSE140DlgQCRt9UKY5F8cix6MP" + clusterName: my-test-cluster + agentRunmode: socket + cid: 1234567890ABCDEF1234567890ABCDEF-12 + agentRuntime: containerd + agentRuntimeSocket: unix:///run/containerd/my.sock + dockerAPIToken: diff --git a/helm-charts/falcon-image-analyzer/values.yaml b/helm-charts/falcon-image-analyzer/values.yaml index 685bbeab..8b12439b 100644 --- a/helm-charts/falcon-image-analyzer/values.yaml +++ b/helm-charts/falcon-image-analyzer/values.yaml @@ -11,9 +11,11 @@ daemonset: deployment: enabled: false + +# Do not override anywhere in values - Always 1 for Deployment. NA for daemonset replicaCount: 1 image: - repository: registry.crowdstrike.com/ivan-agent/us-1/release/cs-imageanalyzer + repository: registry.crowdstrike.com/falcon-imageanalyzer/us-1/release/falcon-imageanalyzer # Overrides the image tag. 
In general, tags should not be used (including semver tags or `latest`). This variable is provided for those # who have yet to move off of using tags. The sha256 digest should be used in place of tags for increased security and image immutability. tag: @@ -22,7 +24,7 @@ image: # Example digest variable configuration: # digest: sha256:ffdc91f66ef8570bd7612cf19145563a787f552656f5eec43cd80ef9caca0398 digest: - pullPolicy: IfNotPresent + pullPolicy: Always # Use this if you have a base64 encoded docker # config json with user and pass of your own @@ -61,6 +63,17 @@ affinity: {} priorityClassName: "" + # For AKS without the pulltoken option +azure: + enabled: false + + # Path to the Kubernetes Azure config file on worker nodes + azureConfig: /etc/kubernetes/azure.json + +# GCP GKE workload identity init container +gcp: + enabled: false + # This is a mandatory mount for both deployment and daemon set. # this is used as a tmp working space for image storage volumes: