Merge pull request #1 from camptocamp/fetch_selector
Add webinterface and stream logs
Vampouille authored Sep 19, 2023
2 parents 598039e + 9e8ba81 commit f70df5c
Showing 2,120 changed files with 168,647 additions and 291 deletions.
48 changes: 48 additions & 0 deletions .github/workflows/build-image.yml
@@ -0,0 +1,48 @@
#
name: Create and publish a Docker image

# Configures this workflow to run every time a change is pushed to the branch called `master`.
on:
  push:
    branches: ['master']

# Defines two custom environment variables for the workflow. These are used for the Container registry domain, and a name for the Docker image that this workflow builds.
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

# There is a single job in this workflow. It's configured to run on the latest available version of Ubuntu.
jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    # Sets the permissions granted to the `GITHUB_TOKEN` for the actions in this job.
    permissions:
      contents: read
      packages: write
    #
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      # Uses the `docker/login-action` action to log in to the Container registry using the account and password that will publish the packages. Once published, the packages are scoped to the account defined here.
      - name: Log in to the Container registry
        uses: docker/login-action@65b78e6e13532edd9afa3aa52ac7964289d1a9c1
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      # This step uses [docker/metadata-action](https://github.com/docker/metadata-action#about) to extract tags and labels that will be applied to the specified image. The `id` "meta" allows the output of this step to be referenced in a subsequent step. The `images` value provides the base name for the tags and labels.
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@9ec57ed1fcdbf14dcef7dfbe97b2010124a938b7
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
      # This step uses the `docker/build-push-action` action to build the image, based on your repository's `Dockerfile`. If the build succeeds, it pushes the image to GitHub Packages.
      # It uses the `context` parameter to define the build's context as the set of files located in the specified path. For more information, see "[Usage](https://github.com/docker/build-push-action#usage)" in the README of the `docker/build-push-action` repository.
      # It uses the `tags` and `labels` parameters to tag and label the image with the output from the "meta" step.
      - name: Build and push Docker image
        uses: docker/build-push-action@f2a1d5e99d037542a71f64918e516c093c6f3fc4
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
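
The README below pulls the published image as `ghcr.io/camptocamp/tetragon-policy-builder:latest`; which tags actually exist depends on the output of `docker/metadata-action`, so a branch-name tag such as `:master` may apply instead. A sketch of pulling the image:

```bash
# Image reference taken from the README below; the exact tag depends on docker/metadata-action's output.
docker pull ghcr.io/camptocamp/tetragon-policy-builder:latest
```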
8 changes: 8 additions & 0 deletions Dockerfile
@@ -0,0 +1,8 @@
FROM python:3-bookworm

WORKDIR /usr/src/
COPY . .

RUN pip install -r requirements.txt

ENTRYPOINT ["python3", "-m", "builder"]
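
As a sanity check, the image can be built and run locally; the `:dev` tag below is just an illustration, and the run flags mirror the Docker instructions in the README below:

```bash
# Hypothetical local tag name.
docker build -t tetragon-policy-builder:dev .
# Mount a kubeconfig so the builder can reach the cluster, as in the README's docker run example.
docker run --rm -p 5000:5000 -e KUBECONFIG=/tmp/kubeconfig -v $KUBECONFIG:/tmp/kubeconfig tetragon-policy-builder:dev
```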
126 changes: 86 additions & 40 deletions README.md
@@ -1,74 +1,120 @@
<picture>
<source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/camptocamp/tetragon-policy-builder/fetch_selector/static/logo.png" width="200">
<img src="https://raw.githubusercontent.com/camptocamp/tetragon-policy-builder/fetch_selector/static/logo.png" width="200">
</picture>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<a href="https://github.com/camptocamp/tetragon-policy-builder/blob/fetch_selector/screenshot1.png">
<picture style="margin-left: 100px;">
<source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/camptocamp/tetragon-policy-builder/fetch_selector/screenshot2.png" height="200">
<img src="https://raw.githubusercontent.com/camptocamp/tetragon-policy-builder/fetch_selector/static/screenshot2.png" height="200">
</picture>
</a>

# Tetragon Policy Builder

This is a proof-of-concept tool that profiles applications running in Kubernetes
and issues TracingPolicies allowing only the processes recorded in those profiles.

It parses output from Tetragon and creates Cilium TracingPolicies, which
allow only a whitelist of processes, per namespace and per workload.

[Tetragon](https://github.com/cilium/tetragon) *MUST* be running in the
kubernetes cluster with the `tetragon.enablePolicyFilter: true`
[value](https://tetragon.cilium.io/docs/reference/helm-chart/#values).
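
For illustration, a generated policy starts roughly like the excerpt below (taken from an example in an earlier version of this README; the actual selectors are derived from the profiled workload):

```yaml
---
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: "policy-thanos-bucketweb-whitelist"
  namespace: "thanos"
spec:
  tracepoints:
  - subsystem: "raw_syscalls"
    # ... (excerpt truncated; the selectors whitelisting the observed binaries follow)
```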

## Deploy Tetragon

Check the [Quick Start guide](https://tetragon.cilium.io/docs/getting-started/kubernetes-quickstart-guide/):

```bash
$ helm repo add cilium https://helm.cilium.io
$ helm repo update
$ helm install tetragon cilium/tetragon -n kube-system --set tetragon.enablePolicyFilter=true
```
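
Before deploying the policy builder, it can help to check that Tetragon is up; the DaemonSet name below assumes it follows the `tetragon` helm release name used above:

```bash
# Assumes the DaemonSet is named after the "tetragon" helm release.
kubectl -n kube-system rollout status daemonset/tetragon
```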

## Deploy the policy builder

### Using helm

```bash
$ git clone https://github.com/camptocamp/tetragon-policy-builder.git
$ cd tetragon-policy-builder
$ helm install -n kube-system policy-builder helm/tetragon-policy-builder
```

Then you can open a "port-forward" to access the web UI:

```bash
kubectl port-forward -n kube-system deploy/policy-builder 5000:5000
```

and access the interface with your web browser: [http://localhost:5000/](http://localhost:5000/)

To uninstall with helm:

```bash
$ helm uninstall -n kube-system policy-builder
```

You will also need to clean up some ConfigMaps created by the policy builder; you
can list them with:

```bash
$ kubectl get cm -A -l generated-by=tetragon-policy-builder
```

and then delete them with:

```bash
$ kubectl delete cm -A -l generated-by=tetragon-policy-builder
```

## Using Docker

The Docker container will need to authenticate to the Kubernetes cluster. You will
need to share your kubeconfig with the container:

```bash
$ docker run -p 5000:5000 -e KUBECONFIG=/tmp/kubeconfig -v $KUBECONFIG:/tmp/kubeconfig ghcr.io/camptocamp/tetragon-policy-builder:latest
```

Be sure to clean up the ConfigMaps created by the policy builder with:

```bash
$ kubectl delete cm -A -l generated-by=tetragon-policy-builder
```

Use your web browser to access the interface: [http://localhost:5000/](http://localhost:5000/)

## Directly on the workstation for dev purposes

```bash
$ git clone https://github.com/camptocamp/tetragon-policy-builder.git
$ cd tetragon-policy-builder
$ virtualenv venv
$ . venv/bin/activate
$ pip install -r requirements.txt
$ python3 -m builder
```

Ensure that you have access to a kubernetes cluster:

```bash
$ kubectl cluster-info
Kubernetes control plane is running at https://e07df10a-56d2-11ee-bd90-a77949f1c0d2.sks-de-fra-1.exo.io:443
CoreDNS is running at https://e07df10a-56d2-11ee-bd90-a77949f1c0d2.sks-de-fra-1.exo.io:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```

Launch the policy builder:
```bash
python3 -m builder
```

Open the web interface: [http://localhost:5000/](http://localhost:5000/)

Be sure to clean up the ConfigMaps created by the policy builder with:

```bash
$ kubectl delete cm -A -l generated-by=tetragon-policy-builder
```
