feat(pomerium): HA with PostgreSQL backend #35

Draft · wants to merge 4 commits into base: main
2 changes: 2 additions & 0 deletions katalog/pomerium/config/config.example.env
@@ -10,3 +10,5 @@ IDP_PROVIDER=oidc
# used by `ingress.yaml` by default.
IDP_PROVIDER_URL=https://dex.example.com/
IDP_SCOPES=openid profile email offline_access groups

DATABROKER_STORAGE_TYPE=postgres
2 changes: 1 addition & 1 deletion katalog/pomerium/deploy.yml
@@ -12,7 +12,7 @@ spec:
selector:
matchLabels:
app: pomerium
replicas: 1
replicas: 2
template:
metadata:
labels:
1 change: 1 addition & 0 deletions katalog/pomerium/kustomization.yaml
@@ -19,6 +19,7 @@ resources:
- svc.yml
- ingress.yml
- monitoring
- postgres

secretGenerator:
- name: pomerium-env
25 changes: 25 additions & 0 deletions katalog/pomerium/postgres/MAINTENANCE.md
@@ -0,0 +1,25 @@
# Postgres

Upstream documentation is located at: <https://github.com/bitnami/charts/tree/main/bitnami/postgresql>

> ⚠️ Note that the component we deploy is the PostgreSQL solution packaged via the official Bitnami chart.

Releases of the PostgreSQL chart can be found in the official CHANGELOG: <https://github.com/bitnami/charts/tree/main/bitnami/postgresql>

## Update

To update the PostgreSQL package, follow these steps:

1. The manifests bundled with the Pomerium package are generated from the official Bitnami chart. Check the **CHANGELOG** for breaking changes and update the image tag accordingly. To generate the YAML file via Helm:

```bash
helm template postgres bitnami/postgresql --set auth.postgresPassword=<random_password> --set image.tag=<official_postgres_image> --set primary.persistence.enabled=false > postgres.yml
```

For the password, use any random 32-character string, for example generated as below:

```bash
pwgen -1 32 -c
```

2. Update the documentation.
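
As an end-to-end illustration of the steps above, here is a minimal sketch of how a regenerated password flows into the manifests, assuming `pwgen`, `helm` (with the Bitnami repo added), and `base64` are available; the image tag shown is the one currently pinned in `postgresql.yml`:

```bash
# Generate a new 32-character password (same command as in the maintenance guide).
NEW_PASSWORD="$(pwgen -1 32 -c)"

# Re-render the chart with the new password and the image tag currently used by the package.
helm template postgres bitnami/postgresql \
  --set auth.postgresPassword="${NEW_PASSWORD}" \
  --set image.tag=17.0.0-debian-12-r3 \
  --set primary.persistence.enabled=false > postgresql.yml

# The generated Secret stores the password base64-encoded under the `postgres-password` key:
echo -n "${NEW_PASSWORD}" | base64

# Pomerium's databroker connection string uses the plain-text password, e.g.:
echo "postgresql://postgres:${NEW_PASSWORD}@postgres-postgresql.pomerium.svc.cluster.local:5432/postgres?sslmode=disable"
```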
10 changes: 10 additions & 0 deletions katalog/pomerium/postgres/kustomization.yaml
@@ -0,0 +1,10 @@
# Copyright (c) 2017-present SIGHUP s.r.l All rights reserved.
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ./postgresql.yml

311 changes: 311 additions & 0 deletions katalog/pomerium/postgres/postgresql.yml
@@ -0,0 +1,311 @@
# Copyright (c) 2017-present SIGHUP s.r.l All rights reserved.
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file.

# Source: postgresql/templates/primary/networkpolicy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: postgres-postgresql
namespace: "pomerium"
labels:
app.kubernetes.io/instance: postgres
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: postgresql
app.kubernetes.io/version: 17.0.0
helm.sh/chart: postgresql-16.0.3
app.kubernetes.io/component: primary
spec:
podSelector:
matchLabels:
app.kubernetes.io/instance: postgres
app.kubernetes.io/name: postgresql
app.kubernetes.io/component: primary
policyTypes:
- Ingress
- Egress
egress:
- {}
ingress:
- ports:
- port: 5432
---
# Source: postgresql/templates/primary/pdb.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: postgres-postgresql
namespace: "pomerium"
labels:
app.kubernetes.io/instance: postgres
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: postgresql
app.kubernetes.io/version: 17.0.0
helm.sh/chart: postgresql-16.0.3
app.kubernetes.io/component: primary
spec:
maxUnavailable: 1
selector:
matchLabels:
app.kubernetes.io/instance: postgres
app.kubernetes.io/name: postgresql
app.kubernetes.io/component: primary
---
# Source: postgresql/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: postgres-postgresql
namespace: "pomerium"
labels:
app.kubernetes.io/instance: postgres
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: postgresql
app.kubernetes.io/version: 17.0.0
helm.sh/chart: postgresql-16.0.3
automountServiceAccountToken: false
---
# Source: postgresql/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
name: postgres-postgresql
namespace: "pomerium"
labels:
app.kubernetes.io/instance: postgres
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: postgresql
app.kubernetes.io/version: 17.0.0
helm.sh/chart: postgresql-16.0.3
type: Opaque
data:
postgres-password: "b29wOU9kOHV3ZWk2U29vQnVhaGFlZmF4"
@ralgozino (Member) · Nov 15, 2024:

Should we ask the user for this password, or is it safe to leave it hardcoded?

Author:

I generated it with the helm template (more details in MAINTENANCE.md). The safer approach is to keep the password hardcoded; in any case, by following the maintenance guide the user can customize the password as they wish.

# We don't auto-generate LDAP password when it's not provided as we do for other passwords
---
# Source: postgresql/templates/primary/svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
name: postgres-postgresql-hl
namespace: "pomerium"
labels:
app.kubernetes.io/instance: postgres
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: postgresql
app.kubernetes.io/version: 17.0.0
helm.sh/chart: postgresql-16.0.3
app.kubernetes.io/component: primary
annotations:
spec:
type: ClusterIP
clusterIP: None
# We want all pods in the StatefulSet to have their addresses published for
# the sake of the other Postgresql pods even before they're ready, since they
# have to be able to talk to each other in order to become ready.
publishNotReadyAddresses: true
ports:
- name: tcp-postgresql
port: 5432
targetPort: tcp-postgresql
selector:
app.kubernetes.io/instance: postgres
app.kubernetes.io/name: postgresql
app.kubernetes.io/component: primary
---
# Source: postgresql/templates/primary/svc.yaml
apiVersion: v1
kind: Service
metadata:
name: postgres-postgresql
namespace: "pomerium"
labels:
app.kubernetes.io/instance: postgres
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: postgresql
app.kubernetes.io/version: 17.0.0
helm.sh/chart: postgresql-16.0.3
app.kubernetes.io/component: primary
spec:
type: ClusterIP
sessionAffinity: None
ports:
- name: tcp-postgresql
port: 5432
targetPort: tcp-postgresql
nodePort: null
selector:
app.kubernetes.io/instance: postgres
app.kubernetes.io/name: postgresql
app.kubernetes.io/component: primary
---
# Source: postgresql/templates/primary/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgres-postgresql
namespace: "pomerium"
labels:
app.kubernetes.io/instance: postgres
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: postgresql
app.kubernetes.io/version: 17.0.0
helm.sh/chart: postgresql-16.0.3
app.kubernetes.io/component: primary
spec:
replicas: 1
serviceName: postgres-postgresql-hl
updateStrategy:
rollingUpdate: {}
type: RollingUpdate
selector:
matchLabels:
app.kubernetes.io/instance: postgres
app.kubernetes.io/name: postgresql
app.kubernetes.io/component: primary
template:
metadata:
name: postgres-postgresql
labels:
app.kubernetes.io/instance: postgres
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: postgresql
app.kubernetes.io/version: 17.0.0
helm.sh/chart: postgresql-16.0.3
app.kubernetes.io/component: primary
spec:
serviceAccountName: postgres-postgresql

automountServiceAccountToken: false
affinity:
podAffinity:

podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/instance: postgres
app.kubernetes.io/name: postgresql
app.kubernetes.io/component: primary
topologyKey: kubernetes.io/hostname
weight: 1
nodeAffinity:

securityContext:
fsGroup: 1001
fsGroupChangePolicy: Always
supplementalGroups: []
sysctls: []
hostNetwork: false
hostIPC: false
containers:
- name: postgresql
image: docker.io/bitnami/postgresql:17.0.0-debian-12-r3
imagePullPolicy: "IfNotPresent"
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: true
runAsGroup: 1001
runAsNonRoot: true
runAsUser: 1001
seLinuxOptions: {}
seccompProfile:
type: RuntimeDefault
env:
- name: BITNAMI_DEBUG
value: "false"
- name: POSTGRESQL_PORT_NUMBER
value: "5432"
- name: POSTGRESQL_VOLUME_DIR
value: "/bitnami/postgresql"
- name: PGDATA
value: "/bitnami/postgresql/data"
# Authentication
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-postgresql
key: postgres-password
# LDAP
- name: POSTGRESQL_ENABLE_LDAP
value: "no"
# TLS
- name: POSTGRESQL_ENABLE_TLS
value: "no"
# Audit
- name: POSTGRESQL_LOG_HOSTNAME
value: "false"
- name: POSTGRESQL_LOG_CONNECTIONS
value: "false"
- name: POSTGRESQL_LOG_DISCONNECTIONS
value: "false"
- name: POSTGRESQL_PGAUDIT_LOG_CATALOG
value: "off"
# Others
- name: POSTGRESQL_CLIENT_MIN_MESSAGES
value: "error"
- name: POSTGRESQL_SHARED_PRELOAD_LIBRARIES
value: "pgaudit"
ports:
- name: tcp-postgresql
containerPort: 5432
livenessProbe:
failureThreshold: 6
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
exec:
command:
- /bin/sh
- -c
- exec pg_isready -U "postgres" -h 127.0.0.1 -p 5432
readinessProbe:
failureThreshold: 6
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
exec:
command:
- /bin/sh
- -c
- -e
- |
exec pg_isready -U "postgres" -h 127.0.0.1 -p 5432
[ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f /bitnami/postgresql/.initialized ]
resources:
limits:
cpu: 150m
ephemeral-storage: 2Gi
memory: 192Mi
requests:
cpu: 100m
ephemeral-storage: 50Mi
memory: 128Mi
volumeMounts:
- name: empty-dir
mountPath: /tmp
subPath: tmp-dir
- name: empty-dir
mountPath: /opt/bitnami/postgresql/conf
subPath: app-conf-dir
- name: empty-dir
mountPath: /opt/bitnami/postgresql/tmp
subPath: app-tmp-dir
- name: dshm
mountPath: /dev/shm
- name: data
mountPath: /bitnami/postgresql
volumes:
- name: empty-dir
emptyDir: {}
- name: dshm
emptyDir:
medium: Memory
- name: data
emptyDir: {}
2 changes: 2 additions & 0 deletions katalog/pomerium/secrets/pomerium.example.env
@@ -4,3 +4,5 @@ COOKIE_SECRET=super-secret-cookie
IDP_CLIENT_SECRET=super-secret-idp
# SHARED_SECRET is obtained with `head -c32 /dev/urandom | base64` see https://www.pomerium.io/reference/#shared-secret
SHARED_SECRET=super-secret-shared

Member:

This env var is missing:

Suggested change
DATABROKER_STORAGE_TYPE=postgres

Author:

It is in "config.example.env"

DATABROKER_STORAGE_CONNECTION_STRING=postgresql://postgres:oop9Od8uwei6SooBuahaefax@postgres-postgresql.pomerium.svc.cluster.local:5432/postgres?sslmode=disable
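
For a quick review check, a minimal sketch (assuming `kubectl` access to the target cluster, and the Secret name and namespace from this change) that the password baked into the connection string matches the one shipped in the `postgres-postgresql` Secret:

```bash
# Decode the password from the Secret defined in postgresql.yml
# (namespace `pomerium`, key `postgres-password`).
kubectl -n pomerium get secret postgres-postgresql \
  -o jsonpath='{.data.postgres-password}' | base64 -d; echo

# Expected output, matching the password in DATABROKER_STORAGE_CONNECTION_STRING:
# oop9Od8uwei6SooBuahaefax
```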