How to monitor an AtlasMigration sync from a Helm chart init container #3277

RouquinBlanc opened this issue Dec 19, 2024

This is a question following the blog post on using AtlasMigration custom resources to apply migrations: https://atlasgo.io/blog/2023/07/03/versioned-migrations-kubernetes

We're interested in bundling our application together with an AtlasMigration into a Helm chart (with the Atlas operator installed in the cluster), but we would like to make sure the migrations are applied before our application starts. While it's easy to check for the availability of the database via an init container, it seems trickier to ensure the migration is in sync, and we could end up making requests while the tables are still being created or updated.

The article suggests using Kubernetes events on custom resources, but watching those events would require elevated rights via a role binding. Do you know of a good way to monitor the application of migrations, whether at the initContainer or the Helm level?

Thanks in advance!

@rotemtam (Member)

Hello @RouquinBlanc

Great question.

Helm

There are a few ways to handle this. One way I've seen work successfully is to use a post-install/pre-upgrade hook:

apiVersion: batch/v1
kind: Job
metadata:
  # Unique name per run so Helm creates a fresh Job on every upgrade.
  name: "{{ .Release.Name }}-admin-migrate-{{ now | unixEpoch }}"
  labels:
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    # Run after install and before every upgrade; clean up succeeded Jobs.
    "helm.sh/hook": post-install,pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  template:
    metadata:
      name: "{{ .Release.Name }}-admin-migrate"
      labels:
        app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
        app.kubernetes.io/instance: {{ .Release.Name | quote }}
        helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    spec:
      # This service account needs RBAC permission to get/watch
      # AtlasMigration resources (see the Role sketch below).
      serviceAccountName: {{ include "app.serviceAccountName" . }}
      restartPolicy: Never
      containers:
        - name: migrate
          image: bitnami/kubectl
          args:
            # Block until the operator reports the migration as ready.
            - wait
            - --for=condition=ready
            - atlasmigrations/{{ include "app.fullname" . }}
            # kubectl wait defaults to a 30s timeout; extend it to cover
            # your longest expected migration.
            - --timeout=10m

This runs kubectl wait from within the cluster, blocking until the operator marks the AtlasMigration resource as ready.
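
Note that the hook's service account needs read access to AtlasMigration resources for kubectl wait to work. Here is a minimal sketch of the Role and RoleBinding that would grant it, assuming the operator's default CRD group db.atlasgo.io (the names and the my-app service account are placeholders):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: atlasmigration-reader
rules:
  # kubectl wait needs to read and watch the custom resource.
  - apiGroups: ["db.atlasgo.io"]
    resources: ["atlasmigrations"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: atlasmigration-reader
subjects:
  - kind: ServiceAccount
    name: my-app  # the serviceAccountName used by the hook Job
roleRef:
  kind: Role
  name: atlasmigration-reader
  apiGroup: rbac.authorization.k8s.io

Since this is a namespaced Role scoped to a single resource type, it avoids the broader, elevated rights that watching events would require.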

Helm option 2

If you are locked out of the k8s API, you can run Atlas as a Job and use the migrate status command to check whether there are pending migrations. You can use `--format "{{ json . }}"` to get the output as JSON and parse it.
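
A minimal initContainer sketch of that idea, assuming an Atlas image that ships a shell, a db-credentials secret holding the database URL, and that the JSON report's Status field reads "OK" once everything is applied (all of these are placeholders to adapt):

initContainers:
  - name: wait-for-migrations
    image: arigaio/atlas:latest-alpine  # assumes a shell is available; pin a version
    command:
      - /bin/sh
      - -c
      - |
        # Poll `atlas migrate status` until the JSON report shows no
        # pending migrations. The "Status":"OK" check assumes the report
        # format; verify it against the actual output first.
        # /migrations must contain your migration directory (baked into
        # the image or mounted as a volume).
        until atlas migrate status \
            --url "$DATABASE_URL" \
            --dir "file:///migrations" \
            --format '{{ json . }}' | grep -q '"Status":"OK"'; do
          echo "migrations still pending, retrying..."
          sleep 5
        done
    env:
      - name: DATABASE_URL
        valueFrom:
          secretKeyRef:
            name: db-credentials  # hypothetical secret name
            key: url

If this manifest lives inside a Helm template, remember to escape the `{{ json . }}` so Helm doesn't try to render it itself.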

Argo / Flux

Another way is to use ArgoCD or FluxCD. Both have native mechanisms for waiting for the migration to complete before continuing:

  • In FluxCD, use dependsOn (see the sketch after this list and https://atlasgo.io/guides/deploying/k8s-flux#implement-the-deployment-flow).
  • In ArgoCD, use syncWave - Argo natively understands AtlasMigration and will wait for Atlas to finish.
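
For FluxCD, a minimal sketch of the dependsOn wiring, with hypothetical Kustomization names (app and migrations) and repository paths:

apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app
  namespace: flux-system
spec:
  interval: 5m
  path: ./deploy/app
  prune: true
  sourceRef:
    kind: GitRepository
    name: repo
  # Flux will not apply this Kustomization until the "migrations"
  # Kustomization (the one containing the AtlasMigration) reports Ready.
  dependsOn:
    - name: migrations

For the dependency to actually track migration completion, the migrations Kustomization needs wait: true or a health check on the AtlasMigration resource, so that its Ready status reflects the migration rather than just the apply; the linked guide walks through the full flow.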
