This guide describes our Release and Deployment processes.
The Language Forge project is under active development, and as a project team we value shipping early and shipping often. In the past we used a form of semantic versioning for our version names; moving forward, our releases will be publicized on our community support site as the YYYY-MM release. Once a month, we will publish a summary on our community site of all releases and changes that occurred during the prior month.
Releases are tagged in Git using the naming convention `vYYYYMMDD`, and Docker images as `YYYYMMDD` (omitting the preceding `v`). In the event that we release twice in a single day, the release shall be named with a distinguishing trailing letter, e.g. `YYYYMMDDb`.
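For illustration, a release cut on 15 January 2024 would be tagged as follows (the date and the `local-build` image name are hypothetical; the repository name comes from our Docker Hub page):

```bash
# Git tag keeps the leading "v"
git tag v20240115

# Docker tag omits it (local-build is a placeholder for the built image)
docker tag local-build sillsdev/web-languageforge:20240115

# A second release on the same day gets a distinguishing trailing letter
git tag v20240115b
```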
Language Forge is built to run in a containerized environment, and Kubernetes is our chosen runtime platform for production. Deployments are automated with GitHub Actions, triggered as described in the workflows below.
Staging deployments can be run manually with `VERSION=<some-docker-tag-or-semver> make deploy-staging`.
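For example, to deploy the (hypothetical) image tag `20240115` to staging, assuming you have the appropriate cluster permissions:

```bash
VERSION=20240115 make deploy-staging
```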
Current workflow:
- Merge a PR into, or make commits on, the `develop` branch. This kicks off the GHA workflow (`.github/workflows/staging.yml`) to build, test, and publish the necessary images to Docker Hub (https://hub.docker.com/r/sillsdev/web-languageforge/tags) and deploy this code to the staging environment at https://qa.languageforge.org.
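If you have the GitHub CLI installed, one way to follow the staging run from a terminal is sketched below; the workflow file name is the one referenced above, and the `gh` commands are standard GitHub CLI:

```bash
# List recent runs of the staging workflow
gh run list --workflow=staging.yml --limit 5

# Interactively pick a run and follow it until it completes
gh run watch
```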
Production deployments can be run manually with `VERSION=<some-docker-tag-or-semver> make deploy-prod`.
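Before deploying, you may want to confirm which tags are actually published; here is a minimal sketch using Docker Hub's public API (it assumes `jq` is installed, and the tag passed to `make` is illustrative):

```bash
# List the most recently published image tags
curl -s "https://hub.docker.com/v2/repositories/sillsdev/web-languageforge/tags?page_size=10" \
  | jq -r '.results[].name'

# Deploy the chosen tag to production
VERSION=20240115 make deploy-prod
```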
Current workflow:
- Merge from `develop` into `master`.
- "Draft a new release" on https://github.com/sillsdev/web-languageforge/releases with a `v#.#.#` tag format.
- "Publish" the new release. This kicks off the GHA workflow (`.github/workflows/production.yml`) to build, test, and publish the necessary images to Docker Hub (https://hub.docker.com/r/sillsdev/web-languageforge/tags) and deploy this code to the production environment at https://languageforge.org.
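As an alternative to the web UI, the release can be drafted and published with the GitHub CLI; this is only a sketch (the tag and notes are illustrative, and it assumes `production.yml` is triggered by the published-release event):

```bash
# Draft the release without publishing it yet
gh release create v1.2.3 --draft --title "v1.2.3" --notes "Summary of changes"

# Publishing the draft should kick off production.yml
gh release edit v1.2.3 --draft=false
```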
Various tagged images are maintained in Docker Hub. If you need to revert to a previous version, you can do so at any time by running the deployment scripts with the appropriate permissions, or by using the Kubernetes UI to change the image of a deployment.
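For example, a revert via kubectl might look like the sketch below; the deployment and container names are hypothetical, so check the actual names in the cluster first:

```bash
# Point the deployment at an earlier image tag
kubectl set image deployment/languageforge-app app=sillsdev/web-languageforge:20240101

# Or step back to the previous rollout entirely
kubectl rollout undo deployment/languageforge-app
```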
Backups will be established automatically by LTOps and utilized by LF through the `storageClassName` property in a Persistent Volume Claim. This storage class, provided by LTOps, establishes both the frequency and the retention of backups. Any time a restoration is needed, the LF team will need to coordinate the effort with LTOps. Restoring from a point in time will require the application to be brought down for maintenance. The process will roughly follow these steps:
- Notify LTOps of the need to restore a backup (App team)
- Coordinate a time to bring the app down for maintenance (LTOps/App team)
- Scale the app down (LTOps/App team; see the sketch after this list)
- Initiate the Backup restore (LTOps)
- Notify app team of the restoration completion (LTOps)
- Scale the app up (LTOps/App team)
- Test the app (App team)
- Communicate maintenance completion
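The scale-down and scale-up steps might be performed as in the following sketch; the deployment name and namespace are hypothetical:

```bash
# Bring the app down for maintenance
kubectl scale deployment/languageforge-app --replicas=0 -n languageforge

# ...after LTOps confirms the restore is complete...

# Bring the app back up
kubectl scale deployment/languageforge-app --replicas=1 -n languageforge
```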