diff --git a/.nojekyll b/.nojekyll new file mode 100644 index 0000000000..e69de29bb2 diff --git a/0.gif b/0.gif new file mode 100644 index 0000000000..b93949847f Binary files /dev/null and b/0.gif differ diff --git a/2.gif b/2.gif new file mode 100644 index 0000000000..73113734a4 Binary files /dev/null and b/2.gif differ diff --git a/3.png b/3.png new file mode 100644 index 0000000000..841d41dc27 Binary files /dev/null and b/3.png differ diff --git a/4.gif b/4.gif new file mode 100644 index 0000000000..b417031da4 Binary files /dev/null and b/4.gif differ diff --git a/404.html b/404.html new file mode 100644 index 0000000000..7b6bf831ec --- /dev/null +++ b/404.html @@ -0,0 +1,2633 @@ + + + + + + + + + + + + + + + + + + + + Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

404 - Not found

+ + + + + + + + + + + + + \ No newline at end of file diff --git a/5.gif b/5.gif new file mode 100644 index 0000000000..abadc328ba Binary files /dev/null and b/5.gif differ diff --git a/6.gif b/6.gif new file mode 100644 index 0000000000..d8964d33da Binary files /dev/null and b/6.gif differ diff --git a/7.gif b/7.gif new file mode 100644 index 0000000000..65df313e10 Binary files /dev/null and b/7.gif differ diff --git a/8.png b/8.png new file mode 100644 index 0000000000..3ad2544075 Binary files /dev/null and b/8.png differ diff --git a/CNAME b/CNAME new file mode 100644 index 0000000000..6f82eda63b --- /dev/null +++ b/CNAME @@ -0,0 +1 @@ +docs.lagoon.sh \ No newline at end of file diff --git a/administering-lagoon/container_overview.png b/administering-lagoon/container_overview.png new file mode 100644 index 0000000000..ee4e84c36f Binary files /dev/null and b/administering-lagoon/container_overview.png differ diff --git a/administering-lagoon/create-project.gql b/administering-lagoon/create-project.gql new file mode 100644 index 0000000000..53421c4522 --- /dev/null +++ b/administering-lagoon/create-project.gql @@ -0,0 +1,58 @@ +# See the docs for a detailed explanation about this file: +# https://docs.lagoon.sh/administering-lagoon/graphql-queries/#creating-the-first-project + +# 1. Create a cluster (Kubernetes or OpenShift) +mutation { + addKubernetes( + input: { + # TODO: Fill in the name field + # This is the unique identifier of the Kubernetes cluster + name: "" + # TODO: Fill in consoleUrl field + # This is the URL of the Kubernetes cluster + consoleUrl: "" + # TODO: Fill in the token field + # This is the token of the `lagoon` service account created in this cluster (this is the same token that we also used during installation of Lagoon) + token: "" + } + ) { + name + # TODO: Make a note of the Kubernetes ID that comes back in the response + id + } +} + +# 2. Create a project and assign it the Cluster +mutation { + addProject( + input: { + # TODO: Fill in the name field + # This is the project name + name: "" + # TODO: Fill in the private key field (replace newlines with '\n') + # This is the private key for a project, which is used to access the git code. If no private key is added, Lagoon will create a private key, which can later be accessed by loading the project. + privateKey: "" + # TODO: Fill in the kubernetes field + # This is the ID of the Kubernetes or OpenShift to assign to the project + kubernetes: 0 + # TODO: Fill in the name field + # This is the project name + gitUrl: "" + # TODO: Fill in the branches to be deployed + branches: "" + # TODO: Define the production environment + productionEnvironment: "" + } + ) { + name + kubernetes { + name + id + } + gitUrl + activeSystemsDeploy + activeSystemsRemove + branches + pullrequests + } +} diff --git a/administering-lagoon/feature-flags/index.html b/administering-lagoon/feature-flags/index.html new file mode 100644 index 0000000000..09814a2676 --- /dev/null +++ b/administering-lagoon/feature-flags/index.html @@ -0,0 +1,2778 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Feature Flags - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Feature flags#

+

Some Lagoon features can be controlled by setting feature flags. These are designed to help users and administrators roll out new platform features in a controlled manner.

+

Environment variables#

+

The following environment variables can be set on an environment or project to toggle feature flags.

| Environment Variable Name | Active scope | Version introduced | Version removed | Default Value | Description |
| --- | --- | --- | --- | --- | --- |
| LAGOON_FEATURE_FLAG_ROOTLESS_WORKLOAD | global | 2.2.0 | - | disabled | Set to enabled to set a non-root pod security context on the pods in this environment or project. This flag will eventually be deprecated, at which point non-root workloads will be enforced. |
| LAGOON_FEATURE_FLAG_ISOLATION_NETWORK_POLICY | global | 2.2.0 | - | disabled | Set to enabled to add a default namespace isolation network policy to each environment on deployment. This flag will eventually be deprecated, at which point the namespace isolation network policy will be enforced. NOTE: enabling and then disabling this feature will not remove any existing network policy from previous deployments. Those must be removed manually. |
+
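To toggle one of these flags for a single project or environment via the API, you can set the variable with the addEnvVariable mutation from the GraphQL API documentation. This is a minimal sketch: the typeId below is an illustrative placeholder for your project's ID, and GLOBAL is assumed as the variable scope.
Set a feature flag variable via the API
mutation {
+  addEnvVariable(
+    input: {
+      # PROJECT applies the flag project-wide; use ENVIRONMENT for a single environment.
+      type: PROJECT
+      # TODO: Fill in the typeId field.
+      # This is the ID of the project (or environment); the value here is a placeholder.
+      typeId: 1
+      # GLOBAL is assumed as the variable scope here.
+      scope: GLOBAL
+      name: "LAGOON_FEATURE_FLAG_ROOTLESS_WORKLOAD"
+      value: "enabled"
+    }
+  ) {
+    id
+  }
+}
+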

Cluster-level controls#

+

Feature flags may also be controlled at the cluster level; the lagoon-build-deploy chart supports this. For each feature flag, two flavours of value can be set: default and force.

+
  • default controls the default policy for environments deployed to the cluster, but can be overridden at the project or environment level by the environment variables documented above.
  • force also controls the policy for environments deployed to the cluster, but cannot be overridden by the environment variables documented above.
+ + + + + + + + +
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/administering-lagoon/graphiql-2020-01-29-18-05-54.png b/administering-lagoon/graphiql-2020-01-29-18-05-54.png new file mode 100644 index 0000000000..60912674a3 Binary files /dev/null and b/administering-lagoon/graphiql-2020-01-29-18-05-54.png differ diff --git a/administering-lagoon/graphiql-2020-01-29-18-07-28.png b/administering-lagoon/graphiql-2020-01-29-18-07-28.png new file mode 100644 index 0000000000..891ac952f3 Binary files /dev/null and b/administering-lagoon/graphiql-2020-01-29-18-07-28.png differ diff --git a/administering-lagoon/graphiql-2020-01-29-20-10-32.png b/administering-lagoon/graphiql-2020-01-29-20-10-32.png new file mode 100644 index 0000000000..d5e283b64e Binary files /dev/null and b/administering-lagoon/graphiql-2020-01-29-20-10-32.png differ diff --git a/administering-lagoon/graphql-queries/index.html b/administering-lagoon/graphql-queries/index.html new file mode 100644 index 0000000000..34748b660d --- /dev/null +++ b/administering-lagoon/graphql-queries/index.html @@ -0,0 +1,3436 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + GraphQL API - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

GraphQL API#

+

Running GraphQL queries#

+

Direct API interactions in Lagoon are done via GraphQL.

+

In order to authenticate with the API, we need a JWT (JSON Web Token) that allows us to use the GraphQL API as admin. To generate this token, open the terminal of the storage-calculator pod via your Kubernetes UI or via kubectl, and run the following command:

+
Generate JWT token.
./create_jwt.py
+
+

This will return a long string which is the JWT token. Make a note of this, as we will need it to send queries.

+

We also need the URL of the API endpoint, which can be found under "Ingresses" in your Kubernetes UI or via kubectl on the command line. Make a note of this endpoint URL, which we will also need.

+

To compose and send GraphQL queries, we recommend GraphiQL.app, a desktop GraphQL client with features such as autocomplete. To continue with the next steps, install and start the app.

+

Under "GraphQL Endpoint", enter the API endpoint URL with /graphql on the end. Then click on "Edit HTTP Headers" and add a new header:

+
  • "Header name": Authorization
  • "Header value": Bearer [JWT token] (make sure the JWT token contains no spaces or line breaks, or authentication will fail)
+

Press ESC to close the HTTP header overlay; now we are ready to send our first GraphQL request!

+

Editing HTTP Headers in GraphiQL.

+

Enter this query in the left panel:

+
Running a query
query allProjects{
+  allProjects {
+    name
+  }
+}
+
+

Running a query in GraphiQL.

+

And press the ▶️ button (or press CTRL+ENTER).

+

If all went well, your first GraphQL response should appear shortly afterwards in the right pane.
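The response is plain GraphQL JSON. For the allProjects query above, it will have roughly this shape (the project names here are purely illustrative):
Example response
{
+  "data": {
+    "allProjects": [
+      { "name": "my-first-project" },
+      { "name": "my-second-project" }
+    ]
+  }
+}
+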

+

Creating the first project#

+

Let's create the first project for Lagoon to deploy! For this we'll use the queries from the GraphQL query template in create-project.gql.

+

For each of the queries (the blocks starting with mutation {), fill in all of the empty fields marked by TODO comments and run the queries in GraphiQL.app. This will create each of the following two objects:

+
  1. kubernetes: The Kubernetes (or OpenShift) cluster to which Lagoon should deploy. Lagoon is not only capable of deploying to its own Kubernetes cluster, but also to any Kubernetes cluster anywhere in the world.
  2. project: The Lagoon project to be deployed, which is a Git repository with a .lagoon.yml configuration file committed in the root.
+

Allowing access to the project#

+

In Lagoon, each developer authenticates via their SSH key(s). This determines their access to:

+
  1. The Lagoon API, where they can see and edit projects they have access to.
  2. Remote shell access to containers that are running in projects they have access to.
  3. The Lagoon logging system, where a developer can find request logs, container logs, Lagoon logs and more.
+

To allow access to the project, we first need to add a new group to the API:

+
Add group to API
mutation {
+  addGroup (
+    input: {
+      # TODO: Enter the name for your new group.
+      name: ""
+    }
+  ) {
+    id
+    name
+  }
+}
+
+

Then we need to add a new user to the API:

+
Add new user to API
mutation {
+  addUser(
+    input: {
+      email: "michael.schmid@example.com"
+      firstName: "Michael"
+      lastName: "Schmid"
+      comment: "CTO"
+    }
+  ) {
+    # TODO: Make a note of the user ID that is returned.
+    id
+  }
+}
+
+

Then we can add an SSH public key for the user to the API:

+
Add SSH public key for the user to API
mutation {
+  addSshKey(
+    input: {
+      # TODO: Fill in the name field.
+      # This is a non-unique identifier for the SSH key.
+      name: ""
+      # TODO: Fill in the keyValue field.
+      # This is the actual SSH public key (without the type at the beginning and without the comment at the end, e.g. `AAAAB3NzaC1yc2EAAAADAQ...3QjzIOtdQERGZuMsi0p`).
+      keyValue: ""
+      # TODO: Fill in the keyType field.
+      # Valid values: SSH_RSA, SSH_ED25519, ECDSA_SHA2_NISTP256, ECDSA_SHA2_NISTP384, ECDSA_SHA2_NISTP521.
+      keyType: SSH_RSA
+      user: {
+        # TODO: Fill in the userId field.
+        # This is the user ID that we noted from the addUser query.
+        id:"0",
+        email:"michael.schmid@example.com"
+      }
+    }
+  ) {
+    id
+  }
+}
+
+

After we add the key, we need to add the user to a group:

+
Add user to group
mutation {
+  addUserToGroup (
+    input: {
+      user: {
+        #TODO: Enter the email address of the user.
+        email: ""
+      }
+      group: {
+        #TODO: Enter the name of the group you want to add the user to.
+        name: ""
+      }
+      #TODO: Enter the role of the user.
+      role: OWNER
+
+    }
+  ) {
+    id
+    name
+  }
+}
+
+

After running one or more of these kinds of queries, the user will be granted access to create tokens via SSH, access containers and more.
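To verify the access you just granted, you can list groups together with their members and roles, using a trimmed variant of the overview query shown later on this page:
Check group membership
query {
+  allGroups {
+    name
+    members {
+      user {
+        email
+      }
+      role
+    }
+  }
+}
+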

+

Adding notifications to the project#

+

If you want to know what is going on during a deployment, we suggest configuring notifications for your project, which provide:

+
  • Push notifications
  • Build start information
  • Build success or failure messages
  • And many more!
+

As notifications can be quite different in terms of the information they need, each notification type has its own mutation.

+

As with users, we first add the notification:

+
Add notification
mutation {
+  addNotificationSlack(
+    input: {
+      # TODO: Fill in the name field.
+      # This is your own identifier for the notification.
+      name: ""
+      # TODO: Fill in the channel field.
+      # This is the channel for the message to be sent to.
+      channel: ""
+      # TODO: Fill in the webhook field.
+      # This is the URL of the webhook where messages should be sent, this is usually provided by the chat system to you.
+      webhook: ""
+    }
+  ) {
+    id
+  }
+}
+
+
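Other chat systems follow the same pattern with their own mutations. For example, a RocketChat notification is added with addNotificationRocketChat; as a sketch, assuming it takes the same name, channel and webhook fields as the Slack mutation:
Add RocketChat notification
mutation {
+  addNotificationRocketChat(
+    input: {
+      name: ""
+      channel: ""
+      webhook: ""
+    }
+  ) {
+    id
+  }
+}
+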

After the notification is created, we can now assign it to our project:

+
Assign notification to project
mutation {
+  addNotificationToProject(
+    input: {
+      notificationType: SLACK
+      # TODO: Fill in the project field.
+      # This is the project name.
+      project: ""
+      # TODO: Fill in the notification field.
+      # This is the notification name.
+      notificationName: ""
+      # TODO: OPTIONAL
+      # The kind of notification you're interested in; defaults to DEPLOYMENT.
+      # Valid values: DEPLOYMENT or PROBLEM.
+      contentType: DEPLOYMENT
+      # TODO: OPTIONAL
+      # Only relevant to contentType PROBLEM: sets the threshold for the kinds of
+      # problems we'd like to be notified about.
+      # Valid values: NONE, UNKNOWN, NEGLIGIBLE, LOW, MEDIUM, HIGH, CRITICAL.
+      notificationSeverityThreshold: NONE
+    }
+  ) {
+    id
+  }
+}
+
+

Now for every deployment you will receive messages in your defined channel.
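If the channel or webhook changes later, the notification can be edited in place. This sketch assumes that updateNotificationSlack follows the same patch pattern as updateProject, keyed by the notification name:
Update a Slack notification
mutation {
+  updateNotificationSlack(
+    input: {
+      # The name of the existing notification.
+      name: ""
+      patch: {
+        channel: ""
+        webhook: ""
+      }
+    }
+  ) {
+    id
+  }
+}
+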

+

Example GraphQL queries#

+

Adding a new Kubernetes target#

+
+

Note

+

In Lagoon, both addKubernetes and addOpenshift can be used for both Kubernetes and OpenShift targets - either will work interchangeably.

+
+

The cluster to which Lagoon should deploy.

+
Add Kubernetes target
mutation {
+  addKubernetes(
+    input: {
+      # TODO: Fill in the name field.
+      # This is the unique identifier of the cluster.
+      name: ""
+      # TODO: Fill in consoleUrl field.
+      # This is the URL of the Kubernetes cluster.
+      consoleUrl: ""
+      # TODO: Fill in the token field.
+      # This is the token of the `lagoon` service account created in this cluster (this is the same token that we also used during installation of Lagoon).
+      token: ""
+    }
+  ) {
+    name
+    id
+  }
+}
+
+

Adding a group to a project#

+

This query will add a group to a project. Users of that group will be able to access the project. They will be able to make changes, based on their role in that group.

+
Add a group to a project
mutation {
+  addGroupsToProject (
+    input: {
+      project: {
+        #TODO: Enter the name of the project.
+        name: ""
+      }
+      groups: {
+        #TODO: Enter the name of the group that will be added to the project.
+        name: ""
+      }
+    }
+  ) {
+    id
+  }
+}
+
+

Adding a new project#

+

This query adds a new Lagoon project to be deployed, which is a Git repository with a .lagoon.yml configuration file committed in the root.

+

If you omit the privateKey field, a new SSH key for the project will be generated automatically.

+

If you would like to reuse a key from another project, you will need to supply the key in the addProject mutation.

+
Add a new project
mutation {
+  addProject(
+    input: {
+      # TODO: Fill in the name field.
+      # This is the project name.
+      name: ""
+      # TODO: Fill in the private key field (replace newlines with '\n').
+      # This is the private key for a project, which is used to access the Git code.
+      privateKey: ""
+      # TODO: Fill in the Kubernetes field.
+      # This is the ID of the Kubernetes or OpenShift to assign to the project.
+      kubernetes: 0
+      # TODO: Fill in the gitUrl field.
+      # This is the Git URL of the project's repository.
+      gitUrl: ""
+      # TODO: Fill in the branches to be deployed.
+      branches: ""
+      # TODO: Define the production environment.
+      productionEnvironment: ""
+    }
+  ) {
+    name
+    kubernetes {
+      name
+      id
+    }
+    gitUrl
+    activeSystemsDeploy
+    activeSystemsRemove
+    branches
+    pullrequests
+  }
+}
+
+

List projects and groups#

+

This is a good query to see an overview of all projects, clusters and groups that exist within our Lagoon.

+
Get an overview of all projects, clusters, and groups
query {
+  allProjects {
+    name
+    gitUrl
+  }
+  allKubernetes {
+    name
+    id
+  }
+  allGroups{
+    id
+    name
+    members {
+      # This will display the users in this group.
+      user {
+        id
+        firstName
+        lastName
+      }
+      role
+    }
+    groups {
+      id
+      name
+    }
+  }
+}
+
+

Single project#

+

If you want a detailed look at a single project, this query works well:

+
Take a detailed look at one project
query {
+  projectByName(
+    # TODO: Fill in the project name.
+    name: ""
+  ) {
+    id
+    branches
+    gitUrl
+    pullrequests
+    productionEnvironment
+    notifications(type: SLACK) {
+      ... on NotificationSlack {
+        name
+        channel
+        webhook
+        id
+      }
+    }
+    environments {
+      name
+      deployType
+      environmentType
+    }
+    kubernetes {
+      id
+    }
+  }
+}
+
+

Querying a project by its Git URL#

+

Don't remember the name of a project, but know the Git URL? Search no longer: there is a GraphQL query for that:

+
Query project by Git URL
query {
+  projectByGitUrl(gitUrl: "git@server.com:org/repo.git") {
+    name
+  }
+}
+
+

Updating objects#

+

The Lagoon GraphQL API can not only display and create objects, it can also update existing objects, using a patch object.

+

Update the branches to deploy within a project:

+
Update deploy branches.
mutation {
+  updateProject(
+    input: { id: 109, patch: { branches: "^(prod|stage|dev|update)$" } }
+  ) {
+    id
+  }
+}
+
+

Update the production environment within a project:

+
+

Warning

+

This requires a redeploy in order for the changes to be reflected in the containers.

+
+
Update prod environment
mutation {
+  updateProject(
+    input: { id: 109, patch: { productionEnvironment: "main" } }
+  ) {
+    id
+  }
+}
+
+

You can also combine multiple changes at once:

+
Update prod environment and set deploy branches.
mutation {
+  updateProject(
+    input: {
+      id: 109
+      patch: {
+        productionEnvironment: "main"
+        branches: "^(prod|stage|dev|update)$"
+      }
+    }
+  ) {
+    id
+  }
+}
+
+
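To trigger the redeploy mentioned in the warning above directly from the API, you can use the deployEnvironmentLatest mutation (listed in the RBAC permission matrix); the input shape below is a sketch with placeholder names:
Trigger a redeploy
mutation {
+  deployEnvironmentLatest(
+    input: {
+      environment: {
+        # TODO: Fill in the environment and project names.
+        name: "main"
+        project: { name: "" }
+      }
+    }
+  )
+}
+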

Deleting environments#

+

You can also use the Lagoon GraphQL API to delete an environment. You'll need to know the project name and the environment name in order to run the command.

+
Delete environment.
mutation {
+  deleteEnvironment(
+    input: {
+      # TODO: Fill in the name field.
+      # This is the environment name.
+      name:""
+      # TODO: Fill in the project field.
+      # This is the project name.
+      project:""
+      execute:true
+    }
+  )
+}
+
+

Querying a project to see what groups and users are assigned#

+

Want to see what groups and users have access to a project? Want to know what their roles are? Do I have a query for you! Using the query below you can search for a project and display the groups, users, and roles that are assigned to that project.

+
Query groups, users, and roles assigned to project
query search{
+  projectByName(
+    #TODO: Enter the name of the project.
+    name: ""
+  ) {
+    id,
+    branches,
+    productionEnvironment,
+    pullrequests,
+    gitUrl,
+    kubernetes {
+      id
+    },
+     groups{
+      id
+      name
+      groups {
+        id
+        name
+      }
+      members {
+        role
+        user {
+          id
+          email
+        }
+      }
+    }
+  }
+}
+
+

Maintaining project metadata#

+

Project metadata can be assigned using arbitrary key/value pairs. Projects can then be queried by the associated metadata; for example, you may categorize projects by software type, version number, or any other attribute you may wish to query on later.

+

Add/update metadata on a project#

+

Updates to metadata expect a key/value pair. The operation is an UPSERT: if the key already exists, its value is updated; otherwise the pair is inserted.

+

You may have any number of key/value pairs stored against a project.

+
Add a key/value pair to metadata
mutation {
+  updateProjectMetadata(
+    input: { id: 1,  patch: { key: "type", value: "saas" } }
+  ) {
+    id
+    metadata
+  }
+}
+
+

Query for projects by metadata#

+

Queries may be by key only (e.g., return all projects where a specific key exists) or by both key and value, where both must match.

+

All projects that have the version tag:

+
Query by metadata
query projectsByMetadata {
+  projectsByMetadata(metadata: [{key: "version"}]) {
+    id
+    name
+  }
+}
+
+

All projects that have the version tag, specifically version 8:

+
Query by metadata
query projectsByMetadata {
+  projectsByMetadata(metadata: [{key: "version", value: "8"}]) {
+    id
+    name
+  }
+}
+
+

Removing metadata on a project#

+

Metadata can be removed on a per-key basis. Other metadata key/value pairs will persist.

+
Remove metadata
mutation {
+  removeProjectMetadataByKey (
+    input: { id: 1,  key: "version" }
+  ) {
+    id
+    metadata
+  }
+}
+
+ + + + + + + + +
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/administering-lagoon/projects_overview.png b/administering-lagoon/projects_overview.png new file mode 100644 index 0000000000..79f8902041 Binary files /dev/null and b/administering-lagoon/projects_overview.png differ diff --git a/administering-lagoon/rbac/index.html b/administering-lagoon/rbac/index.html new file mode 100644 index 0000000000..0de078189d --- /dev/null +++ b/administering-lagoon/rbac/index.html @@ -0,0 +1,5960 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Role-Based Access Control (RBAC) - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Role-Based Access Control (RBAC)#

+

Version 1.0 of Lagoon changed how you access your projects! Access to your project is handled via groups, with projects assigned to one or multiple groups. Users are added to groups with a role. Groups can also be nested within sub-groups. This change provides a lot more flexibility, as well as the possibility to recreate real-world team structures within Lagoon.
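In API terms, adding a user to a group with a role uses the addUserToGroup mutation documented on the GraphQL API page. For example (the email, group name and role below are placeholders):
Add a user to a group with the developer role
mutation {
+  addUserToGroup (
+    input: {
+      user: { email: "developer@example.com" }
+      group: { name: "my-team" }
+      role: DEVELOPER
+    }
+  ) {
+    id
+    name
+  }
+}
+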

+

Roles#

+

When assigning a user to a group, you need to provide a group role for that user inside this group. Each of the five existing group roles gives the user different permissions to the group and the projects assigned to it. These are the platform-wide roles and the group roles currently found in Lagoon:

+

Platform-Wide Roles#

+

Platform-Wide Admin#

+

The platform-wide admin has access to everything across all of Lagoon. That includes dangerous mutations like deleting all projects. Use very, very, very carefully.

+

Platform-Wide Owner#

+

The platform-wide owner has access to every Lagoon group, like the group owner role, and is useful for a user who needs access to everything without being assigned to every group.

+

Group Roles#

+

Owner#

+

The owner role can do everything within a group and its associated projects. They can add and manage users of a group. Be careful with this role, as it can delete projects and production environments!

+

Maintainer#

+

The maintainer role can do everything within a group and its associated projects except deleting the project itself or the production environment. They can add and manage users of a group.

+

Developer#

+

The developer role has SSH access only to development environments. This role cannot access, update or delete the production environment. They can run a sync task with the production environment as a source, but not as the destination. They cannot manage users of a group.

+
+

IMPORTANT

+

This role does not prevent the deployment of the production environment, as a deployment is triggered via a Git push! Make sure that your Git server prevents these users from pushing to the branch defined as the production environment.

+
+

Reporter#

+

The reporter role has view access only. They cannot access any environments via SSH or make modifications to them. They can run cache-clear tasks. This role is mostly used for stakeholders to have access to Lagoon UI and logging.

+

Guest#

+

The guest role has the same privileges as the reporter role listed above.

+

The following tables list each role and the access it has:

+

Lagoon 1.0.0 RBAC Permission Matrix#

+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Guest

| Name | Resource | Scope | Attributes |
| --- | --- | --- | --- |
| addSshKey | ssh_key | add | userID |
| updateSshKey | ssh_key | update | userID |
| deleteSshKey | ssh_key | delete | userID |
| getUserSshKeys | ssh_key | view:user | userID |
| updateUser | user | update | userID |
| deleteUser | user | delete | userID |
+
Reporter

| Name | Resource | Scope | Attributes |
| --- | --- | --- | --- |
| getBackupsByEnvironmentId | deployment | view | projectID |
| getEnvironmentsByProjectId | environment | view | projectID |
| getEnvironmentServicesByEnvironmentId | environment | view | projectID |
| getEnvVarsByEnvironmentId | env_var | environment:view:development | projectID |
| getEnvVarsByEnvironmentId | env_var | environment:view:production | projectID |
| getEnvVarsByProjectId | env_var | project:view | projectID |
| addGroup | group | add | |
| getOpenshiftByProjectId | openshift | view | projectID |
| addProject | project | add | |
| getProjectByEnvironmentId | project | view | projectID |
| getProjectByGitUrl | project | view | projectID |
| getProjectByName | project | view | projectID |
| addRestore | restore | add | projectID |
| updateRestore | restore | update | projectID |
| taskDrushCacheClear | task | drushCacheClear:development | projectID |
| taskDrushCacheClear | task | drushCacheClear:production | projectID |
| taskDrushCron | task | drushCron:development | projectID |
| taskDrushCron | task | drushCron:production | projectID |
| getFilesByTaskId | task | view | projectID |
| getTasksByEnvironmentId | task | view | projectID |
| getTaskByRemoteId | task | view | projectID |
| getTaskById | task | view | projectID |
| addUser | user | add | |
+
Developer

| Name | Resource | Scope | Attributes |
| --- | --- | --- | --- |
| addBackup | backup | add | projectID |
| getBackupsByEnvironmentId | backup | view | projectID |
| addEnvVariable (to Environment) | env_var | environment:add:development | projectID |
| deleteEnvVariable (from Environment) | env_var | environment:delete:development | projectID |
| getEnvVarsByEnvironmentId | env_var | environment:viewValue:development | projectID |
| addOrUpdateEnvironment | environment | addOrUpdate:development | projectID |
| updateEnvironment | environment | update:development | projectID |
| deleteEnvironment | environment | delete:development | projectID |
| addDeployment | environment | deploy:development | projectID |
| setEnvironmentServices | environment | update:development | projectID |
| deployEnvironmentLatest | environment | deploy:development | projectID |
| deployEnvironmentBranch | environment | deploy:development | projectID |
| deployEnvironmentPullrequest | environment | deploy:development | projectID |
| deployEnvironmentPromote | environment | deploy:development | projectID |
| userCanSshToEnvironment | environment | ssh:development | projectID |
| getNotificationsByProjectId | notification | view | projectID |
| addTask | task | add:development | projectID |
| taskDrushArchiveDump | task | drushArchiveDump:development | projectID |
| taskDrushArchiveDump | task | drushArchiveDump:production | projectID |
| taskDrushSqlDump | task | drushSqlDump:development | projectID |
| taskDrushSqlDump | task | drushSqlDump:production | projectID |
| taskDrushUserLogin | task | drushUserLogin:destination:development | environmentID |
| taskDrushSqlSync | task | drushSqlSync:source:development | projectID |
| taskDrushSqlSync | task | drushSqlSync:source:production | projectID |
| taskDrushSqlSync | task | drushSqlSync:destination:development | projectID |
| taskDrushRsyncFiles | task | drushRsync:source:development | projectID |
| taskDrushRsyncFiles | task | drushRsync:source:production | projectID |
| taskDrushRsyncFiles | task | drushRsync:destination:development | projectID |
| deleteTask | task | delete | projectID |
| updateTask | task | update | projectID |
| uploadFilesForTask | task | update | projectID |
| deleteFilesForTask | task | delete | projectID |
| getBackupsByEnvironmentId | deployment | view | projectID |
| getEnvironmentsByProjectId | environment | view | projectID |
| getEnvironmentServicesByEnvironmentId | environment | view | projectID |
| getEnvVarsByEnvironmentId | env_var | environment:view:development | projectID |
| getEnvVarsByEnvironmentId | env_var | environment:view:production | projectID |
| getEnvVarsByProjectId | env_var | project:view | projectID |
| addGroup | group | add | |
| getOpenshiftByProjectId | openshift | view | projectID |
| addProject | project | add | |
| getProjectByEnvironmentId | project | view | projectID |
| getProjectByGitUrl | project | view | projectID |
| getProjectByName | project | view | projectID |
| addRestore | restore | add | projectID |
| updateRestore | restore | update | projectID |
| taskDrushCacheClear | task | drushCacheClear:development | projectID |
| taskDrushCacheClear | task | drushCacheClear:production | projectID |
| taskDrushCron | task | drushCron:development | projectID |
| taskDrushCron | task | drushCron:production | projectID |
| getFilesByTaskId | task | view | projectID |
| getTasksByEnvironmentId | task | view | projectID |
| getTaskByRemoteId | task | view | projectID |
| getTaskById | task | view | projectID |
| addUser | user | add | |
+
Maintainer

| Name | Resource | Scope | Attributes |
| --- | --- | --- | --- |
| deleteBackup | backup | delete | projectID |
| addEnvVariable (to Project) | env_var | project:add | projectID |
| addEnvVariable (to Environment) | env_var | environment:add:production | projectID |
| deleteEnvVariable | env_var | delete | projectID |
| deleteEnvVariable (from Project) | env_var | project:delete | projectID |
| deleteEnvVariable (from Environment) | env_var | environment:delete:production | projectID |
| getEnvVarsByProjectId | env_var | project:viewValue | projectID |
| getEnvVarsByEnvironmentId | env_var | environment:viewValue:production | projectID |
| addOrUpdateEnvironment | environment | addOrUpdate:production | projectID |
| updateEnvironment | environment | update:production | projectID |
| addDeployment | environment | deploy:production | projectID |
| deleteDeployment | deployment | delete | projectID |
| updateDeployment | deployment | update | projectID |
| setEnvironmentServices | environment | update:production | projectID |
| deployEnvironmentLatest | environment | deploy:production | projectID |
| deployEnvironmentBranch | environment | deploy:production | projectID |
| deployEnvironmentPullrequest | environment | deploy:production | projectID |
| deployEnvironmentPromote | environment | deploy:production | projectID |
| userCanSshToEnvironment | environment | ssh:production | projectID |
| updateGroup | group | update | groupID |
| deleteGroup | group | delete | groupID |
| addUserToGroup | group | addUser | groupID |
| removeUserFromGroup | group | removeUser | groupID |
| addNotificationToProject | project | addNotification | projectID |
| removeNotificationFromProject | project | removeNotification | projectID |
| updateProject | project | update | projectID |
| addGroupsToProject | project | addGroup | projectID |
| removeGroupsFromProject | project | removeGroup | projectID |
| addTask | task | add:production | projectID |
| taskDrushUserLogin | task | drushUserLogin:destination:production | environmentID |
| taskDrushSqlSync | task | drushSqlSync:destination:production | projectID |
| taskDrushRsyncFiles | task | drushRsync:destination:production | projectID |
| addBackup | backup | add | projectID |
| getBackupsByEnvironmentId | backup | view | projectID |
| addEnvVariable (to Environment) | env_var | environment:add:development | projectID |
| deleteEnvVariable (from Environment) | env_var | environment:delete:development | projectID |
| getEnvVarsByEnvironmentId | env_var | environment:viewValue:development | projectID |
| addOrUpdateEnvironment | environment | addOrUpdate:development | projectID |
| updateEnvironment | environment | update:development | projectID |
| deleteEnvironment | environment | delete:development | projectID |
| addDeployment | environment | deploy:development | projectID |
| setEnvironmentServices | environment | update:development | projectID |
| deployEnvironmentLatest | environment | deploy:development | projectID |
| deployEnvironmentBranch | environment | deploy:development | projectID |
| deployEnvironmentPullrequest | environment | deploy:development | projectID |
| deployEnvironmentPromote | environment | deploy:development | projectID |
| getNotificationsByProjectId | notification | view | projectID |
| addTask | task | add:development | projectID |
| taskDrushArchiveDump | task | drushArchiveDump:development | projectID |
| taskDrushArchiveDump | task | drushArchiveDump:production | projectID |
| taskDrushSqlDump | task | drushSqlDump:development | projectID |
| taskDrushSqlDump | task | drushSqlDump:production | projectID |
| taskDrushUserLogin | task | drushUserLogin:destination:development | environmentID |
| taskDrushSqlSync | task | drushSqlSync:source:development | projectID |
| taskDrushSqlSync | task | drushSqlSync:source:production | projectID |
| taskDrushSqlSync | task | drushSqlSync:destination:development | projectID |
| taskDrushRsyncFiles | task | drushRsync:source:development | projectID |
| taskDrushRsyncFiles | task | drushRsync:source:production | projectID |
| taskDrushRsyncFiles | task | drushRsync:destination:development | projectID |
| deleteTask | task | delete | projectID |
| updateTask | task | update | projectID |
| uploadFilesForTask | task | update | projectID |
| deleteFilesForTask | task | delete | projectID |
| getBackupsByEnvironmentId | deployment | view | projectID |
| getEnvironmentsByProjectId | environment | view | projectID |
| getEnvironmentServicesByEnvironmentId | environment | view | projectID |
| getEnvVarsByEnvironmentId | env_var | environment:view:development | projectID |
| getEnvVarsByEnvironmentId | env_var | environment:view:production | projectID |
| getEnvVarsByProjectId | env_var | project:view | projectID |
| addGroup | group | add | |
| getOpenshiftByProjectId | openshift | view | projectID |
| addProject | project | add | |
| getProjectByEnvironmentId | project | view | projectID |
| getProjectByGitUrl | project | view | projectID |
| getProjectByName | project | view | projectID |
| addRestore | restore | add | projectID |
| updateRestore | restore | update | projectID |
| taskDrushCacheClear | task | drushCacheClear:development | projectID |
| taskDrushCacheClear | task | drushCacheClear:production | projectID |
| taskDrushCron | task | drushCron:development | projectID |
| taskDrushCron | task | drushCron:production | projectID |
| getFilesByTaskId | task | view | projectID |
| getTasksByEnvironmentId | task | view | projectID |
| getTaskByRemoteId | task | view | projectID |
| getTaskById | task | view | projectID |
| addUser | user | add | |
+
Owner

| Name | Resource | Scope | Attributes |
| --- | --- | --- | --- |
| deleteEnvironment | environment | delete:production | projectID |
| deleteProject | project | delete | projectID |
| getProjectByEnvironmentId | project | viewPrivateKey | projectID |
| getProjectByGitUrl | project | viewPrivateKey | projectID |
| getProjectByName | project | viewPrivateKey | projectID |
| deleteBackup | backup | delete | projectID |
| addEnvVariable (to Project) | env_var | project:add | projectID |
| addEnvVariable (to Environment) | env_var | environment:add:production | projectID |
| deleteEnvVariable | env_var | delete | projectID |
| deleteEnvVariable (from Project) | env_var | project:delete | projectID |
| deleteEnvVariable (from Environment) | env_var | environment:delete:production | projectID |
| getEnvVarsByProjectId | env_var | project:viewValue | projectID |
| getEnvVarsByEnvironmentId | env_var | environment:viewValue:production | projectID |
| addOrUpdateEnvironment | environment | addOrUpdate:production | projectID |
| updateEnvironment | environment | update:production | projectID |
| addDeployment | environment | deploy:production | projectID |
| deleteDeployment | deployment | delete | projectID |
| updateDeployment | deployment | update | projectID |
| setEnvironmentServices | environment | update:production | projectID |
| deployEnvironmentLatest | environment | deploy:production | projectID |
| deployEnvironmentBranch | environment | deploy:production | projectID |
| deployEnvironmentPullrequest | environment | deploy:production | projectID |
| deployEnvironmentPromote | environment | deploy:production | projectID |
| updateGroup | group | update | groupID |
| deleteGroup | group | delete | groupID |
| addUserToGroup | group | addUser | groupID |
| removeUserFromGroup | group | removeUser | groupID |
| addNotificationToProject | project | addNotification | projectID |
| removeNotificationFromProject | project | removeNotification | projectID |
| updateProject | project | update | projectID |
| addGroupsToProject | project | addGroup | projectID |
| removeGroupsFromProject | project | removeGroup | projectID |
| addTask | task | add:production | projectID |
| taskDrushUserLogin | task | drushUserLogin:destination:production | environmentID |
| taskDrushSqlSync | task | drushSqlSync:destination:production | projectID |
| taskDrushRsyncFiles | task | drushRsync:destination:production | projectID |
| addBackup | backup | add | projectID |
| getBackupsByEnvironmentId | backup | view | projectID |
| addEnvVariable (to Environment) | env_var | environment:add:development | projectID |
| deleteEnvVariable (from Environment) | env_var | environment:delete:development | projectID |
| getEnvVarsByEnvironmentId | env_var | environment:viewValue:development | projectID |
| addOrUpdateEnvironment | environment | addOrUpdate:development | projectID |
| updateEnvironment | environment | update:development | projectID |
| deleteEnvironment | environment | delete:development | projectID |
| addDeployment | environment | deploy:development | projectID |
| setEnvironmentServices | environment | update:development | projectID |
| deployEnvironmentLatest | environment | deploy:development | projectID |
| deployEnvironmentBranch | environment | deploy:development | projectID |
| deployEnvironmentPullrequest | environment | deploy:development | projectID |
| deployEnvironmentPromote | environment | deploy:development | projectID |
| getNotificationsByProjectId | notification | view | projectID |
| addTask | task | add:development | projectID |
| taskDrushArchiveDump | task | drushArchiveDump:development | projectID |
| taskDrushArchiveDump | task | drushArchiveDump:production | projectID |
| taskDrushSqlDump | task | drushSqlDump:development | projectID |
| taskDrushSqlDump | task | drushSqlDump:production | projectID |
| taskDrushUserLogin | task | drushUserLogin:destination:development | environmentID |
| taskDrushSqlSync | task | drushSqlSync:source:development | projectID |
| taskDrushSqlSync | task | drushSqlSync:source:production | projectID |
| taskDrushSqlSync | task | drushSqlSync:destination:development | projectID |
| taskDrushRsyncFiles | task | drushRsync:source:development | projectID |
| taskDrushRsyncFiles | task | drushRsync:source:production | projectID |
| taskDrushRsyncFiles | task | drushRsync:destination:development | projectID |
| deleteTask | task | delete | projectID |
| updateTask | task | update | projectID |
| uploadFilesForTask | task | update | projectID |
| deleteFilesForTask | task | delete | projectID |
| getBackupsByEnvironmentId | deployment | view | projectID |
| getEnvironmentsByProjectId | environment | view | projectID |
| getEnvironmentServicesByEnvironmentId | environment | view | projectID |
| getEnvVarsByEnvironmentId | env_var | environment:view:development | projectID |
| getEnvVarsByEnvironmentId | env_var | environment:view:production | projectID |
| getEnvVarsByProjectId | env_var | project:view | projectID |
| addGroup | group | add | |
| getOpenshiftByProjectId | openshift | view | projectID |
| addProject | project | add | |
| getProjectByEnvironmentId | project | view | projectID |
| getProjectByGitUrl | project | view | projectID |
| getProjectByName | project | view | projectID |
| addRestore | restore | add | projectID |
| updateRestore | restore | update | projectID |
| taskDrushCacheClear | task | drushCacheClear:development | projectID |
| taskDrushCacheClear | task | drushCacheClear:production | projectID |
| taskDrushCron | task | drushCron:development | projectID |
| taskDrushCron | task | drushCron:production | projectID |
| getFilesByTaskId | task | view | projectID |
| getTasksByEnvironmentId | task | view | projectID |
| getTaskByRemoteId | task | view | projectID |
| getTaskById | task | view | projectID |
| addUser | user | add | |
+
Platform-Wide Owner

| Name | Resource | Scope | Attributes |
| --- | --- | --- | --- |
| addOrUpdateEnvironmentStorage | environment | storage | |
| addNotificationSlack | notification | add | |
| updateNotificationSlack | notification | update | |
| deleteNotificationSlack | notification | delete | |
| addKubernetes | kubernetes | add | |
| updateKubernetes | kubernetes | update | |
| deleteKubernetes | kubernetes | delete | |
| deleteAllKubernetes | kubernetes | deleteAll | |
| getAllOpenshifts | openshift | viewAll | |
| getAllProjects | project | viewAll | |
| addSshKey | ssh_key | add | userID |
| updateSshKey | ssh_key | update | userID |
| deleteSshKey | ssh_key | delete | userID |
| getUserSshKeys | ssh_key | view:user | userID |
| updateUser | user | update | userID |
| deleteUser | user | delete | userID |
| deleteEnvironment | environment | delete:production | projectID |
| deleteProject | project | delete | projectID |
| getProjectByEnvironmentId | project | viewPrivateKey | projectID |
| getProjectByGitUrl | project | viewPrivateKey | projectID |
| getProjectByName | project | viewPrivateKey | projectID |
| deleteBackup | backup | delete | projectID |
| addEnvVariable (to Project) | env_var | project:add | projectID |
| addEnvVariable (to Environment) | env_var | environment:add:production | projectID |
| deleteEnvVariable | env_var | delete | projectID |
| deleteEnvVariable (from Project) | env_var | project:delete | projectID |
| deleteEnvVariable (from Environment) | env_var | environment:delete:production | projectID |
| getEnvVarsByProjectId | env_var | project:viewValue | projectID |
| getEnvVarsByEnvironmentId | env_var | environment:viewValue:production | projectID |
| addOrUpdateEnvironment | environment | addOrUpdate:production | projectID |
| updateEnvironment | environment | update:production | projectID |
| allEnvironments | environment | viewAll | |
| getEnvironmentStorageMonthByEnvironmentId | environment | storage | |
| getEnvironmentHoursMonthByEnvironmentId | environment | storage | |
| getEnvironmentHitsMonthByEnvironmentId | environment | storage | |
| addOrUpdateEnvironmentStorage | environment | storage | |
| addDeployment | environment | deploy:production | projectID |
| deleteDeployment | deployment | delete | projectID |
| updateDeployment | deployment | update | projectID |
| setEnvironmentServices | environment | update:production | projectID |
| deployEnvironmentLatest | environment | deploy:production | projectID |
| deployEnvironmentBranch | environment | deploy:production | projectID |
| deployEnvironmentPullrequest | environment | deploy:production | projectID |
| deployEnvironmentPromote | environment | deploy:production | projectID |
| updateGroup | group | update | groupID |
| deleteGroup | group | delete | groupID |
| addUserToGroup | group | addUser | groupID |
| removeUserFromGroup | group | removeUser | groupID |
| addNotificationToProject | project | addNotification | projectID |
| removeNotificationFromProject | project | removeNotification | projectID |
| updateProject | project | update | projectID |
| addGroupsToProject | project | addGroup | projectID |
| removeGroupsFromProject | project | removeGroup | projectID |
| addTask | task | add:production | projectID |
| taskDrushUserLogin | task | drushUserLogin:destination:production | environmentID |
| taskDrushSqlSync | task | drushSqlSync:destination:production | projectID |
| taskDrushRsyncFiles | task | drushRsync:destination:production | projectID |
| addBackup | backup | add | projectID |
| getBackupsByEnvironmentId | backup | view | projectID |
| addEnvVariable (to Environment) | env_var | environment:add:development | projectID |
| deleteEnvVariable (from Environment) | env_var | environment:delete:development | projectID |
| getEnvVarsByEnvironmentId | env_var | environment:viewValue:development | projectID |
| addOrUpdateEnvironment | environment | addOrUpdate:development | projectID |
| updateEnvironment | environment | update:development | projectID |
| deleteEnvironment | environment | delete:development | projectID |
| addDeployment | environment | deploy:development | projectID |
| setEnvironmentServices | environment | update:development | projectID |
| deployEnvironmentLatest | environment | deploy:development | projectID |
| deployEnvironmentBranch | environment | deploy:development | projectID |
| deployEnvironmentPullrequest | environment | deploy:development | projectID |
| deployEnvironmentPromote | environment | deploy:development | projectID |
| getNotificationsByProjectId | notification | view | projectID |
| addTask | task | add:development | projectID |
| taskDrushArchiveDump | task | drushArchiveDump:development | projectID |
| taskDrushArchiveDump | task | drushArchiveDump:production | projectID |
| taskDrushSqlDump | task | drushSqlDump:development | projectID |
| taskDrushSqlDump | task | drushSqlDump:production | projectID |
| taskDrushUserLogin | task | drushUserLogin:destination:development | environmentID |
| taskDrushSqlSync | task | drushSqlSync:source:development | projectID |
| taskDrushSqlSync | task | drushSqlSync:source:production | projectID |
| taskDrushSqlSync | task | drushSqlSync:destination:development | projectID |
| taskDrushRsyncFiles | task | drushRsync:source:development | projectID |
| taskDrushRsyncFiles | task | drushRsync:source:production | projectID |
| taskDrushRsyncFiles | task | drushRsync:destination:development | projectID |
| deleteTask | task | delete | projectID |
| updateTask | task | update | projectID |
| uploadFilesForTask | task | update | projectID |
| deleteFilesForTask | task | delete | projectID |
| getBackupsByEnvironmentId | deployment | view | projectID |
| getEnvironmentsByProjectId | environment | view | projectID |
| getEnvironmentServicesByEnvironmentId | environment | view | projectID |
| getEnvVarsByEnvironmentId | env_var | environment:view:development | projectID |
| getEnvVarsByEnvironmentId | env_var | environment:view:production | projectID |
| getEnvVarsByProjectId | env_var | project:view | projectID |
| addGroup | group | add | |
| getOpenshiftByProjectId | openshift | view | projectID |
| addProject | project | add | |
| getProjectByEnvironmentId | project | view | projectID |
| getProjectByGitUrl | project | view | projectID |
| getProjectByName | project | view | projectID |
| addRestore | restore | add | projectID |
| updateRestore | restore | update | projectID |
| taskDrushCacheClear | task | drushCacheClear:development | projectID |
| taskDrushCacheClear | task | drushCacheClear:production | projectID |
| taskDrushCron | task | drushCron:development | projectID |
| taskDrushCron | task | drushCron:production | projectID |
| getFilesByTaskId | task | view | projectID |
| getTasksByEnvironmentId | task | view | projectID |
| getTaskByRemoteId | task | view | projectID |
| getTaskById | task | view | projectID |
| addUser | user | add | |
+
Platform-Wide Admin

| Name | Resource | Scope | Attributes |
| --- | --- | --- | --- |
| deleteAllBackups | backup | deleteAll | |
| deleteAllEnvironments | environment | deleteAll | |
| getEnvironmentStorageMonthByEnvironmentId | environment | storage | |
| getEnvironmentHoursMonthByEnvironmentId | environment | storage | |
| getEnvironmentHitsMonthByEnvironmentId | environment | storage | |
| deleteAllGroups | group | deleteAll | |
| deleteAllNotificationSlacks | notification | deleteAll | |
| removeAllNotificationsFromAllProjects | notification | removeAll | |
| getAllOpenshifts | openshift | viewAll | |
| deleteAllProjects | project | deleteAll | |
| deleteAllSshKeys | ssh_key | deleteAll | |
| removeAllSshKeysFromAllUsers | ssh_key | removeAll | |
| deleteAllUsers | user | deleteAll | |
| addOrUpdateEnvironmentStorage | environment | storage | |
| addNotificationSlack | notification | add | |
| updateNotificationSlack | notification | update | |
| deleteNotificationSlack | notification | delete | |
| addKubernetes | kubernetes | add | |
| updateKubernetes | kubernetes | update | |
| deleteKubernetes | kubernetes | delete | |
| deleteAllKubernetes | kubernetes | deleteAll | |
| getAllProjects | project | viewAll | |
| addSshKey | ssh_key | add | userID |
| updateSshKey | ssh_key | update | userID |
| deleteSshKey | ssh_key | delete | userID |
| getUserSshKeys | ssh_key | view:user | userID |
| updateUser | user | update | userID |
| deleteUser | user | delete | userID |
| deleteEnvironment | environment | delete:production | projectID |
| deleteProject | project | delete | projectID |
| getProjectByEnvironmentId | project | viewPrivateKey | projectID |
| getProjectByGitUrl | project | viewPrivateKey | projectID |
| getProjectByName | project | viewPrivateKey | projectID |
| deleteBackup | backup | delete | projectID |
| addEnvVariable (to Project) | env_var | project:add | projectID |
| addEnvVariable (to Environment) | env_var | environment:add:production | projectID |
| deleteEnvVariable | env_var | delete | projectID |
| deleteEnvVariable (from Project) | env_var | project:delete | projectID |
| deleteEnvVariable (from Environment) | env_var | environment:delete:production | projectID |
| getEnvVarsByProjectId | env_var | project:viewValue | projectID |
| getEnvVarsByEnvironmentId | env_var | environment:viewValue:production | projectID |
| addOrUpdateEnvironment | environment | addOrUpdate:production | projectID |
| updateEnvironment | environment | update:production | projectID |
| addDeployment | environment | deploy:production | projectID |
| deleteDeployment | deployment | delete | projectID |
| updateDeployment | deployment | update | projectID |
| setEnvironmentServices | environment | update:production | projectID |
| deployEnvironmentLatest | environment | deploy:production | projectID |
| deployEnvironmentBranch | environment | deploy:production | projectID |
| deployEnvironmentPullrequest | environment | deploy:production | projectID |
| deployEnvironmentPromote | environment | deploy:production | projectID |
| updateGroup | group | update | groupID |
| deleteGroup | group | delete | groupID |
| addUserToGroup | group | addUser | groupID |
| removeUserFromGroup | group | removeUser | groupID |
| addNotificationToProject | project | addNotification | projectID |
| removeNotificationFromProject | project | removeNotification | projectID |
| updateProject | project | update | projectID |
| addGroupsToProject | project | addGroup | projectID |
| removeGroupsFromProject | project | removeGroup | projectID |
| addTask | task | add:production | projectID |
| taskDrushUserLogin | task | drushUserLogin:destination:production | environmentID |
| taskDrushSqlSync | task | drushSqlSync:destination:production | projectID |
| taskDrushRsyncFiles | task | drushRsync:destination:production | projectID |
| addBackup | backup | add | projectID |
| getBackupsByEnvironmentId | backup | view | projectID |
| addEnvVariable (to Environment) | env_var | environment:add:development | projectID |
| deleteEnvVariable (from Environment) | env_var | environment:delete:development | projectID |
| getEnvVarsByEnvironmentId | env_var | environment:viewValue:development | projectID |
| addOrUpdateEnvironment | environment | addOrUpdate:development | projectID |
| updateEnvironment | environment | update:development | projectID |
| deleteEnvironment | environment | delete:development | projectID |
| addDeployment | environment | deploy:development | projectID |
| setEnvironmentServices | environment | update:development | projectID |
| deployEnvironmentLatest | environment | deploy:development | projectID |
| deployEnvironmentBranch | environment | deploy:development | projectID |
| deployEnvironmentPullrequest | environment | deploy:development | projectID |
| deployEnvironmentPromote | environment | deploy:development | projectID |
| getNotificationsByProjectId | notification | view | projectID |
| addTask | task | add:development | projectID |
| taskDrushArchiveDump | task | drushArchiveDump:development | projectID |
| taskDrushArchiveDump | task | drushArchiveDump:production | projectID |
| taskDrushSqlDump | task | drushSqlDump:development | projectID |
| taskDrushSqlDump | task | drushSqlDump:production | projectID |
| taskDrushUserLogin | task | drushUserLogin:destination:development | environmentID |
| taskDrushSqlSync | task | drushSqlSync:source:development | projectID |
| taskDrushSqlSync | task | drushSqlSync:source:production | projectID |
| taskDrushSqlSync | task | drushSqlSync:destination:development | projectID |
| taskDrushRsyncFiles | task | drushRsync:source:development | projectID |
| taskDrushRsyncFiles | task | drushRsync:source:production | projectID |
| taskDrushRsyncFiles | task | drushRsync:destination:development | projectID |
| deleteTask | task | delete | projectID |
| updateTask | task | update | projectID |
| uploadFilesForTask | task | update | projectID |
| deleteFilesForTask | task | delete | projectID |
| getBackupsByEnvironmentId | deployment | view | projectID |
| getEnvironmentsByProjectId | environment | view | projectID |
| getEnvironmentServicesByEnvironmentId | environment | view | projectID |
| getEnvVarsByEnvironmentId | env_var | environment:view:development | projectID |
| getEnvVarsByEnvironmentId | env_var | environment:view:production | projectID |
| getEnvVarsByProjectId | env_var | project:view | projectID |
| addGroup | group | add | |
| getOpenshiftByProjectId | openshift | view | projectID |
| addProject | project | add | |
| getProjectByEnvironmentId | project | view | projectID |
| getProjectByGitUrl | project | view | projectID |
| getProjectByName | project | view | projectID |
| addRestore | restore | add | projectID |
| updateRestore | restore | update | projectID |
| taskDrushCacheClear | task | drushCacheClear:development | projectID |
| taskDrushCacheClear | task | drushCacheClear:production | projectID |
| taskDrushCron | task | drushCron:development | projectID |
| taskDrushCron | task | drushCron:production | projectID |
| getFilesByTaskId | task | view | projectID |
| getTasksByEnvironmentId | task | view | projectID |
| getTaskByRemoteId | task | view | projectID |
| getTaskById | task | view | projectID |
| addUser | user | add | |
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/administering-lagoon/repositories_overview.png b/administering-lagoon/repositories_overview.png new file mode 100644 index 0000000000..d66ea8f83e Binary files /dev/null and b/administering-lagoon/repositories_overview.png differ diff --git a/administering-lagoon/scanning_image_1.png b/administering-lagoon/scanning_image_1.png new file mode 100644 index 0000000000..923b1181ef Binary files /dev/null and b/administering-lagoon/scanning_image_1.png differ diff --git a/administering-lagoon/using-harbor/container_overview.png b/administering-lagoon/using-harbor/container_overview.png new file mode 100644 index 0000000000..ee4e84c36f Binary files /dev/null and b/administering-lagoon/using-harbor/container_overview.png differ diff --git a/administering-lagoon/using-harbor/harbor-settings/harbor-core/index.html b/administering-lagoon/using-harbor/harbor-settings/harbor-core/index.html new file mode 100644 index 0000000000..ced5252bc4 --- /dev/null +++ b/administering-lagoon/using-harbor/harbor-settings/harbor-core/index.html @@ -0,0 +1,2979 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Harbor-Core - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Harbor-Core#

Harbor-Core requires a configuration file to start, which is located at /etc/core/app.conf within the container. Any changes made to this config file are temporary and will not persist once the pod is restarted.

The configmap from which this config file is generated is stored within Lagoon in the services/harbor-core/harbor-core.yml file. Any changes made to this configmap will be persisted across container restarts.

Config File Contents#
  • _REDIS_URL
    • Tells harbor-core and the Chartmuseum service connection info for the Redis server.
    • The default value is harbor-redis:6379,100,.
  • _REDIS_URL_REG
    • The URL which harborregistry should use to connect to the Redis server.
    • The default value is redis://harbor-redis:6379/2.
  • ADMIRAL_URL
    • Tells harbor-core where to find the admiral service.
    • This service is not used with Lagoon's implementation of Harbor.
    • The default value is NA.
  • CFG_EXPIRATION
    • This value is not used.
    • The default value is 5.
  • CHART_CACHE_DRIVER
    • Tells harbor-core where to store any uploaded charts.
    • The default value is redis.
  • CLAIR_ADAPTER_URL
    • The URL that harbor-core should use to connect to the harbor-trivy service.
    • The default value is http://harbor-trivy:8080.
  • CLAIR_DB
    • The database type harborclair should use.
    • This value is not used, and is included only for legacy support.
    • The default value is postgres.
  • CLAIR_DB_HOST
    • This value is not used, and is included only for legacy support.
    • Tells harbor-core where to find the harborclair service.
    • The default value is harbor-database.
  • CLAIR_DB_PASSWORD
    • The password used to access harborclair's postgres database.
    • The default value is test123 when run locally or during CI testing.
    • This value is not used, and is included only for legacy support.
    • This value is retrieved from a secret created when Harbor is first set up on a running Lagoon.
  • CLAIR_DB_PORT
    • The port harborclair should use to connect to the harborclair server.
    • This value is not used, and is included only for legacy support.
    • The default value is 5432.
  • CLAIR_DB_SSLMODE
    • Whether or not harborclair should use SSL to connect to the postgresql server.
    • This value is not used, and is included only for legacy support.
    • The default value is disable.
  • CLAIR_DB_USERNAME
    • The user harborclair should use to connect to the postgresql server.
    • This value is not used, and is included only for legacy support.
    • The default value is postgres.
  • CLAIR_HEALTH_CHECK_SERVER_URL
    • This value tells harbor-core where it should issue health checks for the harbor-trivy service.
    • The default value is http://harbor-trivy:8080.
  • CLAIR_URL
    • The URL that harbor-core should use to connect to the harbor-trivy service.
    • The default value is http://harbor-trivy:6060.
  • CONFIG_PATH
    • Where harbor-core should look for its config file.
    • The default value is /etc/core/app.conf.
  • CORE_SECRET
    • This value is a pre-shared key that must match between the various services connecting to harbor-core.
    • The default value is set to secret123 when Harbor is run locally or during CI testing.
    • This value is retrieved from a secret created when Harbor is first set up on a running Lagoon.
  • CORE_URL
    • The URL that harbor-core should publish to other Harbor services in order for them to connect to the harbor-core service.
    • The default value is http://harbor-core:8080.
  • DATABASE_TYPE
    • The database type Harbor should use.
    • The default value is postgresql.
  • HARBOR_ADMIN_PASSWORD
    • The password which should be used to access Harbor using the admin user.
    • The default value is admin when run locally or during CI testing.
    • This value is retrieved from a secret created when Harbor is first set up on a running Lagoon.
  • HARBOR_NGINX_ENDPOINT
    • This environment variable tells harborregistry where its NGINX ingress controller, harbor-nginx, is running in order to construct proper push and pull instructions in the UI, among other things.
    • The default value is set to http://harbor-nginx:8080 when run locally or during CI testing.
    • Lagoon attempts to obtain and set this variable automagically when run in production. If that process fails, this service will fail to run.
  • HTTP_PROXY
    • The default value is an empty string.
  • HTTPS_PROXY
    • The default value is an empty string.
  • JOBSERVICE_SECRET
    • This value is a pre-shared key that must match between the various services connecting to harbor-jobservice.
    • The default value is set to secret123 when Harbor is run locally or during CI testing.
    • This value is retrieved from a secret created when Harbor is first set up on a running Lagoon.
  • JOBSERVICE_URL
    • The URL that harbor-core should use to connect to the harbor-jobservice service.
    • The default value is http://harbor-jobservice:8080.
  • LOG_LEVEL
    • The default log level of the harbor-core service.
    • The default value is error.
  • NO_PROXY
    • A list of hosts which should never have their requests proxied.
    • The default is harbor-core,harbor-jobservice,harbor-database,harbor-trivy,harborregistry,harbor-portal,127.0.0.1,localhost,.local,.internal.
  • PORTAL_URL
    • This value tells the service where to connect to the harbor-portal service.
    • The default value is http://harbor-portal:8080.
  • POSTGRESQL_DATABASE
    • The postgres database harbor-core should use when connecting to the postgresql server.
    • The default value is registry.
  • POSTGRESQL_HOST
    • Where harbor-core should connect to the postgresql server.
    • The default value is harbor-database.
  • POSTGRESQL_MAX_IDLE_CONNS
    • The maximum number of idle connections harbor-core should leave open to the postgresql server.
    • The default value is 50.
  • POSTGRESQL_MAX_OPEN_CONNS
    • The maximum number of open connections harbor-core should have to the postgresql server.
    • The default value is 100.
  • POSTGRESQL_PASSWORD
    • The password Harbor should use to connect to the postgresql server.
    • The default value is a randomly generated value.
  • POSTGRESQL_PORT
    • The port harbor-core should use to connect to the postgresql server.
    • The default value is 5432.
  • POSTGRESQL_USERNAME
    • The username harbor-core should use to connect to the postgresql server.
    • The default value is postgres.
  • POSTGRESQL_SSLMODE
    • Whether or not harbor-core should use SSL to connect to the postgresql server.
    • The default value is disable.
  • REGISTRY_HTTP_SECRET
    • This value is a pre-shared key that must match between the various services connecting to harborregistry.
    • The default value is set to secret123 when Harbor is run locally or during CI testing.
    • This value is retrieved from a secret created when Harbor is first set up on a running Lagoon.
  • REGISTRY_STORAGE_PROVIDER_NAME
    • The storage backend that harborregistry should use.
    • The default value is s3.
  • REGISTRY_URL
    • The URL that harbor-core should use to connect to the harborregistry service.
    • The default value is http://harborregistry:5000.
  • REGISTRYCTL_URL
    • This value tells the service where to connect to the harborregistryctl service.
    • The default value is set to http://harborregistryctl:8080.
  • ROBOT_TOKEN_DURATION
    • This value sets how many days each issued robot token should be valid for.
    • The default value is set to 999.
  • SYNC_REGISTRY
    • This value is not used.
    • The default value is false.
  • TOKEN_SERVICE_URL
    • The URL that the harbor-core service publishes to other services in order to retrieve a JWT token.
    • The default value is http://harbor-core:8080/service/token.
  • TRIVY_ADAPTER_URL
    • The URL that the harbor-core service should use to connect to the harbor-trivy service.
    • The default value is http://harbor-trivy:8080.
  • WITH_CHARTMUSEUM
    • Tells harbor-core if the Chartmuseum service is being used.
    • This service is not used with Lagoon's implementation of Harbor.
    • The default value is false.
  • WITH_CLAIR
    • Tells harbor-core if the harborclair service is being used.
    • Lagoon does use this service in its implementation of Harbor.
    • The default value is true.
  • WITH_NOTARY
    • Tells harbor-core if the Notary service is being used.
    • This service is not used with Lagoon's implementation of Harbor.
    • The default value is false.
  • WITH_TRIVY
    • Tells harbor-core if the Trivy service is being used.
    • The default value is true.
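To persist a change to any of the values above, it would be set in that configmap rather than in /etc/core/app.conf. A minimal sketch, assuming a plain Kubernetes ConfigMap manifest (the actual file in the Lagoon source may be templated differently, and the keys shown are illustrative):

services/harbor-core/harbor-core.yml (hypothetical excerpt)
apiVersion: v1
kind: ConfigMap
metadata:
  name: harbor-core
data:
  # Raise the log level from the default "error" while troubleshooting.
  LOG_LEVEL: "debug"
  # Tune connection limits towards the postgresql server.
  POSTGRESQL_MAX_IDLE_CONNS: "50"
  POSTGRESQL_MAX_OPEN_CONNS: "100"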

Harbor-Database#

Harbor-Database requires specific environment variables to be set in order to start, which are stored within secrets as described in the services/harbor-database/harbor-core.yml file.

Config File Contents#

  • POSTGRES_DB
    • The default database to be set up when initializing the Postgres service.
    • The default value is postgres.
  • POSTGRES_PASSWORD
    • The root password for the Postgres database.
    • The default value is test123.
    • This value is retrieved from a secret created when Harbor is first set up on a running Lagoon.
  • POSTGRES_USER
    • The default user to be set up when initializing the Postgres service.
    • The default value is postgres.
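Since these values come from secrets rather than a configmap, a persistent change is made against the secret. A minimal sketch, assuming a plain Kubernetes Secret (the resource name and key layout here are illustrative):

harbor-database secret (hypothetical example)
apiVersion: v1
kind: Secret
metadata:
  name: harbor-database
type: Opaque
stringData:
  POSTGRES_DB: postgres        # default database created at initialization
  POSTGRES_USER: postgres      # default user created at initialization
  POSTGRES_PASSWORD: test123   # local/CI default; generated on a running Lagoon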

Harbor-Jobservice#

Harbor-Jobservice requires a configuration file to start, which is located at /etc/jobservice/config.yml within the container. Any changes made to this config file are temporary and will not persist once the pod is restarted.

The configmap from which this config file is generated is stored within Lagoon in the services/harbor-jobservice/harbor-jobservice.yml file. Any changes made to this configmap will be persisted across container restarts.

Config File Contents#

  • CORE_URL
    • This value tells harbor-jobservice where harbor-core can be reached.
    • The default value is http://harbor-core:8080.
  • CORE_SECRET
    • This value is a pre-shared key that must match between the various services connecting to harbor-core.
    • The default value is set to secret123 when Harbor is run locally or during CI testing.
    • This value is retrieved from a secret created when Harbor is first set up on a running Lagoon.
  • HTTP_PROXY
    • The default value is an empty string.
  • HTTPS_PROXY
    • The default value is an empty string.
  • JOBSERVICE_SECRET
    • This value is a pre-shared key that must match between the various services connecting to harbor-jobservice.
    • The default value is set to secret123 when Harbor is run locally or during CI testing.
    • This value is retrieved from a secret created when Harbor is first set up on a running Lagoon.
  • LOG_LEVEL
    • The logging level this service should use.
    • The default value is error.
      • This can also be set to debug to enable very verbose logging.
  • NO_PROXY
    • A list of hosts which should never have their requests proxied.
    • The default is harbor-core,harbor-jobservice,harbor-database,harbor-trivy,harborregistry,harbor-portal,127.0.0.1,localhost,.local,.internal.
  • REGISTRY_CONTROLLER_URL
    • This value tells the service where to connect to the harborregistryctl service.
    • The default value is set to http://harborregistryctl:8080.
  • SCANNER_LOG_LEVEL
    • The logging level the scanning service should use.
    • The default value is error.
      • This can also be set to debug to enable very verbose logging.
  • SCANNER_STORE_REDIS_URL
    • This value tells harbor-trivy how to connect to its Redis store.
    • The default value is redis://harbor-redis:6379/4.
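For orientation, upstream Harbor's jobservice config file has roughly the following shape; this is a hypothetical excerpt based on upstream Harbor, not Lagoon's exact file, and the worker counts and Redis database index shown are illustrative:

/etc/jobservice/config.yml (hypothetical excerpt)
protocol: "http"
port: 8080
worker_pool:
  workers: 10
  backend: "redis"
  redis_pool:
    # Redis connection used by the job queue itself.
    redis_url: "redis://harbor-redis:6379/1"
    namespace: "harbor_job_service_namespace"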

Harbor-Trivy#

Harbor-Trivy is configured via specific environment variables and does not use a config file.

Environment Variables#

  • SCANNER_LOG_LEVEL
    • The logging level this service should use.
    • The default value is error.
      • This can be set to debug to enable very verbose logging.
  • SCANNER_STORE_REDIS_URL
    • This value tells harbor-trivy how to connect to its Redis store.
    • The default value is redis://harbor-redis:6379/4.
  • SCANNER_JOB_QUEUE_REDIS_URL
    • This value tells harbor-trivy how to connect to the Redis store backing its job queue.
    • The default value is redis://harbor-redis:6379/4.
  • SCANNER_TRIVY_VULN_TYPE
    • This value tells harbor-trivy what types of vulnerabilities it should be searching for.
    • The default value is os,library.

HarborRegistry#

HarborRegistry requires a configuration file to start, which is located at /etc/registry/config.yml within the container. Any changes made to this config file are temporary and will not persist once the pod is restarted.

This config file is stored within the services/harborregistry/harborregistry.yml file and loaded into the container as /etc/registry/pre-config.yml.

A custom container entrypoint, services/harborregistry/entrypoint.sh, then transposes provided environment variables into this config file and saves the results as /etc/registry/config.yml.
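A sketch of what that substitution step works with is shown below; this is a hypothetical excerpt of pre-config.yml, and the real file's keys may differ:

/etc/registry/pre-config.yml (hypothetical excerpt)
# entrypoint.sh swaps the ${...} placeholders below for the matching
# environment variables before writing /etc/registry/config.yml.
http:
  addr: :5000
  secret: ${REGISTRY_HTTP_SECRET}
redis:
  addr: harbor-redis:6379
  password: ${REGISTRY_REDIS_PASSWORD}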

Config File Contents#

  • CORE_SECRET
    • This value is a pre-shared key that must match between the various services connecting to harbor-core.
    • The default value is set to secret123 when Harbor is run locally or during CI testing.
    • This value is retrieved from a secret created when Harbor is first set up on a running Lagoon.
  • HARBOR_NGINX_ENDPOINT
    • This environment variable tells harborregistry where its NGINX ingress controller, harbor-nginx, is running in order to construct proper push and pull instructions in the UI, among other things.
    • The default value is set to http://harbor-nginx:8080 when run locally or during CI testing.
    • Lagoon attempts to obtain and set this variable automagically when run in production. If that process fails, this service will fail to run.
  • JOBSERVICE_SECRET
    • This value is a pre-shared key that must match between the various services connecting to harbor-jobservice.
    • The default value is set to secret123 when Harbor is run locally or during CI testing.
    • This value is retrieved from a secret created when Harbor is first set up on a running Lagoon.
  • REGISTRY_HTTP_SECRET
    • This value is a pre-shared key that must match between the various services connecting to harborregistry.
    • The default value is set to secret123 when Harbor is run locally or during CI testing.
    • This value is retrieved from a secret created when Harbor is first set up on a running Lagoon.
  • REGISTRY_REDIS_PASSWORD
    • This environment variable tells harborregistryctl the password that should be used to connect to Redis.
    • The default value is an empty string.
+ + + + + + + + + + + + + +

HarborRegistryCtl#

+

HarborRegistryCtl requires a configuration file to start, which is located at /etc/registryctl/config.yml within the container. Any changes made to this config file are temporary and will not persist once the pod is restarted.

+

The configmap from which this config file is generated is stored within Lagoon in the services/harborregistryctl/harborregistry.yml file. Any changes made to this configmap will be persisted across container restarts.

+

Config File Contents#

+
    +
  • CORE_SECRET
      +
    • This value is a pre-shared key that must match between the various services connecting to harbor-core.
    • +
    • The default value is set to secret123 when Harbor is run locally or during CI testing.
    • +
    • This value is retrieved from a secret created when Harbor is first set up on a running Lagoon.
    • +
    +
  • +
  • JOBSERVICE_SECRET
      +
    • This value is a pre-shared key that must match between the various services connecting to harbor-jobservice.
    • +
    • The default value is set to secret123 when Harbor is run locally or during CI testing.
    • +
    • This value is retrieved from a secret created when Harbor is first set up on a running Lagoon.
    • +
    +
  • +
  • REGISTRY_HTTP_SECRET
      +
    • This value is a pre-shared key that must match between the various services connecting to harborregistry.
    • +
    • The default value is set to secret123 when Harbor is run locally or during CI testing.
    • +
    • This value is retrieved from a secret created when Harbor is first set up on a running Lagoon.
    • +
    +
  • +
  • REGISTRY_REDIS_PASSWORD
      +
    • This environment variable tells harborregistryctl the password that should be used to connect to Redis.
    • +
    • The default value is an empty string.
    • +
    +
  • +

Running Harbor Locally#

Lagoon supports running Harbor locally, and it is automatically used for hosting all Kubernetes-based builds (any time the project's activeSystemsDeploy value is set to lagoon_kubernetesBuildDeploy). When Harbor is run locally, it makes use of MinIO as a storage backend, which is an AWS S3-compatible local storage solution.

Settings#

Harbor is composed of multiple containers, which all require different settings in order for them to run successfully.

Environment Variables#

The following environment variables are required to be set in order for Harbor to properly start:

  • HARBOR_REGISTRY_STORAGE_AMAZON_BUCKET
    • This needs to be set to the name of the AWS bucket which Harbor will save images to.
    • Defaults to harbor-images when Lagoon is run locally or during CI testing.
  • HARBOR_REGISTRY_STORAGE_AMAZON_REGION
    • This needs to be set to the AWS region in which Harbor's bucket is located.
    • Defaults to us-east-1 when Lagoon is run locally or during CI testing.
  • REGISTRY_STORAGE_S3_ACCESSKEY
    • This needs to be set to the AWS access key Harbor should use to read and write to the AWS bucket.
    • Defaults to an empty string when Lagoon is run locally or during CI testing, as MinIO does not require authentication.
  • REGISTRY_STORAGE_S3_SECRETKEY
    • This needs to be set to the AWS secret key Harbor should use to read and write to the AWS bucket.
    • Defaults to an empty string when Lagoon is run locally or during CI testing, as MinIO does not require authentication.

The following environment variables can be set if required:

  • HARBOR_REGISTRY_STORAGE_AMAZON_ENDPOINT
    • If this variable is set, the Harbor registry will use its value as the address of the S3 endpoint.
    • Defaults to https://s3.amazonaws.com when this variable is not set.
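Put together, a production-style configuration might look like the following; every value here is illustrative only:

Harbor storage variables (hypothetical YAML snippet)
HARBOR_REGISTRY_STORAGE_AMAZON_BUCKET: my-harbor-images
HARBOR_REGISTRY_STORAGE_AMAZON_REGION: us-east-1
REGISTRY_STORAGE_S3_ACCESSKEY: AKIAIOSFODNN7EXAMPLE   # hypothetical IAM key
REGISTRY_STORAGE_S3_SECRETKEY: wJalrXUtnFEMIEXAMPLE   # hypothetical secret key
# Optional: point at a non-AWS, S3-compatible endpoint such as MinIO.
HARBOR_REGISTRY_STORAGE_AMAZON_ENDPOINT: https://s3.amazonaws.com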

Container Specific Settings#

The following containers make use of configuration files:

  • Harbor-Core
  • Harbor-Database
  • Harbor-Jobservice
  • Harbor-Trivy
  • HarborRegistry
  • HarborRegistryCtl

The following containers do not require configuration files to run:

  • Harbor-Nginx
  • Harbor-Portal
  • Harbor-Redis

Harbor#

Harbor is used as the default package repository for Lagoon when deploying to Kubernetes infrastructure. Harbor provides a Docker registry and a container security scanning solution provided by Trivy.

Note

When running Lagoon locally, the configuration for Harbor is handled entirely automagically.

If you are running Lagoon locally, you can access the Harbor UI at localhost:8084. The username is admin and the password is admin.

Note

If you are hosting a site with a provider (such as amazee.io), they may not allow customer access to the Harbor UI.

Once logged in, the first screen is a list of all repositories your user has access to. Each "repository" in Harbor correlates to a project in Lagoon.

Harbor Projects Overview

Within each Harbor repository, you'll see a list of container images from all environments within a single Lagoon project.

Harbor Repositories Overview

From here, you can drill down into an individual container in order to see its details, including an overview of its security scan results.

Harbor Container Overview

+ + + + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/administering-lagoon/using-harbor/projects_overview.png b/administering-lagoon/using-harbor/projects_overview.png new file mode 100644 index 0000000000..79f8902041 Binary files /dev/null and b/administering-lagoon/using-harbor/projects_overview.png differ diff --git a/administering-lagoon/using-harbor/repositories_overview.png b/administering-lagoon/using-harbor/repositories_overview.png new file mode 100644 index 0000000000..d66ea8f83e Binary files /dev/null and b/administering-lagoon/using-harbor/repositories_overview.png differ diff --git a/administering-lagoon/using-harbor/scanning_image_1.png b/administering-lagoon/using-harbor/scanning_image_1.png new file mode 100644 index 0000000000..923b1181ef Binary files /dev/null and b/administering-lagoon/using-harbor/scanning_image_1.png differ diff --git a/administering-lagoon/using-harbor/security-scanning/index.html b/administering-lagoon/using-harbor/security-scanning/index.html new file mode 100644 index 0000000000..71956756a8 --- /dev/null +++ b/administering-lagoon/using-harbor/security-scanning/index.html @@ -0,0 +1,2682 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Security Scanning - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Security Scanning#

Harbor comes with a built-in security scanning solution provided by the Trivy service. This service analyzes a specified container image for any installed packages, and collects the version numbers of those installed packages. The Trivy service then searches the National Vulnerability Database for any CVEs (common vulnerabilities and exposures) affecting those package versions. Trivy is also library aware, so it will scan any Composer files or other package library definition files and report any vulnerabilities found within those package versions. These vulnerabilities are then reported within Harbor for each individual container.

An example of a security scan in Harbor, showing applicable vulnerabilities for a scanned container:

Harbor Security Scanning Example Image

+ + + + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/applications/index.html b/applications/index.html new file mode 100644 index 0000000000..b1a51d1cae --- /dev/null +++ b/applications/index.html @@ -0,0 +1,2811 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Overview - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

A wide range of Applications, Frameworks and Languages are supported by Lagoon#

Lagoon broadly classifies three levels in the application stack:

Languages#

The core building blocks of any Lagoon project, these are usually provided by Lagoon-specific images.

Frameworks#

These take those base images, and add in the necessary logic, tools and packages needed to serve a website, or drive an application.

Applications#

Usually built on top of Frameworks, this is the layer that content editors or developers will interact with to shape the finished product.

When we reference any repositories for use on Lagoon, we usually refer to them in three ways:

Templates#

These are fully-functional, cloneable starter repositories, maintained and updated regularly, ready to be extended and used with little customization.

Examples#

These are fully functional repositories, maintained and updated regularly, but may require some effort to make them work for your individual project.

Demos#

These are repositories that have been built as a demonstration, and are usable for some of the concepts within, but aren't routinely maintained or updated.

For a more complete list, check out our GitHub repository: https://www.github.com/lagoon-examples and our website: https://lagoon.sh/application/

+ + + + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/applications/node/index.html b/applications/node/index.html new file mode 100644 index 0000000000..3fc7de7f23 --- /dev/null +++ b/applications/node/index.html @@ -0,0 +1,2728 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Overview - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+ +
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/applications/options/index.html b/applications/options/index.html new file mode 100644 index 0000000000..9ab7fe7d9d --- /dev/null +++ b/applications/options/index.html @@ -0,0 +1,2874 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Options - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Configuring Applications for use on Lagoon#

lagoon.yml#

Project- and environment-level configuration for Lagoon is provided in the .lagoon.yml file in your repository.

See lagoon-yml.md.

docker-compose.yml#

Service-level configuration for Lagoon is provided in the docker-compose.yml file in your repository. In particular, the lagoon.type and associated service labels are documented in the individual services.

See docker-compose-yml.md.

Storage#

Lagoon has the ability to provision storage for most services - the built-in Lagoon service types have a -persistent variant that can add in the necessary PVCs, volumes, etc. We have updated our examples to reflect this configuration locally.
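For example, switching a service to a persistent variant might look like the following; this is a minimal sketch, and the basic-persistent type and mount path shown are illustrative, as the right type name and path depend on your service:

docker-compose.yml (hypothetical excerpt)
services:
  app:
    build: .
    labels:
      # A -persistent variant of a built-in type provisions the PVC/volume.
      lagoon.type: basic-persistent
      # Path inside the container where the persistent volume is mounted.
      lagoon.persistent: /app/files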

Databases#

Lagoon has configurations available for:

  • MariaDB - all supported versions
  • PostgreSQL - all supported versions

Database-as-a-service#

Lagoon also has the capability to utilize the dbaas-operator to automatically provision these databases using an underlying managed database service (e.g. RDS, Google Cloud Databases, Azure Database). This will happen automatically when these services are provisioned and configured for your cluster. If these are not available, a pod will be provisioned as a fallback.

Cache#

Lagoon supports Redis as a cache backend. In production, some users provision a managed Redis service for their production environments to help them scale.

Search#

Lagoon supports Elasticsearch, Solr and OpenSearch as search providers. External search providers can also be configured if required.

Ingress/Routes#

Lagoon auto-generates routes for services that have ingress requirements. Custom routes can be provided in the .lagoon.yml on a per-service basis.
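A sketch of a custom route definition follows; the environment name, service name, and domains are illustrative, and lagoon-yml.md documents the full syntax:

.lagoon.yml (hypothetical excerpt)
environments:
  main:
    routes:
      # Each entry maps a service (here: nginx) to one or more domains.
      - nginx:
          - www.example.com
          - example.com:
              tls-acme: 'true'   # request a Let's Encrypt certificate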

Environment Variables#

Lagoon makes heavy use of environment variables, at build and runtime. Where these are used to provide critical configuration for your application (e.g. database config/credentials) - it is important that the local and Lagoon versions are named similarly.

See environment-variables.md.


Running other applications on Lagoon#

Even if Lagoon doesn't have a base image for your particular application, framework or language, Lagoon can still build it!

By extending, or inheriting from, the commons image, Lagoon can run almost any workload.

Hugo#

This brief example shows how to build a Hugo website and serve it as static files in an NGINX image. The commons image is used to add Hugo, copy the site in, and build it. The NGINX image is then used to serve the site, with the addition of a customized NGINX config.

lagoon/nginx.Dockerfile
FROM uselagoon/commons as builder

RUN apk add hugo git
WORKDIR /app
COPY . /app
RUN hugo

FROM uselagoon/nginx

COPY --from=builder /app/public/ /app
COPY lagoon/static-files.conf /etc/nginx/conf.d/app.conf

RUN fix-permissions /usr/local/openresty/nginx

docker-compose.yml
services:
  nginx:
    build:
      context: .
      dockerfile: lagoon/nginx.Dockerfile
    labels:
      lagoon.type: nginx

Python#

Introduction#

Lagoon provides images for Python 3.7 and above that can be used to build web apps in a wide range of Python-based frameworks and applications.

More information on how to adapt your Python project to run on Lagoon can be found in our Python Docker Images section.

+ + + + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/applications/ruby/index.html b/applications/ruby/index.html new file mode 100644 index 0000000000..0736058868 --- /dev/null +++ b/applications/ruby/index.html @@ -0,0 +1,2828 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Overview - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Ruby and Ruby on Rails#

Introduction#

We provide images for Ruby 3.0 and above, built on the official Ruby alpine Docker images.

Below we assume that you're attempting to get a Rails app deployed on Lagoon, although most of the details described are really framework-neutral.

Getting Rails running on Lagoon#

Responding to requests#

The Ruby on Rails example in the Lagoon examples repository is instructive here.

In the docker-compose.yml we set up a service named ruby, which is the primary service that will be processing any dynamic requests.

If you look at the dockerfile specified for the ruby service, you'll see that we're exposing port 3000. The nginx service will direct any requests for non-static assets to the ruby service on this port (see the nginx configuration file for more details).

Logging#

The Lagoon logging infrastructure is described in the docs here. Essentially, in order to make use of the infrastructure, logs need to be sent via a UDP message to udp://application-logs.lagoon.svc:5140.

In our Rails example, we're importing the logstash-logger gem, and then in our config/application.rb we're initializing it with the following:

config/application.rb
    if ENV.has_key?('LAGOON_PROJECT') && ENV.has_key?('LAGOON_ENVIRONMENT') then
      lagoon_namespace = ENV['LAGOON_PROJECT'] + "-" + ENV['LAGOON_ENVIRONMENT']
      LogStashLogger.configure do |config|
        config.customize_event do |event|
          event["type"] = lagoon_namespace
        end
      end

      config.logstash.host = 'application-logs.lagoon.svc'
      config.logstash.type = :udp
      config.logstash.port = 5140
    end

Database configuration#

The example uses our PostgreSQL image (see the docker-compose.yml file). Configuring database access in Rails for Lagoon is very straightforward. Since Lagoon injects the database host, name, and credentials as environment variables, we can change our config/database.yml to be aware of these env vars, and consume them if they exist.

config/database.yml
default: &default
  adapter: postgresql
  encoding: unicode
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  username: <%= ENV.fetch("POSTGRES_USERNAME") { "drupal" } %>
  password: <%= ENV.fetch("POSTGRES_PASSWORD") { "drupal" } %>
  host: <%= ENV.fetch("POSTGRES_HOST") { "postgres" } %>
  database: <%= ENV.fetch("POSTGRES_DATABASE") { "drupal" } %>

WordPress on Lagoon#

The WordPress template is configured to use Composer to install WordPress, its dependencies, and themes.

The WordPress template is based on the https://github.com/roots/bedrock boilerplate, but extended to match a standardized Lagoon deployment pattern.

Composer Install#

The template uses Composer to install WordPress and its themes.

Database#

Lagoon can support MariaDB and PostgreSQL databases, but as support for PostgreSQL is limited in WordPress, it isn't recommended for use.

NGINX configuration#

Lagoon doesn't have a built-in configuration for WordPress - instead, the template comes with a starting nginx.conf - please contribute any improvements you may find!

WP-CLI#

The Lagoon template installs wp-cli into the cli image to manage your WordPress install.

+ + + + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/assets/images/favicon.png b/assets/images/favicon.png new file mode 100644 index 0000000000..1cf13b9f9d Binary files /dev/null and b/assets/images/favicon.png differ diff --git a/assets/javascripts/bundle.51d95adb.min.js b/assets/javascripts/bundle.51d95adb.min.js new file mode 100644 index 0000000000..b20ec6835b --- /dev/null +++ b/assets/javascripts/bundle.51d95adb.min.js @@ -0,0 +1,29 @@ +"use strict";(()=>{var Hi=Object.create;var xr=Object.defineProperty;var Pi=Object.getOwnPropertyDescriptor;var $i=Object.getOwnPropertyNames,kt=Object.getOwnPropertySymbols,Ii=Object.getPrototypeOf,Er=Object.prototype.hasOwnProperty,an=Object.prototype.propertyIsEnumerable;var on=(e,t,r)=>t in e?xr(e,t,{enumerable:!0,configurable:!0,writable:!0,value:r}):e[t]=r,P=(e,t)=>{for(var r in t||(t={}))Er.call(t,r)&&on(e,r,t[r]);if(kt)for(var r of kt(t))an.call(t,r)&&on(e,r,t[r]);return e};var sn=(e,t)=>{var r={};for(var n in e)Er.call(e,n)&&t.indexOf(n)<0&&(r[n]=e[n]);if(e!=null&&kt)for(var n of kt(e))t.indexOf(n)<0&&an.call(e,n)&&(r[n]=e[n]);return r};var Ht=(e,t)=>()=>(t||e((t={exports:{}}).exports,t),t.exports);var Fi=(e,t,r,n)=>{if(t&&typeof t=="object"||typeof t=="function")for(let o of $i(t))!Er.call(e,o)&&o!==r&&xr(e,o,{get:()=>t[o],enumerable:!(n=Pi(t,o))||n.enumerable});return e};var yt=(e,t,r)=>(r=e!=null?Hi(Ii(e)):{},Fi(t||!e||!e.__esModule?xr(r,"default",{value:e,enumerable:!0}):r,e));var fn=Ht((wr,cn)=>{(function(e,t){typeof wr=="object"&&typeof cn!="undefined"?t():typeof define=="function"&&define.amd?define(t):t()})(wr,function(){"use strict";function e(r){var n=!0,o=!1,i=null,a={text:!0,search:!0,url:!0,tel:!0,email:!0,password:!0,number:!0,date:!0,month:!0,week:!0,time:!0,datetime:!0,"datetime-local":!0};function s(T){return!!(T&&T!==document&&T.nodeName!=="HTML"&&T.nodeName!=="BODY"&&"classList"in T&&"contains"in T.classList)}function f(T){var Ke=T.type,We=T.tagName;return!!(We==="INPUT"&&a[Ke]&&!T.readOnly||We==="TEXTAREA"&&!T.readOnly||T.isContentEditable)}function c(T){T.classList.contains("focus-visible")||(T.classList.add("focus-visible"),T.setAttribute("data-focus-visible-added",""))}function u(T){T.hasAttribute("data-focus-visible-added")&&(T.classList.remove("focus-visible"),T.removeAttribute("data-focus-visible-added"))}function p(T){T.metaKey||T.altKey||T.ctrlKey||(s(r.activeElement)&&c(r.activeElement),n=!0)}function m(T){n=!1}function d(T){s(T.target)&&(n||f(T.target))&&c(T.target)}function h(T){s(T.target)&&(T.target.classList.contains("focus-visible")||T.target.hasAttribute("data-focus-visible-added"))&&(o=!0,window.clearTimeout(i),i=window.setTimeout(function(){o=!1},100),u(T.target))}function v(T){document.visibilityState==="hidden"&&(o&&(n=!0),B())}function B(){document.addEventListener("mousemove",z),document.addEventListener("mousedown",z),document.addEventListener("mouseup",z),document.addEventListener("pointermove",z),document.addEventListener("pointerdown",z),document.addEventListener("pointerup",z),document.addEventListener("touchmove",z),document.addEventListener("touchstart",z),document.addEventListener("touchend",z)}function 
re(){document.removeEventListener("mousemove",z),document.removeEventListener("mousedown",z),document.removeEventListener("mouseup",z),document.removeEventListener("pointermove",z),document.removeEventListener("pointerdown",z),document.removeEventListener("pointerup",z),document.removeEventListener("touchmove",z),document.removeEventListener("touchstart",z),document.removeEventListener("touchend",z)}function z(T){T.target.nodeName&&T.target.nodeName.toLowerCase()==="html"||(n=!1,re())}document.addEventListener("keydown",p,!0),document.addEventListener("mousedown",m,!0),document.addEventListener("pointerdown",m,!0),document.addEventListener("touchstart",m,!0),document.addEventListener("visibilitychange",v,!0),B(),r.addEventListener("focus",d,!0),r.addEventListener("blur",h,!0),r.nodeType===Node.DOCUMENT_FRAGMENT_NODE&&r.host?r.host.setAttribute("data-js-focus-visible",""):r.nodeType===Node.DOCUMENT_NODE&&(document.documentElement.classList.add("js-focus-visible"),document.documentElement.setAttribute("data-js-focus-visible",""))}if(typeof window!="undefined"&&typeof document!="undefined"){window.applyFocusVisiblePolyfill=e;var t;try{t=new CustomEvent("focus-visible-polyfill-ready")}catch(r){t=document.createEvent("CustomEvent"),t.initCustomEvent("focus-visible-polyfill-ready",!1,!1,{})}window.dispatchEvent(t)}typeof document!="undefined"&&e(document)})});var un=Ht(Sr=>{(function(e){var t=function(){try{return!!Symbol.iterator}catch(c){return!1}},r=t(),n=function(c){var u={next:function(){var p=c.shift();return{done:p===void 0,value:p}}};return r&&(u[Symbol.iterator]=function(){return u}),u},o=function(c){return encodeURIComponent(c).replace(/%20/g,"+")},i=function(c){return decodeURIComponent(String(c).replace(/\+/g," "))},a=function(){var c=function(p){Object.defineProperty(this,"_entries",{writable:!0,value:{}});var m=typeof p;if(m!=="undefined")if(m==="string")p!==""&&this._fromString(p);else if(p instanceof c){var d=this;p.forEach(function(re,z){d.append(z,re)})}else if(p!==null&&m==="object")if(Object.prototype.toString.call(p)==="[object Array]")for(var h=0;hd[0]?1:0}),c._entries&&(c._entries={});for(var p=0;p1?i(d[1]):"")}})})(typeof global!="undefined"?global:typeof window!="undefined"?window:typeof self!="undefined"?self:Sr);(function(e){var t=function(){try{var o=new e.URL("b","http://a");return o.pathname="c d",o.href==="http://a/c%20d"&&o.searchParams}catch(i){return!1}},r=function(){var o=e.URL,i=function(f,c){typeof f!="string"&&(f=String(f)),c&&typeof c!="string"&&(c=String(c));var u=document,p;if(c&&(e.location===void 0||c!==e.location.href)){c=c.toLowerCase(),u=document.implementation.createHTMLDocument(""),p=u.createElement("base"),p.href=c,u.head.appendChild(p);try{if(p.href.indexOf(c)!==0)throw new Error(p.href)}catch(T){throw new Error("URL unable to set base "+c+" due to "+T)}}var m=u.createElement("a");m.href=f,p&&(u.body.appendChild(m),m.href=m.href);var d=u.createElement("input");if(d.type="url",d.value=f,m.protocol===":"||!/:/.test(m.href)||!d.checkValidity()&&!c)throw new TypeError("Invalid URL");Object.defineProperty(this,"_anchorElement",{value:m});var h=new e.URLSearchParams(this.search),v=!0,B=!0,re=this;["append","delete","set"].forEach(function(T){var Ke=h[T];h[T]=function(){Ke.apply(h,arguments),v&&(B=!1,re.search=h.toString(),B=!0)}}),Object.defineProperty(this,"searchParams",{value:h,enumerable:!0});var z=void 
0;Object.defineProperty(this,"_updateSearchParams",{enumerable:!1,configurable:!1,writable:!1,value:function(){this.search!==z&&(z=this.search,B&&(v=!1,this.searchParams._fromString(this.search),v=!0))}})},a=i.prototype,s=function(f){Object.defineProperty(a,f,{get:function(){return this._anchorElement[f]},set:function(c){this._anchorElement[f]=c},enumerable:!0})};["hash","host","hostname","port","protocol"].forEach(function(f){s(f)}),Object.defineProperty(a,"search",{get:function(){return this._anchorElement.search},set:function(f){this._anchorElement.search=f,this._updateSearchParams()},enumerable:!0}),Object.defineProperties(a,{toString:{get:function(){var f=this;return function(){return f.href}}},href:{get:function(){return this._anchorElement.href.replace(/\?$/,"")},set:function(f){this._anchorElement.href=f,this._updateSearchParams()},enumerable:!0},pathname:{get:function(){return this._anchorElement.pathname.replace(/(^\/?)/,"/")},set:function(f){this._anchorElement.pathname=f},enumerable:!0},origin:{get:function(){var f={"http:":80,"https:":443,"ftp:":21}[this._anchorElement.protocol],c=this._anchorElement.port!=f&&this._anchorElement.port!=="";return this._anchorElement.protocol+"//"+this._anchorElement.hostname+(c?":"+this._anchorElement.port:"")},enumerable:!0},password:{get:function(){return""},set:function(f){},enumerable:!0},username:{get:function(){return""},set:function(f){},enumerable:!0}}),i.createObjectURL=function(f){return o.createObjectURL.apply(o,arguments)},i.revokeObjectURL=function(f){return o.revokeObjectURL.apply(o,arguments)},e.URL=i};if(t()||r(),e.location!==void 0&&!("origin"in e.location)){var n=function(){return e.location.protocol+"//"+e.location.hostname+(e.location.port?":"+e.location.port:"")};try{Object.defineProperty(e.location,"origin",{get:n,enumerable:!0})}catch(o){setInterval(function(){e.location.origin=n()},100)}}})(typeof global!="undefined"?global:typeof window!="undefined"?window:typeof self!="undefined"?self:Sr)});var Qr=Ht((Lt,Kr)=>{/*! 
+ * clipboard.js v2.0.11 + * https://clipboardjs.com/ + * + * Licensed MIT © Zeno Rocha + */(function(t,r){typeof Lt=="object"&&typeof Kr=="object"?Kr.exports=r():typeof define=="function"&&define.amd?define([],r):typeof Lt=="object"?Lt.ClipboardJS=r():t.ClipboardJS=r()})(Lt,function(){return function(){var e={686:function(n,o,i){"use strict";i.d(o,{default:function(){return ki}});var a=i(279),s=i.n(a),f=i(370),c=i.n(f),u=i(817),p=i.n(u);function m(j){try{return document.execCommand(j)}catch(O){return!1}}var d=function(O){var w=p()(O);return m("cut"),w},h=d;function v(j){var O=document.documentElement.getAttribute("dir")==="rtl",w=document.createElement("textarea");w.style.fontSize="12pt",w.style.border="0",w.style.padding="0",w.style.margin="0",w.style.position="absolute",w.style[O?"right":"left"]="-9999px";var k=window.pageYOffset||document.documentElement.scrollTop;return w.style.top="".concat(k,"px"),w.setAttribute("readonly",""),w.value=j,w}var B=function(O,w){var k=v(O);w.container.appendChild(k);var F=p()(k);return m("copy"),k.remove(),F},re=function(O){var w=arguments.length>1&&arguments[1]!==void 0?arguments[1]:{container:document.body},k="";return typeof O=="string"?k=B(O,w):O instanceof HTMLInputElement&&!["text","search","url","tel","password"].includes(O==null?void 0:O.type)?k=B(O.value,w):(k=p()(O),m("copy")),k},z=re;function T(j){return typeof Symbol=="function"&&typeof Symbol.iterator=="symbol"?T=function(w){return typeof w}:T=function(w){return w&&typeof Symbol=="function"&&w.constructor===Symbol&&w!==Symbol.prototype?"symbol":typeof w},T(j)}var Ke=function(){var O=arguments.length>0&&arguments[0]!==void 0?arguments[0]:{},w=O.action,k=w===void 0?"copy":w,F=O.container,q=O.target,Le=O.text;if(k!=="copy"&&k!=="cut")throw new Error('Invalid "action" value, use either "copy" or "cut"');if(q!==void 0)if(q&&T(q)==="object"&&q.nodeType===1){if(k==="copy"&&q.hasAttribute("disabled"))throw new Error('Invalid "target" attribute. Please use "readonly" instead of "disabled" attribute');if(k==="cut"&&(q.hasAttribute("readonly")||q.hasAttribute("disabled")))throw new Error(`Invalid "target" attribute. 
You can't cut text from elements with "readonly" or "disabled" attributes`)}else throw new Error('Invalid "target" value, use a valid Element');if(Le)return z(Le,{container:F});if(q)return k==="cut"?h(q):z(q,{container:F})},We=Ke;function Ie(j){return typeof Symbol=="function"&&typeof Symbol.iterator=="symbol"?Ie=function(w){return typeof w}:Ie=function(w){return w&&typeof Symbol=="function"&&w.constructor===Symbol&&w!==Symbol.prototype?"symbol":typeof w},Ie(j)}function Ti(j,O){if(!(j instanceof O))throw new TypeError("Cannot call a class as a function")}function nn(j,O){for(var w=0;w0&&arguments[0]!==void 0?arguments[0]:{};this.action=typeof F.action=="function"?F.action:this.defaultAction,this.target=typeof F.target=="function"?F.target:this.defaultTarget,this.text=typeof F.text=="function"?F.text:this.defaultText,this.container=Ie(F.container)==="object"?F.container:document.body}},{key:"listenClick",value:function(F){var q=this;this.listener=c()(F,"click",function(Le){return q.onClick(Le)})}},{key:"onClick",value:function(F){var q=F.delegateTarget||F.currentTarget,Le=this.action(q)||"copy",Rt=We({action:Le,container:this.container,target:this.target(q),text:this.text(q)});this.emit(Rt?"success":"error",{action:Le,text:Rt,trigger:q,clearSelection:function(){q&&q.focus(),window.getSelection().removeAllRanges()}})}},{key:"defaultAction",value:function(F){return yr("action",F)}},{key:"defaultTarget",value:function(F){var q=yr("target",F);if(q)return document.querySelector(q)}},{key:"defaultText",value:function(F){return yr("text",F)}},{key:"destroy",value:function(){this.listener.destroy()}}],[{key:"copy",value:function(F){var q=arguments.length>1&&arguments[1]!==void 0?arguments[1]:{container:document.body};return z(F,q)}},{key:"cut",value:function(F){return h(F)}},{key:"isSupported",value:function(){var F=arguments.length>0&&arguments[0]!==void 0?arguments[0]:["copy","cut"],q=typeof F=="string"?[F]:F,Le=!!document.queryCommandSupported;return q.forEach(function(Rt){Le=Le&&!!document.queryCommandSupported(Rt)}),Le}}]),w}(s()),ki=Ri},828:function(n){var o=9;if(typeof Element!="undefined"&&!Element.prototype.matches){var i=Element.prototype;i.matches=i.matchesSelector||i.mozMatchesSelector||i.msMatchesSelector||i.oMatchesSelector||i.webkitMatchesSelector}function a(s,f){for(;s&&s.nodeType!==o;){if(typeof s.matches=="function"&&s.matches(f))return s;s=s.parentNode}}n.exports=a},438:function(n,o,i){var a=i(828);function s(u,p,m,d,h){var v=c.apply(this,arguments);return u.addEventListener(m,v,h),{destroy:function(){u.removeEventListener(m,v,h)}}}function f(u,p,m,d,h){return typeof u.addEventListener=="function"?s.apply(null,arguments):typeof m=="function"?s.bind(null,document).apply(null,arguments):(typeof u=="string"&&(u=document.querySelectorAll(u)),Array.prototype.map.call(u,function(v){return s(v,p,m,d,h)}))}function c(u,p,m,d){return function(h){h.delegateTarget=a(h.target,p),h.delegateTarget&&d.call(u,h)}}n.exports=f},879:function(n,o){o.node=function(i){return i!==void 0&&i instanceof HTMLElement&&i.nodeType===1},o.nodeList=function(i){var a=Object.prototype.toString.call(i);return i!==void 0&&(a==="[object NodeList]"||a==="[object HTMLCollection]")&&"length"in i&&(i.length===0||o.node(i[0]))},o.string=function(i){return typeof i=="string"||i instanceof String},o.fn=function(i){var a=Object.prototype.toString.call(i);return a==="[object Function]"}},370:function(n,o,i){var a=i(879),s=i(438);function f(m,d,h){if(!m&&!d&&!h)throw new Error("Missing required 
arguments");if(!a.string(d))throw new TypeError("Second argument must be a String");if(!a.fn(h))throw new TypeError("Third argument must be a Function");if(a.node(m))return c(m,d,h);if(a.nodeList(m))return u(m,d,h);if(a.string(m))return p(m,d,h);throw new TypeError("First argument must be a String, HTMLElement, HTMLCollection, or NodeList")}function c(m,d,h){return m.addEventListener(d,h),{destroy:function(){m.removeEventListener(d,h)}}}function u(m,d,h){return Array.prototype.forEach.call(m,function(v){v.addEventListener(d,h)}),{destroy:function(){Array.prototype.forEach.call(m,function(v){v.removeEventListener(d,h)})}}}function p(m,d,h){return s(document.body,m,d,h)}n.exports=f},817:function(n){function o(i){var a;if(i.nodeName==="SELECT")i.focus(),a=i.value;else if(i.nodeName==="INPUT"||i.nodeName==="TEXTAREA"){var s=i.hasAttribute("readonly");s||i.setAttribute("readonly",""),i.select(),i.setSelectionRange(0,i.value.length),s||i.removeAttribute("readonly"),a=i.value}else{i.hasAttribute("contenteditable")&&i.focus();var f=window.getSelection(),c=document.createRange();c.selectNodeContents(i),f.removeAllRanges(),f.addRange(c),a=f.toString()}return a}n.exports=o},279:function(n){function o(){}o.prototype={on:function(i,a,s){var f=this.e||(this.e={});return(f[i]||(f[i]=[])).push({fn:a,ctx:s}),this},once:function(i,a,s){var f=this;function c(){f.off(i,c),a.apply(s,arguments)}return c._=a,this.on(i,c,s)},emit:function(i){var a=[].slice.call(arguments,1),s=((this.e||(this.e={}))[i]||[]).slice(),f=0,c=s.length;for(f;f{"use strict";/*! + * escape-html + * Copyright(c) 2012-2013 TJ Holowaychuk + * Copyright(c) 2015 Andreas Lubbe + * Copyright(c) 2015 Tiancheng "Timothy" Gu + * MIT Licensed + */var is=/["'&<>]/;Jo.exports=as;function as(e){var t=""+e,r=is.exec(t);if(!r)return t;var n,o="",i=0,a=0;for(i=r.index;i0&&i[i.length-1])&&(c[0]===6||c[0]===2)){r=0;continue}if(c[0]===3&&(!i||c[1]>i[0]&&c[1]=e.length&&(e=void 0),{value:e&&e[n++],done:!e}}};throw new TypeError(t?"Object is not iterable.":"Symbol.iterator is not defined.")}function W(e,t){var r=typeof Symbol=="function"&&e[Symbol.iterator];if(!r)return e;var n=r.call(e),o,i=[],a;try{for(;(t===void 0||t-- >0)&&!(o=n.next()).done;)i.push(o.value)}catch(s){a={error:s}}finally{try{o&&!o.done&&(r=n.return)&&r.call(n)}finally{if(a)throw a.error}}return i}function D(e,t,r){if(r||arguments.length===2)for(var n=0,o=t.length,i;n1||s(m,d)})})}function s(m,d){try{f(n[m](d))}catch(h){p(i[0][3],h)}}function f(m){m.value instanceof Xe?Promise.resolve(m.value.v).then(c,u):p(i[0][2],m)}function c(m){s("next",m)}function u(m){s("throw",m)}function p(m,d){m(d),i.shift(),i.length&&s(i[0][0],i[0][1])}}function mn(e){if(!Symbol.asyncIterator)throw new TypeError("Symbol.asyncIterator is not defined.");var t=e[Symbol.asyncIterator],r;return t?t.call(e):(e=typeof xe=="function"?xe(e):e[Symbol.iterator](),r={},n("next"),n("throw"),n("return"),r[Symbol.asyncIterator]=function(){return this},r);function n(i){r[i]=e[i]&&function(a){return new Promise(function(s,f){a=e[i](a),o(s,f,a.done,a.value)})}}function o(i,a,s,f){Promise.resolve(f).then(function(c){i({value:c,done:s})},a)}}function A(e){return typeof e=="function"}function at(e){var t=function(n){Error.call(n),n.stack=new Error().stack},r=e(t);return r.prototype=Object.create(Error.prototype),r.prototype.constructor=r,r}var $t=at(function(e){return function(r){e(this),this.message=r?r.length+` errors occurred during unsubscription: +`+r.map(function(n,o){return o+1+") "+n.toString()}).join(` + 
`):"",this.name="UnsubscriptionError",this.errors=r}});function De(e,t){if(e){var r=e.indexOf(t);0<=r&&e.splice(r,1)}}var Fe=function(){function e(t){this.initialTeardown=t,this.closed=!1,this._parentage=null,this._finalizers=null}return e.prototype.unsubscribe=function(){var t,r,n,o,i;if(!this.closed){this.closed=!0;var a=this._parentage;if(a)if(this._parentage=null,Array.isArray(a))try{for(var s=xe(a),f=s.next();!f.done;f=s.next()){var c=f.value;c.remove(this)}}catch(v){t={error:v}}finally{try{f&&!f.done&&(r=s.return)&&r.call(s)}finally{if(t)throw t.error}}else a.remove(this);var u=this.initialTeardown;if(A(u))try{u()}catch(v){i=v instanceof $t?v.errors:[v]}var p=this._finalizers;if(p){this._finalizers=null;try{for(var m=xe(p),d=m.next();!d.done;d=m.next()){var h=d.value;try{dn(h)}catch(v){i=i!=null?i:[],v instanceof $t?i=D(D([],W(i)),W(v.errors)):i.push(v)}}}catch(v){n={error:v}}finally{try{d&&!d.done&&(o=m.return)&&o.call(m)}finally{if(n)throw n.error}}}if(i)throw new $t(i)}},e.prototype.add=function(t){var r;if(t&&t!==this)if(this.closed)dn(t);else{if(t instanceof e){if(t.closed||t._hasParent(this))return;t._addParent(this)}(this._finalizers=(r=this._finalizers)!==null&&r!==void 0?r:[]).push(t)}},e.prototype._hasParent=function(t){var r=this._parentage;return r===t||Array.isArray(r)&&r.includes(t)},e.prototype._addParent=function(t){var r=this._parentage;this._parentage=Array.isArray(r)?(r.push(t),r):r?[r,t]:t},e.prototype._removeParent=function(t){var r=this._parentage;r===t?this._parentage=null:Array.isArray(r)&&De(r,t)},e.prototype.remove=function(t){var r=this._finalizers;r&&De(r,t),t instanceof e&&t._removeParent(this)},e.EMPTY=function(){var t=new e;return t.closed=!0,t}(),e}();var Or=Fe.EMPTY;function It(e){return e instanceof Fe||e&&"closed"in e&&A(e.remove)&&A(e.add)&&A(e.unsubscribe)}function dn(e){A(e)?e():e.unsubscribe()}var Ae={onUnhandledError:null,onStoppedNotification:null,Promise:void 0,useDeprecatedSynchronousErrorHandling:!1,useDeprecatedNextContext:!1};var st={setTimeout:function(e,t){for(var r=[],n=2;n0},enumerable:!1,configurable:!0}),t.prototype._trySubscribe=function(r){return this._throwIfClosed(),e.prototype._trySubscribe.call(this,r)},t.prototype._subscribe=function(r){return this._throwIfClosed(),this._checkFinalizedStatuses(r),this._innerSubscribe(r)},t.prototype._innerSubscribe=function(r){var n=this,o=this,i=o.hasError,a=o.isStopped,s=o.observers;return i||a?Or:(this.currentObservers=null,s.push(r),new Fe(function(){n.currentObservers=null,De(s,r)}))},t.prototype._checkFinalizedStatuses=function(r){var n=this,o=n.hasError,i=n.thrownError,a=n.isStopped;o?r.error(i):a&&r.complete()},t.prototype.asObservable=function(){var r=new U;return r.source=this,r},t.create=function(r,n){return new wn(r,n)},t}(U);var wn=function(e){ne(t,e);function t(r,n){var o=e.call(this)||this;return o.destination=r,o.source=n,o}return t.prototype.next=function(r){var n,o;(o=(n=this.destination)===null||n===void 0?void 0:n.next)===null||o===void 0||o.call(n,r)},t.prototype.error=function(r){var n,o;(o=(n=this.destination)===null||n===void 0?void 0:n.error)===null||o===void 0||o.call(n,r)},t.prototype.complete=function(){var r,n;(n=(r=this.destination)===null||r===void 0?void 0:r.complete)===null||n===void 0||n.call(r)},t.prototype._subscribe=function(r){var n,o;return(o=(n=this.source)===null||n===void 0?void 0:n.subscribe(r))!==null&&o!==void 0?o:Or},t}(E);var Et={now:function(){return(Et.delegate||Date).now()},delegate:void 0};var wt=function(e){ne(t,e);function t(r,n,o){r===void 
0&&(r=1/0),n===void 0&&(n=1/0),o===void 0&&(o=Et);var i=e.call(this)||this;return i._bufferSize=r,i._windowTime=n,i._timestampProvider=o,i._buffer=[],i._infiniteTimeWindow=!0,i._infiniteTimeWindow=n===1/0,i._bufferSize=Math.max(1,r),i._windowTime=Math.max(1,n),i}return t.prototype.next=function(r){var n=this,o=n.isStopped,i=n._buffer,a=n._infiniteTimeWindow,s=n._timestampProvider,f=n._windowTime;o||(i.push(r),!a&&i.push(s.now()+f)),this._trimBuffer(),e.prototype.next.call(this,r)},t.prototype._subscribe=function(r){this._throwIfClosed(),this._trimBuffer();for(var n=this._innerSubscribe(r),o=this,i=o._infiniteTimeWindow,a=o._buffer,s=a.slice(),f=0;f0?e.prototype.requestAsyncId.call(this,r,n,o):(r.actions.push(this),r._scheduled||(r._scheduled=ut.requestAnimationFrame(function(){return r.flush(void 0)})))},t.prototype.recycleAsyncId=function(r,n,o){var i;if(o===void 0&&(o=0),o!=null?o>0:this.delay>0)return e.prototype.recycleAsyncId.call(this,r,n,o);var a=r.actions;n!=null&&((i=a[a.length-1])===null||i===void 0?void 0:i.id)!==n&&(ut.cancelAnimationFrame(n),r._scheduled=void 0)},t}(Ut);var On=function(e){ne(t,e);function t(){return e!==null&&e.apply(this,arguments)||this}return t.prototype.flush=function(r){this._active=!0;var n=this._scheduled;this._scheduled=void 0;var o=this.actions,i;r=r||o.shift();do if(i=r.execute(r.state,r.delay))break;while((r=o[0])&&r.id===n&&o.shift());if(this._active=!1,i){for(;(r=o[0])&&r.id===n&&o.shift();)r.unsubscribe();throw i}},t}(Wt);var we=new On(Tn);var R=new U(function(e){return e.complete()});function Dt(e){return e&&A(e.schedule)}function kr(e){return e[e.length-1]}function Qe(e){return A(kr(e))?e.pop():void 0}function Se(e){return Dt(kr(e))?e.pop():void 0}function Vt(e,t){return typeof kr(e)=="number"?e.pop():t}var pt=function(e){return e&&typeof e.length=="number"&&typeof e!="function"};function zt(e){return A(e==null?void 0:e.then)}function Nt(e){return A(e[ft])}function qt(e){return Symbol.asyncIterator&&A(e==null?void 0:e[Symbol.asyncIterator])}function Kt(e){return new TypeError("You provided "+(e!==null&&typeof e=="object"?"an invalid object":"'"+e+"'")+" where a stream was expected. 
You can provide an Observable, Promise, ReadableStream, Array, AsyncIterable, or Iterable.")}function Ki(){return typeof Symbol!="function"||!Symbol.iterator?"@@iterator":Symbol.iterator}var Qt=Ki();function Yt(e){return A(e==null?void 0:e[Qt])}function Gt(e){return ln(this,arguments,function(){var r,n,o,i;return Pt(this,function(a){switch(a.label){case 0:r=e.getReader(),a.label=1;case 1:a.trys.push([1,,9,10]),a.label=2;case 2:return[4,Xe(r.read())];case 3:return n=a.sent(),o=n.value,i=n.done,i?[4,Xe(void 0)]:[3,5];case 4:return[2,a.sent()];case 5:return[4,Xe(o)];case 6:return[4,a.sent()];case 7:return a.sent(),[3,2];case 8:return[3,10];case 9:return r.releaseLock(),[7];case 10:return[2]}})})}function Bt(e){return A(e==null?void 0:e.getReader)}function $(e){if(e instanceof U)return e;if(e!=null){if(Nt(e))return Qi(e);if(pt(e))return Yi(e);if(zt(e))return Gi(e);if(qt(e))return _n(e);if(Yt(e))return Bi(e);if(Bt(e))return Ji(e)}throw Kt(e)}function Qi(e){return new U(function(t){var r=e[ft]();if(A(r.subscribe))return r.subscribe(t);throw new TypeError("Provided object does not correctly implement Symbol.observable")})}function Yi(e){return new U(function(t){for(var r=0;r=2;return function(n){return n.pipe(e?_(function(o,i){return e(o,i,n)}):me,Oe(1),r?He(t):zn(function(){return new Xt}))}}function Nn(){for(var e=[],t=0;t=2,!0))}function fe(e){e===void 0&&(e={});var t=e.connector,r=t===void 0?function(){return new E}:t,n=e.resetOnError,o=n===void 0?!0:n,i=e.resetOnComplete,a=i===void 0?!0:i,s=e.resetOnRefCountZero,f=s===void 0?!0:s;return function(c){var u,p,m,d=0,h=!1,v=!1,B=function(){p==null||p.unsubscribe(),p=void 0},re=function(){B(),u=m=void 0,h=v=!1},z=function(){var T=u;re(),T==null||T.unsubscribe()};return g(function(T,Ke){d++,!v&&!h&&B();var We=m=m!=null?m:r();Ke.add(function(){d--,d===0&&!v&&!h&&(p=jr(z,f))}),We.subscribe(Ke),!u&&d>0&&(u=new et({next:function(Ie){return We.next(Ie)},error:function(Ie){v=!0,B(),p=jr(re,o,Ie),We.error(Ie)},complete:function(){h=!0,B(),p=jr(re,a),We.complete()}}),$(T).subscribe(u))})(c)}}function jr(e,t){for(var r=[],n=2;ne.next(document)),e}function K(e,t=document){return Array.from(t.querySelectorAll(e))}function V(e,t=document){let r=se(e,t);if(typeof r=="undefined")throw new ReferenceError(`Missing element: expected "${e}" to be present`);return r}function se(e,t=document){return t.querySelector(e)||void 0}function _e(){return document.activeElement instanceof HTMLElement&&document.activeElement||void 0}function tr(e){return L(b(document.body,"focusin"),b(document.body,"focusout")).pipe(ke(1),l(()=>{let t=_e();return typeof t!="undefined"?e.contains(t):!1}),N(e===_e()),Y())}function Be(e){return{x:e.offsetLeft,y:e.offsetTop}}function Yn(e){return L(b(window,"load"),b(window,"resize")).pipe(Ce(0,we),l(()=>Be(e)),N(Be(e)))}function rr(e){return{x:e.scrollLeft,y:e.scrollTop}}function dt(e){return L(b(e,"scroll"),b(window,"resize")).pipe(Ce(0,we),l(()=>rr(e)),N(rr(e)))}var Bn=function(){if(typeof Map!="undefined")return Map;function e(t,r){var n=-1;return t.some(function(o,i){return o[0]===r?(n=i,!0):!1}),n}return function(){function t(){this.__entries__=[]}return Object.defineProperty(t.prototype,"size",{get:function(){return this.__entries__.length},enumerable:!0,configurable:!0}),t.prototype.get=function(r){var n=e(this.__entries__,r),o=this.__entries__[n];return o&&o[1]},t.prototype.set=function(r,n){var o=e(this.__entries__,r);~o?this.__entries__[o][1]=n:this.__entries__.push([r,n])},t.prototype.delete=function(r){var 
n=this.__entries__,o=e(n,r);~o&&n.splice(o,1)},t.prototype.has=function(r){return!!~e(this.__entries__,r)},t.prototype.clear=function(){this.__entries__.splice(0)},t.prototype.forEach=function(r,n){n===void 0&&(n=null);for(var o=0,i=this.__entries__;o0},e.prototype.connect_=function(){!zr||this.connected_||(document.addEventListener("transitionend",this.onTransitionEnd_),window.addEventListener("resize",this.refresh),xa?(this.mutationsObserver_=new MutationObserver(this.refresh),this.mutationsObserver_.observe(document,{attributes:!0,childList:!0,characterData:!0,subtree:!0})):(document.addEventListener("DOMSubtreeModified",this.refresh),this.mutationEventsAdded_=!0),this.connected_=!0)},e.prototype.disconnect_=function(){!zr||!this.connected_||(document.removeEventListener("transitionend",this.onTransitionEnd_),window.removeEventListener("resize",this.refresh),this.mutationsObserver_&&this.mutationsObserver_.disconnect(),this.mutationEventsAdded_&&document.removeEventListener("DOMSubtreeModified",this.refresh),this.mutationsObserver_=null,this.mutationEventsAdded_=!1,this.connected_=!1)},e.prototype.onTransitionEnd_=function(t){var r=t.propertyName,n=r===void 0?"":r,o=ya.some(function(i){return!!~n.indexOf(i)});o&&this.refresh()},e.getInstance=function(){return this.instance_||(this.instance_=new e),this.instance_},e.instance_=null,e}(),Jn=function(e,t){for(var r=0,n=Object.keys(t);r0},e}(),Zn=typeof WeakMap!="undefined"?new WeakMap:new Bn,eo=function(){function e(t){if(!(this instanceof e))throw new TypeError("Cannot call a class as a function.");if(!arguments.length)throw new TypeError("1 argument required, but only 0 present.");var r=Ea.getInstance(),n=new Ra(t,r,this);Zn.set(this,n)}return e}();["observe","unobserve","disconnect"].forEach(function(e){eo.prototype[e]=function(){var t;return(t=Zn.get(this))[e].apply(t,arguments)}});var ka=function(){return typeof nr.ResizeObserver!="undefined"?nr.ResizeObserver:eo}(),to=ka;var ro=new E,Ha=I(()=>H(new to(e=>{for(let t of e)ro.next(t)}))).pipe(x(e=>L(Te,H(e)).pipe(C(()=>e.disconnect()))),J(1));function de(e){return{width:e.offsetWidth,height:e.offsetHeight}}function ge(e){return Ha.pipe(S(t=>t.observe(e)),x(t=>ro.pipe(_(({target:r})=>r===e),C(()=>t.unobserve(e)),l(()=>de(e)))),N(de(e)))}function bt(e){return{width:e.scrollWidth,height:e.scrollHeight}}function ar(e){let t=e.parentElement;for(;t&&(e.scrollWidth<=t.scrollWidth&&e.scrollHeight<=t.scrollHeight);)t=(e=t).parentElement;return t?e:void 0}var no=new E,Pa=I(()=>H(new IntersectionObserver(e=>{for(let t of e)no.next(t)},{threshold:0}))).pipe(x(e=>L(Te,H(e)).pipe(C(()=>e.disconnect()))),J(1));function sr(e){return Pa.pipe(S(t=>t.observe(e)),x(t=>no.pipe(_(({target:r})=>r===e),C(()=>t.unobserve(e)),l(({isIntersecting:r})=>r))))}function oo(e,t=16){return dt(e).pipe(l(({y:r})=>{let n=de(e),o=bt(e);return r>=o.height-n.height-t}),Y())}var cr={drawer:V("[data-md-toggle=drawer]"),search:V("[data-md-toggle=search]")};function io(e){return cr[e].checked}function qe(e,t){cr[e].checked!==t&&cr[e].click()}function je(e){let t=cr[e];return b(t,"change").pipe(l(()=>t.checked),N(t.checked))}function $a(e,t){switch(e.constructor){case HTMLInputElement:return e.type==="radio"?/^Arrow/.test(t):!0;case HTMLSelectElement:case HTMLTextAreaElement:return!0;default:return e.isContentEditable}}function Ia(){return L(b(window,"compositionstart").pipe(l(()=>!0)),b(window,"compositionend").pipe(l(()=>!1))).pipe(N(!1))}function ao(){let 
e=b(window,"keydown").pipe(_(t=>!(t.metaKey||t.ctrlKey)),l(t=>({mode:io("search")?"search":"global",type:t.key,claim(){t.preventDefault(),t.stopPropagation()}})),_(({mode:t,type:r})=>{if(t==="global"){let n=_e();if(typeof n!="undefined")return!$a(n,r)}return!0}),fe());return Ia().pipe(x(t=>t?R:e))}function Me(){return new URL(location.href)}function ot(e){location.href=e.href}function so(){return new E}function co(e,t){if(typeof t=="string"||typeof t=="number")e.innerHTML+=t.toString();else if(t instanceof Node)e.appendChild(t);else if(Array.isArray(t))for(let r of t)co(e,r)}function M(e,t,...r){let n=document.createElement(e);if(t)for(let o of Object.keys(t))typeof t[o]!="undefined"&&(typeof t[o]!="boolean"?n.setAttribute(o,t[o]):n.setAttribute(o,""));for(let o of r)co(n,o);return n}function fr(e){if(e>999){let t=+((e-950)%1e3>99);return`${((e+1e-6)/1e3).toFixed(t)}k`}else return e.toString()}function fo(){return location.hash.substring(1)}function uo(e){let t=M("a",{href:e});t.addEventListener("click",r=>r.stopPropagation()),t.click()}function Fa(){return b(window,"hashchange").pipe(l(fo),N(fo()),_(e=>e.length>0),J(1))}function po(){return Fa().pipe(l(e=>se(`[id="${e}"]`)),_(e=>typeof e!="undefined"))}function Nr(e){let t=matchMedia(e);return Zt(r=>t.addListener(()=>r(t.matches))).pipe(N(t.matches))}function lo(){let e=matchMedia("print");return L(b(window,"beforeprint").pipe(l(()=>!0)),b(window,"afterprint").pipe(l(()=>!1))).pipe(N(e.matches))}function qr(e,t){return e.pipe(x(r=>r?t():R))}function ur(e,t={credentials:"same-origin"}){return ve(fetch(`${e}`,t)).pipe(ce(()=>R),x(r=>r.status!==200?Tt(()=>new Error(r.statusText)):H(r)))}function Ue(e,t){return ur(e,t).pipe(x(r=>r.json()),J(1))}function mo(e,t){let r=new DOMParser;return ur(e,t).pipe(x(n=>n.text()),l(n=>r.parseFromString(n,"text/xml")),J(1))}function pr(e){let t=M("script",{src:e});return I(()=>(document.head.appendChild(t),L(b(t,"load"),b(t,"error").pipe(x(()=>Tt(()=>new ReferenceError(`Invalid script: ${e}`))))).pipe(l(()=>{}),C(()=>document.head.removeChild(t)),Oe(1))))}function ho(){return{x:Math.max(0,scrollX),y:Math.max(0,scrollY)}}function bo(){return L(b(window,"scroll",{passive:!0}),b(window,"resize",{passive:!0})).pipe(l(ho),N(ho()))}function vo(){return{width:innerWidth,height:innerHeight}}function go(){return b(window,"resize",{passive:!0}).pipe(l(vo),N(vo()))}function yo(){return Q([bo(),go()]).pipe(l(([e,t])=>({offset:e,size:t})),J(1))}function lr(e,{viewport$:t,header$:r}){let n=t.pipe(X("size")),o=Q([n,r]).pipe(l(()=>Be(e)));return Q([r,t,o]).pipe(l(([{height:i},{offset:a,size:s},{x:f,y:c}])=>({offset:{x:a.x-f,y:a.y-c+i},size:s})))}(()=>{function e(n,o){parent.postMessage(n,o||"*")}function t(...n){return n.reduce((o,i)=>o.then(()=>new Promise(a=>{let s=document.createElement("script");s.src=i,s.onload=a,document.body.appendChild(s)})),Promise.resolve())}var r=class{constructor(n){this.url=n,this.onerror=null,this.onmessage=null,this.onmessageerror=null,this.m=a=>{a.source===this.w&&(a.stopImmediatePropagation(),this.dispatchEvent(new MessageEvent("message",{data:a.data})),this.onmessage&&this.onmessage(a))},this.e=(a,s,f,c,u)=>{if(s===this.url.toString()){let p=new ErrorEvent("error",{message:a,filename:s,lineno:f,colno:c,error:u});this.dispatchEvent(p),this.onerror&&this.onerror(p)}};let o=new EventTarget;this.addEventListener=o.addEventListener.bind(o),this.removeEventListener=o.removeEventListener.bind(o),this.dispatchEvent=o.dispatchEvent.bind(o);let 
i=document.createElement("iframe");i.width=i.height=i.frameBorder="0",document.body.appendChild(this.iframe=i),this.w.document.open(),this.w.document.write(` + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Code of Conduct#

+

Our Pledge#

+

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.

+

Our Standards#

+

Examples of behavior that contributes to creating a positive environment include:

+
    +
  • Using welcoming and inclusive language.
  • +
  • Being respectful of differing viewpoints and experiences.
  • +
  • Gracefully accepting constructive criticism.
  • +
  • Focusing on what is best for the community.
  • +
  • Showing empathy towards other community members.
  • +
+

Examples of unacceptable behavior by participants include:

+
    +
  • The use of sexualized language or imagery and unwelcome sexual attention or advances.
  • +
  • Trolling, insulting/derogatory comments, and personal or political attacks.
  • +
  • Public or private harassment.
  • +
  • Publishing others' private information, such as a physical or electronic address, without explicit permission.
  • +
  • Other conduct which could reasonably be considered inappropriate in a professional setting.
  • +
+

Our Responsibilities#

+

Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

+

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

+

Scope#

+

This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project email address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.

+

Enforcement#

+

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at uselagoon@amazee.io. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

+

Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.

+

Attribution#

+

This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at http://contributor-covenant.org/version/1/4.

+ + + + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/community/discord/index.html b/community/discord/index.html new file mode 100644 index 0000000000..345294b412 --- /dev/null +++ b/community/discord/index.html @@ -0,0 +1,2682 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Discord - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Lagoon Community on Discord#

+

Our official community meeting space is the Lagoon Discord.

+

We’re starting this community as a place for all Lagoon users to collaborate, solve problems, share ideas, and contribute back to the Lagoon project. We’re working to consolidate our community as it’s currently spread out over Slack and various other places. We also wanted to invite all of our users and customers to join so that everyone can benefit from the community, no matter how they’re using Lagoon.

+

Please remember that this is not to replace your current support channels - those will remain the same. This is a place to connect with other users as well as the Lagoon maintainers.

+

We ask that all community members review our Participation and Moderation Guidelines, as well as the Code of Conduct.

+

In addition to our Zoom Community Hours, we'll also be hosting Community Hours on Discord in 2023!

+ + + + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/community/moderation/index.html b/community/moderation/index.html new file mode 100644 index 0000000000..64e637f30e --- /dev/null +++ b/community/moderation/index.html @@ -0,0 +1,2762 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Moderation Guidelines - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Lagoon Moderation Guidelines#

+

These guidelines have been adapted from Drupal Diversity & Inclusion’s Moderation Guidelines.

+

In Lagoon spaces, strive to promote understanding and empathy, and to increase personal awareness of all people. This includes people from across the Drupal Community and the greater Technical Community, even those you may personally disagree with.

+

If kicked from the Discord, the user can send a private message (PM) to the kicker or another Moderator, if desired, to ask for re-admittance. If a disruptive person is engaging in what appears to be intentionally inflammatory, bullying, or harassing behavior that provokes hostile responses (or is acting in a hostile manner), kicking is faster and easier than trying to placate them while their behavior causes distress to other channel members.

+

The kick is not a ban. There are times when disruptive or triggering comments and statements are genuine and break the lines of communication between two parties. By speaking with a Moderator, the (potentially) disruptive person can be coached on using more sensitive, inclusive, and diverse-aware language, and on engaging in a more constructive manner.

+

Tiered Responses#

+
    +
  1. +

    Tier One Response

    +

    User is welcomed in the channel, asked to read some scroll back, and given a link to participation guidelines.

    +
  2. +
  3. +

    Tier Two Response

    +

    User is gently reminded in channel to keep posts on topic, and/or of participation guidelines.

    +
  4. +
  5. +

    Tier Three Response

    +

    User is PM’d by available Moderator to explain the problem(s) with their posts and given suggestions of what to do differently.

    +
  6. +
  7. +

    Tier Four Response

    +

    If behavior continues, the user is kicked from the Discord for no less than 24 hours.

    +
  8. +
+

Non-Tiered Response Banning#

+

Intentionally disruptive individuals get kicked, not tiered. Repeated offenses will result in a ban.

+ + + + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/community/participation/index.html b/community/participation/index.html new file mode 100644 index 0000000000..6f673b803b --- /dev/null +++ b/community/participation/index.html @@ -0,0 +1,2751 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Participation Guidelines - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Lagoon Participation Guidelines#

+

We ask that all members of our community, in any spaces, virtual or physical, adhere to our Code of Conduct.

+

These guidelines have been adapted from Drupal Diversity & Inclusion’s Participation Guidelines.

+
    +
  1. Listen actively, read carefully, and be understanding.
      +
    • If joining a conversation, read the backlog. Give other Participants the opportunity to communicate effectively.
    • +
    • Assume good intent behind other Participants’ statements. The open-source software community is very diverse, with Participants from all over the globe. Be aware of cultural and linguistic quirks.
    • +
    • There are also many Participants who are new to this space. Assume that they have good intent but have not yet mastered the language or ideas. We want to help them!
    • +
    +
  2. +
  3. Speak from your own experience, instead of generalizing. Recognize the worth of others’ experience. Try not to speak for others.
      +
    • Use “I” instead of “they,” “we,” and “you”.
    • +
    • All Participants should recognize that other Participants have their own unique experiences.
    • +
    • Don’t invalidate another Participant’s story with your own spin on their experience. Instead, share your own story and experience.
    • +
    +
  4. +
  5. Challenge ideas, feelings, concerns, or one another by asking questions. Refrain from personal attacks. Focus on ideas first.
      +
    • Avoid verbal challenges, backhanded insults, gender/race/region stereotyping, etc.
    • +
    +
  6. +
  7. Take part to the fullest of your ability and availability.
      +
    • Community growth depends on the inclusion of individual voices. The channel wants you to speak up and speak out. Everyone has a different amount of time to contribute. We value participation here whether you can give 5 minutes or 5 hours.
    • +
    • We do welcome those who quietly come to watch and learn, or “lurk,” but please introduce yourself and say hello!
    • +
    +
  8. +
  9. Accept that it is not always the goal to agree.
      +
    • There are often many different “right” answers to a technical issue, even if a given answer may not work for your setup.
    • +
    +
  10. +
  11. Be conscious of language differences and unintended connotations.
      +
    • “Text is hard” - be aware that it is difficult to communicate effectively via text.
    • +
    +
  12. +
  13. Acknowledge individuals’ identities.
      +
    • Use stated names and pronouns. Do not challenge a person’s race, sexuality, disability, etc.
    • +
    • If you are unsure how to address someone, ask them discreetly and respectfully. For example, if you are unsure what pronouns to use, send a private message and ask. Using the correct pronouns will help others.
    • +
    +
  14. +
  15. Some off-topic conversation is okay.
      +
    • Some cross posting of announcements is okay. The following is not permitted:
        +
      • Thread hijacking
      • +
      • Spamming
      • +
      • Commercial advertising
      • +
      • Overt self-promotion
      • +
      • Excessive going off-topic, especially during official meeting times or focused conversations
      • +
      +
    • +
    • Consider announcing more appropriate places or times for in-depth off-topic conversations.
    • +
    • If you are not sure what’s appropriate, please contact an admin.
    • +
    +
  16. +
  17. Sharing content from inside Lagoon spaces must only be done with explicit consent. Any sharing must also be carefully considered, and without harassment or intent to harm any Participants.
      +
    • This forum should be considered public. Assume that anyone can and may read anything posted here.
    • +
    • When sharing any Lagoon content, permission from all Participants must be obtained first. This applies whether content is quoted, summarized, or screenshotted. This includes sharing in any public medium: on Twitter, in a blog post, in an article, on a podcast, etc. These spaces are where the discussion and work in progress is taking place. Removing snippets of a conversation takes away context. This can distort and discourage discussion, especially when this is done without the goal of driving the Lagoon project forward.
    • +
    • As stated above, if you take screenshots and post them to social media or other forums, you must get permission from the person that posted it. When getting permission, include the option of removing identifying information. Permission is still needed even if identifying information is removed. This includes any content from Discord, Github, or any other Lagoon medium.
    • +
    • If you want to share something, just ask! “Hey, is it ok to share this on Twitter? I’m happy to credit you!”
    • +
    • If it is necessary for a participant to take a screenshot to report harassing behavior to Lagoon moderators, this may be done without obtaining permission. It is not, however, acceptable to take screenshots to publicly or privately shame an individual. Again, this applies only to reporting harassing behavior.
    • +
    +
  18. +
  19. Address complaints between one another in the space when safe and appropriate.
      +
    • When safe, try to clarify and engage in the space where the conflict happened. For example, in the Discord channel.
    • +
    • Ping admins or Community Manager (Alanna) when conflict is escalating.
    • +
    • Ask for help.
    • +
    • If the topic of conflict is off-topic for Lagoon, move the conversation to a more appropriate channel.
    • +
    +
  20. +
+

Additional considerations for in-person Lagoon spaces

+
    +
  1. Follow the event’s Code of Conduct, if there is one. If not, our Code of Conduct applies.
  2. +
  3. Do not touch people, their mobility devices, or other assistive equipment without their consent. If someone asks you to stop a certain behavior, stop immediately.
  4. +
  5. Report any issues to the event’s staff. If an issue involves Lagoon team members, report to uselagoon@amazee.io.
  6. +
+

The Lagoon team reserves the right to terminate anyone’s access to the Lagoon spaces.

+ + + + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/contributing-to-lagoon/api-debugging/index.html b/contributing-to-lagoon/api-debugging/index.html new file mode 100644 index 0000000000..3e328a872e --- /dev/null +++ b/contributing-to-lagoon/api-debugging/index.html @@ -0,0 +1,2721 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + API Debugging - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

API Debugging#

+

1. Ensure the dev script at services/api/package.json includes the following:

+
services/api/package.json
node --inspect=0.0.0.0:9229
+
+
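
As a hypothetical illustration of where that flag sits (the actual dev script and entry point in services/api/package.json may differ):

+
services/api/package.json (illustrative)
{
+  "scripts": {
+    "dev": "tsc-watch --onSuccess \"node --inspect=0.0.0.0:9229 dist/index.js\""
+  }
+}
+
+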

2. Update docker-compose.yml to map the dist folder and expose the 9229 port:

+
docker-compose.yml
  api:
+    image: ${IMAGE_REPO:-lagoon}/api
+    command: yarn run dev
+    volumes:
+      - ./services/api/src:/app/services/api/src
+      - ./services/api/dist:/app/services/api/dist
+  depends_on:
+      - api-db
+      - local-api-data-watcher-pusher
+      - keycloak
+    ports:
+      - '3000:3000'
+      - '9229:9229'
+
+

3. Add the following to .vscode/launch.json:

+
.vscode/launch.json
{
+  // Use IntelliSense to learn about possible attributes.
+  // Hover to view descriptions of existing attributes.
+  // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387.
+  "version": "0.2.0",
+  "configurations": [
+    {
+      "name": "Docker: Attach to Node",
+      "type": "node",
+      "request": "attach",
+      "port": 9229,
+      "address": "localhost",
+      "outFiles": ["${workspaceRoot}/app/services/api/dist/**/*.js"],
+      "localRoot": "${workspaceFolder}/services/api",
+      "remoteRoot": "/app/services/api",
+      "sourceMaps": true,
+      "protocol": "inspector"
+    }
+  ]
+}
+
+

4. Rebuild/restart the containers:

+
Restart containers
rm build/api && make build/api && docker-compose restart api
+
+

5. Restart VS Code.

+ + + + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/contributing-to-lagoon/developing-lagoon/index.html b/contributing-to-lagoon/developing-lagoon/index.html new file mode 100644 index 0000000000..c432615a91 --- /dev/null +++ b/contributing-to-lagoon/developing-lagoon/index.html @@ -0,0 +1,3171 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Developing Lagoon - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + +

Developing Lagoon#

+

Development of Lagoon locally can now be performed on a local Kubernetes cluster, or via Docker Compose (as a fallback).

+
+

Note

+

The full Lagoon stack relies on a range of upstream projects which are currently incompatible with ARM-based architectures, such as the M1/M2 Apple Silicon-based machines. For this reason, running or developing lagoon-core or lagoon-remote locally on these architectures is not currently supported. See https://github.com/uselagoon/lagoon/issues/3189 for more information.

+
+

Docker#

+

Docker must be installed to build and run Lagoon locally.

+

Install Docker and Docker Compose#

+

Please check the official docs for how to install Docker.

+

Docker Compose is included in Docker for Mac installations. For Linux installations see the directions here.

+

Configure Docker#

+

You will need to update your insecure registries in Docker. Read the instructions here on how to do that. We suggest adding the entire local IPv4 Private Address Spaces to avoid unnecessary reconfiguration between Kubernetes and Docker Compose. e.g. "insecure-registries" : ["172.16.0.0/12","192.168.0.0/16"],

+
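
For example, on Linux this setting lives in /etc/docker/daemon.json (Docker Desktop exposes the same JSON under its Docker Engine preferences); a minimal sketch using the ranges suggested above:

+
/etc/docker/daemon.json (sketch)
{
+  "insecure-registries": ["172.16.0.0/12", "192.168.0.0/16"]
+}
+
+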

Allocate Enough Docker Resources#

+

Running a Lagoon, Kubernetes, or Docker cluster on your local machine consumes a lot of resources. We recommend that you give your Docker host a minimum of 8 CPU cores and 12GB RAM.

+

Build Lagoon Locally#

+
+

Warning

+

Only consider building Lagoon this way if you intend to develop features or functionality for it, or want to debug internal processes. We will also be providing instructions for installing Lagoon without building it (i.e. by using the published releases).

+
+

We're using make (see the Makefile) in order to build the needed Docker images, configure Kubernetes and run tests.

+

We have provided a number of routines in the Makefile to cover most local development scenarios. Here we will run through a complete process.

+

Build images#

+
    +
  1. Here -j8 tells make to run 8 tasks in parallel to speed up the build. Adjust as necessary.
  2. +
  3. We have set SCAN_IMAGES=false as a default to not scan the built images for vulnerabilities. If set to true, a scan.txt file will be created in the project root with the scan output.
  4. +
+
Build images
make -j8 build
+
+
    +
  1. Start the Lagoon test routine using the defaults in the Makefile (all tests).
  2. +
+
Start tests
make kind/test
+
+
+

Warning

+

There are a lot of tests configured to run by default - please consider only testing locally the minimum that you need to ensure functionality. This can be done by specifying or removing tests from the TESTS variable in the Makefile.

+
+
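
If you only need a subset of tests, you don't have to edit the Makefile: make variables can be overridden inline, the same way kind/retest is shown further below. A sketch:

+
Run a reduced test suite (sketch)
make kind/test TESTS='[nginx]'
+
+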

This process will:

+
    +
  1. Download the correct versions of the local development tools if not installed - kind, kubectl, helm, jq.
  2. +
  3. Update the necessary Helm repositories for Lagoon to function.
  4. +
  5. Ensure all of the correct images have been built in the previous step.
  6. +
  7. Create a local KinD cluster, which provisions an entire running Kubernetes cluster in a local Docker container. This cluster has been configured to talk to a provisioned image registry that we will be pushing the built Lagoon images to. It has also been configured to allow access to the host filesystem for local development.
  8. +
  9. Clone lagoon-charts from https://github.com/uselagoon/lagoon-charts (use the CHARTS_TREEISH variable in the Makefile to control which branch, if needed).
  10. +
  11. Install the Harbor Image registry into the KinD cluster and configure its ingress and access properly.
  12. +
  13. Docker will push the built images for Lagoon into the Harbor image registry.
  14. +
  15. It then uses the Makefile from lagoon-charts to perform the rest of the setup steps.
  16. +
  17. A suitable ingress controller is installed - we use the NGINX Ingress Controller.
  18. +
  19. A local NFS server provisioner is installed to handle specific volume requests - we use one that handles Read-Write-Many operations (RWX).
  20. +
  21. Lagoon Core is then installed, using the locally built images pushed to the cluster-local Image Registry, and using the default configuration, which may exclude some services not needed for local testing. The installation will wait for the API and Keycloak to come online.
  22. +
  23. The DBaaS providers are installed - MariaDB, PostgreSQL and MongoDB. This step provisions standalone databases to be used by projects running locally, and emulates the managed services available via cloud providers (e.g. Cloud SQL, RDS or Azure Database).
  24. +
  25. Lagoon Remote is then installed, and configured to talk to the Lagoon Core, databases and local storage. The installation will wait for this to complete before continuing.
  26. +
  27. To provision the tests, the Lagoon Test chart is then installed, which provisions a local Git server to host the test repositories, and pre-configures the Lagoon API database with the default test users, accounts and configuration. It then performs readiness checks before starting tests.
  28. +
  29. Lagoon will run all the tests specified in the TESTS variable in the Makefile. Each test creates its own project & environments, performs the tests, and then removes the environments & projects. The test runs are output to the console log in the lagoon-test-suite-* pod, and can be accessed one test per container.
  30. +
+

Ideally, all of the tests pass and it's all done!

+

View the test progress and your local cluster#

+

The test routine creates a local Kubeconfig file (called kubeconfig.kind.lagoon) in the root of the project, which can be used with a Kubernetes dashboard, viewer or CLI tool to access the local cluster. We use tools like Lens, Octant, kubectl or Portainer in our workflows. Lagoon Core, Remote and Tests all build in the Lagoon namespace, and each environment creates its own namespace to run in, so make sure to use the correct context when inspecting.

+

In order to use kubectl with the local cluster, you will need to use the correct Kubeconfig. This can be done for every command or it can be added to your preferred tool:

+
kubeconfig.kind.lagoon
KUBECONFIG=./kubeconfig.kind.lagoon kubectl get pods -n lagoon
+
+
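
To avoid prefixing every command, you can also export the variable for your shell session:

+
Export for the session
export KUBECONFIG="$PWD/kubeconfig.kind.lagoon"
+kubectl get pods -n lagoon
+
+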

The Helm charts used to build the local Lagoon are cloned into a local folder and symlinked to lagoon-charts.kind.lagoon where you can see the configuration. We'll cover how to make easy modifications later in this documentation.

+

Interact with your local Lagoon cluster#

+

The Makefile includes a few simple routines that will make interacting with the installed Lagoon simpler:

+
Create local ports
make kind/port-forwards
+
+

This will create local ports to expose the UI (6060), API (7070) and Keycloak (8080). Note that this logs to stdout, so it should be performed in a secondary terminal/window.

+
Retrieve admin creds
make kind/get-admin-creds
+
+

This will retrieve the necessary credentials to interact with the Lagoon.

+
    +
  • The JWT is an admin-scoped token for use as a bearer token with your local GraphQL client (see the example query after this list). See more in our GraphQL documentation.
  • +
  • There is a token for use with the "admin" user in Keycloak, who can access all users, groups, roles, etc.
  • +
  • There is also a token for use with the "lagoonadmin" user in Lagoon, which can be allocated default groups, permissions, etc.
  • +
+
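
For example, assuming you have make kind/port-forwards running (API on port 7070) and have exported the admin JWT from make kind/get-admin-creds into a JWT environment variable, a query can be sent with curl. This is a sketch: the /graphql path and the allProjects query follow the Lagoon API's GraphQL conventions, but verify them against your local setup.

+
Query the API with the admin JWT (sketch)
curl -s http://localhost:7070/graphql \
+  -H "Authorization: Bearer $JWT" \
+  -H "Content-Type: application/json" \
+  --data '{"query":"{ allProjects { id name } }"}'
+
+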
Re-push images
make kind/dev
+
+

This will re-push the images listed in KIND_SERVICES with the correct tag, and redeploy the lagoon-core chart. This is useful for testing small changes to Lagoon services, but does not support "live" development. You will need to rebuild these images locally first, e.g. rm build/api && make build/api.

+
Build TypeScript services
make kind/local-dev-patch
+
+

This will build the TypeScript services, using your locally installed Node.js (it should be >16.0). It will then:

+
    +
  • Mount the "dist" folders from the Lagoon services into the correct lagoon-core pods in Kubernetes
  • +
  • Redeploy the lagoon-core chart with the services running with nodemon watching the code for changes
  • +
  • This will facilitate "live" development on Lagoon.
  • +
  • Note that occasionally the pod in Kubernetes may require redeployment for a change to show. If you're rebuilding different branches, clean any build artifacts from those services with git clean -dfx, as the dist folders are ignored by Git.
  • +
+
Initiate logging
make kind/local-dev-logging
+
+

This will create a standalone OpenDistro for Elasticsearch cluster in your local Docker, and configure Lagoon to dispatch all logs (Lagoon and project) to it, using the configuration in lagoon-logging.

+
Re-run tests.
make kind/retest
+# OR
+make kind/retest TESTS='[features-kubernetes]'
+
+

This will re-run a suite of tests (defined in the TESTS variable) against the existing cluster. It will re-push the images needed for tests (tests, local-git, and the data-watcher-pusher). You can specify tests to run by passing the TESTS variable inline.

+

If updating a test configuration, the tests image will need to be rebuilt and pushed, e.g. rm build/tests && make build/tests && make kind/push-images IMAGES='tests' && make kind/retest TESTS='[api]'

+
Push all images
make kind/push-images
+# OR
+make kind/push-images IMAGES='tests local-git'
+
+

This will push all the images up to the image registry. Specifying IMAGES will tag and push specific images.

+
Remove cluster
make kind/clean
+
+

This will remove the KinD Lagoon cluster from your local Docker.

+

Ansible#

+

The Lagoon tests use Ansible to run the test suite. Each range of tests for a specific function has been split into its own routine. If you are performing development work locally, select which tests to run, and update the $TESTS variable in the Makefile to reduce the number of tests running concurrently.

+

The configuration for these tests is held in three services:

+
    +
  • tests contains the Ansible test services themselves. The local testing routine runs each individual test as a separate container within a test-suite pod. These are listed below.
  • +
  • local-git is a Git server hosted in the cluster that holds the source files for the tests. Ansible pulls from and pushes to this repository throughout the tests.
  • +
  • api-data-watcher-pusher is a set of GraphQL mutations that pre-populates local Lagoon with the necessary Kubernetes configuration, test user accounts and SSH keys, and the necessary groups and notifications. Note that this will wipe local projects and environments on each run.
  • +
+

The individual routines relevant to Kubernetes are:

+
    +
  • active-standby-kubernetes runs tests to check active/standby in Kubernetes.
  • +
  • api runs tests for the API - branch/PR deployment, promotion.
  • +
  • bitbucket, gitlab and github run tests for the specific SCM providers.
  • +
  • drupal-php74 runs a single-pod MariaDB, MariaDB DBaaS and a Drush-specific test for a Drupal 8/9 project (drupal-php73 doesn't do the Drush test).
  • +
  • drupal-postgres runs a single-pod PostgreSQL and a PostgreSQL DBaaS test for a Drupal 8 project.
  • +
  • elasticsearch runs a simple NGINX proxy to an Elasticsearch single-pod.
  • +
  • features-variables runs tests that utilize variables in Lagoon.
  • +
  • features-kubernetes runs a range of standard Lagoon tests, specific to Kubernetes.
  • +
  • features-kubernetes-2 runs more advanced kubernetes-specific tests - covering multi-project and subfolder configurations.
  • +
  • nginx, node and python run basic tests against those project types.
  • +
  • node-mongodb runs a single-pod MongoDB test and a MongoDB DBaaS test against a Node.js app.
  • +
+

Local Development#

+

Most services are written in Node.js. As many of these services share similar Node.js code and Node.js packages, we're using a feature of Yarn, called Yarn workspaces. Yarn workspaces need a package.json in the project's root directory that defines the workspaces.

+
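
As a minimal sketch of that mechanism (the workspace globs here are illustrative, not copied from Lagoon's actual root package.json):

+
package.json (illustrative)
{
+  "private": true,
+  "workspaces": [
+    "services/*",
+    "node-packages/*"
+  ]
+}
+
+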

The development of the services can happen directly within Docker. Each container for each service is set up in a way that its source code is mounted into the running container (see docker-compose.yml). Node.js itself watches the code via nodemon, and restarts the Node.js process automatically on a change.

+

lagoon-commons#

+

The services not only share many Node.js packages, but also share actual custom code. This code is within node-packages/lagoon-commons. It will be automatically symlinked by Yarn workspaces. Additionally, each service's nodemon is set up to check for changes in node-packages and restart the Node.js process automatically.

+

Troubleshooting#

+

I can't build a Docker image for any Node.js based service#

+

Rebuild the images via:

+
Rebuild images
    make clean
+    make build
+
+

I get errors about missing node_modules content when I try to build / run a Node.js based image#

+

Make sure to run yarn in Lagoon's root directory, since some services have common dependencies managed by yarn workspaces.

+

I get an error resolving the nip.io domains#

+
Error
Error response from daemon: Get https://registry.172.18.0.2.nip.io:32080/v2/: dial tcp: lookup registry.172.18.0.2.nip.io: no such host
+
+

This can happen if your local resolver filters private IPs from results. You can work around this by editing /etc/resolv.conf and adding a line like nameserver 8.8.8.8 at the top to use a public resolver that doesn't filter results.

+
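
For example (keep your existing entries below the added line):

+
/etc/resolv.conf
nameserver 8.8.8.8
+# existing nameserver entries follow
+
+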

Example workflows#

+

Here are some development scenarios and useful workflows for getting things done.

+

Add tests#

+
    +
  1. Repeat the first step above.
  2. +
  3. Edit tests/tests/features-variables.yaml and add a test case.
  4. +
  5. Rebuild the tests image.
  6. +
+
Build tests
rm build/tests
+make -j8 build/tests
+
+
    +
  1. Push the new tests image into the cluster registry.
  2. +
+
Push test image
make kind/push-images IMAGES=tests
+
+
    +
  1. Rerun the tests.
  2. +
+
Re-run tests
make kind/retest TESTS='[features-variables]'
+
+ + + + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/contributing-to-lagoon/documentation/index.html b/contributing-to-lagoon/documentation/index.html new file mode 100644 index 0000000000..2c525a703f --- /dev/null +++ b/contributing-to-lagoon/documentation/index.html @@ -0,0 +1,2770 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Documentation - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Contributing to Lagoon documentation#

+

We really value anything that you can offer us!

+

We've made building and viewing the documentation really straightforward, and the team is always ready to help out with reviews or pointers.

+

We use mkdocs with the excellent Material theme.

+

Viewing and updating docs locally#

+

From the root of the Lagoon repository (you'll need Docker), run:

+
Get local docs up and running.
docker run --rm -it -p 127.0.0.1:8000:8000 -v ${PWD}:/docs ghcr.io/amazeeio/mkdocs-material
+
+ +

This will start a development server on http://127.0.0.1:8000, configured to live-reload on any updates.

+

The customized Docker image contains all the necessary extensions.

+

Alternatively, to run the mkdocs package locally, you'll need to install mkdocs, and then install all of the necessary plugins.

+
Install mkdocs
pip3 install -r docs/requirements.txt
+mkdocs serve
+
+

Editing in the Cloud#

+

Each documentation page also has an "edit" pencil in the top right, which will take you to the correct page in the Git repository.

+

Feel free to contribute here, too - you can always use the built-in github.dev web-based editor. It's got basic Markdown previews, but none of the mkdocs loveliness.

+

How we deploy documentation#

+

We use the Deploy MkDocs GitHub Action to build all main branch pushes, and trigger a deployment of the gh-pages branch.
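
As a rough, illustrative sketch of what such a workflow can look like (the file name, action versions and steps here are assumptions, not the repository's actual workflow):

+
.github/workflows/docs.yml (illustrative)
name: docs
+on:
+  push:
+    branches: [main]
+jobs:
+  deploy:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+      - uses: actions/setup-python@v5
+        with:
+          python-version: "3.x"
+      - run: pip install -r docs/requirements.txt
+      - run: mkdocs gh-deploy --force
+
+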

+ + + + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/contributing-to-lagoon/releasing/index.html b/contributing-to-lagoon/releasing/index.html new file mode 100644 index 0000000000..7427f5678f --- /dev/null +++ b/contributing-to-lagoon/releasing/index.html @@ -0,0 +1,2812 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Releasing - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+ +
+
+ + + +
+
+ + + + + + + + + + + + + +

Releasing Lagoon#

+

Lagoon has a number of moving parts, making releases quite complicated!

+

Lagoon-core - tags and testing#

+
    +
  1. Ensure all the identified pull requests have been merged into the main branch for: +
  2. +
  3. Once you are confident, push the next tag in sequence (minor or patch) to the main branch in the format v2.MINOR.PATCH as per semver. This will trigger a Jenkins build, visible at https://ci.lagoon.sh/blue/organizations/jenkins/lagoon/branches
  4. +
  5. Whilst this is building, push lightweight tags to the correct commits on lagoon-ui and build-deploy-tool in the format core-v2.MINOR.PATCH (the tagging commands are sketched after this list). Note that there are no other tags or releases on build-deploy-tool, but lagoon-ui also has its own semver releases that are based on its features.
  6. +
  7. Once the build has completed successfully in Jenkins, head to https://github.com/uselagoon/lagoon-charts to prepare the charts release.
  8. +
  9. +

    In the Chart.yaml for the lagoon-core and lagoon-test charts, update the following fields:

    +
      +
    • version: This is the next "minor" release of the chart - we usually use minor for a corresponding lagoon-core release
    • +
    • appVersion: This is the actual tag of the released lagoon-core
    • +
    • artifacthub.io/changes: All that's needed are the two lines in the below snippet, modified for the actual appVersion being released.
    • +
    +

    sample Chart.yaml snippets
    # This is the chart version. This version number should be incremented each
    +# time you make changes to the chart and its templates, including the app
    +# version.
    +# Versions are expected to follow Semantic Versioning (https://semver.org/)
    +version: 1.28.0
    +
    +# This is the version number of the application being deployed. This version
    +# number should be incremented each time you make changes to the application.
    +# Versions are not expected to follow Semantic Versioning. They should reflect
    +# the version the application is using.
    +appVersion: v2.14.2
    +
    +# This section is used to collect a changelog for artifacthub.io
    +# It should be started afresh for each release
    +# Valid supported kinds are added, changed, deprecated, removed, fixed and security
    +annotations:
    +  artifacthub.io/changes: |
    +    - kind: changed
    +      description: update Lagoon appVersion to v2.14.2
    +
    +Only lagoon-core and lagoon-test charts are updated as a result of a lagoon-core release. Follow the lagoon-remote process if there are any other changes.

    +
  6. Create a PR for this chart release, and the GitHub Actions suite will undertake a full suite of tests:

  • Lint and test charts - matrix: performs a lint and chart install against the current tested version of Kubernetes.
  • Lint and test charts - current: performs a lint and chart install against previous/future versions of Kubernetes.
  • Lagoon tests: runs the full series of Ansible tests against the release.

    Usually, failures in the lint and test charts are well explained (missing/misconfigured chart settings). If a single Lagoon test fails, it may just need re-running. If multiple failures occur, they will need investigating.
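As a rough sketch of the tag pushes in steps 2 and 3 above - the version number and commit SHA are hypothetical placeholders, not a real release:

Tag-push sketch
# lagoon-core: push the next semver tag to main to trigger the Jenkins build
git checkout main && git pull
git tag v2.15.0                       # placeholder: substitute the real next version
git push origin v2.15.0

# lagoon-ui / build-deploy-tool: lightweight tags on the matching commits
git tag core-v2.15.0 <commit-sha>     # placeholder commit on the other repository
git push origin core-v2.15.0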

Once those tests have all passed successfully, you can proceed with creating the releases:

+

Lagoon-core - releases and release notes#

+
  1. In uselagoon/lagoon create a release from the tag pushed earlier. Use the "Generate release notes" button to create the changelog. Look at previous releases for what we include in the release - the lagoon-images link will always be the most recent released version. Note that the links to the charts, lagoon-ui and build-deploy-tool can all be filled in now, but they won't work until the later steps are complete. Mark this as the latest release and publish it.
  2. In uselagoon/build-deploy-tool create a release from the tag pushed earlier. Use the "Generate release notes" button to create the changelog, ensuring that the last core-v2.X tag is used, not any other tag. Look at previous releases for what we include, then mark this as the latest release and publish it.
  3. In uselagoon/lagoon-ui create a release from the tag pushed earlier. Use the "Generate release notes" button to create the changelog, ensuring that the last core-v2.X tag is used, not any other tag. Look at previous releases for what we include, then mark this as the latest release and publish it.
  4. In uselagoon/lagoon-charts merge the successful PR - this will create the lagoon-core and lagoon-test releases for you. Edit the resulting lagoon-core chart release to note the corresponding Lagoon release in the title and text box, as per previous releases.

Lagoon-remote - releases and release notes#

+

Lagoon remote has a release cycle separate from Lagoon Core, and as such can be released any time a dependency sub-chart or service is updated.


Tests#

+

All of our tests are written with Ansible and mostly follow this approach:

+
  1. Create a new Git repository.
  2. Add and commit some files from a list of files (in tests/files) into this Git repository.
  3. Push this Git repository to a Git server (either locally or on GitHub).
  4. Send a trigger to a trigger service (for example, a webhook to the webhook handler, just like a real webhook would be sent).
  5. Start monitoring the URL at which the test expects something to happen (like a deployed Node.js app that displays the Git branch as HTML text).
  6. Compare the result at the URL with the expected result.
+

Lagoon is mostly tested in 3 different ways:

+

1. Locally#

+

During local development, the best way to test is locally. All tests are started via make, which will download and build all the required dependencies.

+
Make tests
make tests
+
+

This will run all defined tests. If you only want to run a subset of the tests, run make tests-list to see all existing tests and run them individually.

+

For example, make tests/node will run the Node.js Docker images tests.

+

In order to actually see what is happening inside the microservices, we can use make logs:

+
Make logs
make logs
+
+

Or only for a specific service:

+
Make logs
make logs service=webhook-handler
+
+

2. Automated integration testing#

+

In order to test pull requests that are created against Lagoon, we have a fully automatic integration test running on a dedicated Jenkins instance: https://ci.lagoon.sh. It is defined inside the .Jenkinsfile, and runs automatically for every pull request that is opened.

+

This will build all images, start a Kubernetes cluster and run a series of tests.

+

The tests can be found in the tests directory of the uselagoon/lagoon repository.


Contributing#

+

We gladly welcome any and all contributions to Lagoon!

+

What kind of contributions do we need?#

+

Lagoon benefits from any kind of contribution - whether it's a bugfix, new feature, documentation update, or simply some queue maintenance - we're happy that you want to help.

+

Developing for Lagoon#

+

There's a whole section on how to get Lagoon running on your local machine using KinD over at Developing Lagoon. This documentation is still very much a work in progress - but there are a lot of Makefile routines to help you out.

+

Installing Lagoon#

+

We've got another section that outlines how to install Lagoon from Helm charts at Installing Lagoon Into Existing Kubernetes Cluster - we'd love to get this process as slick as possible!

+

Help us with our examples#

+

Right now one of our biggest needs is putting together examples of Lagoon working with various content management systems and frameworks other than Drupal.

+

If you can spin up an open source CMS or framework that we don’t currently have as a Docker Compose stack, send us a PR. Look at the existing examples at https://github.com/uselagoon/lagoon-examples for tips, pointers and starter issues.

+

One small catch – wherever possible, we’d like them to be built using our base Docker Hub images https://hub.docker.com/u/uselagoon – if we don’t have a suitable image, or our images need modifying – throw us a PR (if you can) or create an issue (so someone else can) at https://github.com/uselagoon/lagoon-images.

+

Help us improve our existing examples, if you can - are we following best practices, is there something we’re doing that doesn’t make sense?

+

Bonus points for anyone who helps contribute to tests for any of these examples – we’ve got some example tests in a couple of the projects you can use for guidance – https://github.com/amazeeio/drupal-example-simple/blob/8.x/TESTING_dockercompose.md. The testing framework we’re using is Leia, from the excellent team behind Lando.

+

Help us to document our other examples better – we’re not expecting a full manuscript, but tidy-ups, links to helpful resources and clarifying statements are all super-awesome.

+

If you have any questions, reach out to us on Discord!

+

I found a security issue 🔓#

+

We take security very seriously. If you discover a security issue or think you found one, please bring it to the maintainers' attention.

+
+

Danger

+

Please send your findings to security@amazee.io. Please DO NOT file a GitHub issue for them.

+
+

Security reports are greatly appreciated and will receive public karma and swag! We're also working on a Bug Bounty system.

+

I found an issue#

+

We're always interested in fixing issues, so issue reports are very welcome. Please make sure to check that your issue does not already exist in the issue queue.

+

I have a feature request or idea#

+

Cool! Create an issue and we're happy to look over it. We can't guarantee that it will be implemented, but we are always interested in hearing ideas of what we could bring to Lagoon.

+

You can also talk to us about your idea via Discord. Join today!

+

I wrote some code#

+

Epic! Please send us a pull request for it, we will do our best to review it and merge it if possible.


Commons#

+

The Lagoon commons Docker image. Based on the official Alpine images.

+

This image has no functionality itself, but is instead a base image, intended to be extended and utilized to build other images. All the alpine-based images in Lagoon inherit components from commons.

+

Included tooling#

+
  • docker-sleep - standardized one-hour sleep.
  • fix-permissions - automatically fixes permissions on a given directory so that it is group read-write (see the sketch after this list).
  • wait-for - a small script to ensure that services are up and running in the correct order - based on https://github.com/eficode/wait-for.
  • entrypoint-readiness - checks to make sure that long-running entrypoints have completed.
  • entrypoints - a script to source all entrypoints under /lagoon/entrypoints/* in alphabetical/numerical order.
+
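A hypothetical sketch of two of these tools in use inside an image built from commons - the mariadb hostname and the /app path are illustrative assumptions, not defaults:

Tooling sketch
# wait up to 60 seconds for MariaDB to accept connections, then continue
wait-for mariadb:3306 -t 60 -- echo "database is up"

# make /app group read-write so the container works with a random user
fix-permissions /app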

Included entrypoints#

+

The list of default entrypoints in this image is found at https://github.com/uselagoon/lagoon-images/tree/main/images/commons/lagoon/entrypoints. Subsequent downstream images will also contribute entrypoints under /lagoon that are run in the eventual image.


MariaDB#

+

MariaDB is the open source successor to MySQL.

+

The Lagoon MariaDB image Dockerfile. Based on the official packages mariadb and mariadb-client provided by the upstream Alpine image.

+

This Dockerfile is intended to be used to set up a standalone MariaDB database server.

+
  • 10.4 Dockerfile (Alpine 3.12, support until May 2022) - uselagoon/mariadb-10.4
  • 10.5 Dockerfile (Alpine 3.14, support until May 2023) - uselagoon/mariadb-10.5
  • 10.6 Dockerfile (Alpine 3.16, support until May 2024) - uselagoon/mariadb-10.6
  • 10.11 Dockerfile (Alpine 3.18, support until May 2025) - uselagoon/mariadb-10.11
+
+

Info

+

As these images are not built from the upstream MariaDB images, their support follows a different cycle - and will only receive updates as long as the underlying Alpine images receive support - see https://alpinelinux.org/releases/ for more information. In practice, most MariaDB users will only be running these containers locally - the production instances will use the Managed Cloud Databases provided by the DBaaS Operator

+
+

Lagoon adaptions#

+

The default exposed port of MariaDB containers is port 3306.

+

To allow Lagoon to select the best way to run the MariaDB container, use lagoon.type: mariadb - this allows the DBaaS operator to provision a cloud database if available in the cluster. Use lagoon.type: mariadb-single to specifically request MariaDB in a container. Persistent storage is always provisioned for MariaDB containers at /var/lib/mysql.

+

This image is prepared to be used on Lagoon. There are therefore some things already done:

+
  • Folder permissions are automatically adapted with fix-permissions, so this image will work with a random user.
  • readiness-probe.sh script to check when the MariaDB container is ready.
+

docker-compose.yml snippet#

+
docker-compose.yml
    mariadb:
        image: uselagoon/mariadb-10.6-drupal:latest
        labels:
            # tells Lagoon this is a MariaDB database
            lagoon.type: mariadb
        ports:
            # exposes the port 3306 with a random local port, find it with `docker-compose port mariadb 3306`
            - "3306"
        volumes:
            # mounts a named volume at the default path for MariaDB
            - db:/var/lib/mysql
+
+
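With the snippet above, a hypothetical local session could look like this - the mapped port is an example, and the credentials are the defaults from the environment variables table below:

Local connection sketch
docker-compose up -d mariadb
docker-compose port mariadb 3306      # prints something like 0.0.0.0:32768
mysql -h 127.0.0.1 -P 32768 -u lagoon -plagoon lagoon   # substitute the printed port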

Included tools#

+
  • mysqltuner.pl - Perl script useful for database parameter tuning.
  • mysql-backup.sh - script for automating daily MySQL backups on the development environment.
  • pwgen - utility to generate random and complex passwords.
+

Included my.cnf configuration file#

+

The image ships a default MariaDB configuration file, optimized to work on Lagoon. Some options are configurable via environment variables.

+

Environment Variables#

| Environment Variable | Default | Description |
| :--- | :--- | :--- |
| MARIADB_DATABASE | lagoon | Database name created at startup. |
| MARIADB_USER | lagoon | Default user created at startup. |
| MARIADB_PASSWORD | lagoon | Password of default user created at startup. |
| MARIADB_ROOT_PASSWORD | Lag00n | MariaDB root user's password. |
| MARIADB_CHARSET | utf8mb4 | Set the server charset. |
| MARIADB_COLLATION | utf8mb4_bin | Set server collation. |
| MARIADB_MAX_ALLOWED_PACKET | 64M | Set the max_allowed_packet size. |
| MARIADB_INNODB_BUFFER_POOL_SIZE | 256M | Set the MariaDB InnoDB buffer pool size. |
| MARIADB_INNODB_BUFFER_POOL_INSTANCES | 1 | Number of InnoDB buffer pool instances. |
| MARIADB_INNODB_LOG_FILE_SIZE | 64M | Size of InnoDB log file. |
| MARIADB_LOG_SLOW | (not set) | Variable to control the save of slow queries. |
| MARIADB_LOG_QUERIES | (not set) | Variable to control the save of ALL queries. |
| BACKUPS_DIR | /var/lib/mysql/backup | Default path for database backups. |
| MARIADB_DATA_DIR | /var/lib/mysql | Path of the MariaDB data dir. Be careful: changing this can cause data loss! |
| MARIADB_COPY_DATA_DIR_SOURCE | (not set) | Path which the MariaDB entrypoint script will copy into the defined MARIADB_DATA_DIR; this can be used for prepopulating MariaDB with a database. The script expects actual MariaDB data files, not a SQL file! It only copies data if the destination does not already have a MySQL datadir in it. |
+

If the LAGOON_ENVIRONMENT_TYPE variable is set to production, performance settings are adjusted accordingly by using MARIADB_INNODB_BUFFER_POOL_SIZE=1024 and MARIADB_INNODB_LOG_FILE_SIZE=256.


MongoDB#

+
+

MongoDB is a general purpose, document-based, distributed database built for modern application developers and for the cloud era. MongoDB is a document database, which means it stores data in JSON-like documents.

+ +
+

Supported Versions#

+

4.0 Dockerfile - uselagoon/mongo-4

+

This Dockerfile is intended to be used to set up a standalone MongoDB database server.

+

Lagoon adaptions#

+

This image is prepared to be used on Lagoon. There are therefore some things already done:

+
  • Folder permissions are automatically adapted with fix-permissions, so this image will work with a random user, and therefore also on Kubernetes or OpenShift.

NGINX#

+

The Lagoon nginx image Dockerfile. Based on the official openresty/openresty images.

+

This Dockerfile is intended to be used as a base for any web servers within Lagoon.

+

Lagoon adaptions#

+

The default exposed port of NGINX containers is port 8080.

+

This image is prepared to be used on Lagoon. There are therefore some things already done:

+
  • Folder permissions are automatically adapted with fix-permissions, so this image will work with a random user.
  • The files within /etc/nginx/* are parsed through envplate with a container-entrypoint.
+

Included NGINX configuration (static-files.conf)#

+
+

Warning

+

By default NGINX only serves static files - this can be used for static sites that don't require a database or PHP components: for example, static site generators like Hugo, Jekyll or Gatsby.

+
+

If you need PHP, have a look at the php-fpm image and use nginx and php-fpm in tandem.

+

Build the content during the build process and inject it into the nginx container.

+

Helpers#

+

redirects-map.conf#

+

In order to create redirects, we have redirects-map.conf in place. This helps you to redirect marketing domains to sub-sites or do non-www to www redirects. If you have a lot of redirects, we suggest having redirects-map.conf stored next to your code for easier maintainability.

+
+

Note

+

If you only have a few redirects, there's a handy trick to create the redirects with a RUN command in your nginx.dockerfile.

+
+

Here's an example showing how to redirect www.example.com to example.com and preserve the request:

+
Redirect
RUN echo "~^www.example.com http://example.com\$request_uri;" >> /etc/nginx/redirects-map.conf
+
+

To get more details about the various types of redirects that can be achieved, see the documentation within the redirects-map.conf directly.

+

After you put the redirects-map.conf in place, you also need to include it in your nginx.dockerfile in order to get the configuration file into your build.

+
nginx.dockerfile
COPY redirects-map.conf /etc/nginx/redirects-map.conf
+
+

Basic Authentication#

+

Basic authentication is enabled automatically when the BASIC_AUTH_USERNAME and BASIC_AUTH_PASSWORD environment variables are set.

+
+

Warning

+

Automatic basic auth configuration is provided for convenience. It should not be considered a secure method of protecting your website or private data.

+
+

Environment Variables#

+

Some options are configurable via environment variables.

| Environment Variable | Default | Description |
| :--- | :--- | :--- |
| BASIC_AUTH | restricted | Set to off to disable basic authentication. |
| BASIC_AUTH_USERNAME | (not set) | Username for basic authentication. |
| BASIC_AUTH_PASSWORD | (not set) | Password for basic authentication (unencrypted). |
| FAST_HEALTH_CHECK | (not set) | Set to true to redirect GET requests from certain user agents (StatusCake, Pingdom, Site24x7, Uptime, Nagios) to the lightweight Lagoon service healthcheck. |
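A quick, hypothetical way to verify that basic authentication is active once both variables are set - the domain and credentials here are placeholders:

Basic auth check (sketch)
curl -I https://www.example.com                     # expect: 401 Unauthorized
curl -I -u myuser:mypass https://www.example.com    # expect: 200 OK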

Node.js#

+

The Lagoon Node.js Docker image. Based on the official Node Alpine images.

+

Supported Versions#

+

We ship two variants of each Node.js image: the normal node:version image and the node:version-builder.

+

The builder variant of those images comes with additional tooling that is needed when you build Node.js apps (such as the build libraries, npm and Yarn). For a full list check out their Dockerfile.

+
  • 12 (available for compatibility only, no longer officially supported) - uselagoon/node-12
  • 14 (available for compatibility only, no longer officially supported) - uselagoon/node-14
  • 16 Dockerfile (Security Support until September 2023) - uselagoon/node-16
  • 18 Dockerfile (Security Support until April 2025) - uselagoon/node-18
  • 20 Dockerfile (Security Support until April 2026) - uselagoon/node-20
+
+

Tip

+

We stop updating EOL Node.js images usually with the Lagoon release that comes after the officially communicated EOL date: https://nodejs.org/en/about/releases/.

+
+

Lagoon adaptions#

+

The default exposed port of Node.js containers is port 3000.

+

Persistent storage is configurable in Lagoon, using the lagoon.type: node-persistent. See the docs for more info

+

Use the following labels in your docker-compose.yml file to configure it:

+
  • lagoon.persistent = use this to define the path in the container to use as persistent storage - e.g. /app/files.
  • lagoon.persistent.size = use this to tell Lagoon how much storage to assign this path.
  • lagoon.persistent.name = (optional) use this to tell Lagoon to use the storage defined in another named service, if you have multiple services that share the same storage.
+

docker-compose.yml snippet#

+
docker-compose.yml
    node:
        build:
            # this configures a build from a Dockerfile in the root folder
            context: .
            dockerfile: Dockerfile
        labels:
            # tells Lagoon this is a node service, configured with 500MB of persistent storage at /app/files
            lagoon.type: node-persistent
            lagoon.persistent: /app/files
            lagoon.persistent.size: 500Mi
        ports:
            # local development only
            # this exposes the port 3000 with a random local port
            # find it with `docker-compose port node 3000`
            - "3000"
        volumes:
            # local development only
            # mounts a named volume (files) at the defined path for this service to replicate production
            - files:/app/files
+
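For local development, a hypothetical way to find the randomly mapped port and check the service (the printed port is an example):

Port lookup sketch
docker-compose up -d node
docker-compose port node 3000                   # prints e.g. 0.0.0.0:32769
curl "http://$(docker-compose port node 3000)"  # request the app on the mapped port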

OpenSearch#

+
+

OpenSearch is a community-driven, Apache 2.0-licensed open source search and analytics suite that makes it easy to ingest, search, visualize, and analyze data.

+ +
+

Supported versions#

+ +

Environment Variables#

+

Some options are configurable via environment variables.

| Environment Variable | Default | Description |
| :--- | :--- | :--- |
| OPENSEARCH_JAVA_OPTS | -Xms512m -Xmx512m | Sets the memory usage of the OpenSearch container. Both values need to be the same, or OpenSearch will not start cleanly. |
+

Known issues#

+

On Linux-based systems, the start of the OpenSearch container may fail due to a low vm.max_map_count setting.

+
Error
opensearch_1  | ERROR: [1] bootstrap checks failed
opensearch_1  | [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
+
+

A solution to this issue is sketched below.


PHP-CLI#

+

The Lagoon php-cli Docker image. Based on Lagoon php-fpm image, it has all the needed command line tools for daily operations.

+

Containers (or pods) started from cli images are responsible for building code for Composer or Node.js based projects.

+

The image also contains database CLIs for both MariaDB and PostgreSQL.

+
+

Info

+

This Dockerfile is intended to be used as a base for any cli needs within Lagoon.

+
+

Supported versions#

+
  • 7.3 (available for compatibility only, no longer officially supported)
  • 7.4 (available for compatibility only, no longer officially supported)
  • 8.0 Dockerfile (Security Support until November 2023) - uselagoon/php-8.0-cli
  • 8.1 Dockerfile (Security Support until November 2024) - uselagoon/php-8.1-cli
  • 8.2 Dockerfile (Security Support until December 2025) - uselagoon/php-8.2-cli
+

All PHP versions use their own Dockerfiles.

+

Lagoon adaptions#

+

This image is prepared to be used on Lagoon. There are therefore some things already done:

+
  • Folder permissions are automatically adapted with fix-permissions, so this image will work with a random user.
  • COMPOSER_ALLOW_SUPERUSER=1 removes the warning about use of Composer as root.
  • 80-shell-timeout.sh script checks if containers are running in a Kubernetes environment and then sets a 10-minute timeout for idle cli pods.
  • cli containers use an SSH key injected by Lagoon or defined in the SSH_PRIVATE_KEY environment variable.
+

Included CLI tools#

+

The included CLI tools are:

+ +

Change Node.js Version#

+

By default this image ships with the nodejs-current package (v17 as of Mar 2022). If you need another version you can remove the current version and install the one of your choice. For example, to install Node.js 16, modify your Dockerfile to include:

+
Update Node.js version
RUN apk del nodejs-current \
    && apk add --no-cache nodejs=~16
+
+

Environment variables#

+

Some options are configurable via environment variables. The php-fpm environment variables also apply.

| Name | Default | Description |
| :--- | :--- | :--- |
| MARIADB_MAX_ALLOWED_PACKET | 64M | Controls the max allowed packet for the MySQL client. |

PHP-FPM#

+

The Lagoon php-fpm Docker image. Based on the official PHP Alpine images.

+
+

PHP-FPM (FastCGI Process Manager) is an alternative PHP FastCGI implementation with some additional features useful for sites of any size, especially busier sites.

+ +

FastCGI is a way of having server scripts execute time-consuming code just once instead of every time the script is loaded, reducing overhead.

+
+
+

Info

+

This Dockerfile is intended to be used as a base for any PHP needs within Lagoon. This image itself does not create a web server, rather a php-fpm fastcgi listener. You may need to adapt the php-fpm pool config.

+
+

Supported versions#

+
    +
  • 7.3 (available for compatibility only, no longer officially supported) - uselagoon/php-7.3-fpm
  • +
  • 7.4 (available for compatibility only, no longer officially supported) - uselagoon/php-7.4-fpm
  • +
  • 8.0 Dockerfile (Security Support until November 2023) - uselagoon/php-8.0-fpm
  • +
  • 8.1 Dockerfile (Security Support until November 2024) - uselagoon/php-8.1-fpm
  • +
  • 8.2 Dockerfile (Security Support until December 2025) - uselagoon/php-8.2-fpm
  • +
+

All PHP versions use their own Dockerfiles.

+
+

Tip

+

We stop updating End of Life (EOL) PHP images usually with the Lagoon release that comes after the officially communicated EOL date: https://www.php.net/supported-versions.php. Previous published versions will remain available.

+
+

Lagoon adaptions#

+

This image is prepared to be used on Lagoon. There are therefore some things already done:

+
  • Folder permissions are automatically adapted with fix-permissions, so this image will work with a random user.
  • The /usr/local/etc/php/php.ini and /usr/local/etc/php-fpm.conf, plus all files within /usr/local/etc/php-fpm.d/, are parsed through envplate with a container-entrypoint.
  • See the Dockerfile for installed PHP extensions.
  • To install further extensions, extend your Dockerfile from this image. Install extensions according to the docs, under the heading How to install more PHP extensions.
+

Included PHP config#

+

The included PHP config contains sensible values that will make the creation of PHP pools config easier. Here is a list of some of these. Check /usr/local/etc/php.ini, /usr/local/etc/php-fpm.conf for all of them:

| Value | Details |
| :--- | :--- |
| max_execution_time = 900 | Changeable via PHP_MAX_EXECUTION_TIME. |
| realpath_cache_size = 256k | For handling big PHP projects. |
| memory_limit = 400M | For big PHP projects (changeable via PHP_MEMORY_LIMIT). |
| opcache.memory_consumption = 265 | For big PHP projects. |
| opcache.enable_file_override = 1 and opcache.huge_code_pages = 1 | For faster PHP. |
| display_errors = Off and display_startup_errors = Off | For sensible production values (changeable via PHP_DISPLAY_ERRORS and PHP_DISPLAY_STARTUP_ERRORS). |
| upload_max_filesize = 2048M | For big file uploads. |
| apc.shm_size = 32m and apc.enabled = 1 | Changeable via PHP_APC_SHM_SIZE and PHP_APC_ENABLED. |
+

Also, php-fpm error logging happens in stderr.

+

💡 If you don't like any of these configs, you have three possibilities:

+
  1. If they are changeable via environment variables, use environment variables (this is the preferred method - see the table of environment variables below).
  2. Create your own fpm-pool config and set values via php_admin_value and php_admin_flag. (Learn more about them in the documentation for Running PHP as an Apache module; it refers to Apache, but the same applies to php-fpm.) Important: if you want to provide your own php-fpm pool, overwrite the file /usr/local/etc/php-fpm.d/www.conf with your own config, or rename this file if you want it to have another name - if you don't do that, the provided pool will be started! PHP values with the PHP_INI_SYSTEM changeable mode cannot be changed via an fpm-pool config; they need to be changed either via the already provided environment variables or via option 3.
  3. Provide your own php.ini or php-fpm.conf file (this is the least preferred method).

Default fpm-pool#

+

This image is shipped with an fpm-pool config (php-fpm.d/www.conf) that creates an fpm-pool and listens on port 9000. This is because we try to provide an image which already covers most needs for PHP, so you don't need to create your own. You are welcome to do so if you like, though!

+

Here a short description of what this file does:

+
  • Listens on port 9000 via IPv4 and IPv6.
  • Uses the pm dynamic and creates between 2-50 children.
  • Re-spawns php-fpm pool children after 500 requests to prevent memory leaks.
  • Replies with pong when making a fastcgi request to /ping (good for automated testing to check if the pool started).
  • catch_workers_output = yes to see PHP errors.
  • clear_env = no to be able to inject PHP environment variables via regular Docker environment variables.
+
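As a sketch of the /ping check described above - this assumes the cgi-fcgi utility from the fcgi package is available, which is not shipped in the image by default:

FastCGI ping sketch
SCRIPT_NAME=/ping SCRIPT_FILENAME=/ping REQUEST_METHOD=GET \
  cgi-fcgi -bind -connect 127.0.0.1:9000
# a healthy pool answers with: pong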

Environment Variables#

+

Some options are configurable via environment +variables.

| Environment Variable | Default | Description |
| :--- | :--- | :--- |
| NEWRELIC_ENABLED | false | Enable NewRelic performance monitoring; needs NEWRELIC_LICENSE to be configured. |
| NEWRELIC_LICENSE | (not set) | NewRelic license to be used. Important: NEWRELIC_ENABLED needs to be set to true in order for NewRelic to be enabled. |
| NEWRELIC_BROWSER_MONITORING_ENABLED | true | This enables auto-insertion of the JavaScript fragments for NewRelic browser monitoring. Important: NEWRELIC_ENABLED needs to be set to true in order for NewRelic to be enabled. |
| NEWRELIC_DISTRIBUTED_TRACING_ENABLED | false | This enables distributed tracing. Important: NEWRELIC_ENABLED needs to be set to true in order for NewRelic to be enabled. |
| PHP_APC_ENABLED | 1 | Can be set to 0 to disable APC. |
| PHP_APC_SHM_SIZE | 32m | The size of each shared memory segment given. |
| PHP_DISPLAY_ERRORS | Off | Configures whether errors are printed or hidden. See php.net. |
| PHP_DISPLAY_STARTUP_ERRORS | Off | Configures whether startup errors are printed or hidden. See php.net. |
| PHP_ERROR_REPORTING | Production: E_ALL & ~E_DEPRECATED & ~E_STRICT; Development: E_ALL & ~E_DEPRECATED & ~E_STRICT & ~E_NOTICE | The desired logging level you'd like PHP to use. See php.net. |
| PHP_FPM_PM_MAX_CHILDREN | 50 | The maximum number of child processes. See php.net. |
| PHP_FPM_PM_MAX_REQUESTS | 500 | The number of requests each child process should execute before re-spawning. See php.net. |
| PHP_FPM_PM_MAX_SPARE_SERVERS | 2 | The desired maximum number of idle server processes. See php.net. |
| PHP_FPM_PM_MIN_SPARE_SERVERS | 2 | The desired minimum number of idle server processes. See php.net. |
| PHP_FPM_PM_PROCESS_IDLE_TIMEOUT | 60s | The number of seconds after which an idle process will be killed. See php.net. |
| PHP_FPM_PM_START_SERVERS | 2 | The number of child processes created on startup. See php.net. |
| PHP_MAX_EXECUTION_TIME | 900 | Maximum execution time of each script, in seconds. See php.net. |
| PHP_MAX_FILE_UPLOADS | 20 | The maximum number of files allowed to be uploaded simultaneously. See php.net. |
| PHP_MAX_INPUT_VARS | 2000 | How many input variables will be accepted. See php.net. |
| PHP_MEMORY_LIMIT | 400M | Maximum amount of memory a script may consume. See php.net. |
| XDEBUG_ENABLE | (not set) | Set to true to enable the xdebug extension. |
| BLACKFIRE_ENABLED | (not set) | Set to true to enable the blackfire extension. |
| BLACKFIRE_SERVER_ID | (not set) | Set to the Blackfire Server ID provided by Blackfire.io. Needs BLACKFIRE_ENABLED set to true. |
| BLACKFIRE_SERVER_TOKEN | (not set) | Set to the Blackfire Server Token provided by Blackfire.io. Needs BLACKFIRE_ENABLED set to true. |
| BLACKFIRE_LOG_LEVEL | 3 | Change the log level of the blackfire agent. Available values: 4: debug, 3: info, 2: warning, 1: error. See blackfire.io. |

PostgreSQL#

+

The Lagoon PostgreSQL Docker image. Based on the official PostgreSQL Alpine images.

+

Supported versions#

+
  • 11 Dockerfile (Security Support until November 2023) - uselagoon/postgres-11
  • 12 Dockerfile (Security Support until November 2024) - uselagoon/postgres-12
  • 13 Dockerfile (Security Support until November 2025) - uselagoon/postgres-13
  • 14 Dockerfile (Security Support until November 2026) - uselagoon/postgres-14
  • 15 Dockerfile (Security Support until November 2027) - uselagoon/postgres-15
+
+

Tip

+

We stop updating EOL PostgreSQL images usually with the Lagoon release that comes after the officially communicated EOL date: https://www.postgresql.org/support/versioning

+
+

Lagoon adaptions#

+

The default exposed port of Postgres containers is port 5432.

+

To allow Lagoon to select the best way to run the Postgres container, use lagoon.type: postgres - this allows DBaaS operator to provision a cloud database if available in the cluster. Use lagoon.type: postgres-single to specifically request Postgres in a container. Persistent storage is always provisioned for postgres containers at /var/lib/postgresql/data.

+

docker-compose.yml snippet#

+
docker-compose.yml
postgres:
  image: uselagoon/postgres-14-drupal:latest
  labels:
    # tells Lagoon this is a Postgres database
    lagoon.type: postgres
  ports:
    # exposes the port 5432 with a random local port
    # find it with `docker-compose port postgres 5432`
    - "5432"
  volumes:
    # mounts a named volume at the default path for Postgres
    - db:/var/lib/postgresql/data
+
+

Tips & Tricks#

+

If you have SQL statements that need to be run immediately after container startup to initialize the database, you can place those .sql files in the container's docker-entrypoint-initdb.d directory. Any .sql files contained in that directory are run automatically at startup, as part of bringing the PostgreSQL container up.

+
+

Warning

+

These scripts are only run if the container is started with an empty database.

+
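A minimal sketch of shipping such an init script - the file name and SQL are hypothetical, and the COPY step would go in your postgres Dockerfile:

Init script sketch
mkdir -p docker-entrypoint-initdb.d
cat > docker-entrypoint-initdb.d/00-init.sql <<'SQL'
CREATE TABLE IF NOT EXISTS example (id serial PRIMARY KEY, name text);
SQL
# then, in your postgres Dockerfile:
#   COPY docker-entrypoint-initdb.d /docker-entrypoint-initdb.d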

Python#

+

The Lagoon python Docker image. Based on the official Python Alpine images.

+

Supported Versions#

+
  • 2.7 (available for compatibility only, no longer officially supported) - uselagoon/python-2.7
  • 3.7 Dockerfile (Security Support until July 2023) - uselagoon/python-3.7
  • 3.8 Dockerfile (Security Support until October 2024) - uselagoon/python-3.8
  • 3.9 Dockerfile (Security Support until October 2025) - uselagoon/python-3.9
  • 3.10 Dockerfile (Security Support until October 2026) - uselagoon/python-3.10
  • 3.11 Dockerfile (Security Support until October 2027) - uselagoon/python-3.11
+
+

Tip

+

We stop updating and publishing EOL Python images usually with the Lagoon release that comes after the officially communicated EOL date: https://devguide.python.org/versions/#versions. Previous published versions will remain available.

+
+

Lagoon adaptions#

+

The default exposed port of Python containers is port 8800.

+

Persistent storage is configurable in Lagoon, using the lagoon.type: python-persistent. See the docs for more info

+

Use the following labels in your docker-compose.yml file to configure it:

  • lagoon.persistent = use this to define the path in the container to use as persistent storage - e.g. /app/files.
  • lagoon.persistent.size = use this to tell Lagoon how much storage to assign this path.
  • lagoon.persistent.name = (optional) use this to tell Lagoon to use the storage defined in another named service, if you have multiple services that share the same storage.

+

docker-compose.yml snippet#

+
docker-compose.yml
python:
    build:
        # this configures a build from a Dockerfile in the root folder
        context: .
        dockerfile: Dockerfile
    labels:
        # tells Lagoon this is a python service, configured with 500MB of persistent storage at /app/files
        lagoon.type: python-persistent
        lagoon.persistent: /app/files
        lagoon.persistent.size: 500Mi
    ports:
        # local development only
        # this exposes the port 8800 with a random local port
        # find it with `docker-compose port python 8800`
        - "8800"
    volumes:
        # local development only
        # mounts a named volume (files) at the defined path for this service to replicate production
        - files:/app/files
+

RabbitMQ#

+

The Lagoon RabbitMQ Dockerfile with management plugin installed. Based on the official rabbitmq:3-management image at docker-hub.

+

This Dockerfile is intended to be used to set up a standalone RabbitMQ queue broker, as well as a base image to set up a cluster with high availability queue support by default (Mirrored queues).

+

By default, the RabbitMQ broker is started as single node. If you want to start a cluster, you need to use the rabbitmq-cluster Docker image, based on rabbitmq image plus the rabbitmq_peer_discovery_k8s plugin.

+

Supported versions#

+
  • 3.10 Dockerfile (Security Support until July 2023) - uselagoon/rabbitmq
+

Lagoon adaptions#

+

This image is prepared to be used on Lagoon. There are therefore some things already done:

+
  • Folder permissions are automatically adapted with fix-permissions, so this image will work with a random user.
  • The file /etc/rabbitmq/definitions.json is parsed through envplate with a container-entrypoint.
+

Included RabbitMQ default schema (definitions.json)#

+
  • To enable the support for Mirrored Queues, at least one policy must exist.
  • In the definitions.json schema file, minimal entities are defined to make the container run: virtualhost (vhost), username, and password to access the management UI, permissions, and policies.
+

By default, a policy called lagoon-ha is created at startup, but it is not active because it doesn't match any queue's name pattern (see default Environment Variables).

+
definitions.json
"policies":[
        {"vhost":"${RABBITMQ_DEFAULT_VHOST}","name":"lagoon-ha","pattern":"${RABBITMQ_DEFAULT_HA_PATTERN}", "definition":{"ha-mode":"exactly","ha-params":2,"ha-sync-mode":"automatic","ha-sync-batch-size":5}}
  ]
+
+

By default, the ha-mode is set to exactly, which controls the exact number of mirroring nodes for a queue (mirrors). The number of nodes is controlled by ha-params.

+

For further information and custom configuration, please refer to official RabbitMQ documentation.

+

Environment Variables#

+

Some options are configurable via environment variables.

| Environment Variable | Default | Description |
| :--- | :--- | :--- |
| RABBITMQ_DEFAULT_USER | guest | Username for management UI access. |
| RABBITMQ_DEFAULT_PASS | guest | Password for management UI access. |
| RABBITMQ_DEFAULT_VHOST | / | RabbitMQ main virtualhost. |
| RABBITMQ_DEFAULT_HA_PATTERN | ^$ | Regular expression to match for mirrored queues. |

Redis#

+

Lagoon Redis image Dockerfile, based on the official redis:alpine image.

+

This Dockerfile is intended to be used to set up a standalone Redis ephemeral server by default.

+

Supported versions#

+
  • 5 (available for compatibility only, no longer officially supported) - uselagoon/redis-5 or uselagoon/redis-5-persistent
  • 6 Dockerfile - uselagoon/redis-6 or uselagoon/redis-6-persistent
  • 7 Dockerfile - uselagoon/redis-7 or uselagoon/redis-7-persistent
+

Usage#

+

There are 2 different flavors of Redis Images: Ephemeral and Persistent.

+

Ephemeral#

+

The ephemeral image is intended to be used as an in-memory cache for applications and will not retain data across container restarts.

+

When being used as an in-memory (RAM) cache, the first thing you might want to tune if you have large caches is to adapt the MAXMEMORY variable. This variable controls the maximum amount of memory (RAM) which redis will use to store cached items.

+

Persistent#

+

The persistent Redis image will persist data across container restarts and can be used for queues or application data that will need persistence.

+

We don't typically suggest using a persistent Redis for in-memory cache scenarios as this might have unintended side-effects on your application while a Redis container is restarting and loading data from disk.

+

Lagoon adaptions#

+

This image is prepared to be used on Lagoon. There are therefore some things already done:

+
  • Folder permissions are automatically adapted with fix-permissions, so this image will work with a random user.
  • The files within /etc/redis/* are templated using envplate via a container-entrypoint.
+

Included redis.conf configuration file#

+

The image ships a default Redis configuration file, optimized to work on Lagoon.

+

Environment Variables#

+

Some options are configurable via environment variables.

| Environment Variable | Default | Description |
| :--- | :--- | :--- |
| DATABASES | -1 | Default number of databases created at startup. |
| LOGLEVEL | notice | Define the level of logs. |
| MAXMEMORY | 100mb | Maximum amount of memory. |
| MAXMEMORYPOLICY | allkeys-lru | The policy to use when evicting keys if Redis reaches its maximum memory usage. |
| REDIS_PASSWORD | disabled | Enables the authentication feature. |
+

Custom configuration#

+

By building on the base image you can include custom configuration. See https://raw.githubusercontent.com/antirez/redis/4.0/redis.conf for full documentation of the Redis configuration file.

+

Redis-persistent#

+

Based on the Lagoon redis image, the Lagoon redis-persistent Docker image is intended for use when the Redis service must be utilized in persistent mode (ie. with a persistent volume where keys will be saved to disk).

+

It differs from redis only with the FLAVOR environment variable, which will use the respective Redis configuration according to the version of redis in use.

+

Troubleshooting#

+

The Lagoon Redis images all come pre-loaded with the redis-cli command, which allows for querying the Redis service for information and setting config values dynamically. To use this utility, you can simply SSH into your Redis pod by following the SSH instructions (../using-lagoon-advanced/ssh.md) with redis as the pod value, then run it from the terminal once you've connected.

+

Maximum Memory Policy#

+

By default, the Lagoon redis images are set to use the allkeys-lru policy. This policy will allow ANY keys stored in Redis to be evicted if/when the Redis service hits its maxmemory limit, according to when the key was least recently used.

+

For typical installations, this is the ideal configuration, as Drupal may not set a TTL value for each key cached in Redis. If the maxmemory-policy is set to something like volatile-lru and Drupal doesn't provide these TTL tags, this would result in the Redis container filling up, being totally unable to evict ANY keys, and ceasing to accept new cache keys at all.

+

More information on Redis' maxmemory policies can be found in Redis' official documentation.

+
+

Proceed with Caution

+

Changing this setting can lead to Redis becoming completely full and cause outages as a result.

+
+

Tuning Redis' maxmemory value#

+

Finding the optimal amount of memory to give Redis can be quite the difficult task. Before attempting to tune your Redis cache's memory size, it is prudent to let it run normally for as long as practical, with at least a day of typical usage being the ideal minimum timeframe.

+

There are a few high level things you can look at when tuning these memory values:

+
  • The first thing to check is the percentage of memory in use by Redis currently.
    • If this percentage is less than 50%, you might consider lowering the maxmemory value by 25%.
    • If this percentage is between 50% and 75%, things are running just fine.
    • If this value is greater than 75%, then it's worth looking at other variables to see if maxmemory needs to be increased.
  • If you find that your Redis' memory usage percentage is high, the next thing to look at is the number of key evictions.
    • A large number of key evictions and a memory usage greater than 95% is a fairly good indicator that your redis needs a higher maxmemory setting.
    • If the number of key evictions doesn't seem high and typical response times are reasonable, this is simply indicative of Redis doing its job and managing its allocated memory as expected.
+

Example commands#

+

The following commands can be used to view information about the Redis service:

+
  • View all info about the Redis service: redis-cli info
  • View service memory information: redis-cli info memory
  • View service keyspace information: redis-cli info keyspace
  • View service statistics: redis-cli info stats
+

It is also possible to set values for the Redis service dynamically without a restart of the Redis service. It is important to note that these dynamically set values will not persist if the pod is restarted (which can happen as a result of a deployment, maintenance, or even just being shuffled from one node to another).

+
  • Set the maxmemory config value dynamically to 500mb: config set maxmemory 500mb
  • Set the maxmemory-policy config value dynamically to volatile-lru: config set maxmemory-policy volatile-lru
+ + + + + + + + +
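Putting those together, a hypothetical tuning session from inside the Redis pod might look like this - the 500mb value is only an example:

redis-cli session sketch
redis-cli info memory | grep used_memory_human   # check current memory usage
redis-cli info stats | grep evicted_keys         # check the number of key evictions
redis-cli config set maxmemory 500mb             # dynamic - lost when the pod restarts
redis-cli config get maxmemory                   # confirm the new value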
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/docker-images/ruby/index.html b/docker-images/ruby/index.html new file mode 100644 index 0000000000..a2cb5f9f1e --- /dev/null +++ b/docker-images/ruby/index.html @@ -0,0 +1,2781 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Ruby - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Ruby#

+

The Lagoon Ruby Docker image. Based on the official Ruby Alpine images.

+

Supported Versions#

+
  • 3.0 Dockerfile (Security Support until March 2024) - uselagoon/ruby-3.0
  • 3.1 Dockerfile (Security Support until March 2025) - uselagoon/ruby-3.1
  • 3.2 Dockerfile (Security Support until March 2026) - uselagoon/ruby-3.2
+
+

Tip

+

We stop updating and publishing EOL Ruby images usually with the Lagoon release that comes after the officially communicated EOL date: https://www.ruby-lang.org/en/downloads/releases/. Previous versions will remain available.

+
+

Lagoon adaptions#

+

The default exposed port of ruby containers is port 3000.

+

Lagoon has no "pre-defined" type for Ruby services, they should be configured with the lagoon.type: generic and a port set with lagoon.port: 3000

+

docker-compose.yml snippet#

docker-compose.yml
ruby:
    build:
    # this configures a build from a Dockerfile in the root folder
        context: .
        dockerfile: Dockerfile
    labels:
    # tells Lagoon this is a generic service, configured to expose port 3000
        lagoon.type: generic
        lagoon.port: 3000
    ports:
    # local development only
    # this exposes the port 3000 with a random local port
    # find it with `docker-compose port ruby 3000`
        - "3000"
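The build above expects a Dockerfile in the root folder. What goes in it depends entirely on your application; as a minimal sketch, a Rack-based app installed with Bundler might look like this (the file layout and start command are assumptions, not part of the image):

Dockerfile
FROM uselagoon/ruby-3.2

# Install gems first so this layer is cached between builds.
COPY Gemfile Gemfile.lock /app/
RUN bundle install

# Copy the application code.
COPY . /app/

# Matches the default exposed port of the Lagoon ruby images.
EXPOSE 3000
CMD ["bundle", "exec", "rackup", "--host", "0.0.0.0", "--port", "3000"]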

Solr#

The Lagoon Solr image Dockerfile. Based on the official solr:<version>-alpine images.

This Dockerfile is intended to be used to set up a standalone Solr server with an initial core mycore.

Supported Versions#

• 5.5 (available for compatibility only, no longer officially supported)
• 6.6 (available for compatibility only, no longer officially supported)
• 7.7 (available for compatibility only, no longer officially supported)
• 7 Dockerfile - uselagoon/solr-7
• 8 Dockerfile - uselagoon/solr-8

Lagoon adaptations#

This image is prepared to be used on Lagoon. There are therefore some things already done:

• Folder permissions are automatically adapted with fix-permissions, so this image will work with a random user.
• 10-solr-port.sh script to fix and check the Solr port.
• 20-solr-datadir.sh script to check if the Solr config is compliant for Lagoon. This sets directory paths and configures the correct lock type.

Environment Variables#

Some options are configurable via environment variables.

| Environment Variable | Default | Description |
| --- | --- | --- |
| SOLR_JAVA_MEM | 512M | Default Java heap size (e.g. SOLR_JAVA_MEM="-Xms10g -Xmx10g"). |
| SOLR_DATA_DIR | /var/solr | Path of the Solr data dir. Be careful, changing this can cause data loss! |
| SOLR_COPY_DATA_DIR_SOURCE | (not set) | Path which the Solr entrypoint script will copy into the defined SOLR_DATA_DIR; this can be used to prepopulate Solr with a core. The script expects actual Solr data files, and it only copies data if the destination does not already have a Solr core in it. |
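As an illustration, raising the Java heap for a larger index could look like the following sketch in docker-compose.yml (the service name and heap value are placeholders):

docker-compose.yml
  solr:
    image: uselagoon/solr-8
    labels:
      lagoon.type: solr
    environment:
      # Placeholder heap size - tune to your index and available memory.
      SOLR_JAVA_MEM: "-Xms1g -Xmx1g"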

Varnish#

The Lagoon Varnish Docker images. Based on the official Varnish package.

Supported versions#

• 5 (available for compatibility only, no longer officially supported) - uselagoon/varnish-5
• 6 Dockerfile - uselagoon/varnish-6
• 7 Dockerfile - uselagoon/varnish-7

Included Varnish modules#

• vmod-dynamic - Dynamic backends from DNS lookups and service discovery from SRV records.
• vmod-bodyaccess - Varnish vmod that lets you access the request body.

Lagoon adaptations#

This image is prepared to be used on Lagoon. There are therefore some things already done:

• Folder permissions are automatically adapted with fix-permissions, so this image will work with a random user.

Included default.vcl configuration file#

The image ships a default vcl configuration file, optimized to work on Lagoon. Some options are configurable via environment variables (see Environment Variables).

Environment Variables#

Some options are configurable via environment variables.

| Environment Variable | Default | Description |
| --- | --- | --- |
| VARNISH_BACKEND_HOST | NGINX | Default backend host. |
| VARNISH_BACKEND_PORT | 8080 | Default backend port. |
| VARNISH_SECRET | lagoon_default_secret | Varnish secret used to connect to management. |
| LIBVMOD_DYNAMIC_VERSION | 5.2 | Default version of the vmod-dynamic module. |
| LIBVMOD_BODYACCESS_VERSION | 5.0 | Default version of the vmod-bodyaccess module. |
| HTTP_RESP_HDR_LEN | 8k | Maximum length of any HTTP backend response header. |
| HTTP_RESP_SIZE | 32k | Maximum number of bytes of HTTP backend response we will deal with. |
| NUKE_LIMIT | 150 | Maximum number of objects we attempt to nuke in order to make space for an object body. |
| CACHE_TYPE | malloc | Type of Varnish cache. |
| CACHE_SIZE | 100M | Cache size. |
| LISTEN | 8080 | Default port Varnish itself listens on. |
| MANAGEMENT_LISTEN | 6082 | Default management listening port. |
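For instance, growing the cache for a content-heavy site could look like this sketch in docker-compose.yml (the size is a placeholder):

docker-compose.yml
  varnish:
    image: uselagoon/varnish-6
    labels:
      lagoon.type: varnish
    environment:
      # Placeholder cache size - tune to your content and available memory.
      CACHE_SIZE: 500M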

Drush 9#

Aliases#

Unfortunately, Drush 9 does not provide the ability to inject dynamic site aliases like Drush 8 did. We are working with the Drush team to implement this again. In the meantime, we have a workaround that allows you to use Drush 9 with Lagoon.

Basic Idea#

Drush 9 provides a new command, drush site:alias-convert, which can convert Drush 8-style site aliases over to the Drush 9 YAML site alias style. This will create a one-time export of the site aliases currently existing in Lagoon, and save them in /app/drush/sites. These are then used when running a command like drush sa.

Preparation#

In order to be able to use drush site:alias-convert, you need to do the following:

• Rename the aliases.drushrc.php inside the drush folder to lagoon.aliases.drushrc.php.
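From the project root, that rename is a one-liner (assuming your Drush configuration lives in the drush/ folder):

Rename the aliases file
mv drush/aliases.drushrc.php drush/lagoon.aliases.drushrc.php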

Generate Site Aliases#

You can now convert your Drush aliases by running the following command in your project using the cli container:

Generate Site Aliases
docker-compose exec cli drush site:alias-convert /app/drush/sites --yes

It's good practice to commit the resulting YAML files into your Git repository, so that they are in place for your fellow developers.

Use Site Aliases#

In Drush 9, all site aliases are prefixed with a group. In our case, this is lagoon. You can show all site aliases with their prefix via:

Show all site aliases
drush sa --format=list

and to use them:

Using Drush site alias
drush @lagoon.main ssh

Update Site Aliases#

If a new environment in Lagoon has been created, you can run drush site:alias-convert to update the site aliases file. If running this command does not update lagoon.site.yml, try deleting lagoon.site.yml first, and then re-run drush site:alias-convert.
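A sketch of that refresh, assuming the converted aliases were saved to drush/sites as above:

Refresh site aliases
rm drush/sites/lagoon.site.yml
docker-compose exec cli drush site:alias-convert /app/drush/sites --yes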

Drush rsync from local to remote environments#

If you would like to sync files from a local environment to a remote environment, you need to pass additional parameters:

Drush rsync
drush rsync @self:%files @lagoon.main:%files -- --omit-dir-times --no-perms --no-group --no-owner --chmod=ugo=rwX

This also applies to syncing one remote environment to another, if you're not using the Lagoon tasks UI to copy files between environments.

For example, if you wanted to sync the files from @lagoon.main to @lagoon.dev, and ran drush rsync @lagoon.main @lagoon.dev locally, without the extra parameters, you would probably run into a "Cannot specify two remote aliases" error.

To resolve this, you would first need to SSH into your destination environment with drush @lagoon.dev ssh, and then execute the rsync command with parameters similar to the above:

Drush rsync
drush rsync @lagoon.main:%files @self:%files -- --omit-dir-times --no-perms --no-group --no-owner --chmod=ugo=rwX

This is not necessary if you rsync from a remote to a local environment.


Also, we're working with the Drush maintainers to find a way to inject this automatically.


First Deployment of Drupal#


1. Make sure you are all set#

In order to make your first deployment a successful one, please make sure that your Drupal project is Lagoonized and that you have set up the project in Lagoon. If not, don't worry! Follow the Step-by-Step Guide, which shows you how this works.

2. Push#

With Lagoon, you create a new deployment by pushing into a branch that is configured to be deployed.

If you don't have any new code to push, don't worry, you can run

Git push
git commit --allow-empty -m "go, go! Power Rangers!"
git push

This will trigger a push, and the Git hosting will inform Lagoon about this push via the configured webhook.

If all is correct, you will see a notification in your configured chat system (this is configured by your friendly Lagoon administrator):

Slack notification of a deployment starting.

This tells you that Lagoon has just started to deploy your code. Depending on the size of the codebase and amount of containers, this will take a couple of seconds. Just relax. If you'd like to know what's happening now, check out the Build and Deploy Process of Lagoon.

You can also check your Lagoon UI to see the progress of any deployment (your Lagoon administrator has the info).

3. A fail#

Depending on the post-rollout tasks defined in .lagoon.yml, you might have run some tasks like drush updb or drush cr. These Drush tasks depend on a database existing within the environment, which obviously does not exist yet. Let's fix that! Keep reading.

4. Synchronize local database to the remote Lagoon environment#

With full Drush site alias support in Lagoon, you can synchronize a local database with the remote Lagoon environment.

Warning

You may have to tell pygmy about your public keys before the next step.

If you get an error like Permission denied (publickey), check out the documentation here: pygmy - adding ssh keys

First let's make sure that you can see the Drush site aliases:

Get site aliases
drush sa

This should return your just deployed environment (let's assume you just pushed into develop):

Returned site aliases
[drupal-example]cli-drupal:/app$ drush sa
@develop
@self
default

With this we can now synchronize the local database (which is represented in Drush via the site alias @self) with the remote one (@develop):

Drush sql-sync
drush sql-sync @self @develop

You should see something like:

Drush sql-sync results
[drupal-example]cli-drupal:/app$ drush sql-sync @self @develop
You will destroy data in ssh.lagoon.amazeeio.cloud/drupal and replace with data from drupal.
Do you really want to continue? (y/n): y
Starting to dump database on Source.                                                                              [ok]
Database dump saved to /home/drush-backups/drupal/20180227075813/drupal_20180227_075815.sql.gz               [success]
Starting to discover temporary files directory on Destination.                                                    [ok]
You will delete files in drupal-example-develop@ssh.lagoon.amazeeio.cloud:/tmp/drupal_20180227_075815.sql.gz and replace with data from /home/drush-backups/drupal/20180227075813/drupal_20180227_075815.sql.gz
Do you really want to continue? (y/n): y
Copying dump file from Source to Destination.                                                                     [ok]
Starting to import dump file onto Destination database.

Now let's try another deployment, again an empty push:

Git push
git commit --allow-empty -m "go, go! Power Rangers!"
git push

This time all should be green:

Deployment Success!

Click on the links in the notification, and you should see your Drupal site loaded in all its beauty! It will probably not have images yet, which we will handle in Step 6.

If it is still failing, check the logs link for more information.

5. Synchronize local files to the remote Lagoon environment#

You probably guessed it: we can do it with Drush:

Drush rsync
drush rsync @self:%files @develop:%files

It should show you something like:

Drush rsync results
[drupal-example]cli-drupal:/app$ drush rsync @self:%files @develop:%files
You will delete files in drupal-example-develop@ssh.lagoon.amazeeio.cloud:/app/web/sites/default/files and replace with data from /app/web/sites/default/files/
Do you really want to continue? (y/n): y

In some cases, though, it might not look correct, like here:

Drush rsync results
[drupal-example]cli-drupal:/app$ drush rsync @self:%files @develop:%files
You will delete files in drupal-example-develop@ssh.lagoon.amazeeio.cloud:'/app/web/%files' and replace with data from '/app/web/%files'/
Do you really want to continue? (y/n):

The reason for that is that Drupal cannot resolve the path of the files directory. This most probably means that Drupal is not fully configured or is missing a database. As a workaround you can use drush rsync @self:sites/default/files @develop:sites/default/files, but we suggest that you actually check your local and remote Drupal (you can test with drush status to see if the files directory is correctly configured).


6. It's done#

As soon as Lagoon is done building and deploying it will send a second notification to the chat system, like so:

Slack notification of complete deployment.

This tells you:

• Which project has been deployed.
• Which branch and Git SHA have been deployed.
• A link to the full logs of the build and deployment.
• Links to all routes (URLs) where the environment can be reached.

That's it! We hope that wasn't too hard - making DevOps accessible is what we are striving for.

But wait, how about other branches or the production environment?#

That's the beauty of Lagoon: it's exactly the same. Push the branch name you defined to be your production branch, and that one will be deployed.

Failure? Don't worry.#

Did the deployment fail? Oh no! But we're here to help:

1. Click on the logs link in the error notification. It will tell you where in the deployment process the failure happened.
2. If you can't figure it out, ask your Lagoon administrator, they are here to help!

Drupal on Lagoon#

Lagoon was built to host Drupal sites (no, seriously, it was - at least initially!)

In this section you'll find more information on the various services that have been customised for use with Drupal.

drupal_integrations Drupal scaffolding package#

The drupal_integrations package, available on Packagist, extends Drupal's core-composer-scaffold for use on Lagoon. It also provides the additional Drush command drush la to retrieve the Drush aliases for your Lagoon project.

lagoon-logs Drupal module#

The lagoon_logs module, available on drupal.org, provides zero-configuration logging for Drupal on Lagoon.

Integrate Drupal & Fastly#

Prerequisites#

• A Drupal 7, 8 or 9 site
• A Fastly service ID
• A Fastly API token with the permission to purge

Drupal 8 or 9 with cache tag purging#

Use Composer to get the latest version of the module:

Install Fastly
composer require drupal/fastly drupal/http_cache_control drupal/purge

You will need to enable the following modules:

• fastly
• fastlypurger
• http_cache_control (2.x)
• purge
• purge_ui (technically optional, but this is really handy to have enabled on production)
• purge_processor_lateruntime
• purge_processor_cron
• purge_queuer_coretags
• purge_drush (useful for purging via Drush; here is a list of commands)

Configure the Fastly module in Drupal#

Configure the Fastly service ID and API token. You can use runtime environment variables, or you can edit the settings form found at /admin/config/services/fastly:

• FASTLY_API_TOKEN
• FASTLY_API_SERVICE
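For local development, one way to provide these is through the environment section of docker-compose.yml - a sketch with placeholder values (on Lagoon itself you would normally set them as runtime environment variables):

docker-compose.yml
x-environment:
  &default-environment
    # Placeholder values - use your real Fastly credentials.
    FASTLY_API_TOKEN: "your-fastly-api-token"
    FASTLY_API_SERVICE: "your-fastly-service-id"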

A site ID is required; the module will generate one for you when you first install it. The idea behind the site ID is that it is a unique string which is appended as a cache tag on all requests. Thus, you are able to purge a single site from Fastly, even though multiple sites may flow through the same service in Fastly.

Set the purge options#

• Cache tag hash length: 4
• Purge method: Use soft purge

A 4 character cache tag is plenty for most sites; a 5 character cache tag is likely better for sites with millions of entities (to reduce cache tag collisions).

Soft purging should be used; this means the item in Fastly is marked as stale, rather than being purged, so that it can be used in the event the origin is down (with the 'serve while stale' feature).

Fastly admin UI for purging

Set the Stale Content Options#

Set the options to what makes sense for your site. Minimum 1 hour (3600), maximum 1 week (604800). Generally something like the following will be fine:

1. Stale while revalidate - on, 14440 seconds
2. Stale if error - on, 604800 seconds

Fastly admin UI for stale

Optionally configure the webhooks (so you can ping Slack, for instance, when a cache purge is sent).

Fastly admin UI for webhooks

Configure the Purge module#

Visit the purge page /admin/config/development/performance/purge

Set up the following options:

Cache Invalidation#

• Drupal Origin: Tag
• Fastly: E, Tag, URL

Queue#

• Queuers: Core tags queuer, Purge block(s)
• Queue: Database
• Processors: Core processor, Late runtime processor, Purge block(s)

Fastly Admin UI configuration

What this means is that we will be using Drupal's built-in core tag queuer (adding tags to the queue), the queue will be stored in the database (default), and the queue will be processed by:

• Cron processor
• Late runtime processor

In order for the cron processor to run, you need to ensure that cron is running on your site - ideally every minute. You can manually run it in your cli pod to ensure that purge_processor_cron_cron() is being executed without errors.

Start cron
[drupal8]production@cli-drupal:/app$ drush cron -v
 ...
 [notice] Starting execution of purge_processor_cron_cron(), execution of node_cron() took 21.16ms.

The Late runtime processor will run in hook_exit() for every page load; this can be useful to process the purges nearly as quickly as they come into the queue.

By having both, you guarantee that purges happen as soon as possible.

Optimal Cache Header Setup#

Out of the box, Drupal does not have the power to set different cache lifetimes in the browser vs. in Fastly. So if you do set long cache lifetimes in Drupal, end users will often not see them if their browser has cached the page. If you install the 2.x version of the HTTP Cache Control module, this will give you a lot more flexibility on what caches and for how long.

For most sites, a sensible default could be:

• Shared cache maximum age: 1 month
• Browser cache maximum age: 10 minutes
• 404 cache maximum age: 15 minutes
• 302 cache maximum age: 1 hour
• 301 cache maximum age: 1 hour
• 5xx cache maximum age: no cache
Note

This relies on your site having accurate cache tags represented for all the content that exists on the page.

Viewing caching headers using cURL#

Use this function (works in Linux and Mac OS X):

cURL function
function curlf() { curl -sLIXGET -H 'Fastly-Debug:1' "$@" | grep -iE 'X-Cache|Cache-Control|Set-Cookie|X-Varnish|X-Hits|Vary|Fastly-Debug|X-Served|surrogate-control|surrogate-key' }

Using cURL
$ curlf https://www.example-site-fastly.com
cache-control: max-age=601, public, s-maxage=2764800
surrogate-control: max-age=2764800, public, stale-while-revalidate=3600, stale-if-error=3600
fastly-debug-path: (D cache-wlg10427-WLG 1612906144) (F cache-wlg10426-WLG 1612906141) (D cache-fra19179-FRA 1612906141) (F cache-fra19122-FRA 1612906141)
fastly-debug-ttl: (H cache-wlg10427-WLG - - 3) (M cache-fra19179-FRA - - 0)
fastly-debug-digest: 1118d9fefc8a514ca49d49cb6ece04649e1acf1663398212650bb462ba84c381
x-served-by: cache-fra19179-FRA, cache-wlg10427-WLG
x-cache: MISS, HIT
x-cache-hits: 0, 1
vary: Cookie, Accept-Encoding

From the above headers we can see that:

• The HTML page is cacheable
• Browsers will cache the page for 601 seconds
• Fastly will cache the page for 32 days (2764800 seconds)
• Tiered caching is in effect (edge PoP in Wellington, and shield PoP in France)
• The HTML page was a cache hit at the edge PoP

Sending manual purge requests to Fastly#

If you ever want to remove a specific page from cache manually, there are ways to do this.

For a single page, you do not need any authentication:

Single page cURL
curl -Ssi -XPURGE -H 'Fastly-Soft-Purge:1' https://www.example.com/subpage

For cache tags, you need to supply your API token for authentication:

Cache tags
curl -XPOST -H "Fastly-Key:<Fastly API Key>" https://api.fastly.com/service/<serviceID>/purge/<surrogatekey>

You can always find what your site ID cache tag is by using PHP:

Find site ID cache tag
php > var_dump(substr(base64_encode(md5('bananasite', true)), 0, 4));
string(4) "DTRk"

So you can purge your entire site from Fastly fairly easily.

True client IPs#

We configure Fastly to send the actual client IP back on the HTTP header True-Client-IP. You can make Drupal respect this header with the following changes in settings.php:

settings.php
$settings['reverse_proxy'] = TRUE;
$settings['reverse_proxy_header'] = 'HTTP_TRUE_CLIENT_IP';

Drush integration#

Drush commands
 fastly:
   fastly:purge:all (fpall)                                                    Purge whole service.
   fastly:purge:key (fpkey)                                                    Purge cache by key.
   fastly:purge:url (fpurl)                                                    Purge cache by Url.

Drupal 7 with URL based purging#

1. Download and install the Fastly Drupal module.
2. Configure the Fastly service ID and API token.
3. Optionally configure the webhooks (so you can ping Slack for instance when a cache purge is sent).
4. Only URL based purging can be done in Drupal 7 (simple purging).
5. Alter Drupal's client IP in settings.php:

settings.php
$conf['reverse_proxy_header'] = 'HTTP_TRUE_CLIENT_IP';

PHPUnit and PhpStorm#

Note

This document assumes the following:

- You are using Docker.
- You are using a standard Amazee/Lagoon project with a docker-compose.yml file.
- You are on a Mac - it should work for other operating systems, but folder structure and some configuration settings may be different.

Configuring the project#

1. Duplicate the /core/phpunit.xml.dist file to /core/phpunit.xml
2. Edit /core/phpunit.xml and fill in the following variables:
    • SIMPLETEST_DB: mysql://drupal:drupal@mariadb:3306/drupal#db
    • SIMPLETEST_BASE_URL: <PROJECT_URL>
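Inside phpunit.xml these end up as env entries - a sketch of the relevant lines (the base URL is a placeholder):

phpunit.xml
<env name="SIMPLETEST_DB" value="mysql://drupal:drupal@mariadb:3306/drupal#db"/>
<env name="SIMPLETEST_BASE_URL" value="https://myproject.docker.amazee.io"/>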

Configuring PhpStorm#

Set Up Docker#

1. In PhpStorm, go to File > Settings > Build, Execution, Deployment > Docker
2. Click: +
3. Select: Docker for Mac

Set Up Docker

Set Up CLI interpreter#

Add a new CLI interpreter:

1. In PhpStorm, go to File > Settings > Languages & Frameworks > PHP
2. Click ... and then +
3. Next select: Add a new CLI interpreter from Docker, vagrant...
4. Use the following configurations:
    • Server: <DOCKER>
    • Configuration file(s): ./docker-compose.yml
    • Service: cli
    • Lifecycle: Connect to existing container ('docker-compose exec')
5. Path mappings:
    • Local path: <ROOT_PATH>
    • Remote path: /app

Add a new CLI interpreter

Set Up Remote Interpreter#

Add Remote Interpreter:

1. In PhpStorm, go to File > Settings > Languages & Frameworks > PHP > Test Frameworks
2. Click + and select PHPUnit by Remote Interpreter
3. Use the following configurations:
    • CLI Interpreter: <CLI_INTERPRETER>
    • Path mappings: <PROJECT_ROOT> -> /app
    • PHPUnit: Use Composer autoloader
    • Path to script: /app/vendor/autoload.php
    • Default configuration file: /app/web/core/phpunit.xml

Add Remote Interpreter

Setup/Configure Runner Template#

1. Configure runner:
    1. In PhpStorm, go to Run > Edit Configurations... > Templates > PHPUnit
    2. Use the following configurations:
        • Test scope: Defined in the configuration file
        • Interpreter: <CLI_INTERPRETER>

Configure runner

Note

If you are not on a Mac, this may vary.

Final checks#

Some final checks to run before you run a test!#

1. You have the project up and running: $ docker-compose up -d
2. The project is working without any errors; visit the site just to make sure it all works as expected - this is not 100% necessary, but nice to know it is working normally.
3. We should be ready to run some tests!

Ready to Run#

Now that you have the above configuration set up, it should be as straightforward as going to the test you want to run and pressing the green arrow!

Once you press this, PhpStorm will use Docker to enter the CLI container, then start running PHPUnit based upon the config.

Here it is in action, look at it go!!


Services#

MariaDB is the open-source successor to MySQL#

Learn about MariaDB with Drupal.

Documentation on the MariaDB-Drupal image.

Documentation on the plain MariaDB image (the MariaDB-Drupal image is built on this).

Redis is a fast, open-source, in-memory key-value data store for use as a database, cache, message broker, and queue#

Learn about Redis with Drupal.

Documentation on the Redis-persistent image.

Solr is an open-source search platform#

Learn about Solr with Drupal.

Documentation on the Solr-Drupal image.

Documentation on the plain Solr image (the Solr-Drupal image is built on this).

Varnish is a powerful, open-source HTTP engine and reverse HTTP proxy that helps to speed up your website#

Learn about Varnish with Drupal.

Documentation on the Varnish-Drupal image.

Documentation on the plain Varnish image (the Varnish-Drupal image is built on this).


MariaDB-Drupal#

The Lagoon mariadb-drupal Docker image Dockerfile is a customized mariadb image to use within Drupal projects in Lagoon. It differs from the mariadb image only in the initial database setup, done via some environment variables:

| Environment Variable | Default | Description |
| --- | --- | --- |
| MARIADB_DATABASE | drupal | Drupal database created at startup. |
| MARIADB_USER | drupal | Default user created at startup. |
| MARIADB_PASSWORD | drupal | Password of default user created at startup. |

If the LAGOON_ENVIRONMENT_TYPE variable is set to production, performance is tuned accordingly by using MARIADB_INNODB_BUFFER_POOL_SIZE=1024 and MARIADB_INNODB_LOG_FILE_SIZE=256.

Additional MariaDB Logging#

During the course of development, it may be necessary to enable either query logging or slow query logging. To do so, set the environment variables MARIADB_LOG_SLOW or MARIADB_LOG_QUERIES. This can be done in docker-compose.yml.
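A sketch of what that could look like - the docs only require the variables to be set, so the values here are illustrative:

docker-compose.yml
  mariadb:
    ...
    environment:
      # Illustrative values - the variables just need to be set.
      MARIADB_LOG_SLOW: "true"
      MARIADB_LOG_QUERIES: "true"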

Connecting to MySQL container from the host#

If you would like to connect to your MySQL database inside the Docker container with an external tool like Sequel Pro, MySQL Workbench, HeidiSQL, DBeaver, plain old mysql-cli or anything else, here's how to get the IP and port info.

Get published MySQL port from the container#

By default, Docker assigns a randomly published port for MySQL during each container start. This is done to prevent port collisions.

To get the published port via docker:

Run: docker port [container_name].

Get port
$ docker port drupal_example_mariadb_1
3306/tcp -> 0.0.0.0:32797

Or via docker-compose inside a Drupal repository:

Run: docker-compose port [service_name] [internal_port].

Get port
docker-compose port mariadb 3306
0.0.0.0:32797

During development, if you are using an external database tool, it may become cumbersome to continually check and set the MySQL connection port.

To set a static port, edit your service definition in your docker-compose.yml.

docker-compose.yml
  mariadb:
    ...
    ports:
      - "33772:3306" # Exposes port 3306 with port 33772 on the host. Note that by doing this you are responsible for managing port collisions.

Warning

By setting a static port you become responsible for managing port collisions.

Connect to MySQL#

Now you can use these details to connect to whatever database management tool you'd like.

|  | Linux | OS X |
| --- | --- | --- |
| IP/Host | IP from container | docker.amazee.io |
| Port | Published port from container | Published port from container |
| Username | drupal | drupal |
| Password | drupal | drupal |
| Database | drupal | drupal |
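For example, connecting with the plain mysql client from OS X could look like this, substituting the published port you found above:

Connect with mysql-cli
mysql -h docker.amazee.io -P 32797 -u drupal -pdrupal drupal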

NGINX-Drupal#

The Lagoon nginx-drupal Docker image. Optimized to work with Drupal. Based on the Lagoon nginx image.

Lagoon adaptations#

This image is prepared to be used on Lagoon. There are therefore some things already done:

• Folder permissions are automatically adapted with fix-permissions, so this image will work with a random user.
• To keep the drupal.conf configuration file as clean and customizable as possible, we added include directives in the main sections of the file: server, location /, location @drupal and location @php.
• Further information can be found in the section Drupal.conf customization.

Included Drupal configuration (drupal.conf)#

The image includes a full NGINX working configuration for Drupal 7, 8 and 9. It includes some extra functionalities.

Drupal.conf customization#

The drupal.conf file is a customized version of the nginx configuration file, optimized for Drupal. Customers have different ways of customizing it:

• Modifying it (hard to support in case of errors).
• Using built-in customization through *.conf files.

The drupal.conf file is divided into several sections. The sections we've included in our customizations are:

• server
• location /
• location @drupal
• location @php

For each of these sections, there are two includes:

• *_prepend.conf
• *_append.conf

Here is what the location @drupal section looks like:

drupal.conf
location @drupal {
    include /etc/nginx/conf.d/drupal/location_drupal_prepend*.conf;

    include        /etc/nginx/fastcgi.conf;
    fastcgi_param  SCRIPT_NAME        /index.php;
    fastcgi_param  SCRIPT_FILENAME    $realpath_root/index.php;
    fastcgi_pass   ${NGINX_FASTCGI_PASS:-php}:9000;

    include /etc/nginx/conf.d/drupal/location_drupal_append*.conf;
}

This configuration allows customers to create files called location_drupal_prepend.conf and location_drupal_append.conf, where they can put all the configuration they want to insert before and after the other statements.

Those files, once created, MUST exist in the nginx container, so add them to Dockerfile.nginx like so:

dockerfile.nginx
COPY location_drupal_prepend.conf /etc/nginx/conf.d/drupal/location_drupal_prepend.conf
RUN fix-permissions /etc/nginx/conf.d/drupal/location_drupal_prepend.conf

Drupal Core Statistics Module Configuration#

If you're using the core Statistics module, you may run into an issue that needs a quick configuration change.

With the default NGINX configuration, the request to the tracking endpoint /core/modules/statistics/statistics.php is denied (404).

This is related to the default NGINX configuration:

drupal.conf
location ~* ^.+\.php$ {
    try_files /dev/null @drupal;
}

To fix the issue, we instead define a specific location rule and inject this as a location prepend configuration:

drupal.conf
## Allow access to the statistics endpoint.
location ~* ^(/core/modules/statistics/statistics.php) {
      try_files /dev/null @php;
}

And copy this during the NGINX container build:

dockerfile.nginx
# Add specific Drupal statistics module NGINX configuration.
COPY .lagoon/nginx/location_prepend_allow_statistics.conf /etc/nginx/conf.d/drupal/location_prepend_allow_statistics.conf

PHP-CLI-Drupal#

The Lagoon php-cli-drupal Docker image is optimized to work with Drupal. It is based on the Lagoon php-cli image, and has all the command line tools needed for the daily maintenance of a Drupal website:

• drush
• drupal console
• drush launcher (which will fall back to Drush 8 if no site-installed Drush is found)

Supported versions#

• 7.3 (available for compatibility only, no longer officially supported)
• 7.4 Dockerfile - uselagoon/php-7.4-cli-drupal
• 8.0 Dockerfile - uselagoon/php-8.0-cli-drupal
• 8.1 Dockerfile - uselagoon/php-8.1-cli-drupal

All PHP versions use their own Dockerfiles.

Lagoon adaptations#

This image is prepared to be used on Lagoon. There are therefore some things already done:

• Folder permissions are automatically adapted with fix-permissions, so this image will work with a random user.

Redis#

We recommend using Redis for internal caching. Add the Redis service to docker-compose.yml:

docker-compose.yml
  redis:
    image: uselagoon/redis-5
    labels:
      lagoon.type: redis
    << : *default-user # Uses the defined user from top.
    environment:
      << : *default-environment

Also, to configure Redis, add the following to your settings.php.

Drupal 7#

settings.php
  if (getenv('LAGOON')) {
    $conf['redis_client_interface'] = 'PhpRedis';
    $conf['redis_client_host'] = 'redis';
    $conf['lock_inc'] = 'sites/all/modules/contrib/redis/redis.lock.inc';
    $conf['path_inc'] = 'sites/all/modules/contrib/redis/redis.path.inc';
    $conf['cache_backends'][] = 'sites/all/modules/contrib/redis/redis.autoload.inc';
    $conf['cache_default_class'] = 'Redis_Cache';
    $conf['cache_class_cache_form'] = 'DrupalDatabaseCache';
    $conf['cache_class_cache_field'] = 'DrupalDatabaseCache';
  }

Depending on the file system structure, the module paths may need to be updated.

Drupal 8#

The Drupal 8 config is largely stock. Notably, Redis is disabled while Drupal is being installed.

settings.php
if (getenv('LAGOON')) {
  $settings['redis.connection']['interface'] = 'PhpRedis';
  $settings['redis.connection']['host'] = getenv('REDIS_HOST') ?: 'redis';
  $settings['redis.connection']['port'] = getenv('REDIS_SERVICE_PORT') ?: '6379';
  $settings['cache_prefix']['default'] = getenv('LAGOON_PROJECT') . '_' . getenv('LAGOON_GIT_SAFE_BRANCH');

  // Do not set the cache during installations of Drupal.
  if (!drupal_installation_attempted() && extension_loaded('redis')) {
    $settings['cache']['default'] = 'cache.backend.redis';

    // And allows to use it without the Redis module being enabled.
    $class_loader->addPsr4('Drupal\\redis\\', 'modules/contrib/redis/src');

    $settings['bootstrap_container_definition'] = [
      'parameters' => [],
      'services' => [
        'redis.factory' => [
          'class' => 'Drupal\redis\ClientFactory',
        ],
        'cache.backend.redis' => [
          'class' => 'Drupal\redis\Cache\CacheBackendFactory',
          'arguments' => ['@redis.factory', '@cache_tags_provider.container', '@serialization.phpserialize'],
        ],
        'cache.container' => [
          'class' => '\Drupal\redis\Cache\PhpRedis',
          'factory' => ['@cache.backend.redis', 'get'],
          'arguments' => ['container'],
        ],
        'cache_tags_provider.container' => [
          'class' => 'Drupal\redis\Cache\RedisCacheTagsChecksum',
          'arguments' => ['@redis.factory'],
        ],
        'serialization.phpserialize' => [
          'class' => 'Drupal\Component\Serialization\PhpSerialize',
        ],
      ],
    ];
  }
}

Persistent#

Redis can also be configured as a persistent backend.

docker-compose.yml
redis:
  image: uselagoon/redis-5-persistent
  labels:
    lagoon.type: redis-persistent
  environment:
    << : *default-environment

Environment Variables#

Environment variables are meant to store some common information about Redis.

| Environment Variable | Default | Description |
| --- | --- | --- |
| LOGLEVEL | notice | Redis loglevel |
| DATABASES | 1 | Number of databases |
| MAXMEMORY | 100mb | Maximum memory usage of Redis |
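Tying back to the maxmemory tuning advice earlier in this page, a sketch of raising the default limit on the service (the value is a placeholder):

docker-compose.yml
  redis:
    image: uselagoon/redis-5
    labels:
      lagoon.type: redis
    environment:
      # Placeholder value - size according to your tuning observations.
      MAXMEMORY: "200mb"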

Redis Failover#

Here is a snippet to implement a Redis failover in case the Redis container is not available (for example, during maintenance).

The following is inserted into Drupal's active settings.php file.

settings.php
if (getenv('LAGOON')) {
  $contrib_path = is_dir('sites/all/modules/contrib') ? 'sites/all/modules/contrib' : 'sites/all/modules';
  $redis = DRUPAL_ROOT . '/sites/all/modules/contrib/redis';

  if (file_exists("$redis/redis.module")) {
    require_once "$redis/redis.module";
    $conf['redis_client_host'] = getenv('REDIS_HOST') ?: 'redis';
    $conf['redis_client_port'] = getenv('REDIS_SERVICE_PORT') ?: 6379;
    $conf['cache_prefix'] = getenv('REDIS_CACHE_PREFIX') ?: getenv('LAGOON_PROJECT') . '_' . getenv('LAGOON_GIT_SAFE_BRANCH');
    try {
      // Ensure that there is a connection to redis.
      $client = Redis_Client::getClient();
      $response = $client->ping();
      if (!$response) {
        throw new \Exception('Redis could be reached but is not responding correctly.');
      }
      $conf['redis_client_interface'] = 'PhpRedis';
      $conf['lock_inc'] = $contrib_path . '/redis/redis.lock.inc';
      $conf['path_inc'] = $contrib_path . '/redis/redis.path.inc';
      $conf['cache_backends'][] = $contrib_path . '/redis/redis.autoload.inc';
      $conf['cache_default_class'] = 'Redis_Cache';
    } catch (\Exception $e) {
      // Redis is not available for this request; we should not configure the
      // redis backend and should ensure no cache is used. This will retry on
      // the next request.
      if (!class_exists('DrupalFakeCache')) {
        $conf['cache_backends'][] = 'includes/cache-install.inc';
      }
      $conf['cache_default_class'] = 'DrupalFakeCache';
    }
  }
}

Solr-Drupal#

Standard use#

For Solr 5.5, 6.6 and 7.7, we ship the default schema files provided by the search_api_solr Drupal module. Add the Solr version you would like to use in your docker-compose.yml file, following our example.

Custom schema#

To implement schema customizations for Solr in your project, look to how Lagoon creates our standard images.

• In the solr section of your docker-compose.yml file, replace image: amazeeio/solr:7.7 with:

docker-compose.yml
  build:
    context: .
    dockerfile: solr.dockerfile

• Place your schema files in your code repository. We typically like to use .lagoon/solr.
• Create a solr.dockerfile.

solr.dockerfile
FROM amazeeio/solr:7.7

COPY .lagoon/solr /solr-conf/conf

RUN precreate-core drupal /solr-conf

CMD ["solr-foreground"]

The goal is to have your Solr configuration files exist at /solr-conf/conf in the image you are building.

Multiple cores#

To implement multiple cores, you will also need to ship your own Solr schema as above. The only change needed is to the CMD of the Dockerfile - repeat the pattern of precreate-core corename /solr-conf/ for each core you require.

solr.dockerfile
FROM amazeeio/solr:7.7-drupal

RUN precreate-core drupal-index1 /solr-conf && \
    precreate-core drupal-index2 /solr-conf && \
    precreate-core drupal-index3 /solr-conf

CMD ["solr-foreground"]

Varnish#

We suggest using Drupal with a Varnish reverse proxy. Lagoon provides a varnish-drupal Docker image that has Varnish already configured with a Drupal Varnish config.

This Varnish config does the following:

• It understands Drupal session cookies and automatically disables Varnish caching for any authenticated request.
• It automatically caches any assets (images, css, js, etc.) for one month, and also sends this header to the browser, so browsers cache the assets as well. This happens for authenticated and non-authenticated requests.
• It has support for BAN and URIBAN, which is used by the Drupal 8 purge module.
• It removes utm_ and gclid from the URL parameters to prevent Google Analytics links from creating multiple cache objects.
• Many other good things - just check out the drupal.vcl.

Usage with Drupal 8#

TL;DR: Check out the drupal8-advanced example in our examples repo; it ships with the needed modules and needed Drupal configuration.

Note: many of these examples are on the same drupal-example-simple repo, but different branches/hashes. Be sure to get the exact branch from the examples list!

Install Purge and Varnish Purge modules#

In order to fully use Varnish with Drupal 8 cache tags, you need to install the Purge and Varnish Purge modules. They ship with many submodules. We suggest installing at least the following:

• purge
• purge_drush
• purge_tokens
• purge_ui
• purge_processor_cron
• purge_processor_lateruntime
• purge_queuer_coretags
• varnish_purger
• varnish_purge_tags

Grab them all at once:

Install Purge and Varnish Purge
composer require drupal/purge drupal/varnish_purge

drush en purge purge_drush purge_tokens purge_ui purge_processor_cron purge_processor_lateruntime purge_queuer_coretags varnish_purger varnish_purge_tags

Configure Varnish Purge#

1. Visit Configuration > Development > Performance > Purge.
2. Add a purger via Add purger.
3. Select Varnish Bundled Purger (not the Varnish Purger; see the Varnish on Drupal behind the scenes section for more information).
4. Click the dropdown beside the just-added purger and click Configure.
5. Give it a nice name, Lagoon Varnish sounds good.
6. Configure it with:

Configure Varnish Purge
 TYPE: Tag

 REQUEST:
 Hostname: varnish
 (or whatever your Varnish is called in docker-compose.yml)
 Port: 8080
 Path: /
 Request Method: BAN
 Scheme: http

 HEADERS:
 Header: Cache-Tags
 Value: [invalidations:separated_pipe]

7. Save configuration.

That's it! If you'd like to test this locally, make sure you read the next section.

Configure Drupal for Varnish#

There are a few other configurations that can be done, as shown after this list:

1. Uninstall the Internal Page Cache Drupal module with drush pmu page_cache. It can cause some weird double caching situations where only the Varnish cache is cleared, but not the internal cache, and changes appear very slowly to the users. Also, it uses a lot of cache storage on big sites.
2. Change $config['system.performance']['cache']['page']['max_age'] in production.settings.php to 2628000. This tells Varnish to cache sites for up to 1 month, which sounds like a lot, but the Drupal 8 cache tag system is so awesome that it will basically make sure that the Varnish cache is purged whenever something changes.
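The max_age change from step 2 is a one-liner in production.settings.php:

production.settings.php
$config['system.performance']['cache']['page']['max_age'] = 2628000;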

Test Varnish Locally#

Drupal setups on Lagoon locally have Varnish and the Drupal caches disabled, as it can be rather hard to develop with all of them enabled. This is done via the following:

• The VARNISH_BYPASS=true environment variable in docker-compose.yml, which tells Varnish to basically disable itself.
• Drupal is configured to not send any cache headers (via setting the Drupal config $config['system.performance']['cache']['page']['max_age'] = 0 in development.settings.php).

To test Varnish locally, change the following in docker-compose.yml (see the sketch after this list):

• Set VARNISH_BYPASS to false in the Varnish service section.
• Set LAGOON_ENVIRONMENT_TYPE to production in the x-environment section.
• Run docker-compose up -d, which restarts all services with the new environment variables.
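A sketch of the two relevant spots in docker-compose.yml (unrelated keys elided):

docker-compose.yml
x-environment:
  &default-environment
    LAGOON_ENVIRONMENT_TYPE: production

varnish:
    ...
    environment:
      << : *default-environment
      VARNISH_BYPASS: "false"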

Now you should be able to test Varnish!


Here is a short example assuming there is a node with the ID 1 and has the URL drupal-example.docker.amazee.io/node/1

+
    +
  1. Run curl -I drupal-example.docker.amazee.io/node/1 and look for these headers:
      • X-LAGOON should include varnish, which tells you that the request actually went through Varnish.
      • Age: will still be 0, as Varnish has probably never seen this site before, and the first request will warm the Varnish cache.
      • X-Varnish-Cache will be MISS, also telling you that Varnish didn't find a previously cached version of this request.
  2. Now run curl -I drupal-example.docker.amazee.io/node/1 again, and the headers should be (an illustrative example follows this list):
      • Age: will show you how many seconds ago the request was cached. In our example it will probably be something between 1 and 30, depending on how fast you execute the command.
      • X-Varnish-Cache will be HIT, telling you that Varnish successfully found a cached version of the request and returned it to you.
  3. Change some content at node/1 in Drupal.
  4. Run curl -I drupal-example.docker.amazee.io/node/1 again, and the headers should be the same as the very first request:
      • Age: 0
      • X-Varnish-Cache: MISS
+ +
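
For illustration only, the headers of the second request might look roughly like this (the exact X-LAGOON value depends on your routing chain, and the Age value will vary):

Example response headers
HTTP/1.1 200 OK
+X-LAGOON: varnish>nginx>php
+Age: 12
+X-Varnish-Cache: HIT
+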

Varnish on Drupal behind the scenes#

+

If you come from other Drupal hosts or have done a Drupal 8 & Varnish tutorial before, you might have realized that there are a couple of changes in the Lagoon Drupal Varnish tutorial. Let's address them:

+

Usage of Varnish Bundled Purger instead of Varnish Purger#

+

The Varnish Purger purger sends a BAN request for each cache tag that should be invalidated. Drupal has a lot of cache tags, and this could lead to quite a large number of requests sent to Varnish. Varnish Bundled Purger instead sends just one BAN request for multiple invalidations, separated nicely by pipe (|), which fits perfectly with the Varnish regular expression system of bans. This causes fewer requests and a smaller ban list table inside Varnish.

+
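
As an illustration, a single bundled invalidation is one BAN request whose Cache-Tags header carries all the tags separated by pipes; the tag names below are made up:

Example bundled BAN request
BAN / HTTP/1.1
+Host: varnish
+Cache-Tags: node:1|node_list|rendered
+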

Usage of Purge Late runtime processor#

+

In contrast to the Varnish module in Drupal 7, the Drupal 8 Purge module takes a slightly different approach to purging caches: it adds invalidations to a queue, which is then processed by different processors. Purge suggests using the Cron processor, which means that the Varnish cache is only purged during a cron run. This can lead to old data being cached by Varnish, as your cron is probably not configured to run every minute or so, and can result in confused editors and clients.

+

Instead, we suggest using the Purge Late runtime processor, which processes the queue at the end of each Drupal request. This has the advantage that if a cache-tag is added to the purge queue (because an editor edited a Drupal node, for example) the cache-tags for this node are directly purged. Together with the Varnish Bundled Purger, this means just a single additional request to Varnish at the very end of a Drupal request, which causes no noticeable processing time on the request.

+

Full support for Varnish Ban Lurker#

+

Our Varnish configurations have full support for Ban Lurker. Ban Lurker helps you maintain a clean cache and keep Varnish running smoothly. It is basically a small tool that runs through the Varnish ban list and compares the bans to the cached requests in the Varnish cache. Varnish bans are used to mark an object in the cache for purging. If Ban Lurker finds an item that should be "banned," it removes it from the cache and also removes the ban itself. This way, seldom-accessed objects with very long TTLs, which would normally never be banned and would just keep taking up cache space, are removed and can be refreshed. This keeps the list of bans small, and with that, the processing time for Varnish on each request lower. Check out the official Varnish post on Ban Lurker and some other helpful reading for more information.

+

Troubleshooting#

+

Varnish doesn't cache? Or something else not working? Here are a couple of ways to debug:

+ +
    +
  • Run drush p-debug-en to enable debug logging of the Purge module. This should show you debugging output in the Drupal log under admin/reports/dblog.
  • Make sure that Drupal sends proper cache headers. To best test this, use the URL that Lagoon generates for bypassing the Varnish cache (locally in our Drupal example this is http://nginx-drupal-example.docker.amazee.io). Check for the Cache-Control: max-age=900, public header, where the 900 is what you configured in $config['system.performance']['cache']['page']['max_age'].
  • Make sure that the environment variable VARNISH_BYPASS is not set to true (see docker-compose.yml, and run docker-compose up -d varnish to make sure the environment variable is configured correctly).
  • If all else fails, and before you flip your table (╯°□°)╯︵ ┻━┻, talk to the Lagoon team; we're happy to help.

Step by Step: Getting Drupal ready to run on Lagoon#

+

1. Lagoon Drupal Setting Files#

+

In order for Drupal to work with Lagoon, we need to teach Drupal about Lagoon and Lagoon about Drupal. This happens by copying specific YAML and PHP files into your Git repository.

+

If you're working on a Drupal project, you can check out one of the various Drupal example projects in our examples repository. We have Drupal 8 and 9 and some variants of each depending on your needs, such as database types. Clone the repository that best suits your needs to get started!

+

Here is a summary of the Lagoon- and Drupal-specific files you will find:

+
    +
  • .lagoon.yml - The main file that Lagoon uses to understand what should be deployed, among many other things. This file has some sensible Drupal defaults. If you would like to edit or modify it, please check the documentation for .lagoon.yml.
  • docker-compose.yml, .dockerignore, and *.dockerfile (or Dockerfile) - These files are used to run your local Drupal development environment; they tell Docker which services to start and how to build them. They contain sensible defaults and many commented lines. We hope they are well-commented enough to be self-describing. If you would like to find out more, see the documentation for docker-compose.yml.
  • sites/default/* - These .php and .yml files tell Drupal how to communicate with Lagoon containers both locally and in production. They also provide a straightforward system for specific overrides in development and production environments. Unlike other Drupal hosting systems, Lagoon never injects Drupal settings files into your Drupal. Therefore, you can edit them however you like. Like all other files, they contain sensible defaults and some commented parts.
  • drush/aliases.drushrc.php - These files are specific to Drush and tell Drush how to talk to the Lagoon GraphQL API in order to learn about all the site aliases there are.
  • drush/drushrc.php - Some sensible defaults for Drush commands.
+

Update your .gitignore Settings#

+

Don't forget to make sure your .gitignore will allow you to commit the settings files.

+

Drupal ships with sites/*/settings*.php and sites/*/services*.yml in .gitignore. Remove those entries, as with Lagoon we never have sensitive information in the Git repository.

+

Note about WEBROOT in Drupal 8#

+

Unfortunately the Drupal community has not decided on a standardized WEBROOT folder name. Some projects put Drupal within web, and others within docroot or somewhere else. The Lagoon Drupal settings files assume that your Drupal is within web, but if this is different for your Drupal, please adapt the files accordingly.

+

Note about composer.json#

+

If you installed Drupal via composer, please check your composer.json and make sure that the name is NOT drupal/drupal, as this could confuse Drush and other tools of the Drupal universe. Just rename it to something like myproject/drupal.
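
A minimal sketch of the relevant part of composer.json (myproject/drupal is just an example name):

composer.json
{
+    "name": "myproject/drupal",
+    "type": "project"
+}
+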

+

2. Customize docker-compose.yml#

+

Don't forget to customize the values in lagoon-project & LAGOON_ROUTE with your site-specific name & the URL you'd like to access the site with. Here's an example:

+
docker-compose.yml
x-environment:
+  &default-environment
+    LAGOON_PROJECT: *lagoon-project
+    # Route that should be used locally. If you are using pygmy, this route *must* end with .docker.amazee.io.
+    LAGOON_ROUTE: http://drupal-example.docker.amazee.io
+
+

3. Build Images#

+

First, we need to build the defined images:

+
Build images
docker-compose build
+
+

This will tell docker-compose to build the Docker images for all containers that have a build: definition in the docker-compose.yml. Usually for Drupal this is the case for the cli, nginx and php images. We do this because we want to run specific build commands (like composer install) or inject specific environment variables (like WEBROOT) into the images.

+

Usually, building is not necessary every time you edit your Drupal code (as the code is mounted into the containers from your host), but rebuilding does not hurt. Plus, Lagoon will build the exact same Docker images during a deploy, so you can check that your build will also work during a deployment by just running docker-compose build again.

+

4. Start Containers#

+

Now that the images are built, we can start the containers:

+
Start containers
docker-compose up -d
+
+

This will bring up all containers. After the command is done, you can check with docker-compose ps to ensure that they are all fully up and have not crashed. If there is a problem, check the logs with docker-compose logs -f [servicename].

+

5. Rerun composer install (for Composer projects only)#

+

In a local development environment, you probably want all dependencies downloaded and installed, so connect to the cli container and run composer install:

+
Run composer install in CLI
docker-compose exec cli bash
+composer install
+
+

This might sound weird, as there was already a composer install executed during the build step, so let us explain:

+
    +
  • In order to be able to edit files on the host and have them immediately available in the container, the default docker-compose.yml mounts the whole folder into the containers (this happens with .:/app:delegated in the volumes section; see the snippet below). This also means that all dependencies installed during the Docker build are overwritten with the files on the host.
  • Locally, you probably want dependencies defined as require-dev in composer.json to exist as well, while on a production deployment they would just use unnecessary space. So we run composer install --no-dev in the Dockerfile and run composer install manually inside the container.
+
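
For reference, this is roughly what that mount looks like in docker-compose.yml (the cli service is shown as one example):

docker-compose.yml
services:
+  cli:
+    volumes:
+      # Mount the repository root into the container so host edits are visible immediately.
+      - .:/app:delegated
+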

If everything went well, open the LAGOON_ROUTE defined in docker-compose.yml (for example http://drupal.docker.amazee.io) and you should be greeted by a nice Drupal error. Don't worry - that's OK right now; the most important thing is that it tries to load a Drupal site.

+

If you get a 500 or similar error, make sure everything loaded properly with Composer.

+

6. Check Status and Install Drupal#

+

Finally it's time to install Drupal, but just before that we want to make sure everything works. We suggest using Drush for that:

+
Drush status
docker-compose exec cli bash
+drush status
+
+

This should return something like:

+
Drush status result
[drupal-example]cli-drupal:/app$ drush status
+[notice] Missing database table: key_value
+Drupal version       :  8.6.1
+Site URI             :  http://drupal.docker.amazee.io
+Database driver      :  mysql
+Database hostname    :  mariadb
+Database port        :  3306
+Database username    :  drupal
+Database name        :  drupal
+PHP binary           :  /usr/local/bin/php
+PHP config           :  /usr/local/etc/php/php.ini
+PHP OS               :  Linux
+Drush script         :  /app/vendor/drush/drush/drush
+Drush version        :  9.4.0
+Drush temp           :  /tmp
+Drush configs        :  /home/.drush/drush.yml
+                        /app/vendor/drush/drush/drush.yml
+Drupal root          :  /app/web
+Site path            :  sites/default
+
+
+

Warning

+

You may have to tell pygmy about your public key before the next step.

+
+

If you get an error like Permission denied (publickey), check out the documentation here: pygmy - adding ssh keys

+

Now it is time to install Drupal (if instead you would like to import an existing SQL file, please skip to step 7, but we suggest you start with a clean Drupal installation in the beginning to be sure everything works).

+
Install Drupal
drush site-install
+
+

This should output something like:

+
drush site-install
[drupal-example]cli-drupal:/app$ drush site-install
+You are about to DROP all tables in your 'drupal' database. Do you want to continue? (y/n): y
+Starting Drupal installation. This takes a while. Consider using the --notify global option.
+Installation complete.  User name: admin  User password: a7kZJekcqh
+Congratulations, you installed Drupal!
+
+

Now you can visit the URL defined in LAGOON_ROUTE and you should see a freshly installed, clean Drupal site - congrats!

+

Congrats!

+

7. Import existing Database Dump#

+

If you already have an existing Drupal site, you probably want to import its database over to your local site.

+

There are many different ways to create a database dump. If your current hosting provider has Drush installed, you can use the following:

+
Drush sql-dump
drush sql-dump --result-file=dump.sql
+
+Database dump saved to dump.sql
+
+

Now you have a dump.sql file that contains your whole database.

+

Copy this file into your Git repository and connect to the cli, and you should see the file in there:

+
Viewing dump.sql
[drupal-example]cli-drupal:/app$ ls -l dump.sql
+-rw-r--r--    1 root     root          5281 Dec 19 12:46 dump.sql
+
+

Now you can drop the current database, and then import the dump.

+
Import dump.sql
drush sql-drop
+
+drush sql-cli < dump.sql
+
+

Verify that everything works by visiting the URL of your project. You should have a functional copy of your Drupal site!

+

8. Drupal files directory#

+

A Drupal site also needs the files directory. As the whole folder is mounted into the Docker containers, add the files into the correct folder (probably web/sites/default/files, sites/default/files or something similar). Remember what you've set as your WEBROOT - it may not be the same for all projects.
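
For example, if you have a local copy of the files directory and your WEBROOT is web, copying it into place might look like this (the source path is hypothetical):

Copy files into place
rsync -av /path/to/files-backup/ web/sites/default/files/
+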

+

9. Done#

+

You are done with your local setup. The Lagoon team wishes you happy Drupaling!


Subfolders#

+

An example could be: www.example.com points to one Drupal site, while www.example.com/blog loads a blog built in another Drupal.

+

It would be possible to run both Drupals in a single Git repository and deploy it as a whole, but this workflow might not fit every team, and having separate Git repositories fits some situations better.

+

Modifications of root application#

+

The root application (in this example, the Drupal site for www.example.com) needs a couple of NGINX configs that will configure NGINX as a reverse proxy to the subfolder applications:

+

location_prepend.conf#

+

Create a file called location_prepend.conf in the root of your Drupal installation:

+
location_prepend.conf
resolver 8.8.8.8 valid=30s;
+
+location ~ ^/subfolder {
+  # If $http_x_forwarded_proto is empty (i.e. not set by an upstream reverse proxy),
+  # set it to the current scheme.
+  set_if_empty $http_x_forwarded_proto $scheme;
+
+  proxy_set_header      X-Forwarded-For $proxy_add_x_forwarded_for;
+  proxy_set_header      X-Forwarded-Proto $scheme;
+  proxy_set_header      X-Forwarded-Proto $http_x_forwarded_proto;
+  proxy_set_header      X-Lagoon-Forwarded-Host $host;
+  # Will be used by downstream to know the original host.
+  proxy_set_header      X-REVERSEPROXY $hostname;
+  proxy_set_header      FORWARDED "";
+  # Unset FORWARDED because drupal8 gives errors if it is set.
+  proxy_set_header      Proxy "";
+  # Unset Proxy because drupal8 gives errors if it is set.
+  proxy_ssl_server_name on;
+
+  # NGINX needs a variable set in order for the DNS resolution to work correctly.
+  set                   $subfolder_drupal_host "https://nginx-lagoonproject-${LAGOON_GIT_SAFE_BRANCH}.clustername.com:443";
+  # LAGOON_GIT_SAFE_BRANCH variable will be replaced during docker entrypoint.
+  proxy_pass            $subfolder_drupal_host;
+  proxy_set_header      Host $proxy_host;
+  # $proxy_host will be automatically generated by NGINX based on proxy_pass (it needs to be without scheme and port).
+
+  expires off; # make sure we honor cache headers from the proxy and not overwrite them
+}
+

Replace the following strings:

+
    +
  • /subfolder with the name of the subfolder you want to use. For example, /blog.
  • nginx with the service that you want to point to in the subfolder project.
  • lagoonproject with the Lagoon project name of the subfolder project.

A filled-in example is shown below.
+
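
For instance, with the subfolder /blog, the service nginx, and a Lagoon project called blogproject, the relevant lines would become (the cluster hostname is illustrative):

location_prepend.conf (example)
location ~ ^/blog {
+  set $subfolder_drupal_host "https://nginx-blogproject-${LAGOON_GIT_SAFE_BRANCH}.clustername.com:443";
+  proxy_pass $subfolder_drupal_host;
+}
+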

NGINX Dockerfile#

+

Add the following to your NGINX Dockerfile (nginx.dockerfile or Dockerfile.nginx):

+
nginx.dockerfile
COPY location_prepend.conf /etc/nginx/conf.d/drupal/location_prepend.conf
+RUN fix-permissions /etc/nginx/conf.d/drupal/*
+
+

Modifications of subfolder application#

+

Like the root application, we also need to teach the subfolder application (in this example, the Drupal installation for www.example.com/blog) that it is running under a subfolder. To do this, we create two files:

+

location_drupal_append_subfolder.conf#

+

Create a file called location_drupal_append_subfolder.conf in the root of your subfolder Drupal installation:

+
location_drupal_append_subfolder.conf
# When injecting a script name that is prefixed with `subfolder`, Drupal will
+# render all URLs with `subfolder` prefixed
+fastcgi_param  SCRIPT_NAME        /subfolder/index.php;
+
+# If we are running via a reverse proxy, we inject the original HOST URL
+# into PHP. With this Drupal will render all URLs with the original HOST URL,
+# and not the current used HOST.
+
+# We first set the HOST to the regular host variable.
+fastcgi_param  HTTP_HOST          $http_host;
+# Then we overwrite it with `X-Lagoon-Forwarded-Host` if it exists.
+fastcgi_param  HTTP_HOST          $http_x_lagoon_forwarded_host if_not_empty;
+
+

Replace /subfolder with the name of the subfolder you want to use. For example, /blog.

+

server_prepend_subfolder.conf#

+

Create a file called server_prepend_subfolder.conf in the root of your subfolder Drupal installation:

+
server_prepend_subfolder.conf
# Check for redirects before we do the internal NGINX rewrites.
+# This is done because the internal NGINX rewrites uses `last`,
+# which instructs NGINX to not check for rewrites anymore (and
+# `if` is part of the redirect module).
+include /etc/nginx/helpers/010_redirects.conf;
+
+# This is an internal NGINX rewrite, it removes `/subfolder/`
+# from the requests so that NGINX handles the request as it would
+# have been `/` from the beginning.
+# The `last` flag is also important. It will cause NGINX not to
+# execute any more rewrites, because it would redirect forever
+# with the rewrites below.
+rewrite ^/subfolder/(.*)          /$1             last;
+
+# Make sure redirects are NOT absolute, to ensure NGINX does not
+# overwrite the host of the URL - which could be something other than
+# what NGINX currently thinks it is serving.
+absolute_redirect off;
+
+# If a request just has `/subfolder` we 301 redirect to `/subfolder/`
+# (Drupal really likes a trailing slash)
+rewrite ^/subfolder               /subfolder/     permanent;
+
+# Any other request we prefix 301 redirect with `/subfolder/`
+rewrite ^\/(.*)                   /subfolder/$1   permanent;
+
+

Replace /subfolder with the name of the subfolder you want to use. For example, /blog.
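
As a concrete illustration with /blog as the subfolder, the three rewrites above would read:

server_prepend_subfolder.conf (example)
rewrite ^/blog/(.*)    /$1         last;
+rewrite ^/blog         /blog/      permanent;
+rewrite ^\/(.*)        /blog/$1    permanent;
+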

+

NGINX Dockerfile#

+

We also need to modify the NGINX Dockerfile.

+

Add the following to your NGINX Dockerfile (nginx.dockerfile or Dockerfile.nginx):

+
nginx.dockerfile
COPY location_drupal_append_subfolder.conf /etc/nginx/conf.d/drupal/location_drupal_append_subfolder.conf
+COPY server_prepend_subfolder.conf /etc/nginx/conf.d/drupal/server_prepend_subfolder.conf
+RUN fix-permissions /etc/nginx/conf.d/drupal/*
+

Lagoon#

+

+

Lagoon - the Open Source Application Delivery Platform for Kubernetes#

+

Lagoon gives developers what they dream about. It's a system that allows developers to run the exact same code in their local and production environment. The same Docker images, the same service configurations, and the same code.

+

Who are you?#

+
+
    +
  • If you want to use Lagoon to host your website or application, visit Using Lagoon.
  • If you want to develop Lagoon (add features, fix bugs), visit Developing Lagoon.
+
+

TL;DR: How Lagoon Works#

+
    +
  1. Developers define and configure needed services within YAML files.
  2. When they are happy, they push the code to Git.
  3. Lagoon parses the YAML files and adds in any additional needed configuration.
  4. Lagoon builds the needed Docker images.
  5. Lagoon pushes them to a Docker registry.
  6. Lagoon creates the needed resources in Kubernetes.
  7. Lagoon monitors the deployment of the containers.
  8. When all is done, Lagoon informs the developers in different ways (Slack, email, website, etc.).
+

Help?#

+

Questions? Ideas? Meet the maintainers and contributors.

+

Chat with us on the Lagoon Discord: https://discord.gg/te5hHe95JE

+

A couple of things about Lagoon#

+
    +
  1. Lagoon is based on microservices. The deployment and build workflow is very complex. We have multiple version control sources, multiple clusters, and multiple notification systems. Each deployment is unique and can take from seconds to hours. It's built with flexibility and robustness in mind. Microservices communicate through a messaging system, which allows us to scale individual services up and down. It allows us to survive downtimes of individual services. It also allows us to try out new parts of Lagoon in production without affecting others.
  2. Lagoon uses many programming languages. Each programming language has specific strengths. We try to decide which language makes the most sense for each service. Currently, a lot of Lagoon is built in Node.js. This is partly because we started with Node.js, but also because Node.js allows asynchronous processing of webhooks, tasks and more. We are likely going to change the programming language of some services. This is what is great about microservices! We can replace a single service with another language without worrying about other parts of the platform.
  3. Lagoon is not Drupal-specific. Everything has been built so that it can run any Docker image. There are existing Docker images for Drupal, and support for Drupal-specific tools like Drush. But that's it!
  4. Lagoon is DevOps. It allows developers to define the services they need and customize them as they need. You might think this is not the right way to do it, and gives too much power to developers. We believe that as system engineers, we need to empower developers. If we allow developers to define services locally, and test them locally, they will find bugs and mistakes themselves.
  5. Lagoon runs on Docker and Kubernetes. (That one should be obvious, right?)
  6. Lagoon can be completely locally developed and tested.
  7. Lagoon is completely integration tested. This means we can test the whole process: from receiving Git webhooks to deploying into a Docker container, the same Git hash is deployed in the cluster.
  8. Most important: It's a work in progress. It's not done yet. At amazee.io, we believe that as a hosting community, we need to work together and share code where we can.
+

We want you to understand the Lagoon infrastructure and how the services work together. Here is a schema (it's a little out of date - it doesn't include some of the more recent services we've added, or cover Kubernetes, so we're working on an update!): Lucid Chart

+

History of Lagoon#

+

As described, Lagoon is a dream come true. At amazee.io, we've been hosting Drupal for more than 8 years. This is the fourth major iteration of our hosting platform. The third iteration was built around Puppet and Ansible. Every single piece of the platform was done with configuration management. This allowed very fast setup of new servers, but at the same time lacked customizability for developers. We implemented some customizability, some of it already using Docker in production. However, we were never completely happy with it. We realized that our existing platform wasn't enough. With the rise of decoupled Drupal, the need to run Node.js on the server side, the requests for Elasticsearch, and different Solr versions, we had to do more.

+

At the same time, we've been using Docker for many years for local development. It was always an idea to use Docker for everything in production. The only problem was the connection between local development and production environments. There are other systems that allow you to run Drupal in Docker in production. But, nothing allowed you to test the exact same images and services locally and in production.

+

Lagoon was born in 2017. It has since been developed into a system that runs Docker in production. Lagoon has replaced our third generation hosting platform with a cutting edge all Docker-based system.

+

Open Source#

+

At amazee.io, we believe in open source. It was always troubling for us that open source code like Drupal was hosted on proprietary hosting platforms. The strength and success of a hosting company is not just their deployment systems or service configurations. It's the people and knowledge that run the platform. The processes, skills, ability to react to unforeseen situations, and last but not least, the support they provide their clients.

+

License#

+

Lagoon is available under an Apache 2.0 License.


Add Group#

+
Add group
  lagoon add group -N groupname
+

Adding a Project#

+

Add the project to Lagoon#

+
    +
  1.

    Run this command:

    +
    Add project
    lagoon add project \
    +  --gitUrl <YOUR-GITHUB-REPO-URL> \
    +  --openshift 1 \
    +  --productionEnvironment <YOUR-PROD-ENV> \
    +  --branches <THE-BRANCHES-YOU-WANT-TO-DEPLOY> \
    +  --project <YOUR-PROJECT-NAME>
    +
    +
      +
    • The value for --openshift is the ID of your Kubernetes cluster.
    • Your production environment should be the name of the branch you want to have as your production environment.
    • The branches you want to deploy might look like this: "^(main|develop)$"
    • The name of your project is anything you want - "Company Website," "example," etc.

    A filled-in example of this command is shown below.
    +
  2. Go to the Lagoon UI, and you should see your project listed!
+
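
For illustration, a filled-in version of the command might look like this (all values are examples):

Add project (example)
lagoon add project \
+  --gitUrl git@github.com:example/website.git \
+  --openshift 1 \
+  --productionEnvironment main \
+  --branches "^(main|develop)$" \
+  --project example-website
+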

Add the deploy key to your Git repository#

+

Lagoon creates a deploy key for each project. You now need to add it as a deploy key in your Git repository to allow Lagoon to download the code.

+
    +
  1.

    Run the following command to get the deploy key:

    +
    Get project-key
    lagoon get project-key --project <YOUR-PROJECT-NAME>
    +
    +
  2. Copy the key and save it as a deploy key in your Git repository.
+

GitHub / GitLab / Bitbucket

+

Add the webhooks endpoint to your Git repository#

+

In order for Lagoon to be able to deploy on code updates, it needs to be connected to your Git repository.

+
    +
  1. Add your Lagoon cluster's webhook endpoint to your Git repository:

      • Payload URL: <LAGOON-WEBHOOK-INGRESS>
      • Content Type: JSON
      • Active: Active (allows you to enable/disable as required)
      • Events: Select the relevant events, or choose All. Usually push and branch create/delete are required.
+

GitHub / GitLab / Bitbucket


Create Lagoon user#

+
    +
  1.

    Add user via Lagoon CLI:

    +
    Add user
    lagoon add user --email user@example.com --firstName MyFirstName --lastName MyLastName
    +
    +
  2.

    Go to your email and click the password reset link in the email.

    +
  3. Follow the instructions and log in to the Lagoon UI with the created password.
  4. Add the SSH public key of the user via Settings.

Deploy Your Project#

+
    +
  1.

    Run the following command to deploy your project:

    +
    Deploy
    lagoon deploy branch -p <YOUR-PROJECT-NAME> -b <YOUR-BRANCH-NAME>
    +
    +
  2.

    Go to the Lagoon UI and take a look at your project - you should now see the environment for this project!

    +
  3. Look in your cluster at your pods list, and you should see the build pod as it begins to clone Git repositories, set up services, etc.
    See all pods
    kubectl get pods --all-namespaces | grep lagoon-build
    +
    +

EFS Provisioner#

+
+

Info

+

This is only applicable to AWS installations.

+
+
    +
  1.

    Add Helm repository:

    +
    Add Helm repo
    helm repo add stable https://charts.helm.sh/stable
    +
    +
  2.

    Create efs-provisioner-values.yml in your config directory and update the values:

    +
    efs-provisioner-values.yml
    efsProvisioner:
    +  efsFileSystemId: <efsFileSystemId>
    +  awsRegion: <awsRegion>
    +  path: /
    +  provisionerName: example.com/aws-efs
    +  storageClass:
    +    name: bulk
    +    isDefault: false
    +    reclaimPolicy: Delete
    +    mountOptions: []
    +global:
    +  deployEnv: prod
    +
    +
  3.

    Install the EFS Provisioner (a sample claim using the bulk storage class follows this step):

    +
    Install EFS Provisioner
    helm upgrade --install --create-namespace \
    +  --namespace efs-provisioner --wait \
    +  -f efs-provisioner-values.yml \
    +  efs-provisioner stable/efs-provisioner
    +
    +
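
Once installed, workloads can request EFS-backed storage through the bulk storage class defined above. A minimal sketch of such a PersistentVolumeClaim (the claim name and size are hypothetical):

example-bulk-pvc.yml
apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: example-bulk-storage
+spec:
+  accessModes:
+    - ReadWriteMany
+  storageClassName: bulk
+  resources:
+    requests:
+      storage: 5Gi
+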

GitLab#

+

Not needed for most installs; this is only required if you want to integrate Lagoon with GitLab for user and group authentication.

+
    +
  1. Create a Personal Access Token in GitLab for a user with admin access.
  2. Create system hooks under your-gitlab.com/admin/hooks pointing to webhookhandler.lagoon.example.com, and define a random secret token.
      +
    1. Enable "repository update events".
    +
  3.

    Update lagoon-core-values.yml:

    +
    lagoon-core-values.yml
    api:
    +  additionalEnvs:
    +    GITLAB_API_HOST: <<URL of GitLab example: https://your-gitlab.com>>
    +    GITLAB_API_TOKEN: << Personal Access token with Access to API >>
    +    GITLAB_SYSTEM_HOOK_TOKEN: << System Hook Secret Token >>
    +webhook-handler:
    +  additionalEnvs:
    +    GITLAB_API_HOST: <<URL of GitLab example: https://your-gitlab.com>>
    +    GITLAB_API_TOKEN: << Personal Access token with Access to API >>
    +    GITLAB_SYSTEM_HOOK_TOKEN: << System Hook Secret Token >>
    +webhooks2tasks:
    +  additionalEnvs:
    +    GITLAB_API_HOST: <<URL of GitLab example: https://your-gitlab.com>>
    +    GITLAB_API_TOKEN: << Personal Access token with Access to API >>
    +    GITLAB_SYSTEM_HOOK_TOKEN: << System Hook Secret Token >>
    +
    +
  4.

    Helm upgrade the lagoon-core Helm chart.

    +
  5. If you've already created users in Keycloak, delete them.
  6. Run the following command in an API pod:
    Sync with GitLab
      yarn sync:gitlab:all
    +
    +

Install Harbor#

+
    +
  1.

    Add Helm repository:

    +
    Add Helm repository
    helm repo add harbor https://helm.goharbor.io
    +
    +
  2.

    Consider the optimal configuration of Harbor for your particular circumstances - see their docs for more recommendations:

    +
      +
    1. We recommend using S3-compatible storage for image blobs (imageChartStorage).
    2. We recommend using a managed database service for the Postgres service (database.type).
    3. In high-usage scenarios we recommend using a managed Redis service (redis.type).
    +
  3.

    Create the file harbor-values.yml inside of your config directory. The proxy-buffering annotations help with large image pushes:

    +
    harbor-values.yml
    expose:
    +  ingress:
    +    annotations:
    +      kubernetes.io/tls-acme: "true"
    +      nginx.ingress.kubernetes.io/proxy-buffering: "off"
    +      nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
    +    hosts:
    +      core: harbor.lagoon.example.com
    +  tls:
    +    enabled: true
    +    certSource: secret
    +    secret:
    +      secretName: harbor-harbor-ingress
    +externalURL: https://harbor.lagoon.example.com
    +harborAdminPassword: <your Harbor Admin Password>
    +chartmuseum:
    +  enabled: false
    +clair:
    +  enabled: false
    +notary:
    +  enabled: false
    +trivy:
    +  enabled: false
    +jobservice:
    +  jobLogger: stdout
    +
    +
  4.

    Install Harbor, checking the requirements for the currently supported Harbor versions:

    +
    Install Harbor
    helm upgrade --install --create-namespace \
    +  --namespace harbor --wait \
    +  -f harbor-values.yml \
    +  harbor harbor/harbor
    +
    +
  5.

    Visit Harbor at the URL you set in harbor-values.yml.

    +
      +
    1. Username: admin
    2. Password:
    +
    Get Harbor secret
    kubectl -n harbor get secret harbor-core -o jsonpath="{.data.HARBOR_ADMIN_PASSWORD}" | base64 --decode
    +
    +
  6.

    You will need to add the above Harbor credentials to the Lagoon Remote values.yml in the next step, as well as harbor-values.yml.

    +

Install Lagoon Remote#

+

Now we will install Lagoon Remote into the Lagoon namespace. The RabbitMQ service is the broker.

+
    +
  1.

    Create lagoon-remote-values.yml in your config directory as you did the previous two files, and update the values.

    +
      +
    • rabbitMQPassword
    +
    Get RabbitMQ password
    kubectl -n lagoon-core get secret lagoon-core-broker -o jsonpath="{.data.RABBITMQ_PASSWORD}" | base64 --decode
    +
    +
      +
    • rabbitMQHostname
    +
    lagoon-remote-values.yml
    lagoon-core-broker.lagoon-core.svc.cluster.local
    +
    +
      +
    • taskSSHHost
    +
    Update SSH Host
    kubectl get service lagoon-core-broker-amqp-ext \
    +  -o custom-columns="NAME:.metadata.name,IP ADDRESS:.status.loadBalancer.ingress[*].ip,HOSTNAME:.status.loadBalancer.ingress[*].hostname"
    +
    +
      +
    • harbor-password
    +
    Get Harbor secret
    kubectl -n harbor get secret harbor-harbor-core -o jsonpath="{.data.HARBOR_ADMIN_PASSWORD}" | base64 --decode
    +
    +
  2.

    Add the Harbor configuration from the Install Harbor step.

    +
    lagoon-remote-values.yml
    lagoon-build-deploy:
    +  enabled: true
    +  extraArgs:
    +    - "--enable-harbor=true"
    +    - "--harbor-url=https://harbor.lagoon.example.com"
    +    - "--harbor-api=https://harbor.lagoon.example.com/api/"
    +    - "--harbor-username=admin"
    +    - "--harbor-password=<from harbor-harbor-core secret>"
    +  rabbitMQUsername: lagoon
    +  rabbitMQPassword: <from lagoon-core-broker secret>
    +  rabbitMQHostname: lagoon-core-broker.lagoon-core.svc.cluster.local
    +  lagoonTargetName: <name of lagoon remote, can be anything>
    +  taskSSHHost: <IP of ssh service loadbalancer>
    +  taskSSHPort: "22"
    +  taskAPIHost: "api.lagoon.example.com"
    +dbaas-operator:
    +  enabled: true
    +
    +  mariadbProviders:
    +    production:
    +      environment: production
    +      hostname: 172.17.0.1.nip.io
    +      readReplicaHostnames:
    +      - 172.17.0.1.nip.io
    +      password: password
    +      port: '3306'
    +      user: root
    +
    +    development:
    +      environment: development
    +      hostname: 172.17.0.1.nip.io
    +      readReplicaHostnames:
    +      - 172.17.0.1.nip.io
    +      password: password
    +      port: '3306'
    +      user: root
    +
    +
  3.

    Install Lagoon Remote:

    +
    Install Lagoon remote
    helm upgrade --install --create-namespace \
    +  --namespace lagoon \
    +  -f lagoon-remote-values.yml \
    +  lagoon-remote lagoon/lagoon-remote
    +
    +

Lagoon Backups#

+

Lagoon uses the K8up backup operator: https://k8up.io. Lagoon isn't tightly integrated with K8up; rather, Lagoon creates its resources in a way that K8up can automatically discover and back up.

+

Lagoon has been extensively tested with K8up 1.x, but is not compatible with 2.x yet. We recommend using the 1.1.0 chart version (App version v1.2.0).

+
    +
  1.

    Create a new AWS user with the following policies:

    +
    example K8up IAM user
    {
    +  "Version":"2012-10-17",
    +  "Statement":[
    +    {
    +      "Sid":"VisualEditor0",
    +      "Effect":"Allow",
    +      "Action":[
    +        "s3:ListAllMyBuckets",
    +        "s3:CreateBucket",
    +        "s3:GetBucketLocation"
    +      ],
    +      "Resource":"*"
    +    },
    +    {
    +      "Sid":"VisualEditor1",
    +      "Effect":"Allow",
    +      "Action":"s3:ListBucket",
    +      "Resource":"arn:aws:s3:::baas-*"
    +    },
    +    {
    +      "Sid":"VisualEditor2",
    +      "Effect":"Allow",
    +      "Action":[
    +        "s3:PutObject",
    +        "s3:GetObject",
    +        "s3:AbortMultipartUpload",
    +        "s3:DeleteObject",
    +        "s3:ListMultipartUploadParts"
    +      ],
    +      "Resource":"arn:aws:s3:::baas-*/*"
    +    }
    +  ]
    +}
    +
    +
  2.

    Create k8up-values.yml (customize for your provider):

    +
    k8up-values.yml
    k8up:
    +  envVars:
    +    - name: BACKUP_GLOBALS3ENDPOINT
    +      value: 'https://s3.eu-west-1.amazonaws.com'
    +    - name: BACKUP_GLOBALS3BUCKET
    +      value: ''
    +    - name: BACKUP_GLOBALKEEPJOBS
    +      value: '1'
    +    - name: BACKUP_GLOBALSTATSURL
    +      value: 'https://backup.lagoon.example.com'
    +    - name: BACKUP_GLOBALACCESSKEYID
    +      value: ''
    +    - name: BACKUP_GLOBALSECRETACCESSKEY
    +      value: ''
    +    - name: BACKUP_BACKOFFLIMIT
    +      value: '2'
    +    - name: BACKUP_GLOBALRESTORES3BUCKET
    +      value: ''
    +    - name: BACKUP_GLOBALRESTORES3ENDPOINT
    +      value: 'https://s3.eu-west-1.amazonaws.com'
    +    - name: BACKUP_GLOBALRESTORES3ACCESSKEYID
    +      value: ''
    +    - name: BACKUP_GLOBALRESTORES3SECRETACCESSKEY
    +      value: ''
    +  timezone: Europe/Zurich
    +
    +
  3.

    Install K8up (a quick sanity check for the operator follows this list):

    +
    Install K8up Step 1
    helm repo add appuio https://charts.appuio.ch
    +
    +
    Install K8up Step 2
    kubectl apply -f https://github.com/vshn/k8up/releases/download/v1.2.0/k8up-crd.yaml
    +
    +
    Install K8up Step 3
    helm upgrade --install --create-namespace \
    +  --namespace k8up \
    +  -f k8up-values.yml \
    +  --version 1.1.0 \
    +  k8up appuio/k8up
    +
    +
  4.

    Update lagoon-core-values.yml:

    +
    lagoon-core-values.yml
    s3BAASAccessKeyID: <<Access Key ID for restore bucket>>
    +s3BAASSecretAccessKey: <<Access Key Secret for restore bucket>>
    +
    +
  5.

    Redeploy lagoon-core.

    +
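
As a quick sanity check after step 3 (assuming the k8up namespace and release name used above), you can confirm that the K8up operator pod is running:

Check K8up
kubectl -n k8up get pods
+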

Install the Lagoon CLI#

+
    +
  1. Check https://github.com/uselagoon/lagoon-cli#install on how to install for your operating system. For macOS and Linux, you can use Homebrew:
      +
    1. brew tap uselagoon/lagoon-cli
    2. brew install lagoon
    +
  2.

    The CLI needs to know how to communicate with Lagoon, so run the following command:

    +
    Lagoon config
        lagoon config add \
    +        --graphql https://YOUR-API-URL/graphql \
    +        --ui https://YOUR-UI-URL \
    +        --hostname YOUR.SSH.IP \
    +        --lagoon YOUR-LAGOON-NAME \
    +        --port 22
    +
    +
  3.

    Access Lagoon by authenticating with your SSH key.

    +
      +
    1. In the Lagoon UI (the URL is in values.yml if you forget), go to Settings.
    2. Add your public SSH key.
    3. You need to set the default Lagoon to your Lagoon so that it doesn’t try to use the amazee.io defaults:

      Lagoon config
          lagoon config default --lagoon <YOUR-LAGOON-NAME>
      +
    +
  4.

    Now run lagoon login. Lagoon talks to SSH and authenticates against your public/private key pair, and gets a token for your username.

    +
  5.

    Verify via lagoon whoami that you are logged in.

    +
+
+

Info

+

We don’t generally recommend using the Lagoon Admin role, but you’ll need to create an admin account first to get started. Ideally, you’ll immediately create another account to work from which is not an admin.

+

Install Lagoon Core#

+

Install the Helm chart#

+
    +
  1.

    Add Lagoon Charts repository to your Helm Repositories:

    +
    Add Lagoon Charts repository
    helm repo add lagoon https://uselagoon.github.io/lagoon-charts/
    +
    +
  2.

    Create a directory for the configuration files we will create, and make sure that it’s version controlled. Ensure that you reference this path in commands referencing your values.yml files.

    +
  3. Create values.yml in the directory you’ve just created. Update the endpoint URLs (change them from api.lagoon.example.com to your values). Example: https://github.com/uselagoon/lagoon-charts/blob/main/charts/lagoon-core/ci/linter-values.yaml
  4.

    Now run the helm upgrade --install command, pointing to values.yml, like so:

    +
    Upgrade Helm with values.yml
    helm upgrade --install --create-namespace --namespace lagoon-core -f values.yml lagoon-core lagoon/lagoon-core
    +
    +
  5.

    Lagoon Core is now installed!

    +
+
+

Warning

+

Sometimes we run into Docker Hub pull limits. We are considering moving our images elsewhere if this continues to be a problem.

+
+

Configure Keycloak#

+

Visit the Keycloak dashboard at the URL you defined in the values.yml for Keycloak.

+
    +
  1. Click "Administration Console".
  2. Username: admin
  3. Password: use the lagoon-core-keycloak secret, key-value KEYCLOAK_ADMIN_PASSWORD.
  4.

    Retrieve the secret like so:

    +
    Retrieve secret
    kubectl -n lagoon-core get secret lagoon-core-keycloak -o jsonpath="{.data.KEYCLOAK_ADMIN_PASSWORD}" | base64 --decode
    +
    +
  5.

    Click on User on top right.

    +
      +
    1. Go to Manage Account.
    2. Add an Email for the admin account you created.
    3. Save.
    +
  6. Go to Realm Lagoon -> Realm Settings -> Email:
      +
    1. Configure the email server for Keycloak, and test the connection via the “Test connection” button.
    +
  7. Go to Realm Lagoon -> Realm Settings -> Login:
      +
    1. Enable “Forgot Password”.
    2. Save.
    +
+

Log in to the UI#

+

You should now be able to visit the Lagoon UI at the URL you defined in the values.yml for the UI.

+
    +
  1. Username: lagoonadmin
  2. Password: use the lagoon-core-keycloak secret, key-value KEYCLOAK_LAGOON_ADMIN_PASSWORD.
  3. Retrieve the secret:

    Retrieve secret
        kubectl -n lagoon-core get secret lagoon-core-keycloak -o jsonpath="{.data.KEYCLOAK_LAGOON_ADMIN_PASSWORD}" | base64 --decode
    +

Lagoon Files#

+

Lagoon files are used to store the file output of tasks, such as backups, and can be hosted on any S3-compatible storage.

+
    +
  1. +

    Create a new AWS user with the following policy (a CLI sketch for creating it follows this list):

    +
    Example files IAM user
    {
    +  "Version":"2012-10-17",
    +  "Statement":[
    +    {
    +      "Effect":"Allow",
    +      "Action":[
    +        "s3:ListBucket",
    +        "s3:GetBucketLocation",
    +        "s3:ListBucketMultipartUploads"
    +      ],
    +      "Resource":"arn:aws:s3:::S3_BUCKET_NAME"
    +    },
    +    {
    +      "Effect":"Allow",
    +      "Action":[
    +        "s3:PutObject",
    +        "s3:GetObject",
    +        "s3:DeleteObject",
    +        "s3:ListMultipartUploadParts",
    +        "s3:AbortMultipartUpload"
    +      ],
    +      "Resource":"arn:aws:s3:::S3_BUCKET_NAME/*"
    +    }
    +  ]
    +}
    +
    +
  2. +
  3. +

    Update lagoon-core-values.yml:

    +
    lagoon-core-values.yml
    s3FilesAccessKeyID: <<Access Key ID>>
    +s3FilesBucket: <<Bucket Name for Lagoon Files>>
    +s3FilesHost: <<S3 endpoint like "https://s3.eu-west-1.amazonaws.com" >>
    +s3FilesSecretAccessKey: <<Access Key Secret>>
    +s3FilesRegion: <<S3 Region >>
    +
    +
  4. +
  5. +

    If you use ingress-nginx in front of lagoon-core, we suggest the following configuration, which will allow for bigger file uploads:

    +
    lagoon-core-values.yml
    controller:
    +  config:
    +    client-body-timeout: '600' # max 600 secs for file uploads
    +    proxy-send-timeout: '1800' # max 30min connections - needed for websockets
    +    proxy-read-timeout: '1800' # max 30min connections - needed for websockets
    +    proxy-body-size: 1024m # 1GB file size
    +    proxy-buffer-size: 64k # bigger buffer
    +
    +
  6. +
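If you prefer the AWS CLI over the console, the user, policy and access key above can be created along these lines. This is a sketch: lagoon-files is a hypothetical user name, and policy.json is assumed to contain the policy document from step 1 with S3_BUCKET_NAME filled in.

Create files IAM user (sketch)
aws iam create-user --user-name lagoon-files
aws iam put-user-policy --user-name lagoon-files \
  --policy-name lagoon-files-s3 --policy-document file://policy.json
aws iam create-access-key --user-name lagoon-files

The last command returns the Access Key ID and Secret Access Key to use in lagoon-core-values.yml.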
+ + + + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/installing-lagoon/lagoon-logging/index.html b/installing-lagoon/lagoon-logging/index.html new file mode 100644 index 0000000000..07ece98e78 --- /dev/null +++ b/installing-lagoon/lagoon-logging/index.html @@ -0,0 +1,2813 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Lagoon Logging - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Lagoon Logging#

+

Lagoon integrates with OpenSearch to store application, container and router logs. Lagoon Logging collects the application, router and container logs from Lagoon projects, and sends them to the logs concentrator. It needs to be installed onto each lagoon-remote instance.

+

In addition, it should be installed in the lagoon-core cluster to collect logs from the lagoon-core service. This is configured in the LagoonLogs section.

+

Logging Overview: Lucid Chart

+

See also: Logging.

+

Read more about Lagoon logging here: https://github.com/uselagoon/lagoon-charts/tree/main/charts/lagoon-logging

+
    +
  1. +

    Create lagoon-logging-values.yaml:

    +
    lagoon-logging-values.yaml
    tls:
    +  caCert: |
    +    << content of ca.pem from Logs-Concentrator>>
    +  clientCert: |
    +    << content of client.pem from Logs-Concentrator>>
    +  clientKey: |
    +    << content of client-key.pem from Logs-Concentrator>>
    +forward:
    +  username: <<Username for Lagoon Remote 1>>
    +  password: <<Password for Lagoon Remote 1>>
    +  host: <<ExternalIP of Logs-Concentrator Service LoadBalancer>>
    +  hostName: <<Hostname in Server Cert of Logs-Concentrator>>
    +  hostPort: '24224'
    +  selfHostname: <<Hostname in Client Cert of Logs-Concentrator>>
    +  sharedKey: <<Generated ForwardSharedKey of Logs-Concentrator>>
    +  tlsVerifyHostname: false
    +clusterName: <<Short Cluster Identifier>>
    +logsDispatcher:
    +  serviceMonitor:
    +    enabled: false
    +logging-operator:
    +  monitoring:
    +    serviceMonitor:
    +      enabled: false
    +lagoonLogs:
    +  enabled: true
    +  rabbitMQHost: lagoon-core-broker.lagoon-core.svc.cluster.local
    +  rabbitMQUser: lagoon
    +  rabbitMQPassword: <<RabbitMQ Lagoon Password>>
    +excludeNamespaces: {}
    +
    +
  2. +
  3. +

    Install lagoon-logging (a quick verification sketch follows this list):

    +
    Install lagoon-logging
    helm repo add banzaicloud-stable https://kubernetes-charts.banzaicloud.com
    +
    +helm upgrade --install --create-namespace \
    +  --namespace lagoon-logging \
    +  -f lagoon-logging-values.yaml \
    +  lagoon-logging lagoon/lagoon-logging
    +
    +
  4. +
+
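To check that the release came up, a quick look at the pods in the lagoon-logging namespace is usually enough (pod names vary by chart version):

Verify lagoon-logging
kubectl -n lagoon-logging get pods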

Logging NGINX Ingress Controller#

+

If you'd like logs from ingress-nginx inside lagoon-logging:

+
    +
  1. The ingress controller must be installed in the namespace ingress-nginx
  2. +
  3. +

    Add the following log-format-upstream configuration to your ingress-nginx Helm values:

    +
    ingress-nginx log-format-upstream
    controller:
    +  config:
    +    log-format-upstream: >-
    +      {
    +      "time": "$time_iso8601",
    +      "remote_addr": "$remote_addr",
    +      "x-forwarded-for": "$http_x_forwarded_for",
    +      "true-client-ip": "$http_true_client_ip",
    +      "req_id": "$req_id",
    +      "remote_user": "$remote_user",
    +      "bytes_sent": $bytes_sent,
    +      "request_time": $request_time,
    +      "status": "$status",
    +      "host": "$host",
    +      "request_proto": "$server_protocol",
    +      "request_uri": "$uri",
    +      "request_query": "$args",
    +      "request_length": $request_length,
    +      "request_time": $request_time,
    +      "request_method": "$request_method",
    +      "http_referer": "$http_referer",
    +      "http_user_agent": "$http_user_agent",
    +      "namespace": "$namespace",
    +      "ingress_name": "$ingress_name",
    +      "service_name": "$service_name",
    +      "service_port": "$service_port"
    +      }
    +
    +
  4. +
  5. +

    Your logs should start flowing!

    +
  6. +
+ + + + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/installing-lagoon/logs-concentrator/index.html b/installing-lagoon/logs-concentrator/index.html new file mode 100644 index 0000000000..06fc5b0889 --- /dev/null +++ b/installing-lagoon/logs-concentrator/index.html @@ -0,0 +1,2710 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Logs Concentrator - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Logs-Concentrator#

+

Logs-concentrator collects the logs being sent by Lagoon clusters and augments them with additional metadata before inserting them into Elasticsearch.

+
    +
  1. Create certificates according to ReadMe: https://github.com/uselagoon/lagoon-charts/tree/main/charts/lagoon-logs-concentrator
  2. +
  3. +

    Create logs-concentrator-values.yaml:

    +
    logs-concentrator-values.yaml
    tls:
    +  caCert: |
    +    <<contents of ca.pem>>
    +  serverCert: |
    +    <<contents of server.pem>>
    +  serverKey: |
    +    <<contents of server-key.pem>>
    +elasticsearchHost: elasticsearch-opendistro-es-client-service.elasticsearch.svc.cluster.local
    +elasticsearchAdminPassword: <<ElasticSearch Admin Password>>
    +forwardSharedKey: <<Random 32 Character Password>>
    +users:
    +  - username: <<Username for Lagoon Remote 1>>
    +    password: <<Random Password for Lagoon Remote 1>>
    +service:
    +  type: LoadBalancer
    +serviceMonitor:
    +  enabled: false
    +
    +
  4. +
  5. +

    Install logs-concentrator (a sketch for reading back the service’s external IP follows this list):

    +
    Install logs-concentrator
    helm upgrade --install --create-namespace \
    +  --namespace lagoon-logs-concentrator \
    +  -f logs-concentrator-values.yaml \
    +  lagoon-logs-concentrator lagoon/lagoon-logs-concentrator
    +
    +
  6. +
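The lagoon-logging values on each remote need the external IP of this service. Since the chart creates a Service of type LoadBalancer, you can read it back like so (the EXTERNAL-IP column may take a minute to populate):

Get Logs-Concentrator external IP
kubectl -n lagoon-logs-concentrator get service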
+ + + + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/installing-lagoon/opendistro/index.html b/installing-lagoon/opendistro/index.html new file mode 100644 index 0000000000..23de4c08e6 --- /dev/null +++ b/installing-lagoon/opendistro/index.html @@ -0,0 +1,2907 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + OpenDistro - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

OpenDistro#

+

To install an OpenDistro cluster, you will need to configure TLS and secrets so that Lagoon can talk to it securely. You're going to have to create a handful of JSON files - put these in the same directory as the values files you've been creating throughout this installation process.

+

Install the OpenDistro Helm chart, according to https://opendistro.github.io/for-elasticsearch-docs/docs/install/helm/

+

Create Keys and Certificates#

+
    +
  1. +

    Generate certificates

    +
    +

    Note:

    +

    CFSSL is CloudFlare's PKI/TLS swiss army knife. It is both a command line tool and an HTTP API server for signing, verifying, and bundling TLS certificates. It requires Go 1.12+ to build.

    +
    +
      +
    1. Install CFSSL: https://github.com/cloudflare/cfssl
    2. +
    3. Generate CA. You'll need the following file:
    4. +
    +
    ca-csr.json
    {
    +  "CN": "ca.elasticsearch.svc.cluster.local",
    +  "hosts": [
    +    "ca.elasticsearch.svc.cluster.local"
    +  ],
    +  "key": {
    +    "algo": "ecdsa",
    +    "size": 256
    +  },
    +  "ca": {
    +  "expiry": "87600h"
    +  }
    +}
    +
    +
  2. +
  3. +

    Run the following two commands:

    +
    Generate certificate
    cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
    +rm ca.csr
    +
    +

    You'll get ca-key.pem, and ca.pem. This is your CA key and self-signed certificate.

    +
  4. +
  5. +

    Next, we'll generate the node peering certificate. You'll need the following two files:

    +
    ca-config.json
    {
    +  "signing": {
    +    "default": {
    +      "expiry": "87600h"
    +    },
    +    "profiles": {
    +      "peer": {
    +          "expiry": "87600h",
    +          "usages": [
    +            "signing",
    +              "key encipherment",
    +              "server auth",
    +              "client auth"
    +          ]
    +        },
    +      "client": {
    +          "expiry": "87600h",
    +          "usages": [
    +            "signing",
    +            "key encipherment",
    +            "client auth"
    +          ]
    +      }
    +    }
    +  }
    +}
    +
    +
    node.json
    {
    +  "hosts": [
    +    "node.elasticsearch.svc.cluster.local"
    +  ],
    +  "CN": "node.elasticsearch.svc.cluster.local",
    +  "key": {
    +    "algo": "ecdsa",
    +    "size": 256
    +  }
    +}
    +
    +
  6. +
  7. +

    Run the following two commands:

    +
    Generate certificate keys
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer node.json | cfssljson -bare node
    +rm node.csr
    +
    +

    You'll get node.pem and node-key.pem. This is the peer certificate that will be used by nodes in the ES cluster.

    +
  8. +
  9. +

    Next, we'll convert the key to the format supported by Java with the following command:

    +
    Convert key format
    openssl pkey -in node-key.pem -out node-key.pkcs8
    +
    +
  10. +
  11. +

    Now we'll generate the admin certificate. You'll need the following file:

    +
    admin.json
    {
    +  "CN": "admin.elasticsearch.svc.cluster.local",
    +  "key": {
    +    "algo": "ecdsa",
    +    "size": 256
    +  }
    +}
    +
    +
  12. +
  13. +

    Run the following two commands:

    +
    Generate admin certificate keys
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client admin.json | cfssljson -bare admin
    +rm admin.csr
    +
    +

    You'll get admin.pem and admin-key.pem. This is the certificate that will be used to perform admin commands on the opendistro-security plugin. (A verification sketch for the generated certificates follows this list.)

    +
  14. +
  15. +

    Next, we'll convert the key to the format supported by Java with the following command:

    +
    Convert key format
    openssl pkey -in admin-key.pem -out admin-key.pkcs8
    +
    +
  16. +
+
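Before moving on, it can be worth sanity-checking the generated certificates against the CA. This sketch uses standard openssl commands:

Verify generated certificates (sketch)
openssl verify -CAfile ca.pem node.pem admin.pem
openssl x509 -in node.pem -noout -subject -enddate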

Installing OpenDistro#

+

Now that we have our keys and certificates, we can continue with the installation.

+
    +
  1. +

    Generate hashed passwords.

    +
      +
    1. The elasticsearch-secrets-values.yaml needs two hashed passwords. Create them with this command (run it twice, entering a random password each time, and store both the plaintext and hashed passwords).
    2. +
    +
    Generate hashed passwords
    docker run --rm -it docker.io/amazon/opendistro-for-elasticsearch:1.12.0 sh -c "chmod +x /usr/share/elasticsearch/plugins/opendistro_security/tools/hash.sh; /usr/share/elasticsearch/plugins/opendistro_security/tools/hash.sh"
    +
    +
  2. +
  3. +

    Create secrets:

    +
      +
    1. You'll need to create elasticsearch-secrets-values.yaml. See this gist as an example: https://gist.github.com/Schnitzel/43f483dfe0b23ca0dddd939b12bb4b0b
    2. +
    +
  4. +
  5. +

    Install secrets with the following commands:

    +
    Install secrets
    helm repo add incubator https://charts.helm.sh/incubator
    +helm upgrade --namespace elasticsearch --create-namespace --install elasticsearch-secrets incubator/raw --values elasticsearch-secrets-values.yaml
    +
    +
  6. +
  7. +

    You'll need to create elasticsearch-values.yaml. See this gist as an example (fill in all << >> placeholders with your values): https://gist.github.com/Schnitzel/1e386654b6abf75bf4d66a544db4aa6a

    +
  8. +
  9. +

    Install Elasticsearch:

    +
    Install Elasticsearch
    helm upgrade --namespace elasticsearch --create-namespace --install elasticsearch opendistro-es-X.Y.Z.tgz --values elasticsearch-values.yaml
    +
    +
  10. +
  11. +

    Configure security inside Elasticsearch with the following:

    +
    Configure security
    kubectl exec -n elasticsearch -it elasticsearch-opendistro-es-master-0 -- bash
    +chmod +x /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh
    +/usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -nhnv -cacert /usr/share/elasticsearch/config/admin-root-ca.pem -cert /usr/share/elasticsearch/config/admin-crt.pem -key /usr/share/elasticsearch/config/admin-key.pem -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/
    +
    +
  12. +
  13. +

    Update lagoon-core-values.yaml with:

    +
    lagoon-core-values.yaml
    elasticsearchURL: http://elasticsearch-opendistro-es-client-service.elasticsearch.svc.cluster.local:9200
    +kibanaURL: https://<<Kibana Public URL>>
    +logsDBAdminPassword: "<<PlainText Elasticsearch Admin Password>>"
    +
    +
  14. +
  15. +

    Rollout Lagoon Core:

    +
    Rollout Lagoon Core
    helm upgrade --install --create-namespace --namespace lagoon-core -f values.yaml lagoon-core lagoon/lagoon-core
    +
    +
  16. +
  17. +

    Sync all Lagoon groups with OpenDistro Elasticsearch (a cluster health check sketch follows this list):

    +
    Sync groups
    kubectl -n lagoon-core exec -it deploy/lagoon-core-api -- sh
    +yarn run sync:opendistro-security
    +
    +
  18. +
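Once everything is rolled out, you can check cluster health by port-forwarding to the client service referenced above and querying Elasticsearch with the admin password. This is a sketch; whether plain HTTP applies depends on your security configuration (the elasticsearchURL above suggests HTTP inside the cluster):

Check Elasticsearch health (sketch)
kubectl -n elasticsearch port-forward svc/elasticsearch-opendistro-es-client-service 9200:9200 &
curl -u admin:<<PlainText Elasticsearch Admin Password>> http://localhost:9200/_cluster/health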
+ + + + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/installing-lagoon/querying-graphql/index.html b/installing-lagoon/querying-graphql/index.html new file mode 100644 index 0000000000..4cb57b34bf --- /dev/null +++ b/installing-lagoon/querying-graphql/index.html @@ -0,0 +1,2750 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Querying with GraphQL - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Querying with GraphQL#

+
    +
  1. +

    You’ll need an app for sending and receiving GraphQL queries. We recommend GraphiQL.

    +
      +
    1. If you’re using Homebrew, you can install it with brew install --cask graphiql.
    2. +
    +
  2. +
  3. +

    We need to tell Lagoon Core about the Kubernetes cluster. The GraphQL endpoint is: https://<YOUR-API-URL>/graphql

    +
  4. +
  5. +

    Go to Edit HTTP Headers, and Add Header.

    +
      +
    1. Header Name: Authorization
    2. +
    3. Value: Bearer YOUR-TOKEN-HERE
    4. +
    5. In your home directory, the Lagoon CLI has created a .lagoon.yml file. Copy the token from that file and use it for the value here.
    6. +
    7. Save.
    8. +
    +
  6. +
  7. +

    Now you’re ready to run some queries. Run the following test query to ensure everything is working correctly (a curl equivalent is sketched after this list):

    +
    Get all projects
    query allProjects { allProjects { name } }
    +
    +
  8. +
  9. +

    This should give you the following response:

    +
    API Response
      {
    +    "data": {
    +      "allProjects": []
    +    }
    +  }
    +
    +

    Read more about GraphQL here in our documentation.

    +
  10. +
  11. +

    Once you get the correct response, we need to add a mutation.

    +
      +
    1. +

      Run the following query:

      +
      Add mutation
      mutation addKubernetes {
      +  addKubernetes(input:
      +  {
      +    name: "<TARGET-NAME-FROM-REMOTE-VALUES.yml>",
      +    consoleUrl: "<URL-OF-K8S-CLUSTER>",
      +    token: "xxxxxx",
      +    routerPattern: "${environment}.${project}.lagoon.example.com"
      +  }){id}
      +}
      +
      +
        +
      1. name: get from lagoon-remote-values.yml
      2. +
      3. consoleUrl: API Endpoint of Kubernetes cluster. Get from values.yml
      4. +
      5. +

        token: create a token for the lagoon-build-deploy service account

        +
        Create token
          kubectl -n lagoon create token lagoon-build-deploy --duration 3h
        +
        +
      6. +
      +
    2. +
    +
  12. +
+
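If you'd rather script this than use GraphiQL, the same test query can be sent with curl. This is a sketch: replace <YOUR-API-URL> and the bearer token as described above.

Query the API with curl (sketch)
curl -s -X POST "https://<YOUR-API-URL>/graphql" \
  -H "Authorization: Bearer <YOUR-TOKEN-HERE>" \
  -H "Content-Type: application/json" \
  -d '{"query": "query { allProjects { name } }"}'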
+

Prior to Kubernetes 1.21:

+

Use the lagoon-build-deploy token installed by lagoon-remote:

+
Use deploy token
  kubectl -n lagoon describe secret \
+    $(kubectl -n lagoon get secret | grep lagoon-build-deploy | awk '{print $1}') | grep token: | awk '{print $2}'
+
+
+
+

Info

+

Authorization tokens for GraphQL are very short-lived, so you may need to generate a new one. Run lagoon login and then cat the .lagoon.yml file to get the new token, and replace the old token in the HTTP header with the new one.

+
+ + + + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/installing-lagoon/requirements/index.html b/installing-lagoon/requirements/index.html new file mode 100644 index 0000000000..70281b00d9 --- /dev/null +++ b/installing-lagoon/requirements/index.html @@ -0,0 +1,2875 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Requirements - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+ +
+ + + +
+
+ + + + + + + + + + + + + +

Installing Lagoon Into Existing Kubernetes Cluster#

+

Requirements#

+
    +
  • Kubernetes 1.23+ (Kubernetes 1.21 is supported, but 1.23 is recommended)
  • +
  • Familiarity with Helm and Helm Charts, and kubectl.
  • +
  • An ingress controller; we recommend ingress-nginx, installed into the ingress-nginx namespace
  • +
  • cert-manager (for TLS); we highly recommend using Let's Encrypt
  • +
  • StorageClasses (RWO as default, RWX for persistent types)
  • +
+
+

Note

+

We acknowledge that this is a lot of steps, and our roadmap for the immediate future includes reducing the number of steps in this process.

+
+

Specific requirements (as of January 2023)#

+

Kubernetes#

+

Lagoon supports Kubernetes versions 1.21 onwards. We actively test and develop against Kubernetes 1.24, and also regularly test against 1.21, 1.22 and 1.25.

+

The next large round of breaking changes is in Kubernetes 1.25, and we will endeavour to be across these in advance, although this will require a bump in the minimum supported version of Lagoon.

+

ingress-nginx#

+

Lagoon is currently configured for only a single ingress-nginx controller, so defining an IngressClass has not been necessary in the past.

+

In order to use the recent ingress-nginx controllers (v4 onwards, required for Kubernetes 1.22), the following configuration should be used, as per the ingress-nginx docs.

+
    +
  • ingress-nginx should be configured as the default controller - set .controller.ingressClassResource.default: true in Helm values
  • +
  • ingress-nginx should be configured to watch ingresses without IngressClass set - set .controller.watchIngressWithoutClass: true in Helm values
  • +
+

This will configure the controller to create any new ingresses with itself as the IngressClass, and also to handle any existing ingresses without an IngressClass set; a combined Helm values sketch is shown below.

+
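Putting those two settings together, a minimal ingress-nginx Helm values snippet would look like the following sketch (key names as per the upstream ingress-nginx chart):

ingress-nginx values sketch
controller:
  ingressClassResource:
    default: true
  watchIngressWithoutClass: true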

Other configurations may be possible, but have not been tested.

+

Harbor#

+

Versions 2.1 and 2.2+ of Harbor are currently supported. The method of retrieving robot accounts was changed in 2.2, and the Lagoon remote-controller is able to handle these tokens. This means that Harbor has to be configured with the credentials in lagoon-build-deploy - not lagoon-core.

+

We recommend installing a Harbor version greater than 2.6.0 with Helm chart 1.10.0 or greater.

+

k8up for backups#

+

Lagoon has built-in configuration for the K8up backup operator. Lagoon can configure prebackup pods, schedules and retentions, and manage backups and restores for K8up. Lagoon currently only supports the 1.x versions of K8up, owing to a namespace change in v2 onwards, but we are working on a fix.

+
+

K8up v2:

+

Lagoon does not currently support K8up v2 onwards due to a namespace change here.

+
+

We recommend installing K8up version 1.2.0 with Helm Chart 1.1.0

+

Storage provisioners#

+

Lagoon utilizes a default 'standard' StorageClass for most workloads, and the internal provisioner for most Kubernetes platforms will suffice. This should be configured for dynamic provisioning and be expandable where possible.

+

Lagoon also requires a StorageClass called 'bulk' to be available to support persistent pod replicas (across nodes). This StorageClass should support ReadWriteMany (RWX) access mode and should be configured for dynamic provisioning and expansion where possible; an illustrative sketch follows. See https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes for more information, and the production drivers list for a complete list of compatible drivers.

+
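What 'bulk' looks like depends entirely on your platform. As an illustration only, a StorageClass backed by the NFS CSI driver might be declared like this (the provisioner, server and share values are hypothetical placeholders):

Example 'bulk' StorageClass (sketch)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: bulk
provisioner: nfs.csi.k8s.io   # hypothetical: any RWX-capable provisioner works
parameters:
  server: nfs.example.com     # hypothetical NFS server
  share: /exported/path       # hypothetical export
allowVolumeExpansion: true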

We have currently only included the instructions for (the now deprecated) EFS Provisioner. The production EFS CSI driver has issues with provisioning more than 120 PVCs. We are awaiting possible upstream fixes here and here - but most other providers' CSI drivers should also work, as will configurations with an NFS-compatible server and provisioner.

+

How much Kubernetes experience/knowledge is required?#

+

Lagoon uses some very involved Kubernetes and cloud-native concepts, and while full familiarity may not be necessary to install and configure Lagoon, diagnosing issues and contributing may prove difficult without a good level of familiarity.

+

As an indicator, comfort with the curriculum for the Certified Kubernetes Administrator would be suggested as a minimum.

+ + + + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/installing-lagoon/update-lagoon/index.html b/installing-lagoon/update-lagoon/index.html new file mode 100644 index 0000000000..dea173a53a --- /dev/null +++ b/installing-lagoon/update-lagoon/index.html @@ -0,0 +1,2819 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Updating - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Updating#

+
    +
  1. +

    Download newest charts using Helm.

    +
    Download newest charts
    helm repo update
    +
    +
  2. +
  3. +

    Check with helm diff for changes (https://github.com/databus23/helm-diff).

    +
    Check for changes
    helm diff upgrade --install --create-namespace --namespace lagoon-core \
    +    -f values.yml lagoon-core lagoon/lagoon-core
    +
    +
  4. +
  5. +

    Back up the Lagoon databases prior to any Helm actions. We also suggest scaling the API to a single pod, to aid the database migration scripts running in the initContainers (a scaling sketch follows this list).

    +
  6. +
  7. +

    Run the upgrade using Helm.

    +
    Run upgrade
    helm upgrade --install --create-namespace --namespace lagoon-core \
    +    -f values.yaml lagoon-core lagoon/lagoon-core
    +
    +
  8. +
  9. +

    (Note that as of Lagoon v2.11.0, this step is no longer required.) If upgrading Lagoon Core, ensure you run the rerun_initdb.sh script to perform post-upgrade migrations.

    +
    Run script
    kubectl --namespace lagoon-core exec -it lagoon-core-api-db-0 -- \
    +    sh -c /rerun_initdb.sh
    +
    +
  10. +
  11. +

    Re-scale the API pods back to their original level.

    +
  12. +
  13. +

    If upgrading Lagoon Core, and you have enabled groups/user syncing for OpenSearch, you may additionally need to run the sync:opendistro-security script to update the groups in OpenSearch. This command can also be prefixed with GROUP_REGEX=<group-to-sync> to sync a single group at a time, as syncing the entire group structure may take a long time.

    +
    Run script
    kubectl --namespace lagoon-core exec -it deploy/lagoon-core-api -- \
    +    sh -c "yarn sync:opendistro-security"
    +
    +
  14. +
+
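For reference, scaling the API down to a single pod and back again (step 3 above) is a one-liner each way. The replica count of 3 is a placeholder; restore whatever your deployment ran before the upgrade.

Scale the API (sketch)
kubectl -n lagoon-core scale deployment/lagoon-core-api --replicas=1
# ...run the upgrade, then restore the original replica count:
kubectl -n lagoon-core scale deployment/lagoon-core-api --replicas=3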

Check https://github.com/uselagoon/lagoon/releases for additional upgrades.

+

Database Backups#

+

You may want to back up the databases before upgrading Lagoon Core, the following will create backups you can use to restore from if required. You can delete them afterwards.

+

API DB#

+
Back up API DB
kubectl --namespace lagoon-core exec -it lagoon-core-api-db-0 -- \
+    sh -c 'mysqldump --max-allowed-packet=500M --events \
+    --routines --quick --add-locks --no-autocommit \
+    --single-transaction infrastructure | gzip -9 > \
+    /var/lib/mysql/backup/$(date +%Y-%m-%d_%H%M%S).infrastructure.sql.gz'
+
+

Keycloak DB#

+
Back up Keycloak DB
kubectl --namespace lagoon-core exec -it lagoon-core-keycloak-db-0 -- \
+    sh -c 'mysqldump --max-allowed-packet=500M --events \
+    --routines --quick --add-locks --no-autocommit \
+    --single-transaction keycloak | gzip -9 > \
+    /var/lib/mysql/backup/$(date +%Y-%m-%d_%H%M%S).keycloak.sql.gz'
+
+ + + + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/javascript/tablesort.js b/javascript/tablesort.js new file mode 100644 index 0000000000..2e9fd4e511 --- /dev/null +++ b/javascript/tablesort.js @@ -0,0 +1,6 @@ +document$.subscribe(function() { + var tables = document.querySelectorAll("article table:not([class])") + tables.forEach(function(table) { + new Tablesort(table) + }) + }) diff --git a/lagoon-logo.png b/lagoon-logo.png new file mode 100644 index 0000000000..56d63d4685 Binary files /dev/null and b/lagoon-logo.png differ diff --git a/lagoon/administering-lagoon/feature-flags/index.html b/lagoon/administering-lagoon/feature-flags/index.html new file mode 100644 index 0000000000..e8827a1ec6 --- /dev/null +++ b/lagoon/administering-lagoon/feature-flags/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/administering-lagoon/graphql-queries/index.html b/lagoon/administering-lagoon/graphql-queries/index.html new file mode 100644 index 0000000000..fb0eb9d269 --- /dev/null +++ b/lagoon/administering-lagoon/graphql-queries/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/administering-lagoon/rbac/index.html b/lagoon/administering-lagoon/rbac/index.html new file mode 100644 index 0000000000..74b96011d0 --- /dev/null +++ b/lagoon/administering-lagoon/rbac/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/administering-lagoon/using_harbor/harbor-settings/harbor-core/index.html b/lagoon/administering-lagoon/using_harbor/harbor-settings/harbor-core/index.html new file mode 100644 index 0000000000..190ef5110e --- /dev/null +++ b/lagoon/administering-lagoon/using_harbor/harbor-settings/harbor-core/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/administering-lagoon/using_harbor/harbor-settings/harbor-database/index.html b/lagoon/administering-lagoon/using_harbor/harbor-settings/harbor-database/index.html new file mode 100644 index 0000000000..a58c82a702 --- /dev/null +++ b/lagoon/administering-lagoon/using_harbor/harbor-settings/harbor-database/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/administering-lagoon/using_harbor/harbor-settings/harbor-jobservice/index.html b/lagoon/administering-lagoon/using_harbor/harbor-settings/harbor-jobservice/index.html new file mode 100644 index 0000000000..0847511afa --- /dev/null +++ b/lagoon/administering-lagoon/using_harbor/harbor-settings/harbor-jobservice/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/administering-lagoon/using_harbor/harbor-settings/harbor-trivy/index.html b/lagoon/administering-lagoon/using_harbor/harbor-settings/harbor-trivy/index.html new file mode 100644 index 0000000000..b7f47c8a82 --- /dev/null +++ b/lagoon/administering-lagoon/using_harbor/harbor-settings/harbor-trivy/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/administering-lagoon/using_harbor/harbor-settings/harborregistry/index.html b/lagoon/administering-lagoon/using_harbor/harbor-settings/harborregistry/index.html new file mode 100644 index 0000000000..afd4dbdfc5 --- /dev/null +++ b/lagoon/administering-lagoon/using_harbor/harbor-settings/harborregistry/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... 
+ + diff --git a/lagoon/administering-lagoon/using_harbor/harbor-settings/harborregistryctl/index.html b/lagoon/administering-lagoon/using_harbor/harbor-settings/harborregistryctl/index.html new file mode 100644 index 0000000000..1a18f621ea --- /dev/null +++ b/lagoon/administering-lagoon/using_harbor/harbor-settings/harborregistryctl/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/administering-lagoon/using_harbor/harbor-settings/index.html b/lagoon/administering-lagoon/using_harbor/harbor-settings/index.html new file mode 100644 index 0000000000..9241e47750 --- /dev/null +++ b/lagoon/administering-lagoon/using_harbor/harbor-settings/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/administering-lagoon/using_harbor/index.html b/lagoon/administering-lagoon/using_harbor/index.html new file mode 100644 index 0000000000..3d6cfd5704 --- /dev/null +++ b/lagoon/administering-lagoon/using_harbor/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/administering-lagoon/using_harbor/security_scanning/index.html b/lagoon/administering-lagoon/using_harbor/security_scanning/index.html new file mode 100644 index 0000000000..584d303196 --- /dev/null +++ b/lagoon/administering-lagoon/using_harbor/security_scanning/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/contributing-to-lagoon/api-debugging/index.html b/lagoon/contributing-to-lagoon/api-debugging/index.html new file mode 100644 index 0000000000..1eebb374f8 --- /dev/null +++ b/lagoon/contributing-to-lagoon/api-debugging/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/contributing-to-lagoon/code-of-conduct/index.html b/lagoon/contributing-to-lagoon/code-of-conduct/index.html new file mode 100644 index 0000000000..18124abfd3 --- /dev/null +++ b/lagoon/contributing-to-lagoon/code-of-conduct/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/contributing-to-lagoon/contributing/index.html b/lagoon/contributing-to-lagoon/contributing/index.html new file mode 100644 index 0000000000..15fc1271b5 --- /dev/null +++ b/lagoon/contributing-to-lagoon/contributing/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/contributing-to-lagoon/developing-lagoon/index.html b/lagoon/contributing-to-lagoon/developing-lagoon/index.html new file mode 100644 index 0000000000..855421e48f --- /dev/null +++ b/lagoon/contributing-to-lagoon/developing-lagoon/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/contributing-to-lagoon/tests/index.html b/lagoon/contributing-to-lagoon/tests/index.html new file mode 100644 index 0000000000..9ffdb72a54 --- /dev/null +++ b/lagoon/contributing-to-lagoon/tests/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/docker-images/elasticsearch/index.html b/lagoon/docker-images/elasticsearch/index.html new file mode 100644 index 0000000000..a7342281e5 --- /dev/null +++ b/lagoon/docker-images/elasticsearch/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... 
+ + diff --git a/lagoon/docker-images/mariadb/index.html b/lagoon/docker-images/mariadb/index.html new file mode 100644 index 0000000000..81e95a450a --- /dev/null +++ b/lagoon/docker-images/mariadb/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/docker-images/mariadb/mariadb-drupal/index.html b/lagoon/docker-images/mariadb/mariadb-drupal/index.html new file mode 100644 index 0000000000..9b52a74cfe --- /dev/null +++ b/lagoon/docker-images/mariadb/mariadb-drupal/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/docker-images/mongodb/index.html b/lagoon/docker-images/mongodb/index.html new file mode 100644 index 0000000000..ea695597a9 --- /dev/null +++ b/lagoon/docker-images/mongodb/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/docker-images/nginx/index.html b/lagoon/docker-images/nginx/index.html new file mode 100644 index 0000000000..4bbe2a13af --- /dev/null +++ b/lagoon/docker-images/nginx/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/docker-images/nginx/nginx-drupal/index.html b/lagoon/docker-images/nginx/nginx-drupal/index.html new file mode 100644 index 0000000000..cf5195c0e5 --- /dev/null +++ b/lagoon/docker-images/nginx/nginx-drupal/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/docker-images/nodejs/index.html b/lagoon/docker-images/nodejs/index.html new file mode 100644 index 0000000000..c1100dd78f --- /dev/null +++ b/lagoon/docker-images/nodejs/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/docker-images/php-cli/index.html b/lagoon/docker-images/php-cli/index.html new file mode 100644 index 0000000000..42c6eb112b --- /dev/null +++ b/lagoon/docker-images/php-cli/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/docker-images/php-cli/php-cli-drupal/index.html b/lagoon/docker-images/php-cli/php-cli-drupal/index.html new file mode 100644 index 0000000000..d51a3f8c81 --- /dev/null +++ b/lagoon/docker-images/php-cli/php-cli-drupal/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/docker-images/php-fpm/index.html b/lagoon/docker-images/php-fpm/index.html new file mode 100644 index 0000000000..306d2d8809 --- /dev/null +++ b/lagoon/docker-images/php-fpm/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/docker-images/postgres/index.html b/lagoon/docker-images/postgres/index.html new file mode 100644 index 0000000000..3d867ae98e --- /dev/null +++ b/lagoon/docker-images/postgres/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/docker-images/rabbitmq/index.html b/lagoon/docker-images/rabbitmq/index.html new file mode 100644 index 0000000000..49a919d954 --- /dev/null +++ b/lagoon/docker-images/rabbitmq/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/docker-images/redis/index.html b/lagoon/docker-images/redis/index.html new file mode 100644 index 0000000000..f9e9f1f635 --- /dev/null +++ b/lagoon/docker-images/redis/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... 
+ + diff --git a/lagoon/docker-images/redis/redis-persistent/index.html b/lagoon/docker-images/redis/redis-persistent/index.html new file mode 100644 index 0000000000..55cc166967 --- /dev/null +++ b/lagoon/docker-images/redis/redis-persistent/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/docker-images/solr/index.html b/lagoon/docker-images/solr/index.html new file mode 100644 index 0000000000..509f338816 --- /dev/null +++ b/lagoon/docker-images/solr/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/docker-images/solr/solr-drupal/index.html b/lagoon/docker-images/solr/solr-drupal/index.html new file mode 100644 index 0000000000..4142f44770 --- /dev/null +++ b/lagoon/docker-images/solr/solr-drupal/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/docker-images/varnish/index.html b/lagoon/docker-images/varnish/index.html new file mode 100644 index 0000000000..59e5038b77 --- /dev/null +++ b/lagoon/docker-images/varnish/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/docker-images/varnish/varnish-drupal/index.html b/lagoon/docker-images/varnish/varnish-drupal/index.html new file mode 100644 index 0000000000..39a75568c1 --- /dev/null +++ b/lagoon/docker-images/varnish/varnish-drupal/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/drupal/drush-9/index.html b/lagoon/drupal/drush-9/index.html new file mode 100644 index 0000000000..9c56833d8c --- /dev/null +++ b/lagoon/drupal/drush-9/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/drupal/first-deployment-of-drupal/index.html b/lagoon/drupal/first-deployment-of-drupal/index.html new file mode 100644 index 0000000000..8303f8d663 --- /dev/null +++ b/lagoon/drupal/first-deployment-of-drupal/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/drupal/integrate-drupal-and-fastly/index.html b/lagoon/drupal/integrate-drupal-and-fastly/index.html new file mode 100644 index 0000000000..39c393b5dc --- /dev/null +++ b/lagoon/drupal/integrate-drupal-and-fastly/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/drupal/phpunit-and-phpstorm/index.html b/lagoon/drupal/phpunit-and-phpstorm/index.html new file mode 100644 index 0000000000..ccc35dbce7 --- /dev/null +++ b/lagoon/drupal/phpunit-and-phpstorm/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/drupal/services/index.html b/lagoon/drupal/services/index.html new file mode 100644 index 0000000000..49c2cf6506 --- /dev/null +++ b/lagoon/drupal/services/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/drupal/services/mariadb/index.html b/lagoon/drupal/services/mariadb/index.html new file mode 100644 index 0000000000..9b52a74cfe --- /dev/null +++ b/lagoon/drupal/services/mariadb/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/drupal/services/redis/index.html b/lagoon/drupal/services/redis/index.html new file mode 100644 index 0000000000..1630f4c4f0 --- /dev/null +++ b/lagoon/drupal/services/redis/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... 
+ + diff --git a/lagoon/drupal/services/solr/index.html b/lagoon/drupal/services/solr/index.html new file mode 100644 index 0000000000..4142f44770 --- /dev/null +++ b/lagoon/drupal/services/solr/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/drupal/services/untitled/index.html b/lagoon/drupal/services/untitled/index.html new file mode 100644 index 0000000000..513ecb85db --- /dev/null +++ b/lagoon/drupal/services/untitled/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/drupal/services/varnish/index.html b/lagoon/drupal/services/varnish/index.html new file mode 100644 index 0000000000..39a75568c1 --- /dev/null +++ b/lagoon/drupal/services/varnish/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/drupal/step-by-step-getting-drupal-ready-to-run-on-lagoon/index.html b/lagoon/drupal/step-by-step-getting-drupal-ready-to-run-on-lagoon/index.html new file mode 100644 index 0000000000..cd9fc6297a --- /dev/null +++ b/lagoon/drupal/step-by-step-getting-drupal-ready-to-run-on-lagoon/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/drupal/subfolders/index.html b/lagoon/drupal/subfolders/index.html new file mode 100644 index 0000000000..dc9145026e --- /dev/null +++ b/lagoon/drupal/subfolders/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/getting-started/index.html b/lagoon/getting-started/index.html new file mode 100644 index 0000000000..083e1867de --- /dev/null +++ b/lagoon/getting-started/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/index.html b/lagoon/index.html new file mode 100644 index 0000000000..e0c38c74de --- /dev/null +++ b/lagoon/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/logging/kibana-examples/index.html b/lagoon/logging/kibana-examples/index.html new file mode 100644 index 0000000000..f516dc0860 --- /dev/null +++ b/lagoon/logging/kibana-examples/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/logging/logging/index.html b/lagoon/logging/logging/index.html new file mode 100644 index 0000000000..e29d159f39 --- /dev/null +++ b/lagoon/logging/logging/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/resources/faq/index.html b/lagoon/resources/faq/index.html new file mode 100644 index 0000000000..7e860d3af6 --- /dev/null +++ b/lagoon/resources/faq/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/resources/glossary/index.html b/lagoon/resources/glossary/index.html new file mode 100644 index 0000000000..b3a0e60987 --- /dev/null +++ b/lagoon/resources/glossary/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/resources/tutorials-and-webinars/index.html b/lagoon/resources/tutorials-and-webinars/index.html new file mode 100644 index 0000000000..5935f14716 --- /dev/null +++ b/lagoon/resources/tutorials-and-webinars/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... 
+ + diff --git a/lagoon/using-lagoon-advanced/active_standby/index.html b/lagoon/using-lagoon-advanced/active_standby/index.html new file mode 100644 index 0000000000..ea2d9ca920 --- /dev/null +++ b/lagoon/using-lagoon-advanced/active_standby/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/using-lagoon-advanced/backups/index.html b/lagoon/using-lagoon-advanced/backups/index.html new file mode 100644 index 0000000000..d8032ecd11 --- /dev/null +++ b/lagoon/using-lagoon-advanced/backups/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/using-lagoon-advanced/base-images/index.html b/lagoon/using-lagoon-advanced/base-images/index.html new file mode 100644 index 0000000000..e00c968d20 --- /dev/null +++ b/lagoon/using-lagoon-advanced/base-images/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/using-lagoon-advanced/custom-tasks/index.html b/lagoon/using-lagoon-advanced/custom-tasks/index.html new file mode 100644 index 0000000000..c8e1c9f29a --- /dev/null +++ b/lagoon/using-lagoon-advanced/custom-tasks/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/using-lagoon-advanced/deploytarget_configs/index.html b/lagoon/using-lagoon-advanced/deploytarget_configs/index.html new file mode 100644 index 0000000000..ab047939c7 --- /dev/null +++ b/lagoon/using-lagoon-advanced/deploytarget_configs/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/using-lagoon-advanced/environment-idling/index.html b/lagoon/using-lagoon-advanced/environment-idling/index.html new file mode 100644 index 0000000000..76c425e69a --- /dev/null +++ b/lagoon/using-lagoon-advanced/environment-idling/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/using-lagoon-advanced/environment-types/index.html b/lagoon/using-lagoon-advanced/environment-types/index.html new file mode 100644 index 0000000000..213914f3b8 --- /dev/null +++ b/lagoon/using-lagoon-advanced/environment-types/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/using-lagoon-advanced/environment-variables/index.html b/lagoon/using-lagoon-advanced/environment-variables/index.html new file mode 100644 index 0000000000..ba7dd3790a --- /dev/null +++ b/lagoon/using-lagoon-advanced/environment-variables/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/using-lagoon-advanced/graphql/index.html b/lagoon/using-lagoon-advanced/graphql/index.html new file mode 100644 index 0000000000..afb5d33d2d --- /dev/null +++ b/lagoon/using-lagoon-advanced/graphql/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/using-lagoon-advanced/installing-lagoon-into-existing-kubernetes-cluster/index.html b/lagoon/using-lagoon-advanced/installing-lagoon-into-existing-kubernetes-cluster/index.html new file mode 100644 index 0000000000..31c7cc4e7a --- /dev/null +++ b/lagoon/using-lagoon-advanced/installing-lagoon-into-existing-kubernetes-cluster/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... 
+ + diff --git a/lagoon/using-lagoon-advanced/nodejs/index.html b/lagoon/using-lagoon-advanced/nodejs/index.html new file mode 100644 index 0000000000..af0b42106b --- /dev/null +++ b/lagoon/using-lagoon-advanced/nodejs/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/using-lagoon-advanced/private-repositories/index.html b/lagoon/using-lagoon-advanced/private-repositories/index.html new file mode 100644 index 0000000000..586b4fe3b5 --- /dev/null +++ b/lagoon/using-lagoon-advanced/private-repositories/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/using-lagoon-advanced/project-default-users-keys/index.html b/lagoon/using-lagoon-advanced/project-default-users-keys/index.html new file mode 100644 index 0000000000..a7a14325f7 --- /dev/null +++ b/lagoon/using-lagoon-advanced/project-default-users-keys/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/using-lagoon-advanced/service-types/index.html b/lagoon/using-lagoon-advanced/service-types/index.html new file mode 100644 index 0000000000..fe7f71660c --- /dev/null +++ b/lagoon/using-lagoon-advanced/service-types/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/using-lagoon-advanced/setting-up-xdebug-with-lagoon/index.html b/lagoon/using-lagoon-advanced/setting-up-xdebug-with-lagoon/index.html new file mode 100644 index 0000000000..c50bc20036 --- /dev/null +++ b/lagoon/using-lagoon-advanced/setting-up-xdebug-with-lagoon/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/using-lagoon-advanced/simplesaml/index.html b/lagoon/using-lagoon-advanced/simplesaml/index.html new file mode 100644 index 0000000000..0415f360f7 --- /dev/null +++ b/lagoon/using-lagoon-advanced/simplesaml/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/using-lagoon-advanced/ssh/index.html b/lagoon/using-lagoon-advanced/ssh/index.html new file mode 100644 index 0000000000..e4c77cd371 --- /dev/null +++ b/lagoon/using-lagoon-advanced/ssh/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/using-lagoon-advanced/triggering-deployments/index.html b/lagoon/using-lagoon-advanced/triggering-deployments/index.html new file mode 100644 index 0000000000..dcff841d2c --- /dev/null +++ b/lagoon/using-lagoon-advanced/triggering-deployments/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/using-lagoon-advanced/workflows/index.html b/lagoon/using-lagoon-advanced/workflows/index.html new file mode 100644 index 0000000000..751a558d77 --- /dev/null +++ b/lagoon/using-lagoon-advanced/workflows/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/using-lagoon-the-basics/build-and-deploy-process/index.html b/lagoon/using-lagoon-the-basics/build-and-deploy-process/index.html new file mode 100644 index 0000000000..52e1c1e06c --- /dev/null +++ b/lagoon/using-lagoon-the-basics/build-and-deploy-process/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... 
+ + diff --git a/lagoon/using-lagoon-the-basics/configure-webhooks/index.html b/lagoon/using-lagoon-the-basics/configure-webhooks/index.html new file mode 100644 index 0000000000..edbd027eb6 --- /dev/null +++ b/lagoon/using-lagoon-the-basics/configure-webhooks/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/using-lagoon-the-basics/docker-compose-yml/index.html b/lagoon/using-lagoon-the-basics/docker-compose-yml/index.html new file mode 100644 index 0000000000..30ea158db7 --- /dev/null +++ b/lagoon/using-lagoon-the-basics/docker-compose-yml/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/using-lagoon-the-basics/first-deployment/index.html b/lagoon/using-lagoon-the-basics/first-deployment/index.html new file mode 100644 index 0000000000..da175039e4 --- /dev/null +++ b/lagoon/using-lagoon-the-basics/first-deployment/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/using-lagoon-the-basics/going-live/index.html b/lagoon/using-lagoon-the-basics/going-live/index.html new file mode 100644 index 0000000000..a7cdc462fc --- /dev/null +++ b/lagoon/using-lagoon-the-basics/going-live/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/using-lagoon-the-basics/index.html b/lagoon/using-lagoon-the-basics/index.html new file mode 100644 index 0000000000..265e8be30c --- /dev/null +++ b/lagoon/using-lagoon-the-basics/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/using-lagoon-the-basics/lagoon-yml/index.html b/lagoon/using-lagoon-the-basics/lagoon-yml/index.html new file mode 100644 index 0000000000..632766c083 --- /dev/null +++ b/lagoon/using-lagoon-the-basics/lagoon-yml/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/using-lagoon-the-basics/local-development-environments/index.html b/lagoon/using-lagoon-the-basics/local-development-environments/index.html new file mode 100644 index 0000000000..56d23f17e5 --- /dev/null +++ b/lagoon/using-lagoon-the-basics/local-development-environments/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/lagoon/using-lagoon-the-basics/setup_project/index.html b/lagoon/using-lagoon-the-basics/setup_project/index.html new file mode 100644 index 0000000000..f846d4df08 --- /dev/null +++ b/lagoon/using-lagoon-the-basics/setup_project/index.html @@ -0,0 +1,15 @@ + + + + + + Redirecting... + + + + + + +Redirecting... + + diff --git a/logging/kibana-examples/index.html b/logging/kibana-examples/index.html new file mode 100644 index 0000000000..32d057091c --- /dev/null +++ b/logging/kibana-examples/index.html @@ -0,0 +1,2926 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Kibana Examples - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+ +
+ + + +
+
+ + + + + + + + + + + + + +

Kibana Examples#

+

Have you seen the Kibana getting started video and are now ready to work with logs? We are here to help! This page will give you examples of Kibana queries you can use. This is not a Kibana 101 class, but it can help you understand some of what you can do in Kibana.

+

Ready to get started? Good!

+
+

Note

+

Make sure that you have selected your tenant before starting! You can do that by clicking on the Tenant icon in the left-hand menu. Once you have selected your tenant, click on the Discover icon again to get started.

+
+

Router Logs#

+

Below you'll find examples for two common log requests:

+
    +
  • Viewing the total number of hits/requests to your site.
  • +
  • Viewing the number of hits/requests from a specific IP address.
  • +
+

Total Number of hits/requests to your site#

+
    +
  • Let's start Kibana up and select Discover (#1 in the screenshot below)
  • +
  • Then select the router logs for your project (#2).
  • +
  • From there, we will filter some of this information down a bit. Let's focus on our main production environment.
  • +
  • +

    In the search bar (#3), enter:

    +

    openshift_project: "name of your production project"

    +
  • +
+
    +
  • This will show you all the hits to your production environment in the given time frame.
  • +
  • You can change the time frame in the upper right hand corner (#4).
  • +
  • Clicking on the arrow next to the entry (#5) will expand it and show you all the information that was captured.
  • +
  • You can add any of those fields to the window by hovering over them and clicking add on the left hand side (#6).
  • +
  • You can also further filter your results by using the search bar.
  • +
+

How to get the total number of hits/requests to your site in Kibana.

+
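For example, for a hypothetical project named mysite whose production environment is main, the search bar entry might look like this (check the openshift_project value on one of your own log entries for the exact form):

Text Only
openshift_project: "mysite-main"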

Number of hits/requests from a specific IP address#

+

Running the query above will give you a general look at all the traffic to your site, but what if you want to narrow in on a specific IP address? Perhaps you want to see how many times an IP has hit your site and what specific pages they were looking at. This next query should help.

+

We are going to start off with the same query as above, but we are going to add a couple of things.

+
    +
  • First, add the following fields: client_ip and http_request.
  • +
  • This will show you a list of all IP addresses and the page they requested. Here is what we see for the amazee.io page:
  • +
+

All IP addresses and the page they requested.

+

That looks good, but what if we wanted to just show requests from a specific IP address? You can filter for the address by adding it to your search criteria.

+
    +
  • We are going to add: AND client_ip: "IP address".
  • +
  • That will filter the results to just show you hits from that specific IP address, and the page they were requesting. Here is what it looks like for our amazee.io website:
  • +
+

Hits from a specific IP address.

+
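Putting both filters together, a complete query for a hypothetical project and visitor could look like this (203.0.113.42 is a documentation-reserved placeholder address):

Text Only
openshift_project: "mysite-main" AND client_ip: "203.0.113.42"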

Container Logs#

+

Container logs will show you all stdout and stderr messages for your specific container and project. We are going to show an example of getting logs from a specific container, and of finding specific error numbers in that container.

+

Logs from a container#

+

Want to see the logs for a specific container (php, nginx, etc)? This section will help! Let's focus on looking at NGINX logs.

+
    +
  • We start by opening up Kibana and selecting Discover (#1 in the screenshot below).
  • +
  • From there, we select the container logs for our project (#2).
  • +
  • Let's go to the search bar (#3) and enter: kubernetes.container_name: "nginx"
  • +
  • This will display all NGINX logs for our project.
  • +
  • Clicking on the arrow next to an entry (#4) will expand that entry and show you all of the information it gathered.
  • +
  • Let's add the message field and the level field to the view. You can do that by clicking on "Add" on the left hand side (#5).
  • +
  • You can change the time frame in the upper right hand corner of the screen (#6); in the example below, I'm looking at logs for the last 4 hours.
  • +
+

+

Specific errors in logs#

+

Want to see how many 500 Internal Server errors you've had in your NGINX container? You can do that by changing the search query. If you search:

+

kubernetes.container_name: "nginx" AND message: "500"

+

That will only display 500 error messages in the NGINX container. You can search for any error message in any container that you would like!

+
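The same pattern works for any container and message. For example, a hypothetical query for PHP fatal errors might look like this:

Text Only
kubernetes.container_name: "php" AND message: "Fatal error"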

Visualization#

+

Kibana will also give you the option to create visualizations or graphs. We are going to create a chart showing the number of hits/requests in a month, using the same query we used above.

+
    +
  1. Click on Visualize on the left hand side of Kibana.
  2. +
  3. Click on the blue plus sign.
  4. +
  5. For this example, we are going to select a Vertical Bar chart.
  6. +
  7. Select the router logs for your project.
  8. +
  9. Click on X-Axis under Buckets and select Date Histogram, with the interval set to daily.
  10. +
  11. Success!! You should now see a nice bar graph showing your daily traffic.
  12. +
+
+

Note

+

Make sure that you select an appropriate time frame for the data in the upper right hand corner.

+
+

Here is an example of a daily hits visualization chart:

+

Daily hits visualization chart.

+

Also note that you can save your visualizations (and searches)! That will make it even faster to access them in the future. And because each account has their own Kibana Tenant, no searches or visualizations are shared with another account.

+

Troubleshooting#

+ + + + + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/logging/kibana_example1.png b/logging/kibana_example1.png new file mode 100644 index 0000000000..dba1a9933a Binary files /dev/null and b/logging/kibana_example1.png differ diff --git a/logging/kibana_example2.png b/logging/kibana_example2.png new file mode 100644 index 0000000000..0edb859815 Binary files /dev/null and b/logging/kibana_example2.png differ diff --git a/logging/kibana_example3.png b/logging/kibana_example3.png new file mode 100644 index 0000000000..b9589bb878 Binary files /dev/null and b/logging/kibana_example3.png differ diff --git a/logging/kibana_example4.png b/logging/kibana_example4.png new file mode 100644 index 0000000000..270cc075f8 Binary files /dev/null and b/logging/kibana_example4.png differ diff --git a/logging/kibana_example5.png b/logging/kibana_example5.png new file mode 100644 index 0000000000..8cd36768aa Binary files /dev/null and b/logging/kibana_example5.png differ diff --git a/logging/logging/index.html b/logging/logging/index.html new file mode 100644 index 0000000000..9311740b0c --- /dev/null +++ b/logging/logging/index.html @@ -0,0 +1,2722 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Logging - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Logging#

+

Lagoon provides access to the following logs via Kibana:

+
    +
  • Logs from the Kubernetes Routers, including every single HTTP and HTTPS request with:
      +
    • Source IP
    • +
    • URL
    • +
    • Path
    • +
    • HTTP verb
    • +
    • Cookies
    • +
    • Headers
    • +
    • User agent
    • +
    • Project
    • +
    • Container name
    • +
    • Response size
    • +
    • Response time
    • +
    +
  • +
  • Logs from containers:
      +
    • stdout and stderr messages
    • +
    • Container name
    • +
    • Project
    • +
    +
  • +
  • Lagoon logs:
      +
    • Webhooks parsing
    • +
    • Build logs
    • +
    • Build errors
    • +
    • Any other Lagoon related logs
    • +
    +
  • +
  • Application logs:
      +
    • For Drupal: install the Lagoon Logs module in order to receive logs from Drupal Watchdog.
    • +
    • For Laravel: install the Lagoon Logs for Laravel package.
    • +
    • For other workloads:
        +
      • Send logs to udp://application-logs.lagoon.svc:5140
      • +
      • Ensure logs are structured as JSON encoded objects.
      • +
      • Ensure the type field contains the name of the Kubernetes namespace ($LAGOON_PROJECT-$LAGOON_ENVIRONMENT). A minimal way to test this is sketched after this list.
      • +
      +
    • +
    +
  • +
+
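As a quick way to verify this wiring from inside a running container, you could send a hand-crafted JSON log line over UDP - a minimal sketch, assuming nc (netcat) is available in your container image; the namespace myproject-main and the message text are placeholders:

Text Only
echo '{"type": "myproject-main", "message": "hello from my app"}' | nc -u -w 1 application-logs.lagoon.svc 5140

If everything is wired up correctly, the entry should appear in your application logs in Kibana shortly afterwards.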

To access the logs, please check with your Lagoon administrator to get the URL for the Kibana route (for amazee.io, this is https://logs.amazeeio.cloud/).

+

Each Lagoon user account has their own login and will see the logs only for the projects to which they have access.

+

Each Lagoon user account also has their own Kibana Tenant, which means no saved searches or visualizations are shared with another account.

+

If you would like to know more about how to use Kibana: https://www.elastic.co/webinars/getting-started-kibana.

+ + + + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/requirements.txt b/requirements.txt new file mode 100644 index 0000000000..de8aa67ed4 --- /dev/null +++ b/requirements.txt @@ -0,0 +1,4 @@ +mkdocs-material +mkdocs-redirects +mdx_truly_sane_lists +mkdocs-git-revision-date-localized-plugin diff --git a/resources/faq/index.html b/resources/faq/index.html new file mode 100644 index 0000000000..44645ed43c --- /dev/null +++ b/resources/faq/index.html @@ -0,0 +1,3211 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + FAQ - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

FAQ#

+

How do I contact my Lagoon administrator?#

+

You should have a private Slack channel that was set up for you to communicate - if not, or you've forgotten how to contact us, reach out at support@amazee.io.

+

I found a bug! 🐞#

+

If you've found a bug or security issue, please send your findings to support@amazee.io. Please DO NOT file a GitHub issue for them.

+

I'm interested in amazee.io's hosting services with Lagoon#

+

That's great news! You can contact them via email at inquiries@amazee.io.

+

How can I restore a backup?#

+

We have backups available for files and databases, typically taken at least once every 24 hours. These backups are stored offsite.

+

We keep up to 7 daily backups and 4 weekly backups.

+

If you ever need to recover or restore a backup, feel free to submit a ticket or send us a message via chat and we will be more than happy to help!

+

How can I download a database dump?#

+ + +

I'm getting an invalid SSL certificate error#

+

The first thing to try is what is listed in our documentation about SSL.

+

If you follow those steps, and you are still seeing an error, please submit a ticket or send us a message on chat and we can help resolve this for you.

+

I'm getting an "Array" error when running a Drush command#

+

This was a bug that was prevalent in Drush versions 8.1.16 and 8.1.17. The error would look something like this:

+
Text Only
The command could not be executed successfully (returned: Array [error]
+(
+[default] => Array
+(
+[default] => Array
+(
+[driver] => mysql
+[prefix] => Array
+(
+[default] =>
+)
+, code: 0)
+Error: no database record could be found for source @main [error]
+
+

Upgrading Drush should fix that for you. We strongly suggest that you use version 8.3 or newer. Once Drush is upgraded, the command should work!

+
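If your project manages Drush via Composer, the upgrade is typically a one-liner run from the project root (a sketch - adjust the version constraint to your needs):

Text Only
composer require drush/drush:^8.3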

I'm seeing an Internal Server Error when trying to access my Kibana logs#

+ + +

No need to panic! This usually happens when a tenant has not been selected. To fix this, follow these steps:

+
    +
  1. Go to "Tenants" on the left-hand menu of Kibana.
  2. +
  3. Click on your tenant name.
  4. +
  5. You'll see a pop-up window that says: "Tenant Change" and the name of your tenant.
  6. +
  7. Go back to the "Discover" tab and attempt your query again.
  8. +
+

You should now be able to see your logs.

+

I'm unable to SSH into any environment#

+

I'm unable to SSH into any environment. I'm getting the following message: Permission denied (publickey). When I run drush sa no aliases are returned.

+

This typically indicates an issue with Pygmy. You can find our troubleshooting docs for Pygmy here: https://pygmy.readthedocs.io/en/master/troubleshooting/

+

How can I check the status of a build?#

+ + +

How do I add a cron job?#

+ + +

How do I add a new route?#

+ + +

How do I remove a route?#

+

You will need to contact your helpful Lagoon administrator should you need to remove a route. You can use the Slack channel that was set up for you to communicate - if not, you can always reach us at support@amazee.io or on Discord.

+

When I run pygmy status, no keys are loaded#

+

You'll need to load your SSH key into pygmy. Here's how: https://pygmy.readthedocs.io/en/master/ssh_agent

+
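With the Ruby version of Pygmy, loading a key is typically done with the addkey command (a sketch - the exact command and key path may differ for your Pygmy version and setup, so check the linked docs):

Text Only
pygmy addkey ~/.ssh/id_rsa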

When I run drush sa no aliases are returned#

+

This typically indicates an issue with Pygmy. You can find our troubleshooting docs for Pygmy here: https://pygmy.readthedocs.io/en/master/troubleshooting

+

My deployments fail with a message saying: "drush needs a more functional environment"#

+

This usually means that there is no database uploaded to the project. Follow our step-by-step guide to add a database to your project.

+

When I start Pygmy I see an "address already in use" error?#

+

Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use Error: failed to start containers: amazeeio-haproxy

+

This is a known error! Most of the time it means that there is already something running on port 80. You can find the culprit by running the following command:

+
Text Only
netstat -ltnp | grep -w ':80'
+
+

That should list everything running on port 80. Kill the offending process; once port 80 is freed up, Pygmy should start up with no further errors.

+
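For example, if the output showed a hypothetical process with PID 1234 bound to port 80, you could free the port with:

Text Only
sudo kill 1234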

How can I change branches/PR environments/production on my project?#

+

You can make that change using the Lagoon API! You can find the documentation for this change in our GraphQL documentation.

+

How do I add a redirect?#

+ + +

How can I add new users (and SSH keys) to my project/group?#

+

This can be done via the Lagoon API. You can find step-by-step documentation for this change in our GraphQL documentation.

+

Can an environment be completely deleted to roll out large code changes to my project?#

+

Environments are fully built from scratch at each deploy. Dropping the old database and files and then pushing your code will result in a fresh, clean build. Don't forget to re-sync!

+

It is possible to delete an environment via GraphQL. You can find the instructions in our GraphQL documentation.

+

How do I get my new environment variable to show up?#

+

Once you've added a runtime environment variable to your production environment via GraphQL, all you need to do is run a deployment in order for your change to show up on your environment.

+
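As a sketch of what adding such a variable can look like - this assumes the addEnvVariable mutation described in our GraphQL documentation, and the typeId, name, and value below are placeholders:

GraphQL
mutation {
  addEnvVariable(
    input: {
      type: ENVIRONMENT
      # Hypothetical environment ID - look up your environment's ID via the API.
      typeId: 42
      scope: RUNTIME
      name: "MY_VARIABLE"
      value: "my-value"
    }
  ) {
    id
  }
}

Once the mutation succeeds, trigger a deployment so the running containers pick up the new value.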

How do I SFTP files to/from my Lagoon environment?#

+

For cloud hosting customers, you can SFTP to your Lagoon environment by using the following information:

+
    +
  • Server Hostname: ssh.lagoon.amazeeio.cloud
  • +
  • Port: 32222
  • +
  • Username: <Project-Environment-Name>
  • +
+

Your username is going to be the name of the environment you are connecting to, most commonly in the pattern PROJECTNAME-ENVIRONMENT.

+
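For example, connecting to the main environment of a hypothetical project called mysite from the command line would look like this:

Text Only
sftp -P 32222 mysite-main@ssh.lagoon.amazeeio.cloud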

You may also be interested in checking out our new Lagoon Sync tool, which you can read about here: https://github.com/uselagoon/lagoon-sync

+

Authentication also happens automatically via SSH Public & Private Key Authentication.

+

I don't want to use Let's Encrypt. I have an SSL certificate I would like to install#

+

We can definitely help with that. Once you have your own SSL certificate, feel free to submit a ticket or send us a message via chat and we will be more than happy to help! You will need to send us the following files:

+
    +
  • Certificate key (.key)
  • +
  • Certificate file (.crt)
  • +
  • Intermediate certificates (.crt)
  • +
+

Also, you will need to set the tls-acme option in .lagoon.yml to false.

+
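As a sketch, the relevant part of .lagoon.yml could look like the following - the environment name, service name, and domain are placeholders, and the full route syntax is covered in the lagoon.yml documentation:

YAML
environments:
  main:
    routes:
      - nginx:
          - "www.example.com":
              tls-acme: 'false'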

Is it possible to mount an external volume (EFS/Fuse/SMB/etc) into Lagoon?#

+

Mounting an external volume would need to be handled completely inside of your containers; Lagoon does not provide this type of connection as part of the platform.

+

A developer can handle this by installing the necessary packages into the container (via the Dockerfile), and ensuring the volume mount is connected via a pre- or post-rollout task.

+

Is there a way to stop a Lagoon build?#

+

If you have a build that has been running for a long time, and want to stop it, you will need to reach out to support. Currently, builds can only be stopped by users with admin access to the cluster.

+

We installed the Elasticsearch/Solr service on our website. How can we get access to the UI (port 9200/8983) from a browser?#

+ +

We suggest only exposing web services (NGINX/Varnish/Node.js) in your deployed environments. Locally, you can get the ports mapped for these services by checking docker-compose ps, and then load http://localhost:<port> in your browser.

+

I have a question that isn't answered here#

+

You can reach out to the team via Discord or email at uselagoon@amazee.io.

+ + + + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/resources/glossary/index.html b/resources/glossary/index.html new file mode 100644 index 0000000000..917b8bf9ce --- /dev/null +++ b/resources/glossary/index.html @@ -0,0 +1,3077 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + Glossary - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + + + + + + + +

Glossary#

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
TermDefinition
Access ModeControls how a persistent volume can be accessed.
Active/StandbyActive/Standby deployments, also known as blue/green deployments, are a way to seamlessly switch over your production content.
AnsibleAn open-source suite of software tools that enables infrastructure as code.
AWSAmazon Web Services
AWS GlacierA secure and inexpensive S3 storage for long-term backup.
BitBucketGit hosting owned by Atlassian, which integrates with their tools.
BrewHomebrew is a package manager for OSX.
CAA Certificate Authority is a trusted entity that issues Secure Sockets Layer (SSL) certificates.
CDNContent Delivery Network - distributes content via caching
CIContinuous Integration
CIDRClassless Inter-Domain Routing - a method of assigning IP addresses
CLICommand Line Interface
ClusterA unified group of servers or VMs, distributed and managed together, which serves one entity to ensure high availability, load balancing, and scalability.
CMSContent Management System
Cron jobThe cron command-line utility is a job scheduler on Unix-like operating systems. Users who set up and maintain software environments use cron to schedule jobs, also known as cron jobs, to run periodically at fixed times, dates, or intervals.
ComposerA package manager for PHP.
DDoSDistributed Denial of Service
DNSDomain Name System
DockerA container engine using Linux features and automating application deployment.
Docker ComposeA tool for defining and running Docker applications via YAML files.
DrupalOpen-source Content Management System
DrushA command line shell for Drupal.
EC2Amazon Elastic Compute Cloud
ElasticsearchAn open-source search engine. It provides a distributed, multi-tenant-capable full-text search engine with a web interface and schema-free JSON documents.
GaleraA generic synchronous multi-master replication library for transactional databases.
GitA free and open-source distributed version control system.
Git Hash/SHAA generated string that identifies each commit. Uses the SHA-1 algorithm
GitHubA proprietary version control hosting company using Git. A subsidiary of Microsoft, it offers all of the distributed version control and source code management functionality of Git as well as additional features.
GitLabA web-based Git repository manager with CI capabilities.
GraphQLAn open-source data query and manipulation language for APIs, and a runtime for fulfilling queries with existing data.
HarborAn open source container image registry that secures images with role-based access control, scans images for vulnerabilities, and signs images as trusted.
HelmA package manager for Kubernetes, it helps you manage Kubernetes applications.
Helm ChartsHelm Charts help you define, install, and upgrade even the most complex Kubernetes application.
HTTPHyperText Transfer Protocol. HTTP is the underlying protocol used by the World Wide Web and this protocol defines how messages are formatted and transmitted, and what actions Web servers and browsers should take in response to various commands.
IAMAWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources.
IDEAn integrated development environment is a software application that provides comprehensive facilities to computer programmers for software development. An IDE normally consists of at least a source code editor, build automation tools, and a debugger.
Ingress controllerAn Ingress controller is a specialized load balancer for Kubernetes (and other containerized) environments.
IPTablesA command line utility for configuring Linux kernel firewall.
JenkinsAn open-source automation server.
k3sA highly available, certified Kubernetes distribution.
k3dk3d is a lightweight wrapper to run k3s in Docker.
k8sNumeronym for Kubernetes (K + 8 letters + s)
k8upK8up is a backup operator that will handle storage and app backups on a k8s/OpenShift cluster.
KibanaAn open-source data visualization plugin for Elasticsearch. It provides visualization capabilities on top of the content indexed on an Elasticsearch cluster.
KinDKubernetes in Docker - a tool for running local Kubernetes clusters using Docker container “nodes”. Kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.
kubectlThe Kubernetes command-line tool which allows you to run commands against Kubernetes clusters.
KubernetesAn open-source system for automating deployment, scaling, and management of containerized applications.
LagoonAn open-source application delivery platform for Kubernetes.
LagoonizeConfiguration changes to allow your app to run on Lagoon.
LandoA free, open source, cross-platform, local development environment and DevOps tool built on Docker.
LaravelA free, open-source PHP web framework, following the model–view–controller (MVC) architectural pattern and based on Symfony.
Let's EncryptA free, automated, and open certificate authority (CA).
MariaDBA community-developed, commercially supported fork of the MySQL relational database management system, intended to remain free and open-source software under the GNU General Public License.
Master nodeA single node in the cluster on which a collection of processes which manage the cluster state are running.
MicroserviceThe practice of breaking up an application into a series of smaller, more specialized parts, each of which communicate with one another across common interfaces such as APIs and REST interfaces like HTTP
MongoDBMongoDB is a cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with schema.
Multi-TenantA single instance of software runs on a server and serves multiple tenants - a tenant is a group of users who share common access with privileges to access the software instance. The software is designed to provide each tenant a share of the resources.
MVCModel-view-controller - an architectural pattern that separates an application into three main logical components: the model, the view, and the controller. Each of these components are built to handle specific development aspects of an application.
NGINXNGINX is a web server which can also be used as a reverse proxy, load balancer, mail proxy and HTTP cache.
NodeSingle EC2 instance (AWS virtual machine)
Node.jsAn open-source, cross-platform, JavaScript runtime environment that executes JavaScript code outside of a browser.
OpenSearchA community-driven, Apache 2.0-licensed open source search and analytics suite that makes it easy to ingest, search, visualize, and analyze data.
OpenShiftContainer application platform that brings Docker and Kubernetes to the enterprise.
PHPPHP (Personal Home Page) is a general-purpose programming language originally designed for web development.
PhpStormA development tool (IDE) for PHP and web projects.
PodA group of containers that are deployed together on the same host. The basic unit that Kubernetes works with.
PostgreSQLA free and open-source relational database management system emphasizing extensibility and technical standards compliance.
Public/Private KeyPublic-key encryption is a cryptographic system that uses two keys -- a public key known to everyone and a private or secret key known only to the recipient of the message.
PuppetAn open-source software configuration management and deployment tool.
PVPersistentVolume - a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes.
PVCPersistent Volume Claim - a request for storage by a user.
PythonPython is an open-source, interpreted, high-level, general-purpose programming language.
RabbitMQAn open-source message-broker software.
RBACRole-Based Access Control
RDSRelational Database Service
RedisAn open source, in-memory data store used as a database, cache, streaming engine, and message broker.
ResticAn open-source backup program.
ROXKubernetes access mode ReadOnlyMany - the volume can be mounted as read-only by many nodes.
RubyAn interpreted, high-level, general-purpose programming language which supports multiple programming paradigms. It was designed with an emphasis on programming productivity and simplicity. In Ruby, everything is an object, including primitive data types.
RWOKubernetes access mode ReadWriteOnce - the volume can be mounted as read-write by a single node. ReadWriteOnce access mode still can allow multiple pods to access the volume when the pods are running on the same node.
RWOPKubernetes access mode ReadWriteOncePod - the volume can be mounted as read-write by a single Pod. Use ReadWriteOncePod access mode if you want to ensure that only one pod across whole cluster can read that PVC or write to it. This is only supported for CSI volumes and Kubernetes version 1.22+.
RWXKubernetes access mode ReadWriteMany - the volume can be mounted as read-write by many nodes.
SHA-1Secure Hash Algorithm 1, a hash function which takes an input and produces a 160-bit hash value known as a message digest – typically rendered as 40 hexadecimal digits. It was designed by the United States National Security Agency, and is a U.S. Federal Information Processing Standard.
SolrAn open-source enterprise-search platform, written in Java.
SSHSecure Socket Shell, a network protocol that provides administrators with a secure way to access a remote computer.
SSLSecure Socket Layer
Storage ClassesA StorageClass provides a way for Kubernetes administrators to describe the "classes" of storage they offer. Different classes might map to quality-of-service levels, or to backup policies, or to arbitrary policies determined by the cluster administrators
SymfonySymfony is a PHP web application framework and a set of reusable PHP components/libraries; Drupal 8 and up are based on Symfony.
TCPTransmission Control Protocol, a standard that defines how to establish and maintain a network conversation through which application programs can exchange data.
TLSTransport Layer Security
TrivyA simple and comprehensive vulnerability scanner for containers, suitable for CI.
TTLTime to live or hop limit is a mechanism that limits the lifespan or lifetime of data in a computer or network.
Uptime RobotUptime monitoring service.
VarnishA powerful, open-source HTTP engine/reverse HTTP proxy that can speed up a website by caching (or storing) a copy of a webpage the first time a user visits.
VMVirtual Machine
WebhookA webhook is a way for an app like GitHub, GitLab, Bitbucket, etc, to provide other applications with immediate data and act upon something, like a pull request.
YAMLYet Another Markup Language - YAML is a human-readable data-serialization language. It is commonly used for configuration files and in applications where data is being stored or transmitted.
+ + + + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/resources/tutorials-and-webinars/index.html b/resources/tutorials-and-webinars/index.html new file mode 100644 index 0000000000..6dffc730fd --- /dev/null +++ b/resources/tutorials-and-webinars/index.html @@ -0,0 +1,3016 @@ + + + + + + + + + + + + + + + + + + + + + + + + Tutorials, Webinars, and Videos - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + +
+
+ + + +
+
+
+ + + + +
+
+
+ + + + + + + +
+
+ + + + + + + + + + + + + +

Tutorials, Webinars, and Videos#

+

Intro to Lagoon Webinar#

+

[Slides]

+ + +

Advance Lando-ing with Lagoon#

+ + +

Webinar - Lagoon Insights#

+ + +

Lagoon Deployment Demo#

+ + +

How to Manage Multiple Drupal Sites with Lagoon#

+

[Slides]

+ + +

Kubernetes Webinar 101#

+

[Slides]

+ + +

Kubernetes Webinar 102#

+

[Slides]

+ + +

Server-side Rendering Best Practices: How We Run Decoupled Websites with 110 Million Hits per Month#

+ + +

Lagoon: OpenSource Docker Build & Deployment System with Full Drupal Support#

+ + +

How do I fix an internal server error in Kibana?#

+ + +

How do I add a new route?#

+ + +

How do I check the status of a build?#

+ + +

How do I add a redirect in Lagoon?#

+ + +

How do I download a database dump?#

+ + +

How do I add a cron job?#

+ + +

Deploying web applications on Kubernetes - Toby Bellwood | Techweek21 Talk#

+ + +

Dealing with unprecedented scale during Covid-19 - Sean Hamlin | Techweek21 Talk#

+ + +

Silverstripe from local to live on Lagoon - Thom Toogood | Techweek21 Talk#

+ + + + + + + + + +
+
+ + +
+ +
+ + + +
+
+
+
+ + + + + + + + + + + + + \ No newline at end of file diff --git a/search/search_index.json b/search/search_index.json new file mode 100644 index 0000000000..a425d3b698 --- /dev/null +++ b/search/search_index.json @@ -0,0 +1 @@ +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Lagoon","text":""},{"location":"#lagoon-the-open-source-application-delivery-platform-for-kubernetes","title":"Lagoon - the Open Source Application Delivery Platform for Kubernetes","text":"

Lagoon gives developers what they dream about. It's a system that allows developers to run the exact same code in their local and production environment. The same Docker images, the same service configurations, and the same code.

"},{"location":"#who-are-you","title":"Who are you?","text":"
  • If you want to use Lagoon to host your website or application, visit Using Lagoon.
  • If you want to develop Lagoon (add features, fix bugs), visit Developing Lagoon.
"},{"location":"#tldr-how-lagoon-works","title":"TL;DR: How Lagoon Works","text":"
  1. Developers define and configure needed services within YAML files.
  2. When they are happy, they push the code to Git.
  3. Lagoon parses the YAML files and adds in any additional needed configuration.
  4. Lagoon builds the needed Docker images.
  5. Lagoon pushes them to a Docker registry.
  6. Lagoon creates the needed resources in Kubernetes.
  7. Lagoon monitors the deployment of the containers.
  8. When all is done, Lagoon informs the developers in different ways (Slack, email, website, etc).
"},{"location":"#help","title":"Help?","text":"

Questions? Ideas? Meet the maintainers and contributors.

Chat with us on the Lagoon Discord: https://discord.gg/te5hHe95JE

"},{"location":"#a-couple-of-things-about-lagoon","title":"A couple of things about Lagoon","text":"
  1. Lagoon is based on microservices. The deployment and build workflow is very complex. We have multiple version control sources, multiple clusters, and multiple notification systems. Each deployment is unique and can take from seconds to hours. It's built with flexibility and robustness in mind. Microservices communicate through a messaging system, which allows us to scale individual services up and down. It allows us to survive down times of individual services. It also allows us to try out new parts of Lagoon in production without affecting others.
  2. Lagoon uses many programming languages. Each programming language has specific strengths. We try to decide which language makes the most sense for each service. Currently, a lot of Lagoon is built in Node.js. This is partly because we started with Node.js, but also because Node.js allows asynchronous processing of webhooks, tasks and more. We are likely going to change the programming language of some services. This is what is great about microservices! We can replace a single service with another language without worrying about other parts of the platform.
  3. Lagoon is not Drupal-specific. Everything has been built so that it can run any Docker image. There are existing Docker images for Drupal, and support for Drupal-specific tools like Drush. But that's it!
  4. Lagoon is DevOps. It allows developers to define the services they need and customize them as they need. You might think this is not the right way to do it, and gives too much power to developers. We believe that as system engineers, we need to empower developers. If we allow developers to define services locally, and test them locally, they will find bugs and mistakes themselves.
  5. Lagoon runs on Docker and Kubernetes. (That one should be obvious, right?)
  6. Lagoon can be completely locally developed and tested.
  7. Lagoon is completely integration tested. This means we can test the whole process. From receiving Git webhooks to deploying into a Docker container, the same Git hash is deployed in the cluster.
  8. Most important: It's a work in progress. It's not done yet. At amazee.io, we believe that as a hosting community, we need to work together and share code where we can.

We want you to understand the Lagoon infrastructure and how the services work together. Here is a schema (it's a little out of date - it doesn't include some of the more recent services we've added, or cover Kubernetes, so we're working on an update!): Lucid Chart

"},{"location":"#history-of-lagoon","title":"History of Lagoon","text":"

As described, Lagoon is a dream come true. At amazee.io, we've been hosting Drupal for more than 8 years. This is the fourth major iteration of our hosting platform. The third iteration was built around Puppet and Ansible. Every single piece of the platform was done with configuration management. This allowed very fast setup of new servers, but at the same time was also lacking customizability for developers. We implemented some customizability, some of it already using Docker in production. However, we were never completely happy with it. We realized that our existing platform wasn't enough. With the rise of decoupled Drupal, the need to run Node.js on the server side, the requests for Elasticsearch, and different Solr versions, we had to do more.

At the same time, we've been using Docker for many years for local development. It was always an idea to use Docker for everything in production. The only problem was the connection between local development and production environments. There are other systems that allow you to run Drupal in Docker in production. But, nothing allowed you to test the exact same images and services locally and in production.

Lagoon was born in 2017. It has since been developed into a system that runs Docker in production. Lagoon has replaced our third generation hosting platform with a cutting edge all Docker-based system.

"},{"location":"#open-source","title":"Open Source","text":"

At amazee.io, we believe in open source. It was always troubling for us that open source code like Drupal was hosted on proprietary hosting platforms. The strength and success of a hosting company is not just their deployment systems or service configurations. It's the people and knowledge that run the platform. The processes, skills, ability to react to unforeseen situations, and last but not least, the support they provide their clients.

"},{"location":"#license","title":"License","text":"

Lagoon is available under an Apache 2.0 License.

"},{"location":"code-of-conduct/","title":"Code of Conduct","text":""},{"location":"code-of-conduct/#our-pledge","title":"Our Pledge","text":"

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.

"},{"location":"code-of-conduct/#our-standards","title":"Our Standards","text":"

Examples of behavior that contributes to creating a positive environment include:

  • Using welcoming and inclusive language.
  • Being respectful of differing viewpoints and experiences.
  • Gracefully accepting constructive criticism.
  • Focusing on what is best for the community.
  • Showing empathy towards other community members.

Examples of unacceptable behavior by participants include:

  • The use of sexualized language or imagery and unwelcome sexual attention or advances.
  • Trolling, insulting/derogatory comments, and personal or political attacks.
  • Public or private harassment.
  • Publishing others' private information, such as a physical or electronic address, without explicit permission.
  • Other conduct which could reasonably be considered inappropriate in a professional setting.
"},{"location":"code-of-conduct/#our-responsibilities","title":"Our Responsibilities","text":"

Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

"},{"location":"code-of-conduct/#scope","title":"Scope","text":"

This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project email address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.

"},{"location":"code-of-conduct/#enforcement","title":"Enforcement","text":"

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at uselagoon@amazee.io. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.

"},{"location":"code-of-conduct/#attribution","title":"Attribution","text":"

This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at http://contributor-covenant.org/version/1/4.

"},{"location":"contributing/","title":"Contributing","text":"

We gladly welcome any and all contributions to Lagoon!

"},{"location":"contributing/#what-kind-of-contributions-do-we-need","title":"What kind of contributions do we need?","text":"

Lagoon benefits from any kind of contribution - whether it's a bugfix, new feature, documentation update, or simply some queue maintenance - and we're happy that you want to help!

"},{"location":"contributing/#developing-for-lagoon","title":"Developing for Lagoon","text":"

There's a whole section on how to get Lagoon running on your local machine using KinD over at Developing Lagoon. This documentation is still very WIP - but there are a lot of Makefile routines to help you out.

"},{"location":"contributing/#installing-lagoon","title":"Installing Lagoon","text":"

We've got another section that outlines how to install Lagoon from Helm charts at Installing Lagoon Into Existing Kubernetes Cluster - we'd love to get this process as slick as possible!

"},{"location":"contributing/#help-us-with-our-examples","title":"Help us with our examples","text":"

Right now, one of our biggest needs is putting together examples of Lagoon working with various content management systems and frameworks other than Drupal.

If you can spin up an open source CMS or framework that we don\u2019t currently have as a Docker Compose stack, send us a PR. Look at the existing examples at https://github.com/uselagoon/lagoon-examples for tips, pointers and starter issues.

One small catch \u2013 wherever possible, we\u2019d like them to be built using our base Docker Hub images https://hub.docker.com/u/uselagoon \u2013 if we don\u2019t have a suitable image, or our images need modifying \u2013 throw us a PR (if you can) or create an issue (so someone else can) at https://github.com/uselagoon/lagoon-images.

Help us improve our existing examples, if you can - are we following best practices, is there something we\u2019re doing that doesn\u2019t make sense?

Bonus points for anyone that helps contribute to tests for any of these examples \u2013 we\u2019ve got some example tests in a couple of the projects you can use for guidance \u2013 https://github.com/amazeeio/drupal-example-simple/blob/8.x/TESTING_dockercompose.md. The testing framework we\u2019re using is Leia, from the excellent team behind Lando.

Help us to document our other examples better \u2013 we\u2019re not expecting a full manuscript, but tidy-ups, links to helpful resources and clarifying statements are all super-awesome.

If you have any questions, reach out to us on Discord!

"},{"location":"contributing/#i-found-a-security-issue","title":"I found a security issue \ud83d\udd13","text":"

We take security very seriously. If you discover a security issue or think you found one, please bring it to the maintainers' attention.

Danger

Please send your findings to security@amazee.io. Please DO NOT file a GitHub issue for them.

Security reports are greatly appreciated and will receive public karma and swag! We're also working on a Bug Bounty system.

"},{"location":"contributing/#i-found-an-issue","title":"I found an issue","text":"

We're always interested in fixing issues, therefore issue reports are very welcome. Please make sure to check that your issue does not already exist in the issue queue.

"},{"location":"contributing/#i-have-a-feature-request-or-idea","title":"I have a feature request or idea","text":"

Cool! Create an issue and we're happy to look over it. We can't guarantee that it will be implemented. But we are always interested in hearing ideas of what we could bring to Lagoon.

Another good way is also to talk to us via Discord about your idea. Join today!

"},{"location":"contributing/#i-wrote-some-code","title":"I wrote some code","text":"

Epic! Please send us a pull request for it, we will do our best to review it and merge it if possible.

"},{"location":"administering-lagoon/feature-flags/","title":"Feature flags","text":"

Some Lagoon features can be controlled by setting feature flags. This is designed to assist users and administrators to roll out new platform features in a controlled manner.

"},{"location":"administering-lagoon/feature-flags/#environment-variables","title":"Environment variables","text":"

The following environment variables can be set on an environment or project to toggle feature flags.

Environment Variable Name Active scope Version introduced Version removed Default Value Description LAGOON_FEATURE_FLAG_ROOTLESS_WORKLOAD global 2.2.0 - disabled Set to enabled to set a non-root pod security context on the pods in this environment or project.This flag will eventually be deprecated, at which point non-root workloads will be enforced. LAGOON_FEATURE_FLAG_ISOLATION_NETWORK_POLICY global 2.2.0 - disabled Set to enabled to add a default namespace isolation network policy to each environment on deployment.This flag will eventually be deprecated, at which point the namespace isolation network policy will be enforced.NOTE: enabling and then disabling this feature will not remove any existing network policy from previous deployments. Those must be removed manually."},{"location":"administering-lagoon/feature-flags/#cluster-level-controls","title":"Cluster-level controls","text":"

Feature flags may also be controlled at the cluster level. There is support for this in the lagoon-build-deploy chart. For each feature flag there are two flavours of values which can be set: default and force.

  • default controls the default policy for environments deployed to the cluster, but can be overridden at the project or environment level by the environment variables documented above.
  • force also controls the policy for environments deployed to the cluster, but cannot be overridden by the environment variables documented above.
"},{"location":"administering-lagoon/graphql-queries/","title":"GraphQL API","text":""},{"location":"administering-lagoon/graphql-queries/#running-graphql-queries","title":"Running GraphQL queries","text":"

Direct API interactions in Lagoon are done via GraphQL.

In order to authenticate with the API, we need a JWT (JSON Web Token) that allows us to use the GraphQL API as admin. To generate this token, open the terminal of the storage-calculator pod via your Kubernetes UI, or via kubectl and run the following command:

Generate JWT token.
./create_jwt.py\n

This will return a long string which is the JWT token. Make a note of this, as we will need it to send queries.

We also need the URL of the API endpoint, which can be found under \"Ingresses\" in your Kubernetes UI or via kubectl on the command line. Make a note of this endpoint URL, which we will also need.

To compose and send GraphQL queries, we recommend GraphiQL.app, a desktop GraphQL client with features such as autocomplete. To continue with the next steps, install and start the app.

Under \"GraphQL Endpoint\", enter the API endpoint URL with /graphql on the end. Then click on \"Edit HTTP Headers\" and add a new header:

  • \"Header name\": Authorization
  • \"Header value\": Bearer [JWT token] (make sure that the JWT token has no spaces, as this would not work)

Press ESC to close the HTTP header overlay and now we are ready to send the first GraphQL request!

Enter this in the left panel

Running a query
query allProjects{\n  allProjects {\n    name\n  }\n}\n

And press the \u25b6\ufe0f button (or press CTRL+ENTER).

If all went well, your first GraphQL response should appear shortly afterwards in the right pane.

"},{"location":"administering-lagoon/graphql-queries/#creating-the-first-project","title":"Creating the first project","text":"

Let's create the first project for Lagoon to deploy! For this we'll use the queries from the GraphQL query template in create-project.gql.

For each of the queries (the blocks starting with mutation {), fill in all of the empty fields marked by TODO comments and run the queries in GraphiQL.app. This will create one of each of the following two objects:

  1. kubernetes : The Kubernetes (or Openshift) cluster to which Lagoon should deploy. Lagoon is not only capable of deploying to its own Kubernetes cluster, but also to any Kubernetes cluster anywhere in the world.
  2. project : The Lagoon project to be deployed, which is a Git repository with a .lagoon.yml configuration file committed in the root.
"},{"location":"administering-lagoon/graphql-queries/#allowing-access-to-the-project","title":"Allowing access to the project","text":"

In Lagoon, each developer authenticates via their SSH key(s). This determines their access to:

  1. The Lagoon API, where they can see and edit projects they have access to.
  2. Remote shell access to containers that are running in projects they have access to.
  3. The Lagoon logging system, where a developer can find request logs, container logs, Lagoon logs and more.

To allow access to the project, we first need to add a new group to the API:

Add group to API
mutation {\n  addGroup (\n    input: {\n      # TODO: Enter the name for your new group.\n      name: \"\"\n    }\n  )     {\n    id\n    name\n  }\n}\n

Then we need to add a new user to the API:

Add new user to API
mutation {\n  addUser(\n    input: {\n      email: \"michael.schmid@example.com\"\n      firstName: \"Michael\"\n      lastName: \"Schmid\"\n      comment: \"CTO\"\n    }\n  ) {\n    # TODO: Make a note of the user ID that is returned.\n    id\n  }\n}\n

Then we can add an SSH public key for the user to the API:

Add SSH public key for the user to API
mutation {\n  addSshKey(\n    input: {\n      # TODO: Fill in the name field.\n      # This is a non-unique identifier for the SSH key.\n      name: \"\"\n      # TODO: Fill in the keyValue field.\n      # This is the actual SSH public key (without the type at the beginning and without the comment at the end, ex. `AAAAB3NzaC1yc2EAAAADAQ...3QjzIOtdQERGZuMsi0p`).\n      keyValue: \"\"\n      # TODO: Fill in the keyType field.\n      # Valid values are either SSH_RSA, SSH_ED25519, ECDSA_SHA2_NISTP256/384/521\n      keyType: SSH_RSA\n      user: {\n        # TODO: Fill in the userId field.\n        # This is the user ID that we noted from the addUser query.\n        id:\"0\",\n        email:\"michael.schmid@example.com\"\n      }\n    }\n  ) {\n    id\n  }\n}\n

After we add the key, we need to add the user to a group:

Add user to group
mutation {\n  addUserToGroup (\n    input: {\n      user: {\n        #TODO: Enter the email address of the user.\n        email: \"\"\n      }\n      group: {\n        #TODO: Enter the name of the group you want to add the user to.\n        name: \"\"\n      }\n      #TODO: Enter the role of the user.\n      role: OWNER\n\n    }\n  ) {\n    id\n    name\n  }\n}\n

After running one or more of these kinds of queries, the user will be granted access to create tokens via SSH, access containers and more.

"},{"location":"administering-lagoon/graphql-queries/#adding-notifications-to-the-project","title":"Adding notifications to the project","text":"

If you want to know what is going on during a deployment, we suggest configuring notifications for your project, which provide:

  • Push notifications
  • Build start information
  • Build success or failure messages
  • And many more!

As notifications can be quite different in terms of the information they need, each notification type has its own mutation.

As with users, we first add the notification:

Add notification
mutation {\n  addNotificationSlack(\n    input: {\n      # TODO: Fill in the name field.\n      # This is your own identifier for the notification.\n      name: \"\"\n      # TODO: Fill in the channel field.\n      # This is the channel for the message to be sent to.\n      channel: \"\"\n      # TODO: Fill in the webhook field.\n      # This is the URL of the webhook where messages should be sent, this is usually provided by the chat system to you.\n      webhook: \"\"\n    }\n  ) {\n    id\n  }\n}\n

After the notification is created, we can now assign it to our project:

Assign notification to project
mutation {\n  addNotificationToProject(\n    input: {\n      notificationType: SLACK\n      # TODO: Fill in the project field.\n      # This is the project name.\n      project: \"\"\n      # TODO: Fill in the notification field.\n      # This is the notification name.\n      notificationName: \"\"\n      # TODO: OPTIONAL\n      # The kind of notification class you're interested in; defaults to DEPLOYMENT.\n      contentType: DEPLOYMENT/PROBLEM\n      # TODO: OPTIONAL\n      # Related to contentType PROBLEM, we can set the threshold for the kinds of problems\n      # we'd like to be notified about.\n      notificationSeverityThreshold: NONE/UNKNOWN/NEGLIGIBLE/LOW/MEDIUM/HIGH/CRITICAL\n    }\n  ) {\n    id\n  }\n}\n

Now for every deployment you will receive messages in your defined channel.

"},{"location":"administering-lagoon/graphql-queries/#example-graphql-queries","title":"Example GraphQL queries","text":""},{"location":"administering-lagoon/graphql-queries/#adding-a-new-kubernetes-target","title":"Adding a new Kubernetes target","text":"

Note

In Lagoon, both addKubernetes and addOpenshift can be used for both Kubernetes and OpenShift targets - either will work interchangeably.

The cluster to which Lagoon should deploy.

Add Kubernetes target
mutation {\n  addKubernetes(\n    input: {\n      # TODO: Fill in the name field.\n      # This is the unique identifier of the cluster.\n      name: \"\"\n      # TODO: Fill in consoleUrl field.\n      # This is the URL of the Kubernetes cluster\n      consoleUrl: \"\"\n      # TODO: Fill in the token field.\n      # This is the token of the `lagoon` service account created in this cluster (this is the same token that we also used during installation of Lagoon).\n      token: \"\"\n    }\n  ) {\n    name\n    id\n  }\n}\n
"},{"location":"administering-lagoon/graphql-queries/#adding-a-group-to-a-project","title":"Adding a group to a project","text":"

This query will add a group to a project. Users of that group will be able to access the project. They will be able to make changes, based on their role in that group.

Add a group to a project
mutation {\n  addGroupsToProject (\n    input: {\n      project: {\n        #TODO: Enter the name of the project.\n        name: \"\"\n      }\n      groups: {\n        #TODO: Enter the name of the group that will be added to the project.\n        name: \"\"\n      }\n    }\n  ) {\n    id\n  }\n}\n
"},{"location":"administering-lagoon/graphql-queries/#adding-a-new-project","title":"Adding a new project","text":"

This query adds a new Lagoon project to be deployed, which is a Git repository with a .lagoon.yml configuration file committed in the root.

If you omit the privateKey field, a new SSH key for the project will be generated automatically.

If you would like to reuse a key from another project, you will need to supply the key in the addProject mutation.

Add a new project
mutation {\n  addProject(\n    input: {\n      # TODO: Fill in the name field.\n      # This is the project name.\n      name: \"\"\n      # TODO: Fill in the private key field (replace newlines with '\\n').\n      # This is the private key for a project, which is used to access the Git code.\n      privateKey: \"\"\n      # TODO: Fill in the Kubernetes field.\n      # This is the ID of the Kubernetes or OpenShift to assign to the project.\n      kubernetes: 0\n      # TODO: Fill in the name field.\n      # This is the project name.\n      gitUrl: \"\"\n      # TODO: Fill in the branches to be deployed.\n      branches: \"\"\n      # TODO: Define the production environment.\n      productionEnvironment: \"\"\n    }\n  ) {\n    name\n    kubernetes {\n      name\n      id\n    }\n    gitUrl\n    activeSystemsDeploy\n    activeSystemsRemove\n    branches\n    pullrequests\n  }\n}\n
"},{"location":"administering-lagoon/graphql-queries/#list-projects-and-groups","title":"List projects and groups","text":"

This is a good query to see an overview of all projects, clusters and groups that exist within our Lagoon.

Get an overview of all projects, clusters, and groups
query {\n  allProjects {\n    name\n    gitUrl\n  }\n  allKubernetes {\n    name\n    id\n  }\n  allGroups{\n    id\n    name\n    members {\n      # This will display the users in this group.\n      user {\n        id\n        firstName\n        lastName\n      }\n      role\n    }\n    groups {\n      id\n      name\n    }\n  }\n}\n
"},{"location":"administering-lagoon/graphql-queries/#single-project","title":"Single project","text":"

If you want a detailed look at a single project, this query has proven quite useful:

Take a detailed look at one project
query {\n  projectByName(\n    # TODO: Fill in the project name.\n    name: \"\"\n  ) {\n    id\n    branches\n    gitUrl\n    pullrequests\n    productionEnvironment\n    notifications(type: SLACK) {\n      ... on NotificationSlack {\n        name\n        channel\n        webhook\n        id\n      }\n    }\n    environments {\n      name\n      deployType\n      environmentType\n    }\n    kubernetes {\n      id\n    }\n  }\n}\n
"},{"location":"administering-lagoon/graphql-queries/#querying-a-project-by-its-git-url","title":"Querying a project by its Git URL","text":"

Don't remember the name of a project, but know the Git URL? Search no longer; there is a GraphQL query for that:

Query project by Git URL
query {\n  projectByGitUrl(gitUrl: \"git@server.com:org/repo.git\") {\n    name\n  }\n}\n
"},{"location":"administering-lagoon/graphql-queries/#updating-objects","title":"Updating objects","text":"

The Lagoon GraphQL API can not only display and create objects, it can also update existing objects using a patch object.

Update the branches to deploy within a project:

Update deploy branches
mutation {\n  updateProject(\n    input: { id: 109, patch: { branches: \"^(prod|stage|dev|update)$\" } }\n  ) {\n    id\n  }\n}\n

Update the production environment within a project:

Warning

This requires a redeploy in order for the changes to be reflected in the containers.

Update prod environment
mutation {\n  updateProject(\n    input: { id: 109, patch: { productionEnvironment: \"main\" } }\n  ) {\n    id\n  }\n}\n
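
If you want to trigger that redeploy through the API as well, a mutation along the following lines should work (a sketch based on the deployEnvironmentLatest mutation; verify the input fields against your Lagoon version's schema):

Deploy the latest commit of an environment

mutation {\n  deployEnvironmentLatest(\n    input: {\n      environment: {\n        # TODO: Fill in the environment name.\n        name: \"\"\n        project: {\n          # TODO: Fill in the project name.\n          name: \"\"\n        }\n      }\n    }\n  )\n}\n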

You can also combine multiple changes at once:

Update prod environment and set deploy branches
mutation {\n  updateProject(\n    input: {\n      id: 109\n      patch: {\n        productionEnvironment: \"main\"\n        branches: \"^(prod|stage|dev|update)$\"\n      }\n    }\n  ) {\n    id\n  }\n}\n
"},{"location":"administering-lagoon/graphql-queries/#deleting-environments","title":"Deleting Environments","text":"

You can also use the Lagoon GraphQL API to delete an environment. You'll need to know the project name and the environment name in order to run the command.

Delete environment
mutation {\n  deleteEnvironment(\n    input: {\n      # TODO: Fill in the name field.\n      # This is the environment name.\n      name: \"\"\n      # TODO: Fill in the project field.\n      # This is the project name.\n      project: \"\"\n      execute: true\n    }\n  )\n}\n
"},{"location":"administering-lagoon/graphql-queries/#querying-a-project-to-see-what-groups-and-users-are-assigned","title":"Querying a project to see what groups and users are assigned","text":"

Want to see what groups and users have access to a project? Want to know what their roles are? Do I have a query for you! Using the query below you can search for a project and display the groups, users, and roles that are assigned to that project.

Query groups, users, and roles assigned to project
query search {\n  projectByName(\n    # TODO: Enter the name of the project.\n    name: \"\"\n  ) {\n    id\n    branches\n    productionEnvironment\n    pullrequests\n    gitUrl\n    kubernetes {\n      id\n    }\n    groups {\n      id\n      name\n      groups {\n        id\n        name\n      }\n      members {\n        role\n        user {\n          id\n          email\n        }\n      }\n    }\n  }\n}\n
"},{"location":"administering-lagoon/graphql-queries/#maintaining-project-metadata","title":"Maintaining project metadata","text":"

Project metadata can be assigned using arbitrary key/value pairs. Projects can then be queried by the associated metadata; for example, you may categorize projects by type of software, version number, or any other categorization you wish to query on later.

"},{"location":"administering-lagoon/graphql-queries/#addupdate-metadata-on-a-project","title":"Add/update metadata on a project","text":"

Updates to metadata expect a key/value pair. The operation is an UPSERT: if a key already exists, its value will be updated; otherwise the pair is inserted.

You may have any number of key/value pairs stored against a project.

Add a key/value pair to metadata
mutation {\n  updateProjectMetadata(\n    input: { id: 1,  patch: { key: \"type\", value: \"saas\" } }\n  ) {\n    id\n    metadata\n  }\n}\n
"},{"location":"administering-lagoon/graphql-queries/#query-for-projects-by-metadata","title":"Query for projects by metadata","text":"

Queries may be by key only (e.g. return all projects where a specific key exists) or by key and value, where both key and value must match.

All projects that have the version tag:

Query by metadata
query projectsByMetadata {\n  projectsByMetadata(metadata: [{ key: \"version\" }]) {\n    id\n    name\n  }\n}\n

All projects that have the version tag, specifically version 8:

Query by metadata
query projectsByMetadata {\n  projectsByMetadata(metadata: [{ key: \"version\", value: \"8\" }]) {\n    id\n    name\n  }\n}\n
"},{"location":"administering-lagoon/graphql-queries/#removing-metadata-on-a-project","title":"Removing metadata on a project","text":"

Metadata can be removed on a per-key basis. Other metadata key/value pairs will persist.

Remove metadata
mutation {\n  removeProjectMetadataByKey (\n    input: { id: 1,  key: \"version\" }\n  ) {\n    id\n    metadata\n  }\n}\n
"},{"location":"administering-lagoon/rbac/","title":"Role-Based Access Control (RBAC)","text":"

Version 1.0 of Lagoon changed how you access your projects! Access to your project is handled via groups, with projects assigned to one or multiple groups. Users are added to groups with a role. Groups can also be nested within sub-groups. This change provides a lot more flexibility and the possibility to recreate real-world teams within Lagoon.
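
As an example, a group can be created with the addGroup mutation; the sketch below also nests it under a parent group. The parentGroup input is an assumption based on the nested-group support described above, so confirm it against your API schema:

Add a group, optionally nested under a parent group

mutation {\n  addGroup(\n    input: {\n      # TODO: Fill in the name of the new group.\n      name: \"\"\n      # Optional: nest the new group under an existing group.\n      # (Assumption: confirm that your Lagoon version supports parentGroup.)\n      parentGroup: {\n        name: \"\"\n      }\n    }\n  ) {\n    id\n    name\n  }\n}\n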

"},{"location":"administering-lagoon/rbac/#roles","title":"Roles","text":"

When assigning a user to a group, you need to provide a group role for that user inside this group. Each of the five existing group roles gives the user different permissions to the group and the projects assigned to it. Here are the platform-wide roles and the group roles that are currently found in Lagoon:

"},{"location":"administering-lagoon/rbac/#platform-wide-roles","title":"Platform-Wide Roles","text":""},{"location":"administering-lagoon/rbac/#platform-wide-admin","title":"Platform-Wide Admin","text":"

The platform-wide admin has access to everything across all of Lagoon. That includes dangerous mutations like deleting all projects. Use very, very, very carefully.

"},{"location":"administering-lagoon/rbac/#platform-wide-owner","title":"Platform-Wide Owner","text":"

The platform-wide owner has access to every Lagoon group, like the group owner role, and can be used for a user that needs access to everything without being assigned to every group.

"},{"location":"administering-lagoon/rbac/#group-roles","title":"Group Roles","text":""},{"location":"administering-lagoon/rbac/#owner","title":"Owner","text":"

The owner role can do everything within a group and its associated projects. They can add and manage users of a group. Be careful with this role, as it can delete projects and production environments!

"},{"location":"administering-lagoon/rbac/#maintainer","title":"Maintainer","text":"

The maintainer role can do everything within a group and its associated projects except deleting the project itself or the production environment. They can add and manage users of a group.

"},{"location":"administering-lagoon/rbac/#developer","title":"Developer","text":"

The developer role has SSH access only to development environments. This role cannot access, update, or delete the production environment. They can run a sync task with the production environment as a source, but not as the destination. They cannot manage users of a group.

IMPORTANT

This role does not prevent the deployment of the production environment, as a deployment is triggered via a Git push! You need to make sure that your Git server prevents these users from pushing into the branch defined as the production environment.

"},{"location":"administering-lagoon/rbac/#reporter","title":"Reporter","text":"

The reporter role has view access only. They cannot access any environments via SSH or make modifications to them. They can run cache-clear tasks. This role is mostly used for stakeholders who need access to the Lagoon UI and logging.

"},{"location":"administering-lagoon/rbac/#guest","title":"Guest","text":"

The guest role has the same privileges as the reporter role listed above.
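
Group roles are assigned when a user is added to a group. As a sketch (the exact role enum values may differ between Lagoon versions), the addUserToGroup mutation looks something like this:

Add a user to a group with a role

mutation {\n  addUserToGroup(\n    input: {\n      user: {\n        # TODO: Fill in the user's email address.\n        email: \"\"\n      }\n      group: {\n        # TODO: Fill in the group name.\n        name: \"\"\n      }\n      # One of GUEST, REPORTER, DEVELOPER, MAINTAINER or OWNER (assumed enum values).\n      role: DEVELOPER\n    }\n  ) {\n    name\n  }\n}\n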

Here is a matrix that lists the roles and the access they have:

"},{"location":"administering-lagoon/rbac/#lagoon-100-rbac-permission-matrix","title":"Lagoon 1.0.0 RBAC Permission Matrix","text":"SelfGuestDeveloperMaintainerOwnerPlatform-Wide OwnerPlatform-Wide Admin Name Resource Scope Attributes addSshKey ssh_key add userID updateSshKey ssh_key update userID deleteSshKey ssh_key delete userID getUserSshKeys ssh_key view:user userID updateUser user update userID deleteUser user delete userID Name Resource Scope Attributes getBackupsByEnvironmentId deployment view projectID getEnvironmentsByProjectId environment view projectID getEnvironmentServicesByEnvironmentId environment view projectID getEnvVarsByEnvironmentId env_var environment:view:development projectID getEnvVarsByEnvironmentId env_var environment:view:production projectID getEnvVarsByProjectId env_var project:view projectID addGroup group add getOpenshiftByProjectId openshift view projectID addProject project add getProjectByEnvironmentId project view projectID getProjectByGitUrl project view projectID getProjectByName project view projectID addRestore restore add projectID updateRestore restore update projectID taskDrushCacheClear task drushCacheClear:development projectID taskDrushCacheClear task drushCacheClear:production projectID taskDrushCron task drushCron:development projectID taskDrushCron task drushCron:production projectID getFilesByTaskId task view projectID getTasksByEnvironmentId task view projectID getTaskByRemoteId task view projectID getTaskById task view projectID addUser user add Name Resource Scope Attributes addBackup backup add projectID getBackupsByEnvironmentId backup view projectID addEnvVariable (to Environment) env_var environment:add:development projectID deleteEnvVariable (from Environment) env_var environment:delete:development projectID getEnvVarsByEnvironmentId env_var environment:viewValue:development projectID addOrUpdateEnvironment environment addOrUpdate:development projectID updateEnvironment environment update:development projectID deleteEnvironment environment delete:development projectID addDeployment environment deploy:development projectID setEnvironmentServices environment update:development projectID deployEnvironmentLatest environment deploy:development projectID deployEnvironmentBranch environment deploy:development projectID deployEnvironmentPullrequest environment deploy:development projectID deployEnvironmentPromote environment deploy:development projectID userCanSshToEnvironment environment ssh:development projectID getNotificationsByProjectId notification view projectID addTask task add:development projectID taskDrushArchiveDump task drushArchiveDump:development projectID taskDrushArchiveDump task drushArchiveDump:production projectID taskDrushSqlDump task drushSqlDump:development projectID taskDrushSqlDump task drushSqlDump:production projectID taskDrushUserLogin task drushUserLogin:destination:development environmentID taskDrushSqlSync task drushSqlSync:source:development projectID taskDrushSqlSync task drushSqlSync:source:production projectID taskDrushSqlSync task drushSqlSync:destination:development projectID taskDrushRsyncFiles task drushRsync:source:development projectID taskDrushRsyncFiles task drushRsync:source:production projectID taskDrushRsyncFiles task drushRsync:destination:development projectID deleteTask task delete projectID updateTask task update projectID uploadFilesForTask task update projectID deleteFilesForTask task delete projectID getBackupsByEnvironmentId deployment view projectID getEnvironmentsByProjectId 
environment view projectID getEnvironmentServicesByEnvironmentId environment view projectID getEnvVarsByEnvironmentId env_var environment:view:development projectID getEnvVarsByEnvironmentId env_var environment:view:production projectID getEnvVarsByProjectId env_var project:view projectID addGroup group add getOpenshiftByProjectId openshift view projectID addProject project add getProjectByEnvironmentId project view projectID getProjectByGitUrl project view projectID getProjectByName project view projectID addRestore restore add projectID updateRestore restore update projectID taskDrushCacheClear task drushCacheClear:development projectID taskDrushCacheClear task drushCacheClear:production projectID taskDrushCron task drushCron:development projectID taskDrushCron task drushCron:production projectID getFilesByTaskId task view projectID getTasksByEnvironmentId task view projectID getTaskByRemoteId task view projectID getTaskById task view projectID addUser user add Name Resource Scope Attributes deleteBackup backup delete projectID addEnvVariable (to Project) env_var project:add projectID addEnvVariable (to Environment) env_var environment:add:production projectID deleteEnvVariable env_var delete projectID deleteEnvVariable (from Project) env_var project:delete projectID deleteEnvVariable (from Environment) env_var environment:delete:production projectID getEnvVarsByProjectId env_var project:viewValue projectID getEnvVarsByEnvironmentId env_var environment:viewValue:production projectID addOrUpdateEnvironment environment addOrUpdate:production projectID updateEnvironment environment update:production projectID addDeployment environment deploy:production projectID deleteDeployment deployment delete projectID updateDeployment deployment update projectID setEnvironmentServices environment update:production projectID deployEnvironmentLatest environment deploy:production projectID deployEnvironmentBranch environment deploy:production projectID deployEnvironmentPullrequest environment deploy:production projectID deployEnvironmentPromote environment deploy:production projectID userCanSshToEnvironment environment ssh:production projectID updateGroup group update groupID deleteGroup group delete groupID addUserToGroup group addUser groupID removeUserFromGroup group removeUser groupID addNotificationToProject project addNotification projectID removeNotificationFromProject project removeNotification projectID updateProject project update projectID addGroupsToProject project addGroup projectID removeGroupsFromProject project removeGroup projectID addTask task add:production projectID taskDrushUserLogin task drushUserLogin:destination:production environmentID taskDrushSqlSync task drushSqlSync:destination:production projectID taskDrushRsyncFiles task drushRsync:destination:production projectID addBackup backup add projectID getBackupsByEnvironmentId backup view projectID addEnvVariable (to Environment) env_var environment:add:development projectID deleteEnvVariable (from Environment) env_var environment:delete:development projectID getEnvVarsByEnvironmentId env_var environment:viewValue:development projectID addOrUpdateEnvironment environment addOrUpdate:development projectID updateEnvironment environment update:development projectID deleteEnvironment environment delete:development projectID addDeployment environment deploy:development projectID setEnvironmentServices environment update:development projectID deployEnvironmentLatest environment deploy:development projectID deployEnvironmentBranch 
environment deploy:development projectID deployEnvironmentPullrequest environment deploy:development projectID deployEnvironmentPromote environment deploy:development projectID getNotificationsByProjectId notification view projectID addTask task add:development projectID taskDrushArchiveDump task drushArchiveDump:development projectID taskDrushArchiveDump task drushArchiveDump:production projectID taskDrushSqlDump task drushSqlDump:development projectID taskDrushSqlDump task drushSqlDump:production projectID taskDrushUserLogin task drushUserLogin:destination:development environmentID taskDrushSqlSync task drushSqlSync:source:development projectID taskDrushSqlSync task drushSqlSync:source:production projectID taskDrushSqlSync task drushSqlSync:destination:development projectID taskDrushRsyncFiles task drushRsync:source:development projectID taskDrushRsyncFiles task drushRsync:source:production projectID taskDrushRsyncFiles task drushRsync:destination:development projectID deleteTask task delete projectID updateTask task update projectID uploadFilesForTask task update projectID deleteFilesForTask task delete projectID getBackupsByEnvironmentId deployment view projectID getEnvironmentsByProjectId environment view projectID getEnvironmentServicesByEnvironmentId environment view projectID getEnvVarsByEnvironmentId env_var environment:view:development projectID getEnvVarsByEnvironmentId env_var environment:view:production projectID getEnvVarsByProjectId env_var project:view projectID addGroup group add getOpenshiftByProjectId openshift view projectID addProject project add getProjectByEnvironmentId project view projectID getProjectByGitUrl project view projectID getProjectByName project view projectID addRestore restore add projectID updateRestore restore update projectID taskDrushCacheClear task drushCacheClear:development projectID taskDrushCacheClear task drushCacheClear:production projectID taskDrushCron task drushCron:development projectID taskDrushCron task drushCron:production projectID getFilesByTaskId task view projectID getTasksByEnvironmentId task view projectID getTaskByRemoteId task view projectID getTaskById task view projectID addUser user add Name Resource Scope Attributes deleteEnvironment environment delete:production projectID deleteProject project delete projectID getProjectByEnvironmentId project viewPrivateKey projectID getProjectByGitUrl project viewPrivateKey projectID getProjectByName project viewPrivateKey projectID deleteBackup backup delete projectID addEnvVariable (to Project) env_var project:add projectID addEnvVariable (to Environment) env_var environment:add:production projectID deleteEnvVariable env_var delete projectID deleteEnvVariable (from Project) env_var project:delete projectID deleteEnvVariable (from Environment) env_var environment:delete:production projectID getEnvVarsByProjectId env_var project:viewValue projectID getEnvVarsByEnvironmentId env_var environment:viewValue:production projectID addOrUpdateEnvironment environment addOrUpdate:production projectID updateEnvironment environment update:production projectID addDeployment environment deploy:production projectID deleteDeployment deployment delete projectID updateDeployment deployment update projectID setEnvironmentServices environment update:production projectID deployEnvironmentLatest environment deploy:production projectID deployEnvironmentBranch environment deploy:production projectID deployEnvironmentPullrequest environment deploy:production projectID deployEnvironmentPromote environment 
deploy:production projectID updateGroup group update groupID deleteGroup group delete groupID addUserToGroup group addUser groupID removeUserFromGroup group removeUser groupID addNotificationToProject project addNotification projectID removeNotificationFromProject project removeNotification projectID updateProject project update projectID addGroupsToProject project addGroup projectID removeGroupsFromProject project removeGroup projectID addTask task add:production projectID taskDrushUserLogin task drushUserLogin:destination:production environmentID taskDrushSqlSync task drushSqlSync:destination:production projectID taskDrushRsyncFiles task drushRsync:destination:production projectID addBackup backup add projectID getBackupsByEnvironmentId backup view projectID addEnvVariable (to Environment) env_var environment:add:development projectID deleteEnvVariable (from Environment) env_var environment:delete:development projectID getEnvVarsByEnvironmentId env_var environment:viewValue:development projectID addOrUpdateEnvironment environment addOrUpdate:development projectID updateEnvironment environment update:development projectID deleteEnvironment environment delete:development projectID addDeployment environment deploy:development projectID setEnvironmentServices environment update:development projectID deployEnvironmentLatest environment deploy:development projectID deployEnvironmentBranch environment deploy:development projectID deployEnvironmentPullrequest environment deploy:development projectID deployEnvironmentPromote environment deploy:development projectID getNotificationsByProjectId notification view projectID addTask task add:development projectID taskDrushArchiveDump task drushArchiveDump:development projectID taskDrushArchiveDump task drushArchiveDump:production projectID taskDrushSqlDump task drushSqlDump:development projectID taskDrushSqlDump task drushSqlDump:production projectID taskDrushUserLogin task drushUserLogin:destination:development environmentID taskDrushSqlSync task drushSqlSync:source:development projectID taskDrushSqlSync task drushSqlSync:source:production projectID taskDrushSqlSync task drushSqlSync:destination:development projectID taskDrushRsyncFiles task drushRsync:source:development projectID taskDrushRsyncFiles task drushRsync:source:production projectID taskDrushRsyncFiles task drushRsync:destination:development projectID deleteTask task delete projectID updateTask task update projectID uploadFilesForTask task update projectID deleteFilesForTask task delete projectID getBackupsByEnvironmentId deployment view projectID getEnvironmentsByProjectId environment view projectID getEnvironmentServicesByEnvironmentId environment view projectID getEnvVarsByEnvironmentId env_var environment:view:development projectID getEnvVarsByEnvironmentId env_var environment:view:production projectID getEnvVarsByProjectId env_var project:view projectID addGroup group add getOpenshiftByProjectId openshift view projectID addProject project add getProjectByEnvironmentId project view projectID getProjectByGitUrl project view projectID getProjectByName project view projectID addRestore restore add projectID updateRestore restore update projectID taskDrushCacheClear task drushCacheClear:development projectID taskDrushCacheClear task drushCacheClear:production projectID taskDrushCron task drushCron:development projectID taskDrushCron task drushCron:production projectID getFilesByTaskId task view projectID getTasksByEnvironmentId task view projectID getTaskByRemoteId task view projectID 
getTaskById task view projectID addUser user add Name Resource Scope Attributes addOrUpdateEnvironmentStorage environment storage addNotificationSlack notification add updateNotificationSlack notification update deleteNotificationSlack notification delete addKubernetes kubernetes add updateKubernetes kubernetes update deleteKubernetes kubernetes delete deleteAllKubernetes kubernetes deleteAll getAllOpenshifts openshift viewAll getAllProjects project viewAll addSshKey ssh_key add userID updateSshKey ssh_key update userID deleteSshKey ssh_key delete userID getUserSshKeys ssh_key view:user userID updateUser user update userID deleteUser user delete userID deleteEnvironment environment delete:production projectID deleteProject project delete projectID getProjectByEnvironmentId project viewPrivateKey projectID getProjectByGitUrl project viewPrivateKey projectID getProjectByName project viewPrivateKey projectID deleteBackup backup delete projectID addEnvVariable (to Project) env_var project:add projectID addEnvVariable (to Environment) env_var environment:add:production projectID deleteEnvVariable env_var delete projectID deleteEnvVariable (from Project) env_var project:delete projectID deleteEnvVariable (from Environment) env_var environment:delete:production projectID getEnvVarsByProjectId env_var project:viewValue projectID getEnvVarsByEnvironmentId env_var environment:viewValue:production projectID addOrUpdateEnvironment environment addOrUpdate:production projectID updateEnvironment environment update:production projectID allEnvironments environment viewAll getEnvironmentStorageMonthByEnvironmentId environment storage getEnvironmentHoursMonthByEnvironmentId environment storage getEnvironmentHitsMonthByEnvironmentId environment storage addOrUpdateEnvironmentStorage environment storage addDeployment environment deploy:production projectID deleteDeployment deployment delete projectID updateDeployment deployment update projectID setEnvironmentServices environment update:production projectID deployEnvironmentLatest environment deploy:production projectID deployEnvironmentBranch environment deploy:production projectID deployEnvironmentPullrequest environment deploy:production projectID deployEnvironmentPromote environment deploy:production projectID updateGroup group update groupID deleteGroup group delete groupID addUserToGroup group addUser groupID removeUserFromGroup group removeUser groupID addNotificationToProject project addNotification projectID removeNotificationFromProject project removeNotification projectID updateProject project update projectID addGroupsToProject project addGroup projectID removeGroupsFromProject project removeGroup projectID addTask task add:production projectID taskDrushUserLogin task drushUserLogin:destination:production environmentID taskDrushSqlSync task drushSqlSync:destination:production projectID taskDrushRsyncFiles task drushRsync:destination:production projectID addBackup backup add projectID getBackupsByEnvironmentId backup view projectID addEnvVariable (to Environment) env_var environment:add:development projectID deleteEnvVariable (from Environment) env_var environment:delete:development projectID getEnvVarsByEnvironmentId env_var environment:viewValue:development projectID addOrUpdateEnvironment environment addOrUpdate:development projectID updateEnvironment environment update:development projectID deleteEnvironment environment delete:development projectID addDeployment environment deploy:development projectID setEnvironmentServices environment 
update:development projectID deployEnvironmentLatest environment deploy:development projectID deployEnvironmentBranch environment deploy:development projectID deployEnvironmentPullrequest environment deploy:development projectID deployEnvironmentPromote environment deploy:development projectID getNotificationsByProjectId notification view projectID addTask task add:development projectID taskDrushArchiveDump task drushArchiveDump:development projectID taskDrushArchiveDump task drushArchiveDump:production projectID taskDrushSqlDump task drushSqlDump:development projectID taskDrushSqlDump task drushSqlDump:production projectID taskDrushUserLogin task drushUserLogin:destination:development environmentID taskDrushSqlSync task drushSqlSync:source:development projectID taskDrushSqlSync task drushSqlSync:source:production projectID taskDrushSqlSync task drushSqlSync:destination:development projectID taskDrushRsyncFiles task drushRsync:source:development projectID taskDrushRsyncFiles task drushRsync:source:production projectID taskDrushRsyncFiles task drushRsync:destination:development projectID deleteTask task delete projectID updateTask task update projectID uploadFilesForTask task update projectID deleteFilesForTask task delete projectID getBackupsByEnvironmentId deployment view projectID getEnvironmentsByProjectId environment view projectID getEnvironmentServicesByEnvironmentId environment view projectID getEnvVarsByEnvironmentId env_var environment:view:development projectID getEnvVarsByEnvironmentId env_var environment:view:production projectID getEnvVarsByProjectId env_var project:view projectID addGroup group add getOpenshiftByProjectId openshift view projectID addProject project add getProjectByEnvironmentId project view projectID getProjectByGitUrl project view projectID getProjectByName project view projectID addRestore restore add projectID updateRestore restore update projectID taskDrushCacheClear task drushCacheClear:development projectID taskDrushCacheClear task drushCacheClear:production projectID taskDrushCron task drushCron:development projectID taskDrushCron task drushCron:production projectID getFilesByTaskId task view projectID getTasksByEnvironmentId task view projectID getTaskByRemoteId task view projectID getTaskById task view projectID addUser user add Name Resource Scope Attributes deleteAllBackups backup deleteAll deleteAllEnvironments environment deleteAll getEnvironmentStorageMonthByEnvironmentId environment storage getEnvironmentHoursMonthByEnvironmentId environment storage getEnvironmentHitsMonthByEnvironmentId environment storage deleteAllGroups group deleteAll deleteAllNotificationSlacks notification deleteAll removeAllNotificationsFromAllProjects notification removeAll getAllOpenshifts openshift viewAll deleteAllProjects project deleteAll deleteAllSshKeys ssh_key deleteAll removeAllSshKeysFromAllUsers ssh_key removeAll deleteAllUsers user deleteAll addOrUpdateEnvironmentStorage environment storage addNotificationSlack notification add updateNotificationSlack notification update deleteNotificationSlack notification delete addKubernetes kubernetes add updateKubernetes kubernetes update deleteKubernetes kubernetes delete deleteAllKubernetes kubernetes deleteAll getAllProjects project viewAll addSshKey ssh_key add userID updateSshKey ssh_key update userID deleteSshKey ssh_key delete userID getUserSshKeys ssh_key view:user userID updateUser user update userID deleteUser user delete userID deleteEnvironment environment delete:production projectID deleteProject project 
delete projectID getProjectByEnvironmentId project viewPrivateKey projectID getProjectByGitUrl project viewPrivateKey projectID getProjectByName project viewPrivateKey projectID deleteBackup backup delete projectID addEnvVariable (to Project) env_var project:add projectID addEnvVariable (to Environment) env_var environment:add:production projectID deleteEnvVariable env_var delete projectID deleteEnvVariable (from Project) env_var project:delete projectID deleteEnvVariable (from Environment) env_var environment:delete:production projectID getEnvVarsByProjectId env_var project:viewValue projectID getEnvVarsByEnvironmentId env_var environment:viewValue:production projectID addOrUpdateEnvironment environment addOrUpdate:production projectID updateEnvironment environment update:production projectID addDeployment environment deploy:production projectID deleteDeployment deployment delete projectID updateDeployment deployment update projectID setEnvironmentServices environment update:production projectID deployEnvironmentLatest environment deploy:production projectID deployEnvironmentBranch environment deploy:production projectID deployEnvironmentPullrequest environment deploy:production projectID deployEnvironmentPromote environment deploy:production projectID updateGroup group update groupID deleteGroup group delete groupID addUserToGroup group addUser groupID removeUserFromGroup group removeUser groupID addNotificationToProject project addNotification projectID removeNotificationFromProject project removeNotification projectID updateProject project update projectID addGroupsToProject project addGroup projectID removeGroupsFromProject project removeGroup projectID addTask task add:production projectID taskDrushUserLogin task drushUserLogin:destination:production environmentID taskDrushSqlSync task drushSqlSync:destination:production projectID taskDrushRsyncFiles task drushRsync:destination:production projectID addBackup backup add projectID getBackupsByEnvironmentId backup view projectID addEnvVariable (to Environment) env_var environment:add:development projectID deleteEnvVariable (from Environment) env_var environment:delete:development projectID getEnvVarsByEnvironmentId env_var environment:viewValue:development projectID addOrUpdateEnvironment environment addOrUpdate:development projectID updateEnvironment environment update:development projectID deleteEnvironment environment delete:development projectID addDeployment environment deploy:development projectID setEnvironmentServices environment update:development projectID deployEnvironmentLatest environment deploy:development projectID deployEnvironmentBranch environment deploy:development projectID deployEnvironmentPullrequest environment deploy:development projectID deployEnvironmentPromote environment deploy:development projectID getNotificationsByProjectId notification view projectID addTask task add:development projectID taskDrushArchiveDump task drushArchiveDump:development projectID taskDrushArchiveDump task drushArchiveDump:production projectID taskDrushSqlDump task drushSqlDump:development projectID taskDrushSqlDump task drushSqlDump:production projectID taskDrushUserLogin task drushUserLogin:destination:development environmentID taskDrushSqlSync task drushSqlSync:source:development projectID taskDrushSqlSync task drushSqlSync:source:production projectID taskDrushSqlSync task drushSqlSync:destination:development projectID taskDrushRsyncFiles task drushRsync:source:development projectID taskDrushRsyncFiles task 
drushRsync:source:production projectID taskDrushRsyncFiles task drushRsync:destination:development projectID deleteTask task delete projectID updateTask task update projectID uploadFilesForTask task update projectID deleteFilesForTask task delete projectID getBackupsByEnvironmentId deployment view projectID getEnvironmentsByProjectId environment view projectID getEnvironmentServicesByEnvironmentId environment view projectID getEnvVarsByEnvironmentId env_var environment:view:development projectID getEnvVarsByEnvironmentId env_var environment:view:production projectID getEnvVarsByProjectId env_var project:view projectID addGroup group add getOpenshiftByProjectId openshift view projectID addProject project add getProjectByEnvironmentId project view projectID getProjectByGitUrl project view projectID getProjectByName project view projectID addRestore restore add projectID updateRestore restore update projectID taskDrushCacheClear task drushCacheClear:development projectID taskDrushCacheClear task drushCacheClear:production projectID taskDrushCron task drushCron:development projectID taskDrushCron task drushCron:production projectID getFilesByTaskId task view projectID getTasksByEnvironmentId task view projectID getTaskByRemoteId task view projectID getTaskById task view projectID addUser user add"},{"location":"administering-lagoon/using-harbor/","title":"Harbor","text":"

Harbor is used as the default container registry for Lagoon when deploying to Kubernetes infrastructure. Harbor provides a Docker registry and a container security scanning solution powered by Trivy.

Note

When running Lagoon locally, the configuration for Harbor is handled entirely automagically.

If you are running Lagoon locally, you can access the Harbor UI at localhost:8084. The username is admin and the password is admin.

Note

If you are hosting a site with a provider (such as amazee.io), they may not allow customer access to the Harbor UI.

Once logged in, the first screen is a list of all repositories your user has access to. Each \"repository\" in Harbor correlates to a project in Lagoon.

Within each Harbor repository, you'll see a list of container images from all environments within a single Lagoon project.

From here, you can drill down into an individual container in order to see its details, including an overview of its security scan results.

"},{"location":"administering-lagoon/using-harbor/security-scanning/","title":"Security Scanning","text":"

Harbor comes with a built-in security scanning solution provided by the Trivy service. This service analyzes a specified container image for any installed packages, and collects the version numbers of those installed packages. The Trivy service then searches the National Vulnerability Database for any CVEs (common vulnerabilities and exposures) affecting those package versions. Trivy is also library aware, so it will scan any Composer files or other package library definition files and report any vulnerabilities found within those package versions. These vulnerabilities are then reported within Harbor for each individual container.

An example of a security scan in Harbor, showing applicable vulnerabilities for a scanned container:

"},{"location":"administering-lagoon/using-harbor/harbor-settings/","title":"Running Harbor Locally","text":"

Lagoon supports running Harbor locally, and it is automatically used for hosting all Kubernetes-based builds (any time the project's activeSystemsDeploy value is set to lagoon_kubernetesBuildDeploy). When Harbor is run locally, it makes use of MinIO as a storage backend, an S3-compatible local storage solution.
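
If you need to check or change a project's activeSystemsDeploy value, an updateProject mutation of roughly this shape can be used (a sketch; activeSystemsDeploy is a project field shown elsewhere in these docs, but confirm that the patch accepts it in your API schema):

Set a project's deploy system

mutation {\n  updateProject(\n    input: {\n      # TODO: Fill in the project ID.\n      id: 0\n      patch: { activeSystemsDeploy: \"lagoon_kubernetesBuildDeploy\" }\n    }\n  ) {\n    id\n    activeSystemsDeploy\n  }\n}\n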

"},{"location":"administering-lagoon/using-harbor/harbor-settings/#settings","title":"Settings","text":"

Harbor is composed of multiple containers, each of which requires different settings in order to run successfully.

"},{"location":"administering-lagoon/using-harbor/harbor-settings/#environment-variables","title":"Environment Variables","text":"

The following environment variables must be set in order for Harbor to start properly:

  • HARBOR_REGISTRY_STORAGE_AMAZON_BUCKET
    • This needs to be set to the name of the AWS bucket which Harbor will save images to.
    • Defaults to harbor-images when Lagoon is run locally or during CI testing.
  • HARBOR_REGISTRY_STORAGE_AMAZON_REGION
    • This needs to be set to the AWS region in which Harbor's bucket is located.
    • Defaults to us-east-1 when Lagoon is run locally or during CI testing.
  • REGISTRY_STORAGE_S3_ACCESSKEY
    • This needs to be set to the AWS access key Harbor should use to read and write to the AWS bucket.
    • Defaults to an empty string when Lagoon is run locally or during CI testing, as MinIO does not require authentication.
  • REGISTRY_STORAGE_S3_SECRETKEY
    • This needs to be set to the AWS secret key Harbor should use to read and write to the AWS bucket.
    • Defaults to an empty string when Lagoon is run locally or during CI testing, as MinIO does not require authentication.

The following environment variables can be set if required:

  • HARBOR_REGISTRY_STORAGE_AMAZON_ENDPOINT
    • If this variable is set, the Harbor registry will use its value as the address of the S3 endpoint.
    • Defaults to https://s3.amazonaws.com when this variable is not set.
"},{"location":"administering-lagoon/using-harbor/harbor-settings/#container-specific-settings","title":"Container Specific Settings","text":"

The following containers make use of configuration files:

  • HarborRegistry
  • HarborRegistryCtl
  • Harbor-Core
  • Harbor-Database
  • Harbor-Jobservice
  • Harbor-Trivy

The following containers do not require configuration files to run:

  • Harbor-Nginx
  • Harbor-Portal
  • Harbor-Redis
"},{"location":"administering-lagoon/using-harbor/harbor-settings/harbor-core/","title":"Harbor-Core","text":"

Harbor-Core requires a configuration file to start, which is located at /etc/core/app.conf within the container. Any changes made to this config file are temporary and will not persist once the pod is restarted.

The configmap from which this config file is generated is stored within Lagoon in the services/harbor-core/harbor-core.yml file. Any changes made to this configmap will be persisted across container restarts.

"},{"location":"administering-lagoon/using-harbor/harbor-settings/harbor-core/#config-file-contents","title":"Config File Contents","text":"
  • _REDIS_URL
    • Tells harbor-core and the Chartmuseum service connection info for the Redis server.
    • The default value is harbor-redis:6379,100,.
  • _REDIS_URL_REG
    • The url which harborregistry should use to connect to the Redis server.
    • The default value is redis://harbor-redis:6379/2.
  • ADMIRAL_URL
    • Tells harbor-core where to find the admiral service.
    • This service is not used with Lagoon's implementation of Harbor.
    • The default value is NA.
  • CFG_EXPIRATION
    • This value is not used.
    • The default value is 5.
  • CHART_CACHE_DRIVER
    • Tells harbor-core where to store any uploaded charts.
    • The default value is redis.
  • CLAIR_ADAPTER_URL
    • The URL that harbor-core should use to connect to the harbor-trivy service.
    • The default value is http://harbor-trivy:8080.
  • CLAIR_DB
    • The database type harborclair should use.
    • This value is not used, and is included only for legacy support.
    • The default value is postgres.
  • CLAIR_DB_HOST
    • Tells harbor-core where to find the harborclair service.
    • This value is not used, and is included only for legacy support.
    • The default value is harbor-database.
  • CLAIR_DB_PASSWORD
    • The password used to access harborclair's postgres database.
    • This value is not used, and is included only for legacy support.
    • The default value is test123 when run locally or during CI testing.
    • This value is retrieved from a secret created when Harbor is first set up on a running Lagoon.
  • CLAIR_DB_PORT
    • The port harborclair should use to connect to the postgresql server.
    • This value is not used, and is included only for legacy support.
    • The default value is 5432.
  • CLAIR_DB_SSLMODE
    • Whether or not harborclair should use SSL to connect to the postgresql server.
    • This value is not used, and is included only for legacy support.
    • The default value is disable.
  • CLAIR_DB_USERNAME
    • The user harborclair should use to connect to the postgresql server.
    • This value is not used, and is included only for legacy support.
    • The default value is postgres.
  • CLAIR_HEALTH_CHECK_SERVER_URL
    • This value tells harbor-core where to issue health checks for the harbor-trivy service.
    • The default value is http://harbor-trivy:8080.
  • CLAIR_URL
    • The URL that harbor-core should use to connect to the harbor-trivy service.
    • The default value is http://harbor-trivy:6060.
  • CONFIG_PATH
    • Where harbor-core should look for its config file.
    • The default value is /etc/core/app.conf.
  • CORE_SECRET
    • This value is a pre-shared key that must match between the various services connecting to harbor-core.
    • The default value is set to secret123 when Harbor is run locally or during CI testing.
    • This value is retrieved from a secret created when Harbor is first set up on a running Lagoon.
  • CORE_URL
    • The URL that harbor-core should publish to other Harbor services in order for them to connect to the harbor-core service.
    • The default value is http://harbor-core:8080.
  • DATABASE_TYPE
    • The database type Harbor should use.
    • The default value is postgresql.
  • HARBOR_ADMIN_PASSWORD
    • The password which should be used to access harbor using the admin user.
    • The default value is admin when run locally or during CI testing.
    • This value is retrieved from a secret created when Harbor is first set up on a running Lagoon.
  • HARBOR_NGINX_ENDPOINT
    • This environment variable tells harborregistry where its NGINX ingress controller, harbor-nginx, is running in order to construct proper push and pull instructions in the UI, among other things.
    • The default value is set to http://harbor-nginx:8080 when run locally or during CI testing.
    • Lagoon attempts to obtain and set this variable automagically when run in production. If that process fails, this service will fail to run.
  • HTTP_PROXY
    • The default value is an empty string.
  • HTTPS_PROXY
    • The default value is an empty string.
  • JOBSERVICE_SECRET
    • This value is a pre-shared key that must match between the various services connecting to harbor-jobservice.
    • The default value is set to secret123 when Harbor is run locally or during CI testing.
    • This value is retrieved from a secret created when Harbor is first set up on a running Lagoon.
  • JOBSERVICE_URL
    • The URL that harbor-core should use to connect to the harbor-jobservice service.
    • The default value is http://harbor-jobservice:8080.
  • LOG_LEVEL
    • The default log level of the harbor-core service.
    • The default value is error.
  • NO_PROXY
    • A list of hosts which should never have their requests proxied.
    • The default is harbor-core,harbor-jobservice,harbor-database,harbor-trivy,harborregistry,harbor-portal,127.0.0.1,localhost,.local,.internal.
  • PORTAL_URL
    • This value tells the service where to connect to the harbor-portal service.
    • The default value is http://harbor-portal:8080.
  • POSTGRESQL_DATABASE
    • The postgres database harbor-core should use when connecting to the postgresql server.
    • The default value is registry.
  • POSTGRESQL_HOST
    • Where harbor-core should connect to the postgresql server.
    • The default value is harbor-database.
  • POSTGRESQL_MAX_IDLE_CONNS
    • The maximum number of idle connections harbor-core should leave open to the postgresql server.
    • The default value is 50.
  • POSTGRESQL_MAX_OPEN_CONNS
    • The maximum number of open connections harbor-core should have to the postgresql server.
    • The default value is 100.
  • POSTGRESQL_PASSWORD
    • The password Harbor should use to connect to the postgresql server.
    • The default value is a randomly generated value.
  • POSTGRESQL_PORT
    • The port harbor-core should use to connect to the postgresql server.
    • The default value is 5432.
  • POSTGRESQL_USERNAME
    • The username harbor-core should use to connect to the postgresql server.
    • The default value is postgres.
  • POSTGRESQL_SSLMODE
    • Whether or not harbor-core should use SSL to connect to the postgresql server.
    • The default value is disable.
  • REGISTRY_HTTP_SECRET
    • This value is a pre-shared key that must match between the various services connecting to harborregistry.
    • The default value is set to secret123 when Harbor is run locally or during CI testing.
    • This value is retrieved from a secret created when Harbor is first set up on a running Lagoon.
  • REGISTRY_STORAGE_PROVIDER_NAME
    • The storage backend that harborregistry should use.
    • The default value is s3.
  • REGISTRY_URL
    • The URL that harbor-core should use to connect to the harborregistry service.
    • The default value is http://harborregistry:5000.
  • REGISTRYCTL_URL
    • This value tells the service where to connect to the harborregistryctl service.
    • The default value is set to http://harborregistryctl:8080.
  • ROBOT_TOKEN_DURATION
    • This value sets the number of days each issued robot token is valid for.
    • The default value is set to 999.
  • SYNC_REGISTRY
    • This value is not used.
    • The default value is false.
  • TOKEN_SERVICE_URL
    • The URL that the harbor-core service publishes to other services in order to retrieve a JWT token.
    • The default value is http://harbor-core:8080/service/token.
  • TRIVY_ADAPTER_URL
    • The URL that the harbor-core service should use to connect to the harbor-trivy service.
    • The default value is http://harbor-trivy:8080.
  • WITH_CHARTMUSEUM
    • Tells harbor-core if the Chartmuseum service is being used.
    • This service is not used with Lagoon's implementation of Harbor.
    • The default value is false.
  • WITH_CLAIR
    • Tells harbor-core if the harborclair service is being used.
    • Lagoon does use this service in its implementation of Harbor.
    • The default value is true.
  • WITH_NOTARY
    • Tells harbor-core if the Notary service is being used.
    • This service is not used with Lagoon's implementation of Harbor.
    • The default value is false.
  • WITH_TRIVY
    • Tells harbor-core if the Trivy service is being used.
    • The default value is true.
"},{"location":"administering-lagoon/using-harbor/harbor-settings/harbor-database/","title":"Harbor-Database","text":"

Harbor-Database requires specific environment variables to be set in order to start, which are stored within secrets as described in the services/harbor-database/harbor-core.yml file.

"},{"location":"administering-lagoon/using-harbor/harbor-settings/harbor-database/#config-file-contents","title":"Config File Contents","text":"
  • POSTGRES_DB
    • The default database to be set up when initializing the Postgres service.
    • The default value is postgres.
  • POSTGRES_PASSWORD
    • The root password for the Postgres database.
    • The default value is test123.
    • This value is retrieved from a secret created when Harbor is first set up on a running Lagoon.
  • POSTGRES_USER
    • The default user to be set up when initializing the Postgres service.
    • The default value is postgres.
"},{"location":"administering-lagoon/using-harbor/harbor-settings/harbor-jobservice/","title":"Harbor-Jobservice","text":"

Harbor-Jobservice requires a configuration file to start, which is located at /etc/jobservice/config.yml within the container. Any changes made to this config file are temporary and will not persist once the pod is restarted.

The configmap from which this config file is generated is stored within Lagoon in the services/harbor-jobservice/harbor-jobservice.yml file. Any changes made to this configmap will be persisted across container restarts.

"},{"location":"administering-lagoon/using-harbor/harbor-settings/harbor-jobservice/#config-file-contents","title":"Config File Contents","text":"
  • CORE_URL
    • This value tells harbor-jobservice where harbor-core can be reached.
    • The default value is http://harbor-core:8080.
  • CORE_SECRET
    • This value is a pre-shared key that must match between the various services connecting to harbor-core.
    • The default value is set to secret123 when Harbor is run locally or during CI testing.
    • This value is retrieved from a secret created when Harbor is first set up on a running Lagoon.
  • HTTP_PROXY
    • The default value is an empty string.
  • HTTPS_PROXY
    • The default value is an empty string.
  • JOBSERVICE_SECRET
    • This value is a pre-shared key that must match between the various services connecting to harbor-jobservice.
    • The default value is set to secret123 when Harbor is run locally or during CI testing.
    • This value is retrieved from a secret created when Harbor is first set up on a running Lagoon.
  • LOG_LEVEL
    • The logging level this service should use.
    • The default value is error.
      • This can also be set to debug to enable very verbose logging.
  • NO_PROXY
    • A list of hosts which should never have their requests proxied.
    • The default is harbor-core,harbor-jobservice,harbor-database,harbor-trivy,harborregistry,harbor-portal,127.0.0.1,localhost,.local,.internal.
  • REGISTRY_CONTROLLER_URL
    • This value tells the service where to connect to the harborregistryctl service.
    • The default value is set to http://harborregistryctl:8080.
  • SCANNER_LOG_LEVEL
    • The logging level the scanning service should use.
    • The default value is error.
      • This can also be set to debug to enable very verbose logging.
  • SCANNER_STORE_REDIS_URL
    • This value tells harbor-trivy how to connect to its Redis store.
    • The default value is redis://harbor-redis:6379/4.
"},{"location":"administering-lagoon/using-harbor/harbor-settings/harbor-trivy/","title":"Harbor-Trivy","text":"

Harbor-Trivy is configured via specific environment variables and does not use a config file.

"},{"location":"administering-lagoon/using-harbor/harbor-settings/harbor-trivy/#environment-variables","title":"Environment Variables","text":"
  • SCANNER_LOG_LEVEL
    • The logging level this service should use.
    • The default value is error.
      • This can be set to debug to enable very verbose logging.
  • SCANNER_STORE_REDIS_URL
    • This value tells harbor-trivy how to connect to its Redis store.
    • The default value is redis://harbor-redis:6379/4.
  • SCANNER_JOB_QUEUE_REDIS_URL
    • This value tells harbor-trivy how to connect to its Redis store.
    • The default value is redis://harbor-redis:6379/4.
  • SCANNER_TRIVY_VULN_TYPE
    • This value tells harbor-trivy what types of vulnerabilities it should be searching for.
    • The default value is os,library.
"},{"location":"administering-lagoon/using-harbor/harbor-settings/harborregistry/","title":"HarborRegistry","text":"

HarborRegistry requires a configuration file to start, which is located at /etc/registry/config.yml within the container. Any changes made to this config file are temporary and will not persist once the pod is restarted.

This config file is stored within the services/harborregistry/harborregistry.yml file and loaded into the container as /etc/registry/pre-config.yml.

A custom container entrypoint, services/harborregistry/entrypoint.sh, then transposes provided environment variables into this config file and saves the results as /etc/registry/config.yml.

"},{"location":"administering-lagoon/using-harbor/harbor-settings/harborregistry/#config-file-contents","title":"Config File Contents","text":"
  • CORE_SECRET
    • This value is a pre-shared key that must match between the various services connecting to harbor-core.
    • The default value is set to secret123 when Harbor is run locally or during CI testing.
    • This value is retrieved from a secret created when Harbor is first set up on a running Lagoon.
  • HARBOR_NGINX_ENDPOINT
    • This environment variable tells harborregistry where its NGINX ingress controller, harbor-nginx, is running in order to construct proper push and pull instructions in the UI, among other things.
    • The default value is set to http://harbor-nginx:8080 when run locally or during CI testing.
    • Lagoon attempts to obtain and set this variable automagically when run in production. If that process fails, this service will fail to run.
  • JOBSERVICE_SECRET
    • This value is a pre-shared key that must match between the various services connecting to harbor-jobservice.
    • The default value is set to secret123 when Harbor is run locally or during CI testing.
    • This value is retrieved from a secret created when Harbor is first set up on a running Lagoon.
  • REGISTRY_HTTP_SECRET
    • This value is a pre-shared key that must match between the various services connecting to harborregistry.
    • The default value is set to secret123 when Harbor is run locally or during CI testing.
    • This value is retrieved from a secret created when Harbor is first set up on a running Lagoon.
  • REGISTRY_REDIS_PASSWORD
    • This environment variable tells harborregistryctl the password that should be used to connect to Redis.
    • The default value is an empty string.
"},{"location":"administering-lagoon/using-harbor/harbor-settings/harborregistryctl/","title":"HarborRegistryCtl","text":"

HarborRegistryCtl requires a configuration file to start, which is located at /etc/registryctl/config.yml within the container. Any changes made to this config file are temporary and will not persist once the pod is restarted.

The configmap from which this config file is generated is stored within Lagoon in the services/harborregistryctl/harborregistry.yml file. Any changes made to this configmap will be persisted across container restarts.

"},{"location":"administering-lagoon/using-harbor/harbor-settings/harborregistryctl/#config-file-contents","title":"Config File Contents","text":"
  • CORE_SECRET
    • This value is a pre-shared key that must match between the various services connecting to harbor-core.
    • The default value is set to secret123 when Harbor is run locally or during CI testing.
    • This value is retrieved from a secret created when Harbor is first set up on a running Lagoon.
  • JOBSERVICE_SECRET
    • This value is a pre-shared key that must match between the various services connecting to harbor-jobservice.
    • The default value is set to secret123 when Harbor is run locally or during CI testing.
    • This value is retrieved from a secret created when Harbor is first set up on a running Lagoon.
  • REGISTRY_HTTP_SECRET
    • This value is a pre-shared key that must match between the various services connecting to harborregistry.
    • The default value is set to secret123 when Harbor is run locally or during CI testing.
    • This value is retrieved from a secret created when Harbor is first set up on a running Lagoon.
  • REGISTRY_REDIS_PASSWORD
    • This environment variable tells harborregistryctl the password that should be used to connect to Redis.
    • The default value is an empty string.
"},{"location":"applications/","title":"A wide range of Applications, Frameworks and Languages are supported by Lagoon","text":"

Lagoon broadly classifies three levels in the application stack:

"},{"location":"applications/#languages","title":"Languages","text":"

The core building blocks of any Lagoon project, these are usually provided by Lagoon-specific images.

"},{"location":"applications/#frameworks","title":"Frameworks","text":"

These take those base images and add the necessary logic, tools and packages needed to serve a website or drive an application.

"},{"location":"applications/#applications","title":"Applications","text":"

Usually built on top of Frameworks, this is the layer that content editors or developers will interact with to shape the finished product.

When we reference any repositories for use on Lagoon, we usually refer to them in three ways:

"},{"location":"applications/#templates","title":"Templates","text":"

These are fully-functional, cloneable starter repositories, maintained and updated regularly, ready to be extended and used with little customization.

"},{"location":"applications/#examples","title":"Examples","text":"

These are fully functional repositories, maintained and updated regularly, but they may require some effort to adapt to your individual project.

"},{"location":"applications/#demos","title":"Demos","text":"

These are repositories that have been built as a demonstration, and are usable for some of the concepts within, but aren't routinely maintained or updated.

For a more complete list, check out our GitHub repository: https://www.github.com/lagoon-examples and our website https://lagoon.sh/application/

"},{"location":"applications/node/","title":"Node.js","text":""},{"location":"applications/node/#introduction","title":"Introduction","text":"

Lagoon provides Node.js images that are based on the official Node Alpine images.

More information on how to adapt your project to run on Lagoon can be found in our Node.js Docker Images section.

"},{"location":"applications/options/","title":"Configuring Applications for use on Lagoon","text":""},{"location":"applications/options/#lagoonyml","title":"lagoon.yml","text":"

Project- and environment-level configuration for Lagoon is provided in the .lagoon.yml file in your repository.

See lagoon-yml.md.
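For orientation, here is a minimal sketch of what such a file can look like (the branch name, cronjob name and script path are illustrative, not taken from a real project):

.lagoon.yml (sketch)
docker-compose-yaml: docker-compose.yml\n\nenvironments:\n  main:\n    cronjobs:\n      - name: cleanup\n        schedule: \"M * * * *\"\n        command: ./scripts/cleanup.sh\n        service: cli\n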

"},{"location":"applications/options/#docker-composeyml","title":"docker-compose.yml","text":"

Service-level configuration for Lagoon is provided in the docker-compose.yml file in your repository. In particular, the lagoon.type and associated service labels are documented in the individual services.

See docker-compose-yml.md.
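As a minimal sketch (the service name and Dockerfile path are illustrative), a service only needs a lagoon.type label for Lagoon to pick it up:

docker-compose.yml (sketch)
services:\n  nginx:\n    build:\n      context: .\n      dockerfile: nginx.dockerfile\n    labels:\n      lagoon.type: nginx\n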

"},{"location":"applications/options/#storage","title":"Storage","text":"

Lagoon can provision storage for most services - the built-in Lagoon service types have a -persistent variant that adds the necessary PVCs, volumes, etc. We have updated our examples to reflect this configuration locally.
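For example, a sketch of a service using the basic-persistent variant (the path and size are illustrative):

docker-compose.yml (sketch)
  app:\n    labels:\n      lagoon.type: basic-persistent\n      lagoon.persistent: /app/files\n      lagoon.persistent.size: 500Mi\n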

"},{"location":"applications/options/#databases","title":"Databases","text":"

Lagoon has configurations available for:

  • MariaDB - all supported versions
  • PostgreSQL - all supported versions
"},{"location":"applications/options/#database-as-a-service","title":"Database-as-a-service","text":"

Lagoon can also use the dbaas-operator to automatically provision these databases on an underlying managed database service (e.g. RDS, Google Cloud Databases, Azure Database). This happens automatically when these services are provisioned and configured for your cluster. If they are not available, a pod will be provisioned as a fallback.

"},{"location":"applications/options/#cache","title":"Cache","text":"

Lagoon supports Redis as a cache backend. In production, some users provision a managed Redis service for their production environments to help them scale.
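A minimal sketch of adding Redis to a project (the image tag shown is illustrative - pick the version you need):

docker-compose.yml (sketch)
  redis:\n    image: uselagoon/redis-7:latest\n    labels:\n      lagoon.type: redis\n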

"},{"location":"applications/options/#search","title":"Search","text":"

Lagoon supports Elasticsearch, Solr and OpenSearch as search providers. External search providers can also be configured if required.

"},{"location":"applications/options/#ingressroutes","title":"Ingress/Routes","text":"

Lagoon auto-generates routes for services that have ingress requirements. Custom routes can be provided in the .lagoon.yml on a per-service basis.
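As a sketch, custom routes attach to a service per environment in .lagoon.yml (the domain names are placeholders):

.lagoon.yml (sketch)
environments:\n  main:\n    routes:\n      - nginx:\n        - example.com\n        - \"www.example.com\"\n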

"},{"location":"applications/options/#environment-variables","title":"Environment Variables","text":"

Lagoon makes heavy use of environment variables, at build and runtime. Where these are used to provide critical configuration for your application (e.g. database config/credentials), it is important that the local and Lagoon versions use the same variable names.

See environment-variables.md.
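As an illustration, a local docker-compose service can mirror the variable names that Lagoon injects in the cluster (a sketch - the names below assume Lagoon's MariaDB service variables; the values are local-only defaults):

docker-compose.yml (sketch)
  cli:\n    environment:\n      MARIADB_HOST: mariadb\n      MARIADB_USERNAME: lagoon\n      MARIADB_PASSWORD: lagoon\n      MARIADB_DATABASE: lagoon\n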

"},{"location":"applications/other/","title":"Running other applications on Lagoon","text":"

Even if Lagoon doesn't have a base image for your particular application, framework or language, Lagoon can still build it!

By extending or inheriting from the commons image, Lagoon can run almost any workload.

"},{"location":"applications/other/#hugo","title":"Hugo","text":"

This brief example shows how to build a Hugo website and serve it as static files in an NGINX image. The commons image is used to add Hugo, copy the site in, and build it. The NGINX image is then used to serve the site, with the addition of a customized NGINX config.

lagoon/nginx.Dockerfile
FROM uselagoon/commons as builder\n\nRUN apk add hugo git\nWORKDIR /app\nCOPY . /app\nRUN hugo\n\nFROM uselagoon/nginx\n\nCOPY --from=builder /app/public/ /app\nCOPY lagoon/static-files.conf /etc/nginx/conf.d/app.conf\n\nRUN fix-permissions /usr/local/openresty/nginx\n
docker-compose.yml
services:\nnginx:\nbuild:\ncontext: .\ndockerfile: lagoon/nginx.Dockerfile\nlabels:\nlagoon.type: nginx\n
"},{"location":"applications/php/","title":"PHP","text":""},{"location":"applications/php/#introduction","title":"Introduction","text":"

Lagoon supports a wide range of PHP-based applications, such as Drupal, Laravel, WordPress, Magento and Symfony.

More information on how to adapt your PHP project to run on Lagoon can be found in our PHP-cli Docker Images and PHP-FPM Docker Images sections.

"},{"location":"applications/python/","title":"Python","text":""},{"location":"applications/python/#introduction","title":"Introduction","text":"

Lagoon provides images for Python 3.7 and above that can be used to build web apps in a wide range of Python-based frameworks and applications.

More information on how to adapt your Python project to run on Lagoon can be found in our Python Docker Images section.

"},{"location":"applications/ruby/","title":"Ruby and Ruby on Rails","text":""},{"location":"applications/ruby/#introduction","title":"Introduction","text":"

We provide images for Ruby 3.0 and above, built on the official Ruby Alpine Docker images.

Below we assume that you're attempting to get a Rails app deployed on Lagoon, although most of the details described are really framework-neutral.

"},{"location":"applications/ruby/#getting-rails-running-on-lagoon","title":"Getting Rails running on Lagoon","text":""},{"location":"applications/ruby/#responding-to-requests","title":"Responding to requests","text":"

The Ruby on Rails example in the Lagoon examples repository is instructive here.

In the docker-compose.yml we set up a service named ruby, which is the primary service that will be processing any dynamic requests.

If you look at the dockerfile specified for the ruby service, you'll see that we're exposing port 3000. The nginx service will direct any requests for non-static assets to the ruby service on this port (see the nginx configuration file for more details).

"},{"location":"applications/ruby/#logging","title":"Logging","text":"

The Lagoon logging infrastructure is described in the docs here. Essentially, in order to make use of the infrastructure, logs need to be sent via a UDP message to udp://application-logs.lagoon.svc:5140.

In our Rails example, we're importing the logstash-logger gem, and then in our config/application.rb we're initializing it with the following:

config/application.rb
    if ENV.has_key?('LAGOON_PROJECT') && ENV.has_key?('LAGOON_ENVIRONMENT') then\nlagoon_namespace = ENV['LAGOON_PROJECT'] + \"-\" + ENV['LAGOON_ENVIRONMENT']\nLogStashLogger.configure do |config|\nconfig.customize_event do |event|\nevent[\"type\"] = lagoon_namespace\nend\nend\nconfig.logstash.host = 'application-logs.lagoon.svc'\nconfig.logstash.type = :udp\nconfig.logstash.port = 5140\nend\n
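To check that messages actually reach the logging service from inside a running pod, a quick hand-rolled test is possible (a sketch - it assumes nc/netcat is available in the image):

Test UDP logging (sketch)
echo '{\"message\":\"test log entry\"}' | nc -u -w 1 application-logs.lagoon.svc 5140\n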
"},{"location":"applications/ruby/#database-configuration","title":"Database configuration","text":"

The example uses our PostgreSQL image (see the docker-compose.yml file). Configuring database access in Rails for Lagoon is very straightforward. Since Lagoon injects the database host, name, and credentials as environment variables, we can change our config/database.yml to be aware of these env vars, and consume them if they exist.

config/database.yml
default: &default\nadapter: postgresql\nencoding: unicode\npool: <%= ENV.fetch(\"RAILS_MAX_THREADS\") { 5 } %>\nusername: <%= ENV.fetch(\"POSTGRES_USERNAME\") { \"drupal\" } %>\npassword: <%= ENV.fetch(\"POSTGRES_PASSWORD\") { \"drupal\" } %>\nhost: <%= ENV.fetch(\"POSTGRES_HOST\") { \"postgres\" } %>\ndatabase: <%= ENV.fetch(\"POSTGRES_DATABASE\") { \"drupal\" } %>\n
"},{"location":"applications/wordpress/","title":"WordPress on Lagoon","text":"

The WordPress template is configured to use Composer to install WordPress, its dependencies, and themes.

The WordPress template is based on the https://github.com/roots/bedrock boilerplate, but extended to match a standardized Lagoon deployment pattern.

"},{"location":"applications/wordpress/#composer-install","title":"Composer Install","text":"

The template uses Composer to install WordPress and its themes.

"},{"location":"applications/wordpress/#database","title":"Database","text":"

Lagoon supports both MariaDB and PostgreSQL databases, but as WordPress support for PostgreSQL is limited, it isn't recommended for use.

"},{"location":"applications/wordpress/#nginx-configuration","title":"NGINX configuration","text":"

Lagoon doesn't have a built-in configuration for WordPress - instead, the template comes with a starting nginx.conf - please contribute any improvements you may find!

"},{"location":"applications/wordpress/#wp-cli","title":"WP-CLI","text":"

The Lagoon template installs wp-cli into the cli image to manage your WordPress install.
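As a sketch of day-to-day usage (the project and environment names are placeholders), you can open a shell into the cli pod with the Lagoon CLI and run WP-CLI from there:

Run WP-CLI in the cli pod (sketch)
lagoon ssh -p myproject -e main -s cli\nwp plugin list\nwp cache flush\n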

"},{"location":"community/discord/","title":"Lagoon Community on Discord","text":"

Our official community meeting space is the Lagoon Discord.

We\u2019re starting this community as a place for all Lagoon users to collaborate, solve problems, share ideas, and contribute back to the Lagoon project. We\u2019re working to consolidate our community as it\u2019s currently spread out over Slack and various other places. We also wanted to invite all of our users and customers to join so that everyone can benefit from the community, no matter how they\u2019re using Lagoon.

Please remember that this is not to replace your current support channels - those will remain the same. This is a place to connect with other users as well as the Lagoon maintainers.

We ask that all community members review our Participation and Moderation Guidelines, as well as the Code of Conduct.

In addition to our Zoom Community Hours, we'll also be hosting Community Hours on Discord in 2023!

"},{"location":"community/moderation/","title":"Lagoon Moderation Guidelines","text":"

These guidelines have been adapted from Drupal Diversity & Inclusion\u2019s Moderation Guidelines.

In Lagoon spaces, strive to promote understanding and empathy, and to increase personal awareness of all people. This includes people from across the Drupal Community and the greater Technical Community, even those you may personally disagree with.

A user who has been kicked from the Discord can, if desired, send a private message (PM) to the kicker or another Moderator to ask for re-admittance. If a disruptive person appears to be engaging in intentionally inflammatory, bullying, or harassing behavior that provokes hostile responses (or is acting in a hostile manner), kicking is faster and easier than trying to placate someone whose behavior is causing distress to other channel members.

The kick is not a ban. There are times when disruptive or triggering comments and statements are genuine and break the lines of communication between two parties. By speaking with a Moderator, the (potentially) disruptive person can be coached on using more sensitive, inclusive, and diverse-aware language, and on engaging in a more constructive manner.

"},{"location":"community/moderation/#tiered-responses","title":"Tiered Responses","text":"
  1. Tier One Response

    User is welcomed in the channel, asked to read some of the scrollback, and given a link to the participation guidelines.

  2. Tier Two Response

    User is gently reminded in channel to keep posts on topic, and/or of participation guidelines.

  3. Tier Three Response

    User is PM\u2019d by available Moderator to explain the problem(s) with their posts and given suggestions of what to do differently.

  4. Tier Four Response

    If behavior continues, User is kicked for no less than 24 hours from the Discord.

"},{"location":"community/moderation/#non-tiered-response-banning","title":"Non-Tiered Response Banning","text":"

Intentionally disruptive individuals get kicked, not tiered. Repeated offenses will result in a ban.

"},{"location":"community/participation/","title":"Lagoon Participation Guidelines","text":"

We ask that all members of our community, in any spaces, virtual or physical, adhere to our Code of Conduct.

These guidelines have been adapted from Drupal Diversity & Inclusion\u2019s Participation Guidelines.

  1. Listen actively, read carefully, and be understanding.
    • If joining a conversation, read the backlog. Give other Participants the opportunity to communicate effectively.
    • Assume good intent behind other Participants\u2019 statements. The open-source software community is very diverse, with Participants from all over the globe. Be aware of cultural and linguistic quirks.
    • There are also many Participants who are new to this space. Assume that they have good intent but have not yet mastered the language or ideas. We want to help them!
  2. Speak from your own experience, instead of generalizing. Recognize the worth of others\u2019 experience. Try not to speak for others.
    • Use \u201cI\u201d instead of \u201cthey,\u201d \u201cwe,\u201d and \u201cyou\u201d.
    • All Participants should recognize that other Participants have their own unique experiences.
    • Don\u2019t invalidate another Participant\u2019s story with your own spin on their experience. Instead, share your own story and experience.
  3. Challenge ideas, feelings, concerns, or one another by asking questions. Refrain from personal attacks. Focus on ideas first.
    • Avoid verbal challenges, backhanded insults, gender/race/region stereotyping, etc.
  4. Take part to the fullest of your ability and availability.
    • Community growth depends on the inclusion of individual voices. The channel wants you to speak up and speak out. Everyone has a different amount of time to contribute. We value participation here, whether you can give 5 minutes or 5 hours.
    • We do welcome those who quietly come to \u201clurk and learn,\u201d but please introduce yourself and say hello!
  5. Accept that it is not always the goal to agree.
    • There are often many different \u201cright\u201d answers to technical issues, some of which may not work for your setup.
  6. Be conscious of language differences and unintended connotations.
    • \u201cText is hard\u201d - be aware that it is difficult to communicate effectively via text.
  7. Acknowledge individuals\u2019 identities.
    • Use stated names and pronouns. Do not challenge a person\u2019s race, sexuality, disability, etc.
    • If you are unsure how to address someone, ask them discreetly and respectfully. For example, if you are unsure what pronouns to use, send a private message and ask. Using the correct pronouns will help others.
  8. Some off-topic conversation is okay.
    • Some cross posting of announcements is okay. The following is not permitted:
      • Thread hijacking
      • Spamming
      • Commercial advertising
      • Overt self-promotion
      • Excessive off-topic conversation, especially during official meeting times or focused conversations
    • Consider announcing more appropriate places or times for in-depth off-topic conversations.
    • If you are not sure what\u2019s appropriate, please contact an admin.
  9. Sharing content from inside Lagoon spaces must only be done with explicit consent. Any sharing must also be carefully considered, and without harassment or intent to harm any Participants.
    • This forum should be considered public. Assume that anyone can and may read anything posted here.
    • When sharing any Lagoon content, permission from all Participants must be obtained first. This applies whether content is quoted, summarized, or screenshotted. This includes sharing in any public medium: on Twitter, in a blog post, in an article, on a podcast, etc. These spaces are where the discussion and work in progress is taking place. Removing snippets of a conversation takes away context. This can distort and discourage discussion, especially when this is done without the goal of driving the Lagoon project forward.
    • As stated above, if you take screenshots and post them to social media or other forums, you must get permission from the person that posted it. When getting permission, include the option of removing identifying information. Permission is still needed even if identifying information is removed. This includes any content from Discord, Github, or any other Lagoon medium.
    • If you want to share something, just ask! \u201cHey, is it ok to share this on Twitter? I\u2019m happy to credit you!\u201d
    • If it is necessary for a participant to take a screenshot to report harassing behavior to Lagoon moderators, this may be done without obtaining permission. It is not, however, acceptable to take screenshots to publicly or privately shame an individual. Again, this applies only to reporting harassing behavior.
  10. Address complaints between one another in the space when safe and appropriate.
    • When safe, try to clarify and engage in the space where the conflict happened. For example, in the Discord channel.
    • Ping admins or Community Manager (Alanna) when conflict is escalating.
    • Ask for help.
    • If the topic of conflict is off-topic for Lagoon, move the conversation to a more appropriate channel.

Additional considerations for in-person Lagoon spaces

  1. Follow the event\u2019s Code of Conduct, if there is one. If not, our Code of Conduct applies.
  2. Do not touch people, their mobility devices, or other assistive equipment without their consent. If someone asks you to stop a certain behavior, stop immediately.
  3. Report any issues to the event\u2019s staff. If an issue involves Lagoon team members, report to uselagoon@amazee.io.

The Lagoon team reserves the right to terminate anyone\u2019s access to the Lagoon spaces.

"},{"location":"contributing-to-lagoon/api-debugging/","title":"API Debugging","text":"

1. Ensure the dev script at services/api/package.json includes the following:

services/api/package.json
node --inspect=0.0.0.0:9229\n

2. Update docker-compose.yml to map the dist folder and expose the 9229 port:

docker-compose.yml
  api:\nimage: ${IMAGE_REPO:-lagoon}/api\ncommand: yarn run dev\nvolumes:\n- ./services/api/src:/app/services/api/src\n- ./services/api/dist:/app/services/api/dist\ndepends_on:\n- api-db\n- local-api-data-watcher-pusher\n- keycloak\nports:\n- '3000:3000'\n- '9229:9229'\n

3. Add the following to .vscode/launch.json:

.vscode/launch.json
{\n// Use IntelliSense to learn about possible attributes.\n// Hover to view descriptions of existing attributes.\n// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387.\n\"version\": \"0.2.0\",\n\"configurations\": [\n{\n\"name\": \"Docker: Attach to Node\",\n\"type\": \"node\",\n\"request\": \"attach\",\n\"port\": 9229,\n\"address\": \"localhost\",\n\"outFiles\": [\"${workspaceRoot}/app/services/api/dist/**/*.js\"],\n\"localRoot\": \"${workspaceFolder}/services/api\",\n\"remoteRoot\": \"/app/services/api\",\n\"sourceMaps\": true,\n\"protocol\": \"inspector\"\n}\n]\n}\n

4. Rebuild/restart the containers:

Restart containers
rm build/api && make build/api && docker-compose restart api\n

5. Restart VS Code.

"},{"location":"contributing-to-lagoon/developing-lagoon/","title":"Developing Lagoon","text":"

Development of Lagoon locally can now be performed on a local Kubernetes cluster, or via Docker Compose (as a fallback).

Note

The full Lagoon stack relies on a range of upstream projects which are currently incompatible with ARM-based architectures, such as the M1/M2 Apple Silicon-based machines. For this reason, running or developing lagoon-core or lagoon-remote locally on these architectures is not currently supported. See https://github.com/uselagoon/lagoon/issues/3189 for more information.

"},{"location":"contributing-to-lagoon/developing-lagoon/#docker","title":"Docker","text":"

Docker must be installed to build and run Lagoon locally.

"},{"location":"contributing-to-lagoon/developing-lagoon/#install-docker-and-docker-compose","title":"Install Docker and Docker Compose","text":"

Please check the official docs for how to install Docker.

Docker Compose is included in Docker for Mac installations. For Linux installations see the directions here.

"},{"location":"contributing-to-lagoon/developing-lagoon/#configure-docker","title":"Configure Docker","text":"

You will need to update your insecure registries in Docker. Read the instructions here on how to do that. We suggest adding the entire local IPv4 Private Address Spaces to avoid unnecessary reconfiguration between Kubernetes and Docker Compose. e.g. \"insecure-registries\" : [\"172.16.0.0/12\",\"192.168.0.0/16\"],
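On Linux, for example, this typically means adding the snippet to /etc/docker/daemon.json and restarting Docker (the file location varies by platform - check the Docker docs linked above):

/etc/docker/daemon.json (sketch)
{\n  \"insecure-registries\": [\"172.16.0.0/12\", \"192.168.0.0/16\"]\n}\n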

"},{"location":"contributing-to-lagoon/developing-lagoon/#allocate-enough-docker-resources","title":"Allocate Enough Docker Resources","text":"

Running a Lagoon, Kubernetes, or Docker cluster on your local machine consumes a lot of resources. We recommend that you give your Docker host a minimum of 8 CPU cores and 12GB RAM.

"},{"location":"contributing-to-lagoon/developing-lagoon/#build-lagoon-locally","title":"Build Lagoon Locally","text":"

Warning

Only consider building Lagoon this way if you intend to develop features or functionality for it, or want to debug internal processes. We will also be providing instructions for installing Lagoon without building it (i.e. by using the published releases).

We're using make (see the Makefile) in order to build the needed Docker images, configure Kubernetes and run tests.

We have provided a number of routines in the Makefile to cover most local development scenarios. Here we will run through a complete process.

"},{"location":"contributing-to-lagoon/developing-lagoon/#build-images","title":"Build images","text":"
  1. Here -j8 tells make to run 8 tasks in parallel to speed the build up. Adjust as necessary.
  2. We have set SCAN_IMAGES=false as a default to not scan the built images for vulnerabilities. If set to true, a scan.txt file will be created in the project root with the scan output.
Build images
make -j8 build\n
  3. Start the Lagoon test routine using the defaults in the Makefile (all tests).
Start tests
make kind/test\n

Warning

There are a lot of tests configured to run by default - please consider only testing locally the minimum that you need to ensure functionality. This can be done by specifying or removing tests from the TESTS variable in the Makefile.

This process will:

  1. Download the correct versions of the local development tools if not installed - kind, kubectl, helm, jq.
  2. Update the necessary Helm repositories for Lagoon to function.
  3. Ensure all of the correct images have been built in the previous step.
  4. Create a local KinD cluster, which provisions an entire running Kubernetes cluster in a local Docker container. This cluster has been configured to talk to a provisioned image registry that we will be pushing the built Lagoon images to. It has also been configured to allow access to the host filesystem for local development.
  5. Clone the Lagoon charts from https://github.com/uselagoon/lagoon-charts (use the CHARTS_TREEISH variable in the Makefile to control which branch, if needed).
  6. Install the Harbor Image registry into the KinD cluster and configure its ingress and access properly.
  7. Docker will push the built images for Lagoon into the Harbor image registry.
  8. It then uses the Makefile from lagoon-charts to perform the rest of the setup steps.
  9. A suitable ingress controller is installed - we use the NGINX Ingress Controller.
  10. A local NFS server provisioner is installed to handle specific volume requests - we use one that handles Read-Write-Many operations (RWX).
  11. Lagoon Core is then installed, using the locally built images pushed to the cluster-local Image Registry, and using the default configuration, which may exclude some services not needed for local testing. The installation will wait for the API and Keycloak to come online.
  12. The DBaaS providers are installed - MariaDB, PostgreSQL and MongoDB. This step provisions standalone databases to be used by projects running locally, and emulates the managed services available via cloud providers (e.g. Cloud SQL, RDS or Azure Database).
  13. Lagoon Remote is then installed, and configured to talk to the Lagoon Core, databases and local storage. The installation will wait for this to complete before continuing.
  14. To provision the tests, the Lagoon Test chart is then installed, which provisions a local Git server to host the test repositories, and pre-configures the Lagoon API database with the default test users, accounts and configuration. It then performs readiness checks before starting tests.
  15. Lagoon will run all the tests specified in the TESTS variable in the Makefile. Each test creates its own project & environments, performs the tests, and then removes the environments & projects. The test runs are output to the console log in the lagoon-test-suite-* pod, and can be accessed one test per container.

Ideally, all of the tests pass and it's all done!

"},{"location":"contributing-to-lagoon/developing-lagoon/#view-the-test-progress-and-your-local-cluster","title":"View the test progress and your local cluster","text":"

The test routine creates a local Kubeconfig file (called kubeconfig.kind.lagoon, in the root of the project) that can be used with a Kubernetes dashboard, viewer or CLI tool to access the local cluster. We use tools like Lens, Octant, kubectl or Portainer in our workflows. Lagoon Core, Remote and Tests all build in the Lagoon namespace, and each environment creates its own namespace to run, so make sure to use the correct context when inspecting.

In order to use kubectl with the local cluster, you will need to use the correct Kubeconfig. This can be done for every command or it can be added to your preferred tool:

kubeconfig.kind.lagoon
KUBECONFIG=./kubeconfig.kind.lagoon kubectl get pods -n lagoon\n

The Helm charts used to build the local Lagoon are cloned into a local folder and symlinked to lagoon-charts.kind.lagoon where you can see the configuration. We'll cover how to make easy modifications later in this documentation.

"},{"location":"contributing-to-lagoon/developing-lagoon/#interact-with-your-local-lagoon-cluster","title":"Interact with your local Lagoon cluster","text":"

The Makefile includes a few simple routines that will make interacting with the installed Lagoon simpler:

Create local ports
make kind/port-forwards\n

This will create local ports to expose the UI (6060), API (7070) and Keycloak (8080). Note that this logs to stdout, so it should be performed in a secondary terminal/window.

Retrieve admin creds
make kind/get-admin-creds\n

This will retrieve the necessary credentials to interact with the Lagoon.

  • The JWT is an admin-scoped token for use as a bearer token with your local GraphQL client. See more in our GraphQL documentation.
  • There is a token for use with the \"admin\" user in Keycloak, who can access all users, groups, roles, etc.
  • There is also a token for use with the \"lagoonadmin\" user in Lagoon, which can be allocated default groups, permissions, etc.
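As an illustration, with the port-forwards active and the JWT from the first bullet exported as $TOKEN, the local API can be queried directly (a sketch - it assumes the standard /graphql endpoint on the forwarded port 7070):

Query the local API (sketch)
curl -s -H \"Content-Type: application/json\" -H \"Authorization: Bearer $TOKEN\" -d '{\"query\":\"{ allProjects { name } }\"}' http://localhost:7070/graphql\n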
Re-push images
make kind/dev\n

This will re-push the images listed in KIND_SERVICES with the correct tag, and redeploy the lagoon-core chart. This is useful for testing small changes to Lagoon services, but does not support "live" development. You will need to rebuild these images locally first, e.g. rm build/api && make build/api.

Build typescript services
make kind/local-dev-patch\n

This will build the typescript services, using your locally installed Node.js (it should be >16.0). It will then:

  • Mount the \"dist\" folders from the Lagoon services into the correct lagoon-core pods in Kubernetes
  • Redeploy the lagoon-core chart with the services running with nodemon watching the code for changes
  • This will facilitate \"live\" development on Lagoon.
  • Note that occasionally the pod in Kubernetes may require redeployment for a change to show. If you're rebuilding different branches, clean any build artifacts from those services with git clean -dfx, as the dist folders are ignored by Git.
Initiate logging
make kind/local-dev-logging\n

This will create a standalone OpenDistro for Elasticsearch cluster in your local Docker, and configure Lagoon to dispatch all logs (Lagoon and project) to it, using the configuration in lagoon-logging.

Re-run tests.
make kind/retest\n# OR\nmake kind/retest TESTS='[features-kubernetes]'\n

This will re-run a suite of tests (defined in the TESTS variable) against the existing cluster. It will re-push the images needed for tests (tests, local-git, and the data-watcher-pusher). You can specify tests to run by passing the TESTS variable inline.

If updating a test configuration, the tests image will need to be rebuilt and pushed, e.g. rm build/tests && make build/tests && make kind/push-images IMAGES='tests' && make kind/retest TESTS='[api]'

Push all images
make kind/push-images\n# OR\nmake kind/push-images IMAGES='tests local-git'\n

This will push all the images up to the image registry. Specifying IMAGES will tag and push specific images.

Remove cluster
make kind/clean\n

This will remove the KinD Lagoon cluster from your local Docker.

"},{"location":"contributing-to-lagoon/developing-lagoon/#ansible","title":"Ansible","text":"

The Lagoon tests use Ansible to run the test suite. Each range of tests for a specific function has been split into its own routine. If you are performing development work locally, select which tests to run, and update the $TESTS variable in the Makefile to reduce the number of concurrent tests running.

The configuration for these tests is held in three services:

  • tests contains the Ansible test services themselves. The local testing routine runs each individual test as a separate container within a test-suite pod. These are listed below.
  • local-git is a Git server hosted in the cluster that holds the source files for the tests. Ansible pulls and pushes to this repository throughout the tests.
  • api-data-watcher-pusher is a set of GraphQL mutations that pre-populates local Lagoon with the necessary Kubernetes configuration, test user accounts and SSH keys, and the necessary groups and notifications. Note that this will wipe local projects and environments on each run.

The individual routines relevant to Kubernetes are:

  • active-standby-kubernetes runs tests to check active/standby in Kubernetes.
  • api runs tests for the API - branch/PR deployment, promotion.
  • bitbucket, gitlab and github run tests for the specific SCM providers.
  • drupal-php74 runs a single-pod MariaDB, MariaDB DBaaS and a Drush-specific test for a Drupal 8/9 project (drupal-php73 doesn't do the Drush test).
  • drupal-postgres runs a single-pod PostgreSQL and a PostgreSQL DBaaS test for a Drupal 8 project.
  • elasticsearch runs a simple NGINX proxy to an Elasticsearch single-pod.
  • features-variables runs tests that utilize variables in Lagoon.
  • features-kubernetes runs a range of standard Lagoon tests, specific to Kubernetes.
  • features-kubernetes-2 runs more advanced kubernetes-specific tests - covering multi-project and subfolder configurations.
  • nginx, node and python run basic tests against those project types.
  • node-mongodb runs a single-pod MongoDB test and a MongoDB DBaaS test against a Node.js app.
"},{"location":"contributing-to-lagoon/developing-lagoon/#local-development","title":"Local Development","text":"

Most services are written in Node.js. As many of these services share similar Node.js code and Node.js packages, we're using a feature of Yarn, called Yarn workspaces. Yarn workspaces need a package.json in the project's root directory that defines the workspaces.

The development of the services can happen directly within Docker. Each container for each service is set up in a way that its source code is mounted into the running container (see docker-compose.yml). Node.js itself is watching the code via nodemon , and restarts the Node.js process automatically on a change.

"},{"location":"contributing-to-lagoon/developing-lagoon/#lagoon-commons","title":"lagoon-commons","text":"

The services not only share many Node.js packages, but also share actual custom code. This code is within node-packages/lagoon-commons. It will be automatically symlinked by Yarn workspaces. Additionally, the nodemon of the services is set up in a way that it checks for changes in node-packages and will restart the node process automatically.

"},{"location":"contributing-to-lagoon/developing-lagoon/#troubleshooting","title":"Troubleshooting","text":""},{"location":"contributing-to-lagoon/developing-lagoon/#i-cant-build-a-docker-image-for-any-nodejs-based-service","title":"I can't build a Docker image for any Node.js based service","text":"

Rebuild the images via:

Rebuild images
    make clean\n    make build\n
"},{"location":"contributing-to-lagoon/developing-lagoon/#i-get-errors-about-missing-node_modules-content-when-i-try-to-build-run-a-nodejs-based-image","title":"I get errors about missing node_modules content when I try to build / run a Node.js based image","text":"

Make sure to run yarn in Lagoon's root directory, since some services have common dependencies managed by yarn workspaces.

"},{"location":"contributing-to-lagoon/developing-lagoon/#i-get-an-error-resolving-the-nipio-domains","title":"I get an error resolving the nip.io domains","text":"Error
Error response from daemon: Get https://registry.172.18.0.2.nip.io:32080/v2/: dial tcp: lookup registry.172.18.0.2.nip.io: no such host\n

This can happen if your local resolver filters private IPs from results. You can work around this by editing /etc/resolv.conf and adding a line like nameserver 8.8.8.8 at the top to use a public resolver that doesn't filter results.

"},{"location":"contributing-to-lagoon/developing-lagoon/#example-workflows","title":"Example workflows","text":"

Here are some development scenarios and useful workflows for getting things done.

"},{"location":"contributing-to-lagoon/developing-lagoon/#add-tests","title":"Add tests","text":"
  1. Repeat the first step above.
  2. Edit tests/tests/features-variables.yaml and add a test case.
  3. Rebuild the tests image.
Build tests
rm build/tests\nmake -j8 build/tests\n
  4. Push the new tests image into the cluster registry.
Push test image
make kind/push-images IMAGES=tests\n
  5. Rerun the tests.
Re-run tests
make kind/retest TESTS='[features-variables]'\n
"},{"location":"contributing-to-lagoon/documentation/","title":"Contributing to Lagoon documentation","text":"

We really value anything that you can offer us!

We've made building and viewing the documentation really straightforward, and the team is always ready to help out with reviews or pointers.

We use mkdocs with the excellent Material theme.

"},{"location":"contributing-to-lagoon/documentation/#viewing-and-updating-docs-locally","title":"Viewing and updating docs locally","text":"

From the root of the Lagoon repository (you'll need Docker), run:

Get local docs up and running.
docker run --rm -it -p 127.0.0.1:8000:8000 -v ${PWD}:/docs ghcr.io/amazeeio/mkdocs-material\n

This will start a development server on http://127.0.0.1:8000, configured to live-reload on any updates.

The customized Docker image contains all the necessary extensions.

Alternatively, to run the mkdocs package locally, you'll need to install mkdocs, and then install all of the necessary plugins.

Install mkdocs
pip3 install -r docs/requirements.txt\nmkdocs serve\n
"},{"location":"contributing-to-lagoon/documentation/#editing-in-the-cloud","title":"Editing in the Cloud","text":"

Each documentation page also has an \"edit\" pencil in the top right, that will take you to the correct page in the Git repository.

Feel free to contribute here, too - you can always use the built-in github.dev web-based editor. It's got basic Markdown previews, but none of the mkdocs loveliness.

"},{"location":"contributing-to-lagoon/documentation/#how-we-deploy-documentation","title":"How we deploy documentation","text":"

We use the Deploy MkDocs GitHub Action to build all main branch pushes, and trigger a deployment of the gh-pages branch.

"},{"location":"contributing-to-lagoon/releasing/","title":"Releasing Lagoon","text":"

Lagoon has a number of moving parts, making releases quite complicated!

"},{"location":"contributing-to-lagoon/releasing/#lagoon-core-tags-and-testing","title":"Lagoon-core - tags and testing","text":"
  1. Ensure all the identified pull requests have been merged into main branch for:
    • uselagoon/lagoon
    • uselagoon/build-deploy-tool
    • uselagoon/lagoon-ui
  2. Once you are confident, push the next tag in sequence (minor or patch) to the main branch in the format v2.MINOR.PATCH as per semver. This will trigger a Jenkins build, visible at https://ci.lagoon.sh/blue/organizations/jenkins/lagoon/branches
  3. Whilst this is building, push lightweight tags to the correct commits on lagoon-ui and build-deploy-tool in the format core-v2.MINOR.PATCH. Note that there are no other tags or releases on build-deploy-tool, but lagoon-ui also has its own semver releases that are based on its features.
  4. Once the build has completed successfully in Jenkins, head to https://github.com/uselagoon/lagoon-charts to prepare the charts release
  5. In the Chart.yaml for the lagoon-core and lagoon-test charts, update the following fields:

    • version: This is the next \"minor\" release of the chart - we usually use minor for a corresponding lagoon-core release
    • appVersion: This is the actual tag of the released lagoon-core
    • artifacthub.io/changes: All that's needed are the two lines in the below snippet, modified for the actual appVersion being released.

    sample Chart.yaml snippet

    # This is the chart version. This version number should be incremented each\n# time you make changes to the chart and its templates, including the app\n# version.\n# Versions are expected to follow Semantic Versioning (https://semver.org/)\nversion: 1.28.0\n# This is the version number of the application being deployed. This version\n# number should be incremented each time you make changes to the application.\n# Versions are not expected to follow Semantic Versioning. They should reflect\n# the version the application is using.\nappVersion: v2.14.2\n# This section is used to collect a changelog for artifacthub.io\n# It should be started afresh for each release\n# Valid supported kinds are added, changed, deprecated, removed, fixed and security\nannotations:\nartifacthub.io/changes: |\n- kind: changed\ndescription: update Lagoon appVersion to v2.14.2\n
    Only lagoon-core and lagoon-test charts are updated as a result of a lagoon-core release. Follow the lagoon-remote process if there are any other changes.

  6. Create a PR for this chart release, and the Github Actions suite will undertake a full suite of tests:

    • Lint and test charts - matrix: performs a lint and chart install against the current tested version of Kubernetes
    • Lint and test charts - current: performs a lint and chart install against previous/future versions of Kubernetes
    • Lagoon tests: runs the full series of Ansible tests against the release.

    Usually, failures in the lint and test charts are well explained (missing/misconfigured chart settings). If a single Lagoon test fails, it may just need re-running. If multiple failures occur, they will need investigating.

Once those tests have all passed successfully, you can proceed with creating the releases:

"},{"location":"contributing-to-lagoon/releasing/#lagoon-core-releases-and-release-notes","title":"Lagoon-core - releases and release notes","text":"
  1. In uselagoon/lagoon, create a release from the tag pushed earlier. Use the "Generate release notes" button to create the changelog. Look at previous releases for what we include - the lagoon-images link will always be the most recent released version. Note that the links to the charts, lagoon-ui and build-deploy-tool can all be filled in now, but they won't work until the later steps are complete. Mark this as the latest release and Publish the release.
  2. In uselagoon/build-deploy-tool create a release from the tag pushed earlier. Use the \"Generate release notes\" button to create the changelog - ensuring that the last core-v2.X tag is used, not any other tag. Look at previous releases for what we include in the release - Mark this as the latest release and Publish the release.
  3. In uselagoon/lagoon-ui create a release from the tag pushed earlier. Use the \"Generate release notes\" button to create the changelog - ensuring that the last core-v2.X tag is used, not any other tag. Look at previous releases for what we include in the release - Mark this as the latest release and Publish the release.
  4. In uselagoon/lagoon-charts merge the successful PR, this will create the lagoon-core and lagoon-test releases for you. Edit the resulting lagoon-core chart release to note the corresponding lagoon release in the title and text box, as per previous releases.
"},{"location":"contributing-to-lagoon/releasing/#lagoon-remote-releases-and-release-notes","title":"Lagoon-remote - releases and release notes","text":"

Lagoon remote has a release cycle separate to Lagoon Core, and as such, can be released anytime that a dependency sub-chart or service is updated.

"},{"location":"contributing-to-lagoon/tests/","title":"Tests","text":"

All of our tests are written with Ansible and mostly follow this approach:

  1. They create a new Git repository.
  2. Add and commit some files from a list of files (in tests/files) into this Git repository.
  3. Push this Git repository to a Git server (either locally or on GitHub).
  4. Send a trigger to a trigger service (for example a webhook to the webhook handler, which is the same as a real webhook that would be sent).
  5. Start monitoring the URL at which the test expects something to happen (like a deployed Node.js app that shows the Git branch name as HTML text).
  6. Compare the result at the URL with the expected result.

Lagoon is mostly tested in 3 different ways:

"},{"location":"contributing-to-lagoon/tests/#1-locally","title":"1. Locally","text":"

During local development, the best way to test is locally. All tests are started via make. Make will download and build all the required dependencies.

Make tests
make tests\n

This will run all defined tests. If you only want to run a subset of the tests, run make tests-list to see all existing tests and run them individually.

For example, make tests/node will run the Node.js Docker images tests.

In order to actually see what is happening inside the microservices, we can use make logs:

Make logs
make logs\n

Or only for a specific service:

Make logs
make logs service=webhook-handler\n
"},{"location":"contributing-to-lagoon/tests/#2-automated-integration-testing","title":"2. Automated integration testing","text":"

In order to test pull requests that are created against Lagoon, we have a fully automatic integration test running on a dedicated Jenkins instance: https://ci.lagoon.sh. It is defined inside the .Jenkinsfile, and runs automatically for every pull request that is opened.

This will build all images, start a Kubernetes cluster and run a series of tests.

The tests can be found here:

  • https://ci.lagoon.sh/blue/organizations/jenkins/lagoon/activity
"},{"location":"docker-images/commons/","title":"Commons","text":"

The Lagoon commons Docker image. Based on the official Alpine images.

This image has no functionality itself, but is instead a base image, intended to be extended and utilized to build other images. All the Alpine-based images in Lagoon inherit components from commons.

"},{"location":"docker-images/commons/#included-tooling","title":"Included tooling","text":"
  • docker-sleep - standardized one-hour sleep
  • fix-permissions - automatically fixes permissions on a given directory to all group read-write
  • wait-for - a small script to ensure that services are up and running in the correct order - based on https://github.com/eficode/wait-for (see the usage sketch after this list)
  • entrypoint-readiness - checks to make sure that long-running entrypoints have completed
  • entrypoints - a script to source all entrypoints under /lagoon/entrypoints/* in an alphabetical/numerical order
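A usage sketch for wait-for (the host, port, timeout and follow-on command are illustrative):

wait-for usage (sketch)
wait-for mariadb:3306 -t 60 -- echo \"database is up\"\n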
"},{"location":"docker-images/commons/#included-entrypoints","title":"Included entrypoints","text":"

The list of default entrypoints in this image is found at https://github.com/uselagoon/lagoon-images/tree/main/images/commons/lagoon/entrypoints. Subsequent downstream images will also contribute entrypoints under /lagoon that are run in the eventual image.

"},{"location":"docker-images/mariadb/","title":"MariaDB","text":"

MariaDB is the open source successor to MySQL.

The Lagoon MariaDB image Dockerfile. Based on the official packages mariadb and mariadb-client provided by the upstream Alpine image.

This Dockerfile is intended to be used to set up a standalone MariaDB database server.

  • 10.4 Dockerfile (Alpine 3.12 Support until May 2022) - uselagoon/mariadb-10.4
  • 10.5 Dockerfile (Alpine 3.14 Support until May 2023) - uselagoon/mariadb-10.5
  • 10.6 Dockerfile (Alpine 3.16 Support until May 2024) - uselagoon/mariadb-10.6
  • 10.11 Dockerfile (Alpine 3.18 Support until May 2025) - uselagoon/mariadb-10.11

Info

As these images are not built from the upstream MariaDB images, their support follows a different cycle - they will only receive updates as long as the underlying Alpine images receive support - see https://alpinelinux.org/releases/ for more information. In practice, most MariaDB users will only be running these containers locally - the production instances will use the Managed Cloud Databases provided by the DBaaS Operator.

"},{"location":"docker-images/mariadb/#lagoon-adaptions","title":"Lagoon adaptions","text":"

The default exposed port of MariaDB containers is port 3306.

To allow Lagoon to select the best way to run the MariaDB container, use lagoon.type: mariadb - this allows the DBaaS operator to provision a cloud database if available in the cluster. Use lagoon.type: mariadb-single to specifically request MariaDB in a container. Persistent storage is always provisioned for MariaDB containers at /var/lib/mysql.

This image is prepared to be used on Lagoon. There are therefore some things already done:

  • Folder permissions are automatically adapted with fix-permissions, so this image will work with a random user.
  • readiness-probe.sh script to check when MariaDB container is ready.
"},{"location":"docker-images/mariadb/#docker-composeyml-snippet","title":"docker-compose.yml snippet","text":"docker-compose.yml
    mariadb:\nimage: uselagoon/mariadb-10.6-drupal:latest\nlabels:\n# tells Lagoon this is a MariaDB database\nlagoon.type: mariadb\nports:\n# exposes the port 3306 with a random local port, find it with `docker-compose port mariadb 3306`\n- \"3306\"\nvolumes:\n# mounts a named volume at the default path for MariaDB\n- db:/var/lib/mysql\n
"},{"location":"docker-images/mariadb/#included-tools","title":"Included tools","text":"
  • mysqltuner.pl - Perl script useful for database parameter tuning.
  • mysql-backup.sh - Script for automating daily MySQL backups on development environments.
  • pwgen - Utility to generate random and complex passwords.
"},{"location":"docker-images/mariadb/#included-mycnf-configuration-file","title":"Included my.cnf configuration file","text":"

The image ships a default MariaDB configuration file, optimized to work on Lagoon. Some options are configurable via environment variables.

"},{"location":"docker-images/mariadb/#environment-variables","title":"Environment Variables","text":"Environment Variable Default Description MARIADB_DATABASE lagoon Database name created at startup. MARIADB_USER lagoon Default user created at startup. MARIADB_PASSWORD lagoon Password of default user created at startup. MARIADB_ROOT_PASSWORD Lag00n MariaDB root user's password. MARIADB_CHARSET utf8mb4 Set the server charset. MARIADB_COLLATION utf8mb4_bin Set server collation. MARIADB_MAX_ALLOWED_PACKET 64M Set the max_allowed_packet size. MARIADB_INNODB_BUFFER_POOL_SIZE 256M Set the MariaDB InnoDB buffer pool size. MARIADB_INNODB_BUFFER_POOL_INSTANCES 1 Number of InnoDB buffer pool instances. MARIADB_INNODB_LOG_FILE_SIZE 64M Size of InnoDB log file. MARIADB_LOG_SLOW (not set) Variable to control the save of slow queries. MARIADB_LOG_QUERIES (not set) Variable to control the save of ALL queries. BACKUPS_DIR /var/lib/mysql/backup Default path for databases backups. MARIADB_DATA_DIR /var/lib/mysql Path of the MariaDB data dir, be careful, changing this can occur data loss! MARIADB_COPY_DATA_DIR_SOURCE (not set) Path which the entrypoint script of mariadb will use to copy into the defined MARIADB_DATA_DIR, this can be used for prepopulating the MariaDB with a database. The scripts expects actual MariaDB data files and not a sql file! Plus it only copies data if the destination does not already have a mysql datadir in it.

If the LAGOON_ENVIRONMENT_TYPE variable is set to production, performance settings are adjusted accordingly by using MARIADB_INNODB_BUFFER_POOL_SIZE=1024 and MARIADB_INNODB_LOG_FILE_SIZE=256.
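As a sketch, these options can be overridden per service in docker-compose.yml for local development (the values shown are illustrative):

docker-compose.yml (sketch)
  mariadb:\n    image: uselagoon/mariadb-10.6:latest\n    labels:\n      lagoon.type: mariadb\n    environment:\n      MARIADB_CHARSET: utf8mb4\n      MARIADB_MAX_ALLOWED_PACKET: 128M\n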

"},{"location":"docker-images/mongodb/","title":"MongoDB","text":"

MongoDB is a general purpose, document-based, distributed database built for modern application developers and for the cloud era. MongoDB is a document database, which means it stores data in JSON-like documents.

  • from mongodb.com
"},{"location":"docker-images/mongodb/#supported-versions","title":"Supported Versions","text":"

4.0 Dockerfile - uselagoon/mongo-4

This Dockerfile is intended to be used to set up a standalone MongoDB database server.

"},{"location":"docker-images/mongodb/#lagoon-adaptions","title":"Lagoon adaptions","text":"

This image is prepared to be used on Lagoon. There are therefore some things already done:

  • Folder permissions are automatically adapted with fix-permissions, so this image will work with a random user, and therefore also on Kubernetes or OpenShift.
"},{"location":"docker-images/nginx/","title":"NGINX","text":"

The Lagoon nginx image Dockerfile. Based on the official openresty/openresty images.

This Dockerfile is intended to be used as a base for any web servers within Lagoon.

"},{"location":"docker-images/nginx/#lagoon-adaptions","title":"Lagoon adaptions","text":"

The default exposed port of NGINX containers is port 8080.

This image is prepared to be used on Lagoon. There are therefore some things already done:

  • Folder permissions are automatically adapted with fix-permissions, so this image will work with a random user.
  • The files within /etc/nginx/* are parsed through envplate with a container-entrypoint.
"},{"location":"docker-images/nginx/#included-nginx-configuration-static-filesconf","title":"Included NGINX configuration (static-files.conf)","text":"

Warning

By default NGINX only serves static files - this can be used for static sites that don't require a database or PHP components: for example, static site generators like Hugo, Jekyll or Gatsby.

If you need PHP, have a look at the php-fpm image and use nginx and php-fpm in tandem.

Build the content during the build process and inject it into the nginx container.

"},{"location":"docker-images/nginx/#helpers","title":"Helpers","text":""},{"location":"docker-images/nginx/#redirects-mapconf","title":"redirects-map.conf","text":"

In order to create redirects, we have redirects-map.conf in place. This helps you to redirect marketing domains to sub-sites or do non-www to www redirects. If you have a lot of redirects, we suggest having redirects-map.conf stored next to your code for easier maintainability.

Note

If you only have a few redirects, there's a handy trick to create the redirects with a RUN command in your nginx.dockerfile.

Here's an example showing how to redirect www.example.com to example.com and preserve the request:

Redirect
RUN echo \"~^www.example.com http://example.com\\$request_uri;\" >> /etc/nginx/redirects-map.conf\n

To get more details about the various types of redirects that can be achieved, see the documentation within the redirects-map.conf directly.

After you put the redirects-map.conf in place, you also need to include it in your nginx.dockerfile in order to get the configuration file into your build.

nginx.dockerfile
COPY redirects-map.conf /etc/nginx/redirects-map.conf\n
"},{"location":"docker-images/nginx/#basic-authentication","title":"Basic Authentication","text":"

Basic authentication is enabled automatically when the BASIC_AUTH_USERNAME and BASIC_AUTH_PASSWORD environment variables are set.

Warning

Automatic basic auth configuration is provided for convenience. It should not be considered a secure method of protecting your website or private data.

"},{"location":"docker-images/nginx/#environment-variables","title":"Environment Variables","text":"

Some options are configurable via environment variables.

  • BASIC_AUTH (default: restricted) - Set to off to disable basic authentication.
  • BASIC_AUTH_USERNAME (default: not set) - Username for basic authentication.
  • BASIC_AUTH_PASSWORD (default: not set) - Password for basic authentication (unencrypted).
  • FAST_HEALTH_CHECK (default: not set) - Set to true to redirect GET requests from certain user agents (StatusCake, Pingdom, Site25x7, Uptime, nagios) to the lightweight Lagoon service healthcheck.
"},{"location":"docker-images/nodejs/","title":"Node.js","text":"

The Lagoon Node.js Docker image. Based on the official Node Alpine images.

"},{"location":"docker-images/nodejs/#supported-versions","title":"Supported Versions","text":"

We ship two variants of Node.js images: the normal node:version image and the node:version-builder.

The builder variant of those images comes with additional tooling that is needed when you build Node.js apps (such as the build libraries, npm and Yarn). For a full list check out their Dockerfile.

  • 12 (available for compatibility only, no longer officially supported) - uselagoon/node-12
  • 14 (available for compatibility only, no longer officially supported) - uselagoon/node-14
  • 16 Dockerfile (Security Support until September 2023) - uselagoon/node-16
  • 18 Dockerfile (Security Support until April 2025) - uselagoon/node-18
  • 20 Dockerfile (Security Support until April 2026) - uselagoon/node-20

Tip

We stop updating EOL Node.js images usually with the Lagoon release that comes after the officially communicated EOL date: https://nodejs.org/en/about/releases/.

"},{"location":"docker-images/nodejs/#lagoon-adaptions","title":"Lagoon adaptions","text":"

The default exposed port of Node.js containers is port 3000.

Persistent storage is configurable in Lagoon, using the lagoon.type: node-persistent. See the docs for more info.

Use the following labels in your docker-compose.yml file to configure it:

  • lagoon.persistent = use this to define the path in the container to use as persistent storage - e.g. /app/files.
  • lagoon.persistent.size = use this to tell Lagoon how much storage to assign to this path.
  • lagoon.persistent.name = (optional) if you have multiple services that share the same storage, use this to tell Lagoon to use the storage defined in another named service.
"},{"location":"docker-images/nodejs/#docker-composeyml-snippet","title":"docker-compose.yml snippet","text":"docker-compose.yml
node:
  build:
    # this configures a build from a Dockerfile in the root folder
    context: .
    dockerfile: Dockerfile
  labels:
    # tells Lagoon this is a node service, configured with 500MB of persistent storage at /app/files
    lagoon.type: node-persistent
    lagoon.persistent: /app/files
    lagoon.persistent.size: 500Mi
  ports:
    # local development only
    # this exposes the port 3000 with a random local port
    # find it with `docker-compose port node 3000`
    - "3000"
  volumes:
    # local development only
    # mounts a named volume (files) at the defined path for this service to replicate production
    - files:/app/files
"},{"location":"docker-images/opensearch/","title":"OpenSearch","text":"

OpenSearch is a community-driven, Apache 2.0-licensed open source search and analytics suite that makes it easy to ingest, search, visualize, and analyze data.

  • from https://opensearch.org/
"},{"location":"docker-images/opensearch/#supported-versions","title":"Supported versions","text":"
  • 2 Dockerfile - uselagoon/opensearch-2
"},{"location":"docker-images/opensearch/#environment-variables","title":"Environment Variables","text":"

Some options are configurable via environment variables.

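For example, a sketch of raising the heap for local development in docker-compose.yml (service name and sizes are illustrative); the available options are listed below:

docker-compose.yml
opensearch:
  environment:
    # assumed example: give OpenSearch a 1 GB heap instead of the 512 MB default
    OPENSEARCH_JAVA_OPTS: "-Xms1g -Xmx1g"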
Environment Variable | Default | Description
OPENSEARCH_JAVA_OPTS | -Xms512m -Xmx512m | Sets the memory usage of the OpenSearch container. Both values need to be the same or OpenSearch will not start cleanly.

"},{"location":"docker-images/opensearch/#known-issues","title":"Known issues","text":"

On Linux-based systems, the start of the OpenSearch container may fail due to a low vm.max_map_count setting.

Error
opensearch_1  | ERROR: [1] bootstrap checks failed
opensearch_1  | [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

A solution to this issue can be found here.

"},{"location":"docker-images/php-cli/","title":"PHP-CLI","text":"

The Lagoon php-cli Docker image. Based on Lagoon php-fpm image, it has all the needed command line tools for daily operations.

Containers (or pods) started from cli images are responsible for building code for Composer or Node.js based projects.

The image also contains database CLIs for both MariaDB and PostgreSQL.

Info

This Dockerfile is intended to be used as a base for any cli needs within Lagoon.

"},{"location":"docker-images/php-cli/#supported-versions","title":"Supported versions","text":"
  • 7.3 (available for compatibility only, no longer officially supported)
  • 7.4 (available for compatibility only, no longer officially supported)
  • 8.0 Dockerfile (Security Support until November 2023) - uselagoon/php-8.0-cli
  • 8.1 Dockerfile (Security Support until November 2024) - uselagoon/php-8.1-cli
  • 8.2 Dockerfile (Security Support until December 2025) - uselagoon/php-8.2-cli

All PHP versions use their own Dockerfiles.

"},{"location":"docker-images/php-cli/#lagoon-adaptions","title":"Lagoon adaptions","text":"

This image is prepared to be used on Lagoon. There are therefore some things already done:

  • Folder permissions are automatically adapted with fix-permissions, so this image will work with a random user.
  • COMPOSER_ALLOW_SUPERUSER=1 removes warning about use of Composer as root.
  • The 80-shell-timeout.sh script checks if containers are running in a Kubernetes environment and, if so, sets a 10-minute timeout for idle cli pods.
  • cli containers use an SSH key injected by Lagoon or defined in the SSH_PRIVATE_KEY environment variable.
"},{"location":"docker-images/php-cli/#included-cli-tools","title":"Included CLI tools","text":"

The included CLI tools are:

  • composer version 1.9.0 (changeable via COMPOSER_VERSION and COMPOSER_HASH_SHA256)
  • node.js version 17 (as of Mar 2022)
  • npm
  • yarn
  • mariadb-client
  • postgresql-client
"},{"location":"docker-images/php-cli/#change-nodejs-version","title":"Change Node.js Version","text":"

By default this image ships with the nodejs-current package (v17 as of Mar 2022). If you need another version you can remove the current version and install the one of your choice. For example, to install Node.js 16, modify your dockerfile to include:

Update Node.js version
RUN apk del nodejs-current \
    && apk add --no-cache nodejs=~16
"},{"location":"docker-images/php-cli/#environment-variables","title":"Environment variables","text":"

Some options are configurable via environment variables. The php-fpm environment variables also apply.

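For example, a sketch of raising the packet size for the cli service in docker-compose.yml (service name and value are illustrative):

docker-compose.yml
cli:
  environment:
    # assumed example: allow larger SQL dumps to be imported via the MySQL client
    MARIADB_MAX_ALLOWED_PACKET: 128M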
Name | Default | Description
MARIADB_MAX_ALLOWED_PACKET | 64M | Controls the max allowed packet for the MySQL client.

"},{"location":"docker-images/php-fpm/","title":"PHP-FPM","text":"

The Lagoon php-fpm Docker image. Based on the official PHP Alpine images.

PHP-FPM (FastCGI Process Manager) is an alternative PHP FastCGI implementation with some additional features useful for sites of any size, especially busier sites.

  • from https://php-fpm.org/

FastCGI is a way of having server scripts execute time-consuming code just once instead of every time the script is loaded, reducing overhead.

Info

This Dockerfile is intended to be used as a base for any PHP needs within Lagoon. This image itself does not create a web server, rather a php-fpm fastcgi listener. You may need to adapt the php-fpm pool config.

"},{"location":"docker-images/php-fpm/#supported-versions","title":"Supported versions","text":"
  • 7.3 (available for compatibility only, no longer officially supported) - uselagoon/php-7.3-fpm
  • 7.4 (available for compatibility only, no longer officially supported) - uselagoon/php-7.4-fpm
  • 8.0 Dockerfile (Security Support until November 2023) - uselagoon/php-8.0-fpm
  • 8.1 Dockerfile (Security Support until November 2024) - uselagoon/php-8.1-fpm
  • 8.2 Dockerfile (Security Support until December 2025) - uselagoon/php-8.2-fpm

All PHP versions use their own Dockerfiles.

Tip

We stop updating End of Life (EOL) PHP images usually with the Lagoon release that comes after the officially communicated EOL date: https://www.php.net/supported-versions.php. Previous published versions will remain available.

"},{"location":"docker-images/php-fpm/#lagoon-adaptions","title":"Lagoon adaptions","text":"

This image is prepared to be used on Lagoon. There are therefore some things already done:

  • Folder permissions are automatically adapted with fix-permissions, so this image will work with a random user.
  • The /usr/local/etc/php/php.ini and /usr/local/etc/php-fpm.conf, plus all files within /usr/local/etc/php-fpm.d/ , are parsed through envplate with a container-entrypoint.
  • See the Dockerfile for installed PHP extensions.
  • To install further extensions, extend your Dockerfile from this image. Install extensions according to the docs, under the heading How to install more PHP extensions.
"},{"location":"docker-images/php-fpm/#included-php-config","title":"Included PHP config","text":"

The included PHP config contains sensible values that will make the creation of PHP pools config easier. Here is a list of some of these. Check /usr/local/etc/php.ini, /usr/local/etc/php-fpm.conf for all of them:

Value | Details
max_execution_time = 900 | Changeable via PHP_MAX_EXECUTION_TIME.
realpath_cache_size = 256k | For handling big PHP projects.
memory_limit = 400M | For big PHP projects (changeable via PHP_MEMORY_LIMIT).
opcache.memory_consumption = 265 | For big PHP projects.
opcache.enable_file_override = 1 and opcache.huge_code_pages = 1 | For faster PHP.
display_errors = Off and display_startup_errors = Off | For sensible production values (changeable via PHP_DISPLAY_ERRORS and PHP_DISPLAY_STARTUP_ERRORS).
upload_max_filesize = 2048M | For big file uploads.
apc.shm_size = 32m and apc.enabled = 1 | Changeable via PHP_APC_SHM_SIZE and PHP_APC_ENABLED.

Also, php-fpm error logging happens in stderr.

💡 If you don't like any of these configs, you have three possibilities:

  1. If they are changeable via environment variables, use environment variables (this is the preferred method, see the table of environment variables below).
  2. Create your own fpm-pool config and set them via php_admin_value and php_admin_flag.
    1. Learn more about them in this documentation for Running PHP as an Apache module. (That documentation refers to Apache, but it also applies to php-fpm.)

      Important:

      1. If you want to provide your own php-fpm pool, overwrite the file /usr/local/etc/php-fpm.d/www.conf with your own config, or rename this file if you want it to have another name. If you don't do that, the provided pool will be started!
      2. PHP values with the PHP_INI_SYSTEM changeable mode cannot be changed via an fpm-pool config. They need to be changed either via the already provided environment variables or via option 3:

  3. Provide your own php.ini or php-fpm.conf file (this is the least preferred method).

"},{"location":"docker-images/php-fpm/#default-fpm-pool","title":"Default fpm-pool","text":"

This image is shipped with an fpm-pool config (php-fpm.d/www.conf) that creates an fpm-pool and listens on port 9000. This is because we try to provide an image which already covers most needs for PHP, so you don't need to create your own. You are welcome to do so if you like, though!

Here is a short description of what this file does:

  • Listens on port 9000 via IPv4 and IPv6.
  • Uses pm = dynamic and creates between 2 and 50 children.
  • Re-spawns php-fpm pool children after 500 requests to prevent memory leaks.
  • Replies with pong when making a fastcgi request to /ping (good for automated testing to check if the pool started).
  • catch_workers_output = yes to see PHP errors.
  • clear_env = no to be able to inject PHP environment variables via regular Docker environment variables.
"},{"location":"docker-images/php-fpm/#environment-variables","title":"Environment Variables","text":"

Some options are configurable via environment variables.

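As a sketch, a couple of these could be overridden on a PHP service in docker-compose.yml (service name and values are illustrative); the full list follows below:

docker-compose.yml
php:
  environment:
    # assumed example values for a memory-hungry project
    PHP_MEMORY_LIMIT: 800M
    PHP_MAX_EXECUTION_TIME: 1800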
Environment Variable | Default | Description
NEWRELIC_ENABLED | false | Enable NewRelic performance monitoring; needs NEWRELIC_LICENSE to be configured.
NEWRELIC_LICENSE | (not set) | NewRelic license to be used. Important: NEWRELIC_ENABLED needs to be set to true in order for NewRelic to be enabled.
NEWRELIC_BROWSER_MONITORING_ENABLED | true | This enables auto-insertion of the JavaScript fragments for NewRelic browser monitoring. Important: NEWRELIC_ENABLED needs to be set to true in order for NewRelic to be enabled.
NEWRELIC_DISTRIBUTED_TRACING_ENABLED | false | This enables distributed tracing. Important: NEWRELIC_ENABLED needs to be set to true in order for NewRelic to be enabled.
PHP_APC_ENABLED | 1 | Can be set to 0 to disable APC.
PHP_APC_SHM_SIZE | 32m | The size of each shared memory segment given.
PHP_DISPLAY_ERRORS | Off | Configures whether errors are printed or hidden. See php.net.
PHP_DISPLAY_STARTUP_ERRORS | Off | Configures whether startup errors are printed or hidden. See php.net.
PHP_ERROR_REPORTING | Production: E_ALL & ~E_DEPRECATED & ~E_STRICT; Development: E_ALL & ~E_DEPRECATED & ~E_STRICT & ~E_NOTICE | The desired logging level you'd like PHP to use. See php.net.
PHP_FPM_PM_MAX_CHILDREN | 50 | The maximum number of child processes. See php.net.
PHP_FPM_PM_MAX_REQUESTS | 500 | The number of requests each child process should execute before re-spawning. See php.net.
PHP_FPM_PM_MAX_SPARE_SERVERS | 2 | The desired maximum number of idle server processes. See php.net.
PHP_FPM_PM_MIN_SPARE_SERVERS | 2 | The desired minimum number of idle server processes. See php.net.
PHP_FPM_PM_PROCESS_IDLE_TIMEOUT | 60s | The number of seconds after which an idle process will be killed. See php.net.
PHP_FPM_PM_START_SERVERS | 2 | The number of child processes created on startup. See php.net.
PHP_MAX_EXECUTION_TIME | 900 | Maximum execution time of each script, in seconds. See php.net.
PHP_MAX_FILE_UPLOADS | 20 | The maximum number of files allowed to be uploaded simultaneously. See php.net.
PHP_MAX_INPUT_VARS | 2000 | How many input variables will be accepted. See php.net.
PHP_MEMORY_LIMIT | 400M | Maximum amount of memory a script may consume. See php.net.
XDEBUG_ENABLE | (not set) | Set to true to enable the xdebug extension.
BLACKFIRE_ENABLED | (not set) | Set to true to enable the blackfire extension.
BLACKFIRE_SERVER_ID | (not set) | Set to the Blackfire Server ID provided by Blackfire.io. Needs BLACKFIRE_ENABLED set to true.
BLACKFIRE_SERVER_TOKEN | (not set) | Set to the Blackfire Server Token provided by Blackfire.io. Needs BLACKFIRE_ENABLED set to true.
BLACKFIRE_LOG_LEVEL | 3 | Change the log level of the blackfire agent. Available values: 4 (debug), 3 (info), 2 (warning), 1 (error). See blackfire.io.

"},{"location":"docker-images/postgres/","title":"PostgreSQL","text":"

The Lagoon PostgreSQL Docker image. Based on the official PostgreSQL Alpine images.

"},{"location":"docker-images/postgres/#supported-versions","title":"Supported versions","text":"
  • 11 Dockerfile (Security Support until November 2023) - uselagoon/postgres-11
  • 12 Dockerfile (Security Support until November 2024) - uselagoon/postgres-12
  • 13 Dockerfile (Security Support until November 2025) - uselagoon/postgres-13
  • 14 Dockerfile (Security Support until November 2026) - uselagoon/postgres-14
  • 15 Dockerfile (Security Support until November 2027) - uselagoon/postgres-15

Tip

We stop updating EOL PostgreSQL images usually with the Lagoon release that comes after the officially communicated EOL date: https://www.postgresql.org/support/versioning

"},{"location":"docker-images/postgres/#lagoon-adaptions","title":"Lagoon adaptions","text":"

The default exposed port of Postgres containers is port 5432.

To allow Lagoon to select the best way to run the Postgres container, use lagoon.type: postgres - this allows DBaaS operator to provision a cloud database if available in the cluster. Use lagoon.type: postgres-single to specifically request Postgres in a container. Persistent storage is always provisioned for postgres containers at /var/lib/postgresql/data.

"},{"location":"docker-images/postgres/#docker-composeyml-snippet","title":"docker-compose.yml snippet","text":"docker-compose.yml
postgres:
  image: uselagoon/postgres-14-drupal:latest
  labels:
    # tells Lagoon this is a Postgres database
    lagoon.type: postgres
  ports:
    # exposes the port 5432 with a random local port
    # find it with `docker-compose port postgres 5432`
    - "5432"
  volumes:
    # mounts a named volume at the default path for Postgres
    - db:/var/lib/postgresql/data
"},{"location":"docker-images/postgres/#tips-tricks","title":"Tips & Tricks","text":"

If you have SQL statements that need to be run immediately after container startup to initialize the database, you can place those .sql files in the container's docker-entrypoint-initdb.d directory. Any .sql files contained in that directory are run automatically at startup, as part of bringing the PostgreSQL container up.

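For local development, one way to get such a file into the container is a bind mount in docker-compose.yml (the file name here is illustrative):

docker-compose.yml
postgres:
  image: uselagoon/postgres-14:latest
  volumes:
    # assumed example: executed automatically when the container starts with an empty database
    - ./initdb/01-init.sql:/docker-entrypoint-initdb.d/01-init.sql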
Warning

These scripts are only run if the container is started with an empty database.

"},{"location":"docker-images/python/","title":"Python","text":"

The Lagoon python Docker image. Based on the official Python Alpine images.

"},{"location":"docker-images/python/#supported-versions","title":"Supported Versions","text":"
  • 2.7 (available for compatibility only, no longer officially supported) - uselagoon/python-2.7
  • 3.7 Dockerfile (Security Support until July 2023) - uselagoon/python-3.7
  • 3.8 Dockerfile (Security Support until October 2024) - uselagoon/python-3.8
  • 3.9 Dockerfile (Security Support until October 2025) - uselagoon/python-3.9
  • 3.10 Dockerfile (Security Support until October 2026) - uselagoon/python-3.10
  • 3.11 Dockerfile (Security Support until October 2027) - uselagoon/python-3.11

Tip

We stop updating and publishing EOL Python images usually with the Lagoon release that comes after the officially communicated EOL date: https://devguide.python.org/versions/#versions. Previous published versions will remain available.

"},{"location":"docker-images/python/#lagoon-adaptions","title":"Lagoon adaptions","text":"

The default exposed port of Python containers is port 8800.

Persistent storage is configurable in Lagoon, using the lagoon.type: python-persistent. See the docs for more info.

Use the following labels in your docker-compose.yml file to configure it:

  • lagoon.persistent = use this to define the path in the container to use as persistent storage - e.g. /app/files.
  • lagoon.persistent.size = use this to tell Lagoon how much storage to assign to this path.
  • lagoon.persistent.name = (optional) if you have multiple services that share the same storage, use this to tell Lagoon to use the storage defined in another named service.

"},{"location":"docker-images/python/#docker-composeyml-snippet","title":"docker-compose.yml snippet","text":"docker-compose.yml
python:
  build:
    # this configures a build from a Dockerfile in the root folder
    context: .
    dockerfile: Dockerfile
  labels:
    # tells Lagoon this is a python service, configured with 500MB of persistent storage at /app/files
    lagoon.type: python-persistent
    lagoon.persistent: /app/files
    lagoon.persistent.size: 500Mi
  ports:
    # local development only
    # this exposes the port 8800 with a random local port
    # find it with `docker-compose port python 8800`
    - "8800"
  volumes:
    # local development only
    # mounts a named volume (files) at the defined path for this service to replicate production
    - files:/app/files
"},{"location":"docker-images/rabbitmq/","title":"RabbitMQ","text":"

The Lagoon RabbitMQ Dockerfile with the management plugin installed. Based on the official rabbitmq:3-management image on Docker Hub.

This Dockerfile is intended to be used to set up a standalone RabbitMQ queue broker, as well as a base image to set up a cluster with high availability queue support by default (Mirrored queues).

By default, the RabbitMQ broker is started as a single node. If you want to start a cluster, you need to use the rabbitmq-cluster Docker image, based on the rabbitmq image plus the rabbitmq_peer_discovery_k8s plugin.

"},{"location":"docker-images/rabbitmq/#supported-versions","title":"Supported versions","text":"
  • 3.10 Dockerfile (Security Support until July 2023) - uselagoon/rabbitmq
"},{"location":"docker-images/rabbitmq/#lagoon-adaptions","title":"Lagoon adaptions","text":"

This image is prepared to be used on Lagoon. There are therefore some things already done:

  • Folder permissions are automatically adapted with fix-permissions, so this image will work with a random user.
  • The file /etc/rabbitmq/definitions.json is parsed through envplate with a container-entrypoint.
"},{"location":"docker-images/rabbitmq/#included-rabbitmq-default-schema-definitionsjson","title":"Included RabbitMQ default schema (definitions.json)","text":"
  • To enable support for Mirrored Queues, at least one policy must exist.
  • In the definitions.json schema file, minimal entities are defined to make the container run: virtualhost (vhost), username and password to access the management UI, permissions, and policies.

By default, a policy called lagoon-ha is created at startup, but it is not active because it doesn't match any queue's name pattern (see default Environment Variables).

definitions.json
\"policies\":[\n{\"vhost\":\"${RABBITMQ_DEFAULT_VHOST}\",\"name\":\"lagoon-ha\",\"pattern\":\"${RABBITMQ_DEFAULT_HA_PATTERN}\", \"definition\":{\"ha-mode\":\"exactly\",\"ha-params\":2,\"ha-sync-mode\":\"automatic\",\"ha-sync-batch-size\":5}}\n]\n

By default, ha-mode is set to exactly, which controls the exact number of mirroring nodes for a queue (mirrors). The number of nodes is controlled by ha-params.

For further information and custom configuration, please refer to official RabbitMQ documentation.

"},{"location":"docker-images/rabbitmq/#environment-variables","title":"Environment Variables","text":"

Some options are configurable via environment variables.

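For example, a sketch of overriding the defaults in docker-compose.yml (values are illustrative; avoid committing real credentials):

docker-compose.yml
rabbitmq:
  environment:
    # assumed example values
    RABBITMQ_DEFAULT_USER: lagoon
    RABBITMQ_DEFAULT_PASS: change-me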
Environment Variable | Default | Description
RABBITMQ_DEFAULT_USER | guest | Username for management UI access.
RABBITMQ_DEFAULT_PASS | guest | Password for management UI access.
RABBITMQ_DEFAULT_VHOST | / | RabbitMQ main virtualhost.
RABBITMQ_DEFAULT_HA_PATTERN | ^$ | Regular expression to match for mirrored queues.

"},{"location":"docker-images/redis/","title":"Redis","text":"

Lagoon Redis image Dockerfile, based on the official redis:alpine image.

This Dockerfile is intended to be used to set up a standalone Redis ephemeral server by default.

"},{"location":"docker-images/redis/#supported-versions","title":"Supported versions","text":"
  • 5 (available for compatibility only, no longer officially supported) - uselagoon/redis-5 or uselagoon/redis-5-persistent
  • 6 Dockerfile - uselagoon/redis-6 or uselagoon/redis-6-persistent
  • 7 Dockerfile - uselagoon/redis-7 or uselagoon/redis-7-persistent
"},{"location":"docker-images/redis/#usage","title":"Usage","text":"

There are 2 different flavors of Redis Images: Ephemeral and Persistent.

"},{"location":"docker-images/redis/#ephemeral","title":"Ephemeral","text":"

The ephemeral image is intended to be used as an in-memory cache for applications and will not retain data across container restarts.

When being used as an in-memory (RAM) cache, the first thing you might want to tune if you have large caches is the MAXMEMORY variable. This variable controls the maximum amount of memory (RAM) which Redis will use to store cached items.

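As a sketch, MAXMEMORY could be raised on the Redis service in docker-compose.yml (service name and value are illustrative):

docker-compose.yml
redis:
  environment:
    # assumed example: double the 100mb default for a larger cache
    MAXMEMORY: 200mb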
"},{"location":"docker-images/redis/#persistent","title":"Persistent","text":"

The persistent Redis image will persist data across container restarts and can be used for queues or application data that will need persistence.

We don't typically suggest using a persistent Redis for in-memory cache scenarios as this might have unintended side-effects on your application while a Redis container is restarting and loading data from disk.

"},{"location":"docker-images/redis/#lagoon-adaptions","title":"Lagoon adaptions","text":"

This image is prepared to be used on Lagoon. There are therefore some things already done:

  • Folder permissions are automatically adapted with fix-permissions, so this image will work with a random user.
  • The files within /etc/redis/* are templated using envplate via a container-entrypoint.
"},{"location":"docker-images/redis/#included-redisconf-configuration-file","title":"Included redis.conf configuration file","text":"

The image ships a default Redis configuration file, optimized to work on Lagoon.

"},{"location":"docker-images/redis/#environment-variables","title":"Environment Variables","text":"

Some options are configurable via environment variables.

Environment Variable | Default | Description
DATABASES | -1 | Default number of databases created at startup.
LOGLEVEL | notice | Define the level of logs.
MAXMEMORY | 100mb | Maximum amount of memory.
MAXMEMORYPOLICY | allkeys-lru | The policy to use when evicting keys if Redis reaches its maximum memory usage.
REDIS_PASSWORD | disabled | Enables authentication feature.

"},{"location":"docker-images/redis/#custom-configuration","title":"Custom configuration","text":"

By building on the base image you can include custom configuration. See https://raw.githubusercontent.com/antirez/redis/4.0/redis.conf for full documentation of the Redis configuration file.

"},{"location":"docker-images/redis/#redis-persistent","title":"Redis-persistent","text":"

Based on the Lagoon redis image, the Lagoon redis-persistent Docker image is intended for use when the Redis service must be utilized in persistent mode (i.e. with a persistent volume where keys will be saved to disk).

It differs from redis only with the FLAVOR environment variable, which will use the respective Redis configuration according to the version of redis in use.

"},{"location":"docker-images/redis/#troubleshooting","title":"Troubleshooting","text":"

The Lagoon Redis images all come pre-loaded with the redis-cli command, which allows for querying the Redis service for information and setting config values dynamically. To use this utility, you can simply SSH into your Redis pod by using the instructions here (../using-lagoon-advanced/ssh.md) with redis as the pod value, then run it from the terminal once you've connected.

"},{"location":"docker-images/redis/#maximum-memory-policy","title":"Maximum Memory Policy","text":"

By default, the Lagoon redis images are set to use the allkeys-lru policy. This policy will allow ANY keys stored in Redis to be evicted if/when the Redis service hits its maxmemory limit according to when the key was least recently used.

For typical installations, this is the ideal configuration, as Drupal may not set a TTL value for each key cached in Redis. If the maxmemory-policy is set to something like volatile-lru and Drupal doesn't provide these TTL tags, this would result in the Redis container filling up, being totally unable to evict ANY keys, and ceasing to accept new cache keys at all.

More information on Redis' maxmemory policies can be found in Redis' official documentation.

Proceed with Caution

Changing this setting can lead to Redis becoming completely full and cause outages as a result.

"},{"location":"docker-images/redis/#tuning-redis-maxmemory-value","title":"Tuning Redis' maxmemory value","text":"

Finding the optimal amount of memory to give Redis can be quite a difficult task. Before attempting to tune your Redis cache's memory size, it is prudent to let it run normally for as long as practical, with at least a day of typical usage being the ideal minimum timeframe.

There are a few high level things you can look at when tuning these memory values:

  • The first thing to check is the percentage of memory in use by Redis currently.
    • If this percentage is less than 50%, you might consider lowering the maxmemory value by 25%.
    • If this percentage is between 50% and 75%, things are running just fine.
    • If this value is greater than 75%, then it's worth looking at other variables to see if maxmemory needs to be increased.
  • If you find that your Redis' memory usage percentage is high, the next thing to look at is the number of key evictions.
    • A large number of key evictions and a memory usage greater than 95% is a fairly good indicator that your Redis needs a higher maxmemory setting.
    • If the number of key evictions doesn't seem high and typical response times are reasonable, this is simply indicative of Redis doing its job and managing its allocated memory as expected.
"},{"location":"docker-images/redis/#example-commands","title":"Example commands","text":"

The following commands can be used to view information about the Redis service:

  • View all info about the Redis service: redis-cli info
  • View service memory information: redis-cli info memory
  • View service keyspace information: redis-cli info keyspace
  • View service statistics: redis-cli info stats

It is also possible to set values for the Redis service dynamically without a restart of the Redis service. It is important to note that these dynamically set values will not persist if the pod is restarted (which can happen as a result of a deployment, maintenance, or even just being shuffled from one node to another).

  • Set maxmemory config value dynamically to 500mb: config set maxmemory 500mb
  • Set maxmemory-policy config value dynamically to volatile-lru: config set maxmemory-policy volatile-lru
"},{"location":"docker-images/ruby/","title":"Node.js","text":"

The Lagoon ruby Docker image. Based on the official Ruby Alpine images.

"},{"location":"docker-images/ruby/#supported-versions","title":"Supported Versions","text":"
  • 3.0 Dockerfile (Security Support until March 2024) - uselagoon/ruby-3.0
  • 3.1 Dockerfile (Security Support until March 2025) - uselagoon/ruby-3.1
  • 3.2 Dockerfile (Security Support until March 2026) - uselagoon/ruby-3.2

Tip

We stop updating and publishing EOL Ruby images usually with the Lagoon release that comes after the officially communicated EOL date: https://www.ruby-lang.org/en/downloads/releases/. Previous versions will remain available.

"},{"location":"docker-images/ruby/#lagoon-adaptions","title":"Lagoon adaptions","text":"

The default exposed port of ruby containers is port 3000.

Lagoon has no \"pre-defined\" type for Ruby services, they should be configured with the lagoon.type: generic and a port set with lagoon.port: 3000

"},{"location":"docker-images/ruby/#docker-composeyml-snippet","title":"docker-compose.yml snippet","text":"docker-compose.yml
ruby:
  build:
    # this configures a build from a Dockerfile in the root folder
    context: .
    dockerfile: Dockerfile
  labels:
    # tells Lagoon this is a generic service, configured to expose port 3000
    lagoon.type: generic
    lagoon.port: 3000
  ports:
    # local development only
    # this exposes the port 3000 with a random local port
    # find it with `docker-compose port ruby 3000`
    - "3000"
"},{"location":"docker-images/solr/","title":"Solr","text":"

The Lagoon Solr image Dockerfile. Based on the official solr:<version>-alpine images.

This Dockerfile is intended to be used to set up a standalone Solr server with an initial core mycore.

"},{"location":"docker-images/solr/#supported-versions","title":"Supported Versions","text":"
  • 5.5 (available for compatibility only, no longer officially supported)
  • 6.6 (available for compatibility only, no longer officially supported)
  • 7.7 (available for compatibility only, no longer officially supported)
  • 7 Dockerfile - uselagoon/solr-7
  • 8 Dockerfile - uselagoon/solr-8
"},{"location":"docker-images/solr/#lagoon-adaptions","title":"Lagoon adaptions","text":"

This image is prepared to be used on Lagoon. There are therefore some things already done:

  • Folder permissions are automatically adapted with fix-permissions, so this image will work with a random user.
  • 10-solr-port.sh script to fix and check Solr port.
  • 20-solr-datadir.sh script to check if Solr config is compliant for Lagoon. This sets directory paths, and configures the correct lock type.
"},{"location":"docker-images/solr/#environment-variables","title":"Environment Variables","text":"

Some options are configurable via environment variables.

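For example, a sketch of giving Solr a larger heap in docker-compose.yml (service name and sizes are illustrative); the available options are listed below:

docker-compose.yml
solr:
  environment:
    # assumed example: 1 GB heap instead of the 512M default
    SOLR_JAVA_MEM: "-Xms1g -Xmx1g"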
Environment Variable | Default | Description
SOLR_JAVA_MEM | 512M | Default Java HEAP size (e.g. SOLR_JAVA_MEM="-Xms10g -Xmx10g").
SOLR_DATA_DIR | /var/solr | Path of the Solr data dir. Be careful, changing this can cause data loss!
SOLR_COPY_DATA_DIR_SOURCE | (not set) | Path which the entrypoint script of Solr will use to copy into the defined SOLR_DATA_DIR; this can be used for prepopulating Solr with a core. The script expects actual Solr data files! It only copies data if the destination does not already have a Solr core in it.

"},{"location":"docker-images/varnish/","title":"Varnish","text":"

The Lagoon Varnish Docker images. Based on the official Varnish package.

"},{"location":"docker-images/varnish/#supported-versions","title":"Supported versions","text":"
  • 5 (available for compatibility only, no longer officially supported) - uselagoon/varnish-5
  • 6 Dockerfile - uselagoon/varnish-6
  • 7 Dockerfile - uselagoon/varnish-7
"},{"location":"docker-images/varnish/#included-varnish-modules","title":"Included varnish modules","text":"
  • vmod-dynamic - Dynamic backends from DNS lookups and service discovery from SRV records.
  • vmod-bodyaccess - Varnish vmod that lets you access the request body.
"},{"location":"docker-images/varnish/#lagoon-adaptions","title":"Lagoon adaptions","text":"

This image is prepared to be used on Lagoon. There are therefore some things already done:

  • Folder permissions are automatically adapted with fix-permissions, so this image will work with a random user.
"},{"location":"docker-images/varnish/#included-defaultvcl-configuration-file","title":"Included default.vcl configuration file","text":"

The image ships a default vcl configuration file, optimized to work on Lagoon. Some options are configurable via environment variables (see Environment Variables).

"},{"location":"docker-images/varnish/#environment-variables","title":"Environment Variables","text":"

Some options are configurable via environment variables.

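As a sketch, some of these can be overridden in docker-compose.yml (service name and values are illustrative); the full list follows below:

docker-compose.yml
varnish:
  environment:
    # assumed example: point Varnish at the nginx service and grow the cache
    VARNISH_BACKEND_HOST: nginx
    CACHE_SIZE: 500M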
Environment Variable | Default | Description
VARNISH_BACKEND_HOST | NGINX | Default backend host.
VARNISH_BACKEND_PORT | 8080 | Default listening Varnish port.
VARNISH_SECRET | lagoon_default_secret | Varnish secret used to connect to management.
LIBVMOD_DYNAMIC_VERSION | 5.2 | Default version of vmod-dynamic module.
LIBVMOD_BODYACCESS_VERSION | 5.0 | Default version of vmod-bodyaccess module.
HTTP_RESP_HDR_LEN | 8k | Maximum length of any HTTP backend response header.
HTTP_RESP_SIZE | 32k | Maximum number of bytes of HTTP backend response we will deal with.
NUKE_LIMIT | 150 | Maximum number of objects we attempt to nuke in order to make space for an object body.
CACHE_TYPE | malloc | Type of varnish cache.
CACHE_SIZE | 100M | Cache size.
LISTEN | 8080 | Default backend server port.
MANAGEMENT_LISTEN | 6082 | Default management listening port.

"},{"location":"drupal/","title":"Drupal on Lagoon","text":"

Lagoon was built to host Drupal sites (no, seriously, it was - at least initially!)

In this section you'll find more information on the various services that have been customised for use with Drupal.

"},{"location":"drupal/#drupal_integrations-drupal-scaffolding-package","title":"drupal_integrations Drupal scaffolding package","text":"

The drupal_integrations package, available on Packagist, extends Drupal's core-composer-scaffold for use on Lagoon. It also provides the additional Drush command drush la to retrieve the Drush aliases for your Lagoon project.

"},{"location":"drupal/#lagoon-logs-drupal-module","title":"lagoon-logs Drupal module","text":"

The lagoon_logs module, available on drupal.org, provides zero-configuration logging for Drupal on Lagoon.

"},{"location":"drupal/drush-9/","title":"Drush 9","text":""},{"location":"drupal/drush-9/#aliases","title":"Aliases","text":"

Unfortunately, Drush 9 does not provide the ability to inject dynamic site aliases like Drush 8 did. We are working with the Drush team to implement this again. In the meantime, we have a workaround that allows you to use Drush 9 with Lagoon.

"},{"location":"drupal/drush-9/#basic-idea","title":"Basic Idea","text":"

Drush 9 provides a new command, drush site:alias-convert, which can convert Drush 8-style site aliases over to the Drush 9 YAML site alias style. This will create a one-time export of the site aliases currently existing in Lagoon, and save them in /app/drush/sites. These are then used when running a command like drush sa.

"},{"location":"drupal/drush-9/#preparation","title":"Preparation","text":"

In order to be able to use drush site:alias-convert , you need to do the following:

  • Rename the aliases.drushrc.php inside the drush folder to lagoon.aliases.drushrc.php.
"},{"location":"drupal/drush-9/#generate-site-aliases","title":"Generate Site Aliases","text":"

You can now convert your Drush aliases by running the following command in your project using the cli container:

Generate Site Aliases
docker-compose exec cli drush site:alias-convert /app/drush/sites --yes

It's good practice to commit the resulting YAML files into your Git repository, so that they are in place for your fellow developers.

"},{"location":"drupal/drush-9/#use-site-aliases","title":"Use Site Aliases","text":"

In Drush 9, all site aliases are prefixed with a group. In our case, this is lagoon. You can show all site aliases with their prefix via:

Show all site aliases
drush sa --format=list

and to use them:

Using Drush site alias
drush @lagoon.main ssh
"},{"location":"drupal/drush-9/#update-site-aliases","title":"Update Site Aliases","text":"

If a new environment in Lagoon has been created, you can run drush site:alias-convert to update the site aliases file. If running this command does not update lagoon.site.yml, try deleting lagoon.site.yml first, and then re-run drush site:alias-convert.

"},{"location":"drupal/drush-9/#drush-rsync-from-local-to-remote-environments","title":"Drush rsync from local to remote environments","text":"

If you would like to sync files from a local environment to a remote environment, you need to pass additional parameters:

Drush rsync
drush rsync @self:%files @lagoon.main:%files -- --omit-dir-times --no-perms --no-group --no-owner --chmod=ugo=rwX

This also applies to syncing one remote environment to another, if you're not using the Lagoon tasks UI to copy files between environments.

For example, if you wanted to sync the files from @lagoon.main to @lagoon.dev, and ran drush rsync @lagoon.main @lagoon.dev locally, without the extra parameters, you would probably run into a "Cannot specify two remote aliases" error.

To resolve this, you would first need to SSH into your destination environment drush @lagoon.dev ssh, and then execute the rsync command with parameters similar to the above:

Drush rsync
drush rsync @lagoon.main:%files @self:%files -- --omit-dir-times --no-perms --no-group --no-owner --chmod=ugo=rwX

This is not necessary if you rsync from a remote to a local environment.

Also, we're working with the Drush maintainers to find a way to inject this automatically.

"},{"location":"drupal/first-deployment-of-drupal/","title":"First Deployment of Drupal","text":""},{"location":"drupal/first-deployment-of-drupal/#1-make-sure-you-are-all-set","title":"1. Make sure you are all set","text":"

In order to make your first deployment a successful one, please make sure that your Drupal project is Lagoonized and that you have set up the project in Lagoon. If not, don't worry! Follow the Step-by-Step Guide, which shows you how this works.

"},{"location":"drupal/first-deployment-of-drupal/#2-push","title":"2. Push","text":"

With Lagoon, you create a new deployment by pushing into a branch that is configured to be deployed.

If you don't have any new code to push, don't worry, you can run

Git push
git commit --allow-empty -m "go, go! Power Rangers!"
git push

This will trigger a push, and the Git hosting will inform Lagoon about this push via the configured webhook.

If all is correct, you will see a notification in your configured chat system (this is configured by your friendly Lagoon administrator):

This tells you that Lagoon has just started to deploy your code. Depending on the size of the codebase and amount of containers, this will take a couple of seconds. Just relax. If you'd like to know what's happening now, check out the Build and Deploy Process of Lagoon.

You can also check your Lagoon UI to see the progress of any deployment (your Lagoon administrator has the info).

"},{"location":"drupal/first-deployment-of-drupal/#3-a-fail","title":"3. A fail","text":"

Depending on the post-rollout tasks defined in .lagoon.yml, you might have run some tasks like drush updb or drush cr. These Drush tasks depend on a database existing within the environment, which obviously does not exist yet. Let's fix that! Keep reading.

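For reference, such post-rollout tasks are defined in .lagoon.yml roughly like this (a sketch; the exact task names and commands depend on your project):

.lagoon.yml
tasks:
  post-rollout:
    - run:
        # assumed example tasks for a Drupal site
        name: drush updb
        command: drush -y updb
        service: cli
    - run:
        name: drush cr
        command: drush -y cr
        service: cli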
"},{"location":"drupal/first-deployment-of-drupal/#4-synchronize-local-database-to-the-remote-lagoon-environment","title":"4. Synchronize local database to the remote Lagoon environment","text":"

With full Drush site alias support in Lagoon, you can synchronize a local database with the remote Lagoon environment.

Warning

You may have to tell pygmy about your public keys before the next step.

If you get an error like Permission denied (publickey), check out the documentation here: pygmy - adding ssh keys

First let's make sure that you can see the Drush site aliases:

Get site aliases
drush sa

This should return your just deployed environment (let's assume you just pushed into develop):

Returned site aliases
[drupal-example]cli-drupal:/app$ drush sa
@develop
@self
default

With this we can now synchronize the local database (which is represented in Drush via the site alias @self) with the remote one (@develop):

Drush sql-sync
drush sql-sync @self @develop

You should see something like:

Drush sql-sync results
[drupal-example]cli-drupal:/app$ drush sql-sync @self @develop
You will destroy data in ssh.lagoon.amazeeio.cloud/drupal and replace with data from drupal.
Do you really want to continue? (y/n): y
Starting to dump database on Source.                                                                              [ok]
Database dump saved to /home/drush-backups/drupal/20180227075813/drupal_20180227_075815.sql.gz               [success]
Starting to discover temporary files directory on Destination.                                                    [ok]
You will delete files in drupal-example-develop@ssh.lagoon.amazeeio.cloud:/tmp/drupal_20180227_075815.sql.gz and replace with data from /home/drush-backups/drupal/20180227075813/drupal_20180227_075815.sql.gz
Do you really want to continue? (y/n): y
Copying dump file from Source to Destination.                                                                     [ok]
Starting to import dump file onto Destination database.

Now let's try another deployment, again an empty push:

Git push
git commit --allow-empty -m "go, go! Power Rangers!"
git push

This time all should be green:

Click on the links in the notification, and you should see your Drupal site loaded in all its beauty! It will probably not have images yet, which we will handle in Step 6.

If it is still failing, check the logs link for more information.

"},{"location":"drupal/first-deployment-of-drupal/#5-synchronize-local-files-to-the-remote-lagoon-environment","title":"5. Synchronize local files to the remote Lagoon environment","text":"

You probably guessed it: we can do it with Drush:

Drush rsync
drush rsync @self:%files @develop:%files

It should show you something like:

Drush rsync results
[drupal-example]cli-drupal:/app$ drush rsync @self:%files @develop:%files
You will delete files in drupal-example-develop@ssh.lagoon.amazeeio.cloud:/app/web/sites/default/files and replace with data from /app/web/sites/default/files/
Do you really want to continue? (y/n): y

In some cases, though, it might not look correct, like here:

Drush rsync results
[drupal-example]cli-drupal:/app$ drush rsync @self:%files @develop:%files
You will delete files in drupal-example-develop@ssh.lagoon.amazeeio.cloud:'/app/web/%files' and replace with data from '/app/web/%files'/
Do you really want to continue? (y/n):

The reason for this is that Drupal cannot resolve the path of the files directory, most probably because Drupal is not fully configured or the database is missing. As a workaround you can use drush rsync @self:sites/default/files @develop:sites/default/files, but we suggest that you actually check your local and remote Drupal (you can run drush status to see if the files directory is correctly configured).

"},{"location":"drupal/first-deployment-of-drupal/#6-its-done","title":"6. It's done","text":"

As soon as Lagoon is done building and deploying it will send a second notification to the chat system, like so:

This tells you:

  • Which project has been deployed.
  • Which branch and Git SHA has been deployed.
  • A link to the full logs of the build and deployment.
  • Links to all routes (URLs) where the environment can be reached.

That's it! We hope that wasn't too hard - making DevOps accessible is what we are striving for.

"},{"location":"drupal/first-deployment-of-drupal/#but-wait-how-about-other-branches-or-the-production-environment","title":"But wait, how about other branches or the production environment?","text":"

That's the beauty of Lagoon: it's exactly the same. Push the branch name you defined to be your production branch, and that one will be deployed.

"},{"location":"drupal/first-deployment-of-drupal/#failure-dont-worry","title":"Failure? Don't worry.","text":"

Did the deployment fail? Oh no! But we're here to help:

  1. Click on the logs link in the error notification. It will tell you where in the deployment process the failure happened.
  2. If you can't figure it out, ask your Lagoon administrator, they are here to help!
"},{"location":"drupal/integrate-drupal-and-fastly/","title":"Integrate Drupal & Fastly","text":""},{"location":"drupal/integrate-drupal-and-fastly/#prerequisites","title":"Prerequisites","text":"
  • A Drupal 7, 8 or 9 site
  • A Fastly service ID
  • A Fastly API token with the permission to purge
"},{"location":"drupal/integrate-drupal-and-fastly/#drupal-8-or-9-with-cache-tag-purging","title":"Drupal 8 or 9 with cache tag purging","text":"

Use Composer to get the latest version of the module:

Install Fastly
composer require drupal/fastly drupal/http_cache_control drupal/purge

You will need to enable the following modules:

  • fastly
  • fastlypurger
  • http_cache_control (2.x)
  • purge
  • purge_ui (technically optional, but this is really handy to have enabled on production)
  • purge_processor_lateruntime
  • purge_processor_cron
  • purge_queuer_coretags
  • purge_drush (useful for purge via Drush, here is a list of commands)
"},{"location":"drupal/integrate-drupal-and-fastly/#configure-the-fastly-module-in-drupal","title":"Configure the Fastly module in Drupal","text":"

Configure the Fastly service ID and API token. You can use runtime environment variables, or you can edit the settings form found at /admin/config/services/fastly:

  • FASTLY_API_TOKEN
  • FASTLY_API_SERVICE

A site ID is required; the module will generate one for you when you first install it. The idea behind the site ID is that it is a unique string which is appended as a cache tag on all requests. Thus, you are able to purge a single site from Fastly, even though multiple sites may flow through the same service in Fastly.

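For local development, these could be added to the shared environment block in docker-compose.yml (a sketch; the values are placeholders, and on Lagoon itself you would normally set them as runtime environment variables via the API or UI):

docker-compose.yml
x-environment:
  &default-environment
  # placeholder values - set the real ones as Lagoon runtime environment variables
  FASTLY_API_TOKEN: "your-fastly-api-token"
  FASTLY_API_SERVICE: "your-fastly-service-id"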
"},{"location":"drupal/integrate-drupal-and-fastly/#set-the-purge-options","title":"Set the purge options","text":"
  • Cache tag hash length: 4
  • Purge method: Use soft purge

A 4-character cache tag is plenty for most sites; a 5-character cache tag is likely better for sites with millions of entities (to reduce cache tag collisions).

"},{"location":"drupal/integrate-drupal-and-fastly/#soft-purging-should-be-used-this-means-the-item-in-fastly-is-marked-as-stale-rather-than-being-purged-so-that-it-can-be-used-in-the-event-the-origin-is-down-with-the-feature-serve-while-stale","title":"Soft purging should be used, this means the item in Fastly is marked as stale, rather than being purged so that it can be used in the event the origin is down (with the feature 'serve while stale').","text":""},{"location":"drupal/integrate-drupal-and-fastly/#set-the-stale-content-options","title":"Set the Stale Content Options","text":"

Set the options to what makes sense for your site. Minimum 1 hour (3600), maximum 1 week (604800). Generally something like the following will be fine:

  1. Stale while revalidate - on, 14440 seconds
  2. Stale if error - on, 604800 seconds

Optionally configure the webhooks (so you can ping Slack for instance when a cache purge is sent).

"},{"location":"drupal/integrate-drupal-and-fastly/#configure-the-purge-module","title":"Configure the Purge module","text":"

Visit the purge page /admin/config/development/performance/purge

Set up the following options:

"},{"location":"drupal/integrate-drupal-and-fastly/#cache-invalidation","title":"Cache Invalidation","text":"
  • Drupal Origin: Tag
  • Fastly: E, Tag, URL
"},{"location":"drupal/integrate-drupal-and-fastly/#queue","title":"Queue","text":"
  • Queuers: Core tags queuer, Purge block(s)
  • Queue: Database
  • Processors: Core processor, Late runtime processor, Purge block(s)

What this means is that we will be using Drupal's built-in core tag queuer (adding tags to the queue), the queue will be stored in the database (default), and the queue will be processed by:

  • Cron processor
  • Late runtime processor

In order for the cron processor to run, you need to ensure that cron is running on your site, ideally every minute. You can manually run it in your cli pod to ensure that purge_processor_cron_cron() is being executed without errors.

start cron
[drupal8]production@cli-drupal:/app$ drush cron -v
 ...
 [notice] Starting execution of purge_processor_cron_cron(), execution of node_cron() took 21.16ms.

The Late runtime processor will run in hook_exit() for every page load; this can be useful to process the purges nearly as quickly as they come into the queue.

By having both, you guarantee that purges happen as soon as possible.

"},{"location":"drupal/integrate-drupal-and-fastly/#optimal-cache-header-setup","title":"Optimal Cache Header Setup","text":"

Out of the box, Drupal does not have the power to set different cache lifetimes in the browser vs in Fastly. So if you do set long cache lifetimes in Drupal, end users will often not see them if their browser has cached the page. If you install the 2.x version of the HTTP Cache Control module, you will have a lot more flexibility over what is cached and for how long.

For most sites, a sensible default could be:

  • Shared cache maximum age: 1 month
  • Browser cache maximum age : 10 minutes
  • 404 cache maximum age: 15 minutes
  • 302 cache maximum age: 1 hour
  • 301 cache maximum age: 1 hour
  • 5xx cache maximum age: no cache

Note

This relies on your site having accurate cache tags represented for all the content that exists on the page.

"},{"location":"drupal/integrate-drupal-and-fastly/#viewing-caching-headers-using-curl","title":"Viewing caching headers using cURL","text":"

Use this function (works on Linux and macOS):

cURL function
function curlf() { curl -sLIXGET -H 'Fastly-Debug:1' "$@" | grep -iE 'X-Cache|Cache-Control|Set-Cookie|X-Varnish|X-Hits|Vary|Fastly-Debug|X-Served|surrogate-control|surrogate-key' }
Using cURL
$ curlf https://www.example-site-fastly.com
cache-control: max-age=601, public, s-maxage=2764800
surrogate-control: max-age=2764800, public, stale-while-revalidate=3600, stale-if-error=3600
fastly-debug-path: (D cache-wlg10427-WLG 1612906144) (F cache-wlg10426-WLG 1612906141) (D cache-fra19179-FRA 1612906141) (F cache-fra19122-FRA 1612906141)
fastly-debug-ttl: (H cache-wlg10427-WLG - - 3) (M cache-fra19179-FRA - - 0)
fastly-debug-digest: 1118d9fefc8a514ca49d49cb6ece04649e1acf1663398212650bb462ba84c381
x-served-by: cache-fra19179-FRA, cache-wlg10427-WLG
x-cache: MISS, HIT
x-cache-hits: 0, 1
vary: Cookie, Accept-Encoding

From the above headers we can see that:

  • The HTML page is cacheable
  • Browsers will cache the page for 601 seconds
  • Fastly will cache the page for 32 days (2764800 seconds)
  • Tiered caching is in effect (edge PoP in Wellington, and shield PoP in France)
  • The HTML page was a cache hit at the edge PoP
"},{"location":"drupal/integrate-drupal-and-fastly/#sending-manual-purge-requests-to-fastly","title":"Sending manual purge requests to Fastly","text":"

If you ever want to remove a specific page from cache manually, there are ways to do this.

For a single page, you do not need any authentication:

Single page cURL
curl -Ssi -XPURGE -H 'Fastly-Soft-Purge:1' https://www.example.com/subpage

For cache tags, you need to supply your API token for authentication:

Cache tags
curl -XPOST -H "Fastly-Key:<Fastly API Key>" https://api.fastly.com/service/<serviceID>/purge/<surrogatekey>

You can always find out what your site ID cache tag is by using PHP:

Find site ID cache tag
php > var_dump(substr(base64_encode(md5('bananasite', true)), 0, 4));
string(4) "DTRk"

So you can purge your entire site from Fastly fairly easily.

"},{"location":"drupal/integrate-drupal-and-fastly/#true-client-ips","title":"True client IPs","text":"

We configure Fastly to send the actual client IP back on the HTTP header True-Client-IP. You can make Drupal respect this header with the following changes in settings.php:

settings.php
$settings['reverse_proxy'] = TRUE;
$settings['reverse_proxy_header'] = 'HTTP_TRUE_CLIENT_IP';
"},{"location":"drupal/integrate-drupal-and-fastly/#drush-integration","title":"Drush integration","text":"settings.php
 fastly:\n   fastly:purge:all (fpall)                                                    Purge whole service.\n   fastly:purge:key (fpkey)                                                    Purge cache by key.\n   fastly:purge:url (fpurl)                                                    Purge cache by Url.\n
"},{"location":"drupal/integrate-drupal-and-fastly/#drupal-7-with-url-based-purging","title":"Drupal 7 with URL based purging","text":"
  1. Download and install the Fastly Drupal module.
  2. Configure the Fastly service ID and API token.
  3. Optionally configure the webhooks (so you can ping Slack for instance when a cache purge is sent)
  4. Only URL based purging can be done in Drupal 7 (simple purging).
  5. Alter Drupal's client IP in settings.php:
settings.php
$conf['reverse_proxy_header'] = 'HTTP_TRUE_CLIENT_IP';
"},{"location":"drupal/phpunit-and-phpstorm/","title":"PHPUnit and PhpStorm","text":"

Note

This document assumes the following:

- You are using Docker.

- You are using a standard Amazee/Lagoon project with a docker-compose.yml file.

- You are on a Mac - it should work for other operating systems but folder structure and some configuration settings may be different.

"},{"location":"drupal/phpunit-and-phpstorm/#configuring-the-project","title":"Configuring the project","text":"
  1. Duplicate* the /core/phpunit.xml.dist file to /core/phpunit.xml
  2. Edit* /core/phpunit.xml and fill in the following variables:

    • SIMPLETEST_DB: mysql://drupal:drupal@mariadb:3306/drupal#db
    • SIMPLETEST_BASE_URL: <PROJECT_URL>
"},{"location":"drupal/phpunit-and-phpstorm/#configuring-phpstorm","title":"Configuring PhpStorm","text":""},{"location":"drupal/phpunit-and-phpstorm/#set-up-docker","title":"Set Up Docker","text":"
  1. In PhpStorm, go to File > Settings > Build, Execution, Deployment > Docker
  2. Click: +
  3. Select: Docker for Mac
"},{"location":"drupal/phpunit-and-phpstorm/#set-up-cli-interpreter","title":"Set Up CLI interpreter","text":"

Add a new CLI interpreter:

  1. In PhpStorm, go to File > Settings > Languages & Frameworks > PHP
  2. Click ... and then +
  3. Next select: Add a new CLI interpreter from Docker, vagrant...
  4. Use the following configurations:
    • Server: <DOCKER>
    • Configuration file(s): ./docker-compose.yml
    • Service: cli
    • Lifecycle: Connect to existing container ('docker-compose exec')
  5. Path mappings:
    • Local path: <ROOT_PATH>
    • Remote path*: /app

"},{"location":"drupal/phpunit-and-phpstorm/#set-up-remote-interpreter","title":"Set Up Remote Interpreter","text":"

Add Remote Interpreter:

  1. In PhpStorm, go to File > Settings > Languages & Frameworks > PHP > Test Frameworks
  2. Click + and select PHPUnit by Remote Interpreter
  3. Use the following configurations:
    • CLI Interpreter: <CLI_INTERPRETER>
    • Path mappings*: <PROJECT_ROOT> -> /app
    • PHPUnit: Use Composer autoloader
    • Path to script*: /app/vendor/autoload.php
    • Default configuration file*: /app/web/core/phpunit.xml

"},{"location":"drupal/phpunit-and-phpstorm/#setupconfigure-runner-template","title":"Setup/Configure Runner Template","text":"
  1. Configure runner:
    1. In PhpStorm, go to Run > Edit Configurations... > Templates > PHPUnit
    2. Use the following configurations:

      1. Test scope: Defined in the configuration file

      2. Interpreter: <CLI_INTERPRETER>

Note

If you are not on a Mac, this may vary.

"},{"location":"drupal/phpunit-and-phpstorm/#final-checks","title":"Final checks","text":""},{"location":"drupal/phpunit-and-phpstorm/#some-final-checks-to-run-before-you-run-a-test","title":"Some final checks to run before you run a test!","text":"
  1. You have the project up and running: $ docker-compose up -d
  2. The project is working without any errors, visit the site just to make sure it all works as expected - this is not 100% necessary, but nice to know it is working normally.
  3. We should be ready to run some tests!
"},{"location":"drupal/phpunit-and-phpstorm/#ready-to-run","title":"Ready to Run","text":"

Now that you have the above configuration set up, it should be as straightforward as going to the test you want to run and pressing the green arrow!

Once you press this, PhpStorm will use Docker to enter the CLI container and then start running PHPUnit based on the config.

"},{"location":"drupal/step-by-step-getting-drupal-ready-to-run-on-lagoon/","title":"Step by Step: Getting Drupal ready to run on Lagoon","text":""},{"location":"drupal/step-by-step-getting-drupal-ready-to-run-on-lagoon/#1-lagoon-drupal-setting-files","title":"1. Lagoon Drupal Setting Files","text":"

In order for Drupal to work with Lagoon, we need to teach Drupal about Lagoon and Lagoon about Drupal. This happens by copying specific YAML and PHP files into your Git repository.

If you're working on a Drupal project, you can check out one of the various Drupal example projects in our examples repository. We have Drupal 8 and 9 and some variants of each depending on your needs, such as database types. Clone the repository that best suits your needs to get started!

Here is a summary of the Lagoon- and Drupal-specific files you will find:

  • .lagoon.yml - The main file that will be used by Lagoon to understand what should be deployed and many more things. This file has some sensible Drupal defaults. If you would like to edit or modify, please check the documentation for .lagoon.yml.
  • docker-compose.yml, .dockerignore, and *.dockerfile (or Dockerfile) - These files are used to run your local Drupal development environment; they tell Docker which services to start and how to build them. They contain sensible defaults and many commented lines. We hope that they're well-commented enough to be self-describing. If you would like to find out more, see the documentation for docker-compose.yml.
  • sites/default/* - These .php and .yml files tell Drupal how to communicate with Lagoon containers both locally and in production. They also provide a straightforward system for specific overrides in development and production environments. Unlike other Drupal hosting systems, Lagoon never ever injects Drupal settings files into your Drupal. Therefore, you can edit them however you like. Like all other files, they contain sensible defaults and some commented parts.
  • drush/aliases.drushrc.php - These files are specific to Drush and tell Drush how to talk to the Lagoon GraphQL API in order to learn about all the site aliases that exist.
  • drush/drushrc.php - Some sensible defaults for Drush commands.
"},{"location":"drupal/step-by-step-getting-drupal-ready-to-run-on-lagoon/#update-your-gitignore-settings","title":"Update your .gitignore Settings","text":"

Don't forget to make sure your .gitignore will allow you to commit the settings files.

Drupal ships with sites/*/settings*.php and sites/*/services*.yml in .gitignore. Remove those entries, as with Lagoon we never have sensitive information in the Git repository.
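As a quick sketch, you can check whether those ignore rules are still present before committing:

Check .gitignore
grep -nE 'settings|services' .gitignore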

"},{"location":"drupal/step-by-step-getting-drupal-ready-to-run-on-lagoon/#note-about-webroot-in-drupal-8","title":"Note about WEBROOT in Drupal 8","text":"

Unfortunately the Drupal community has not decided on a standardized WEBROOT folder name. Some projects put Drupal within web, and others within docroot or somewhere else. The Lagoon Drupal settings files assume that your Drupal is within web, but if this is different for your Drupal, please adapt the files accordingly.

"},{"location":"drupal/step-by-step-getting-drupal-ready-to-run-on-lagoon/#note-about-composerjson","title":"Note about composer.json","text":"

If you installed Drupal via Composer, please check your composer.json and make sure that the name is NOT drupal/drupal, as this can confuse Drush and other tools in the Drupal universe. Rename it to something like myproject/drupal.

"},{"location":"drupal/step-by-step-getting-drupal-ready-to-run-on-lagoon/#2-customize-docker-composeyml","title":"2. Customize docker-compose.yml","text":"

Don't forget to customize the values in lagoon-project & LAGOON_ROUTE with your site-specific name & the URL you'd like to access the site with. Here's an example:

docker-compose.yml
x-environment:\n&default-environment\nLAGOON_PROJECT: *lagoon-project\n# Route that should be used locally. If you are using pygmy, this route *must* end with .docker.amazee.io.\nLAGOON_ROUTE: http://drupal-example.docker.amazee.io\n
"},{"location":"drupal/step-by-step-getting-drupal-ready-to-run-on-lagoon/#3-build-images","title":"3. Build Images","text":"

First, we need to build the defined images:

Build images
docker-compose build\n

This will tell docker-compose to build the Docker images for all containers that have a build: definition in the docker-compose.yml. Usually for Drupal this is the case for the cli, nginx and php images. We do this because we want to run specific build commands (like composer install) or inject specific environment variables (like WEBROOT) into the images.

Usually, building is not necessary every time you edit your Drupal code (as the code is mounted into the containers from your host), but rebuilding does not hurt. Plus, Lagoon will build the exact same Docker images during a deploy, so you can check that your build will also work during a deployment by just running docker-compose build again.

"},{"location":"drupal/step-by-step-getting-drupal-ready-to-run-on-lagoon/#4-start-containers","title":"4. Start Containers","text":"

Now that the images are built, we can start the containers:

Start containers
docker-compose up -d\n

This will bring up all containers. After the command is done, you can check with docker-compose ps to ensure that they are all fully up and have not crashed. If there is a problem, check the logs with docker-compose logs -f [servicename].

"},{"location":"drupal/step-by-step-getting-drupal-ready-to-run-on-lagoon/#5-rerun-composer-install-for-composer-projects-only","title":"5. Rerun composer install (for Composer projects only)","text":"

In a local development environment, you probably want all dependencies downloaded and installed, so connect to the cli container and run composer install:

Run composer install in CLI
docker-compose exec cli bash\ncomposer install\n

This might sound weird, as there was already a composer install executed during the build step, so let us explain:

  • In order to be able to edit files on the host and have them immediately available in the container, the default docker-compose.yml mounts the whole folder into the containers (this happens with .:/app:delegated in the volumes section). This also means that all dependencies installed during the Docker build are overwritten with the files on the host.
  • Locally, you probably want dependencies defined as require-dev in composer.json to exist as well, while on a production deployment they would just use unnecessary space. So we run composer install --no-dev in the Dockerfile and composer install manually, as sketched below.
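A minimal sketch of the two invocations side by side:

Dev vs. production install
# Inside the Dockerfile (production image): skip require-dev packages.
composer install --no-dev
# Locally, inside the cli container: install everything, including require-dev.
docker-compose exec cli composer install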

If everything went well, open the LAGOON_ROUTE defined in docker-compose.yml (for example http://drupal.docker.amazee.io) and you should be greeted by a nice Drupal error. Don't worry - that's OK right now; the most important thing is that it tries to load a Drupal site.

If you get a 500 or similar error, make sure everything loaded properly with Composer.

"},{"location":"drupal/step-by-step-getting-drupal-ready-to-run-on-lagoon/#6-check-status-and-install-drupal","title":"6. Check Status and Install Drupal","text":"

Finally, it's time to install Drupal, but just before that we want to make sure everything works. We suggest using Drush for that:

Drush status
docker-compose exec cli bash\ndrush status\n

This should return something like:

Drush status result
[drupal-example]cli-drupal:/app$ drush status\n[notice] Missing database table: key_value\nDrupal version       :  8.6.1\nSite URI             :  http://drupal.docker.amazee.io\nDatabase driver      :  mysql\nDatabase hostname    :  mariadb\nDatabase port        :  3306\nDatabase username    :  drupal\nDatabase name        :  drupal\nPHP binary           :  /usr/local/bin/php\nPHP config           :  /usr/local/etc/php/php.ini\nPHP OS               :  Linux\nDrush script         :  /app/vendor/drush/drush/drush\nDrush version        :  9.4.0\nDrush temp           :  /tmp\nDrush configs        :  /home/.drush/drush.yml\n                        /app/vendor/drush/drush/drush.yml\nDrupal root          :  /app/web\nSite path            :  sites/default\n

Warning

You may have to tell pygmy about your public key before the next step.

If you get an error like Permission denied (publickey), check out the documentation here: pygmy - adding ssh keys
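As a sketch, assuming a default key location (the subcommand may differ between pygmy versions - check the pygmy documentation linked above):

Add key to pygmy
pygmy addkey ~/.ssh/id_rsa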

Now it is time to install Drupal (if you would rather import an existing SQL file, please skip to step 7 - but we suggest you start with a clean Drupal installation to be sure everything works).

Install Drupal
drush site-install\n

This should output something like:

drush site-install
[drupal-example]cli-drupal:/app$ drush site-install\nYou are about to DROP all tables in your 'drupal' database. Do you want to continue? (y/n): y\nStarting Drupal installation. This takes a while. Consider using the --notify global option.\nInstallation complete.  User name: admin  User password: a7kZJekcqh\nCongratulations, you installed Drupal!\n

Now you can visit the URL defined in LAGOON_ROUTE and you should see a fresh and clean installed Drupal site - Congrats!

"},{"location":"drupal/step-by-step-getting-drupal-ready-to-run-on-lagoon/#7-import-existing-database-dump","title":"7. Import existing Database Dump","text":"

If you already have an existing Drupal site, you probably want to import its database over to your local site.

There are many different ways to create a database dump. If your current hosting provider has Drush installed, you can use the following:

Drush sql-dump
drush sql-dump --result-file=dump.sql\n\nDatabase dump saved to dump.sql\n

Now you have a dump.sql file that contains your whole database.

Copy this file into your Git repository and connect to the cli, and you should see the file in there:

Viewing dump.sql
[drupal-example]cli-drupal:/app$ ls -l dump.sql\n-rw-r--r--    1 root     root          5281 Dec 19 12:46 dump.sql\n

Now you can drop the current database, and then import the dump.

Import dump.sql
drush sql-drop\n\ndrush sql-cli < dump.sql\n

Verify that everything works by visiting the URL of your project. You should have a functional copy of your Drupal site!

"},{"location":"drupal/step-by-step-getting-drupal-ready-to-run-on-lagoon/#8-drupal-files-directory","title":"8. Drupal files directory","text":"

A Drupal site also needs the files directory. As the whole folder is mounted into the Docker containers, add the files into the correct folder (probably web/sites/default/files, sites/default/files or something similar). Remember what you've set as your WEBROOT - it may not be the same for all projects.
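If the files live on a remote site that Drush can reach, a hypothetical sketch using Drush site aliases (the @prod alias is an assumption - use whatever alias your setup defines):

Sync files via Drush
drush rsync @prod:%files @self:%files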

"},{"location":"drupal/step-by-step-getting-drupal-ready-to-run-on-lagoon/#9-done","title":"9. Done","text":"

You are done with your local setup. The Lagoon team wishes you happy Drupaling!

"},{"location":"drupal/subfolders/","title":"Subfolders","text":"

An example could be: www.example.com points to one Drupal site, while www.example.com/blog loads a blog built in another Drupal.

It would be possible to run both Drupals in a single Git repository and deploy it as a whole, but this workflow might not fit every team, and having separate Git repositories fits some situations better.

"},{"location":"drupal/subfolders/#modifications-of-root-application","title":"Modifications of root application","text":"

The root application (in this example, the Drupal site for www.example.com), needs a couple of NGINX configs that will configure NGINX to be a reverse proxy to the subfolder applications:

"},{"location":"drupal/subfolders/#location_prependconf","title":"location_prepend.conf","text":"

Create a file called location_prepend.conf in the root of your Drupal installation:

location_prepend.conf
resolver 8.8.8.8 valid=30s;\n\nlocation ~ ^/subfolder {\n  # If $http_x_forwarded_proto is empty (if it is not set by an upstream reverse proxy),\n  # set it to the current scheme.\n  set_if_empty $http_x_forwarded_proto $scheme;\n\n  proxy_set_header      X-Forwarded-For $proxy_add_x_forwarded_for;\n  proxy_set_header      X-Forwarded-Proto $scheme;\n  proxy_set_header      X-Forwarded-Proto $http_x_forwarded_proto;\n  proxy_set_header      X-Lagoon-Forwarded-Host $host;\n  # Will be used by downstream to know the original host.\n  proxy_set_header      X-REVERSEPROXY $hostname;\n  proxy_set_header      FORWARDED "";\n  # Unset FORWARDED because drupal8 gives errors if it is set.\n  proxy_set_header      Proxy "";\n  # Unset Proxy because drupal8 gives errors if it is set.\n  proxy_ssl_server_name on;\n\n  # NGINX needs a variable set in order for the DNS resolution to work correctly.\n  set                   $subfolder_drupal_host "https://nginx-lagoonproject-${LAGOON_GIT_SAFE_BRANCH}.clustername.com:443";\n  # LAGOON_GIT_SAFE_BRANCH variable will be replaced during docker entrypoint.\n  proxy_pass            $subfolder_drupal_host;\n  proxy_set_header      Host $proxy_host;\n  # $proxy_host will be automatically generated by NGINX based on proxy_pass (it needs to be without scheme and port).\n\n  expires off; # make sure we honor cache headers from the proxy and not overwrite them\n}\n

Replace the following strings:

  • /subfolder with the name of the subfolder you want to use. For example, /blog.
  • nginx with the service that you want to point to in the subfolder project.
  • lagoonproject with the Lagoon project name of the subfolder project (see the sketch below).
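As a sketch, the replacements can be scripted with GNU sed (the /blog and myproject values are examples):

Adapt location_prepend.conf
sed -i 's|/subfolder|/blog|g; s|lagoonproject|myproject|g' location_prepend.conf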
"},{"location":"drupal/subfolders/#nginx-dockerfile","title":"NGINX Dockerfile","text":"

Add the following to your NGINX Dockerfile (nginx.dockerfile or Dockerfile.nginx):

nginx.dockerfile
COPY location_prepend.conf /etc/nginx/conf.d/drupal/location_prepend.conf\nRUN fix-permissions /etc/nginx/conf.d/drupal/*\n
"},{"location":"drupal/subfolders/#modifications-of-subfolder-application","title":"Modifications of subfolder application","text":"

Like the root application, we also need to teach the subfolder application (in this example, the Drupal installation for www.example.com/blog), that it is running under a subfolder. To do this, we create two files:

"},{"location":"drupal/subfolders/#location_drupal_append_subfolderconf","title":"location_drupal_append_subfolder.conf","text":"

Create a file called location_drupal_append_subfolder.conf in the root of your subfolder Drupal installation:

location_drupal_append_subfolder.conf
# When injecting a script name that is prefixed with `subfolder`, Drupal will\n# render all URLs with `subfolder` prefixed\nfastcgi_param  SCRIPT_NAME        /subfolder/index.php;\n\n# If we are running via a reverse proxy, we inject the original HOST URL\n# into PHP. With this Drupal will render all URLs with the original HOST URL,\n# and not the current used HOST.\n\n# We first set the HOST to the regular host variable.\nfastcgi_param  HTTP_HOST          $http_host;\n# Then we overwrite it with `X-Lagoon-Forwarded-Host` if it exists.\nfastcgi_param  HTTP_HOST          $http_x_lagoon_forwarded_host if_not_empty;\n

Replace /subfolder with the name of the subfolder you want to use. For example, /blog.

"},{"location":"drupal/subfolders/#server_prepend_subfolderconf","title":"server_prepend_subfolder.conf","text":"

Create a file called server_prepend_subfolder.conf in the root of your subfolder Drupal installation:

server_prepend_subfolder.conf
# Check for redirects before we do the internal NGINX rewrites.\n# This is done because the internal NGINX rewrites uses `last`,\n# which instructs NGINX to not check for rewrites anymore (and\n# `if` is part of the redirect module).\ninclude /etc/nginx/helpers/010_redirects.conf;\n\n# This is an internal NGINX rewrite, it removes `/subfolder/`\n# from the requests so that NGINX handles the request as it would\n# have been `/` from the beginning.\n# The `last` flag is also important. It will cause NGINX not to\n# execute any more rewrites, because it would redirect forever\n# with the rewrites below.\nrewrite ^/subfolder/(.*)          /$1             last;\n\n# Make sure redirects are NOT absolute, to ensure NGINX does not\n# overwrite the host of the URL - which could be something other than\n# what NGINX currently thinks it is serving.\nabsolute_redirect off;\n\n# If a request just has `/subfolder` we 301 redirect to `/subfolder/`\n# (Drupal really likes a trailing slash)\nrewrite ^/subfolder               /subfolder/     permanent;\n\n# Any other request we prefix 301 redirect with `/subfolder/`\nrewrite ^\\/(.*)                   /subfolder/$1   permanent;\n

Replace /subfolder with the name of the subfolder you want to use. For example, /blog.

"},{"location":"drupal/subfolders/#nginx-dockerfile_1","title":"NGINX Dockerfile","text":"

We also need to modify the NGINX Dockerfile.

Add the following to your NGINX Dockerfile (nginx.dockerfile or Dockerfile.nginx):

nginx.dockerfile
COPY location_drupal_append_subfolder.conf /etc/nginx/conf.d/drupal/location_drupal_append_subfolder.conf\nCOPY server_prepend_subfolder.conf /etc/nginx/conf.d/drupal/server_prepend_subfolder.conf\nRUN fix-permissions /etc/nginx/conf.d/drupal/*\n
"},{"location":"drupal/services/","title":"Services","text":""},{"location":"drupal/services/#mariadb-is-the-open-source-successor-to-mysql","title":"MariaDB is the open-source successor to MySQL","text":"

Learn about MariaDB with Drupal

Documentation on the MariaDB-Drupal image.

Documentation on the plain MariaDB image (the MariaDB-Drupal image is built on this).

"},{"location":"drupal/services/#redis-is-a-fast-open-source-in-memory-key-value-data-store-for-use-as-a-database-cache-message-broker-and-queue","title":"Redis is a fast, open-source, in-memory key-value data store for use as a database, cache, message broker, and queue","text":"

Learn about Redis with Drupal.

Documentation on the Redis-persistent image.

"},{"location":"drupal/services/#solr-is-an-open-source-search-platform","title":"Solr is an open-source search platform","text":"

Learn about Solr with Drupal.

Documentation on the Solr-Drupal image.

Documentation on the plain Solr image (the Solr-Drupal image is built on this).

"},{"location":"drupal/services/#varnish-is-a-powerful-open-source-http-engine-and-reverse-http-proxy-that-helps-to-speed-up-your-website","title":"Varnish is a powerful, open-source HTTP engine and reverse HTTP proxy that helps to speed up your website","text":"

Learn about Varnish with Drupal

Documentation on the Varnish-Drupal image.

Documentation on the plain Varnish image (the Varnish-Drupal image is built on this).

"},{"location":"drupal/services/mariadb/","title":"MariaDB-Drupal","text":"

The Lagoon mariadb-drupal Docker image Dockerfile is a customized mariadb image for use within Drupal projects in Lagoon. It differs from the mariadb image only in its initial database setup, which is controlled by some environment variables:

| Environment Variable | Default | Description |
| --- | --- | --- |
| MARIADB_DATABASE | drupal | Drupal database created at startup. |
| MARIADB_USER | drupal | Default user created at startup. |
| MARIADB_PASSWORD | drupal | Password of default user created at startup. |

If the LAGOON_ENVIRONMENT_TYPE variable is set to production, performance settings are adjusted accordingly by using MARIADB_INNODB_BUFFER_POOL_SIZE=1024 and MARIADB_INNODB_LOG_FILE_SIZE=256.

"},{"location":"drupal/services/mariadb/#additional-mariadb-logging","title":"Additional MariaDB Logging","text":"

During the course of development, it may be necessary to enable either query logging or slow query logging. To do so, set the environment variables MARIADB_LOG_SLOW or MARIADB_LOG_QUERIES. This can be done in docker-compose.yml.

"},{"location":"drupal/services/mariadb/#connecting-to-mysql-container-from-the-host","title":"Connecting to MySQL container from the host","text":"

If you would like to connect to your MySQL database inside the Docker container with an external tool like Sequel Pro, MySQL Workbench, HeidiSQL, DBeaver, plain old mysql-cli or anything else, here's how to get the IP and port info.

"},{"location":"drupal/services/mariadb/#get-published-mysql-port-from-the-container","title":"Get published MySQL port from the container","text":"

By default, Docker assigns a randomly published port for MySQL during each container start. This is done to prevent port collisions.

To get the published port via docker:

Run: docker port [container_name].

Get port
$ docker port drupal_example_mariadb_1\n3306/tcp -> 0.0.0.0:32797\n

Or via docker-compose inside a Drupal repository:

Run: docker-compose port [service_name] [internal_port].

Get port
docker-compose port mariadb 3306\n0.0.0.0:32797\n
"},{"location":"drupal/services/mariadb/#setting-a-static-port-not-recommended","title":"Setting a static port (not recommended)","text":"

During development, if you are using an external database tool, it may become cumbersome to continually check and set the MySQL connection port.

To set a static port, edit your service definition in your docker-compose.yml.

docker-compose.yml
  mariadb:\n...\nports:\n- "33772:3306" # Exposes container port 3306 on host port 33772. Note that by doing this you are responsible for managing port collisions.\n

Warning

By setting a static port you become responsible for managing port collisions.

"},{"location":"drupal/services/mariadb/#connect-to-mysql","title":"Connect to MySQL","text":"

Now you can use these details to connect to whatever database management tool you'd like.
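For example, with the plain mysql CLI, using the connection details from the table below (the published port 32797 is the example from above; on Linux, use the container IP instead of docker.amazee.io):

Connect with mysql CLI
mysql -h docker.amazee.io -P 32797 -u drupal -pdrupal drupal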

|  | Linux | OS X |
| --- | --- | --- |
| IP/Host | IP from container | docker.amazee.io |
| Port | Published port from container | Published port from container |
| Username | drupal | drupal |
| Password | drupal | drupal |
| Database | drupal | drupal |

"},{"location":"drupal/services/nginx/","title":"NGINX-Drupal","text":"

The Lagoon nginx-drupal Docker image is optimized to work with Drupal and is based on the Lagoon nginx image.

"},{"location":"drupal/services/nginx/#lagoon-adaptions","title":"Lagoon adaptions","text":"

This image is prepared to be used on Lagoon. There are therefore some things already done:

  • Folder permissions are automatically adapted with fix-permissions, so this image will work with a random user.
  • To keep the drupal.conf configuration file as clean and customizable as possible, we added include directives in the main sections of the file: server, location /, location @drupal and location @php.
  • Further information is in the Drupal.conf customization section.
"},{"location":"drupal/services/nginx/#included-drupal-configuration-drupalconf","title":"Included Drupal configuration (drupal.conf)","text":"

The image includes a full working NGINX configuration for Drupal 7, 8 and 9. It includes some extra functionality such as:

  • Support for humanstxt Drupal module.
  • Support for robotstxt Drupal module.
  • Disallow access to vagrant directory for local development.
"},{"location":"drupal/services/nginx/#drupalconf-customization","title":"Drupal.conf customization","text":"

The drupal.conf file is a customized version of the nginx configuration file, optimized for Drupal. Customers have different ways of customizing it:

  • Modifying it (hard to support in case of errors).
  • Using built-in customization through *.conf files.

The drupal.conf file is divided into several sections. The sections we've included in our customizations are:

  • server
  • location /
  • location @drupal
  • location @php.

For each of these sections, there are two includes:

  • *_prepend.conf
  • *_append.conf

Here is what the location @drupal section looks like:

drupal.conf
location @drupal {\ninclude /etc/nginx/conf.d/drupal/location_drupal_prepend*.conf;\ninclude        /etc/nginx/fastcgi.conf;\nfastcgi_param  SCRIPT_NAME        /index.php;\nfastcgi_param  SCRIPT_FILENAME    $realpath_root/index.php;\nfastcgi_pass   ${NGINX_FASTCGI_PASS:-php}:9000;\ninclude /etc/nginx/conf.d/drupal/location_drupal_append*.conf;\n}\n

This configuration allows customers to create files called location_drupal_prepend.conf and location_drupal_append.conf, where they can put all the configuration they want to insert before and after the other statements.

Those files, once created, MUST exist in the nginx container, so add them to Dockerfile.nginx like so:

dockerfile.nginx
COPY location_drupal_prepend.conf /etc/nginx/conf.d/drupal/location_drupal_prepend.conf\nRUN fix-permissions /etc/nginx/conf.d/drupal/location_drupal_prepend.conf\n
"},{"location":"drupal/services/nginx/#drupal-core-statistics-module-configuration","title":"Drupal Core Statistics Module Configuration","text":"

If you're using the core Statistics module, you may run into an issue that needs a quick configuration change.

With the default NGINX configuration, the request to the tracking endpoint /core/modules/statistics/statistics.php is denied (404).

This is related to the default NGINX configuration:

drupal.conf
location ~* ^.+\\.php$ {\n    try_files /dev/null @drupal;\n}\n

To fix the issue, we instead define a specific location rule and inject this as a location prepend configuration:

drupal.conf
## Allow access to the statistics endpoint.\nlocation ~* ^(/core/modules/statistics/statistics.php) {\n      try_files /dev/null @php;\n}\n

And copy this during the NGINX container build:

dockerfile.nginx
# Add specific Drupal statistics module NGINX configuration.\nCOPY .lagoon/nginx/location_prepend_allow_statistics.conf /etc/nginx/conf.d/drupal/location_prepend_allow_statistics.conf\n
"},{"location":"drupal/services/php-cli/","title":"PHP-CLI-Drupal","text":"

The Lagoon php-cli-drupal Docker image is optimized to work with Drupal. It is based on the Lagoon php-cli image, and has all the command line tools needed for the daily maintenance of a Drupal website:

  • drush
  • drupal console
  • drush launcher (which will fall back to Drush 8 if no site-installed Drush is found)
"},{"location":"drupal/services/php-cli/#supported-versions","title":"Supported versions","text":"
  • 7.3 (available for compatibility only, no longer officially supported)
  • 7.4 Dockerfile - uselagoon/php-7.4-cli-drupal
  • 8.0 Dockerfile - uselagoon/php-8.0-cli-drupal
  • 8.1 Dockerfile - uselagoon/php-8.1-cli-drupal

All PHP versions use their own Dockerfiles.

"},{"location":"drupal/services/php-cli/#lagoon-adaptions","title":"Lagoon adaptions","text":"

This image is prepared to be used on Lagoon. There are therefore some things already done:

  • Folder permissions are automatically adapted with fix-permissions, so this image will work with a random user.
"},{"location":"drupal/services/redis/","title":"Redis","text":"

We recommend using Redis for internal caching. Add the Redis service to docker-compose.yml:

docker-compose.yml
  redis:\nimage: uselagoon/redis-5\nlabels:\nlagoon.type: redis\n<< : *default-user # Uses the defined user from top.\nenvironment:\n<< : *default-environment\n

Also, to configure Redis, add the following to your settings.php.

"},{"location":"drupal/services/redis/#drupal-7","title":"Drupal 7","text":"settings.php
  if(getenv('LAGOON')){\n    $conf['redis_client_interface'] = 'PhpRedis';\n    $conf['redis_client_host'] = 'redis';\n    $conf['lock_inc'] = 'sites/all/modules/contrib/redis/redis.lock.inc';\n    $conf['path_inc'] = 'sites/all/modules/contrib/redis/redis.path.inc';\n    $conf['cache_backends'][] = 'sites/all/modules/contrib/redis/redis.autoload.inc';\n    $conf['cache_default_class'] = 'Redis_Cache';\n    $conf['cache_class_cache_form'] = 'DrupalDatabaseCache';\n    $conf['cache_class_cache_field'] = 'DrupalDatabaseCache';\n  }\n

Depending on file system structure, the module paths may need to be updated.
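To quickly confirm that the cli container can reach Redis at all, a sketch assuming the phpredis extension is available in the cli image and the service is named redis:

Check Redis connectivity
docker-compose exec cli php -r 'var_dump((new Redis())->connect("redis", 6379));'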

"},{"location":"drupal/services/redis/#drupal-8","title":"Drupal 8","text":"

The Drupal 8 config is largely stock. Notably, Redis is disabled while Drupal is being installed.

settings.php
if (getenv('LAGOON')){\n  $settings['redis.connection']['interface'] = 'PhpRedis';\n  $settings['redis.connection']['host'] = getenv('REDIS_HOST') ?: 'redis';\n  $settings['redis.connection']['port'] = getenv('REDIS_SERVICE_PORT') ?: '6379';\n  $settings['cache_prefix']['default'] = getenv('LAGOON_PROJECT') . '_' . getenv('LAGOON_GIT_SAFE_BRANCH');\n  // Do not set the cache during installations of Drupal.\n  if (!drupal_installation_attempted() && extension_loaded('redis')) {\n    $settings['cache']['default'] = 'cache.backend.redis';\n    // And allows to use it without the Redis module being enabled.\n    $class_loader->addPsr4('Drupal\\\\redis\\\\', 'modules/contrib/redis/src');\n    $settings['bootstrap_container_definition'] = [\n      'parameters' => [],\n      'services' => [\n        'redis.factory' => [\n          'class' => 'Drupal\\redis\\ClientFactory',\n        ],\n        'cache.backend.redis' => [\n          'class' => 'Drupal\\redis\\Cache\\CacheBackendFactory',\n          'arguments' => ['@redis.factory', '@cache_tags_provider.container', '@serialization.phpserialize'],\n        ],\n        'cache.container' => [\n          'class' => '\\Drupal\\redis\\Cache\\PhpRedis',\n          'factory' => ['@cache.backend.redis', 'get'],\n          'arguments' => ['container'],\n        ],\n        'cache_tags_provider.container' => [\n          'class' => 'Drupal\\redis\\Cache\\RedisCacheTagsChecksum',\n          'arguments' => ['@redis.factory'],\n        ],\n        'serialization.phpserialize' => [\n          'class' => 'Drupal\\Component\\Serialization\\PhpSerialize',\n        ],\n      ],\n    ];\n  }\n}\n
"},{"location":"drupal/services/redis/#persistent","title":"Persistent","text":"

Redis can also be configured as a persistent backend.

docker-compose.yml
redis:\nimage: uselagoon/redis-5-persistent\nlabels:\nlagoon.type: redis-persistent\nenvironment:\n<< : *default-environment\n
"},{"location":"drupal/services/redis/#environment-variables","title":"Environment Variables","text":"

Environment variables are meant to store some common information about Redis.
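The table below lists the defaults. To verify the values a running container actually applied, a quick check (assuming the service is named redis):

Inspect Redis settings
docker-compose exec redis redis-cli config get maxmemory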

| Environment Variable | Default | Description |
| --- | --- | --- |
| LOGLEVEL | notice | Redis log level |
| DATABASES | 1 | Number of databases |
| MAXMEMORY | 100mb | Maximum memory usage of Redis |

"},{"location":"drupal/services/redis/#redis-failover","title":"Redis Failover","text":"

Here is a snippet that implements a Redis failover in case the Redis container is not available (for example, during maintenance).

The following is inserted into Drupal's active settings.php file.

settings.php
if (getenv('LAGOON')) {\n  $contrib_path = is_dir('sites/all/modules/contrib') ? 'sites/all/modules/contrib' : 'sites/all/modules';\n  $redis = DRUPAL_ROOT . '/sites/all/modules/contrib/redis';\n  if (file_exists(\"$redis/redis.module\")) {\n    require_once \"$redis/redis.module\";\n    $conf['redis_client_host'] = getenv('REDIS_HOST') ?: 'redis';\n    $conf['redis_client_port'] = getenv('REDIS_SERVICE_PORT') ?: 6379;\n    $conf['cache_prefix'] = getenv('REDIS_CACHE_PREFIX') ?: getenv('LAGOON_PROJECT') . '_' . getenv('LAGOON_GIT_SAFE_BRANCH');\n    try {\n      // Ensure that there is a connection to redis.\n      $client = Redis_Client::getClient();\n      $response = $client->ping();\n      if (!$response) {\n      throw new \\Exception('Redis could be reached but is not responding correctly.');\n      }\n      $conf['redis_client_interface'] = 'PhpRedis';\n      $conf['lock_inc'] = $contrib_path . '/redis/redis.lock.inc';\n      $conf['path_inc'] = $contrib_path . '/redis/redis.path.inc';\n      $conf['cache_backends'][] = $contrib_path . '/redis/redis.autoload.inc';\n      $conf['cache_default_class'] = 'Redis_Cache';\n    } catch (\\Exception $e) {\n      // Redis is not available for this request we should not configure the\n      // redis backend and ensure no cache is used. This will retry next\n      // request.\n      if (!class_exists('DrupalFakeCache')) {\n        $conf['cache_backends'][] = 'includes/cache-install.inc';\n      }\n      $conf['cache_default_class'] = 'DrupalFakeCache';\n    }\n  }\n}\n
"},{"location":"drupal/services/solr/","title":"Solr-Drupal","text":""},{"location":"drupal/services/solr/#standard-use","title":"Standard use","text":"

For Solr 5.5, 6.6 and 7.7, we ship the default schema files provided by the search_api_solr Drupal module. Add the Solr version you would like to use in your docker-compose.yml file, following our example.

"},{"location":"drupal/services/solr/#custom-schema","title":"Custom schema","text":"

To implement schema customizations for Solr in your project, look to how Lagoon creates our standard images.

  • In the solr section of your docker-compose.yml file, replace image: amazeeio/solr:7.7 with:
docker-compose.yml
  build:\ncontext: .\ndockerfile: solr.dockerfile\n
  • Place your schema files in your code repository. We typically like to use .lagoon/solr.
  • Create a solr.dockerfile.
solr.dockerfile
FROM amazeeio/solr:7.7\n\nCOPY .lagoon/solr /solr-conf/conf\n\nRUN precreate-core drupal /solr-conf\n\nCMD [\"solr-foreground\"]\n

The goal is to have your Solr configuration files exist at /solr-conf/conf in the image you are building.

"},{"location":"drupal/services/solr/#multiple-cores","title":"Multiple cores","text":"

To implement multiple cores, you will also need to ship your own Solr schema as above. The only change needed is to the CMD of the Dockerfile - repeat the pattern precreate-core corename /solr-conf; for each core you require.

solr.dockerfile
FROM amazeeio/solr:7.7-drupal\n\nRUN precreate-core drupal-index1 /solr-conf && \\\nprecreate-core drupal-index2 /solr-conf && \\\nprecreate-core drupal-index3 /solr-conf\n\nCMD [\"solr-foreground\"]\n
"},{"location":"drupal/services/varnish/","title":"Varnish","text":"

We suggest using Drupal with a Varnish reverse proxy. Lagoon provides a varnish-drupal Docker image that has Varnish already configured with a Drupal Varnish config.

This Varnish config does the following:

  • It understands Drupal session cookies and automatically disables the Varnish caching for any authenticated request.
  • It automatically caches any assets (images, CSS, JS, etc.) for one month, and also sends this header to the browser, so browsers cache the assets as well. This happens for authenticated and non-authenticated requests.
  • It has support for BAN and URIBAN which is used by the Drupal 8 purge module.
  • It removes utm_ and gclid from the URL parameters to prevent Google Analytics links from creating multiple cache objects.
  • Many other good things - just check out the drupal.vcl.
"},{"location":"drupal/services/varnish/#usage-with-drupal-8","title":"Usage with Drupal 8","text":"

TL;DR: Check out the drupal8-advanced example in our examples repo, it ships with the needed modules and needed Drupal configuration.

Note: many of these examples are on the same drupal-example-simple repo, but different branches/hashes. Be sure to get the exact branch from the examples list!

"},{"location":"drupal/services/varnish/#install-purge-and-varnish-purge-modules","title":"Install Purge and Varnish Purge modules","text":"

In order to fully use Varnish with Drupal 8 cache tags, you need to install the Purge and Varnish Purge modules. They ship with many submodules. We suggest installing at least the following:

  • purge
  • purge_drush
  • purge_tokens
  • purge_ui
  • purge_processor_cron
  • purge_processor_lateruntime
  • purge_queuer_coretags
  • varnish_purger
  • varnish_purge_tags

Grab them all at once:

Install Purge and Varnish Purge
composer require drupal/purge drupal/varnish_purge\n\ndrush en purge purge_drush purge_tokens purge_ui purge_processor_cron purge_processor_lateruntime purge_queuer_coretags varnish_purger varnish_purge_tags\n
"},{"location":"drupal/services/varnish/#configure-varnish-purge","title":"Configure Varnish Purge","text":"
  1. Visit Configuration > Development > Performance > Purge.
  2. Add a purger via Add purger.
  3. Select Varnish Bundled Purger (not the Varnish Purger; see the Varnish on Drupal behind the scenes section for more information).
  4. Click the dropdown beside the just added purger and click Configure.
  5. Give it a nice name, Lagoon Varnish sounds good.
  6. Configure it with:

    Configure Varnish Purge
     TYPE: Tag\n\n REQUEST:\n Hostname: varnish\n (or whatever your Varnish is called in docker-compose.yml)\n Port: 8080\n Path: /\n Request Method: BAN\n Scheme: http\n\n HEADERS:\n Header: Cache-Tags\n Value: [invalidations:separated_pipe]\n
  7. Save configuration.

That's it! If you'd like to test this locally, make sure you read the next section.

"},{"location":"drupal/services/varnish/#configure-drupal-for-varnish","title":"Configure Drupal for Varnish","text":"

There are a few other configurations that can be done:

  1. Uninstall the Internal Page Cache Drupal module with drush pmu page_cache. It can cause some weird double caching situations where only the Varnish cache is cleared, but not the internal cache, and changes appear very slowly to the users. Also, it uses a lot of cache storage on big sites.
  2. Change $config['system.performance']['cache']['page']['max_age'] in production.settings.php to 2628000. This tells Varnish to cache sites for up to 1 month, which sounds like a lot, but the Drupal 8 cache tag system is so awesome that it will basically make sure the Varnish cache is purged whenever something changes.
"},{"location":"drupal/services/varnish/#test-varnish-locally","title":"Test Varnish Locally","text":"

Drupal setups on Lagoon locally have Varnish and the Drupal caches disabled, as it can be rather hard to develop with all of them enabled. This is done via the following:

  • The VARNISH_BYPASS=true environment variable in docker-compose.yml which tells Varnish to basically disable itself.
  • Drupal is configured to not send any cache headers (via setting the Drupal config $config['system.performance']['cache']['page']['max_age'] = 0 in development.settings.php).

To test Varnish locally, change the following in docker-compose.yml:

  • Set VARNISH_BYPASS to false in the Varnish service section.
  • Set LAGOON_ENVIRONMENT_TYPE to production in the x-environment section.
  • Run docker-compose up -d, which restarts all services with the new environment variables.

Now you should be able to test Varnish!

Here is a short example, assuming there is a node with ID 1 that has the URL drupal-example.docker.amazee.io/node/1:

  1. Run curl -I drupal-example.docker.amazee.io/node/1 and look for these headers:
    • X-LAGOON should include varnish which tells you that the request actually went through Varnish.
    • Age: will still be 0, as Varnish has probably never seen this site before, and the first request will warm the Varnish cache.
    • X-Varnish-Cache will be MISS, also telling you that Varnish didn't find a previously cached version of this request.
  2. Now run curl -I drupal-example.docker.amazee.io/node/1 again, and the headers should be:
    • Age: will show you how many seconds ago the request was cached. In our example it will probably be something between 1 and 30, depending on how fast you execute the command.
    • X-Varnish-Cache will be HIT, telling you that Varnish successfully found a cached version of the request and returned that one to you.
  3. Change some content at node/1 in Drupal.
  4. Run curl -I drupal-example.docker.amazee.io/node/1, and the headers should be the same as the very first request:
    • Age: 0
    • X-Varnish-Cache: MISS
"},{"location":"drupal/services/varnish/#varnish-on-drupal-behind-the-scenes","title":"Varnish on Drupal behind the scenes","text":"

If you come from other Drupal hosts or have done a Drupal 8 & Varnish tutorial before, you might have realized that there are a couple of changes in the Lagoon Drupal Varnish tutorial. Let's address them:

"},{"location":"drupal/services/varnish/#usage-of-varnish-bundled-purger-instead-of-varnish-purger","title":"Usage of Varnish Bundled Purger instead of Varnish Purger","text":"

The Varnish Purger purger sends a BAN request for each cache-tag that should be invalidated. Drupal has a lot of cache-tags, and this could lead to quite a large number of requests sent to Varnish. The Varnish Bundled Purger instead sends just one BAN request for multiple invalidations, separated nicely by pipes (|), which fits perfectly with the Varnish regular expression system of bans. This causes fewer requests and a smaller ban list table inside Varnish.

"},{"location":"drupal/services/varnish/#usage-of-purge-late-runtime-processor","title":"Usage of Purge Late runtime processor","text":"

In contrast to the Varnish module in Drupal 7, the Drupal 8 Purge module takes a slightly different approach to purging caches: it adds them to a queue which is then processed by different processors. Purge suggests using the Cron processor, which means that the Varnish cache is only purged during a cron run. This can lead to old data being cached by Varnish, as your cron is probably not configured to run every minute or so, and can result in confused editors and clients.

Instead, we suggest using the Purge Late runtime processor, which processes the queue at the end of each Drupal request. This has the advantage that if a cache-tag is added to the purge queue (because an editor edited a Drupal node, for example) the cache-tags for this node are directly purged. Together with the Varnish Bundled Purger, this means just a single additional request to Varnish at the very end of a Drupal request, which causes no noticeable processing time on the request.

"},{"location":"drupal/services/varnish/#full-support-for-varnish-ban-lurker","title":"Full support for Varnish Ban Lurker","text":"

Our Varnish configurations have full support for Ban Lurker. Ban Lurker helps you maintain a clean cache and keep Varnish running smoothly. It is basically a small tool that runs through the Varnish ban list and compares the bans to the cached requests in the Varnish cache. Varnish bans are used to mark an object in the cache for purging. If Ban Lurker finds an item that should be "banned," it removes it from the cache and also removes the ban itself. Now any seldom-accessed objects with very long TTLs, which would normally never be banned and would just keep taking up cache space, are removed and can be refreshed. This keeps the list of bans small and, with that, reduces processing time for Varnish on each request. Check out the official Varnish post on Ban Lurker and some other helpful reading for more information.

"},{"location":"drupal/services/varnish/#troubleshooting","title":"Troubleshooting","text":"

Varnish doesn't cache? Or something else not working? Here are a couple of ways to debug:

  • Run drush p-debug-en to enable debug logging of the purge module. This should show you debugging output in the Drupal log under admin/reports/dblog.
  • Make sure that Drupal sends proper cache headers. To best test this, use the URL that Lagoon generates for bypassing the Varnish cache, (locally in our Drupal example this is http://nginx-drupal-example.docker.amazee.io). Check for the Cache-Control: max-age=900, public header, where the 900 is what you configured in $config['system.performance']['cache']['page']['max_age'].
  • Make sure that the environment variable VARNISH_BYPASS is not set to true (see docker-compose.yml and run docker-compose up -d varnish to make sure the environment variable is configured correctly).
  • If all else fails, and before you flip your table (╯°□°）╯︵ ┻━┻, talk to the Lagoon team - we're happy to help.
"},{"location":"installing-lagoon/add-group/","title":"Add Group","text":"Add group
  lagoon add group -N groupname\n
"},{"location":"installing-lagoon/add-project/","title":"Adding a Project","text":""},{"location":"installing-lagoon/add-project/#add-the-project-to-lagoon","title":"Add the project to Lagoon","text":"
  1. Run this command:

    Add project
    lagoon add project \\\n--gitUrl <YOUR-GITHUB-REPO-URL> \\\n--openshift 1 \\\n--productionEnvironment <YOUR-PROD-ENV> \\\n--branches <THE-BRANCHES-YOU-WANT-TO-DEPLOY> \\\n--project <YOUR-PROJECT-NAME>\n
    • The value for --openshift is the ID of your Kubernetes cluster.
    • Your production environment should be the name of the branch you want to have as your production environment.
    • The branches you want to deploy might look like this: "^(main|develop)$"
    • The name of your project is anything you want - "Company Website," "example," etc.
  2. Go to the Lagoon UI, and you should see your project listed!

"},{"location":"installing-lagoon/add-project/#add-the-deploy-key-to-your-git-repository","title":"Add the deploy key to your Git repository","text":"

Lagoon creates a deploy key for each project. You now need to add it as a deploy key in your Git repository to allow Lagoon to download the code.

  1. Run the following command to get the deploy key:

    Get project-key
    lagoon get project-key --project <YOUR-PROJECT-NAME>\n
  2. Copy the key and save it as a deploy key in your Git repository.

GitHub GitLab Bitbucket

"},{"location":"installing-lagoon/add-project/#add-the-webhooks-endpoint-to-your-git-repository","title":"Add the webhooks endpoint to your Git repository","text":"

In order for Lagoon to be able to deploy on code updates, it needs to be connected to your Git repository.

  1. Add your Lagoon cluster's webhook endpoint to your Git repository

    • Payload URL: <LAGOON-WEBHOOK-INGRESS>
    • Content Type: JSON
    • Active: Active (allows you to enable/disable as required)
    • Events: Select the relevant events, or choose All. Usually push, branch create/delete are required

GitHub GitLab Bitbucket
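Before wiring up the Git host, you can roughly smoke-test that the webhook endpoint is reachable (the URL is an example; a real webhook payload will differ):

Test webhook endpoint
curl -i -X POST https://webhookhandler.lagoon.example.com -H 'Content-Type: application/json' -d '{}'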

"},{"location":"installing-lagoon/create-user/","title":"Create Lagoon user","text":"
  1. Add user via Lagoon CLI:

    Add user
    lagoon add user --email user@example.com --firstName MyFirstName --lastName MyLastName\n
  2. Go to your email and click the password reset link.

  3. Follow the instructions and log in to the Lagoon UI with the created password.
  4. Add the SSH public key of the user via Settings.
"},{"location":"installing-lagoon/deploy-project/","title":"Deploy Your Project","text":"
  1. Run the following command to deploy your project:

    Deploy
    lagoon deploy branch -p <YOUR-PROJECT-NAME> -b <YOUR-BRANCH-NAME>\n
  2. Go to the Lagoon UI and take a look at your project - you should now see the environment for this project!

  3. Look in your cluster at your pods list, and you should see the build pod as it begins to clone Git repositories, set up services, etc.
    See all pods
    kubectl get pods --all-namespaces | grep lagoon-build\n
"},{"location":"installing-lagoon/efs-provisioner/","title":"EFS Provisioner","text":"

Info

This is only applicable to AWS installations.

  1. Add Helm repository:

    Add Helm repo
    helm repo add stable https://charts.helm.sh/stable\n
  2. Create efs-provisioner-values.yml in your config directory and update the values:

    efs-provisioner-values.yml
    efsProvisioner:\nefsFileSystemId: <efsFileSystemId>\nawsRegion: <awsRegion>\npath: /\nprovisionerName: example.com/aws-efs\nstorageClass:\nname: bulk\nisDefault: false\nreclaimPolicy: Delete\nmountOptions: []\nglobal:\ndeployEnv: prod\n
  3. Install EFS Provisioner:

    Install EFS Provisioner
helm upgrade --install --create-namespace \\\n--namespace efs-provisioner --wait \\\n-f efs-provisioner-values.yml \\\nefs-provisioner stable/efs-provisioner\n
"},{"location":"installing-lagoon/gitlab/","title":"GitLab","text":"

Not needed for *most* installs, but this is configured to integrate Lagoon with GitLab for user and group authentication.

  1. Create Personal Access token in GitLab for a user with admin access.
  2. Create system hooks under your-gitlab.com/admin/hooks pointing to webhookhandler.lagoon.example.com, and define a random secret token.
    1. Enable "repository update events"
  3. Update lagoon-core-values.yml:

    lagoon-core-values.yml
api:\nadditionalEnvs:\nGITLAB_API_HOST: <<URL of GitLab example: https://your-gitlab.com>>\nGITLAB_API_TOKEN: << Personal Access token with Access to API >>\nGITLAB_SYSTEM_HOOK_TOKEN: << System Hook Secret Token >>\nwebhook-handler:\nadditionalEnvs:\nGITLAB_API_HOST: <<URL of GitLab example: https://your-gitlab.com>>\nGITLAB_API_TOKEN: << Personal Access token with Access to API >>\nGITLAB_SYSTEM_HOOK_TOKEN: << System Hook Secret Token >>\nwebhooks2tasks:\nadditionalEnvs:\nGITLAB_API_HOST: <<URL of GitLab example: https://your-gitlab.com>>\nGITLAB_API_TOKEN: << Personal Access token with Access to API >>\nGITLAB_SYSTEM_HOOK_TOKEN: << System Hook Secret Token >>\n
  4. Helm update the lagoon-core Helm chart (see the example after this list).

  5. If you've already created users in Keycloak, delete them.
  6. Run the following command in an API pod:
    Sync with GitLab
      yarn sync:gitlab:all\n
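For step 4, the Helm update is the same command used when installing Lagoon Core, pointed at your updated values file (the file, release and chart names are assumed from the Install Lagoon Core step):

Upgrade lagoon-core
helm upgrade --install --create-namespace --namespace lagoon-core \
  -f lagoon-core-values.yml lagoon-core lagoon/lagoon-core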
"},{"location":"installing-lagoon/install-harbor/","title":"Install Harbor","text":"
  1. Add Helm repository:

    Add Helm repository
    helm repo add harbor https://helm.goharbor.io\n
  2. Consider the optimal configuration of Harbor for your particular circumstances - see their docs for more recommendations:

    1. We recommend using S3-compatible storage for image blobs (imageChartStorage).
    2. We recommend using a managed database service for the Postgres service (database.type).
    3. In high-usage scenarios we recommend using a managed Redis service (redis.type).
  3. Create the file harbor-values.yml inside of your config directory. The proxy-buffering annotations help with large image pushes:

    harbor-values.yml
    expose:\ningress:\nannotations:\nkubernetes.io/tls-acme: \"true\"\nnginx.ingress.kubernetes.io/proxy-buffering: \"off\"\nnginx.ingress.kubernetes.io/proxy-request-buffering: \"off\"\nhosts:\ncore: harbor.lagoon.example.com\ntls:\nenabled: true\ncertSource: secret\nsecret:\nsecretName: harbor-harbor-ingress\nexternalURL: https://harbor.lagoon.example.com\nharborAdminPassword: <your Harbor Admin Password>\nchartmuseum:\nenabled: false\nclair:\nenabled: false\nnotary:\nenabled: false\ntrivy:\nenabled: false\njobservice:\njobLogger: stdout\n
  4. Install Harbor, checking the requirements for the currently supported Harbor versions:

    Install Harbor
    helm upgrade --install --create-namespace \\\n--namespace harbor --wait \\\n-f harbor-values.yml \\\nharbor harbor/harbor\n
  5. Visit Harbor at the URL you set in harbor-values.yml.

    1. Username: admin
    2. Password:
    Get Harbor secret
    kubectl -n harbor get secret harbor-core -o jsonpath=\"{.data.HARBOR_ADMIN_PASSWORD}\" | base64 --decode\n
  6. You will need to add the above Harbor credentials to the Lagoon Remote values.yml in the next step, as well as harbor-values.yml.

"},{"location":"installing-lagoon/install-lagoon-remote/","title":"Install Lagoon Remote","text":"

Now we will install Lagoon Remote into the Lagoon namespace. The RabbitMQ service is the broker.

  1. Create lagoon-remote-values.yml in your config directory as you did with the previous two files, and update the values.

    • rabbitMQPassword
    Get RabbitMQ password
    kubectl -n lagoon-core get secret lagoon-core-broker -o jsonpath=\"{.data.RABBITMQ_PASSWORD}\" | base64 --decode\n
    • rabbitMQHostname
    lagoon-remote-values.yml
lagoon-core-broker.lagoon-core.svc.cluster.local\n
    • taskSSHHost
    Update SSH Host
    kubectl get service lagoon-core-broker-amqp-ext \\\n-o custom-columns=\"NAME:.metadata.name,IP ADDRESS:.status.loadBalancer.ingress[*].ip,HOSTNAME:.status.loadBalancer.ingress[*].hostname\"\n
    • harbor-password
    Get Harbor secret
    kubectl -n harbor get secret harbor-harbor-core -o jsonpath=\"{.data.HARBOR_ADMIN_PASSWORD}\" | base64 --decode\n
  2. Add the Harbor configuration from the Install Harbor step.

    lagoon-remote-values.yml
    lagoon-build-deploy:\nenabled: true\nextraArgs:\n- \"--enable-harbor=true\"\n- \"--harbor-url=https://harbor.lagoon.example.com\"\n- \"--harbor-api=https://harbor.lagoon.example.com/api/\"\n- \"--harbor-username=admin\"\n- \"--harbor-password=<from harbor-harbor-core secret>\"\nrabbitMQUsername: lagoon\nrabbitMQPassword: <from lagoon-core-broker secret>\nrabbitMQHostname: lagoon-core-broker.lagoon-core.svc.cluster.local\nlagoonTargetName: <name of lagoon remote, can be anything>\ntaskSSHHost: <IP of ssh service loadbalancer>\ntaskSSHPort: \"22\"\ntaskAPIHost: \"api.lagoon.example.com\"\ndbaas-operator:\nenabled: true\nmariadbProviders:\nproduction:\nenvironment: production\nhostname: 172.17.0.1.nip.io\nreadReplicaHostnames:\n- 172.17.0.1.nip.io\npassword: password\nport: '3306'\nuser: root\ndevelopment:\nenvironment: development\nhostname: 172.17.0.1.nip.io\nreadReplicaHostnames:\n- 172.17.0.1.nip.io\npassword: password\nport: '3306'\nuser: root\n
  3. Install Lagoon Remote:

    Install Lagoon remote
helm upgrade --install --create-namespace \\\n--namespace lagoon \\\n-f lagoon-remote-values.yml \\\nlagoon-remote lagoon/lagoon-remote\n
"},{"location":"installing-lagoon/lagoon-backups/","title":"Lagoon Backups","text":"

Lagoon uses the K8up backup operator: https://k8up.io. Lagoon isn't tightly integrated with K8up; it's more that Lagoon creates its resources in a way that K8up can automatically discover and back up.

Lagoon has been extensively tested with K8up 1.x, but is not compatible with 2.x yet. We recommend using the 1.1.0 chart version (App version v1.2.0).

  1. Create a new AWS user with the following policies:

    example K8up IAM user
    {\n\"Version\":\"2012-10-17\",\n\"Statement\":[\n{\n\"Sid\":\"VisualEditor0\",\n\"Effect\":\"Allow\",\n\"Action\":[\n\"s3:ListAllMyBuckets\",\n\"s3:CreateBucket\",\n\"s3:GetBucketLocation\"\n],\n\"Resource\":\"*\"\n},\n{\n\"Sid\":\"VisualEditor1\",\n\"Effect\":\"Allow\",\n\"Action\":\"s3:ListBucket\",\n\"Resource\":\"arn:aws:s3:::baas-*\"\n},\n{\n\"Sid\":\"VisualEditor2\",\n\"Effect\":\"Allow\",\n\"Action\":[\n\"s3:PutObject\",\n\"s3:GetObject\",\n\"s3:AbortMultipartUpload\",\n\"s3:DeleteObject\",\n\"s3:ListMultipartUploadParts\"\n],\n\"Resource\":\"arn:aws:s3:::baas-*/*\"\n}\n]\n}\n
  2. Create k8up-values.yml (customize for your provider):

    k8up-values.yml
    k8up:\nenvVars:\n- name: BACKUP_GLOBALS3ENDPOINT\nvalue: 'https://s3.eu-west-1.amazonaws.com'\n- name: BACKUP_GLOBALS3BUCKET\nvalue: ''\n- name: BACKUP_GLOBALKEEPJOBS\nvalue: '1'\n- name: BACKUP_GLOBALSTATSURL\nvalue: 'https://backup.lagoon.example.com'\n- name: BACKUP_GLOBALACCESSKEYID\nvalue: ''\n- name: BACKUP_GLOBALSECRETACCESSKEY\nvalue: ''\n- name: BACKUP_BACKOFFLIMIT\nvalue: '2'\n- name: BACKUP_GLOBALRESTORES3BUCKET\nvalue: ''\n- name: BACKUP_GLOBALRESTORES3ENDPOINT\nvalue: 'https://s3.eu-west-1.amazonaws.com'\n- name: BACKUP_GLOBALRESTORES3ACCESSKEYID\nvalue: ''\n- name: BACKUP_GLOBALRESTORES3SECRETACCESSKEY\nvalue: ''\ntimezone: Europe/Zurich\n
  3. Install K8up:

    Install K8up Step 1
    helm repo add appuio https://charts.appuio.ch\n
    Install K8up Step 2
    kubectl apply -f https://github.com/vshn/k8up/releases/download/v1.2.0/k8up-crd.yaml\n
    Install K8up Step 3
helm upgrade --install --create-namespace \\\n--namespace k8up \\\n-f k8up-values.yml \\\n--version 1.1.0 \\\nk8up appuio/k8up\n
  4. Update lagoon-core-values.yml:

    lagoon-core-values.yml
    s3BAASAccessKeyID: <<Access Key ID for restore bucket>>\ns3BAASSecretAccessKey: <<Access Key Secret for restore bucket>>\n
  5. Redeploy lagoon-core.

"},{"location":"installing-lagoon/lagoon-cli/","title":"Install the Lagoon CLI","text":"
  1. Check https://github.com/uselagoon/lagoon-cli#install on how to install for your operating system. For macOS and Linux, you can use Homebrew:
    1. brew tap uselagoon/lagoon-cli
    2. brew install lagoon
  2. The CLI needs to know how to communicate with Lagoon, so run the following command:

    Lagoon config
        lagoon config add \\\n--graphql https://YOUR-API-URL/graphql \\\n--ui https://YOUR-UI-URL \\\n--hostname YOUR.SSH.IP \\\n--lagoon YOUR-LAGOON-NAME \\\n--port 22\n
  3. Access Lagoon by authenticating with your SSH key.

    1. In the Lagoon UI (the URL is in values.yml if you forget), go to Settings.
    2. Add your public SSH key.
    3. You need to set the default Lagoon to your Lagoon so that it doesn't try to use the amazee.io defaults:

      Lagoon config
          lagoon config default --lagoon <YOUR-LAGOON-NAME>\n
  4. Now run lagoon login. Lagoon talks to SSH and authenticates against your public/private key pair, and gets a token for your username.

  5. Verify via lagoon whoami that you are logged in (see the example below).
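Steps 4 and 5 together, as a quick check:

Log in and verify
lagoon login
lagoon whoami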

Info

We don't generally recommend using the Lagoon Admin role, but you'll need to create an admin account at first to get started. Ideally, you'll immediately create another, non-admin account to work from.

"},{"location":"installing-lagoon/lagoon-core/","title":"Install Lagoon Core","text":""},{"location":"installing-lagoon/lagoon-core/#install-the-helm-chart","title":"Install the Helm chart","text":"
  1. Add Lagoon Charts repository to your Helm Repositories:

    Add Lagoon Charts repository
    helm repo add lagoon https://uselagoon.github.io/lagoon-charts/\n
  2. Create a directory for the configuration files we will create, and make sure that it's version controlled. Ensure that you reference this path in commands referencing your values.yml files.

  3. Create values.yml in the directory you've just created. Update the endpoint URLs (change them from api.lagoon.example.com to your values). Example: https://github.com/uselagoon/lagoon-charts/blob/main/charts/lagoon-core/ci/linter-values.yaml
  4. Now run helm upgrade --install command, pointing to values.yml, like so:

    Upgrade Helm with values.yml
    helm upgrade --install --create-namespace --namespace lagoon-core -f values.yml lagoon-core lagoon/lagoon-core\n
  5. Lagoon Core is now installed!
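    To verify the install before continuing, you can check that all Lagoon Core pods come up cleanly (a minimal sketch using standard kubectl commands; pod names will vary):

    Check Lagoon Core pods (sketch)
    # All pods in the lagoon-core namespace should reach Running or Completed
    kubectl -n lagoon-core get pods

    # Inspect any pod stuck in Pending or CrashLoopBackOff
    kubectl -n lagoon-core describe pod <pod-name>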

Warning

Sometimes we run into Docker Hub pull limits. We are considering moving our images elsewhere if this continues to be a problem.

"},{"location":"installing-lagoon/lagoon-core/#configure-keycloak","title":"Configure Keycloak","text":"

Visit the Keycloak dashboard at the URL you defined in the values.yml for Keycloak.

  1. Click \"Administration Console\"
  2. Username: admin
  3. Password: use lagoon-core-keycloak secret, key-value KEYCLOAK_ADMIN_PASSWORD
  4. Retrieve the secret like so:

    Retrieve secret
    kubectl -n lagoon-core get secret lagoon-core-keycloak -o jsonpath=\"{.data.KEYCLOAK_ADMIN_PASSWORD}\" | base64 --decode\n
  5. Click on User on top right.

    1. Go to Manage Account.
    2. Add an Email for the admin account you created.
    3. Save.
  6. Go to Realm Lagoon -> Realm Settings -> Email
    1. Configure email server for Keycloak, test connection via \u201cTest connection\u201d button.
  7. Go to Realm Lagoon -> Realm Settings -> Login
    1. Enable \u201cForgot Password\u201d
    2. Save.
"},{"location":"installing-lagoon/lagoon-core/#log-in-to-the-ui","title":"Log in to the UI","text":"

You should now be able to visit the Lagoon UI at the URL you defined in the values.yml for the UI.

  1. Username: lagoonadmin
  2. Password: use the lagoon-core-keycloak secret, key-value KEYCLOAK_LAGOON_ADMIN_PASSWORD
  3. Retrieve the secret:

    Retrieve secret
        kubectl -n lagoon-core get secret lagoon-core-keycloak -o jsonpath=\"{.data.KEYCLOAK_LAGOON_ADMIN_PASSWORD}\" | base64 --decode\n
"},{"location":"installing-lagoon/lagoon-files/","title":"Lagoon Files","text":"

Lagoon files are used to store the file output of tasks, such as backups, and can be hosted on any S3-compatible storage.

  1. Create new AWS User with policies:

    Example files IAM user
    {\n\"Version\":\"2012-10-17\",\n\"Statement\":[\n{\n\"Effect\":\"Allow\",\n\"Action\":[\n\"s3:ListBucket\",\n\"s3:GetBucketLocation\",\n\"s3:ListBucketMultipartUploads\"\n],\n\"Resource\":\"arn:aws:s3:::S3_BUCKET_NAME\"\n},\n{\n\"Effect\":\"Allow\",\n\"Action\":[\n\"s3:PutObject\",\n\"s3:GetObject\",\n\"s3:DeleteObject\",\n\"s3:ListMultipartUploadParts\",\n\"s3:AbortMultipartUpload\"\n],\n\"Resource\":\"arn:aws:s3:::S3_BUCKET_NAME/*\"\n}\n]\n}\n
  2. Update lagoon-core-values.yml:

    lagoon-core-values.yml
    s3FilesAccessKeyID: <<Access Key ID>>\ns3FilesBucket: <<Bucket Name for Lagoon Files>>\ns3FilesHost: <<S3 endpoint like \"https://s3.eu-west-1.amazonaws.com\" >>\ns3FilesSecretAccessKey: <<Access Key Secret>>\ns3FilesRegion: <<S3 Region >>\n
  3. If you use ingress-nginx in front of lagoon-core, we suggest setting the following configuration, which will allow for bigger file uploads:

    lagoon-core-values.yml
    controller:\nconfig:\nclient-body-timeout: '600' # max 600 secs fileuploads\nproxy-send-timeout: '1800' # max 30min connections - needed for websockets\nproxy-read-timeout: '1800' # max 30min connections - needed for websockets\nproxy-body-size: 1024m # 1GB file size\nproxy-buffer-size: 64k # bigger buffer\n
"},{"location":"installing-lagoon/lagoon-logging/","title":"Lagoon Logging","text":"

Lagoon integrates with OpenSearch to store application, container and router logs. Lagoon Logging collects the application, router and container logs from Lagoon projects, and sends them to the logs concentrator. It needs to be installed onto each lagoon-remote instance.

In addition, it should be installed in the lagoon-core cluster to collect logs from the lagoon-core services. This is configured in the LagoonLogs section.

Logging Overview: Lucid Chart

See also: Logging.

Read more about Lagoon logging here: https://github.com/uselagoon/lagoon-charts/tree/main/charts/lagoon-logging

  1. Create lagoon-logging-values.yaml:

    lagoon-logging-values.yaml
    tls:\ncaCert: |\n<< content of ca.pem from Logs-Concentrator>>\nclientCert: |\n<< content of client.pem from Logs-Concentrator>>\nclientKey: |\n<< content of client-key.pem from Logs-Concentrator>>\nforward:\nusername: <<Username for Lagoon Remote 1>>\npassword: <<Password for Lagoon Remote 1>>\nhost: <<ExternalIP of Logs-Concentrator Service LoadBalancer>>\nhostName: <<Hostname in Server Cert of Logs-Concentrator>>\nhostPort: '24224'\nselfHostname: <<Hostname in Client Cert of Logs-Concentrator>>\nsharedKey: <<Generated ForwardSharedKey of Logs-Concentrator>>\ntlsVerifyHostname: false\nclusterName: <<Short Cluster Identifier>>\nlogsDispatcher:\nserviceMonitor:\nenabled: false\nlogging-operator:\nmonitoring:\nserviceMonitor:\nenabled: false\nlagoonLogs:\nenabled: true\nrabbitMQHost: lagoon-core-broker.lagoon-core.svc.cluster.local\nrabbitMQUser: lagoon\nrabbitMQPassword: <<RabbitMQ Lagoon Password>>\nexcludeNamespaces: {}\n
  2. Install lagoon-logging:

    Install lagoon-logging
    helm repo add banzaicloud-stable https://kubernetes-charts.banzaicloud.com\n\nhelm upgrade --install --create-namespace \\\n--namespace lagoon-logging \\\n-f lagoon-logging-values.yaml \\\nlagoon-logging lagoon/lagoon-logging\n
"},{"location":"installing-lagoon/lagoon-logging/#logging-nginx-ingress-controller","title":"Logging NGINX Ingress Controller","text":"

If you'd like logs from ingress-nginx inside lagoon-logging:

  1. The ingress controller must be installed in the namespace ingress-nginx
  2. Add the content of this file to ingress-nginx:

    ingress-nginx log-format-upstream
    controller:\nconfig:\nlog-format-upstream: >-\n{\n\"time\": \"$time_iso8601\",\n\"remote_addr\": \"$remote_addr\",\n\"x-forwarded-for\": \"$http_x_forwarded_for\",\n\"true-client-ip\": \"$http_true_client_ip\",\n\"req_id\": \"$req_id\",\n\"remote_user\": \"$remote_user\",\n\"bytes_sent\": $bytes_sent,\n\"request_time\": $request_time,\n\"status\": \"$status\",\n\"host\": \"$host\",\n\"request_proto\": \"$server_protocol\",\n\"request_uri\": \"$uri\",\n\"request_query\": \"$args\",\n\"request_length\": $request_length,\n\"request_time\": $request_time,\n\"request_method\": \"$request_method\",\n\"http_referer\": \"$http_referer\",\n\"http_user_agent\": \"$http_user_agent\",\n\"namespace\": \"$namespace\",\n\"ingress_name\": \"$ingress_name\",\n\"service_name\": \"$service_name\",\n\"service_port\": \"$service_port\"\n}\n
  3. Your logs should start flowing!

"},{"location":"installing-lagoon/logs-concentrator/","title":"Logs-Concentrator","text":"

Logs-concentrator collects the logs being sent by Lagoon clusters and augments them with additional metadata before inserting them into Elasticsearch.

  1. Create certificates according to ReadMe: https://github.com/uselagoon/lagoon-charts/tree/main/charts/lagoon-logs-concentrator
  2. Create logs-concentrator-values.yaml:

    logs-concentrator-values.yaml
    tls:\ncaCert: |\n<<contents of ca.pem>>\nserverCert: |\n<<contents of server.pem>>\nserverKey: |\n<<contents of server-key.pem>>\nelasticsearchHost: elasticsearch-opendistro-es-client-service.elasticsearch.svc.cluster.local\nelasticsearchAdminPassword: <<ElasticSearch Admin Password>>\nforwardSharedKey: <<Random 32 Character Password>>\nusers:\n- username: <<Username for Lagoon Remote 1>>\npassword: <<Random Password for Lagoon Remote 1>>\nservice:\ntype: LoadBalancer\nserviceMonitor:\nenabled: false\n
  3. Install logs-concentrator:

    Install logs-concentrator
    helm upgrade --install --create-namespace \\\n--namespace lagoon-logs-concentrator \\\n-f logs-concentrator-values.yaml \\\nlagoon-logs-concentrator lagoon/lagoon-logs-concentrator\n
"},{"location":"installing-lagoon/opendistro/","title":"OpenDistro","text":"

To install an OpenDistro cluster, you will need to configure TLS and secrets so that Lagoon can talk to it securely. You're going to have to create a handful of JSON files - put these in the same directory as the values files you've been creating throughout this installation process.

Install OpenDistro Helm, according to https://opendistro.github.io/for-elasticsearch-docs/docs/install/helm/

"},{"location":"installing-lagoon/opendistro/#create-keys-and-certificates","title":"Create Keys and Certificates","text":"
  1. Generate certificates

    Note:

    CFSSL is CloudFlare's PKI/TLS swiss army knife. It is both a command line tool and an HTTP API server for signing, verifying, and bundling TLS certificates. It requires Go 1.12+ to build.

    1. Install CFSSL: https://github.com/cloudflare/cfssl
    2. Generate CA. You'll need the following file:
    ca-csr.json
    {\n\"CN\": \"ca.elasticsearch.svc.cluster.local\",\n\"hosts\": [\n\"ca.elasticsearch.svc.cluster.local\"\n],\n\"key\": {\n\"algo\": \"ecdsa\",\n\"size\": 256\n},\n\"ca\": {\n\"expiry\": \"87600h\"\n}\n}\n
  2. Run the following two commands:

    Generate certificate
    cfssl gencert -initca ca-csr.json | cfssljson -bare ca -\nrm ca.csr\n

    You'll get ca-key.pem, and ca.pem. This is your CA key and self-signed certificate.

  3. Next, we'll generate the node peering certificate. You'll need the following two files:

    ca-config.json
    {\n\"signing\": {\n\"default\": {\n\"expiry\": \"87600h\"\n},\n\"profiles\": {\n\"peer\": {\n\"expiry\": \"87600h\",\n\"usages\": [\n\"signing\",\n\"key encipherment\",\n\"server auth\",\n\"client auth\"\n]\n},\n\"client\": {\n\"expiry\": \"87600h\",\n\"usages\": [\n\"signing\",\n\"key encipherment\",\n\"client auth\"\n]\n}\n}\n}\n}\n
    node.json
    {\n\"hosts\": [\n\"node.elasticsearch.svc.cluster.local\"\n],\n\"CN\": \"node.elasticsearch.svc.cluster.local\",\n\"key\": {\n\"algo\": \"ecdsa\",\n\"size\": 256\n}\n}\n
  4. Run the following two commands:

    Generate certificate keys
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer node.json | cfssljson -bare node\nrm node.csr\n

    You'll get node.pem and node-key.pem. This is the peer certificate that will be used by nodes in the ES cluster.

  5. Next, we'll convert the key to the format supported by Java with the following command:

    Convert key format
    openssl pkey -in node-key.pem -out node-key.pkcs8\n
  6. Now we'll generate the admin certificate. You'll need the following file:

    admin.json
    {\n\"CN\": \"admin.elasticsearch.svc.cluster.local\",\n\"key\": {\n\"algo\": \"ecdsa\",\n\"size\": 256\n}\n}\n
  7. Run the following two commands:

    Generate admin certificate keys
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client admin.json | cfssljson -bare admin\nrm admin.csr\n

    You'll get admin.pem and admin-key.pem. This is the certificate that will be used to perform admin commands on the opendistro-security plugin.

  8. Next, we'll convert the key to the format supported by Java with the following command:

    Convert key format
    openssl pkey -in admin-key.pem -out admin-key.pkcs8\n
"},{"location":"installing-lagoon/opendistro/#installing-opendistro","title":"Installing OpenDistro","text":"

Now that we have our keys and certificates, we can continue with the installation.

  1. Generate hashed passwords.

    1. The elasticsearch-secrets-values.yaml needs two hashed passwords. Create them with this command (run it twice, enter a random password, store both the plaintext and hashed passwords).
    Generate hashed passwords
    docker run --rm -it docker.io/amazon/opendistro-for-elasticsearch:1.12.0 sh -c \"chmod +x /usr/share/elasticsearch/plugins/opendistro_security/tools/hash.sh; /usr/share/elasticsearch/plugins/opendistro_security/tools/hash.sh\"\n
  2. Create secrets:

    1. You'll need to create elasticsearch-secrets-values.yaml. See this gist as an example: https://gist.github.com/Schnitzel/43f483dfe0b23ca0dddd939b12bb4b0b
  3. Install secrets with the following commands:

    Install secrets
    helm repo add incubator https://charts.helm.sh/incubator\nhelm upgrade --namespace elasticsearch --create-namespace --install elasticsearch-secrets incubator/raw --values elasticsearch-secrets-values.yaml\n
  4. You'll need to create elasticsearch-values.yaml. See this gist as an example (fill in all <<>> placeholders with your values): https://gist.github.com/Schnitzel/1e386654b6abf75bf4d66a544db4aa6a

  5. Install Elasticsearch:

    Install Elasticsearch
    helm upgrade --namespace elasticsearch --create-namespace --install elasticsearch opendistro-es-X.Y.Z.tgz --values elasticsearch-values.yaml\n
  6. Configure security inside Elasticsearch with the following:

    Configure security
    kubectl exec -n elasticsearch -it elasticsearch-opendistro-es-master-0 -- bash\nchmod +x /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh\n/usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -nhnv -cacert /usr/share/elasticsearch/config/admin-root-ca.pem -cert /usr/share/elasticsearch/config/admin-crt.pem -key /usr/share/elasticsearch/config/admin-key.pem -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/\n
  7. Update lagoon-core-values.yaml with:

    lagoon-core-values.yaml
    elasticsearchURL: http://elasticsearch-opendistro-es-client-service.elasticsearch.svc.cluster.local:9200\nkibanaURL: https://<<Kibana Public URL>>\nlogsDBAdminPassword: \"<<PlainText Elasticsearch Admin Password>>\"\n
  8. Rollout Lagoon Core:

    Rollout Lagoon Core
    helm upgrade --install --create-namespace --namespace lagoon-core -f values.yaml lagoon-core lagoon/lagoon-core\n
  9. Sync all Lagoon Groups with Opendistro Elasticsearch

    Sync groups
    kubectl -n lagoon-core exec -it deploy/lagoon-core-api -- sh\nyarn run sync:opendistro-security\n
  10. "},{"location":"installing-lagoon/querying-graphql/","title":"Querying with GraphQL","text":"
    1. You\u2019ll need an app for sending and receiving GraphQL queries. We recommend GraphiQL.

      1. If you\u2019re using Homebrew, you can install it with brew install --cask graphiql.
    2. We need to tell Lagoon Core about the Kubernetes cluster. The GraphQL endpoint is: https://<YOUR-API-URL>/graphql

    3. Go to Edit HTTP Headers, and Add Header.

      1. Header Name: Authorization
      2. Value: Bearer YOUR-TOKEN-HERE
      3. In your home directory, the Lagoon CLI has created a .lagoon.yml file. Copy the token from that file and use it for the value here.
      4. Save.
    4. Now you\u2019re ready to run some queries. Run the following test query to ensure everything is working correctly:

      Get all projects
      query allProjects {allProjects {name } }\n
    5. This should give you the following response:

      API Response
        {\n    \"data\": {\n      \"allProjects\": []\n    }\n  }\n

      Read more about GraphQL here in our documentation.
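      If you'd rather test from the command line than GraphiQL, the same query can be sent with curl (a minimal sketch; substitute your own API URL and token):

      Query via curl (sketch)
      curl -s -X POST https://<YOUR-API-URL>/graphql \
        -H "Authorization: Bearer YOUR-TOKEN-HERE" \
        -H "Content-Type: application/json" \
        --data '{"query":"query allProjects { allProjects { name } }"}'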

    6. Once you get the correct response, we need to add a mutation.

      1. Run the following query:

        Add mutation
        mutation addKubernetes {\n  addKubernetes(input:\n  {\n    name: \"<TARGET-NAME-FROM-REMOTE-VALUES.yml>\",\n    consoleUrl: \"<URL-OF-K8S-CLUSTER>\",\n    token: \"xxxxxx\",\n    routerPattern: \"${environment}.${project}.lagoon.example.com\"\n  }){id}\n}\n
        1. name: get from lagoon-remote-values.yml
        2. consoleUrl: API Endpoint of Kubernetes cluster. Get from values.yml
        3. token: create a token for the lagoon-build-deploy service account

          Create token
            kubectl -n lagoon create token lagoon-build-deploy --duration 3h\n

    Prior to Kubernetes 1.21:

    Use the lagoon-build-deploy token installed by lagoon-remote:

    Use deploy token
      kubectl -n lagoon describe secret \\\n$(kubectl -n lagoon get secret | grep lagoon-build-deploy | awk '{print $1}') | grep token: | awk '{print $2}'\n

    Info

    Authorization tokens for GraphQL are very short term so you may need to generate a new one. Run lagoon login and then cat the .lagoon.yml file to get the new token, and replace the old token in the HTTP header with the new one.

    "},{"location":"installing-lagoon/requirements/","title":"Installing Lagoon Into Existing Kubernetes Cluster","text":""},{"location":"installing-lagoon/requirements/#requirements","title":"Requirements","text":"
    • Kubernetes 1.23+ (Kubernetes 1.21 is supported, but 1.23 is recommended)
    • Familiarity with Helm and Helm Charts, and kubectl.
    • An ingress controller; we recommend ingress-nginx, installed into the ingress-nginx namespace
    • cert-manager (for TLS); we highly recommend using Let's Encrypt
    • StorageClasses (RWO as default, RWX for persistent types)

    Note

    We acknowledge that this is a lot of steps, and our roadmap for the immediate future includes reducing the number of steps in this process.

    "},{"location":"installing-lagoon/requirements/#specific-requirements-as-of-january-2023","title":"Specific requirements (as of January 2023)","text":""},{"location":"installing-lagoon/requirements/#kubernetes","title":"Kubernetes","text":"

    Lagoon supports Kubernetes versions 1.21 onwards. We actively test and develop against Kubernetes 1.24, and also regularly test against 1.21, 1.22, and 1.25.

    The next large round of breaking changes arrives in Kubernetes 1.25, and we will endeavour to stay ahead of these, although this will require a bump in the minimum supported version of Lagoon.

    "},{"location":"installing-lagoon/requirements/#ingress-nginx","title":"ingress-nginx","text":"

    Lagoon is currently configured only for a single ingress-nginx controller, and therefore defining an IngressClass was not necessary in the past.

    In order to use the recent ingress-nginx controllers (v4 onwards, required for Kubernetes 1.22), the following configuration should be used, as per the ingress-nginx docs.

    • ingress-nginx should be configured as the default controller - set .controller.ingressClassResource.default: true in Helm values
    • ingress-nginx should be configured to watch ingresses without an IngressClass set - set .controller.watchIngressWithoutClass: true in Helm values

    This will configure the controller to create any new ingresses with itself as the IngressClass, and also to handle any existing ingresses without an IngressClass set.
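    A minimal sketch of the corresponding ingress-nginx Helm values, based on the two settings above:

    ingress-nginx Helm values (sketch)
    controller:
      ingressClassResource:
        # Make this controller the cluster-default IngressClass
        default: true
      # Also handle existing ingresses that have no IngressClass set
      watchIngressWithoutClass: true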

    Other configurations may be possible, but have not been tested.

    "},{"location":"installing-lagoon/requirements/#harbor","title":"Harbor","text":"

    Versions 2.1 and 2.2+ of Harbor are currently supported. The method of retrieving robot accounts was changed in 2.2, and the Lagoon remote-controller is able to handle these tokens. This means that Harbor has to be configured with the credentials in lagoon-build-deploy - not lagoon-core.

    We recommend installing a Harbor version greater than 2.6.0 with Helm chart 1.10.0 or greater.

    "},{"location":"installing-lagoon/requirements/#k8up-for-backups","title":"k8up for backups","text":"

    Lagoon has built-in configuration for the K8up backup operator. Lagoon can configure prebackup pods, schedules, and retentions, and manage backups and restores for K8up. Lagoon currently only supports the 1.x versions of K8up, owing to a namespace change in v2 onwards, but we are working on a fix.

    K8up v2:

    Lagoon does not currently support K8up v2 onwards due to a namespace change here.

    We recommend installing K8up version 1.2.0 with Helm Chart 1.1.0

    "},{"location":"installing-lagoon/requirements/#storage-provisioners","title":"Storage provisioners","text":"

    Lagoon utilizes a default 'standard' StorageClass for most workloads, and the internal provisioner for most Kubernetes platforms will suffice. This should be configured to be dynamic provisioning and expandable where possible.

    Lagoon also requires a StorageClass called 'bulk' to be available to support persistent pod replicas (across nodes). This StorageClass should support ReadWriteMany (RWX) access mode and should be configured to be dynamic provisioning and expandable where possible. See https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes for more information, and the production drivers list for a complete list of compatible drivers.

    We have currently only included the instructions for (the now deprecated) EFS Provisioner. The production EFS CSI driver has issues with provisioning more than 120 PVCs. We are awaiting possible upstream fixes here and here - but most other providers' CSI drivers should also work, as will configurations with an NFS-compatible server and provisioner.
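    For illustration only, a 'bulk' StorageClass might look like the following sketch; the provisioner is a placeholder and must be replaced with an RWX-capable CSI driver available on your platform:

    bulk StorageClass (sketch)
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: bulk
    # Placeholder: substitute your RWX-capable CSI driver here
    provisioner: example.com/nfs
    allowVolumeExpansion: true
    volumeBindingMode: Immediate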

    "},{"location":"installing-lagoon/requirements/#how-much-kubernetes-experienceknowledge-is-required","title":"How much Kubernetes experience/knowledge is required?","text":"

    Lagoon uses some very involved Kubernetes and cloud-native concepts, and while full familiarity may not be necessary to install and configure Lagoon, diagnosing issues and contributing may prove difficult without a good level of familiarity.

    As an indicator, comfort with the curriculum for the Certified Kubernetes Administrator would be suggested as a minimum.

    "},{"location":"installing-lagoon/update-lagoon/","title":"Updating","text":"
    1. Download newest charts using Helm.

      Download newest charts
      helm repo update\n
    2. Check with helm diff for changes (https://github.com/databus23/helm-diff).

      Check for changes
      helm diff upgrade --install --create-namespace --namespace lagoon-core \\\n-f values.yml lagoon-core lagoon/lagoon-core\n
    3. Back up the Lagoon databases prior to any Helm actions. We also suggest scaling the API to a single pod, to aid the database migration scripts running in the initContainers.

    4. Run the upgrade using Helm.

      Run upgrade
      helm upgrade --install --create-namespace --namespace lagoon-core \\\n-f values.yml lagoon-core lagoon/lagoon-core\n
    5. (Note that as of Lagoon v2.11.0, this step is no longer required.) If upgrading Lagoon Core, ensure you run the rerun_initdb.sh script to perform post-upgrade migrations.

      Run script
      kubectl --namespace lagoon-core exec -it lagoon-core-api-db-0 -- \\\nsh -c /rerun_initdb.sh\n
    6. Re-scale the API pods back to their original level.

    7. If upgrading Lagoon Core, and you have enabled groups/user syncing for OpenSearch, you may additionally need to run the sync:opendistro-security script to update the groups in OpenSearch. This command can also be prefixed with GROUP_REGEX=<group-to-sync> to sync a single group at a time, as syncing the entire group structure may take a long time (see the sketch after the command below).

      Run script
      kubectl --namespace lagoon-core exec -it deploy/lagoon-core-api -- \\\nsh -c "yarn sync:opendistro-security"\n
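      To sync a single group at a time, prefix the script with the GROUP_REGEX variable as described above (a sketch; mygroup is a hypothetical group name):

      Sync a single group (sketch)
      kubectl --namespace lagoon-core exec -it deploy/lagoon-core-api -- \
        sh -c "GROUP_REGEX=mygroup yarn sync:opendistro-security"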

    Check https://github.com/uselagoon/lagoon/releases for additional upgrades.

    "},{"location":"installing-lagoon/update-lagoon/#database-backups","title":"Database Backups","text":"

    You may want to back up the databases before upgrading Lagoon Core. The following will create backups you can use to restore from if required. You can delete them afterwards.

    "},{"location":"installing-lagoon/update-lagoon/#api-db","title":"API DB","text":"Back up API DB
    kubectl --namespace lagoon-core exec -it lagoon-core-api-db-0 -- \\\nsh -c 'mysqldump --max-allowed-packet=500M --events \\\n    --routines --quick --add-locks --no-autocommit \\\n    --single-transaction infrastructure | gzip -9 > \\\n    /var/lib/mysql/backup/$(date +%Y-%m-%d_%H%M%S).infrastructure.sql.gz'\n
    "},{"location":"installing-lagoon/update-lagoon/#keycloak-db","title":"Keycloak DB","text":"Back up Keycloak DB
    kubectl --namespace lagoon-core exec -it lagoon-core-keycloak-db-0 -- \\\nsh -c 'mysqldump --max-allowed-packet=500M --events \\\n    --routines --quick --add-locks --no-autocommit \\\n    --single-transaction keycloak | gzip -9 > \\\n    /var/lib/mysql/backup/$(date +%Y-%m-%d_%H%M%S).keycloak.sql.gz'\n
    "},{"location":"logging/kibana-examples/","title":"Kibana Examples","text":"

    Have you seen the Kibana getting started video and are now ready to work with logs? We are here to help! This page will give you examples of Kibana queries you can use. This is not a Kibana 101 class, but it can help you understand some of what you can do in Kibana.

    Ready to get started? Good!

    Note

    Make sure that you have selected your tenant before starting! You can do that by clicking on the Tenant icon on the left-hand menu. Once you have selected your tenant, click on the Discover icon again to get started.

    "},{"location":"logging/kibana-examples/#router-logs","title":"Router Logs","text":"

    Below you'll find examples for two common log requests:

    • Viewing the total number of hits/requests to your site.
    • Viewing the number of hits/requests from a specific IP address.
    "},{"location":"logging/kibana-examples/#total-number-of-hitsrequests-to-your-site","title":"Total Number of hits/requests to your site","text":"
    • Let's start Kibana up and select Discover (#1 in the screenshot below).
    • Then select the router logs for your project (#2).
    • From there, we will filter some of this information down a bit. Let's focus on our main production environment.
    • In the search bar (#3), enter:

      openshift_project: \"name of your production project\"

    • This will show you all the hits to your production environment in the given time frame.
    • You can change the time frame in the upper right hand corner (#4).
    • Clicking on the arrow next to the entry (#5) will expand it and show you all the information that was captured.
    • You can add any of those fields to the window by hovering over them and clicking add on the left hand side (#6).
    • You can also further filter your results by using the search bar.

    "},{"location":"logging/kibana-examples/#number-of-hitsrequests-from-a-specific-ip-address","title":"Number of hits/requests from a specific IP address","text":"

    Running the query above will give you a general look at all the traffic to your site, but what if you want to narrow in on a specific IP address? Perhaps you want to see how many times an IP has hit your site and what specific pages they were looking at. This next query should help.

    We are going to start off with the same query as above, but we are going to add a couple of things.

    • First, add the following fields: client_ip and http_request.
    • This will show you a list of all IP addresses and the page they requested. Here is what we see for the amazee.io page:

    That looks good, but what if we wanted to just show requests from a specific IP address? You can filter for the address by adding it to your search criteria.

    • We are going to add: AND client_ip: \"IP address\".
    • That will filter the results to just show you hits from that specific IP address, and the page they were requesting. Here is what it looks like for our amazee.io website:
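    Putting both filters together, the full search bar query looks something like this (a sketch; the project name and IP address are hypothetical):

    Example combined query
    openshift_project: "myproject-main" AND client_ip: "203.0.113.5"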

    "},{"location":"logging/kibana-examples/#container-logs","title":"Container Logs","text":"

    Container logs will show you all stdout and stderr messages for your specific container and project. We are going to show an example of getting logs from a specific container and finding specific error numbers in that container.

    "},{"location":"logging/kibana-examples/#logs-from-a-container","title":"Logs from a container","text":"

    Want to see the logs for a specific container (php, nginx, etc)? This section will help! Let's focus on looking at NGINX logs.

    • We start by opening up Kibana and selecting Discover (#1 in the screen shot below).
    • From there, we select the container logs for our project (#2).
    • Let's go to the search bar (#3) and enter: kubernetes.container_name: \"nginx\"
    • This will display all NGINX logs for our project.
    • Clicking on the arrow next to an entry (#4) will expand that entry and show you all of the information it gathered.
    • Let's add the message field and the level field to the view. You can do that by clicking on \"Add\" on the left hand side (#5).
    • You can change the time frame in the upper right hand corner of the screen (#6), in the example below I'm looking at logs for the last 4 hours.

    "},{"location":"logging/kibana-examples/#specific-errors-in-logs","title":"Specific errors in logs","text":"

    Want to see how many 500 Internal Server errors you've had in your NGINX container? You can do that by changing the search query. If you search:

    kubernetes.container_name: \"nginx\" AND message: \"500\"

    That will only display 500 error messages in the NGINX container. You can search for any error message in any container that you would like!
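    The same pattern generalizes to any container and error string; for example (a sketch with hypothetical values):

    Example query
    kubernetes.container_name: "php" AND message: "Fatal error"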

    "},{"location":"logging/kibana-examples/#visualization","title":"Visualization","text":"

    Kibana will also give you the option to create visualizations or graphs. We are going to create a chart to show the number of hits/requests in a month, using the same query we used above.

    1. Click on Visualize on the left hand side of Kibana.
    2. Click on the blue plus sign.
    3. For this example, we are going to select a Vertical Bar chart.
    4. Select the router logs for your project.
    5. Click on X-Axis under Buckets and select Date Histogram, with the interval set to daily.
    6. Success!! You should now see a nice bar graph showing your daily traffic.

    Note

    Make sure that you select an appropriate time frame for the data in the upper right hand corner.

    Here is an example of a daily hits visualization chart:

    Also note that you can save your visualizations (and searches)! That will make it even faster to access them in the future. And because each account has their own Kibana Tenant, no searches or visualizations are shared with another account.

    "},{"location":"logging/kibana-examples/#troubleshooting","title":"Troubleshooting","text":""},{"location":"logging/logging/","title":"Logging","text":"

    Lagoon provides access to the following logs via Kibana:

    • Logs from the Kubernetes Routers, including every single HTTP and HTTPS request with:
      • Source IP
      • URL
      • Path
      • HTTP verb
      • Cookies
      • Headers
      • User agent
      • Project
      • Container name
      • Response size
      • Response time
    • Logs from containers:
      • stdout and stderr messages
      • Container name
      • Project
    • Lagoon logs:
      • Webhooks parsing
      • Build logs
      • Build errors
      • Any other Lagoon related logs
    • Application logs:
      • For Drupal: install the Lagoon Logs module in order to receive logs from Drupal Watchdog.
      • For Laravel: install the Lagoon Logs for Laravel package.
      • For other workloads:
        • Send logs to udp://application-logs.lagoon.svc:5140
        • Ensure logs are structured as JSON encoded objects.
        • Ensure the type field contains the name of the Kubernetes namespace ($LAGOON_PROJECT-$LAGOON_ENVIRONMENT).
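    As a quick way to test the format described above, you can emit a single structured log line from inside a container with netcat, if it is available in your image (a sketch; myproject-main stands in for your real $LAGOON_PROJECT-$LAGOON_ENVIRONMENT value):

    Send a test log (sketch)
    echo '{"type": "myproject-main", "message": "hello from my app", "severity": "info"}' \
      | nc -u -w1 application-logs.lagoon.svc 5140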

    To access the logs, please check with your Lagoon administrator to get the URL for the Kibana route (for amazee.io, this is https://logs.amazeeio.cloud/).

    Each Lagoon user account has their own login and will see the logs only for the projects to which they have access.

    Each Lagoon user account also has their own Kibana Tenant, which means no saved searches or visualizations are shared with another account.

    If you would like to know more about how to use Kibana: https://www.elastic.co/webinars/getting-started-kibana.

    "},{"location":"resources/faq/","title":"FAQ","text":""},{"location":"resources/faq/#how-do-i-contact-my-lagoon-administrator","title":"How do I contact my Lagoon administrator?","text":"

    You should have a private Slack channel that was set up for you to communicate - if not, or you've forgotten how to contact us, reach out at support@amazee.io.

    "},{"location":"resources/faq/#i-found-a-bug","title":"I found a bug! \ud83d\udc1e","text":"

    If you've found a bug or security issue, please send your findings to support@amazee.io. Please DO NOT file a GitHub issue for them.

    "},{"location":"resources/faq/#im-interested-in-amazeeios-hosting-services-with-lagoon","title":"I'm interested in amazee.io's hosting services with Lagoon","text":"

    That's great news! You can contact them via email at inquiries@amazee.io.

    "},{"location":"resources/faq/#how-can-i-restore-a-backup","title":"How can I restore a backup?","text":"

    We have backups available for files and databases, typically taken every 24 hours at most. These backups are stored offsite.

    We keep up to 7 daily backups and 4 weekly backups.

    If you ever need to recover or restore a backup, feel free to submit a ticket or send us a message via chat and we will be more than happy to help!

    "},{"location":"resources/faq/#how-can-i-download-a-database-dump","title":"How can I download a database dump?","text":""},{"location":"resources/faq/#im-getting-an-invalid-ssl-certificate-error","title":"I'm getting an invalid SSL certificate error","text":"

    The first thing to try is what is listed in our documentation about SSL.

    If you follow those steps, and you are still seeing an error, please submit a ticket or send us a message on chat and we can help resolve this for you.

    "},{"location":"resources/faq/#im-getting-an-array-error-when-running-a-drush-command","title":"I'm getting an \"Array\" error when running a Drush command","text":"

    This was a bug that was prevalent in Drush versions 8.1.16 and 8.1.17. The error would look something like this:

    Text Only
    The command could not be executed successfully (returned: Array [error]\n(\n[default] => Array\n(\n[default] => Array\n(\n[driver] => mysql\n[prefix] => Array\n(\n[default] =>\n)\n, code: 0)\nError: no database record could be found for source @main [error]\n

    Upgrading Drush should fix that for you. We strongly suggest that you use version 8.3 or newer. Once Drush is upgraded the command should work!

    "},{"location":"resources/faq/#im-seeing-an-internal-server-error-when-trying-to-access-my-kibana-logs","title":"I'm seeing an Internal Server Error when trying to access my Kibana logs","text":"

    No need to panic! This usually happens when a tenant has not been selected. To fix this, follow these steps:

    1. Go to \"Tenants\" on the left-hand menu of Kibana.
    2. Click on your tenant name.
    3. You'll see a pop-up window that says: \"Tenant Change\" and the name of your tenant.
    4. Go back to the \"Discover\" tab and attempt your query again.

    You should now be able to see your logs.

    "},{"location":"resources/faq/#im-unable-to-ssh-into-any-environment","title":"I'm unable to SSH into any environment","text":"

    I'm unable to SSH into any environment. I'm getting the following message: Permission denied (publickey). When I run drush sa no aliases are returned.

    This typically indicates an issue with Pygmy. You can find our troubleshooting docs for Pygmy here: https://pygmy.readthedocs.io/en/master/troubleshooting/

    "},{"location":"resources/faq/#how-can-i-check-the-status-of-a-build","title":"How can I check the status of a build?","text":""},{"location":"resources/faq/#how-do-i-add-a-cron-job","title":"How do I add a cron job?","text":""},{"location":"resources/faq/#how-do-i-add-a-new-route","title":"How do I add a new route?","text":""},{"location":"resources/faq/#how-do-i-remove-a-route","title":"How do I remove a route?","text":"

    You will need to contact your helpful Lagoon administrator should you need to remove a route. You can use the Slack channel that was set up for you to communicate - if not, you can always reach us at support@amazee.io or on Discord.

    "},{"location":"resources/faq/#when-i-run-pygmy-status-no-keys-are-loaded","title":"When I run pygmy status, no keys are loaded","text":"

    You'll need to load your SSH key into pygmy. Here's how: https://pygmy.readthedocs.io/en/master/ssh_agent

    "},{"location":"resources/faq/#when-i-run-drush-sa-no-aliases-are-returned","title":"When I run drush sa no aliases are returned","text":"

    This typically indicates an issue with Pygmy. You can find our troubleshooting docs for Pygmy here: https://pygmy.readthedocs.io/en/master/troubleshooting

    "},{"location":"resources/faq/#my-deployments-fail-with-a-message-saying-drush-needs-a-more-functional-environment","title":"My deployments fail with a message saying: \"drush needs a more functional environment\"","text":"

    This usually means that there is no database uploaded to the project. Follow our step-by-step guide to add a database to your project.

    "},{"location":"resources/faq/#when-i-start-pygmy-i-see-an-address-already-in-use-error","title":"When I start Pygmy I see an \"address already in use\" error?","text":"

    Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use Error: failed to start containers: amazeeio-haproxy

    This is a known error! Most of the time it means that there is already something running on port 80. You can find the culprit by running the following command:

    Text Only
    netstat -ltnp | grep -w ':80'\n

    That should list everything running on port 80. Kill the process running on port 80. Once port 80 is freed up, Pygmy should start up with no further errors.
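    On systems without netstat (such as recent macOS), lsof gives the same information (a sketch):

    Text Only
    sudo lsof -i :80
    # then stop the offending process by its PID
    kill <PID>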

    "},{"location":"resources/faq/#how-can-i-change-branchespr-environmentsproduction-on-my-project","title":"How can I change branches/PR environments/production on my project?","text":"

    You can make that change using the Lagoon API! You can find the documentation for this change in our GraphQL documentation.

    "},{"location":"resources/faq/#how-do-i-add-a-redirect","title":"How do I add a redirect?","text":""},{"location":"resources/faq/#how-can-i-add-new-users-and-ssh-keys-to-my-projectgroup","title":"How can I add new users (and SSH keys) to my project/group?","text":"

    This can be done via the Lagoon API. You can find the documentation for this change in our GraphQL documentation.

    "},{"location":"resources/faq/#can-an-environment-be-completely-deleted-to-roll-out-large-code-changes-to-my-project","title":"Can an environment be completely deleted to roll out large code changes to my project?","text":"

    Environments are fully built from scratch at each deploy. Dropping the old database and files and pushing your code will result in a fresh, clean build. Don't forget to re-sync!

    It is possible to delete an environment via GraphQL. You can find the instructions in our GraphQL documentation.

    "},{"location":"resources/faq/#how-do-i-get-my-new-environment-variable-to-show-up","title":"How do I get my new environment variable to show up?","text":"

    Once you've added a runtime environment variable to your production environment via GraphQL, all you need to do is run a deploy in order for your change to show up on your environment.

    "},{"location":"resources/faq/#how-do-i-sftp-files-tofrom-my-lagoon-environment","title":"How do I SFTP files to/from my Lagoon environment?","text":"

    For cloud hosting customers, you can SFTP to your Lagoon environment by using the following information:

    • Server Hostname: ssh.lagoon.amazeeio.cloud
    • Port: 32222
    • Username: <Project-Environment-Name>

    Your username is going to be the name of the environment you are connecting to, most commonly in the pattern PROJECTNAME-ENVIRONMENT.
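    For example, a connection with the command-line sftp client would look like this (a sketch; myproject-main is a hypothetical environment name):

    SFTP connection (sketch)
    sftp -P 32222 myproject-main@ssh.lagoon.amazeeio.cloud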

    You may also be interested in checking out our new Lagoon Sync tool, which you can read about here: https://github.com/uselagoon/lagoon-sync

    Authentication also happens automatically via SSH Public & Private Key Authentication.

    "},{"location":"resources/faq/#i-dont-want-to-use-lets-encrypt-i-have-an-ssl-certificate-i-would-like-to-install","title":"I don't want to use Let's Encrypt. I have an SSL certificate I would like to install","text":"

    We can definitely help with that. Once you have your own SSL certificate, feel free to submit a ticket or send us a message via chat and we will be more than happy to help! You will need to send us the following files:

    • Certificate key (.key)
    • Certificate file (.crt)
    • Intermediate certificates (.crt)

    Also, you will need to set the tls-acme option in .lagoon.yml to false.

    "},{"location":"resources/faq/#is-it-possible-to-mount-an-external-volume-efsfusesmbetc-into-lagoon","title":"Is it possible to mount an external volume (EFS/Fuse/SMB/etc) into Lagoon?","text":"

    Mounting an external volume would need to be handled completely inside of your containers; Lagoon does not provide support for this type of connection as part of the platform.

    A developer can handle this by installing the necessary packages into the container (via the Dockerfile), and ensuring the volume mount is connected via a pre- or post-rollout task.

    "},{"location":"resources/faq/#is-there-a-way-to-stop-a-lagoon-build","title":"Is there a way to stop a Lagoon build?","text":"

    If you have a build that has been running for a long time, and want to stop it, you will need to reach out to support. Currently, builds can only be stopped by users with admin access to the cluster.

    "},{"location":"resources/faq/#we-installed-the-elasticsearchsolr-service-on-our-website-how-can-we-get-access-to-the-ui-port-92008983-from-a-browser","title":"We installed the Elasticsearch\\Solr service on our website. How can we get access to the UI (port 9200/8983) from a browser?","text":"

    We suggest only exposing web services (NGINX/Varnish/Node.js) in your deployed environments. Locally, you can get the ports mapped for these services by checking docker-compose ps, and then load http://localhost:<port> in your browser.

    "},{"location":"resources/faq/#i-have-a-question-that-isnt-answered-here","title":"I have a question that isn't answered here","text":"

    You can reach out to the team via Discord or email at uselagoon@amazee.io.

    "},{"location":"resources/glossary/","title":"Glossary","text":"Term Definition Access Mode Controls how a persistent volume can be accessed. Active/Standby Active/Standby deployments, also known as blue/green deployments, are a way to seamlessly switch over your production content. Ansible An open-source suite of software tools that enables infrastructure as code. AWS Amazon Web Services AWS Glacier A secure and inexpensive S3 storage for long-term backup. BitBucket Git hosting owned by Atlassian, which integrates with their tools. Brew Homebrew is a package manager for OSX. CA A Certificate Authority is a trusted entity that issues Secure Sockets Layer (SSL) certificates. CDN Content Delivery Network - distributes content via caching CI Continuous Integration CIDR Classess Inter-Domain Routing - a method of assigning IP addresses CLI Command Line Interface Cluster A unified group of servers or VMs, distributed and managed together, which serves one entity to ensure high availability, load balancing, and scalability. CMS Content Management System Cron job The cron command-line utility is a job scheduler on Unix-like operating systems. Users who set up and maintain software environments use cron to schedule jobs, also known as cron jobs, to run periodically at fixed times, dates, or intervals. Composer A package manager DDoS Distributed Denial of Service DNS Domain Name System Docker A container engine using Linux features and automating application deployment. Docker Compose A tool for defining and running Docker applications via YAML files. Drupal Open-source Content Management System Drush A command line shell for Drupal. EC2 Amazon Elastic Compute Cloud Elasticsearch An open-source search engine. It provides a distributed, multi-tenant-capable full-text search engine with a web interface and schema-free JSON documents. Galera A generic synchronous multi-master replication library for transactional databases. Git A free and open-source distributed version control system. Git Hash/SHA A generated string that identifies each commit. Uses the SHA-1 algorithm GitHub A proprietary version control hosting company using Git. A subsidiary of Microsoft, it offers all of the distributed version control and source code management functionality of Git as well as additional features. GitLab A web-based Git repository manager with CI capabilities. GraphQL An open-source data query and manipulation language for APIs, and a runtime for fulfilling queries with existing data. Harbor An open source container image registry that secures images with role-based access control, scans images for vulnerabilities, and signs images as trusted. Helm A package manager for Kubernetes, it helps you manage Kubernetes applications. Helm Charts Helm Charts help you define, install, and upgrade even the most complex Kubernetes application. HTTP HyperText Transfer Protocol. HTTP is the underlying protocol used by the World Wide Web and this protocol defines how messages are formatted and transmitted, and what actions Web servers and browsers should take in response to various commands. IAM AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. IDE An integrated development environment is a software application that provides comprehensive facilities to computer programmers for software development. An IDE normally consists of at least a source code editor, build automation tools, and a debugger. 
Ingress controller An Ingress controller is a specialized load balancer for Kubernetes (and other containerized) environments. IPTables A command line utility for configuring Linux kernel firewall. Jenkins An open-source automation server. k3s A highly available, certified Kubernetes distribution. k3d k3d is a lightweight wrapper to run k3s in Docker. k8s Numeronym for Kubernetes (K + 8 letters + s) k8up K8up is a backup operator that will handle storage and app backups on a k8s/OpenShift cluster. Kibana An open-source data visualization plugin for Elasticsearch. It provides visualization capabilities on top of the content indexed on an Elasticsearch cluster. KinD Kubernetes in Docker - a tool for running local Kubernetes clusters using Docker container \u201cnodes\u201d. Kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI. kubectl The Kubernetes command-line tool which allows you to run commands against Kubernetes clusters. Kubernetes An open-source system for automating deployment, scaling, and management of containerized applications. Lagoon An open-source application delivery platform for Kubernetes. Lagoonize Configuration changes to allow your app to run on Lagoon. Lando A free, open source, cross-platform, local development environment and DevOps tool built on Docker. Laravel A free, open-source PHP web framework, following the model\u2013view\u2013controller (MVC) architectural pattern and based on Symfony. Let's Encrypt Aa free, automated, and open certificate authority (CA). MariaDB A community-developed, commercially supported fork of the MySQL relational database management system, intended to remain free and open-source software under the GNU General Public License. Master node A single node in the cluster on which a collection of processes which manage the cluster state are running. Microservice The practice of breaking up an application into a series of smaller, more specialized parts, each of which communicate with one another across common interfaces such as APIs and REST interfaces like HTTP MongoDB MongoDB is a cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with schema. Multi-Tenant A single instance of software runs on a server and serves multiple tenants - a tenant is a group of users who share common access with privileges to access the software instance. The software is designed to provide each tenant a share of the resources. MVC Model-view-controller - an architectural pattern that separates an application into three main logical components: the model, the view, and the controller. Each of these components are built to handle specific development aspects of an application. NGINX NGINX is a web server which can also be used as a reverse proxy, load balancer, mail proxy and HTTP cache. Node Single EC2 instance (AWS virtual machine) Node.js An open-source, cross-platform, JavaScript runtime environment that executes JavaScript code outside of a browser. OpenSearch A community-driven, Apache 2.0-licensed open source search and analytics suite that makes it easy to ingest, search, visualize, and analyze data. OpenShift Container application platform that brings Docker and Kubernetes to the enterprise. PHP PHP (Personal Home Page) is a general-purpose programming language originally designed for web development. PhpStorm A development tool (IDE) for PHP and web projects. Pod A group of containers that are deployed together on the same host. 
The basic unit that Kubernetes works with. PostgreSQL A free and open-source relational database management system emphasizing extensibility and technical standards compliance. Public/Private Key Public-key encryption is a cryptographic system that uses two keys -- a public key known to everyone and a private or secret key known only to the recipient of the message. Puppet An open-source software configuration management and deployment tool. PV PersistentVolume - a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. PVC Persistent Volume Claim - a request for storage by a user. Python Python is an open-source, interpreted, high-level, general-purpose programming language. RabbitMQ An open-source message-broker software. RBAC Role-Based Access Control RDS Relational Database Service Redis An open source, in-memory data store used as a database, cache, streaming engine, and message broker. Restic An open-source backup program. ROX Kubernetes access mode ReadOnlyMany - the volume can be mounted as read-only by many nodes. Ruby An interpreted, high-level, general-purpose programming language which supports multiple programming paradigms. It was designed with an emphasis on programming productivity and simplicity. In Ruby, everything is an object, including primitive data types. RWO Kubernetes access mode ReadWriteOnce - the volume can be mounted as read-write by a single node. ReadWriteOnce access mode still can allow multiple pods to access the volume when the pods are running on the same node. RWOP Kubernetes access mode ReadWriteOncePod - the volume can be mounted as read-write by a single Pod. Use ReadWriteOncePod access mode if you want to ensure that only one pod across whole cluster can read that PVC or write to it. This is only supported for CSI volumes and Kubernetes version 1.22+. RWX Kubernetes access mode ReadWriteMany - the volume can be mounted as read-write by many nodes. SHA-1 Secure Hash Algorithm 1, a hash function which takes an input and produces a 160-bit hash value known as a message digest \u2013 typically rendered as 40 hexadecimal digits. It was designed by the United States National Security Agency, and is a U.S. Federal Information Processing Standard. Solr An open-source enterprise-search platform, written in Java. SSH Secure Socket Shell, a network protocol that provides administrators with a secure way to access a remote computer. SSL Secure Socket Layer Storage Classes A StorageClass provides a way for Kubernetes administrators to describe the \"classes\" of storage they offer. Different classes might map to quality-of-service levels, or to backup policies, or to arbitrary policies determined by the cluster administrators Symfony Symfony is a PHP web application framework and a set of reusable PHP components/libraries, Drupal 8 and up are based on Symfony. TCP Transmission Control Protocol, a standard that defines how to establish and maintain a network conversation through which application programs can exchange data. TLS Transport Layer Security Trivy A simple and comprehensive vulnerability scanner for containers, suitable for CI. TTL Time to live or hop limit is a mechanism that limits the lifespan or lifetime of data in a computer or network. Uptime Robot Uptime monitoring service. Varnish A powerful, open-source HTTP engine/reverse HTTP proxy that can speed up a website by caching (or storing) a copy of a webpage the first time a user visits. 
VM Virtual Machine Webhook A webhook is a way for an app like GitHub, GitLab, Bitbucket, etc, to provide other applications with immediate data and act upon something, like a pull request. YAML Yet Another Markup Language - YAML is a human-readable data-serialization language. It is commonly used for configuration files and in applications where data is being stored or transmitted."},{"location":"resources/tutorials-and-webinars/","title":"Tutorials, Webinars, and Videos","text":""},{"location":"resources/tutorials-and-webinars/#intro-to-lagoon-webinar","title":"Intro to Lagoon Webinar","text":"

    [Slides]

    "},{"location":"resources/tutorials-and-webinars/#advance-lando-ing-with-lagoon","title":"Advance Lando-ing with Lagoon","text":""},{"location":"resources/tutorials-and-webinars/#webinar-lagoon-insights","title":"Webinar - Lagoon Insights","text":""},{"location":"resources/tutorials-and-webinars/#lagoon-deployment-demo","title":"Lagoon Deployment Demo","text":""},{"location":"resources/tutorials-and-webinars/#how-to-manage-multiple-drupal-sites-with-lagoon","title":"How to Manage Multiple Drupal Sites with Lagoon","text":"

    [Slides]

    "},{"location":"resources/tutorials-and-webinars/#kubernetes-webinar-101","title":"Kubernetes Webinar 101","text":"

    [Slides]

    "},{"location":"resources/tutorials-and-webinars/#kubernetes-webinar-102","title":"Kubernetes Webinar 102","text":"

    [Slides]

    "},{"location":"resources/tutorials-and-webinars/#server-side-rendering-best-practices-how-we-run-decoupled-websites-with-110-million-hits-per-month","title":"Server-side Rendering Best Practices: How We Run Decoupled Websites with 110 Million Hits per Month","text":""},{"location":"resources/tutorials-and-webinars/#lagoon-opensource-docker-build-deployment-system-with-full-drupal-support","title":"Lagoon: OpenSource Docker Build & Deployment System with Full Drupal Support","text":""},{"location":"resources/tutorials-and-webinars/#how-do-i-fix-an-internal-server-error-in-kibana","title":"How do I fix an internal server error in Kibana?","text":""},{"location":"resources/tutorials-and-webinars/#how-do-i-add-a-new-route","title":"How do I add a new route?","text":""},{"location":"resources/tutorials-and-webinars/#how-do-i-check-the-status-of-a-build","title":"How do I check the status of a build?","text":""},{"location":"resources/tutorials-and-webinars/#how-do-i-add-a-redirect-in-lagoon","title":"How do I add a redirect in Lagoon?","text":""},{"location":"resources/tutorials-and-webinars/#how-do-i-download-a-database-dump","title":"How do I download a database dump?","text":""},{"location":"resources/tutorials-and-webinars/#how-do-i-add-a-cron-job","title":"How do I add a cron job?","text":""},{"location":"resources/tutorials-and-webinars/#deploying-web-applications-on-kubernetes-toby-bellwood-techweek21-talk","title":"Deploying web applications on Kubernetes - Toby Bellwood | Techweek21 Talk","text":""},{"location":"resources/tutorials-and-webinars/#dealing-with-unprecedented-scale-during-covid-19-sean-hamlin-techweek21-talk","title":"Dealing with unprecedented scale during Covid-19 - Sean Hamlin| Techweek21 Talk","text":""},{"location":"resources/tutorials-and-webinars/#silverstripe-from-local-to-live-on-lagoon-thom-toogood-techweek21-talk","title":"Silverstripe from local to live on Lagoon -Thom Toogood | Techweek21 Talk","text":""},{"location":"using-lagoon-advanced/active-standby/","title":"Active/Standby","text":""},{"location":"using-lagoon-advanced/active-standby/#configuration","title":"Configuration","text":"

    To change an existing project to support active/standby you'll need to configure some project settings with the Lagoon API.

    • productionEnvironment should be set to the branch name of the current active environment.
    • standbyProductionEnvironment should be set to the branch name of the current environment that is in standby.
    Update project settings
    mutation updateProject {\n  updateProject(input:{\n    id:1234\n    patch:{\n      productionEnvironment:\"production-brancha\"\n      standbyProductionEnvironment:\"production-branchb\"\n    }\n  }){\n    standbyProductionEnvironment\n    name\n    productionEnvironment\n  }\n}\n
    "},{"location":"using-lagoon-advanced/active-standby/#lagoonyml-production_routes","title":".lagoon.yml - production_routes","text":"

    To configure a project for active/standby in the .lagoon.yml file, you'll need to configure the production_routes section with any routes you want to attach to the active environment, and any routes to the standby environment. During an active/standby switch, these routes will migrate between the two environments.

    If you have two production environments, production-brancha and production-branchb, with the current active production environment as production-brancha then:

    • Routes under production_routes.active will direct you to production-brancha.
    • Routes under production_routes.standby will direct you to production-branchb.

    During an active/standby switch, the routes will swap:

    • Routes under production_routes.active will direct you to production-branchb.
    • Routes under production_routes.standby will direct you to production-brancha.
    .lagoon.yml
production_routes:\n  active:\n    routes:\n      - nginx:\n        - example.com:\n            tls-acme: 'false'\n        - active.example.com:\n            tls-acme: 'false'\n  standby:\n    routes:\n      - nginx:\n        - standby.example.com:\n            tls-acme: 'false'\n

    Info

Any routes that are under the environments.[name].routes section will not be moved as part of active/standby. These routes will always be attached to the environment as defined. If you do need a specific route to be migrated during an active/standby switch, remove it from the environments section and place it under the appropriate active or standby key of the production_routes section. See more about routes in .lagoon.yml.

    "},{"location":"using-lagoon-advanced/active-standby/#triggering-a-switch-event","title":"Triggering a switch event","text":""},{"location":"using-lagoon-advanced/active-standby/#via-the-ui","title":"via the UI","text":"

    To trigger the switching of environment routes, you can visit the standby environment in the Lagoon UI and click on the button labeled Switch Active/Standby environments. You will be prompted to confirm your action.

    Once confirmed, it will take you to the tasks page where you can view the progress of the switch.

    "},{"location":"using-lagoon-advanced/active-standby/#via-the-api","title":"via the API","text":"

    To trigger an event to switch the environments, run the following GraphQL mutation. This will tell Lagoon to begin the process.

    Active Standby Switch
    mutation ActiveStandby {\n  switchActiveStandby(\n    input:{\n      project:{\n        name:\"drupal-example\"\n      }\n    }\n  ){\n    id\n    remoteId\n  }\n}\n

A task is created in the current active environment's tasks tab when a switch event is triggered. You can check the status of the switch there.

    Using the remoteId from the switchActiveStandby mutation, we can also check the status of the task.

    Check task status
    query getTask {\n  taskByRemoteId(id: \"<remoteId>\") {\n    id\n    name\n    created\n    started\n    completed\n    status\n    logs\n  }\n}\n
    "},{"location":"using-lagoon-advanced/active-standby/#drush-aliases","title":"drush aliases","text":"

    By default, projects will be created with the following aliases that will be available when active/standby is enabled on a project.

    • lagoon-production
    • lagoon-standby

    The lagoon-production alias will point to whichever site is defined as productionEnvironment, and lagoon-standby will always point to the site that is defined as standbyProductionEnvironment.

    These aliases are configurable by updating the project. Be aware that changing them may require you to update any scripts that rely on them.

    Update Drush Aliases
    mutation updateProject {\n  updateProject(input:{\n    id:1234\n    patch:{\n      productionAlias:\"custom-lagoon-production-alias\"\n      standbyAlias:\"custom-lagoon-standby-alias\"\n    }\n  }){\n    productionAlias\n    name\n    standbyAlias\n  }\n}\n
    "},{"location":"using-lagoon-advanced/active-standby/#disabling-activestandby","title":"Disabling Active/Standby","text":"

You need to decide which of these two branches you want to keep as the main environment going forward, and then ensure it is set as the active branch (e.g. production-branchb).

    1. In your .lagoon.yml file in this (now active) branch, move the routes from the production_routes.active.routes section into the environments.production-branchb section. This will mean that they are then attached to the production-branchb environment only.
    2. Once you've done this, you can delete the entire production_routes section from the .lagoon.yml file and re-deploy the production-branchb environment.
    3. If you no longer need the other branch production-brancha, you can delete it.
4. If you keep the branch in Git, you should also remove the production_routes from that branch's .lagoon.yml, just to prevent any confusion. The branch will remain a production environment type unless you delete and redeploy it (wiping all storage and databases, etc.).
    5. Once you've got the project in a state where there is only the production-branchb production environment, and all the other environments are development, update the project to remove the standbyProductionEnvironment from the project so that the active/standby labels on the environments go away.
    Turn off Active/Standby
    mutation updateProject {\n  updateProject(input:{\n    id:1234\n    patch:{\n      productionEnvironment:\"production-branchb\"\n      standbyProductionEnvironment:\"\"\n    }\n  }){\n    standbyProductionEnvironment\n    name\n    productionEnvironment\n  }\n}\n
    "},{"location":"using-lagoon-advanced/active-standby/#notes","title":"Notes","text":"

When the active/standby trigger has been executed, the productionEnvironment and standbyProductionEnvironment values will switch within the Lagoon API. Both environments are still classed as production environment types. We use productionEnvironment to determine which one is labelled as active. For more information on the differences between environment types, read the documentation for environment types.

    Get environments via GraphQL
    query projectByName {\n  projectByName(name:\"drupal-example\"){\n    productionEnvironment\n    standbyProductionEnvironment\n  }\n}\n

    Before switching environments:

    Results of environment query
    {\n  \"data\": {\n    \"projectByName\": {\n      \"productionEnvironment\": \"production-brancha\",\n      \"standbyProductionEnvironment\": \"production-branchb\"\n    }\n  }\n}\n

    After switching environments:

    Results of environment query
    {\n  \"data\": {\n    \"projectByName\": {\n      \"productionEnvironment\": \"production-branchb\",\n      \"standbyProductionEnvironment\": \"production-brancha\"\n    }\n  }\n}\n
    "},{"location":"using-lagoon-advanced/backups/","title":"Backups","text":"

    Lagoon makes use of the k8up operator to provide backup functionality for both database data as well as containers' persistent storage volumes. This operator utilizes Restic to catalog these backups, which is typically connected to an AWS S3 bucket to provide secure, off-site storage for the generated backups.

    "},{"location":"using-lagoon-advanced/backups/#production-environments","title":"Production Environments","text":""},{"location":"using-lagoon-advanced/backups/#backup-schedules","title":"Backup Schedules","text":"

Backups of databases and containers' persistent storage volumes happen nightly within production environments by default.

    If a different backup schedule for production backups is required, this can be specified at a project level via setting the \"Backup Schedule\" variables in the project's .lagoon.yml file.
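
As a sketch, such a schedule could look like the following (the backup-schedule keys come from the .lagoon.yml reference - confirm them against the reference for your Lagoon version; M stands for a randomized minute):

.lagoon.yml
backup-schedule:\n  production:\n    # Run nightly backups at a randomized minute (M) between 22:00 and 02:00.\n    schedule: \"M H(22-2) * * *\"\n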

    "},{"location":"using-lagoon-advanced/backups/#backup-retention","title":"Backup Retention","text":"

    Production environment backups will be held according to the following schedule by default:

    • Daily: 7
    • Weekly: 6
    • Monthly: 1
    • Hourly: 0

    If a different retention period for production backups is required, this can be specified at a project level via setting the \"Backup Retention\" variables in the project's .lagoon.yml file.
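
As a sketch, a retention block mirroring the defaults above could look like this (the backup-retention keys come from the .lagoon.yml reference - confirm them for your Lagoon version):

.lagoon.yml
backup-retention:\n  production:\n    daily: 7\n    weekly: 6\n    monthly: 1\n    hourly: 0\n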

    "},{"location":"using-lagoon-advanced/backups/#development-environments","title":"Development Environments","text":"

Backups of development environments are attempted nightly and are strictly a best-effort service.

    "},{"location":"using-lagoon-advanced/backups/#retrieving-backups","title":"Retrieving Backups","text":"

    Backups stored in Restic will be tracked within Lagoon, and can be recovered via the \"Backup\" tab for each environment in the Lagoon UI.

    "},{"location":"using-lagoon-advanced/backups/#custom-backup-andor-restore-locations","title":"Custom Backup and/or Restore Locations","text":"

    Lagoon supports custom backup and restore locations via the use of the \"Custom Backup Settings\" and/or \"Custom Restore Settings\" variables stored in the Lagoon API for each project.

    Danger

    Proceed with caution: Setting these variables will override backup/restore storage locations that may be configured at a cluster level. Any misconfiguration will cause backup/restore failures.

    "},{"location":"using-lagoon-advanced/base-images/","title":"Base Images","text":""},{"location":"using-lagoon-advanced/base-images/#what-is-a-base-image","title":"What is a base image?","text":"

A base image is a Docker image that is used as the foundation for projects deployed on Lagoon. A base image provides a way to ensure that nothing is brought into the codebase/project from upstream that has not been audited. It also allows us to ensure that anything we might need on the deployed environment is available - from lower-level libraries to application-level themes and modules.

Base images save time and resources when you know what system is being deployed to - if shared packages are included in the base image, they don't have to be deployed to hundreds of sites individually.

    "},{"location":"using-lagoon-advanced/base-images/#derived-images","title":"Derived images","text":"

    A derived image is one that extends a base image. For example, you might need to make several blog sites. You take our Drupal image, customize it to include all of the modules and themes you need for your blog sites, and deploy them all with that blog image. Templates are derived from base images.

    All derived images should pull in the composer.json file (via repositories like Packagist, Satis, or GitHub) so that they are using the most recent versions of the base packages.

Further, the derived image includes a call to the script /build/pre_composer, which can be used by the base image to run scripts, updates, etc., downstream in the derived images. For instance, it runs by default whenever any package is updated or installed in the derived image, and the pre_composer script then updates the base image packages.

    "},{"location":"using-lagoon-advanced/base-images/#anatomy-of-a-base-image","title":"Anatomy of a base image","text":"

    Info

This document uses Drupal and Laravel base images as examples, as it was originally written for a client who uses those technologies in their Lagoon projects. It will be expanded to cover the contents of other base images, but the process is the same regardless of what your base image contains.

    Base images are managed with Composer and hosted in BitBucket, GitHub, or GitLab (whatever your team is using). Each base image has its own repository.

    "},{"location":"using-lagoon-advanced/base-images/#metapackages","title":"Metapackages","text":"

    The metapackage is a Composer package that wraps several other components. These include, for example, the core files for Laravel or Drupal, along with any needed modules or themes. This way, you do not need to include Laravel or Drupal, etc., as a dependency in your project.

Here's an example from the composer.json in a Laravel base image:

    composer.json
    \"require\": {\n    \"amazeelabs/algm_laravel_baseimage\": \"*\"\n},\n

    We only require this metapackage, which points to a GitHub repository.

    "},{"location":"using-lagoon-advanced/base-images/#docker-composeyml","title":"docker-compose.yml","text":"

    Other pieces of your project are defined in docker-compose.yml. For example, if you have a Drupal project, you need the Drupal image, but you also need MariaDB, Solr, Redis, and Varnish. We have versions of these services optimized for Drupal, all of which are included in docker-compose.yml.

    "},{"location":"using-lagoon-advanced/base-images/#drupal","title":"Drupal","text":"

    The Drupal base image contains the following contributed tools and modules, in addition to Drupal core:

    • Drupal Console
    • Drush
    • Configuration Installer
    • Redis
    • Poll
    • Search API
    • Search API Solr
    • Varnish Purge
    • Purge
    • Admin Toolbar
    • CDN
    • Password Policy
    • Pathauto
    • Ultimate Cron
    "},{"location":"using-lagoon-advanced/base-images/#laravel","title":"Laravel","text":""},{"location":"using-lagoon-advanced/base-images/#configuration","title":"Configuration","text":"

The base images provide default values for the environment variables used by Laravel.

    These are values for:

    • DB_CONNECTION
    • DB_HOST
    • DB_PORT
    • DB_DATABASE
    • DB_USERNAME
    • DB_PASSWORD
    • REDIS_HOST
    • REDIS_PASSWORD
    • REDIS_PORT

    Ensure that your config files (typically located in /config) make use of these by default.

    "},{"location":"using-lagoon-advanced/base-images/#queues","title":"Queues","text":"

    If your project makes use of queues, you can make use of the artisan-worker service. It is a worker container, used for executing artisan queue:work. This is disabled by default - look at the comments in docker-compose.yml.

    "},{"location":"using-lagoon-advanced/base-images/#understanding-the-process-of-building-a-base-image","title":"Understanding the process of building a base image","text":"

There are several parts to the process of building a base image. All of the major steps are represented in the Makefile. The Jenkinsfile contains a more stripped-down view. Taking a look at both files will give you a good understanding of what happens during this process. Most steps can be tested locally (this is important when building new versions of the base image). After you've created and tested everything locally and pushed it up, the actual base image is built by Jenkins and pushed to Harbor.

    "},{"location":"using-lagoon-advanced/base-images/#makefile-and-build-assumptions","title":"Makefile and build assumptions","text":"

    If you're planning on running locally, there are some minimum environment variables that need to be present to build at all.

    "},{"location":"using-lagoon-advanced/base-images/#base-image-build-variables","title":"Base image build variables","text":"

    Variables injected into the base image build process and where to find them.

    • BUILD_NUMBER - This is injected by Jenkins automatically.
    • GIT_BRANCH - This is provided by the Jenkins build process itself. Depends on the branch being built at the time (develop, main, etc.).
    • DOCKER_REPO/DOCKER_HUB - This is defined inside the Jenkinsfile itself. It points to the Docker project and hub into which the resulting images will be pushed.
• DOCKER_USERNAME/DOCKER_PASSWORD - These are used to actually log into the Docker repository early in the build. These variables are stored inside of the Jenkins credentials. These are used in the Jenkinsfile itself and are not part of the Makefile. This means that if you're building base images outside of Jenkins (i.e. locally, to test, etc.) you have to run a docker login manually before running any of the make steps.

    In practice, this means that if you're running any of the make targets on your local machine, you'll want to ensure that these are available in the environment - even if this is just setting them when running make from the command line, as an example:

    Setting make targets locally
    GIT_BRANCH=example_branch_name DOCKER_HUB=the_docker_hub_the_images_are_pushed_to DOCKER_REPO=your_docker_repo_here BUILD_NUMBER=<some_integer> make images_remove\n
    "},{"location":"using-lagoon-advanced/base-images/#makefile-targets","title":"Makefile targets","text":"

    The most important targets are the following:

• images_build: Given the environment variables, this will build and tag the images for publication.
• images_publish: Pushes built images to a Docker repository.
• images_start: Will start the images for testing, etc.
• images_test: Runs basic tests against images.
• images_remove: Removes previously built images, given the build environment variables.
    "},{"location":"using-lagoon-advanced/base-images/#example-workflow-for-building-a-new-release-of-a-base-image","title":"Example workflow for building a new release of a base image","text":"

There are several steps to the build process. Most of these are shared among the various base images. These mostly correspond to the Makefile targets described above.

    1. Docker Login - The Docker username, password, and URL for Harbor are passed to the Docker client.
    2. Docker Build - The make images_build step is run now, which will:
      1. Ensure that all environment variables are prepared for the build.
      2. Run a docker-compose build. This will produce several new Docker images from the current Git branch.
    3. Images Test - This will run the make images_test target, which will differ depending on the images being tested. In most cases this is a very straightforward test to ensure that the images can be started and interacted with in some way (installing Drupal, listing files, etc.)
    4. Docker Push - This step runs the logic (contained in the make target images_publish) that will tag the images resulting from the Docker Build in Step 2 and push them to Harbor. This is described in more detail elsewhere in this guide.
    5. Docker Clean Images - Runs the make target images_remove, which simply deletes the newly built images from the Docker host now that they are in Harbor.
    "},{"location":"using-lagoon-advanced/base-images/#releasing-a-new-version-of-a-base-image","title":"Releasing a new version of a base image","text":"

    There are many reasons to release a new version of a base image. On Drupal or Laravel, Node.js, etc. images, it may be in order to upgrade or install a module/package for features or security. It may be about the underlying software that comes bundled in the container, such as updating the version of PHP or Node.js. It may be about updating the actual underlying images on which the base images are built.

The images that your project's base images are built on are the managed images maintained by the Lagoon team. We release updates to these underlying images on a monthly (or more frequent) basis. When these are updated, you need to build new versions of your own base images in order to incorporate the changes and upgrades bundled in the upstream images.

In this section we will demonstrate the process of updating and tagging a new release of the Drupal 8 base image. We will add a new module (ClamAV) to the base. We're demonstrating on Drupal because it has the most complex setup of the base images. The steps that are common to every base image are noted below.

    "},{"location":"using-lagoon-advanced/base-images/#step-1-pull-down-the-base-image-locally","title":"Step 1 - Pull down the base image locally","text":"

This is just pulling down the Git repository locally - in this case, the Drupal 8 base image. In this example, we're using Bitbucket, so we will run:

    Clone Git repo.
    git clone ssh://git@bitbucket.biscrum.com:7999/webpro/drupal8_base_image.git\n

    "},{"location":"using-lagoon-advanced/base-images/#step-2-make-the-changes-to-the-repository","title":"Step 2 - Make the changes to the repository","text":"

    Info

    What is demonstrated here is specific to the Drupal 8 base image. However, any changes (adding files, changing base Docker images, etc.) will be done in this step for all of the base images.

    In our example, we are adding the ClamAV module to the Drupal 8 base image. This involves a few steps. The first is requiring the package so that it gets added to our composer.json file. This is done by running a composer require.

    Here we run:

    Install package with Composer require.
    composer require drupal/clamav\n

    When the Composer require process completes, the package should then appear in the composer.json file.

Here we open the composer.json file, take a look at the list of required packages, and confirm that the ClamAV package is now listed.

    "},{"location":"using-lagoon-advanced/base-images/#step-22-ensure-that-the-required-drupal-module-is-enabled-in-template-based-derived-images","title":"Step 2.2 - Ensure that the required Drupal module is enabled in template-based derived images","text":"

For any modules now added to the base image, we need to ensure that they're enabled on the template-based derived images. This is done by adding the module to the Lagoon Bundle module located at ./web/modules/lagoon/lagoon_bundle. Specifically, it requires you to add it as a dependency to the dependencies section of the lagoon_bundle.info.yml file. The Lagoon Bundle module is a utility module that exists only to help enforce dependencies across derived images.

    Here we open web/modules/contrib/lagoon/lagoon_bundle/lagoon_bundle.info.yml and add clamav:clamav as a dependency:
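
As a sketch, the edited file might look like the following (only the dependencies entry comes from this example; the other keys are illustrative of a typical Drupal 8 module .info.yml):

lagoon_bundle.info.yml
name: Lagoon Bundle\ntype: module\ndescription: 'Utility module enforcing dependencies across derived images.'\ncore: 8.x\ndependencies:\n  - clamav:clamav\n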

    Adding a dependency to this will ensure that whenever the Lagoon Bundle module is enabled on the derived image, its dependencies (in this case, the just-added ClamAV module) will also be enabled. This is enforced by a post-rollout script which enables lagoon_bundle on the derived images when they are rolled out.

    "},{"location":"using-lagoon-advanced/base-images/#step-23-test","title":"Step 2.3 - Test","text":"

This will depend on what you're testing. In the case of adding the ClamAV module, we want to ensure that in the base image, the module is downloaded, and that the Lagoon Bundle module enables ClamAV when it is enabled.

    Here we check that the module is downloaded to /app/web/modules/contrib:
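
For example, from a shell in the container (the exact command is up to you; grep here is just a convenient filter):

Check that the module was downloaded.
ls /app/web/modules/contrib | grep clamav\n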

    And then we check that when we enable the lagoon_bundle module, it enables clamav by running:

    Enable module with Drush.
    drush pm-enable lagoon_bundle -y\n

    Warning

You'll see that there is a JWT error in the container output above. You can safely ignore this in the demonstration - but, for background, you will see this error when there is no Lagoon environment for the site you're working on.

    With our testing done, we can now tag and build the images.

    "},{"location":"using-lagoon-advanced/base-images/#step-3-tagging-images","title":"Step 3 - Tagging images","text":"

    Images are versioned based on their Git tags - these should follow standard semantic versioning (semver) practices. All tags should have the structure vX.Y.Z where X, Y, and Z are integers (to be precise the X.Y.Z are themselves the semantic version - the vX.Y.Z is a tag). This is an assumption that is used to determine the image tags, so it must be adhered to.

    In this example we will be tagging a new version of the Drupal 8 base image indicating that we have added ClamAV.

    "},{"location":"using-lagoon-advanced/base-images/#here-we-demonstrate-how-to-tag-an-image","title":"Here we demonstrate how to tag an image","text":"

    We check that we have committed (but not pushed) our changes, just as you would do for any regular commit and push, using git log.

1. Commit your changes if you haven't yet.
    2. We then check to see what tag we are on using git tag.
3. Then, tag them using git tag -a v0.0.9 -m \"Adds ClamAV to base.\"
  1. git tag -a, --annotate: make an unsigned, annotated tag object.
    4. Next, we push our tags with git push --tags.
5. And finally, push all of our changes with git push - the full sequence is collected below.
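
Collected as a single sequence, the commands from the steps above are:

Tag and push.
git tag\ngit tag -a v0.0.9 -m \"Adds ClamAV to base.\"\ngit push --tags\ngit push\n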

    Danger

    The tags must be pushed explicitly in their own step!

    "},{"location":"using-lagoon-advanced/base-images/#how-git-tags-map-to-image-tags","title":"How Git tags map to image tags","text":"

    Danger

    Depending on the build workflow, you will almost certainly push the changes via the develop branch before merging it into the main branch.

An important point to remember here is that the Jenkins base image build process will tag images based on the most recent commit's tag.

    Images are tagged using the following rules, and images will be built for each of these that apply:

    1. When the main branch is built, it is tagged as latest.
    2. When the develop branch is built, it is tagged as development.
3. If the commit being built is tagged then that branch will be built with that commit's tag.
      1. This is how we release a new version as we demonstrated above. It can also be used to make ad hoc builds with fairly arbitrary tags - be reasonable with the tag names, it has only been tested with semver tags.
    "},{"location":"using-lagoon-advanced/base-images/#step-4-building-the-new-base-images","title":"Step 4 - Building the new base images","text":"

    Info

    Generally you will have a trigger strategy set up here for automatic builds, but as that will differ based on your needs and setup, this explains how to build manually.

    1. Visit your Lagoon Jenkins instance.
    2. Select the project you are working on (in this case, AIOBI Drupal 8 Base).
    3. Click the branch you would like to build.
4. Click \"Build Now.\"

    This will kick off the build process which, if successful, will push up the new images to Harbor.

    If the build is not successful, you can click into the build itself and read the logs to understand where it failed.

As shown in the screenshot below from Harbor, the image we've just built in Jenkins has been uploaded and tagged in Harbor, where it will now be scanned for any vulnerabilities. Since it was tagged as v0.0.9, an image with that tag is present, and because we built the main branch, the \"latest\" image has also been built. At this stage, the v0.0.9 and \"latest\" images are identical.

    "},{"location":"using-lagoon-advanced/base-images/#acknowledgement","title":"Acknowledgement","text":"

The base image structure draws heavily on (and, in fact, is a fork of) Denpal. It is based on the original Drupal Composer Template, but includes everything necessary to run on Lagoon (either the local development environment or on hosted Lagoon).

    "},{"location":"using-lagoon-advanced/blackfire/","title":"Blackfire","text":""},{"location":"using-lagoon-advanced/blackfire/#blackfire-variables","title":"Blackfire variables","text":"

    The Lagoon Base Images have support for Blackfire included in the PHP Images (see the PHP images).

    In order to use Blackfire in Lagoon, these three environment variables need to be defined:

• BLACKFIRE_ENABLED - Default: (not set). Used to enable the Blackfire extension by setting the variable to TRUE or true.
• BLACKFIRE_SERVER_ID - Default: (not set). Set to the Blackfire Server ID provided by Blackfire.io. Needs BLACKFIRE_ENABLED set to true.
• BLACKFIRE_SERVER_TOKEN - Default: (not set). Set to the Blackfire Server Token provided by Blackfire.io. Needs BLACKFIRE_ENABLED set to true.
"},{"location":"using-lagoon-advanced/blackfire/#local-usage-of-blackfire","title":"Local Usage of Blackfire","text":"

    For local usage of Blackfire with Lagoon Images, set the above environment variables for the PHP container. Here is an example for a Drupal application:

    docker-compose.yml
services:\n  [[snip]]\n  php:\n    [[snip]]\n    environment:\n      << : *default-environment # loads the defined environment variables from the top\n      BLACKFIRE_ENABLED: TRUE\n      BLACKFIRE_SERVER_ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\n      BLACKFIRE_SERVER_TOKEN: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\n

    After restarting the containers, you should be able to profile via the Blackfire Browser Plugin or the Blackfire CLI.

    "},{"location":"using-lagoon-advanced/blackfire/#remote-usage-of-blackfire","title":"Remote Usage of Blackfire","text":"

In order to use Blackfire in deployed Lagoon environments, the same environment variables need to be set, this time via one of the ways of adding environment variables to Lagoon. Important: Environment variables set in the docker-compose.yml for local development are not used by Lagoon in remote environments!
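
For example, using the addEnvVariable mutation described in the Environment Variables documentation (the environment ID here is illustrative; repeat for BLACKFIRE_SERVER_ID and BLACKFIRE_SERVER_TOKEN):

Set a Blackfire variable via the API
mutation addBlackfireEnabled {\n  addEnvVariable(\n    input:{\n      type:ENVIRONMENT,\n      typeId:546,\n      scope:RUNTIME,\n      name:\"BLACKFIRE_ENABLED\",\n      value:\"true\"\n    }\n  ) {\n    id\n  }\n}\n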

    "},{"location":"using-lagoon-advanced/blackfire/#debugging","title":"Debugging","text":"

    The Blackfire Agent running in the PHP containers outputs logs as normal container logs, which can be seen via docker-compose logs or via the Lagoon Logging Infrastructure for remote environments.

By default, the logs are set to level 3 (info). Via the environment variable BLACKFIRE_LOG_LEVEL, the level can be increased to 4 (debug) to generate more debugging output.

    "},{"location":"using-lagoon-advanced/custom-tasks/","title":"Custom Tasks","text":"

    Lagoon allows for the definition of custom tasks at environment, project, and group levels. This is presently accomplished through the GraphQL API and exposed in the UI.

    "},{"location":"using-lagoon-advanced/custom-tasks/#defining-a-custom-task","title":"Defining a custom task","text":"

    When defining a task you need to determine a number of things.

    "},{"location":"using-lagoon-advanced/custom-tasks/#which-task-do-you-want-to-run","title":"Which task do you want to run?","text":"

    In most cases, the custom task you will be running will be something that will be run in a shell on one of the containers in your application.

    For instance, in a Node.js application, you may be interested in running a yarn audit in your node container. The command, in this case, would simply be yarn audit.

    "},{"location":"using-lagoon-advanced/custom-tasks/#where-will-this-task-be-run","title":"Where will this task be run?","text":"

We have to define where this task will be run -- this means two things: first, which project or environment we'll be running the task in, and second, which service.

    Let's say that we'd like for our yarn audit task to be available to run in any environment in a specific project (let's say the project's ID is 42 for this example). We will therefore specify the project's ID when we create our task definition, as we will describe below.

The second question regards which service we want to target with our task. When you set up your project, you specify several services in your docker-compose.yml. We use this service name to determine where the command is actually executed.

    "},{"location":"using-lagoon-advanced/custom-tasks/#who-can-run-this-task","title":"Who can run this task?","text":"

There are three levels of permissions to the task system, corresponding to the project roles Guest, Developer, and Maintainer -- from most restrictive to least restrictive, with each role being able to invoke the tasks defined for the lower roles (Developers can see Guest tasks, Maintainers can see all tasks).

    "},{"location":"using-lagoon-advanced/custom-tasks/#defining-a-task","title":"Defining a task","text":"

Tasks are defined by calling the addAdvancedTaskDefinition mutation. Importantly, this only defines the task; it does not invoke it. It simply makes it available to be run in an environment.

Schematically, the call looks like this:

    Define a new task
mutation addAdvancedTask {\n    addAdvancedTaskDefinition(input:{\n    name: string,\n    confirmationText: string,\n    type: [COMMAND|IMAGE],\n    [project|environment]: int,\n    description: string,\n    service: string,\n    command: string,\n    advancedTaskDefinitionArguments: [\n      {\n        name: \"ENVIRONMENT_VARIABLE_NAME\",\n        displayName: \"Friendly Name For Variable\",\n        type: [STRING | ENVIRONMENT_SOURCE_NAME | ENVIRONMENT_SOURCE_NAME_EXCLUDE_SELF]\n      }\n    ]\n  }) {\n    ... on AdvancedTaskDefinitionImage {\n      id\n      name\n      description\n      service\n      image\n      confirmationText\n      advancedTaskDefinitionArguments {\n        type\n        range\n        name\n        displayName\n      }\n      ...\n    }\n    ... on AdvancedTaskDefinitionCommand {\n      id\n      name\n      description\n      service\n      command\n      advancedTaskDefinitionArguments {\n        type\n        range\n        name\n        displayName\n      }\n      ...\n    }\n  }\n}\n

    Fields name and description are straightforward. They're simply the name and description of the task - these are used primarily in the UI.

    The type field needs some explanation - for now, only platform admins are able to define IMAGE type commands - these allow for the running of specifically created task images as tasks, rather than targeting existing services. Most tasks, though, will be COMMAND types.

    The [project|environment] set of fields will attach the task to either the project or environment (depending on the key you use), with the value being the id. In the case we're considering for our yarn audit we will specify we're targeting a project with an ID of 42.

    We put the service we'd like to target with our task in the service field, and command is the actual command that we'd like to run.

    "},{"location":"using-lagoon-advanced/custom-tasks/#arguments-passed-to-tasks","title":"Arguments passed to tasks","text":"

In order to give more flexibility to the users invoking the tasks via the Lagoon UI, we support defining task arguments. These arguments are displayed as text boxes or drop-downs and are required for the task to be invoked.

    Here is an example of how we might set up two arguments.

    Define task arguments
    advancedTaskDefinitionArguments: [\n      {\n        name: \"ENV_VAR_NAME_SOURCE\",\n        displayName: \"Environment source\",\n        type: ENVIRONMENT_SOURCE_NAME\n\n      },\n      {\n        name: \"ENV_VAR_NAME_STRING\",\n        displayName: \"Echo value\",\n        type: STRING\n        }\n    ]\n  })\n

This fragment shows both types of arguments the system currently supports. The first, ENV_VAR_NAME_SOURCE, is an example of type ENVIRONMENT_SOURCE_NAME, which will present the user of the UI with a drop-down of the different environments inside of a project. If we don't want to allow the task to be run on the invoking environment (say, if we want to import a database from another environment), we can restrict the environment list by using ENVIRONMENT_SOURCE_NAME_EXCLUDE_SELF. The second, ENV_VAR_NAME_STRING, is of type STRING and will present the user with a text box to fill in.

    The values that the user selects will be available as environment variables in the COMMAND type tasks when the task is run.
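
As a sketch, the shell command of a COMMAND task defined with the arguments above could consume the values like this (the command itself is illustrative):

Consume task arguments
# The values selected in the UI arrive as environment variables:\necho \"Source environment: ${ENV_VAR_NAME_SOURCE}\"\necho \"User-supplied value: ${ENV_VAR_NAME_STRING}\"\n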

    "},{"location":"using-lagoon-advanced/custom-tasks/#confirmation","title":"Confirmation","text":"

    When the confirmationText field has text, it will be displayed with a confirmation modal in the UI before the user is able to run the task.

    "},{"location":"using-lagoon-advanced/custom-tasks/#invoking-the-task","title":"Invoking the task","text":"

    With the task now defined, the task should now show up in the tasks dropdown in the Lagoon UI.

We are also able to invoke it via the GraphQL API by using the invokeTask mutation.

    Invoke task
    mutation invokeTask {\n  invokeRegisteredTask(advancedTaskDefinition: int, environment: int) {\n    status\n  }\n}\n

    Note that invokeTask will always invoke a task on a specific environment.

    "},{"location":"using-lagoon-advanced/custom-tasks/#example","title":"Example","text":"

Let's now set up our yarn audit example.

    Define task mutation
    mutation runYarnAudit {\n addAdvancedTaskDefinition(input:{\n    name:\"Run yarn audit\",\n    project: 42,\n    type:COMMAND,\n    permission:DEVELOPER,\n    description: \"Runs a 'yarn audit'\",\n    service:\"node\",\n    command: \"yarn audit\"})\n    {\n        id\n    }\n}\n

This, then, will define our task for our project (42). When we run this, we will get the ID of the task definition back (for argument's sake, let's say it's 9).

    This task will now be available to run from the UI for anyone with the DEVELOPER or MAINTAINER role.
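
Using the invokeRegisteredTask mutation shown earlier, running our new task definition (ID 9) against an environment looks like this (the environment ID 3 is illustrative):

Invoke the yarn audit task
mutation invokeYarnAudit {\n  invokeRegisteredTask(advancedTaskDefinition: 9, environment: 3) {\n    status\n  }\n}\n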

    "},{"location":"using-lagoon-advanced/deploytarget-configs/","title":"DeployTarget Configurations","text":"

    Danger

    This is an alpha feature in Lagoon. The way DeployTarget Configurations work could change in future releases. If you decide to use this feature, you do so at your own risk.

    DeployTarget configurations are a way to define how a project can deploy to multiple clusters. This feature is useful when you have two clusters, one which could be dedicated for running production workloads, and another that is used for running development workloads.

The configuration for these is not limited to just a production/development split though, so projects could conceivably target more than one specific cluster.

The basic idea of a DeployTarget configuration is that it is a way to easily define how a project can deploy across multiple clusters. It uses the existing methods of checking whether an environment is valid (e.g. branches or pull requests).

    "},{"location":"using-lagoon-advanced/deploytarget-configs/#important-information","title":"Important Information","text":"

Before going into how to configure a project to leverage DeployTarget configurations, there are some things you need to know.

1. Environments now have two new fields available to them to identify which DeployTarget (Kubernetes or OpenShift) they have been created on.

      1. kubernetesNamespacePattern
      2. kubernetes
    2. Once an environment has been deployed to a specific DeployTarget, it will always deploy to this target, even if the DeployTarget configuration, or project configuration is modified.

      1. This offers some safety to existing environments by preventing changes to DeployTarget configurations from creating new environments on different clusters.
      2. This is a new feature that is part of Lagoon, not specifically for DeployTarget configurations.
3. By default, if no DeployTarget configurations are associated with a project, that project will continue to use the existing methods to determine which environments to deploy. The following fields are used for this.

      1. branches
      2. pullrequests
      3. kubernetesNamespacePattern
      4. kubernetes
    4. As soon as any DeployTarget configurations are added to a project, then all future deployments for this project will use these configurations. What is defined in the project is ignored, and overwritten to inform users that DeployTarget configurations are in use.

    5. DeployTarget configurations are weighted, which means that a DeployTarget configuration with a larger weight is prioritized over one with lower weight.

      1. The order in which they are returned by the query is the order they are used to determine where an environment should be deployed.

    6. Active/Standby environments can only be deployed to the same cluster, so your DeployTarget configuration must be able to deploy both those environments to the same target.

    7. Projects that leverage the promote feature of Lagoon must be aware that DeployTarget configurations are ignored for the destination environment.

      1. The destination environment will always be deployed to the same target that the source environment is on, your DeployTarget configuration MUST be configured correctly for this source environment.
      2. For safety, it is best to define both the source and destination environment in the same DeployTarget configuration branch regex.
    "},{"location":"using-lagoon-advanced/deploytarget-configs/#configuration","title":"Configuration","text":"

    To configure a project to use DeployTarget configurations, the first step is to add a configuration to a project.

The following GraphQL mutation can be used; this particular example will add a DeployTarget configuration to the project with the project ID 1. It will allow only the branches that match the name main to be deployed, and pullrequests is set to false. This means no other branches will be able to deploy to this particular target, and no pull requests will be deployed to this particular target. The deployTarget is ID 1, which could be a Kubernetes cluster in a specific region, or one designated for a specific type of workload (production or development).

    Configure DeployTarget
    mutation addDeployTargetConfig{\n  addDeployTargetConfig(input:{\n    project: 1\n    branches: \"main\"\n    pullrequests: \"false\"\n    deployTarget: 1\n    weight: 1\n  }){\n    id\n    weight\n    branches\n    pullrequests\n    deployTargetProjectPattern\n    deployTarget{\n        name\n        id\n    }\n    project{\n        name\n    }\n  }\n}\n

    Info

deployTarget is an alias for the Kubernetes or OpenShift ID in the Lagoon API.

    It is also possible to configure multiple DeployTarget configurations.

The following GraphQL mutation can be used; this particular example will add a second DeployTarget configuration to the same project as above.

    It will allow only the branches that regex match with ^feature/|^(dev|test|develop)$ to be deployed, and pullrequests is set to true so all pull requests will reach this target.

    The targeted cluster in this example is ID 2, which is a completely different Kubernetes cluster to what was defined above for the main branch.

    Configure DeployTarget
    mutation addDeployTargetConfig{\n  addDeployTargetConfig(input:{\n    project: 1\n    branches: \"^feature/|^(dev|test|develop)$\"\n    pullrequests: \"true\"\n    deployTarget: 2\n    weight: 1\n  }){\n    id\n    weight\n    branches\n    pullrequests\n    deployTargetProjectPattern\n    deployTarget{\n        name\n        id\n    }\n    project{\n        name\n    }\n  }\n}\n

Once these have been added to a project, you can return all the DeployTarget configurations for a project using the following query:

    Get DeployTargets
    query deployTargetConfigsByProjectId{\n    deployTargetConfigsByProjectId(project:1){\n        id\n        weight\n        branches\n        pullrequests\n        deployTargetProjectPattern\n        deployTarget{\n            name\n            id\n        }\n        project{\n            name\n        }\n    }\n}\n# result:\n{\n    \"data\": {\n        \"deployTargetConfigsByProjectId\": [\n        {\n            \"id\": 1,\n            \"weight\": 1,\n            \"branches\": \"main\",\n            \"pullrequests\": \"false\",\n            \"deployTargetProjectPattern\": null,\n            \"deployTarget\": {\n                \"name\": \"production-cluster\",\n                \"id\": 1\n            },\n            \"project\": {\n                \"name\": \"my-project\"\n            }\n        },\n        {\n            \"id\": 2,\n            \"weight\": 1,\n            \"branches\": \"^feature/|^(dev|test|develop)$\",\n            \"pullrequests\": \"true\",\n            \"deployTargetProjectPattern\": null,\n            \"deployTarget\": {\n                \"name\": \"development-cluster\",\n                \"id\": 2\n            },\n            \"project\": {\n                \"name\": \"my-project\"\n            }\n        }\n        ]\n    }\n}\n
    "},{"location":"using-lagoon-advanced/environment-idling/","title":"Environment Idling (optional)","text":""},{"location":"using-lagoon-advanced/environment-idling/#what-is-the-environment-idler","title":"What is the Environment Idler?","text":"

Lagoon can utilize the Aergia controller (installed in the lagoon-remote) to automatically idle environments if they have been unused for a defined period of time. This is done in order to reduce the load on the Kubernetes clusters and improve the overall performance of production environments and of development environments that are actually in use.

    "},{"location":"using-lagoon-advanced/environment-idling/#how-does-an-environment-get-idled","title":"How does an environment get idled?","text":"

    The environment idler has many different configuration capabilities. Here are the defaults of a standard Lagoon installation (these could be quite different in your Lagoon, check with your Lagoon administrator!)

    • Idling is tried every 4 hours.
    • Production environments are never idled.
    • CLI pods are idled if they don't include a cron job and if there is no remote shell connection active.
    • All other services and pods are idled if there was no traffic on the environment in the last 4 hours.
    • If there is an active build happening, there will be no idling.
    "},{"location":"using-lagoon-advanced/environment-idling/#how-does-an-environment-get-un-idled","title":"How does an environment get un-idled?","text":"

    Aergia will automatically un-idle an environment as soon as it is visited, therefore just visiting any URL of the environment will start the environment. Likewise, initiating an SSH session to the environment will also restart the services.

The un-idling will take a couple of seconds, as the Kubernetes cluster needs to start all containers again. During this time, a waiting screen is shown to the visitor, telling them that their environment is currently being started.

    "},{"location":"using-lagoon-advanced/environment-idling/#can-i-disable-prevent-the-idler-from-idling-my-environment","title":"Can I disable / prevent the Idler from idling my environment?","text":"

Yes, there is an autoIdle field on the project (impacts all environments) and on each environment (if you need to target just one environment), which controls whether idling is allowed to take place. A value of 1 indicates the project/environment is eligible for idling. If the project is set to 0, its environments will never be idled, even if an environment is set to 1. The default is always 1 (idling is enabled).

    Talk to your Lagoon administrator if you are unsure how to set these project/environment fields.
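
As a sketch, disabling idling for a whole project could look like the following, assuming your Lagoon API exposes autoIdle in the updateProject patch input (the project ID is illustrative):

Disable auto-idling on a project
mutation updateProject {\n  updateProject(input:{\n    id:1234\n    patch:{\n      autoIdle:0\n    }\n  }){\n    id\n    autoIdle\n  }\n}\n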

    "},{"location":"using-lagoon-advanced/environment-types/","title":"Environment Types","text":"

    Lagoon currently differentiates between two different environment types: production and development.

    When setting up your project via the Lagoon GraphQL API, you can define a productionEnvironment. On every deployment Lagoon executes, it checks if the current environment name matches what is defined in productionEnvironment. If it does, Lagoon will mark this environment as the production environment. This happens in two locations:

    1. Within the GraphQL API itself.
    2. As an environment variable named LAGOON_ENVIRONMENT_TYPE in every container.

    But that's it. Lagoon itself handles development and production environments in exactly the same way (in the end we want as few differences of the environments as possible - that's the beauty of Lagoon).

    There are a couple of things that will use this information:

    • By default, development environments are idled after 4 hours with no hits (don't worry, they wake up automatically). It is also possible for your Lagoon administrator to disable auto-idling on a per-environment basis, just ask!
• Our default Drupal settings.php files load additional settings files development.settings.php and production.settings.php, so you can define settings and configurations differently per environment type.
    • If you try to delete an environment that is defined as the production environment (either via webhooks or REST), Lagoon will politely refuse to delete the production environment, as it tries to prevent you from making a mistake. In order to delete a production environment, you can either change the productionEnvironment in the API or use the secret forceDeleteProductionEnvironment: true POST payload for the REST API.
    • The Lagoon administrator might use the production environment information for some additional things. For example, at amazee.io we're calculating only the hits of the production environments to calculate the price of the hosting.
    "},{"location":"using-lagoon-advanced/environment-variables/","title":"Environment Variables","text":"

    It is common to store API tokens or credentials for applications in environment variables.

    Following best practices, those credentials are different per environment. We allow each environment to use a separate set of environment variables defined in environment variables or environment files.

As there can be environment variables defined in either the Dockerfile or during runtime (via API environment variables), we have a hierarchy of environment variables: variables defined at lower numbers take precedence.

    1. Environment variables (defined via Lagoon API) - environment specific.
    2. Environment variables (defined via Lagoon API) - project-wide.
    3. Environment variables defined in Dockerfile (ENV command).
4. Environment variables defined in .lagoon.env.$LAGOON_GIT_BRANCH or .lagoon.env.$LAGOON_GIT_SAFE_BRANCH (if the file exists, where $LAGOON_GIT_BRANCH and $LAGOON_GIT_SAFE_BRANCH are the name and safe name of the branch this Docker image has been built for); use this for overwriting variables for only specific branches.
    5. Environment variables defined in .lagoon.env (if it exists), use this for overwriting variables for all branches.
    6. Environment variables defined in .env.
    7. Environment variables defined in .env.defaults.

.lagoon.env.$LAGOON_GIT_BRANCH, .lagoon.env.$LAGOON_GIT_SAFE_BRANCH, .env, and .env.defaults are all sourced by the individual containers themselves as part of running their entrypoint scripts. They are not read by Lagoon, but by the containers' ENTRYPOINT scripts, which look for them in the container's working directory. If environment variables don't appear as expected, check if your container has a WORKDIR setting that points somewhere else.

    "},{"location":"using-lagoon-advanced/environment-variables/#environment-variables-lagoon-api","title":"Environment Variables (Lagoon API)","text":"

    We suggest using the Lagoon API environment variable system for variables that you don't want to keep in your Git repository (like secrets or API keys), as they could be compromised by somebody having them on their local development environment or on the internet, etc.

The Lagoon API allows you to define project-wide or environment-specific variables. Additionally, they can be scoped to build-time only or runtime only. They are all created via the Lagoon GraphQL API. Read more on how to use the GraphQL API in our GraphQL API documentation.

    "},{"location":"using-lagoon-advanced/environment-variables/#runtime-environment-variables-lagoon-api","title":"Runtime Environment Variables (Lagoon API)","text":"

    Runtime environment variables are automatically made available in all containers, but they are only added or updated after an environment has been re-deployed.

This defines a project-wide runtime variable (available in all environments) for the project with ID 463:

    Add runtime variable
    mutation addRuntimeEnv {\n  addEnvVariable(\n    input:{\n      type:PROJECT,\n      typeId:463,\n      scope:RUNTIME,\n      name:\"MYVARIABLENAME\",\n      value:\"MyVariableValue\"\n    }\n  ) {\n    id\n  }\n}\n

This defines an environment-specific runtime variable (available only in that specific environment) for environment ID 546:

    Define environment ID
    mutation addRuntimeEnv {\n  addEnvVariable(\n    input:{\n      type:ENVIRONMENT,\n      typeId:546,\n      scope:RUNTIME,\n      name:\"MYVARIABLENAME\",\n      value:\"MyVariableValue\"\n    }\n  ) {\n    id\n  }\n}\n
    "},{"location":"using-lagoon-advanced/environment-variables/#build-time-environment-variables-lagoon-api","title":"Build-time Environment Variables (Lagoon API)","text":"

    Build-time environment variables are only available during a build and need to be consumed in Dockerfiles via:

    Using build-time environment variables

    ARG MYVARIABLENAME\n
Typically the ARG will go after the FROM. Read the Docker documentation about ARG and FROM.
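
A minimal sketch of a Dockerfile consuming a build-time variable (the base image and the RUN usage are illustrative):

Dockerfile
FROM uselagoon/php-8.0-cli\n# Declare the build-time variable after FROM so later build steps can use it.\nARG MYVARIABLENAME\nRUN echo \"Build-time value: ${MYVARIABLENAME}\"\n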

    This defines a project-wide build-time variable (available in all environments) for the project with ID 463:

    Define a project-wide build-time variable
    mutation addBuildtimeEnv {\n  addEnvVariable(\n    input:{\n      type:PROJECT,\n      typeId:463,\n      scope:BUILD,\n      name:\"MYVARIABLENAME\",\n      value:\"MyVariableValue\"}\n  ) {\n    id\n  }\n}\n

This defines an environment-specific build-time variable (available only in that specific environment) for environment ID 546:

    Define environment ID
    mutation addBuildtimeEnv {\n  addEnvVariable(input:{type:ENVIRONMENT, typeId:546, scope:BUILD, name:\"MYVARIABLENAME\", value:\"MyVariableValue\"}) {\n    id\n  }\n}\n

Container registry environment variables are only available during a build and are used when attempting to log in to a private registry. They are used to store the password for the user defined in Specials » container-registries. They can be applied at the project or environment level.

    This defines a project-wide container registry variable (available in all environments) for the project with ID 463:

    Define project-wide container registry variable
mutation addContainerRegistryEnv {\n  addEnvVariable(\n    input:{\n      type:PROJECT,\n      typeId:463,\n      scope:CONTAINER_REGISTRY,\n      name:\"MY_OWN_REGISTRY_PASSWORD\",\n      value:\"MySecretPassword\"}\n  ) {\n    id\n  }\n}\n

This defines an environment-specific container registry variable (available only in that specific environment) for environment ID 546:

    Define environment ID
    mutation addContainerRegistryEnv {\n  addEnvVariable(\n    input:{\n      type:ENVIRONMENT,\n      typeId:546,\n      scope:CONTAINER_REGISTRY,\n      name:\"MY_OWN_REGISTRY_PASSWORD\",\n      value:\"MySecretPassword\"}\n  ) {\n    id\n  }\n}\n
    "},{"location":"using-lagoon-advanced/environment-variables/#environment-files-existing-directly-in-the-git-repo","title":"Environment Files (existing directly in the Git Repo)","text":"

    If you have environment variables that can safely be saved within a Git repository, we suggest adding them directly into the Git repository in an environment file. These variables will also be available within local development environments and are therefore more portable.

The syntax in the environment files is as follows:

    myenvironment.env
    MYVARIABLENAME=\"MyVariableValue\"\nMVARIABLENUMBER=4242\nDB_USER=$DB_USERNAME # Redefine DB_USER with the value of DB_USERNAME e.g. if your application expects another variable name for the Lagoon-provided variables.\n
    "},{"location":"using-lagoon-advanced/environment-variables/#lagoonenvbranchname","title":".lagoon.env.$BRANCHNAME","text":"

If you want to define environment variables differently per environment, you can create a .lagoon.env.$BRANCHNAME file, e.g. .lagoon.env.main for the main branch. This helps you keep environment variables apart between environments.
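
For example, a .lagoon.env.main file that overrides a variable only for the main branch:

.lagoon.env.main
MYVARIABLENAME=\"ValueOnlyForMainBranch\"\n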

    "},{"location":"using-lagoon-advanced/environment-variables/#env-and-envdefaults","title":".env and .env.defaults","text":"

    .env and .env.defaults will act as the default values for environment variables if none other is defined. For example, as default environment variables for pull request environments (see Workflows).

    "},{"location":"using-lagoon-advanced/environment-variables/#special-environment-variables","title":"Special Environment Variables","text":""},{"location":"using-lagoon-advanced/environment-variables/#php_error_reporting","title":"PHP_ERROR_REPORTING","text":"

    This variable, if set, will define the logging level you would like PHP to use. If not supplied, it will be set dynamically based on whether this is a production or development environment.

    On production environments, this value defaults to E_ALL & ~E_DEPRECATED & ~E_STRICT & ~E_NOTICE.

    On development environments, this value defaults to E_ALL & ~E_DEPRECATED & ~E_STRICT.
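
To override the defaults, PHP_ERROR_REPORTING can be set as a runtime variable via the API, following the addEnvVariable pattern shown above (the environment ID and value are illustrative):

Override PHP_ERROR_REPORTING
mutation addPhpErrorReporting {\n  addEnvVariable(\n    input:{\n      type:ENVIRONMENT,\n      typeId:546,\n      scope:RUNTIME,\n      name:\"PHP_ERROR_REPORTING\",\n      value:\"E_ALL & ~E_DEPRECATED\"\n    }\n  ) {\n    id\n  }\n}\n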

    "},{"location":"using-lagoon-advanced/environment-variables/#custom-backup-settings","title":"Custom Backup Settings","text":"

Lagoon supports custom backup locations and credentials for any project when all four of the following variables are set as BUILD type variables. The environment variables need to be set at the project level (not per environment), and a Lagoon deployment is required after setting them (for every environment).

Please note that any use of these variables means that all environment and database backups created and managed by Lagoon will be stored using these credentials, meaning that any interruption to these credentials' access may lead to failed or inaccessible backups.

• LAGOON_BAAS_CUSTOM_BACKUP_ENDPOINT: Specify the S3 compatible endpoint where any Lagoon initiated backups should be stored. An example for S3 Sydney would be: https://s3.ap-southeast-2.amazonaws.com.
• LAGOON_BAAS_CUSTOM_BACKUP_BUCKET: Specify the bucket name where any Lagoon initiated backups should be stored. An example custom setting would be: example-restore-bucket.
• LAGOON_BAAS_CUSTOM_BACKUP_ACCESS_KEY: Specify the access key Lagoon should use to access the custom backup bucket. An example custom setting would be: AKIAIOSFODNN7EXAMPLE.
• LAGOON_BAAS_CUSTOM_BACKUP_SECRET_KEY: Specify the secret key Lagoon should use to access the custom backup bucket. An example custom setting would be: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY.

No public access is needed on the S3 bucket; it can be made entirely private.

    Lagoon will automatically prune the files in these S3 buckets, so no object retention policy is needed at the bucket level.
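
As a sketch, such a variable could be added via the GraphQL API using the addEnvVariable mutation shown earlier in this document (the project ID 463, the endpoint value, and $TOKEN are example values; repeat the call for the other three variables):

Add a custom backup variable via the API (sketch)
curl -s -X POST https://api.lagoon.amazeeio.cloud/graphql \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"query": "mutation { addEnvVariable(input: {type: PROJECT, typeId: 463, scope: BUILD, name: \"LAGOON_BAAS_CUSTOM_BACKUP_ENDPOINT\", value: \"https://s3.ap-southeast-2.amazonaws.com\"}) { id } }"}'

Remember that a Lagoon deployment is required afterwards for the change to take effect.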

    "},{"location":"using-lagoon-advanced/environment-variables/#custom-restore-location","title":"Custom Restore Location","text":"

Lagoon supports custom restore locations and credentials for any project when all four of the following variables are set as BUILD type environment variables. The environment variables need to be set at the project level (not per environment), and a Lagoon deployment is required after setting them (for every environment).

    Please note that any use of these variables means that all environment and database snapshots restored by Lagoon will be stored using these credentials. This means that any interruption of these credentials' access may lead to failed or inaccessible restored files.

• LAGOON_BAAS_CUSTOM_RESTORE_ENDPOINT: Specify the S3 compatible endpoint where any Lagoon initiated restores should be stored. An example for S3 Sydney would be: https://s3.ap-southeast-2.amazonaws.com.
• LAGOON_BAAS_CUSTOM_RESTORE_BUCKET: Specify the bucket name where any Lagoon initiated restores should be stored. An example custom setting would be: example-restore-bucket.
• LAGOON_BAAS_CUSTOM_RESTORE_ACCESS_KEY: Specify the access key Lagoon should use to access the custom restore bucket. An example custom setting would be: AKIAIOSFODNN7EXAMPLE.
• LAGOON_BAAS_CUSTOM_RESTORE_SECRET_KEY: Specify the secret key Lagoon should use to access the custom restore bucket. An example custom setting would be: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY.

    The S3 bucket must have public access enabled, as Lagoon will create presigned URLs for the objects inside the bucket as needed.

    An example AWS IAM policy that you can create to allow access to just the S3 bucket example-restore-bucket is:

    aws_iam_restore_policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::example-restore-bucket"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:GetBucketLocation",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::example-restore-bucket/*"
      ]
    }
  ]
}

    For increased security and reduced storage costs you can opt into removing restored backups after a set lifetime (e.g. 7 days). Lagoon caters for this scenario gracefully and will re-create any restored snapshots as needed.
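
One way to implement such a lifetime is an S3 lifecycle rule on the restore bucket. A sketch using the AWS CLI (the bucket name and the 7-day expiry are example values):

Expire restored files after 7 days (sketch)
aws s3api put-bucket-lifecycle-configuration \
  --bucket example-restore-bucket \
  --lifecycle-configuration '{
    "Rules": [
      {
        "ID": "expire-restores",
        "Status": "Enabled",
        "Filter": { "Prefix": "" },
        "Expiration": { "Days": 7 }
      }
    ]
  }'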

    "},{"location":"using-lagoon-advanced/graphql/","title":"GraphQL","text":""},{"location":"using-lagoon-advanced/graphql/#connect-to-graphql-api","title":"Connect to GraphQL API","text":"

    API interactions in Lagoon are done via GraphQL. In order to authenticate to the API, you need a JWT (JSON Web Token), which will authenticate you against the API via your SSH public key.

    To generate this token, use the remote shell via the token command:

    Get token
    ssh -p [PORT] -t lagoon@[HOST] token\n

    Example for amazee.io:

    Get amazee.io token
    ssh -p 32222 -t lagoon@ssh.lagoon.amazeeio.cloud token\n

    This will return a long string, which is the JWT token.

    We also need the URL of the API endpoint. Ask your Lagoon administrator for this.

    On amazee.io this is https://api.lagoon.amazeeio.cloud/graphql.

Now we need a GraphQL client! Technically this is just HTTP, but we suggest the GraphiQL App. It has a nice UI that allows you to write GraphQL requests with autocomplete. Download, install and start it.

Enter the API endpoint URL. Then click on "Edit HTTP Headers" and add a new Header:

• "Header name": Authorization
• "Header value": Bearer [jwt token] (make sure the JWT token has no spaces; it won't work otherwise)

    Close the HTTP Header overlay (press ESC) and now you are ready to make your first GraphQL Request!

    Enter this on the left window:

    Get all projects
query whatIsThere {
  allProjects {
    id
    gitUrl
    name
    branches
    pullrequests
    productionEnvironment
    environments {
      name
      environmentType
    }
  }
}

And press the ▶️ button (or press CTRL+ENTER).

    If all went well, you should see your first GraphQL response.
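
The same request can also be sent from the command line. A minimal sketch with curl, assuming the amazee.io endpoints above (the token command's output may include trailing whitespace, stripped here):

Query the API with curl (sketch)
TOKEN=$(ssh -p 32222 -t lagoon@ssh.lagoon.amazeeio.cloud token | tr -d '[:space:]')
curl -s -X POST https://api.lagoon.amazeeio.cloud/graphql \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"query": "query { allProjects { id name } }"}'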

    "},{"location":"using-lagoon-advanced/graphql/#mutations","title":"Mutations","text":"

The Lagoon GraphQL API can not only display and create objects, it can also update existing objects, following GraphQL best practices.

    Mutation queries in GraphQL modify the data in the data store, and return a value. They can be used to insert, update, and delete data. Mutations are defined as a part of the schema.

    Update the branches to deploy within a project:

    Update deploy branches
mutation editProjectBranches {
  updateProject(input:{id:109, patch:{branches:"^(prod|stage|dev|update)$"}}) {
    id
  }
}

    Update the production environment within a project:

    Warning

    This requires a redeploy in order for all changes to be reflected in the containers.

    Update production environment
mutation editProjectProductionEnvironment {
  updateProject(input:{id:109, patch:{productionEnvironment:"prod"}}) {
    id
  }
}

    You can also combine multiple changes into a single query:

    Multiple changes
mutation editProjectProductionEnvironmentAndBranches {
  updateProject(input:{id:109, patch:{productionEnvironment:"prod", branches:"^(prod|stage|dev|update)$"}}) {
    id
  }
}
    "},{"location":"using-lagoon-advanced/nodejs/","title":"Node.js Graceful Shutdown","text":"

    Node.js has integrated web server capabilities. Plus, with Express, these can be extended even more.

    Unfortunately, Node.js does not handle shutting itself down very nicely out of the box. This causes many issues with containerized systems. The biggest issue is that when a Node.js container is told to shut down, it will immediately kill all active connections, and does not allow them to stop gracefully.

    This part explains how you can teach Node.js to behave like a real web server: finishing active requests and then gracefully shutting down.

    As an example we use a no-frills Node.js server with Express:

    app.js
const express = require('express');
const app = express();

// Adds a 5 second delay for all requests.
app.use((req, res, next) => setTimeout(next, 5000));

app.get('/', function (req, res) {
  res.send("Hello World");
})

const server = app.listen(3000, function () {
  console.log('Example app listening on port 3000!');
})

This will just show "Hello World" when the web server is visited at localhost:3000. Note the 5 second delay in the response in order to simulate a request that takes some computing time.

    "},{"location":"using-lagoon-advanced/nodejs/#part-a-allow-requests-to-be-finished","title":"Part A: Allow requests to be finished","text":"

    If we run the above example and stop the Node.js process while the request is handled (within the 5 seconds), we will see that the Node.js server immediately kills the connection, and our browser will show an error.

    To explain to our Node.js server that it should wait for all the requests to be finished before actually stopping itself, we add the following code:

    Graceful Shutdown
const startGracefulShutdown = () => {
  console.log('Starting shutdown of express...');
  server.close(function () {
    console.log('Express shut down.');
  });
}

process.on('SIGTERM', startGracefulShutdown);
process.on('SIGINT', startGracefulShutdown);

    This basically calls server.close(), which will instruct the Node.js HTTP server to:

    1. Not accept any more requests.
    2. Finish all running requests.

    It will do this on SIGINT (when you press CTRL + C) or on SIGTERM (the standard signal for a process to terminate).

    With this small addition, our Node.js will wait until all requests are finished, and then stop itself.

If we were not running Node.js in a containerized environment, we would probably want to include some additional code that actually kills the Node.js server after a couple of seconds, as it is technically possible that some requests take very long or never stop. Because it is running in a containerized system, if the container is not stopped, Docker and Kubernetes will send a SIGKILL after a couple of seconds (usually 30) which cannot be handled by the process itself, so this is not a concern for us.
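
For completeness, such a failsafe could look like the sketch below (the 10 second timeout is an arbitrary example; as noted above, this is normally unnecessary in a containerized environment):

Graceful shutdown with failsafe timeout (sketch)
const startGracefulShutdown = () => {
  console.log('Starting shutdown of express...');
  server.close(function () {
    console.log('Express shut down.');
    process.exit(0);
  });
  // Failsafe: force an exit if requests are still open after 10 seconds.
  // unref() ensures this timer alone does not keep the process alive.
  setTimeout(() => process.exit(1), 10000).unref();
}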

    "},{"location":"using-lagoon-advanced/nodejs/#part-b-yarn-and-npm-children-spawning-issues","title":"Part B: Yarn and NPM children spawning issues","text":"

    If we only implemented Part A, we would have a good experience. In the real world, many Node.js systems are built with Yarn or NPM, which provide not only package management systems to Node.js, but also script management.

    With these script functionalities, we simplify the start of our application. We can see many package.json files that look like:

    package.json
{
  "name": "node",
  "version": "1.0.0",
  "main": "index.js",
  "license": "MIT",
  "dependencies": {
    "express": "^4.15.3"
  },
  "scripts": {
    "start": "node index.js"
  }
}

    and with the defined scripts section we can run our application just with:

    Start application
    yarn start\n

    or

    Start application
    npm start\n

    This is nice and makes the life of developers easier. So we also end up using the same within Dockerfiles:

    .dockerfile
    CMD [\"yarn\", \"start\"]\n

    Unfortunately there is a big problem with this:

If yarn or npm receive a SIGINT or SIGTERM signal, they correctly forward the signal to the spawned child process (in this case node index.js). However, they do not wait for the child process to stop. Instead, yarn/npm immediately stop themselves. This signals to Docker/Kubernetes that the container is finished, and Docker/Kubernetes will kill all child processes immediately. There are open issues for Yarn and NPM, but unfortunately they are not solved yet.

    The solution for the problem is to not use Yarn or NPM to start your application and instead use node directly:

    .dockerfile
    CMD [\"node\", \"index.js\"]\n

    This allows Node.js to properly terminate and Docker/Kubernetes will wait for Node.js to be finished.

    "},{"location":"using-lagoon-advanced/private-repositories/","title":"Private Repositories","text":"
    1. Give the deploy key access to the Git repositories in your GitHub/GitLab/BitBucket.
    2. Add ARG LAGOON_SSH_PRIVATE_KEY to your dockerfile (before the step of the build process that needs the SSH key).
    3. Add RUN /lagoon/entrypoints/05-ssh-key.sh to your dockerfile (before the step of the build process that needs the SSH key).
Set up your private repository
    RUN /lagoon/entrypoints/05-ssh-key.sh && composer install && rm /home/.ssh/key\n
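
Putting steps 2 and 3 together, the relevant part of a Dockerfile could look like this (the composer install step is only an example of a build step that needs the key):

.dockerfile
ARG LAGOON_SSH_PRIVATE_KEY
RUN /lagoon/entrypoints/05-ssh-key.sh && composer install && rm /home/.ssh/key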
    "},{"location":"using-lagoon-advanced/project-default-users-keys/","title":"Project Default Users and SSH Keys","text":"

    When a Lagoon project is created, by default an associated SSH \"project key\" is generated and the private key made available inside the CLI pods of the project. A service account default-user@project is also created and given MAINTAINER access to the project. The SSH \"project key\" is attached to that default-user@project.

    The result of this is that from inside the CLI pod of any environment it is possible to SSH to any other environment within the same project. This access is used for running tasks from the command line such as synchronizing databases between environments (e.g. drush sql-sync).

    There is more information on the MAINTAINER role available in the RBAC documentation.

    "},{"location":"using-lagoon-advanced/project-default-users-keys/#specifying-the-project-key","title":"Specifying the project key","text":"

    It is possible to specify an SSH private key when creating a project, but this is not recommended as it has security implications.

    "},{"location":"using-lagoon-advanced/service-types/","title":"Service Types","text":"

Below is a list of all service types that can be defined via lagoon.type within a docker-compose.yml file.

    Warning

    Once a lagoon.type is defined and the environment is deployed, changing it to a different type is not supported and could result in a broken environment.

    "},{"location":"using-lagoon-advanced/service-types/#basic","title":"basic","text":"

Basic container, good to use for most applications that don't have an existing template. No persistent storage. The port can be changed using a label. If an autogenerated route is not required (e.g. for an internal-facing service), set lagoon.autogeneratedroute: false in the docker-compose.yml.

Healthcheck: TCP connection on 3000 | Exposed Ports: 3000 | Auto Generated Routes: Yes | Storage: No | Additional customization parameters: lagoon.service.port, lagoon.autogeneratedroute"},{"location":"using-lagoon-advanced/service-types/#basic-persistent","title":"basic-persistent","text":"

    Like basic. Will also generate persistent storage, defines mount location via lagoon.persistent.

Healthcheck: TCP connection on 3000 | Exposed Ports: 3000 | Auto Generated Routes: Yes | Storage: Yes | Additional customization parameters: lagoon.service.port, lagoon.autogeneratedroute, lagoon.persistent, lagoon.persistent.name, lagoon.persistent.size, lagoon.persistent.class"},{"location":"using-lagoon-advanced/service-types/#cli","title":"cli","text":"

    Use for any kind of CLI container (like PHP, Node.js, etc). Automatically gets the customer SSH private key that is mounted in /var/run/secrets/lagoon/sshkey/ssh-privatekey.

Healthcheck: - | Exposed Ports: No | Auto Generated Routes: No | Storage: No | Additional customization parameters: -"},{"location":"using-lagoon-advanced/service-types/#cli-persistent","title":"cli-persistent","text":"

Like cli, expects lagoon.persistent.name to be given the name of a service that has persistent storage, which will be mounted at the location defined by the lagoon.persistent label. Does NOT generate its own persistent storage; it is only used to mount another service's persistent storage.

Healthcheck: - | Exposed Ports: No | Auto Generated Routes: No | Storage: Yes | Additional customization parameters: lagoon.persistent.name, lagoon.persistent"},{"location":"using-lagoon-advanced/service-types/#elasticsearch","title":"elasticsearch","text":"

    Elasticsearch container, will auto-generate persistent storage under /usr/share/elasticsearch/data.

Healthcheck: HTTP on localhost:9200/_cluster/health?local=true | Exposed Ports: 9200 | Auto Generated Routes: No | Storage: Yes | Additional customization parameters: lagoon.persistent.size"},{"location":"using-lagoon-advanced/service-types/#kibana","title":"kibana","text":"

    Kibana container.

Healthcheck: TCP connection on 5601 | Exposed Ports: 5601 | Auto Generated Routes: Yes | Storage: No | Additional customization parameters: -"},{"location":"using-lagoon-advanced/service-types/#logstash","title":"logstash","text":"

    Logstash container.

Healthcheck: TCP connection on 9600 | Exposed Ports: 9600 | Auto Generated Routes: No | Storage: No | Additional customization parameters: -"},{"location":"using-lagoon-advanced/service-types/#mariadb","title":"mariadb","text":"

    A meta-service which will tell Lagoon to automatically decide between mariadb-single and mariadb-dbaas.

Healthcheck: - | Exposed Ports: - | Auto Generated Routes: - | Storage: - | Additional customization parameters: -"},{"location":"using-lagoon-advanced/service-types/#mariadb-single","title":"mariadb-single","text":"

    MariaDB container. Creates cron job for backups running every 24h executing /lagoon/mysql-backup.sh 127.0.0.1.

Healthcheck: TCP connection on 3306 | Exposed Ports: 3306 | Auto Generated Routes: No | Storage: Yes | Additional customization parameters: lagoon.persistent.size"},{"location":"using-lagoon-advanced/service-types/#mariadb-dbaas","title":"mariadb-dbaas","text":"

    Uses a shared MariaDB server via the DBaaS Operator.

Healthcheck: Not Needed | Exposed Ports: 3306 | Auto Generated Routes: No | Storage: - | Additional customization parameters: -"},{"location":"using-lagoon-advanced/service-types/#mongo","title":"mongo","text":"

    A meta-service which will tell Lagoon to automatically decide between mongo-single and mongo-dbaas.

Healthcheck: - | Exposed Ports: - | Auto Generated Routes: - | Storage: - | Additional customization parameters: -"},{"location":"using-lagoon-advanced/service-types/#mongo-single","title":"mongo-single","text":"

    MongoDB container, will generate persistent storage of min 1GB mounted at /data/db.

Healthcheck: TCP connection on 27017 | Exposed Ports: 27017 | Auto Generated Routes: No | Storage: Yes | Additional customization parameters: lagoon.persistent.size"},{"location":"using-lagoon-advanced/service-types/#mongo-dbaas","title":"mongo-dbaas","text":"

    Uses a shared MongoDB server via the DBaaS Operator.

Healthcheck: Not Needed | Exposed Ports: 27017 | Auto Generated Routes: No | Storage: - | Additional customization parameters: -"},{"location":"using-lagoon-advanced/service-types/#nginx","title":"nginx","text":"

    NGINX container. No persistent storage.

Healthcheck: localhost:50000/nginx_status | Exposed Ports: 8080 | Auto Generated Routes: Yes | Storage: No | Additional customization parameters: lagoon.autogeneratedroute"},{"location":"using-lagoon-advanced/service-types/#nginx-php","title":"nginx-php","text":"

    Like nginx, but additionally a php container.

Healthcheck: NGINX: localhost:50000/nginx_status, PHP: /usr/sbin/check_fcgi | Exposed Ports: 8080 | Auto Generated Routes: Yes | Storage: No | Additional customization parameters: lagoon.autogeneratedroute"},{"location":"using-lagoon-advanced/service-types/#nginx-php-persistent","title":"nginx-php-persistent","text":"

    Like nginx-php. Will generate persistent storage, defines mount location via lagoon.persistent.

Healthcheck: NGINX: localhost:50000/nginx_status, PHP: /usr/sbin/check_fcgi | Exposed Ports: http on 8080 | Auto Generated Routes: Yes | Storage: Yes | Additional customization parameters: lagoon.autogeneratedroute, lagoon.persistent, lagoon.persistent.name, lagoon.persistent.size, lagoon.persistent.class"},{"location":"using-lagoon-advanced/service-types/#node","title":"node","text":"

    Node.js container. No persistent storage.

Healthcheck: TCP connection on 3000 | Exposed Ports: 3000 | Auto Generated Routes: Yes | Storage: No | Additional customization parameters: lagoon.autogeneratedroute"},{"location":"using-lagoon-advanced/service-types/#node-persistent","title":"node-persistent","text":"

    Like node. Will generate persistent storage, defines mount location via lagoon.persistent.

Healthcheck: TCP connection on 3000 | Exposed Ports: 3000 | Auto Generated Routes: Yes | Storage: Yes | Additional customization parameters: lagoon.autogeneratedroute, lagoon.persistent, lagoon.persistent.name, lagoon.persistent.size, lagoon.persistent.class"},{"location":"using-lagoon-advanced/service-types/#none","title":"none","text":"

    Instructs Lagoon to completely ignore this service.

Healthcheck: - | Exposed Ports: - | Auto Generated Routes: - | Storage: - | Additional customization parameters: -"},{"location":"using-lagoon-advanced/service-types/#opensearch","title":"opensearch","text":"

    OpenSearch container, will auto-generate persistent storage under /usr/share/opensearch/data.

Healthcheck: HTTP on localhost:9200/_cluster/health?local=true | Exposed Ports: 9200 | Auto Generated Routes: No | Storage: Yes | Additional customization parameters: lagoon.persistent.size"},{"location":"using-lagoon-advanced/service-types/#postgres","title":"postgres","text":"

    A meta-service which will tell Lagoon to automatically decide between postgres-single and postgres-dbaas.

Healthcheck: - | Exposed Ports: - | Auto Generated Routes: - | Storage: - | Additional customization parameters: -"},{"location":"using-lagoon-advanced/service-types/#postgres-single","title":"postgres-single","text":"

    Postgres container. Creates cron job for backups running every 24h executing /lagoon/postgres-backup.sh localhost.

Healthcheck: TCP connection on 5432 | Exposed Ports: 5432 | Auto Generated Routes: No | Storage: Yes | Additional customization parameters: lagoon.persistent.size"},{"location":"using-lagoon-advanced/service-types/#postgres-dbaas","title":"postgres-dbaas","text":"

    Uses a shared PostgreSQL server via the DBaaS Operator.

Healthcheck: Not Needed | Exposed Ports: 5432 | Auto Generated Routes: No | Storage: - | Additional customization parameters: -"},{"location":"using-lagoon-advanced/service-types/#python","title":"python","text":"

    Python container. No persistent storage.

Healthcheck: HTTP connection on 8800 | Exposed Ports: 8800 | Auto Generated Routes: Yes | Storage: No | Additional customization parameters: lagoon.autogeneratedroute"},{"location":"using-lagoon-advanced/service-types/#python-persistent","title":"python-persistent","text":"

Python container with persistent storage.

Healthcheck: HTTP connection on 8800 | Exposed Ports: 8800 | Auto Generated Routes: Yes | Storage: Yes | Additional customization parameters: lagoon.autogeneratedroute"},{"location":"using-lagoon-advanced/service-types/#redis","title":"redis","text":"

    Redis container.

Healthcheck: TCP connection on 6379 | Exposed Ports: 6379 | Auto Generated Routes: No | Storage: No | Additional customization parameters: -"},{"location":"using-lagoon-advanced/service-types/#redis-persistent","title":"redis-persistent","text":"

    Redis container with auto-generated persistent storage mounted under /data.

Healthcheck: TCP connection on 6379 | Exposed Ports: 6379 | Auto Generated Routes: No | Storage: Yes | Additional customization parameters: lagoon.persistent.size"},{"location":"using-lagoon-advanced/service-types/#solr","title":"solr","text":"

    Solr container with auto-generated persistent storage mounted under /var/solr.

Healthcheck: TCP connection on 8983 | Exposed Ports: 8983 | Auto Generated Routes: No | Storage: Yes | Additional customization parameters: lagoon.persistent.size"},{"location":"using-lagoon-advanced/service-types/#varnish","title":"varnish","text":"

    Varnish container.

Healthcheck: HTTP request localhost:8080/varnish_status | Exposed Ports: 8080 | Auto Generated Routes: Yes | Storage: No | Additional customization parameters: lagoon.autogeneratedroute"},{"location":"using-lagoon-advanced/service-types/#varnish-persistent","title":"varnish-persistent","text":"

    Varnish container with auto-generated persistent storage mounted under /var/cache/varnish.

Healthcheck: HTTP request localhost:8080/varnish_status | Exposed Ports: 8080 | Auto Generated Routes: Yes | Storage: Yes | Additional customization parameters: lagoon.autogeneratedroute, lagoon.persistent.size"},{"location":"using-lagoon-advanced/service-types/#worker","title":"worker","text":"

    Use for any kind of worker container (like queue workers, etc.) where there is no exposed service port.

Healthcheck: - | Exposed Ports: No | Auto Generated Routes: No | Storage: No | Additional customization parameters: -"},{"location":"using-lagoon-advanced/service-types/#worker-persistent","title":"worker-persistent","text":"

Like worker, expects lagoon.persistent.name to be given the name of a service that has persistent storage, which will be mounted at the location defined by the lagoon.persistent label. Does NOT generate its own persistent storage; it is only used to mount another service's persistent storage.

Healthcheck: - | Exposed Ports: No | Auto Generated Routes: No | Storage: Yes | Additional customization parameters: lagoon.persistent.name, lagoon.persistent"},{"location":"using-lagoon-advanced/setting-up-xdebug-with-lagoon/","title":"Setting up Xdebug with Lagoon","text":""},{"location":"using-lagoon-advanced/setting-up-xdebug-with-lagoon/#enable-xdebug-extension-in-the-containers","title":"Enable Xdebug extension in the containers","text":"

    The Lagoon base images are pre-configured with Xdebug but, for performance reasons, the extension is not loaded by default. To enable the extension, the XDEBUG_ENABLE environment variable must be set to true:

    • Locally (Pygmy and Lando)
      1. If your project is based off the lagoon-examples docker-compose.yml file, the environment variable already exists. Uncomment these lines.
      2. Make sure to rebuild and restart the containers after changing any environment variables.
    • Remotely (dev/prod)
  1. You can use the Lagoon API to add the environment variable to a running environment (see the sketch below).
  2. Make sure to redeploy the environment after changing any environment variables.
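
As a sketch, adding the variable via the API could look like this (the environment ID 546 and $TOKEN are example values, and the RUNTIME scope is an assumption here):

Enable Xdebug remotely via the API (sketch)
curl -s -X POST https://api.lagoon.amazeeio.cloud/graphql \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"query": "mutation { addEnvVariable(input: {type: ENVIRONMENT, typeId: 546, scope: RUNTIME, name: \"XDEBUG_ENABLE\", value: \"true\"}) { id } }"}'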
    "},{"location":"using-lagoon-advanced/setting-up-xdebug-with-lagoon/#activate-xdebug-extension","title":"Activate Xdebug Extension","text":"

The default Xdebug configuration requires a "trigger" to activate the extension and start a session. You can view the complete documentation for activating the debugger, but the most straightforward instructions are below.

    "},{"location":"using-lagoon-advanced/setting-up-xdebug-with-lagoon/#cli","title":"CLI","text":"

The php-cli image is configured to always activate Xdebug when it's enabled, so there is nothing else that needs to be done. Running any PHP script will start a debugging session.

    "},{"location":"using-lagoon-advanced/setting-up-xdebug-with-lagoon/#web","title":"Web","text":"

    Install a browser extension to set/unset an activation cookie.

    Make sure the activation cookie is set for the website you want to start debugging.

    "},{"location":"using-lagoon-advanced/setting-up-xdebug-with-lagoon/#configure-phpstorm","title":"Configure PHPStorm","text":"
    1. PHPStorm is configured correctly by default.
2. Click the "Start Listening for PHP Debug Connections" icon in the toolbar.
    3. Load a webpage or run a Drush command.
    4. On first run, PHPStorm should pop up a window asking you to:
      1. Confirm path mappings.
      2. Select the correct file locally that was triggered on the server.
    "},{"location":"using-lagoon-advanced/setting-up-xdebug-with-lagoon/#configure-visual-studio-code","title":"Configure Visual Studio Code","text":"
    1. Install the PHP Debug extension by Felix Becker.
    2. Follow the instructions to create a basic launch.json for PHP.
    3. Add correct path mappings. For a typical Drupal site, an example would be:

  launch.json
  "pathMappings": {
    "/app": "${workspaceFolder}",
  },
4. In the Run tab of Visual Studio Code, click the green arrow next to "Listen for Xdebug".

    5. Load a webpage or run a Drush command.
    "},{"location":"using-lagoon-advanced/setting-up-xdebug-with-lagoon/#troubleshooting","title":"Troubleshooting","text":"
• Verify that the Xdebug extension is loaded. The best way to do this on a Drupal site is to check the PHP status page. You should find a section about Xdebug and all its settings.
    • Verify the following settings:
  • xdebug.mode: debug
  • xdebug.client_host: host.docker.internal or your IP address
  • xdebug.client_port: 9003
    • Enable Xdebug logging within the running containers. All you need is an environment variable named XDEBUG_LOG set to anything to enable logging. Logs will be saved to /tmp/xdebug.log. If you are using the lagoon-examples then you can uncomment some existing lines.
• Verify you have the activation cookie set. You can use the browser tools in Chrome or Firefox to check that an XDEBUG_SESSION cookie is set.
• Verify that Xdebug is activated and attempting to start a debug session with your computer. You can use the nc -l 9003 command line tool to open the Xdebug port. If everything is configured in PHP correctly, you should get an Xdebug init response when you load a webpage or run a Drush command.
    • Verify that the xdebug.client_host has been set correctly. For local debugging with Docker for Mac, this value should be host.docker.internal. For remote debugging this value should be your IP address. If this value was not correctly determined, you can override it by setting the DOCKERHOST environment variable.
• When using Lando locally, in order to debug scripts run from the CLI you must first SSH into the CLI container via lando ssh. You won't be able to debug things by running lando drush or lando php.
    "},{"location":"using-lagoon-advanced/setting-up-xdebug-with-lagoon/#mac-specific-troubleshooting","title":"Mac specific troubleshooting","text":"
    • Verify that Docker for Mac networking is not broken. On your host machine, run nc -l 9003, then in a new terminal window, run:

      Verify Docker for Mac networking
      docker-compose run cli nc -zv host.docker.internal 9003\n

      You should see a message like: host.docker.internal (192.168.65.2:9003) open.

    "},{"location":"using-lagoon-advanced/setting-up-xdebug-with-lagoon/#linux-specific-troubleshooting","title":"Linux specific troubleshooting","text":"
• Ensure the host host.docker.internal can be reached. If Docker has been installed manually (and not through Docker Desktop), this host will not resolve. You can force it to resolve with an additional snippet in your docker-compose.yml file (instructions taken from this blog post):

      docker-compose.yml alterations for Linux
  services:
    cli:
      extra_hosts:
        host.docker.internal: host-gateway
    php:
      extra_hosts:
        host.docker.internal: host-gateway
    "},{"location":"using-lagoon-advanced/setting-up-xdebug-with-lagoon/#xdebug-2","title":"Xdebug 2","text":"

If you're running older images you may still be using Xdebug version 2. All the information on this page still applies, but some of the configuration names and values have changed:

• xdebug.mode (v3) / xdebug.remote_enabled (v2): On
• xdebug.client_host (v3) / xdebug.remote_host (v2): host.docker.internal or your IP address
• xdebug.client_port (v3) / xdebug.remote_port (v2): 9000
"},{"location":"using-lagoon-advanced/simplesaml/","title":"SimpleSAML","text":""},{"location":"using-lagoon-advanced/simplesaml/#simplesamlphp","title":"SimpleSAMLphp","text":"

    This is an example of how to add SimpleSAMLphp to your project and then modify configuration to serve it via NGINX.

    "},{"location":"using-lagoon-advanced/simplesaml/#requirements","title":"Requirements","text":"

    Add SimpleSAMLphp to your project:

    Add SimpleSAMLphp to your project via Composer
    composer req simplesamlphp/simplesamlphp\n
    "},{"location":"using-lagoon-advanced/simplesaml/#modify-configuration-for-simplesamlphp","title":"Modify configuration for SimpleSAMLphp","text":"

Copy authsources.php and config.php from vendor/simplesamlphp/simplesamlphp/config-templates to somewhere outside the vendor directory, such as conf/simplesamlphp. You also need saml20-idp-remote.php from vendor/simplesamlphp/simplesamlphp/metadata-templates.

In config.php, set the following values for Lagoon:

    Base URL path where SimpleSAMLphp is accessed:

    config.php
      'baseurlpath' => 'https://YOUR_DOMAIN.TLD/simplesaml/',\n

    Store sessions to database:

    config.php
'store.type'                    => 'sql',
'store.sql.dsn'                 => vsprintf('mysql:host=%s;port=%s;dbname=%s', [
  getenv('MARIADB_HOST'),
  getenv('MARIADB_PORT'),
  getenv('MARIADB_DATABASE'),
]),

    Alter other settings to your liking:

    • Check the paths for logs and certs.
    • Secure SimpleSAMLphp dashboard.
    • Set up level of logging.
    • Set technicalcontact and timezone.

    Add authsources (IdPs) to authsources.php, see example:

    authsources.php
'default-sp' => [
    'saml:SP',
    // The entity ID of this SP.
    'entityID' => 'https://YOUR_DOMAIN.TLD',
    // The entity ID of the IdP this SP should contact.
    // Can be NULL/unset, in which case the user will be shown a list of available IdPs.
    'idp' => 'https://YOUR_IDP_DOMAIN.TLD',
    // The URL to the discovery service.
    // Can be NULL/unset, in which case a builtin discovery service will be used.
    'discoURL' => null,
    'NameIDFormat' => 'urn:oasis:names:tc:SAML:2.0:nameid-format:transient',
    'certificate' => '/app/conf/simplesamlphp/certs/saml.crt',
    'privatekey' => '/app/conf/simplesamlphp/certs/saml.pem',
    'redirect.sign' => TRUE,
    'redirect.validate' => TRUE,
    'authproc' => [
      50 => [
        'class' => 'core:AttributeCopy',
        'urn:oid:1.3.6.1.4.1.5923.1.1.1.6' => 'eduPersonPrincipalName',
      ],
      51 => [
        'class' => 'core:AttributeCopy',
        'urn:oid:2.5.4.42' => 'givenName',
      ],
      52 => [
        'class' => 'core:AttributeCopy',
        'urn:oid:2.5.4.4' => 'sn',
      ],
      53 => [
        'class' => 'core:AttributeCopy',
        'urn:oid:0.9.2342.19200300.100.1.3' => 'mail',
      ],
    ],
  ],

    Add IdP metadata to saml20-idp-remote.php, see example:

    saml20-idp-remote.php
<?php
/**
 * SAML 2.0 remote IdP metadata for SimpleSAMLphp.
 *
 * Remember to remove the IdPs you don't use from this file.
 *
 * See: https://simplesamlphp.org/docs/stable/simplesamlphp-reference-idp-remote
 */

/**
 * Some IdP.
 */
$metadata['https://YOUR_IDP_DOMAIN.TLD'] = [
  'entityid' => 'https://YOUR_IDP_DOMAIN.TLD',
  'name' => [
    'en' => 'Some IdP',
  ],
  'description' => 'Some IdP',
  ...
];

    In your build process, copy configuration files to SimpleSAMLphp:

    • vendor/simplesamlphp/simplesamlphp/config/authsources.php
    • vendor/simplesamlphp/simplesamlphp/config/config.php
    • vendor/simplesamlphp/simplesamlphp/metadata/saml20-idp-remote.php
    "},{"location":"using-lagoon-advanced/simplesaml/#create-nginx-conf-for-simplesamlphp","title":"Create NGINX conf for SimpleSAMLphp","text":"

    Create file lagoon/nginx/location_prepend_simplesamlphp.conf:

    location_prepend_simplesamlphp.conf
location ^~ /simplesaml {
  alias /app/vendor/simplesamlphp/simplesamlphp/www;
  location ~ ^(?<prefix>/simplesaml)(?<phpfile>.+?\.php)(?<pathinfo>/.*)?$ {
    include          fastcgi_params;
    fastcgi_pass     ${NGINX_FASTCGI_PASS:-php}:9000;
    fastcgi_param    SCRIPT_FILENAME $document_root$phpfile;
    # Must be prepended with the baseurlpath
    fastcgi_param    SCRIPT_NAME /simplesaml$phpfile;
    fastcgi_param    PATH_INFO $pathinfo if_not_empty;
  }
}

    This will route /simplesaml URLs to SimpleSAMLphp in vendor.

    "},{"location":"using-lagoon-advanced/simplesaml/#add-additional-nginx-conf-to-nginx-image","title":"Add additional NGINX conf to NGINX image","text":"

    Modify nginx.dockerfile and add location_prepend_simplesamlphp.conf to the image:

    nginx.dockerfile
ARG CLI_IMAGE
FROM ${CLI_IMAGE} as cli

FROM amazeeio/nginx-drupal

COPY --from=cli /app /app

COPY lagoon/nginx/location_prepend_simplesamlphp.conf /etc/nginx/conf.d/drupal/location_prepend_simplesamlphp.conf
RUN fix-permissions /etc/nginx/conf.d/drupal/location_prepend_simplesamlphp.conf

# Define where the Drupal Root is located
ENV WEBROOT=public
    "},{"location":"using-lagoon-advanced/ssh/","title":"SSH","text":"

    Lagoon allows you to connect to your running containers via SSH. The containers themselves don't actually have an SSH server installed, but instead you connect via SSH to Lagoon, which then itself creates a remote shell connection via the Kubernetes API for you.

    "},{"location":"using-lagoon-advanced/ssh/#ensure-you-are-set-up-for-ssh-access","title":"Ensure you are set up for SSH access","text":""},{"location":"using-lagoon-advanced/ssh/#generating-an-ssh-key","title":"Generating an SSH Key","text":"

    It is recommended to generate a separate SSH key for each device as opposed to sharing the same key between multiple computers. Instructions for generating an SSH key on various systems can be found below:

    "},{"location":"using-lagoon-advanced/ssh/#osx-mac","title":"OSX (Mac)","text":"

    Mac

    "},{"location":"using-lagoon-advanced/ssh/#linux-ubuntu","title":"Linux (Ubuntu)","text":"

    Linux

    "},{"location":"using-lagoon-advanced/ssh/#windows","title":"Windows","text":"

    Windows

    "},{"location":"using-lagoon-advanced/ssh/#ssh-agent","title":"SSH Agent","text":""},{"location":"using-lagoon-advanced/ssh/#osx-mac_1","title":"OSX (Mac)","text":"

OSX does not configure its SSH agent to load SSH keys at startup, which can cause some headaches. You can find a handy guide to configuring this capability here: https://www.backarapper.com/add-ssh-keys-to-ssh-agent-on-startup-in-macos/

    "},{"location":"using-lagoon-advanced/ssh/#linux","title":"Linux","text":"

Linux distributions vary in how they use the ssh-agent. You can find a general guide here: https://www.ssh.com/academy/ssh/agent

    "},{"location":"using-lagoon-advanced/ssh/#windows_1","title":"Windows","text":"

SSH key support in Windows has improved markedly in recent versions and is now supported natively. A handy guide to configuring the Windows 10 SSH agent can be found here: https://richardballard.co.uk/ssh-keys-on-windows-10/

    "},{"location":"using-lagoon-advanced/ssh/#uploading-ssh-keys","title":"Uploading SSH Keys","text":""},{"location":"using-lagoon-advanced/ssh/#via-the-ui","title":"Via the UI","text":"

    You can upload your SSH key(s) through the UI. Log in as you normally would.

    In the upper right hand corner, click on Settings:

You will then see a page where you can upload your SSH key(s), and it will show any uploaded keys. Paste your key into the text box, give it a name, and click "Add." That's it! Add additional keys as needed.

    "},{"location":"using-lagoon-advanced/ssh/#via-command-line","title":"Via Command Line","text":"

A general example of using the Lagoon API via GraphQL to add an SSH key to a user can be found here.

    "},{"location":"using-lagoon-advanced/ssh/#ssh-into-a-pod","title":"SSH into a pod","text":""},{"location":"using-lagoon-advanced/ssh/#connection","title":"Connection","text":"

Connecting is straightforward and follows this pattern:

    SSH
    ssh -p [PORT] -t [PROJECT-ENVIRONMENT-NAME]@[HOST]\n
    • PORT - The remote shell SSH endpoint port (for amazee.io: 32222).
• HOST - The remote shell SSH endpoint host (for amazee.io: ssh.lagoon.amazeeio.cloud).
    • PROJECT-ENVIRONMENT-NAME - The environment you want to connect to. This is most commonly in the pattern PROJECTNAME-ENVIRONMENT.

    As an example:

    SSH example
    ssh -p 32222 -t drupal-example-main@ssh.lagoon.amazeeio.cloud\n

    This will connect you to the project drupal-example on the environment main.

    "},{"location":"using-lagoon-advanced/ssh/#podservice-container-definition","title":"Pod/Service, Container Definition","text":"

    By default, the remote shell will try to connect you to the container defined with the type cli. If you would like to connect to another pod/service you can define it via:

    SSH to another service
    ssh -p [PORT] -t [PROJECT-ENVIRONMENT-NAME]@[HOST] service=[SERVICE-NAME]\n

    If your pod/service contains multiple containers, Lagoon will connect you to the first defined container. You can also define the specific container to connect to via:

    Define container
    ssh -p [PORT] -t [PROJECT-ENVIRONMENT-NAME]@[HOST] service=[SERVICE-NAME] container=[CONTAINER-NAME]\n

    For example, to connect to the php container within the nginx pod:

    SSH to php container
    ssh -p 32222 -t drupal-example-main@ssh.lagoon.amazeeio.cloud service=nginx container=php\n
    "},{"location":"using-lagoon-advanced/ssh/#copying-files","title":"Copying files","text":"

The common case of copying a file into your cli pod can be achieved with the usual SSH-compatible tools.

    "},{"location":"using-lagoon-advanced/ssh/#scp","title":"scp","text":"Copy file with scp
    scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -P 32222 [local_path] [project_name]-[environment_name]@ssh.lagoon.amazeeio.cloud:[remote_path]\n
    "},{"location":"using-lagoon-advanced/ssh/#rsync","title":"rsync","text":"Copy files with rsync
    rsync --rsh='ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -p 32222' [local_path] [project_name]-[environment_name]@ssh.lagoon.amazeeio.cloud:[remote_path]\n
    "},{"location":"using-lagoon-advanced/ssh/#tar","title":"tar","text":"Bash
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -p 32222 [project_name]-[environment_name]@ssh.lagoon.amazeeio.cloud tar -zcf - [remote_path] | tar -zxf - -C /tmp/
    "},{"location":"using-lagoon-advanced/ssh/#specifying-non-cli-podservice","title":"Specifying non-CLI pod/service","text":"

    In the rare case that you need to specify a non-CLI service you can specify the service=... and/or container=... arguments in the copy command.

    Piping tar through the ssh connection is the simplest method, and can be used to copy a file or directory using the usual tar flags:

    Bash
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -p 32222 [project_name]-[environment_name]@ssh.lagoon.amazeeio.cloud service=solr tar -zcf - [remote_path] | tar -zxf - -C /tmp/

    You can also use rsync with a wrapper script to reorder the arguments to ssh in the manner required by Lagoon's SSH service:

    Bash
#!/usr/bin/env sh
svc=$1 user=$3 host=$4
shift 4
exec ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -p 32222 -l "$user" "$host" "$svc" "$@"

    Put that in an executable shell script rsh.sh and specify the service=... in the rsync command:

    rsync to non-CLI pod
    rsync --rsh=\"/path/to/rsh.sh service=cli\" /tmp/foo [project_name]-[environment_name]@ssh.lagoon.amazeeio.cloud:/tmp/foo\n

The script could also be adjusted to handle a container=... argument; see the sketch below.
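
A variant that forwards both arguments could look like this (a sketch; the argument positions follow how rsync invokes the remote shell command):

rsh.sh with service and container (sketch)
#!/usr/bin/env sh
# Expects both service=... and container=... before the rsync-supplied arguments.
svc=$1 ctr=$2 user=$4 host=$5
shift 5
exec ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -p 32222 -l "$user" "$host" "$svc" "$ctr" "$@"

It would then be invoked with --rsh="/path/to/rsh.sh service=nginx container=php" in the rsync command.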

    "},{"location":"using-lagoon-advanced/triggering-deployments/","title":"Triggering Deployments","text":""},{"location":"using-lagoon-advanced/triggering-deployments/#trigger-a-new-deployment-using-azure-pipelines","title":"Trigger a new deployment using Azure Pipelines","text":"

    In order to automatically trigger new deployments using Azure Pipelines follow these instructions:

    1. Add your deployment SSH private key to Azure as a secure file as id_rsa_lagoon. For more information about secure files have a look at the Azure Documentation Site.
    2. Add the following configuration to your azure-pipelines.yml:
    azure-pipelines.yml
pool:
  vmImage: 'ubuntu-latest'

stages:
  # .. other stages
  - stage: Deploy
    condition: and(succeeded(), in(variables['Build.SourceBranch'], 'refs/heads/staging', 'refs/heads/develop'))
    jobs:
      - job: DeployLagoon
        steps:
          - task: DownloadSecureFile@1
            name: lagoonSshKey
            displayName: 'Download Lagoon SSH key'
            inputs:
              secureFile: id_rsa_lagoon
          - script: |
              curl -L "https://github.com/amazeeio/lagoon-cli/releases/download/0.9.2/lagoon-cli-0.9.2-linux-amd64" -o ./lagoon
              chmod +x ./lagoon
            displayName: 'Download lagoon-cli'
          - script: ./lagoon login -i $(lagoonSshKey.secureFilePath)
            displayName: 'Log into Lagoon'
          - script: ./lagoon deploy branch -e $(Build.SourceBranchName) -p my-awesome-project -b $(Build.SourceBranchName) --force
            displayName: 'Trigger deployment using lagoon-cli'

    This will trigger a new deployment whenever changes are made on the develop or staging branch. Adjust these values accordingly so they fit your deployment strategy and configuration.

    "},{"location":"using-lagoon-advanced/triggering-deployments/#push-without-deploying","title":"Push without deploying","text":"

There may be a case where you want to push without a deployment. Make sure your commit message contains "[skip deploy]" or "[deploy skip]" and Lagoon will not trigger a deployment from that commit.
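
For example (the branch name is illustrative):

Push without deploying
git commit -m "Fix typo in README [skip deploy]"
git push origin main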

    "},{"location":"using-lagoon-advanced/workflows/","title":"Workflows","text":"

    Lagoon tries to support any development workflow possible. It specifically does not enforce any workflows onto teams, so that each development team can define how they would like to develop and deploy their code.

    "},{"location":"using-lagoon-advanced/workflows/#fixed-branches","title":"Fixed Branches","text":"

The most straightforward workflows are based on deploying some fixed branches:

You define which branches (like develop, staging and main, which would be ^(develop|staging|main)$ as a regular expression) Lagoon should deploy, and it will do so. Done!

If you would like to test a new feature, merge it into one of the branches you have configured for deployment, push, and Lagoon will deploy the feature so you can test it. When all is good, merge the branch into your production branch and push.

    "},{"location":"using-lagoon-advanced/workflows/#feature-branches","title":"Feature Branches","text":"

A bit more advanced are feature branches. Since Lagoon supports defining the branches you would like to deploy via regular expressions, you can extend the above regular expression to ^feature\/|^(staging|main)$. This will instruct Lagoon to deploy all branches that start with feature/, plus the branches called staging and main. Our development workflow could be as follows:

    • Create a new branch from main called feature/myfeature and push feature/myfeature.
    • Lagoon will deploy the branch feature/myfeature as a new environment, where you can test your feature independently of any other features.
    • Merge feature/myfeature into the main branch and it will deploy to your production environment.

    If you like, you can also merge feature/myfeature and any other feature branches into staging first, in order to test the functionality of multiple features together. After you have tested the features together on staging, you can merge the features into main.

This workflow requires a high level of branch pruning and cleanliness in your Git repository. Since each feature branch will create its own Lagoon environment, you can very quickly generate a LOT of environments, all of which use resources. Be sure to merge or delete unused branches.
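
Deleting a merged remote feature branch is a single command; Lagoon will then typically remove the corresponding environment once your Git hosting sends the branch deletion webhook:

Delete a merged feature branch
git push origin --delete feature/myfeature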

    Because of this, it could make sense to think about a pull request based workflow.

    "},{"location":"using-lagoon-advanced/workflows/#pull-requests","title":"Pull requests","text":"

Even more advanced are workflows via pull requests. Such workflows need the support of a Git hosting service which supports pull requests (also called merge requests). The idea behind pull request-based workflows is that you can test a feature together with a target branch, without actually needing to merge yet, as Lagoon will do the merging for you during the build.

    In our example we would configure Lagoon to deploy the branches ^(staging|main)$ and the pull requests to .* (to deploy all pull requests). Now our workflow would be:

1. Create a new branch from main called feature/myfeature and push feature/myfeature (no deployment will happen now because we have only specified staging and main as our branches to be deployed).
    2. Create a pull request in your Git hosting from feature/myfeature into main.
    3. Lagoon will now merge the feature/myfeature branch on top of the main branch and deploy that resulting code for you.
    4. Now you can test the functionality of the feature/myfeature branch just as if it had been merged into main, so all changes that have happened in main since you created the feature/myfeature branch from it will be there, and you don't need to worry that you might have an older version of the main branch.
      1. If there is a merge conflict, the build will fail, Lagoon will stop and notify you.
    5. After you have tested your pull request branch, you can go back to your Git hosting and actually merge the code into main. This will now trigger a deployment of main.
6. When the pull request is merged, it is automatically closed, and Lagoon will remove the environment for that pull request automatically.

    Some teams might opt to create the pull request against a shared staging branch and then merge the staging branch into the main branch via another pull request. This depends on the kind of Git workflow you're using.

Additionally, in Lagoon you can define that only pull requests with a specific text in the title are deployed. [BUILD] defined as a regular expression will only deploy pull requests that have a title like [BUILD] My Pull Request, while a pull request with the title My other Pull Request is not automatically deployed. This helps to keep the number of environments small and allows for pull requests that don't need an environment yet.

    "},{"location":"using-lagoon-advanced/workflows/#automatic-database-sync-for-pull-requests","title":"Automatic Database Sync for Pull requests","text":"

    Automatic pull request environments are a fantastic thing. But it would also be handy to have the database synced from another environment when those environments are created. Lagoon can handle that!

    The following example will sync the staging database on the first rollout of the pull request environment:

    .lagoon.yml
tasks:
  post-rollout:
    - run:
        name: IF no Drupal installed & Pullrequest = Sync database from staging
        command: |
          if [[ -n ${LAGOON_PR_BASE_BRANCH} ]] && tables=$(drush sqlq 'show tables;') && [ -z "$tables" ]; then
            drush -y sql-sync @staging default
          fi
        service: cli
        shell: bash
    "},{"location":"using-lagoon-advanced/workflows/#promotion","title":"Promotion","text":"

    Another way of deploying your code into an environment is the promotion workflow.

    The idea behind the promotion workflow comes from this (as an example):

If you merge the branch staging into the main branch, and there are no other changes to main, then main and staging have exactly the same code in Git. Even so, it could still technically be possible that the resulting Docker images are slightly different: between the last staging deployment and the current main deployment, some upstream Docker images may have changed, or dependencies loaded from the various package managers may have changed. This is a very small chance, but it's there.

    For this situation, Lagoon understands the concept of promoting Lagoon images from one environment to another. This basically means that it will take the already built and deployed Docker images from one environment, and will use those exact same Docker images for another environment.

    In our example, we want to promote the Docker images from the main environment to the production environment:

    • First, we need a regular deployed environment with the name main. Make sure that the environment has deployed successfully.
• Also, make sure that you don't have a branch called production in your Git repository. This could lead to weird confusion (like people pushing into this branch, etc.).
    • Now trigger a promotion deployment via this curl request:
    Trigger a promotion deployment
curl -X POST \
  https://rest.lagoon.amazeeio.cloud/promote \
  -H 'Content-Type: application/json' \
  -d '{
        "projectName":"myproject",
        "sourceEnvironmentName": "main",
        "branchName": "production"
      }'

    This tells Lagoon that you want to promote from the source main to the destination production (yes, it really uses branchName as destination, which is a bit unfortunate, but it will be fixed soon).

    Lagoon will now do the following:

    • Check out the Git branch main in order to load the .lagoon.yml and docker-compose.yml files (Lagoon still needs these in order to fully work).
• Create all Kubernetes/OpenShift objects for the defined services in docker-compose.yml, but with LAGOON_GIT_BRANCH=production as an environment variable.
• Copy the newest images from the main environment and use them (instead of building images or tagging them from upstream).
    • Run all post-rollout tasks like a normal deployment.

    You will receive the same notifications of success or failures like any other deployment.

    "},{"location":"using-lagoon-the-basics/","title":"Overview","text":""},{"location":"using-lagoon-the-basics/#requirements","title":"Requirements","text":""},{"location":"using-lagoon-the-basics/#docker","title":"Docker","text":"

    To run a Lagoon Project, your system must meet the requirements to run Docker. We suggest installing the latest version of Docker for your workstation. You can download Docker here. We also suggest allowing Docker at least 4 CPUs and 4 GB RAM.

    "},{"location":"using-lagoon-the-basics/#local-development-environments","title":"Local Development Environments","text":"

    TL;DR: install and start pygmy:

    Bash
brew tap pygmystack/pygmy; # (1)
brew install pygmy;
pygmy up
    1. HomeBrew is the easiest way to install Pygmy, see the docs for more info.

    Pygmy is a container stack for local development, developed collaboratively with the Lagoon team.

    Learn more about Lagoon, pygmy, and Local Development Environments

    "},{"location":"using-lagoon-the-basics/#step-by-step-guides","title":"Step by Step Guides","text":"
    • General: set up a new project in Lagoon
    • General: first deployment
    • Drupal: first deployment in Drupal
    • Drupal: Lagoonize your Drupal site
    • All: build and deployment process of Lagoon
    "},{"location":"using-lagoon-the-basics/#overview-of-lagoon-configuration-files","title":"Overview of Lagoon Configuration Files","text":""},{"location":"using-lagoon-the-basics/#lagoonyml","title":".lagoon.yml","text":"

    This is the main file that will be used by Lagoon to understand what should be deployed, as well as many other things. See documentation for .lagoon.yml.

    "},{"location":"using-lagoon-the-basics/#docker-composeyml","title":"docker-compose.yml","text":"

    This file is used by Docker Compose to start your local development environment. Lagoon also uses it to understand which of the services should be deployed, which type, and how to build them. This happens via labels. See documentation for docker-compose.yml.

    "},{"location":"using-lagoon-the-basics/#dockerfiles","title":"Dockerfiles","text":"

    Some Docker images and containers need additional customizations from the provided images. This usually has two reasons:

    1. Application code: Containers like NGINX, PHP, Node.js, etc, need the actual programming code within their images. This is done during a Docker build step, which is configured in a Dockerfile. Lagoon has full support for Docker, and therefore also allows you full control over the resulting images via Dockerfile customizations.
2. Customization of images: Lagoon also allows you to customize the base images according to your needs. This can be to inject an additional environment variable, change a service configuration, or even install additional tools. We advise caution with installing additional tools to the Docker images, as you will need to maintain any adaptations in the future!
"},{"location":"using-lagoon-the-basics/#supported-services-base-images-by-lagoon","title":"Supported Services & Base Images by Lagoon","text":"
• MariaDB: 10.4, 10.5, 10.6, 10.11 (mariadb/Dockerfile)
• PostgreSQL: 11, 12, 13, 14, 15 (postgres/Dockerfile)
• MongoDB: 4 (mongo/Dockerfile)
• NGINX: openresty/1.21 (nginx/Dockerfile)
• Node.js: 16, 18, 20 (node/Dockerfile)
• PHP FPM: 8.0, 8.1, 8.2 (php/fpm/Dockerfile)
• PHP CLI: 8.0, 8.1, 8.2 (php/cli/Dockerfile)
• Python: 3.7, 3.8, 3.9, 3.10, 3.11 (python/Dockerfile)
• Redis: 5, 6, 7 (redis/Dockerfile)
• Solr: 7, 8 (solr/Dockerfile)
• Varnish: 5, 6, 7 (varnish/Dockerfile)
• Opensearch: 2 (opensearch/Dockerfiles)
• RabbitMQ: 3.10 (rabbitmq/Dockerfile)
• Ruby: 3.0, 3.1, 3.2 (ruby/Dockerfile)

    All images are pushed to https://hub.docker.com/u/uselagoon. We suggest always using the latest tag (like uselagoon/nginx:latest) as they are kept up to date in terms of features and security.

If you choose to use a specific Lagoon version of an image, like uselagoon/nginx:20.10.0 or uselagoon/node-10:20.10.0, it is your own responsibility to upgrade the version of the images as soon as a new Lagoon version is released!
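For illustration, here is how the two approaches look in a docker-compose.yml (a minimal sketch; the nginx service and its labels follow the examples later on this page):

    docker-compose.yml
    services:
      nginx:
        # Recommended: `latest` images receive feature and security updates automatically.
        image: uselagoon/nginx:latest
        # Pinned alternative - you are responsible for bumping the tag after each Lagoon release:
        # image: uselagoon/nginx:20.10.0
        labels:
          lagoon.type: nginx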

    "},{"location":"using-lagoon-the-basics/build-and-deploy-process/","title":"Build and Deploy Process","text":"

This document describes what happens during a Lagoon build and deployment. It is heavily simplified from the actual process, but it will help you understand what is happening under the hood every time Lagoon deploys new code for you.

    Watch the video below for a walk-through of the deployment process.

    "},{"location":"using-lagoon-the-basics/build-and-deploy-process/#1-set-up-openshift-projectkubernetes-namespace-for-environment","title":"1. Set up OpenShift Project/Kubernetes Namespace for Environment","text":"

    First, Lagoon checks if the OpenShift project/Kubernetes namespace for the given environment exists and is correctly set up. It will make sure that we have the needed service accounts, create secrets, and will configure environment variables into a ConfigMap lagoon-env which is filled with information like the environment type and name, the Lagoon project name, and so on.
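As an illustration only (the exact keys and values vary per project and cluster), the lagoon-env ConfigMap carries data along these lines:

    lagoon-env ConfigMap (illustrative sketch)
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: lagoon-env
    data:
      LAGOON_PROJECT: drupal-example        # the Lagoon project name
      LAGOON_ENVIRONMENT: main              # the environment name
      LAGOON_ENVIRONMENT_TYPE: production   # development or production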

    "},{"location":"using-lagoon-the-basics/build-and-deploy-process/#2-git-checkout-merge","title":"2. Git Checkout & Merge","text":"

    Next, Lagoon will check out your code from Git. It needs that to be able to read the .lagoon.yml, docker-compose.yml and any .env files, but also to build the Docker images.

    Note that Lagoon will only process these actions if the branch/PR matches the branch regex set in Lagoon. Based on how the deployment has been triggered, different things will happen:

    "},{"location":"using-lagoon-the-basics/build-and-deploy-process/#branch-webhook-push","title":"Branch Webhook Push","text":"

    If the deployment is triggered automatically via a Git webhook and is for a single branch, Lagoon will check out the Git SHA which is included in the webhook payload. This will trigger a deployment for every Git SHA pushed.

    "},{"location":"using-lagoon-the-basics/build-and-deploy-process/#branch-rest-trigger","title":"Branch REST trigger","text":"

    If you trigger a branch deployment manually via the REST API (via the UI, or GraphQL) and do NOT define a SHA in the POST payload, Lagoon will just check out the latest commit in that branch and deploy it.

    "},{"location":"using-lagoon-the-basics/build-and-deploy-process/#pull-requests","title":"Pull Requests","text":"

    If the deployment is a pull request (PR) deployment, Lagoon will load the base and the HEAD branch and SHAs for the pull request and will:

    • Check out the base branch (the branch the PR points to).
    • Merge the HEAD branch (the branch that the PR originates from) on top of the base branch.
    • More specifically:
  • Lagoon will check out and merge the particular SHAs which were sent in the webhook. Those SHAs may or may not point to the branch heads. For example, if you make a new push to a GitHub pull request, it can happen that the SHA of the base branch does not point to the current base branch HEAD.

    If the merge fails, Lagoon will also stop and inform you about this.

    "},{"location":"using-lagoon-the-basics/build-and-deploy-process/#3-build-image","title":"3. Build Image","text":"

For each service defined in the docker-compose.yml, Lagoon will check whether images need to be built. If they need to be built, this happens now. The order of building is based on the order in which the services are configured in docker-compose.yml, and some build arguments are injected:

    • LAGOON_GIT_SHA
    • LAGOON_GIT_BRANCH
    • LAGOON_PROJECT
    • LAGOON_BUILD_TYPE (either pullrequest, branch or promote)
    • LAGOON_SSH_PRIVATE_KEY - The SSH private key that is used to clone the source repository. Use RUN /lagoon/entrypoints/05-ssh-key.sh to convert the build argument into an actual key at /home/.ssh/key which will be used by SSH and Git automatically. For safety, remove the key again via RUN rm /home/.ssh/key.
    • LAGOON_GIT_SOURCE_REPOSITORY - The full Git URL of the source repository.

    Also, if this is a pull request build:

    • LAGOON_PR_HEAD_BRANCH
    • LAGOON_PR_HEAD_SHA
    • LAGOON_PR_BASE_BRANCH
    • LAGOON_PR_BASE_SHA
    • LAGOON_PR_TITLE

    Additionally, for each already built image, its name is also injected. If your docker-compose.yml is configured to first build the cli image and then the nginx image, the name of the nginx image is injected as NGINX_IMAGE.
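Lagoon injects the build arguments listed above automatically; to consume one, your Dockerfile must declare a matching ARG. For local docker-compose builds you can supply placeholder values yourself via args - a sketch, where the cli service name and the fallback values are assumptions:

    docker-compose.yml
    services:
      cli:
        build:
          context: .
          dockerfile: cli.dockerfile
          args:
            # Arbitrary local fallbacks; during a Lagoon deploy the real values are injected.
            LAGOON_GIT_SHA: '0000000000000000000000000000000000000000'
            LAGOON_GIT_BRANCH: local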

    "},{"location":"using-lagoon-the-basics/build-and-deploy-process/#4-configure-kubernetes-or-openshift-services-and-routes","title":"4. Configure Kubernetes or OpenShift Services and Routes","text":"

    Next, Lagoon will configure Kubernetes or OpenShift with all services and routes that are defined from the service types, plus possible additional custom routes that you have defined in .lagoon.yml.

In this step it will expose all defined routes in the LAGOON_ROUTES environment variable as comma-separated URLs. It will also define one route as the "main" route, in this order:

1. If custom routes are defined: the first custom route defined in .lagoon.yml.
    2. The first auto-generated route from a service defined in docker-compose.yml.
    3. None.

    The \"main\" route is injected via the LAGOON_ROUTE environment variable.

    "},{"location":"using-lagoon-the-basics/build-and-deploy-process/#5-push-and-tag-images","title":"5. Push and Tag Images","text":"

    Now it is time to push the previously built Docker images into the internal Docker image registry.

Services that didn't specify a Dockerfile to be built in docker-compose.yml and only gave an image are also tagged, which makes the internal Docker image registry aware of the images so that they can be used in containers.

    "},{"location":"using-lagoon-the-basics/build-and-deploy-process/#6-persistent-storage","title":"6. Persistent Storage","text":"

Lagoon will now create persistent storage (PVCs) for each service that requires and has requested persistent storage.

    "},{"location":"using-lagoon-the-basics/build-and-deploy-process/#7-cron-jobs","title":"7. Cron jobs","text":"

    For each service that requests a cron job (like MariaDB), plus for each custom cron job defined in .lagoon.yml, Lagoon will now generate the cron job environment variables which are later injected into the Deployment.

    "},{"location":"using-lagoon-the-basics/build-and-deploy-process/#8-run-defined-pre-rollout-tasks","title":"8. Run defined pre-rollout tasks","text":"

    Now Lagoon will check the .lagoon.yml file for defined tasks in pre-rollout and will run them one by one in the defined services. Note that these tasks are executed on the pods currently running (so cannot utilize features or scripts that only exist in the latest commit) and therefore they are also not run on first deployments.

    If any of them fail, Lagoon will immediately stop and notify you, and the rollout will not proceed.

    "},{"location":"using-lagoon-the-basics/build-and-deploy-process/#9-deploymentconfigs-statefulsets-daemonsets","title":"9. DeploymentConfigs, Statefulsets, Daemonsets","text":"

This is probably the most important step. Based on the defined service type, Lagoon will create the Deployment, StatefulSet, or DaemonSet for the service. (Note that Deployments are analogous to DeploymentConfigs in OpenShift.)

    It will include all previously gathered information like the cron jobs, the location of persistent storage, the pushed images and so on.

Creation of these objects will also automatically cause Kubernetes or OpenShift to trigger new deployments of the pods if necessary, like when an environment variable or an image has changed. But if there is no change, there will be no deployment! This means that if you only update the PHP code in your application, the Varnish, Solr, MariaDB, Redis, and any other service that is defined but does not include your code will not be redeployed. This makes everything much, much faster.

    "},{"location":"using-lagoon-the-basics/build-and-deploy-process/#10-wait-for-all-rollouts-to-be-done","title":"10. Wait for all rollouts to be done","text":"

    Now Lagoon waits! It waits for all of the just-triggered deployments of the new pods to be finished, as well as for their health checks to be successful.

    If any of the deployments or health checks fail, the deployment will be stopped here, and you will be informed via the defined notification systems (like Slack) that the deployment has failed.

    "},{"location":"using-lagoon-the-basics/build-and-deploy-process/#11-run-defined-post-rollout-tasks","title":"11. Run defined post-rollout tasks","text":"

    Now Lagoon will check the .lagoon.yml file for defined tasks in post-rollout and will run them one by one in the defined services.

    If any of them fail, Lagoon will immediately stop and notify you.

    "},{"location":"using-lagoon-the-basics/build-and-deploy-process/#12-success","title":"12. Success","text":"

If all went well and nothing threw any errors, Lagoon will mark this build as successful and inform you via defined notifications. ✅

    "},{"location":"using-lagoon-the-basics/configure-webhooks/","title":"Configure Webhooks","text":"

    Your Lagoon administrator will also give you the route to the webhook-handler. You will add this to your repository as an outgoing webhook, and choose which events to send to Lagoon. Typically, you will send all push and pull request events. In Lagoon it is possible to add a regular expression to determine which branches and pull requests actually result in a deploy, and your Lagoon administrator can set that up for you. For example, all branches that start with feature- could be deployed to Lagoon.

    Info for amazee.io customers

    If you are an amazee.io customer, the route to the webhook-handler is: https://hooks.lagoon.amazeeio.cloud.

    Danger

    Managing the following settings will require you to have a high level of access to these repositories, which will be controlled by your organization. If you cannot access these settings, please contact your systems administrator or the appropriate person within your organization.

    "},{"location":"using-lagoon-the-basics/configure-webhooks/#github","title":"GitHub","text":"
    1. Proceed to Settings -> Webhooks -> Add webhook in your GitHub repository.
    2. The Payload URL is the route to the webhook-handler of your Lagoon instance, provided by your Lagoon administrator.
    3. Set Content type to application/json.
    4. Choose \"Let me select individual events.\"
    5. Choose which events will trigger your webhook. We suggest that you send Push and Pull request events, and then filter further in the Lagoon configuration of your project.
    6. Make sure the webhook is set to Active.
    7. Click Add webhook to save your configuration.
    "},{"location":"using-lagoon-the-basics/configure-webhooks/#gitlab","title":"GitLab","text":"
    1. Navigate to Settings -> Integrations in your GitLab repository.
    2. The URL is the route to the webhook-handler of your Lagoon instance, provided by your Lagoon administrator.
    3. Select the Trigger events which will send a notification to Lagoon. We suggest that you send Push events and Merge request events, and then filter further in the Lagoon configuration of your project.
4. Click Add webhook to save your configuration.
    "},{"location":"using-lagoon-the-basics/configure-webhooks/#bitbucket","title":"Bitbucket","text":"
    1. Navigate to Settings -> Webhooks -> Add new webhook in your repository.
    2. Title is for your reference.
    3. URL is the route to the webhook-handler of your Lagoon instance, provided by your Lagoon administrator.
    4. Choose from a full list of triggers and select the following:

      • Repository
        • Push
      • Pull Request
        • Created
        • Updated
        • Approved
        • Approval removed
        • Merged
        • Declined

      5. Click Save to save the webhook configurations for Bitbucket.

    "},{"location":"using-lagoon-the-basics/docker-compose-yml/","title":"docker-compose.yml","text":"

    The docker-compose.yml file is used by Lagoon to:

    • Learn which services/containers should be deployed.
    • Define how the images for the containers are built.
    • Define additional configurations like persistent volumes.

    Docker Compose (the tool) is very strict in validating the content of the YAML file, so we can only do configuration within labels of a service definition.

    Warning

    Lagoon only reads the labels, service names, image names and build definitions from a docker-compose.yml file. Definitions like: ports, environment variables, volumes, networks, links, users, etc. are IGNORED.

    This is intentional, as the docker-compose file is there to define your local environment configuration. Lagoon learns from the lagoon.type the type of service you are deploying and from that knows about ports, networks and any additional configuration that this service might need.

Here is a straightforward example of a docker-compose.yml file for Drupal:

    docker-compose.yml
version: '2.3'

    x-lagoon-project:
      # Lagoon project name (leave `&lagoon-project` when you edit this)
      &lagoon-project drupal-example

    x-volumes:
      &default-volumes
        # Define all volumes you would like to have real-time mounted into the docker containers
        volumes:
          - .:/app:delegated

    x-environment:
      &default-environment
        LAGOON_PROJECT: *lagoon-project
        # Route that should be used locally, if you are using pygmy, this route *must* end with .docker.amazee.io
        LAGOON_ROUTE: http://drupal-example.docker.amazee.io
        # Uncomment if you want to have the system behave as it will in production
        #LAGOON_ENVIRONMENT_TYPE: production
        # Uncomment to enable Xdebug and then restart via `docker-compose up -d`
        #XDEBUG_ENABLE: "true"

    x-user:
      &default-user
        # The default user under which the containers should run. Change this if you are on linux and run with another user than ID `1000`
        user: '1000'

    services:

      nginx:
        build:
          context: .
          dockerfile: nginx.dockerfile
        labels:
          lagoon.type: nginx-php-persistent # (1)
          lagoon.persistent: /app/web/sites/default/files/

      php:
        build:
          context: .
          dockerfile: php.dockerfile
        labels:
          lagoon.type: nginx-php-persistent # (2)
          lagoon.name: nginx
          lagoon.persistent: /app/web/sites/default/files/

      mariadb:
        image: amazeeio/mariadb-drupal
        labels:
          lagoon.type: mariadb
    1. Note the multi-container pods here.
    2. Note the multi-container pods here.
    "},{"location":"using-lagoon-the-basics/docker-compose-yml/#basic-settings","title":"Basic settings","text":"

    x-lagoon-project:

This is the machine name of your project; define it here. We'll use "drupal-example."

    x-volumes:

    This tells Lagoon what to mount into the container. Your web application lives in /app, but you can add or change this if needed.

    x-environment:

    1. Here you can set your local development URL. If you are using pygmy, it must end with .docker.amazee.io.
    2. If you want to exactly mimic the production environment, uncomment LAGOON_ENVIRONMENT_TYPE: production.
3. If you want to enable Xdebug, uncomment XDEBUG_ENABLE: "true".

    x-user:

    You are unlikely to need to change this, unless you are on Linux and would like to run with a user other than 1000.

    "},{"location":"using-lagoon-the-basics/docker-compose-yml/#services","title":"services","text":"

This defines all the services you want to deploy. Unfortunately, Docker Compose calls them services, even though they are actually containers. Going forward, we'll be calling them services throughout this documentation.

    The name of the service (nginx, php, and mariadb in the example above) is used by Lagoon as the name of the Kubernetes pod (yet another term - again, we'll be calling them services) that is generated, plus also any additional Kubernetes objects that are created based on the defined lagoon.type, which could be things like services, routes, persistent storage, etc.

    Please note that service names adhere to the RFC 1035 DNS label standard. Service names must:

    • contain at most 63 characters
    • contain only lowercase alphanumeric characters or '-'
    • start with an alphabetic character
    • end with an alphanumeric character

    Warning

Once you have set the name of a service, do NOT rename it. This will cause all kinds of havoc in your containers and break things.

    "},{"location":"using-lagoon-the-basics/docker-compose-yml/#docker-images","title":"Docker Images","text":""},{"location":"using-lagoon-the-basics/docker-compose-yml/#build","title":"build","text":"

    If you want Lagoon to build a Dockerfile for your service during every deployment, you can define it here:

    build

    • context
      • The build context path that should be passed on into the docker build command.
    • dockerfile:
      • Location and name of the Dockerfile that should be built.

    Warning

    Lagoon does NOT support the short version of build: <Dockerfile> and will fail if it finds such a definition.
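A minimal sketch of the supported long form, mirroring the nginx service from the example above:

    docker-compose.yml
    services:
      nginx:
        build:
          # Build context path passed on to `docker build`.
          context: .
          # Location and name of the Dockerfile to build.
          dockerfile: nginx.dockerfile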

    "},{"location":"using-lagoon-the-basics/docker-compose-yml/#image","title":"image","text":"

If you don't need to build a Dockerfile and just want to use an existing image, define it via image.
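For example, as in the mariadb service of the Drupal example above:

    docker-compose.yml
    services:
      mariadb:
        # Use a prebuilt image instead of building a Dockerfile.
        image: amazeeio/mariadb-drupal
        labels:
          lagoon.type: mariadb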

    "},{"location":"using-lagoon-the-basics/docker-compose-yml/#types","title":"Types","text":"

    Lagoon needs to know what type of service you are deploying in order to configure the correct Kubernetes or OpenShift objects.

    This is done via the lagoon.type label. There are many different types to choose from. Check Service Types to see all of them and their additional configuration possibilities.

    "},{"location":"using-lagoon-the-basics/docker-compose-yml/#skipignore-containers","title":"Skip/Ignore containers","text":"

    If you'd like Lagoon to ignore a service completely - for example, you need a container only during local development - give it the type none.
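A sketch, assuming a hypothetical mailhog service that should only run locally:

    docker-compose.yml
    services:
      mailhog:
        # Hypothetical local-only service; `lagoon.type: none` makes Lagoon skip it entirely.
        image: mailhog/mailhog
        labels:
          lagoon.type: none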

    "},{"location":"using-lagoon-the-basics/docker-compose-yml/#persistent-storage","title":"Persistent Storage","text":"

Some containers need persistent storage. Lagoon allows each container to have a maximum of one persistent storage volume attached to it. You can configure the container to request its own persistent storage volume (which can then be mounted by other containers), or you can tell the container to mount the persistent storage created by another container.

    In many cases, Lagoon knows where that persistent storage needs to go. For example, for a MariaDB container, Lagoon knows that the persistent storage should be put into /var/lib/mysql, and puts it there automatically without any extra configuration to define that. For some situations, though, Lagoon needs your help to know where to put the persistent storage:

    • lagoon.persistent - The absolute path where the persistent storage should be mounted (the above example uses /app/web/sites/default/files/ which is where Drupal expects its persistent storage).
    • lagoon.persistent.name - Tells Lagoon to not create a new persistent storage for that service, but instead mounts the persistent storage of another defined service into this service.
    • lagoon.persistent.size - The size of persistent storage you require (Lagoon usually gives you minimum 5G of persistent storage, if you need more, define it here).
    • lagoon.persistent.class - By default Lagoon automatically assigns the right storage class for your service (like SSDs for MySQL, bulk storage for Nginx, etc.). If you need to overwrite this, you can do so here. This is highly dependent on the underlying Kubernetes/OpenShift that Lagoon runs on. Ask your Lagoon administrator about this.
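Putting these labels together, a sketch of a service requesting a larger volume (the size value and its format are an example; confirm the supported format with your Lagoon administrator):

    docker-compose.yml
    services:
      nginx:
        build:
          context: .
          dockerfile: nginx.dockerfile
        labels:
          lagoon.type: nginx-php-persistent
          # Where the volume is mounted inside the container.
          lagoon.persistent: /app/web/sites/default/files/
          # Request more than the usual 5G minimum.
          lagoon.persistent.size: 10Gi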
    "},{"location":"using-lagoon-the-basics/docker-compose-yml/#auto-generated-routes","title":"Auto-generated Routes","text":"

The docker-compose.yml file also supports per-service enabling and disabling of autogenerated routes:

    • lagoon.autogeneratedroute: false label will stop a route from being automatically created for that service. It can be applied to all services with autogenerated routes, but is mostly useful for the basic and basic-persistent service types when used to create an additional internal-facing service for a database service or similar. The inverse is also true - it will enable an auto-generated route for a service when the .lagoon.yml file disables them.
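A sketch, assuming a hypothetical internal-facing basic service that should not get a route (the service name and Dockerfile are placeholders):

    docker-compose.yml
    services:
      internal-api:
        build:
          context: .
          dockerfile: api.dockerfile
        labels:
          lagoon.type: basic
          # No route will be generated for this service.
          lagoon.autogeneratedroute: false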
    "},{"location":"using-lagoon-the-basics/docker-compose-yml/#multi-container-pods","title":"Multi-Container Pods","text":"

Kubernetes and OpenShift don't deploy plain containers. Instead, they deploy pods, each with one or more containers. Usually Lagoon creates a single pod with one container inside for each defined docker-compose service. In some cases, we need to put two containers inside a single pod, as these containers are so dependent on each other that they should always stay together. An example of such a situation is the PHP and NGINX containers that both contain the PHP code of a web application like Drupal.

For these cases, it is possible to tell Lagoon which services should stay together, which is done in the following way (remember that we are calling containers services because of docker-compose):

    1. Define both services with a lagoon.type that expects two services (in the example this is nginx-php-persistent defined on the nginx and php services).
2. Link the second service to the first one by setting the label lagoon.name of the second one to the name of the first one (in the example this is done by defining lagoon.name: nginx).

    This will cause Lagoon to realize that the nginx and php containers are combined in a pod that will be called nginx.

    Warning

Once you have set the lagoon.name of a service, do NOT rename it. This will cause all kinds of havoc in your containers and break things.

Lagoon still needs to understand which of the two services is which actual individual service type (nginx and php in this case). It does this by looking for services whose names match the names given by the type, so nginx-php-persistent expects one service named nginx and one named php in the docker-compose.yml. If for any reason you want to use different names for the services, or you need more than one pod with the type nginx-php-persistent, there is an additional label lagoon.deployment.servicetype which can be used to define the actual service type.

    An example:

    docker-compose.yml
nginx:
      build:
        context: .
        dockerfile: nginx.dockerfile
      labels:
        lagoon.type: nginx-php-persistent
        lagoon.persistent: /app/web/sites/default/files/
        lagoon.name: nginx # If this isn't present, Lagoon will use the container name, which in this case is nginx.
        lagoon.deployment.servicetype: nginx
    php:
      build:
        context: .
        dockerfile: php.dockerfile
      labels:
        lagoon.type: nginx-php-persistent
        lagoon.persistent: /app/web/sites/default/files/
        lagoon.name: nginx # We want this service to be part of the NGINX pod in Lagoon.
        lagoon.deployment.servicetype: php

    In the example above, the services are named nginx and php (but you can call them whatever you want). The lagoon.name tells Lagoon which services go together - all of the services with the same name go together.

    In order for Lagoon to realize which one is the nginx and which one is the php service, we define it via lagoon.deployment.servicetype: nginx and lagoon.deployment.servicetype: php.

    "},{"location":"using-lagoon-the-basics/docker-compose-yml/#helm-templates-kubernetes-only","title":"Helm Templates (Kubernetes only)","text":"

    Lagoon uses Helm for templating on Kubernetes. To do this, a series of Charts are included with the build-deploy-tool image.

    "},{"location":"using-lagoon-the-basics/docker-compose-yml/#custom-rollout-monitor-types","title":"Custom Rollout Monitor Types","text":"

By default, Lagoon expects that services from custom templates are rolled out via a DeploymentConfig object within Kubernetes or OpenShift. It monitors the rollout based on this object. In some cases, services that are defined via custom templates need a different way of monitoring. This can be defined via lagoon.rollout:

    • deploymentconfig - This is the default. Expects a DeploymentConfig object in the template for the service.
    • statefulset - Expects a Statefulset object in the template for the service.
    • daemonset - Expects a Daemonset object in the template for the service.
    • false - Will not monitor any rollouts, and will just be happy if the template applies and does not throw any errors.

    You can also overwrite the rollout for just one specific environment. This is done in .lagoon.yml.
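A sketch of these labels in docker-compose.yml (the service name and the template file are placeholders):

    docker-compose.yml
    services:
      mariadb:
        image: amazeeio/mariadb-drupal
        labels:
          lagoon.type: custom
          # Hypothetical custom template for this service.
          lagoon.template: mariadb.deployment.yml
          # Monitor the rollout as a Statefulset instead of a DeploymentConfig.
          lagoon.rollout: statefulset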

    "},{"location":"using-lagoon-the-basics/docker-compose-yml/#buildkit-and-docker-compose-v2","title":"BuildKit and Docker Compose v2","text":"

    BuildKit is a toolkit for converting source code to build artifacts in an efficient, expressive and repeatable manner.

    With the release of Lagoon v2.11.0, Lagoon now provides support for BuildKit-based docker-compose builds. To enable BuildKit for your Project or Environment, add DOCKER_BUILDKIT=1 as a build-time variable.

    Bug

    Note that while using BuildKit locally, you may experience some known issues.

• Failed to solve with frontend dockerfile.v0: failed to create LLB definition: pull access denied, repository does not exist or may require authorization: This message means that your build has tried to access a Docker image that hasn't been built yet. As BuildKit builds in parallel, this can happen if you have a Docker image that inherits another one (as we do in Drupal with the CLI image). You can use the target field inside the build to reconfigure it as a multi-stage build.
    • Issues with volumes_from in Docker Compose v2 - this section (which provides SSH access into locally running containers) has been deprecated by Docker Compose. The section can be removed from your docker-compose.yml file if you don't require SSH access from inside your local environment, or it can be worked around on a project-by-project basis - see https://github.com/pygmystack/pygmy/issues/333#issuecomment-1274091375 for more information.
    "},{"location":"using-lagoon-the-basics/first-deployment/","title":"First Deployment","text":"

    Note

    If you are deploying a Drupal Project, skip this and read the Drupal-specific first deployment documentation.

    "},{"location":"using-lagoon-the-basics/first-deployment/#1-make-sure-you-are-ready","title":"1. Make sure you are ready","text":"

In order to make your first deployment a successful one, please make sure that your project is Lagoonized and that you have set up the project in Lagoon. If not, or if you're not sure, don't worry: go back and follow the Step-by-Step Guides, which show you how this works, and then come back and deploy!

    "},{"location":"using-lagoon-the-basics/first-deployment/#2-push","title":"2. Push","text":"

    With Lagoon, you create a new deployment by pushing into a branch that is configured to be deployed.

    If you don't have any new code to push, don't worry! Run:

    Git push
git commit --allow-empty -m "go, go! Power Rangers!"
    git push

    This will trigger a push, and your Git hosting will inform Lagoon about this push via the configured webhook.

    If all is correct, you should see a notification in your configured chat system (this has been configured by your friendly Lagoon administrator):

    This informs you that Lagoon has just started to deploy your code. Depending on the size of the code and amount of containers, this will take a couple of seconds. Just relax. If you want to know what's happening now, check out the Build and Deploy Process of Lagoon.

    You can also check your Lagoon UI to see the progress of any deployment (your Lagoon administrator has the info).

    "},{"location":"using-lagoon-the-basics/first-deployment/#3-its-done","title":"3. It's done","text":"

As soon as Lagoon is done building and deploying, it will send a second notification to the chat system. Here is an example:

    It tells you:

    • Which project has been deployed.
    • Which branch and Git SHA have been deployed.
    • A link to the full logs of the build and deployment.
    • Links to all routes (URLs) where the environment can be reached.

    You can also quickly tell what kind of notification it is by the emoji at the beginning - whether it's just info that the build has started, a success, or fail.

That's it! We hope that wasn't too hard - making DevOps accessible is what we are striving for!

    "},{"location":"using-lagoon-the-basics/first-deployment/#but-wait-how-about-other-branches-or-the-production-environment","title":"But wait, how about other branches or the production environment?","text":"

That's the beauty of Lagoon: it's exactly the same! Just push another branch that is configured to be deployed, and that one will be deployed as well.

    "},{"location":"using-lagoon-the-basics/first-deployment/#failure-dont-worry","title":"Failure? Don't worry","text":"

    Did the deployment fail? Oh no! But we're here to help:

    1. If you deployed a Drupal site, make sure to read the Drupal-specific first deployment documentation, which explains why this happens.
2. Click on the Logs link in the error notification; it will tell you where in the deployment process the failure happened.
    3. If you can't figure it out, just ask your Lagoon support - we are here to help!
    4. Reach out to us in your support channel or in the community Discord.
    "},{"location":"using-lagoon-the-basics/going-live/","title":"Going Live","text":"

    Congratulations, you're this close to going live with your website on Lagoon! In order to make this as seamless as possible, we've got this final checklist for you. It leads you through the last few things you should check before taking your site live.

    "},{"location":"using-lagoon-the-basics/going-live/#check-your-lagoonyml","title":"Check your .lagoon.yml","text":""},{"location":"using-lagoon-the-basics/going-live/#routes-ssl","title":"Routes / SSL","text":"

    Check to be sure that all routes have been set up in your .lagoon.yml. Be aware that if you don't point the domains towards Lagoon, you should disable Let's Encrypt (LE) certificate creation, as it will lead to issues. Domains not pointing towards Lagoon will be disabled after a while in order to not exceed the Let's Encrypt quotas.

If you use Certificate Authority (CA) signed certificates, you can set tls-acme to false, but leave the insecure flag set to Allow or Redirect. In the case of CA certificates, let your Lagoon administrator know the routes and the SSL certificate that needs to be put in place.

    .lagoon.yml
environments:
      main:
        routes:
          - nginx:
            - example.com:
                tls-acme: 'false'
                insecure: Allow
            - www.example.com:
                tls-acme: 'false'
                insecure: Allow

As soon as the DNS entries point towards your Lagoon installation, you can switch the flags: tls-acme to true and insecure to Redirect:

    .lagoon.yml
environments:
      main:
        routes:
          - nginx:
            - example.com:
                tls-acme: 'true'
                insecure: Redirect
            - www.example.com:
                tls-acme: 'true'
                insecure: Redirect

    Note

As checking every page of your website might be a bit of a tedious job, you can make use of mixed-content-scan. This will crawl the entire site and report back any pages that include assets from a non-HTTPS site.

    "},{"location":"using-lagoon-the-basics/going-live/#redirects","title":"Redirects","text":"

    If you need non-www to www redirects, make sure you have them set up in the redirects-map.conf - see Documentation.

    "},{"location":"using-lagoon-the-basics/going-live/#cron-jobs","title":"Cron jobs","text":"

    Check if your cron jobs have been set up for your production environment - see .lagoon.yml.

    "},{"location":"using-lagoon-the-basics/going-live/#dns","title":"DNS","text":"

    To make it as smooth as possible for you to get your site pointing to our servers, we have dedicated load-balancer DNS records. Those technical DNS resource records are used for getting your site linked to the amazee.io infrastructure and serve no other purpose. If you are in doubt of the CNAME record, ask your Lagoon administrator about the exact CNAME you need to set up.

Example on amazee.io: <region-identifier>.amazee.io

Before you switch your domain over to Lagoon, make sure you lower the Time-to-Live (TTL). This ensures that the switch from the old to the new servers goes quickly. We usually advise a TTL of 300-600 seconds prior to the DNS switch. More information about TTL.

    "},{"location":"using-lagoon-the-basics/going-live/#recommended-settings-for-fastly-cname-record","title":"Recommended settings for Fastly (CNAME record):","text":"

    The recommended method of pointing your domain's DNS records at Lagoon is via a CNAME record as shown below:

    CNAME: cdn.amazee.io

    "},{"location":"using-lagoon-the-basics/going-live/#alternate-settings-for-fastly-a-records","title":"Alternate Settings for Fastly (A records):","text":"

    If your DNS provider does not support the use of CNAME records, you can use these A records instead. Please ensure you set up individual records for each IP listed below:

    • A: 151.101.2.191
    • A: 151.101.66.191
    • A: 151.101.130.191
    • A: 151.101.194.191

    Note

We do not suggest configuring any static IP addresses in your DNS zones. The Lagoon load balancer infrastructure may change over time, which can impact your site's availability if you have configured a static IP address.

    "},{"location":"using-lagoon-the-basics/going-live/#root-domains","title":"Root Domains","text":"

    Configuring the root domain (e.g. example.com) can be a bit tricky because the DNS specification does not allow the root domain to point to a CNAME entry. Depending on your DNS provider, the record name is different:

    • ALIAS at DNSimple
    • ANAME at DNS Made Easy
    • ANAME at easyDNS
    • ALIAS at PointDNS
    • CNAME at CloudFlare
    • CNAME at NS1

    If your DNS provider needs an IP address for the root domain, get in touch with your Lagoon administrator to give you the load balancer IP addresses.

    "},{"location":"using-lagoon-the-basics/going-live/#production-environment","title":"Production environment","text":"

    Lagoon understands the concept of development and production environments. Development environments automatically send noindex and nofollow headers in order to prohibit indexing by search engines.

    X-Robots-Tag: noindex, nofollow

    During project setup, the production environment should already be defined. If that's omitted, your environment will run in development mode. You can check if the environment is set as production environment in the Lagoon user interface. If the production environment is not set, let your Lagoon administrator know, and they will configure the system accordingly.

    "},{"location":"using-lagoon-the-basics/lagoon-yml/","title":".lagoon.yml","text":"

    The .lagoon.yml file is the central file to set up your project. It contains configuration in order to do the following:

    • Define routes for accessing your sites.
    • Define pre-rollout tasks.
    • Define post-rollout tasks.
    • Set up SSL certificates.
    • Add cron jobs for environments.

    The .lagoon.yml file must be placed at the root of your Git repository.

    "},{"location":"using-lagoon-the-basics/lagoon-yml/#general-settings","title":"General Settings","text":""},{"location":"using-lagoon-the-basics/lagoon-yml/#docker-compose-yaml","title":"docker-compose-yaml","text":"

    Tells the build script which Docker Compose YAML file should be used, in order to learn which services and containers should be deployed. This defaults to docker-compose.yml, but could be used for a specific Lagoon Docker Compose YAML file if needed.
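For example, to point Lagoon at a dedicated Compose file (the filename here is hypothetical):

    .lagoon.yml
    docker-compose-yaml: docker-compose.lagoon.yml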

    "},{"location":"using-lagoon-the-basics/lagoon-yml/#environment_variablesgit_sha","title":"environment_variables.git_sha","text":"

    This setting allows you to enable injecting the deployed Git SHA into your project as an environment variable. By default this is disabled. Setting the value to true sets the SHA as the environment variable LAGOON_GIT_SHA.
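Enabling it looks like this:

    .lagoon.yml
    environment_variables:
      git_sha: 'true'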

    "},{"location":"using-lagoon-the-basics/lagoon-yml/#routes","title":"Routes","text":"

Routes are used to direct traffic to services. Each service in an environment can have routes, in which the domain names are defined manually or automatically. The top-level routes section applies to all routes in all environments.

    "},{"location":"using-lagoon-the-basics/lagoon-yml/#routesautogenerate","title":"routes.autogenerate","text":"

    This allows you to configure automatically created routes. Manual routes are defined per environment.

    • enabled: Set to false to disable autogenerated routes. Default is true.
• allowPullrequests: Set to true to override enabled: false for pull requests.

      .lagoon.yml
      routes:
        autogenerate:
          enabled: false
          allowPullrequests: true
    • insecure: Configures HTTP connections. Default is Allow.
      • Allow: Route will respond to HTTP and HTTPS.
      • Redirect: Route will redirect any HTTP request to HTTPS.
    • prefixes: Configure prefixes for the autogenerated routes of each environment. This is useful for things like language prefix domains, or a multi-domain site using the Drupal domain module.

      .lagoon.yml
routes:
        autogenerate:
          prefixes:
            - www
            - de
            - fr
            - it
    "},{"location":"using-lagoon-the-basics/lagoon-yml/#tasks","title":"Tasks","text":"

There are different types of tasks you can define, and they differ in when exactly they are executed in the build flow:

    "},{"location":"using-lagoon-the-basics/lagoon-yml/#pre-rollout-tasks-pre_rolloutirun","title":"Pre-Rollout Tasks - pre_rollout.[i].run","text":"

    Here you can specify tasks which will run against your project after all images have been successfully built, but before:

    • Any running containers are updated with the newly built images.
    • Any other changes are made to your existing environment.

    This feature enables you to, for example, create a database dump before updating your application. This can make it easier to roll back in case of a problem with the deploy.

    Info

    The pre-rollout tasks run in the existing pods before they are updated, which means:

    • Changes made to your Dockerfile since the last deploy will not be visible when pre-rollout tasks run.
    • If there are no existing containers (e.g. on the initial deployment of a new environment), pre-rollout tasks are skipped.
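A minimal sketch of a pre-rollout task - the dump command and path are illustrative, following the Drupal examples used elsewhere on this page:

    .lagoon.yml
    pre_rollout:
      - run:
          name: pre-deploy database dump
          # Illustrative: dump the database so a problematic deploy can be rolled back.
          command: mkdir -p /app/web/sites/default/files/private && drush sql-dump --gzip --result-file=/app/web/sites/default/files/private/pre-deploy-dump.sql.gz
          service: cli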
    "},{"location":"using-lagoon-the-basics/lagoon-yml/#post-rollout-tasks-post_rolloutirun","title":"Post-Rollout Tasks - post_rollout.[i].run","text":"

    Here you can specify tasks which need to run against your project, after:

    • All images have been successfully built.
    • All containers are updated with the new images.
• All containers are running and have passed their readiness checks.

    Common uses for post-rollout tasks include running drush updb, drush cim, or clearing various caches.

    • name
      • The name is an arbitrary label for making it easier to identify each task in the logs.
    • command
      • Here you specify what command should run. These are run in the WORKDIR of each container, for Lagoon images this is /app. Keep this in mind if you need to cd into a specific location to run your task.
    • service
      • The service in which to run the task. If following our Drupal example, this will be the CLI container, as it has all your site code, files, and a connection to the database. Typically you do not need to change this.
    • container
      • If the service has multiple containers (e.g. nginx-php), you will need to specify which container in the pod to connect to (e.g. the php container within the nginx pod).
    • shell
  • In which shell the task should be run. By default sh is used, but if the container also has other shells (like bash), you can define it here. This is useful if you want to run some small if/else bash scripts within the post-rollout tasks. See the example below to learn how to write a script with multiple lines.
    • when
      • The \"when\" clause allows for the conditional running of tasks. It expects an expression that will evaluate to a true/false value which determines whether the task should be run.

    Note: If you would like to temporarily disable pre/post-rollout tasks during a deployment, you can set either of the following environment variables in the API at the project or environment level (see how on Environment Variables).

    • LAGOON_PREROLLOUT_DISABLED=true
    • LAGOON_POSTROLLOUT_DISABLED=true
    "},{"location":"using-lagoon-the-basics/lagoon-yml/#example-post-rollout-tasks","title":"Example post-rollout tasks","text":"

    Here are some useful examples of post-rollout tasks that you may want to use or adapt for your projects.

    Run only if Drupal not installed:

    .lagoon.yml
- run:
        name: IF no Drupal installed
        command: | # (1)
          if tables=$(drush sqlq "show tables like 'node';") && [ -z "$tables" ]; then
            #### whatever you like
          fi
        service: cli
        shell: bash
    1. This shows how to create a multi-line command.

    Different tasks based on branch name:

    .lagoon.yml
- run:
        name: Different tasks based on branch name
        command: |
          ### Runs if current branch is not 'production'
        service: cli
        when: LAGOON_GIT_BRANCH != "production"

    Run shell script:

    .lagoon.yml
- run:
        name: Run Script
        command: './scripts/script.sh'
        service: cli

    Target specific container in pod:

    .lagoon.yml
- run:
        name: show php env variables
        command: env
        service: nginx
        container: php

    Drupal & Drush 9: Sync database & files from master environment:

    .lagoon.yml
- run:
        name: Sync DB and Files from master if we are not on master
        command: |
          # Only if we don't have a database yet
          if tables=$(drush sqlq 'show tables;') && [ -z "$tables" ]; then
            drush sql-sync @lagoon.master @self # (1)
            drush rsync @lagoon.master:%files @self:%files -- --omit-dir-times --no-perms --no-group --no-owner --chmod=ugo=rwX
          fi
        service: cli
        when: LAGOON_ENVIRONMENT_TYPE != "production"
    1. Make sure to use the correct aliases for your project here.
    "},{"location":"using-lagoon-the-basics/lagoon-yml/#backup-retention","title":"Backup Retention","text":""},{"location":"using-lagoon-the-basics/lagoon-yml/#backup-retentionproductionmonthly","title":"backup-retention.production.monthly","text":"

    Specify the number of monthly backups Lagoon should retain for your project's production environment(s).

    The global default is 1 if this value is not specified.

    "},{"location":"using-lagoon-the-basics/lagoon-yml/#backup-retentionproductionweekly","title":"backup-retention.production.weekly","text":"

    Specify the number of weekly backups Lagoon should retain for your project's production environment(s).

    The global default is 6 if this value is not specified.

    "},{"location":"using-lagoon-the-basics/lagoon-yml/#backup-retentionproductiondaily","title":"backup-retention.production.daily","text":"

    Specify the number of daily backups Lagoon should retain for your project's production environment(s).

    The global default is 7 if this value is not specified.

    "},{"location":"using-lagoon-the-basics/lagoon-yml/#backup-retentionproductionhourly","title":"backup-retention.production.hourly","text":"

    Specify the number of hourly backups Lagoon should retain for your project's production environment(s).

    The global default is 0 if this value is not specified.
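Putting the four retention keys together with their documented defaults:

    .lagoon.yml
    backup-retention:
      production:
        monthly: 1
        weekly: 6
        daily: 7
        hourly: 0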

    "},{"location":"using-lagoon-the-basics/lagoon-yml/#backup-schedule","title":"Backup Schedule","text":""},{"location":"using-lagoon-the-basics/lagoon-yml/#backup-scheduleproduction","title":"backup-schedule.production","text":"

    Specify the backup schedule for this project. Accepts cron-compatible syntax with the notable exception that the Minute block must be the letter M. Any other value in the Minute block will cause the Lagoon build to fail. This allows Lagoon to randomly choose a specific minute for these backups to happen, while users can specify the remainder of the schedule down to the hour.

    The global default is M H(22-2) * * * if this value is not specified. Take note that these backups will use the cluster's local timezone.
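For example, stating the default schedule explicitly (backups at a random minute during the hours of 22:00-02:59, in the cluster's local timezone):

    .lagoon.yml
    backup-schedule:
      production: "M H(22-2) * * *"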

    "},{"location":"using-lagoon-the-basics/lagoon-yml/#environments","title":"Environments","text":"

Environment names match your deployed branches or pull requests. This allows each environment to have a different config. In our example it will apply to the main and staging environments.

    "},{"location":"using-lagoon-the-basics/lagoon-yml/#environmentsnameroutes","title":"environments.[name].routes","text":"

Manual routes are domain names that are configured per environment to direct traffic to a service. Since all environments get automatically created routes by default, it is typical that manual routes are only set up for the production environment, using the main domain of the project's website, like www.example.com.

    Tip

    Since Lagoon has no control over the manual routes, you'll need to ensure the DNS records are configured properly at your DNS provider. You can likely set a CNAME record to point to the automatic route.

    The first element after the environment is the target service, nginx in our example. This is how we identify which service incoming requests will be sent to.

The simplest route is example.com, as seen in our example .lagoon.yml - you can see it has no additional configuration. This will assume that you want a Let's Encrypt certificate for your route and no redirect from HTTP to HTTPS.

    In the \"www.example.com\" example below, we see three more options (also notice the : at the end of the route and that the route is wrapped in \", that's important!):

    .lagoon.yml
    - \"www.example.com\":\ntls-acme: true\ninsecure: Redirect\nhstsEnabled: true\n
    "},{"location":"using-lagoon-the-basics/lagoon-yml/#ssl-configuration-tls-acme","title":"SSL Configuration tls-acme","text":"
    • tls-acme: Configures automatic TLS certificate generation via Let's Encrypt. Default is true, set to false to disable automatic certificates.
    • insecure: Configures HTTP connections. Default is Allow.
      • Allow: Route will respond to HTTP and HTTPS.
      • Redirect: Route will redirect any HTTP request to HTTPS.
    • hstsEnabled: Adds the Strict-Transport-Security header. Default is false.
    • hstsMaxAge: Configures the max-age directive. Default is 31536000 (1 year).
    • hstsPreload: Sets the preload directive. Default is false.
    • hstsIncludeSubdomains: Sets the includeSubDomains directive. Default is false.

    Info

If you plan to switch from an SSL certificate signed by a Certificate Authority (CA) to a Let's Encrypt certificate, it's best to get in touch with your Lagoon administrator to oversee the transition. There are known issues during the transition; the workaround is to manually remove the CA certificate and then trigger the Let's Encrypt process.

    "},{"location":"using-lagoon-the-basics/lagoon-yml/#monitoring-a-specific-path","title":"Monitoring a specific path","text":"

    When UptimeRobot is configured for your cluster (Kubernetes or OpenShift), Lagoon will inject annotations to each route/ingress for use by the stakater/IngressControllerMonitor. The default action is to monitor the homepage of the route. If you have a specific route to be monitored, this can be overridden by adding a monitoring-path to your route specification. A common use is to set up a path for monitoring which bypasses caching to give a more real-time monitoring of your site.

    .lagoon.yml
    - \"www.example.com\":\nmonitoring-path: \"/bypass-cache\"\n
    "},{"location":"using-lagoon-the-basics/lagoon-yml/#ingress-annotations","title":"Ingress annotations","text":"

    Warning

    Route/Ingress annotations are only supported by projects that deploy into clusters that run nginx-ingress controllers! Check with your Lagoon administrator if this is supported.

    • annotations can be a YAML map of annotations supported by the nginx-ingress controller. This is specifically useful for easy redirects and other configurations.
    "},{"location":"using-lagoon-the-basics/lagoon-yml/#restrictions","title":"Restrictions","text":"

    Some annotations are disallowed or partially restricted in Lagoon. The table below describes these rules.

    If your .lagoon.yml contains one of these annotations it will cause a build failure.

| Annotation | Notes |
    | ---------- | ----- |
    | nginx.ingress.kubernetes.io/auth-snippet | Disallowed |
    | nginx.ingress.kubernetes.io/configuration-snippet | Restricted to rewrite, add_header, set_real_ip, and more_set_headers directives. |
    | nginx.ingress.kubernetes.io/modsecurity-snippet | Disallowed |
    | nginx.ingress.kubernetes.io/server-snippet | Restricted to rewrite, add_header, set_real_ip, and more_set_headers directives. |
    | nginx.ingress.kubernetes.io/stream-snippet | Disallowed |
    | nginx.ingress.kubernetes.io/use-regex | Disallowed |

    "},{"location":"using-lagoon-the-basics/lagoon-yml/#ingress-annotations-redirects","title":"Ingress annotations redirects","text":"

In this example any requests to example.ch will be redirected to https://www.example.ch while keeping folders or query parameters intact (example.ch/folder?query -> https://www.example.ch/folder?query).

    .lagoon.yml
    - \"example.ch\":\nannotations:\nnginx.ingress.kubernetes.io/permanent-redirect: https://www.example.ch$request_uri\n- www.example.ch\n

You can, of course, also redirect to any other URL not hosted on Lagoon. This will direct requests to example.de to https://www.google.com:

    .lagoon.yml
    - \"example.de\":\nannotations:\nnginx.ingress.kubernetes.io/permanent-redirect: https://www.google.com\n
    "},{"location":"using-lagoon-the-basics/lagoon-yml/#trusted-reverse-proxies","title":"Trusted Reverse Proxies","text":"

    Warning

Kubernetes will only process a single nginx.ingress.kubernetes.io/server-snippet annotation. If you use this annotation on a non-production environment route, please ensure that you also include the add_header X-Robots-Tag "noindex, nofollow"; annotation as part of your server-snippet. This is needed to stop robots from crawling development environments, because the default server-snippet set in the ingress templates to prevent this in development environments will be overwritten by any server-snippets set in .lagoon.yml.

Some configurations involve a reverse proxy (like a CDN) in front of the Kubernetes clusters. In these configurations, the IP of the reverse proxy will appear in the REMOTE_ADDR, HTTP_X_REAL_IP, and HTTP_X_FORWARDED_FOR header fields in your applications. The original IP of the requester can be found in the HTTP_X_ORIGINAL_FORWARDED_FOR header.

    If you want the original IP to appear in the REMOTE_ADDR, HTTP_X_REAL_IP, and HTTP_X_FORWARDED_FOR header fields, you need to tell the ingress which reverse proxy IPs you want to trust:

    .lagoon.yml
    - \"example.ch\":\nannotations:\nnginx.ingress.kubernetes.io/server-snippet: |\nset_real_ip_from 1.2.3.4/32;\n

This example would trust the CIDR 1.2.3.4/32 (the IP 1.2.3.4 in this case). Therefore, if a request is sent to the Kubernetes cluster from the IP 1.2.3.4, the X-Forwarded-For header is analyzed and its contents injected into the REMOTE_ADDR, HTTP_X_REAL_IP, and HTTP_X_FORWARDED_FOR header fields.

    "},{"location":"using-lagoon-the-basics/lagoon-yml/#environmentsnametypes","title":"Environments.[name].types","text":"

    The Lagoon build process checks the lagoon.type label from the docker-compose.yml file in order to learn what type of service should be deployed (read more about them in the documentation of docker-compose.yml).

    Sometimes you might want to override the type just for a single environment, and not for all of them. For example, if you want a standalone MariaDB database (instead of letting the Service Broker/operator provision a shared one) for your non-production environment called develop:

    service-name: service-type

    • service-name is the name of the service from docker-compose.yml you would like to override.
    • service-type the type of the service you would like to use in your override.

    Example for setting up MariaDB_Galera:

    .lagoon.yml
environments:
      develop:
        types:
          mariadb: mariadb-single
    "},{"location":"using-lagoon-the-basics/lagoon-yml/#environmentsnametemplates","title":"environments.[name].templates","text":"

    The Lagoon build process checks the lagoon.template label from the docker-compose.yml file in order to check if the service needs a custom template file (read more about them in the documentation of docker-compose.yml).

    Sometimes you might want to override the template just for a single environment, and not for all of them:

    service-name: template-file

    • service-name is the name of the service from docker-compose.yml you would like to override.
    • template-file is the path and name of the template to use for this service in this environment.
    "},{"location":"using-lagoon-the-basics/lagoon-yml/#example-template-override","title":"Example Template Override","text":".lagoon.yml
environments:
      main:
        templates:
          mariadb: mariadb.main.deployment.yml
    "},{"location":"using-lagoon-the-basics/lagoon-yml/#environmentsnamerollouts","title":"environments.[name].rollouts","text":"

The Lagoon build process checks the lagoon.rollout label from the docker-compose.yml file in order to check if the service needs a special rollout type (read more about them in the documentation of docker-compose.yml).

    Sometimes you might want to override the rollout type just for a single environment, especially if you also overwrote the template type for the environment:

    service-name: rollout-type

    • service-name is the name of the service from docker-compose.yml you would like to override.
• rollout-type is the type of rollout. See the documentation of docker-compose.yml for possible values.
    "},{"location":"using-lagoon-the-basics/lagoon-yml/#custom-rollout-type-example","title":"Custom Rollout Type Example","text":".lagoon.yml
environments:
      main:
        rollouts:
          mariadb: statefulset
    "},{"location":"using-lagoon-the-basics/lagoon-yml/#environmentsnameautogenerateroutes","title":"environments.[name].autogenerateRoutes","text":"

    This allows for any environments to get autogenerated routes when route autogeneration is disabled.

    .lagoon.yml
routes:
      autogenerate:
        enabled: false
    environments:
      develop:
        autogenerateRoutes: true
    "},{"location":"using-lagoon-the-basics/lagoon-yml/#environmentsnamecronjobs","title":"environments.[name].cronjobs","text":"

    Cron jobs must be defined explicitly for each environment, since it is typically not desirable to run the same ones for all environments. Depending on the defined schedule, cron jobs may run as a Kubernetes native CronJob or as an in-pod cron job via the crontab of the defined service.

    "},{"location":"using-lagoon-the-basics/lagoon-yml/#cron-job-example","title":"Cron Job Example","text":".lagoon.yml
cronjobs:
      - name: Hourly Drupal Cron
        schedule: "M * * * *" # Once per hour, at a random minute.
        command: drush cron
        service: cli
      - name: Nightly Drupal Cron
        schedule: "M 0 * * *" # Once per day, at a random minute from 00:00 to 00:59.
        command: drush cron
        service: cli
    • name: Any name that will identify the purpose and distinguish it from other cron jobs.
    • schedule: The schedule for executing the cron job. Lagoon uses an extended version of the crontab format. If you're not sure about the syntax, use a crontab generator.

  • You can specify M for the minute, and your cron job will run once per hour at a random minute (the same minute each hour), or M/15 to run it every 15 minutes, but with a random offset from the hour (like 6,21,36,51). It is a good idea to spread out your cron jobs using this feature, rather than have them all fire off on minute 0.
      • You can specify H for the hour, and your cron job will run once per day at a random hour (the same hour every day), or H(2-4) to run it once per day within the hours of 2-4.

    Timezones:

    • The default timezone for cron jobs is UTC.
    • Native cron jobs use the timezone of the node, which is UTC.
    • In-pod cron jobs use the timezone of the defined service, which can be configured to something other than UTC.
    • command: The command to execute. This executes in the WORKDIR of the service. For Lagoon images, this is /app.

    Warning

Cron jobs may run in-pod, via crontab, which doesn't support multiline commands. If you need a complex or multiline cron command, you must put it in a script that can be used as the command. Consider whether a pre- or post-rollout task would work.

    • service: Which service of your project to run the command in. For most projects, this should be the cli service.
    "},{"location":"using-lagoon-the-basics/lagoon-yml/#polysite","title":"Polysite","text":"

In Lagoon, the same Git repository can be added to multiple projects, creating what is called a polysite. This allows you to run the same codebase, but with different, isolated databases and persistent files. In .lagoon.yml, we currently only support specifying custom routes for a polysite project. The key difference from a standard project is that environments becomes the second-level element, and the project name the top-level one.

    To utilize this, you will need to:

1. Create two (or more) projects in Lagoon, each configured with the same Git URL and production branch, named per your .lagoon.yml (e.g. poly-project1 and poly-project2 below).
    2. Add the deploy keys from each project to the Git repository.
    3. Configure the webhook for the repository (if required) - you can then push/deploy. Note that a push to the repository will simultaneously deploy all projects/branches for that Git URL.
    "},{"location":"using-lagoon-the-basics/lagoon-yml/#polysite-example","title":"Polysite Example","text":".lagoon.yml
    poly-project1:\nenvironments:\nmain:\nroutes:\n- nginx:\n- project1.com\npoly-project2:\nenvironments:\nmain:\nroutes:\n- nginx:\n- project2.com\n
    "},{"location":"using-lagoon-the-basics/lagoon-yml/#specials","title":"Specials","text":""},{"location":"using-lagoon-the-basics/lagoon-yml/#api","title":"api","text":"Info

    If you run directly on amazee.io hosted Lagoon you will not need this key set.

With the key api you can define another URL that should be used by the Lagoon CLI and drush to connect to the Lagoon GraphQL API. This needs to be a full URL with a scheme, like: http://localhost:3000. This usually does not need to be changed, but there might be situations where your Lagoon administrator tells you to do so.

    "},{"location":"using-lagoon-the-basics/lagoon-yml/#ssh","title":"ssh","text":"Info

    If you run directly on amazee.io hosted Lagoon you will not need this key set.

With the key ssh you can define another SSH endpoint that should be used by the Lagoon CLI and drush to connect to the Lagoon remote shell service. This needs to be a hostname and a port separated by a colon, like: localhost:2020. This usually does not need to be changed, but there might be situations where your Lagoon administrator tells you to do so.
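
Since both keys are top-level and take a single value, a combined sketch looks like this (the endpoint values are placeholders for illustration, not real Lagoon endpoints):

.lagoon.yml
api: https://lagoon-api.example.com/graphql\nssh: lagoon-ssh.example.com:2020\n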

    "},{"location":"using-lagoon-the-basics/lagoon-yml/#container-registries","title":"container-registries","text":"

The container-registries block allows you to define your own private container registries to pull custom or private images. To use a private container registry, you will need a username, a password, and optionally the URL of your registry. If you don't specify a URL in your YAML, it will default to Docker Hub.

There are two ways to define the password used for your registry user.

    Create an environment variable in the Lagoon API with the type container_registry:

    • lagoon add variable -p <project_name> -N <registry_password_variable_name> -V <password_goes_here> -S container_registry
    • (see more on Environment Variables)

    The name of the variable you create can then be set as the password:

    .lagoon.yml
    container-registries:\nmy-custom-registry:\nusername: myownregistryuser\npassword: <registry_password_variable_name>\nurl: my.own.registry.com\n

    You can also define the password directly in the .lagoon.yml file in plain text:

    .lagoon.yml
    container-registries:\ndocker-hub:\nusername: dockerhubuser\npassword: MySecretPassword\n
    "},{"location":"using-lagoon-the-basics/lagoon-yml/#consuming-a-custom-or-private-container-registry-image","title":"Consuming a custom or private container registry image","text":"

    To consume a custom or private container registry image, you need to update the service inside your docker-compose.yml file to use a build context instead of defining an image:

docker-compose.yml
    services:\nmariadb:\nbuild:\ncontext: .\ndockerfile: Dockerfile.mariadb\n

Once the docker-compose.yml file has been updated to use a build, you need to create the Dockerfile.<service> and set your private image in its FROM <repo>/<name>:<tag> instruction:

Dockerfile.mariadb
    FROM dockerhubuser/my-private-database:tag\n
    "},{"location":"using-lagoon-the-basics/lagoon-yml/#example-lagoonyml","title":"Example .lagoon.yml","text":"

    This is an example .lagoon.yml which showcases all possible settings. You will need to adapt it to your project.

    .lagoon.yml
    docker-compose-yaml: docker-compose.yml\nenvironment_variables:\ngit_sha: 'true'\ntasks:\npre-rollout:\n- run:\nname: drush sql-dump\ncommand: mkdir -p /app/web/sites/default/files/private/ && drush sql-dump --ordered-dump --gzip --result-file=/app/web/sites/default/files/private/pre-deploy-dump.sql.gz\nservice: cli\npost-rollout:\n- run:\nname: drush cim\ncommand: drush -y cim\nservice: cli\nshell: bash\n- run:\nname: drush cr\ncommand: drush -y cr\nservice: cli\nroutes:\nautogenerate:\ninsecure: Redirect\nenvironments:\nmain:\nroutes:\n- nginx:\n- example.com\n- example.net\n- \"www.example.com\":\ntls-acme: true\ninsecure: Redirect\nhstsEnabled: true\n- \"example.ch\":\nannotations:\nnginx.ingress.kubernetes.io/permanent-redirect: https://www.example.ch$request_uri\n- www.example.ch\ntypes:\nmariadb: mariadb\ntemplates:\nmariadb: mariadb.main.deployment.yml\nrollouts:\nmariadb: statefulset\ncronjobs:\n- name: drush cron\nschedule: \"M * * * *\" # This will run the cron once per hour.\ncommand: drush cron\nservice: cli\nstaging:\ncronjobs:\n- name: drush cron\nschedule: \"M * * * *\" # This will run the cron once per hour.\ncommand: drush cron\nservice: cli\nfeature/feature-branch:\ncronjobs:\n- name: drush cron\nschedule: \"H * * * *\" # This will run the cron once per hour.\ncommand: drush cron\nservice: cli\n
    "},{"location":"using-lagoon-the-basics/lagoon-yml/#deprecated","title":"Deprecated","text":"

    These settings have been deprecated and should be removed from use in your .lagoon.yml.

    • routes.autogenerate.insecure

      The None option is equivalent to Redirect.

    • environments.[name].monitoring_urls
    • environments.[name].routes.[service].[route].hsts
    • environments.[name].routes.[service].[route].insecure

      The None option is equivalent to Redirect.

    "},{"location":"using-lagoon-the-basics/local-development-environments/","title":"Local Development Environments","text":"

Even though Lagoon's only hard dependencies are Docker and Docker Compose (which usually ships with Docker), there are some things which are nice for local development that are not included in Docker:

    • An HTTP reverse proxy for nice URLs and HTTPS offloading.
    • A DNS system so we don't have to remember IP addresses.
    • SSH agents to use SSH keys within containers.
    • A system that receives and displays mail locally.
    Warning

You do not need to install Lagoon locally to use it locally! That sounds confusing, but follow the documentation: Lagoon is the system that deploys your local development environment to your production environment; it's not the environment itself.

    "},{"location":"using-lagoon-the-basics/local-development-environments/#pygmy-or-lando-the-choice-is-yours","title":"pygmy or Lando - the choice is yours","text":"

Lagoon has traditionally worked best with pygmy, which is the amazee.io-flavored system of the above tools and works out of the box with Lagoon. It lives at https://github.com/pygmystack/pygmy

pygmy is written in Go; to install it, run:

Install with Homebrew
    brew tap pygmystack/pygmy && brew install pygmy\n

    For detailed usage or installation info on pygmy, see its documentation.

    As announced in our blog post, Lagoon is now also compatible with Lando! For more information, please see the documentation at https://docs.lando.dev/config/lagoon.html to get yourself up and running.

Lando's workflow for Lagoon will be familiar to users of Lando, and will also be the easiest way for Lagoon newcomers to get up and running. pygmy offers closer integration with Docker, which lends itself better to more complex scenarios and use cases, but also requires a deeper understanding.

We have previously evaluated adding support for other systems like Docksal and Docker4Drupal, and while we may add support for these in the future, our current focus is on supporting Lando and pygmy. If you do have Lagoon running with one of these (or other) tools, we would love for you to submit a PR on GitHub!

    "},{"location":"using-lagoon-the-basics/setup-project/","title":"Set Up a New Project","text":"

    Note

We are working hard on getting our CLI and GraphQL API set up to allow everyone using Lagoon to set up and configure their projects themselves. Right now, it needs more testing before we can release those features, so hold tight!

    Until then, the setup of a new project involves talking to your Lagoon administrator, which is ok, as they are much friendlier than APIs. \ud83d\ude0a

    Please have the following information ready for your Lagoon administrator:

    • A name you would like the project to be known by
      • This name can only contain lowercase characters, numbers and dashes
      • Double dashes (--) are not allowed within a project name
    • SSH public keys, email addresses and the names of everybody that will work on this project. Here are instructions for generating and copying SSH keys for GitHub, GitLab, and Bitbucket.
    • The URL of the Git repository where your code is hosted (git@example.com:test/test.git).
    • The name of the Git branch you would like to use for your production environment (see Environment Types for details about the environments).
    • Which branches and pull requests you would like to deploy to your additional environments. With Lagoon, you can filter branches and pull requests by name with regular expressions, and your Lagoon administrator can get this set up for you.

    We suggest deploying specific important branches (like develop and main) and pull requests. But that's all up to you! (see Workflows for some more information)
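
For illustration only (this pattern is a made-up example; the actual value is set on the project by your Lagoon administrator, not in .lagoon.yml):

branches: '^(main|develop|feature\/.*)$' # would deploy main, develop, and every feature/* branch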

    "},{"location":"using-lagoon-the-basics/setup-project/#1-make-sure-your-project-is-lagoonized","title":"1. Make sure your project is Lagoonized","text":"

    This means that the .lagoon.yml and docker-compose.yml files are available in your Git repository and configured accordingly.

    If this is not the case, check out the list of Step-by-Step Guides on how to do so before proceeding.

    "},{"location":"using-lagoon-the-basics/setup-project/#2-provide-access-to-your-code","title":"2. Provide access to your code","text":"

    In order to deploy your code, Lagoon needs access to it. By design and for security, Lagoon only needs read access to your Git repository.

    Your Lagoon administrator will tell you the SSH public key or the Git account to give read access to.

    "},{"location":"using-lagoon-the-basics/setup-project/#3-configure-webhooks","title":"3. Configure Webhooks","text":"

    Lagoon needs to be informed about a couple of events that are happening to your Git repository. Currently these are pushes and pull requests, but we may add more in the future.

    As Lagoon supports many different Git hosts, we have split off those instructions into this documentation: Configure Webhooks.

    "},{"location":"using-lagoon-the-basics/setup-project/#4-next-first-deployment","title":"4. Next: First deployment","text":"

    Congratulations, you are now ready to run your first deployment.

    "}]} \ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml new file mode 100644 index 0000000000..7c0eb9ed29 --- /dev/null +++ b/sitemap.xml @@ -0,0 +1,558 @@ + + + + https://docs.lagoon.sh/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/code-of-conduct/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/contributing/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/administering-lagoon/feature-flags/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/administering-lagoon/graphql-queries/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/administering-lagoon/rbac/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/administering-lagoon/using-harbor/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/administering-lagoon/using-harbor/security-scanning/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/administering-lagoon/using-harbor/harbor-settings/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/administering-lagoon/using-harbor/harbor-settings/harbor-core/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/administering-lagoon/using-harbor/harbor-settings/harbor-database/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/administering-lagoon/using-harbor/harbor-settings/harbor-jobservice/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/administering-lagoon/using-harbor/harbor-settings/harbor-trivy/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/administering-lagoon/using-harbor/harbor-settings/harborregistry/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/administering-lagoon/using-harbor/harbor-settings/harborregistryctl/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/applications/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/applications/node/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/applications/options/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/applications/other/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/applications/php/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/applications/python/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/applications/ruby/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/applications/wordpress/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/community/discord/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/community/moderation/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/community/participation/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/contributing-to-lagoon/api-debugging/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/contributing-to-lagoon/developing-lagoon/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/contributing-to-lagoon/documentation/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/contributing-to-lagoon/releasing/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/contributing-to-lagoon/tests/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/docker-images/commons/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/docker-images/mariadb/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/docker-images/mongodb/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/docker-images/nginx/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/docker-images/nodejs/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/docker-images/opensearch/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/docker-images/php-cli/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/docker-images/php-fpm/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/docker-images/postgres/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/docker-images/python/ + 2023-10-10 + daily + + + 
https://docs.lagoon.sh/docker-images/rabbitmq/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/docker-images/redis/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/docker-images/ruby/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/docker-images/solr/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/docker-images/varnish/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/drupal/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/drupal/drush-9/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/drupal/first-deployment-of-drupal/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/drupal/integrate-drupal-and-fastly/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/drupal/phpunit-and-phpstorm/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/drupal/step-by-step-getting-drupal-ready-to-run-on-lagoon/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/drupal/subfolders/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/drupal/services/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/drupal/services/mariadb/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/drupal/services/nginx/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/drupal/services/php-cli/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/drupal/services/redis/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/drupal/services/solr/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/drupal/services/varnish/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/installing-lagoon/add-group/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/installing-lagoon/add-project/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/installing-lagoon/create-user/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/installing-lagoon/deploy-project/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/installing-lagoon/efs-provisioner/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/installing-lagoon/gitlab/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/installing-lagoon/install-harbor/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/installing-lagoon/install-lagoon-remote/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/installing-lagoon/lagoon-backups/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/installing-lagoon/lagoon-cli/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/installing-lagoon/lagoon-core/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/installing-lagoon/lagoon-files/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/installing-lagoon/lagoon-logging/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/installing-lagoon/logs-concentrator/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/installing-lagoon/opendistro/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/installing-lagoon/querying-graphql/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/installing-lagoon/requirements/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/installing-lagoon/update-lagoon/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/logging/kibana-examples/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/logging/logging/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/resources/faq/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/resources/glossary/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/resources/tutorials-and-webinars/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/using-lagoon-advanced/active-standby/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/using-lagoon-advanced/backups/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/using-lagoon-advanced/base-images/ + 2023-10-10 + daily + + + 
https://docs.lagoon.sh/using-lagoon-advanced/blackfire/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/using-lagoon-advanced/custom-tasks/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/using-lagoon-advanced/deploytarget-configs/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/using-lagoon-advanced/environment-idling/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/using-lagoon-advanced/environment-types/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/using-lagoon-advanced/environment-variables/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/using-lagoon-advanced/graphql/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/using-lagoon-advanced/nodejs/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/using-lagoon-advanced/private-repositories/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/using-lagoon-advanced/project-default-users-keys/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/using-lagoon-advanced/service-types/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/using-lagoon-advanced/setting-up-xdebug-with-lagoon/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/using-lagoon-advanced/simplesaml/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/using-lagoon-advanced/ssh/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/using-lagoon-advanced/triggering-deployments/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/using-lagoon-advanced/workflows/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/using-lagoon-the-basics/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/using-lagoon-the-basics/build-and-deploy-process/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/using-lagoon-the-basics/configure-webhooks/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/using-lagoon-the-basics/docker-compose-yml/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/using-lagoon-the-basics/first-deployment/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/using-lagoon-the-basics/going-live/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/using-lagoon-the-basics/lagoon-yml/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/using-lagoon-the-basics/local-development-environments/ + 2023-10-10 + daily + + + https://docs.lagoon.sh/using-lagoon-the-basics/setup-project/ + 2023-10-10 + daily + + \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz new file mode 100644 index 0000000000..b05757b245 Binary files /dev/null and b/sitemap.xml.gz differ diff --git a/step2_require.gif b/step2_require.gif new file mode 100644 index 0000000000..285ab2a145 Binary files /dev/null and b/step2_require.gif differ diff --git a/using-lagoon-advanced/active-standby/index.html b/using-lagoon-advanced/active-standby/index.html new file mode 100644 index 0000000000..dcdae15262 --- /dev/null +++ b/using-lagoon-advanced/active-standby/index.html @@ -0,0 +1,3004 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Active/Standby - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Active/Standby#


    Configuration#


    To change an existing project to support active/standby you'll need to configure some project settings with the Lagoon API.

• productionEnvironment should be set to the branch name of the current active environment.
• standbyProductionEnvironment should be set to the branch name of the current environment that is in standby.
    Update project settings
mutation updateProject {
  updateProject(input:{
    id:1234
    patch:{
      productionEnvironment:"production-brancha"
      standbyProductionEnvironment:"production-branchb"
    }
  }){
    standbyProductionEnvironment
    name
    productionEnvironment
  }
}

    .lagoon.yml - production_routes#


    To configure a project for active/standby in the .lagoon.yml file, you'll need to configure the production_routes section with any routes you want to attach to the active environment, and any routes to the standby environment. During an active/standby switch, these routes will migrate between the two environments.

If you have two production environments, production-brancha and production-branchb, with the current active production environment being production-brancha, then:

• Routes under production_routes.active will direct you to production-brancha.
• Routes under production_routes.standby will direct you to production-branchb.

    During an active/standby switch, the routes will swap:

• Routes under production_routes.active will direct you to production-branchb.
• Routes under production_routes.standby will direct you to production-brancha.
    .lagoon.yml
production_routes:
  active:
    routes:
      - nginx:
        - example.com:
            tls-acme: 'false'
        - active.example.com:
            tls-acme: 'false'
  standby:
    routes:
      - nginx:
        - standby.example.com:
            tls-acme: 'false'

    Info


Any routes that are under the section environments.[name].routes will not be moved as part of active/standby. These routes will always be attached to the environment as defined. If you do need a specific route to be migrated during an active/standby switch, remove it from the environments section and place it under the production_routes section, depending on whether it should be an active or standby route. See more about routes in .lagoon.yml.


    Triggering a switch event#


    via the UI#


    To trigger the switching of environment routes, you can visit the standby environment in the Lagoon UI and click on the button labeled Switch Active/Standby environments. You will be prompted to confirm your action.


    Once confirmed, it will take you to the tasks page where you can view the progress of the switch.


    via the API#


    To trigger an event to switch the environments, run the following GraphQL mutation. This will tell Lagoon to begin the process.

    Active Standby Switch
mutation ActiveStandby {
  switchActiveStandby(
    input:{
      project:{
        name:"drupal-example"
      }
    }
  ){
    id
    remoteId
  }
}

A task is created in the current active environment's Tasks tab when a switch event is triggered, and you can check the status of the switch there.

    Using the remoteId from the switchActiveStandby mutation, we can also check the status of the task.

    Check task status
query getTask {
  taskByRemoteId(id: "<remoteId>") {
    id
    name
    created
    started
    completed
    status
    logs
  }
}

    drush aliases#


    By default, projects will be created with the following aliases that will be available when active/standby is enabled on a project.

• lagoon-production
• lagoon-standby

    The lagoon-production alias will point to whichever site is defined as productionEnvironment, and lagoon-standby will always point to the site that is defined as standbyProductionEnvironment.
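
For example, with active/standby enabled you can target the active production site directly (standard Drush alias usage; the status command is only an illustration):

Using the aliases with Drush
drush sa # list the site aliases Drush can see
drush @lagoon-production status # run a command against whichever environment is currently active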


    These aliases are configurable by updating the project. Be aware that changing them may require you to update any scripts that rely on them.

    Update Drush Aliases
mutation updateProject {
  updateProject(input:{
    id:1234
    patch:{
      productionAlias:"custom-lagoon-production-alias"
      standbyAlias:"custom-lagoon-standby-alias"
    }
  }){
    productionAlias
    name
    standbyAlias
  }
}

    Disabling Active/Standby#

You need to decide which of these two branches you want to go forward with as the main environment, and then ensure it is set as the active branch (e.g. production-branchb).

1. In your .lagoon.yml file in this (now active) branch, move the routes from the production_routes.active.routes section into the environments.production-branchb section. This will mean that they are then attached to the production-branchb environment only.
2. Once you've done this, you can delete the entire production_routes section from the .lagoon.yml file and re-deploy the production-branchb environment.
3. If you no longer need the other branch production-brancha, you can delete it.
4. If you keep the branch in Git, you should also remove the production_routes from that branch's .lagoon.yml too, just to prevent any confusion. The branch will remain as the production type unless you delete and redeploy it (wiping all storage and databases, etc.).
5. Once you've got the project in a state where there is only the production-branchb production environment, and all the other environments are development, update the project to remove the standbyProductionEnvironment from the project so that the active/standby labels on the environments go away.
    Turn off Active/Standby
mutation updateProject {
  updateProject(input:{
    id:1234
    patch:{
      productionEnvironment:"production-branchb"
      standbyProductionEnvironment:""
    }
  }){
    standbyProductionEnvironment
    name
    productionEnvironment
  }
}

    Notes#


When the active/standby trigger has been executed, the productionEnvironment and standbyProductionEnvironment values will switch within the Lagoon API. Both environments are still classed as production environment types. We use productionEnvironment to determine which one is labelled as active. For more information on the differences between environment types, read the documentation for environment types.

    Get environments via GraphQL
query projectByName {
  projectByName(name:"drupal-example"){
    productionEnvironment
    standbyProductionEnvironment
  }
}

    Before switching environments:

    Results of environment query
{
  "data": {
    "projectByName": {
      "productionEnvironment": "production-brancha",
      "standbyProductionEnvironment": "production-branchb"
    }
  }
}

    After switching environments:

    Results of environment query
{
  "data": {
    "projectByName": {
      "productionEnvironment": "production-branchb",
      "standbyProductionEnvironment": "production-brancha"
    }
  }
}
    + + + + + + + + + + + + + \ No newline at end of file diff --git a/using-lagoon-advanced/backups/index.html b/using-lagoon-advanced/backups/index.html new file mode 100644 index 0000000000..58545c722f --- /dev/null +++ b/using-lagoon-advanced/backups/index.html @@ -0,0 +1,2829 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Backups - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Backups#


Lagoon makes use of the k8up operator to provide backup functionality for both database data and containers' persistent storage volumes. This operator utilizes Restic to catalog these backups, and is typically connected to an AWS S3 bucket to provide secure, off-site storage for the generated backups.


    Production Environments#


    Backup Schedules#


Backups of databases and containers' persistent storage volumes happen nightly within production environments by default.


If a different backup schedule for production backups is required, this can be specified at a project level by setting the "Backup Schedule" variables in the project's .lagoon.yml file.
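
A sketch of such an override, assuming the backup-schedule key from the .lagoon.yml reference (the schedule value is illustrative and uses the same extended crontab format as cron jobs - verify the key name for your Lagoon version):

.lagoon.yml
backup-schedule:
  production:
    schedule: "M H(22-2) * * *" # nightly, at a random minute and hour within the 22:00-02:59 window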


    Backup Retention#


    Production environment backups will be held according to the following schedule by default:

• Daily: 7
• Weekly: 6
• Monthly: 1
• Hourly: 0

If a different retention period for production backups is required, this can be specified at a project level by setting the "Backup Retention" variables in the project's .lagoon.yml file.
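
Likewise, a sketch assuming the backup-retention key from the .lagoon.yml reference, simply mirroring the default retention listed above (verify the key name for your Lagoon version):

.lagoon.yml
backup-retention:
  production:
    hourly: 0
    daily: 7
    weekly: 6
    monthly: 1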


    Development Environments#


Backups of development environments are attempted nightly and are strictly a best-effort service.


    Retrieving Backups#


    Backups stored in Restic will be tracked within Lagoon, and can be recovered via the "Backup" tab for each environment in the Lagoon UI.


    Custom Backup and/or Restore Locations#


    Lagoon supports custom backup and restore locations via the use of the "Custom Backup Settings" and/or "Custom Restore Settings" variables stored in the Lagoon API for each project.


    Danger


    Proceed with caution: Setting these variables will override backup/restore storage locations that may be configured at a cluster level. Any misconfiguration will cause backup/restore failures.
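
As a sketch of how such variables are added with the Lagoon CLI (the LAGOON_BAAS_CUSTOM_BACKUP_* names and the build scope are assumptions drawn from Lagoon's backup documentation - confirm both against your Lagoon version before relying on them):

Add a custom backup location
lagoon add variable -p <project_name> -N LAGOON_BAAS_CUSTOM_BACKUP_ENDPOINT -V <s3_endpoint> -S build
lagoon add variable -p <project_name> -N LAGOON_BAAS_CUSTOM_BACKUP_BUCKET -V <s3_bucket> -S build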

    + + + + + + + + + + + + + \ No newline at end of file diff --git a/using-lagoon-advanced/base-images/index.html b/using-lagoon-advanced/base-images/index.html new file mode 100644 index 0000000000..364473088c --- /dev/null +++ b/using-lagoon-advanced/base-images/index.html @@ -0,0 +1,3282 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Base Images - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Base Images#


    What is a base image?#


A base image is a Docker image that is used as the foundation for projects deployed on Lagoon. A base image provides a way to ensure that nothing is brought into the codebase/project from upstream that has not been audited. It also allows us to ensure that anything we might need on the deployed environment is available - from lower-level libraries to application-level themes and modules.


    Base images save time and resources when you know what system is being deployed to - if shared packages are included in the base image, they don’t have to be deployed to hundreds of sites individually.


    Derived images#


    A derived image is one that extends a base image. For example, you might need to make several blog sites. You take our Drupal image, customize it to include all of the modules and themes you need for your blog sites, and deploy them all with that blog image. Templates are derived from base images.


    All derived images should pull in the composer.json file (via repositories like Packagist, Satis, or GitHub) so that they are using the most recent versions of the base packages.


Further, the derived image includes a call to the script /build/pre_composer, which can be used by the base image to run scripts, updates, etc., downstream in the derived images. For instance, it runs by default when any package is updated or installed in the derived image, and the pre_composer script will then update the base image packages.


    Anatomy of a base image#


    Info


    This document will talk about Drupal and Laravel base images as examples, as it was originally written for a client who uses those technologies in their Lagoon projects. It will be expanded to cover the contents of other base images, but none of the processes differ, no matter what the content of your base image.


    Base images are managed with Composer and hosted in BitBucket, GitHub, or GitLab (whatever your team is using). Each base image has its own repository.


    Metapackages#


    The metapackage is a Composer package that wraps several other components. These include, for example, the core files for Laravel or Drupal, along with any needed modules or themes. This way, you do not need to include Laravel or Drupal, etc., as a dependency in your project.


    Here’s an example from the composer.json in a Laravel base image:

    composer.json
    "require": {
    "amazeelabs/algm_laravel_baseimage": "*"
},

    We only require this metapackage, which points to a GitHub repository.


    docker-compose.yml#


    Other pieces of your project are defined in docker-compose.yml. For example, if you have a Drupal project, you need the Drupal image, but you also need MariaDB, Solr, Redis, and Varnish. We have versions of these services optimized for Drupal, all of which are included in docker-compose.yml.


    Drupal#


    The Drupal base image contains the following contributed tools and modules, in addition to Drupal core:


    Laravel#


    Configuration#


The base images provide default values for the environment variables used by Laravel.


    These are values for:

• DB_CONNECTION
• DB_HOST
• DB_PORT
• DB_DATABASE
• DB_USERNAME
• DB_PASSWORD
• REDIS_HOST
• REDIS_PASSWORD
• REDIS_PORT

    Ensure that your config files (typically located in /config) make use of these by default.


    Queues#


    If your project makes use of queues, you can make use of the artisan-worker service. It is a worker container, used for executing artisan queue:work. This is disabled by default - look at the comments in docker-compose.yml.
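
The actual definition ships, commented out, in the base image's docker-compose.yml; as a rough sketch of the idea only (the build context, label, and service wiring here are assumptions, not the base image's real configuration):

docker-compose.yml
services:
  artisan-worker:
    build:
      context: .
      dockerfile: Dockerfile # assumption: built from the same Dockerfile as the php service
    labels:
      lagoon.type: worker # assumption: deployed as a Lagoon worker service type
    command: php artisan queue:work # runs the Laravel queue worker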


    Understanding the process of building a base image#


    There are several parts to the process of building a base image. All of the major steps are represented in the Makefile. The Jenkinsfile contains a more stripped-down view. Taking a look at both files will give you a good understanding of what happens during this process. Most steps can be tested locally (this is important when building new versions of the base image). After you’ve created and tested everything locally and pushed it up, the actual base image is built by Jenkins and pushed to Harbor.


    Makefile and build assumptions#


    If you're planning on running locally, there are some minimum environment variables that need to be present to build at all.


    Base image build variables#


    Variables injected into the base image build process and where to find them.

• BUILD_NUMBER - This is injected by Jenkins automatically.
• GIT_BRANCH - This is provided by the Jenkins build process itself. Depends on the branch being built at the time (develop, main, etc.).
• DOCKER_REPO/DOCKER_HUB - This is defined inside the Jenkinsfile itself. It points to the Docker project and hub into which the resulting images will be pushed.
• DOCKER_USERNAME/DOCKER_PASSWORD - These are used to actually log into the Docker repository early in the build. These variables are stored inside of the Jenkins credentials. These are used in the Jenkinsfile itself and are not part of the Makefile. This means that if you’re building base images outside of Jenkins (i.e. locally, to test, etc.) you have to run a docker login manually before running any of the make steps.

    In practice, this means that if you're running any of the make targets on your local machine, you'll want to ensure that these are available in the environment - even if this is just setting them when running make from the command line, as an example:

    Setting make targets locally
    GIT_BRANCH=example_branch_name DOCKER_HUB=the_docker_hub_the_images_are_pushed_to DOCKER_REPO=your_docker_repo_here BUILD_NUMBER=<some_integer> make images_remove

    Makefile targets#


    The most important targets are the following:

• images_build: Given the environment variables, this will build and tag the images for publication.
• images_publish: Pushes built images to a Docker repository.
• images_start: Will start the images for testing, etc.
• images_test: Runs basic tests against images.
• images_remove: Removes previously built images, given the build environment variables.

    Example workflow for building a new release of a base image#


There are several steps to the build process. Most of these are shared among the various base images. These mostly correspond to the Makefile targets described above.

1. Docker Login - The Docker username, password, and URL for Harbor are passed to the Docker client.
2. Docker Build - The make images_build step is run now, which will:
  1. Ensure that all environment variables are prepared for the build.
  2. Run a docker-compose build. This will produce several new Docker images from the current Git branch.
3. Images Test - This will run the make images_test target, which will differ depending on the images being tested. In most cases this is a very straightforward test to ensure that the images can be started and interacted with in some way (installing Drupal, listing files, etc.).
4. Docker Push - This step runs the logic (contained in the make target images_publish) that will tag the images resulting from the Docker Build in Step 2 and push them to Harbor. This is described in more detail elsewhere in this guide.
5. Docker Clean Images - Runs the make target images_remove, which simply deletes the newly built images from the Docker host now that they are in Harbor.

    Releasing a new version of a base image#


    There are many reasons to release a new version of a base image. On Drupal or Laravel, Node.js, etc. images, it may be in order to upgrade or install a module/package for features or security. It may be about the underlying software that comes bundled in the container, such as updating the version of PHP or Node.js. It may be about updating the actual underlying images on which the base images are built.


The images that your project's base images are built on are the managed images maintained by the Lagoon team. We release updates to these underlying images on a monthly (or more frequent) basis. When these are updated, you need to build new versions of your own base images in order to incorporate the changes and upgrades bundled in the upstream images.


    In this section we will demonstrate the process of updating and tagging a new release of the Drupal 8 base image. We will add a new module (ClamAV) to the base. We’re demonstrating on Drupal because it has the most complex setup of the base images. The steps that are common to every base image are noted below.


    Step 1 - Pull down the base image locally#


This is just pulling down the Git repository locally - in this case, the Drupal 8 base image. In this example, we're using Bitbucket, so we will run:

    Clone Git repo.
    git clone ssh://git@bitbucket.biscrum.com:7999/webpro/drupal8_base_image.git

    Running `git clone` on the base image repository.


    Step 2 - Make the changes to the repository#


    Info


    What is demonstrated here is specific to the Drupal 8 base image. However, any changes (adding files, changing base Docker images, etc.) will be done in this step for all of the base images.


    In our example, we are adding the ClamAV module to the Drupal 8 base image. This involves a few steps. The first is requiring the package so that it gets added to our composer.json file. This is done by running a composer require.


    Here we run:

    Install package with Composer require.
    composer require drupal/clamav

    Running `composer require drupal/clamav`


    When the Composer require process completes, the package should then appear in the composer.json file.


Here we open the composer.json file, take a look at the list of required packages, and check that the ClamAV package is now listed:


    Opening composer.json to check that ClamAV is now required.


    Step 2.2 - Ensure that the required Drupal module is enabled in template-based derived images#


    For any modules now added to the base image, we need to ensure that they’re enabled on the template-based derived images. This is done by adding the module to the Lagoon Bundle module located at ./web/modules/lagoon/lagoon_bundle. Specifically, it requires you to add it as a dependency to the dependencies section of the lagoon_bundle.info.yml file. The Lagoon Bundle module is a utility module that exists only to help enforce dependencies across derived images.


    Here we open web/modules/contrib/lagoon/lagoon_bundle/lagoon_bundle.info.yml and add clamav:clamav as a dependency:


    Adding ClamAV as a dependency of Lagoon Bundle.
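
A sketch of the resulting file (the surrounding keys are illustrative; the dependencies entry is the actual change being made):

lagoon_bundle.info.yml
name: 'Lagoon Bundle'
type: module
core: 8.x
dependencies:
  - clamav:clamav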


    Adding a dependency to this will ensure that whenever the Lagoon Bundle module is enabled on the derived image, its dependencies (in this case, the just-added ClamAV module) will also be enabled. This is enforced by a post-rollout script which enables lagoon_bundle on the derived images when they are rolled out.


    Step 2.3 - Test#


    This will depend on what you’re testing. In the case of adding the ClamAV module, we want to ensure that in the base image, the module is downloaded, and that the Lagoon Bundle module enables ClamAV when it is enabled.


    Here we check that the module is downloaded to /app/web/modules/contrib:


    Checking /app/web/modules/contrib to make sure ClamAV is downloaded.


    And then we check that when we enable the lagoon_bundle module, it enables clamav by running:

    Enable module with Drush.
    drush pm-enable lagoon_bundle -y

    Running `drush pm-enable lagoon_bundle -y` and seeing that it also enables ClamAV


    Warning


    You’ll see that there is a JWT error in the container above. You can safely ignore this in the demonstration above - but, for background, you will see this error when there is no Lagoon environment for the site you’re working on.


    With our testing done, we can now tag and build the images.


    Step 3 - Tagging images#


    Images are versioned based on their Git tags - these should follow standard semantic versioning (semver) practices. All tags should have the structure vX.Y.Z where X, Y, and Z are integers (to be precise the X.Y.Z are themselves the semantic version - the vX.Y.Z is a tag). This is an assumption that is used to determine the image tags, so it must be adhered to.


    In this example we will be tagging a new version of the Drupal 8 base image indicating that we have added ClamAV.


    Here we demonstrate how to tag an image#


    We check that we have committed (but not pushed) our changes, just as you would do for any regular commit and push, using git log.

1. Commit your changes if you haven’t yet.
2. We then check to see what tag we are on using git tag.
3. Then, tag them using git tag -a v0.0.9 -m "Adds ClamAV to base." (the commands are collected in the sketch below).
  1. git -a, --annotate: Make an unsigned, annotated tag object.
4. Next, we push our tags with git push --tags.
5. And finally, push all of our changes with git push.

    Danger


    The tags must be pushed explicitly in their own step!
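
Collected as a shell sketch, using the tag and message from this example:

Tag and push
git log # confirm the changes are committed
git tag # see which tags already exist
git tag -a v0.0.9 -m "Adds ClamAV to base."
git push --tags # push the tags explicitly, in their own step
git push # then push the changes themselves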


    Demonstrating how to tag and push a base image.


    How Git tags map to image tags#


    Danger


    Depending on the build workflow, you will almost certainly push the changes via the develop branch before merging it into the main branch.


    An important point to remember here is that the Jenkins base image build process will tag images based on the most recent commit’s tag.


    Images are tagged using the following rules, and images will be built for each of these that apply:

1. When the main branch is built, it is tagged as latest.
2. When the develop branch is built, it is tagged as development.
3. If the commit being built is tagged, then that branch will be built with that commit’s tag.
  1. This is how we release a new version, as we demonstrated above. It can also be used to make ad hoc builds with fairly arbitrary tags - be reasonable with the tag names; it has only been tested with semver tags.

    Step 4 - Building the new base images#


    Info


    Generally you will have a trigger strategy set up here for automatic builds, but as that will differ based on your needs and setup, this explains how to build manually.

1. Visit your Lagoon Jenkins instance.
2. Select the project you are working on (in this case, AIOBI Drupal 8 Base).
3. Click the branch you would like to build.
4. Click “Build Now.”

    Showing how to build a base image in the Jenkins UI.


    This will kick off the build process which, if successful, will push up the new images to Harbor.


    If the build is not successful, you can click into the build itself and read the logs to understand where it failed.


    As shown in the screenshot below from Harbor, the image we’ve just built in Jenkins has been uploaded and tagged in Harbor, where it will now be scanned for any vulnerabilities. Since it was tagged as v0.0.9, an image with that tag is present, and because we built the main branch, the “latest” image has also been built. At this stage, the v0.0.9 and “latest” images are identical.


    Screenshot from Harbor showing uploaded and tagged images.


    Acknowledgement#


The base image structure draws heavily on (and, in fact, is a fork of) Denpal. It is based on the original Drupal Composer Template, but includes everything necessary to run on Lagoon (either the local development environment or on hosted Lagoon).

    + + + + + + + + + + + + + \ No newline at end of file diff --git a/using-lagoon-advanced/blackfire/index.html b/using-lagoon-advanced/blackfire/index.html new file mode 100644 index 0000000000..607e8fa8f6 --- /dev/null +++ b/using-lagoon-advanced/blackfire/index.html @@ -0,0 +1,2815 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Blackfire - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Blackfire#


    Blackfire variables#


    The Lagoon Base Images have support for Blackfire included in the PHP Images (see the PHP images).


    In order to use Blackfire in Lagoon, these three environment variables need to be defined:

Environment Variable   | Default   | Description
BLACKFIRE_ENABLED      | (not set) | Used to enable the Blackfire extension by setting the variable to TRUE or true.
BLACKFIRE_SERVER_ID    | (not set) | Set to the Blackfire Server ID provided by Blackfire.io. Needs BLACKFIRE_ENABLED set to true.
BLACKFIRE_SERVER_TOKEN | (not set) | Set to the Blackfire Server Token provided by Blackfire.io. Needs BLACKFIRE_ENABLED set to true.

    Local Usage of Blackfire#


    For local usage of Blackfire with Lagoon Images, set the above environment variables for the PHP container. Here is an example for a Drupal application:

    docker-compose.yml
services:

[[snip]]

  php:
    [[snip]]

    environment:
      << : *default-environment # loads the defined environment variables from the top
      BLACKFIRE_ENABLED: TRUE
      BLACKFIRE_SERVER_ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
      BLACKFIRE_SERVER_TOKEN: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

    After restarting the containers, you should be able to profile via the Blackfire Browser Plugin or the Blackfire CLI.


    Remote Usage of Blackfire#


In order to use Blackfire in deployed Lagoon environments, the same environment variables need to be set, this time via one of the possibilities of adding environment variables to Lagoon. Important: Environment variables set in the docker-compose.yml for local development are not used by Lagoon in remote environments!
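
For example, using the Lagoon CLI pattern shown elsewhere in these docs (the runtime scope is an assumption - use whichever variable scope your setup requires):

Add Blackfire variables with the Lagoon CLI
lagoon add variable -p <project_name> -e <environment_name> -N BLACKFIRE_ENABLED -V true -S runtime
lagoon add variable -p <project_name> -e <environment_name> -N BLACKFIRE_SERVER_ID -V <server_id> -S runtime
lagoon add variable -p <project_name> -e <environment_name> -N BLACKFIRE_SERVER_TOKEN -V <server_token> -S runtime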


    Debugging#


    The Blackfire Agent running in the PHP containers outputs logs as normal container logs, which can be seen via docker-compose logs or via the Lagoon Logging Infrastructure for remote environments.


By default, the logs are set to level 3 (info). Via the environment variable BLACKFIRE_LOG_LEVEL, the level can be increased to 4 (debug) to generate more debugging output.

    + + + + + + + + + + + + + \ No newline at end of file diff --git a/using-lagoon-advanced/custom-task-arguments.png b/using-lagoon-advanced/custom-task-arguments.png new file mode 100644 index 0000000000..9b4ef4422e Binary files /dev/null and b/using-lagoon-advanced/custom-task-arguments.png differ diff --git a/using-lagoon-advanced/custom-task-confirm.png b/using-lagoon-advanced/custom-task-confirm.png new file mode 100644 index 0000000000..2055667a8a Binary files /dev/null and b/using-lagoon-advanced/custom-task-confirm.png differ diff --git a/using-lagoon-advanced/custom-tasks/index.html b/using-lagoon-advanced/custom-tasks/index.html new file mode 100644 index 0000000000..2b30a46395 --- /dev/null +++ b/using-lagoon-advanced/custom-tasks/index.html @@ -0,0 +1,2982 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + Custom Tasks - Lagoon Documentation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Custom Tasks#


    Lagoon allows for the definition of custom tasks at environment, project, and group levels. This is presently accomplished through the GraphQL API and exposed in the UI.


    Defining a custom task#


    When defining a task you need to determine a number of things.


    Which task do you want to run?#


    In most cases, the custom task you will be running will be something that will be run in a shell on one of the containers in your application.


    For instance, in a Node.js application, you may be interested in running a yarn audit in your node container. The command, in this case, would simply be yarn audit.


    Where will this task be run?#


We have to define where this task will be run - this means two things: first, which project or environment we'll be running the task in, and second, which service.


    Let's say that we'd like for our yarn audit task to be available to run in any environment in a specific project (let's say the project's ID is 42 for this example). We will therefore specify the project's ID when we create our task definition, as we will describe below.

    +

The second question regards which service we want to target with our task. When you set up your project, you specify several services in your docker-compose.yml. We use this service name to determine where the command is actually executed.

    +

    Who can run this task?#

    +

There are three levels of permissions to the task system, corresponding to the project roles Guest, Developer, and Maintainer -- from most restrictive to least restrictive, with each role being able to invoke the tasks defined for the lower roles (Developers can invoke Guest tasks, Maintainers can invoke all tasks).

    +

    Defining a task#

    +

Tasks are defined by calling the addAdvancedTaskDefinition mutation. Importantly, this only defines the task; it does not invoke it. It simply makes it available to be run in an environment.

    +

Schematically, the call looks like this:

    +
    Define a new task
    mutation addAdvancedTask {
    +    addAdvancedTaskDefinition(input:{
    +    name: string,
    +    confirmationText: string,
    +    type: [COMMAND|IMAGE],
    +    [project|environment]: int,
    +    description: string,
    +    service: string,
    +    command: string,
    +    advancedTaskDefinitionArguments: [
    +      {
+        name: "ENVIRONMENT_VARIABLE_NAME",
    +        displayName: "Friendly Name For Variable",
    +        type: [STRING | ENVIRONMENT_SOURCE_NAME | ENVIRONMENT_SOURCE_NAME_EXCLUDE_SELF]
    +      }
    +    ]
    +  }) {
    +    ... on AdvancedTaskDefinitionImage {
    +      id
    +      name
    +      description
    +      service
    +      image
    +      confirmationText
    +      advancedTaskDefinitionArguments {
    +        type
    +        range
    +        name
    +        displayName
    +      }
    +      ...
    +    }
    +    ... on AdvancedTaskDefinitionCommand {
    +      id
    +      name
    +      description
    +      service
    +      command
    +      advancedTaskDefinitionArguments {
    +        type
    +        range
    +        name
    +        displayName
    +      }
    +      ...
    +    }
    +  }
    +}
    +
    +

    Fields name and description are straightforward. They're simply the name and description of the task - these are used primarily in the UI.

    +

    The type field needs some explanation - for now, only platform admins are able to define IMAGE type commands - these allow for the running of specifically created task images as tasks, rather than targeting existing services. Most tasks, though, will be COMMAND types.

    +

The [project|environment] set of fields will attach the task to either the project or the environment (depending on the key you use), with the value being the ID. In the case we're considering for our yarn audit, we will specify that we're targeting a project with an ID of 42.

    +

    We put the service we'd like to target with our task in the service field, and command is the actual command that we'd like to run.

    +

    Arguments passed to tasks#

    +

In order to give more flexibility to the users invoking the tasks via the Lagoon UI, we support defining task arguments. These arguments are displayed as text boxes or dropdowns and are required for the task to be invoked.

    +

    Here is an example of how we might set up two arguments.

    +
    Define task arguments
    advancedTaskDefinitionArguments: [
    +      {
    +        name: "ENV_VAR_NAME_SOURCE",
    +        displayName: "Environment source",
    +        type: ENVIRONMENT_SOURCE_NAME
    +
    +      },
    +      {
    +        name: "ENV_VAR_NAME_STRING",
    +        displayName: "Echo value",
    +        type: STRING
    +        }
    +    ]
    +  })
    +
    +

This fragment shows both types of arguments the system currently supports. The first, ENV_VAR_NAME_SOURCE, is an example of type ENVIRONMENT_SOURCE_NAME, which will present the user of the UI with a dropdown of the different environments inside a project. If we don't want to allow the task to be run on the invoking environment (say, if we want to import a database from another environment), we can restrict the environment list by using ENVIRONMENT_SOURCE_NAME_EXCLUDE_SELF. The second, ENV_VAR_NAME_STRING, is of type STRING and will present the user with a text box to fill in.

    +

    The values that the user selects will be available as environment variables in the COMMAND type tasks when the task is run.
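For example (a sketch reusing the argument names defined above), a COMMAND type task could reference the selected values like any other environment variables:

Use task arguments in a command
echo "Echoing '${ENV_VAR_NAME_STRING}' against source environment '${ENV_VAR_NAME_SOURCE}'"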

    +

    Task Arguments

    +

    Confirmation#

    +

When the confirmationText field has text, it will be displayed in a confirmation modal in the UI before the user is able to run the task.

    +

    Task Confirmation

    +

    Invoking the task#

    +

With the task defined, it should now show up in the tasks dropdown in the Lagoon UI.

    +

We are also able to invoke it via the GraphQL API by using the invokeRegisteredTask mutation.

    +
    Invoke task
    mutation invokeTask {
    +  invokeRegisteredTask(advancedTaskDefinition: int, environment: int) {
    +    status
    +  }
    +}
    +
    +

Note that invokeRegisteredTask will always invoke a task on a specific environment.

    +

    Example#

    +

Let's now set up our yarn audit example.

    +
    Define task mutation
    mutation runYarnAudit {
    + addAdvancedTaskDefinition(input:{
    +    name:"Run yarn audit",
    +    project: 42,
    +    type:COMMAND,
    +    permission:DEVELOPER,
    +    description: "Runs a 'yarn audit'",
    +    service:"node",
    +    command: "yarn audit"})
    +    {
    +        id
    +    }
    +}
    +
    +

This, then, will define our task for our project (42). When we run this, we will get the ID of the task definition back (for argument's sake, let's say it's 9).

    +

    This task will now be available to run from the UI for anyone with the DEVELOPER or MAINTAINER role.
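To run it via the API instead of the UI, we pass the definition ID we got back (9) to invokeRegisteredTask. The environment ID 3 below is purely illustrative:

Invoke the yarn audit task
mutation invokeYarnAudit {
  invokeRegisteredTask(advancedTaskDefinition: 9, environment: 3) {
    status
  }
}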

    +

    Task List

\ No newline at end of file
diff --git a/using-lagoon-advanced/deploytarget-configs/index.html b/using-lagoon-advanced/deploytarget-configs/index.html
new file mode 100644
index 0000000000..0d3e8a08a1
--- /dev/null
+++ b/using-lagoon-advanced/deploytarget-configs/index.html
@@ -0,0 +1,2903 @@
+ DeployTarget Configs - Lagoon Documentation

    DeployTarget Configurations#

    +
    +

    Danger

    +

This is an alpha feature in Lagoon. The way DeployTarget Configurations work could change in future releases. If you decide to use this feature, you do so at your own risk.

    +
    +

    DeployTarget configurations are a way to define how a project can deploy to multiple clusters. This feature is useful when you have two clusters, one which could be dedicated for running production workloads, and another that is used for running development workloads.

    +

The configuration for these is not limited to just a production/development split though, so projects could conceivably target more than one specific cluster.

    +

The basic idea of a DeployTarget configuration is that it is a way to easily define how a project can deploy across multiple clusters. It uses the existing methods of checking whether an environment is valid to be deployed.

    +

    Important Information#

    +

Before going into how to configure a project to leverage DeployTarget configurations, there are some things you need to know.

    +
+
1. Environments now have two new fields available to them to identify which DeployTarget (Kubernetes or OpenShift) they have been created on:
    1. kubernetesNamespacePattern
    2. kubernetes
2. Once an environment has been deployed to a specific DeployTarget, it will always deploy to this target, even if the DeployTarget configuration or project configuration is modified.
    1. This offers some safety to existing environments by preventing changes to DeployTarget configurations from creating new environments on different clusters.
    2. This is a new feature that is part of Lagoon, not specific to DeployTarget configurations.
3. By default, if no DeployTarget configurations are associated with a project, that project will continue to use the existing methods to determine which environments to deploy. The following fields are used for this:
    1. branches
    2. pullrequests
    3. kubernetesNamespacePattern
    4. kubernetes
4. As soon as any DeployTarget configurations are added to a project, all future deployments for this project will use these configurations. What is defined in the project is ignored, and overwritten to inform users that DeployTarget configurations are in use.
5. DeployTarget configurations are weighted, which means that a DeployTarget configuration with a larger weight is prioritized over one with a lower weight.
    1. The order in which they are returned by the query is the order in which they are used to determine where an environment should be deployed.
6. Active/Standby environments can only be deployed to the same cluster, so your DeployTarget configuration must be able to deploy both of those environments to the same target.
7. Projects that leverage the promote feature of Lagoon must be aware that DeployTarget configurations are ignored for the destination environment:
    1. The destination environment will always be deployed to the same target that the source environment is on; your DeployTarget configuration MUST be configured correctly for this source environment.
    2. For safety, it is best to define both the source and destination environment in the same DeployTarget configuration branch regex.
+

    Configuration#

    +

    To configure a project to use DeployTarget configurations, the first step is to add a configuration to a project.

    +

The following GraphQL mutation can be used. This particular example will add a DeployTarget configuration to the project with the project ID 1. It will allow only branches that match the name main to be deployed, and pullrequests is set to false. This means no other branches will be able to deploy to this particular target, and no pull requests will be deployed to it. The deployTarget is ID 1; this could be a Kubernetes cluster in a specific region, or one designated for a specific type of workload (production or development).

    +
    Configure DeployTarget
    mutation addDeployTargetConfig{
    +  addDeployTargetConfig(input:{
    +    project: 1
    +    branches: "main"
    +    pullrequests: "false"
    +    deployTarget: 1
    +    weight: 1
    +  }){
    +    id
    +    weight
    +    branches
    +    pullrequests
    +    deployTargetProjectPattern
    +    deployTarget{
    +        name
    +        id
    +    }
    +    project{
    +        name
    +    }
    +  }
    +}
    +
    +
    +

    Info

    +

deployTarget is an alias for the Kubernetes or OpenShift ID in the Lagoon API.

    +
    +

    It is also possible to configure multiple DeployTarget configurations.

    +

The following GraphQL mutation can be used. This particular example will add a second DeployTarget configuration to the same project as above.

    +

    It will allow only the branches that regex match with ^feature/|^(dev|test|develop)$ to be deployed, and pullrequests is set to true so all pull requests will reach this target.

    +

The targeted cluster in this example is ID 2, which is a completely different Kubernetes cluster from the one defined above for the main branch.

    +
    Configure DeployTarget
    mutation addDeployTargetConfig{
    +  addDeployTargetConfig(input:{
    +    project: 1
    +    branches: "^feature/|^(dev|test|develop)$"
    +    pullrequests: "true"
    +    deployTarget: 2
    +    weight: 1
    +  }){
    +    id
    +    weight
    +    branches
    +    pullrequests
    +    deployTargetProjectPattern
    +    deployTarget{
    +        name
    +        id
    +    }
    +    project{
    +        name
    +    }
    +  }
    +}
    +
    +

Once these have been added to a project, you can return all the DeployTarget configurations for a project using the following query:

    +
    Get DeployTargets
    query deployTargetConfigsByProjectId{
    +    deployTargetConfigsByProjectId(project:1){
    +        id
    +        weight
    +        branches
    +        pullrequests
    +        deployTargetProjectPattern
    +        deployTarget{
    +            name
    +            id
    +        }
    +        project{
    +            name
    +        }
    +    }
    +}
    +# result:
    +{
    +    "data": {
    +        "deployTargetConfigsByProjectId": [
    +        {
    +            "id": 1,
    +            "weight": 1,
    +            "branches": "main",
    +            "pullrequests": "false",
    +            "deployTargetProjectPattern": null,
    +            "deployTarget": {
    +                "name": "production-cluster",
    +                "id": 1
    +            },
    +            "project": {
    +                "name": "my-project"
    +            }
    +        },
    +        {
    +            "id": 2,
    +            "weight": 1,
    +            "branches": "^feature/|^(dev|test|develop)$",
    +            "pullrequests": "true",
    +            "deployTargetProjectPattern": null,
    +            "deployTarget": {
    +                "name": "development-cluster",
    +                "id": 2
    +            },
    +            "project": {
    +                "name": "my-project"
    +            }
    +        }
    +        ]
    +    }
    +}
    +
\ No newline at end of file
diff --git a/using-lagoon-advanced/drupal-example project 2021-11-18 19-03-22.png b/using-lagoon-advanced/drupal-example project 2021-11-18 19-03-22.png
new file mode 100644
index 0000000000..5429dfca43
Binary files /dev/null and b/using-lagoon-advanced/drupal-example project 2021-11-18 19-03-22.png differ
diff --git a/using-lagoon-advanced/environment-idling/index.html b/using-lagoon-advanced/environment-idling/index.html
new file mode 100644
index 0000000000..4c0b6bc77c
--- /dev/null
+++ b/using-lagoon-advanced/environment-idling/index.html
@@ -0,0 +1,2795 @@
+ Environment Idling - Lagoon Documentation

    Environment Idling (optional)#

    +

    What is the Environment Idler?#

    +

Lagoon can utilize the Aergia controller (installed in the lagoon-remote) to automatically idle environments if they have been unused for a defined period of time. This is done in order to reduce the load on the Kubernetes clusters and improve the overall performance of production environments and of development environments that are actually in use.

    +

    How does an environment get idled?#

    +

The environment idler has many different configuration capabilities. Here are the defaults of a standard Lagoon installation (these could be quite different in your Lagoon; check with your Lagoon administrator!):

    +
+
• Idling is tried every 4 hours.
• Production environments are never idled.
• CLI pods are idled if they don't include a cron job and if there is no remote shell connection active.
• All other services and pods are idled if there was no traffic on the environment in the last 4 hours.
• If there is an active build happening, there will be no idling.
+

    How does an environment get un-idled?#

    +

Aergia will automatically un-idle an environment as soon as it is visited; therefore, just visiting any URL of the environment will start the environment. Likewise, initiating an SSH session to the environment will also restart the services.

    +

The un-idling will take a couple of seconds, as the Kubernetes cluster needs to start all containers again. During this time, a waiting screen is shown to the visitor, indicating that their environment is currently being started.

    +

    Can I disable / prevent the Idler from idling my environment?#

    +

Yes. There is an autoIdle field on the project (impacts all environments) and on the environment (if you need to target just one environment) that controls whether idling is allowed to take place. A value of 1 indicates that the project/environment is eligible for idling. If the project is set to 0, its environments will never be idled, even if an environment is set to 1. The default is always 1 (idling is enabled).
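For example, a sketch using the updateProject mutation shown in the GraphQL documentation (the project ID 109 is illustrative, and this assumes autoIdle is settable via the API patch):

Disable auto-idling project-wide
mutation disableAutoIdle {
  updateProject(input:{id:109, patch:{autoIdle:0}}) {
    id
  }
}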

    +

    Talk to your Lagoon administrator if you are unsure how to set these project/environment fields.

\ No newline at end of file
diff --git a/using-lagoon-advanced/environment-types/index.html b/using-lagoon-advanced/environment-types/index.html
new file mode 100644
index 0000000000..1b0a3c54b3
--- /dev/null
+++ b/using-lagoon-advanced/environment-types/index.html
@@ -0,0 +1,2691 @@
+ Environment Types - Lagoon Documentation

    Environment Types#

    +

    Lagoon currently differentiates between two different environment types: production and development.

    +

    When setting up your project via the Lagoon GraphQL API, you can define a productionEnvironment. On every deployment Lagoon executes, it checks if the current environment name matches what is defined in productionEnvironment. If it does, Lagoon will mark this environment as the production environment. This happens in two locations:

    +
+
1. Within the GraphQL API itself.
2. As an environment variable named LAGOON_ENVIRONMENT_TYPE in every container.
+

But that's it. Lagoon itself handles development and production environments in exactly the same way (in the end, we want as few differences between environments as possible - that's the beauty of Lagoon).

    +

    There are a couple of things that will use this information:

    +
+
• By default, development environments are idled after 4 hours with no hits (don't worry, they wake up automatically). It is also possible for your Lagoon administrator to disable auto-idling on a per-environment basis, just ask!
• Our default Drupal settings.php files load additional settings files development.settings.php and production.settings.php, so you can define different settings and configurations per environment type.
• If you try to delete an environment that is defined as the production environment (either via webhooks or REST), Lagoon will politely refuse to delete it, as it tries to prevent you from making a mistake. In order to delete a production environment, you can either change the productionEnvironment in the API or use the secret forceDeleteProductionEnvironment: true POST payload for the REST API.
• The Lagoon administrator might use the production environment information for some additional things. For example, at amazee.io we count only the hits of production environments to calculate the price of the hosting.
+
\ No newline at end of file
diff --git a/using-lagoon-advanced/environment-variables/index.html b/using-lagoon-advanced/environment-variables/index.html
new file mode 100644
index 0000000000..8753aed1dc
--- /dev/null
+++ b/using-lagoon-advanced/environment-variables/index.html
@@ -0,0 +1,3096 @@
+ Environment Variables - Lagoon Documentation

    Environment Variables#

    +

    It is common to store API tokens or credentials for applications in environment variables.

    +

Following best practices, those credentials are different per environment. We allow each environment to use a separate set of environment variables, defined either as API environment variables or in environment files.

    +

As environment variables can be defined in either the Dockerfile or during runtime (via API environment variables), we have a hierarchy of environment variables: variables defined at a lower number in the following list take precedence.

    +
+
1. Environment variables (defined via Lagoon API) - environment-specific.
2. Environment variables (defined via Lagoon API) - project-wide.
3. Environment variables defined in the Dockerfile (ENV command).
4. Environment variables defined in .lagoon.env.$LAGOON_GIT_BRANCH or .lagoon.env.$LAGOON_GIT_SAFE_BRANCH (if the file exists, where $LAGOON_GIT_BRANCH and $LAGOON_GIT_SAFE_BRANCH are the name and safe name of the branch this Docker image has been built for). Use this for overwriting variables for specific branches only.
5. Environment variables defined in .lagoon.env (if it exists). Use this for overwriting variables for all branches.
6. Environment variables defined in .env.
7. Environment variables defined in .env.defaults.
+

.lagoon.env.$LAGOON_GIT_BRANCH, .lagoon.env.$LAGOON_GIT_SAFE_BRANCH, .env, and .env.defaults are all sourced by the individual containers themselves as part of running their entrypoint scripts. They are not read by Lagoon, but by the containers' ENTRYPOINT scripts, which look for them in the containers' working directory. If environment variables don't appear as expected, check if your container has a WORKDIR setting that points somewhere else.

    +

    Environment Variables (Lagoon API)#

    +

    We suggest using the Lagoon API environment variable system for variables that you don't want to keep in your Git repository (like secrets or API keys), as they could be compromised by somebody having them on their local development environment or on the internet, etc.

    +

The Lagoon API allows you to define project-wide or environment-specific variables. Additionally, they can be scoped to build-time only or runtime only. They are all created via the Lagoon GraphQL API. Read more on how to use the GraphQL API in our GraphQL API documentation.

    +

    Runtime Environment Variables (Lagoon API)#

    +

    Runtime environment variables are automatically made available in all containers, but they are only added or updated after an environment has been re-deployed.

    +

This defines a project-wide runtime variable (available in all environments) for the project with ID 463:

    +
    Add runtime variable
    mutation addRuntimeEnv {
    +  addEnvVariable(
    +    input:{
    +      type:PROJECT,
    +      typeId:463,
    +      scope:RUNTIME,
    +      name:"MYVARIABLENAME",
    +      value:"MyVariableValue"
    +    }
    +  ) {
    +    id
    +  }
    +}
    +
    +

This defines an environment-specific runtime variable (available only in that specific environment) for the environment with ID 546:

    +
    Define environment ID
    mutation addRuntimeEnv {
    +  addEnvVariable(
    +    input:{
    +      type:ENVIRONMENT,
    +      typeId:546,
    +      scope:RUNTIME,
    +      name:"MYVARIABLENAME",
    +      value:"MyVariableValue"
    +    }
    +  ) {
    +    id
    +  }
    +}
    +
    +

    Build-time Environment Variables (Lagoon API)#

    +

    Build-time environment variables are only available during a build and need to be consumed in Dockerfiles via:

    +

    Using build-time environment variables
    ARG MYVARIABLENAME
    +
Typically, the ARG will go after the FROM. Read the Docker documentation about ARG and FROM.

    +

    This defines a project-wide build-time variable (available in all environments) for the project with ID 463:

    +
    Define a project-wide build-time variable
    mutation addBuildtimeEnv {
    +  addEnvVariable(
    +    input:{
    +      type:PROJECT,
    +      typeId:463,
    +      scope:BUILD,
    +      name:"MYVARIABLENAME",
    +      value:"MyVariableValue"}
    +  ) {
    +    id
    +  }
    +}
    +
    +

This defines an environment-specific build-time variable (available only in that specific environment) for the environment with ID 546:

    +
    Define environment ID
    mutation addBuildtimeEnv {
    +  addEnvVariable(input:{type:ENVIRONMENT, typeId:546, scope:BUILD, name:"MYVARIABLENAME", value:"MyVariableValue"}) {
    +    id
    +  }
    +}
    +
    +

    Container registry environment variables are only available during a build and are used when attempting to log in to a private registry. They are used to store the password for the user defined in Specials » container-registries. They can be applied at the project or environment level.

    +

    This defines a project-wide container registry variable (available in all environments) for the project with ID 463:

    +
    Define project-wide container registry variable
    mutation addContainerRegistryEnv {
    +  addEnvVariable(
    +    input:{
    +      type:PROJECT,
    +      typeId:463,
    +      scope:CONTAINER_REGISTRY,
    +      name:"MY_OWN_REGISTRY_PASSWORD",
    +      value:"MySecretPassword"})
    +  ) {
    +    id
    +  }
    +}
    +
    +

This defines an environment-specific container registry variable (available only in that specific environment) for the environment with ID 546:

    +
    Define environment ID
    mutation addContainerRegistryEnv {
    +  addEnvVariable(
    +    input:{
    +      type:ENVIRONMENT,
    +      typeId:546,
    +      scope:CONTAINER_REGISTRY,
    +      name:"MY_OWN_REGISTRY_PASSWORD",
    +      value:"MySecretPassword"}
    +  ) {
    +    id
    +  }
    +}
    +
    +

    Environment Files (existing directly in the Git Repo)#

    +

    If you have environment variables that can safely be saved within a Git repository, we suggest adding them directly into the Git repository in an environment file. These variables will also be available within local development environments and are therefore more portable.

    +

The syntax in the environment files is as follows:

    +
    myenvironment.env
    MYVARIABLENAME="MyVariableValue"
+MYVARIABLENUMBER=4242
    +DB_USER=$DB_USERNAME # Redefine DB_USER with the value of DB_USERNAME e.g. if your application expects another variable name for the Lagoon-provided variables.
    +
    +

    .lagoon.env.$BRANCHNAME#

    +

If you want to define environment variables differently per environment, you can create a .lagoon.env.$BRANCHNAME file, e.g. .lagoon.env.main for the main branch. This helps you keep environment variables apart between environments.
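For example, a hypothetical .lagoon.env.main could overwrite a variable for the main branch only, using the same syntax as above:

.lagoon.env.main
MYVARIABLENAME="MainBranchValue"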

    +

    .env and .env.defaults#

    +

    .env and .env.defaults will act as the default values for environment variables if none other is defined. For example, as default environment variables for pull request environments (see Workflows).

    +

    Special Environment Variables#

    +

    PHP_ERROR_REPORTING#

    +

    This variable, if set, will define the logging level you would like PHP to use. If not supplied, it will be set dynamically based on whether this is a production or development environment.

    +

    On production environments, this value defaults to E_ALL & ~E_DEPRECATED & ~E_STRICT & ~E_NOTICE.

    +

    On development environments, this value defaults to E_ALL & ~E_DEPRECATED & ~E_STRICT.

    +

    Custom Backup Settings#

    +

Lagoon supports custom backup locations and credentials for any project when all four of the following variables are set as BUILD type variables. The environment variables need to be set at the project level (not per environment), and a Lagoon deployment of every environment is required after setting them.

    +

Please note that any use of these variables means that all environment and database backups created and managed by Lagoon will be stored using these credentials, meaning that any interruption of these credentials' access may lead to failed or inaccessible backups.

| Environment variable name | Purpose |
| --- | --- |
| LAGOON_BAAS_CUSTOM_BACKUP_ENDPOINT | Specify the S3 compatible endpoint where any Lagoon initiated backups should be stored. An example for S3 Sydney would be: https://s3.ap-southeast-2.amazonaws.com. |
| LAGOON_BAAS_CUSTOM_BACKUP_BUCKET | Specify the bucket name where any Lagoon initiated backups should be stored. An example custom setting would be: example-restore-bucket. |
| LAGOON_BAAS_CUSTOM_BACKUP_ACCESS_KEY | Specify the access key Lagoon should use to access the custom backup bucket. An example custom setting would be: AKIAIOSFODNN7EXAMPLE. |
| LAGOON_BAAS_CUSTOM_BACKUP_SECRET_KEY | Specify the secret key Lagoon should use to access the custom backup bucket. An example custom setting would be: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY. |
+

No public access is needed on the S3 bucket, and it can be made entirely private.

    +

    Lagoon will automatically prune the files in these S3 buckets, so no object retention policy is needed at the bucket level.
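For illustration, one of these variables could be added as a project-wide BUILD variable using the same addEnvVariable mutation shown above (the project ID 463 and the endpoint value are placeholders):

Add a custom backup endpoint
mutation addCustomBackupEndpoint {
  addEnvVariable(
    input:{
      type:PROJECT,
      typeId:463,
      scope:BUILD,
      name:"LAGOON_BAAS_CUSTOM_BACKUP_ENDPOINT",
      value:"https://s3.ap-southeast-2.amazonaws.com"
    }
  ) {
    id
  }
}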

    +

    Custom Restore Location#

    +

Lagoon supports custom restore locations and credentials for any project when all four of the following variables are set as BUILD type environment variables. The environment variables need to be set at the project level (not per environment), and a Lagoon deployment of every environment is required after setting them.

    +

    Please note that any use of these variables means that all environment and database snapshots restored by Lagoon will be stored using these credentials. This means that any interruption of these credentials' access may lead to failed or inaccessible restored files.

| Environment variable name | Purpose |
| --- | --- |
| LAGOON_BAAS_CUSTOM_RESTORE_ENDPOINT | Specify the S3 compatible endpoint where any Lagoon initiated restores should be stored. An example for S3 Sydney would be: https://s3.ap-southeast-2.amazonaws.com. |
| LAGOON_BAAS_CUSTOM_RESTORE_BUCKET | Specify the bucket name where any Lagoon initiated restores should be stored. An example custom setting would be: example-restore-bucket. |
| LAGOON_BAAS_CUSTOM_RESTORE_ACCESS_KEY | Specify the access key Lagoon should use to access the custom restore bucket. An example custom setting would be: AKIAIOSFODNN7EXAMPLE. |
| LAGOON_BAAS_CUSTOM_RESTORE_SECRET_KEY | Specify the secret key Lagoon should use to access the custom restore bucket. An example custom setting would be: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY. |
+

    The S3 bucket must have public access enabled, as Lagoon will create presigned URLs for the objects inside the bucket as needed.

    +

    An example AWS IAM policy that you can create to allow access to just the S3 bucket example-restore-bucket is:

    +
    aws_iam_restore_policy.json
    {
    +  "Version": "2012-10-17",
    +  "Statement": [
    +    {
    +      "Effect": "Allow",
    +      "Action": [
    +        "s3:GetBucketLocation",
    +        "s3:ListBucket"
    +      ],
    +      "Resource": [
    +        "arn:aws:s3:::example-restore-bucket"
    +      ]
    +    },
    +    {
    +      "Effect": "Allow",
    +      "Action": [
    +        "s3:PutObject",
    +        "s3:GetObject",
    +        "s3:GetObjectVersion",
    +        "s3:GetBucketLocation",
    +        "s3:PutObjectAcl"
    +      ],
    +      "Resource": [
    +         "arn:aws:s3:::example-restore-bucket/*"
    +      ]
    +    }
    +  ]
    +}
    +
    +

    For increased security and reduced storage costs you can opt into removing restored backups after a set lifetime (e.g. 7 days). Lagoon caters for this scenario gracefully and will re-create any restored snapshots as needed.

\ No newline at end of file
diff --git a/using-lagoon-advanced/graphiql-2020-01-29-18-05-54.png b/using-lagoon-advanced/graphiql-2020-01-29-18-05-54.png
new file mode 100644
index 0000000000..60912674a3
Binary files /dev/null and b/using-lagoon-advanced/graphiql-2020-01-29-18-05-54.png differ
diff --git a/using-lagoon-advanced/graphiql-2020-01-29-18-07-28.png b/using-lagoon-advanced/graphiql-2020-01-29-18-07-28.png
new file mode 100644
index 0000000000..891ac952f3
Binary files /dev/null and b/using-lagoon-advanced/graphiql-2020-01-29-18-07-28.png differ
diff --git a/using-lagoon-advanced/graphql/index.html b/using-lagoon-advanced/graphql/index.html
new file mode 100644
index 0000000000..d29c2b7d85
--- /dev/null
+++ b/using-lagoon-advanced/graphql/index.html
@@ -0,0 +1,2804 @@
+ GraphQL - Lagoon Documentation

    GraphQL#

    +

    Connect to GraphQL API#

    +

    API interactions in Lagoon are done via GraphQL. In order to authenticate to the API, you need a JWT (JSON Web Token), which will authenticate you against the API via your SSH public key.

    +

    To generate this token, use the remote shell via the token command:

    +
    Get token
    ssh -p [PORT] -t lagoon@[HOST] token
    +
    +

    Example for amazee.io:

    +
    Get amazee.io token
    ssh -p 32222 -t lagoon@ssh.lagoon.amazeeio.cloud token
    +
    +

    This will return a long string, which is the JWT token.

    +

    We also need the URL of the API endpoint. Ask your Lagoon administrator for this.

+

    On amazee.io this is https://api.lagoon.amazeeio.cloud/graphql.

    +

Now we need a GraphQL client! Technically this is just HTTP, but we suggest GraphiQL. It has a nice UI that allows you to write GraphQL requests with autocomplete. Download, install, and start the GraphiQL App.

    +

    Enter the API endpoint URL. Then click on "Edit HTTP Headers" and add a new Header:

    +
+
• "Header name": Authorization
• "Header value": Bearer [jwt token] (make sure that the JWT token has no spaces, that won't work)
+

    Editing HTTP Headers in the GraphiQL UI.

    +

    Close the HTTP Header overlay (press ESC) and now you are ready to make your first GraphQL Request!

    +

    Enter this on the left window:

    +
    Get all projects
    query whatIsThere {
    +  allProjects {
    +    id
    +    gitUrl
    +    name
    +    branches
    +    pullrequests
    +    productionEnvironment
    +    environments {
    +      name
    +      environmentType
    +    }
    +  }
    +}
    +
    +

    And press the ▶️ button (or press CTRL+ENTER).

    +

    Entering a query in the GraphiQL UI.

    +

    If all went well, you should see your first GraphQL response.

    +

    Mutations#

    +

The Lagoon GraphQL API can not only display and create objects, it also has the capability to update existing objects. All of Lagoon's GraphQL mutations follow best practices.

    +

    Mutation queries in GraphQL modify the data in the data store, and return a value. They can be used to insert, update, and delete data. Mutations are defined as a part of the schema.

    +

    Update the branches to deploy within a project:

    +
    Update deploy branches
    mutation editProjectBranches {
    +  updateProject(input:{id:109, patch:{branches:"^(prod|stage|dev|update)$"}}) {
    +    id
    +  }
    +}
    +
    +

    Update the production environment within a project:

    +
    +

    Warning

    +

    This requires a redeploy in order for all changes to be reflected in the containers.

    +
    +
    Update production environment
    mutation editProjectProductionEnvironment {
    +  updateProject(input:{id:109, patch:{productionEnvironment:"prod"}}) {
    +    id
    +  }
    +}
    +
    +

    You can also combine multiple changes into a single query:

    +
    Multiple changes
    mutation editProjectProductionEnvironmentAndBranches {
    +  updateProject(input:{id:109, patch:{productionEnvironment:"prod", branches:"^(prod|stage|dev|update)$"}}) {
    +    id
    +  }
    +}
    +
\ No newline at end of file
diff --git a/using-lagoon-advanced/nodejs/index.html b/using-lagoon-advanced/nodejs/index.html
new file mode 100644
index 0000000000..3b9b771a64
--- /dev/null
+++ b/using-lagoon-advanced/nodejs/index.html
@@ -0,0 +1,2808 @@
+ Node.js Graceful Shutdown - Lagoon Documentation

    Node.js Graceful Shutdown#

    +

    Node.js has integrated web server capabilities. Plus, with Express, these can be extended even more.

    +

    Unfortunately, Node.js does not handle shutting itself down very nicely out of the box. This causes many issues with containerized systems. The biggest issue is that when a Node.js container is told to shut down, it will immediately kill all active connections, and does not allow them to stop gracefully.

    +

    This part explains how you can teach Node.js to behave like a real web server: finishing active requests and then gracefully shutting down.

    +

    As an example we use a no-frills Node.js server with Express:

    +
    app.js
    const express = require('express');
    +const app = express();
    +
    +// Adds a 5 second delay for all requests.
    +app.use((req, res, next) => setTimeout(next, 5000));
    +
    +app.get('/', function (req, res) {
    +  res.send("Hello World");
    +})
    +
    +const server = app.listen(3000, function () {
    +  console.log('Example app listening on port 3000!');
    +})
    +
    +

This will just show "Hello World" when the web server is visited at localhost:3000. Note the 5 second delay in the response, in order to simulate a request that takes some computing time.

    +

    Part A: Allow requests to be finished#

    +

    If we run the above example and stop the Node.js process while the request is handled (within the 5 seconds), we will see that the Node.js server immediately kills the connection, and our browser will show an error.

    +

    To explain to our Node.js server that it should wait for all the requests to be finished before actually stopping itself, we add the following code:

    +
    Graceful Shutdown
    const startGracefulShutdown = () => {
    +  console.log('Starting shutdown of express...');
    +  server.close(function () {
    +    console.log('Express shut down.');
    +  });
    +}
    +
    +process.on('SIGTERM', startGracefulShutdown);
    +process.on('SIGINT', startGracefulShutdown);
    +
    +

    This basically calls server.close(), which will instruct the Node.js HTTP server to:

    +
+
1. Not accept any more requests.
2. Finish all running requests.
+

    It will do this on SIGINT (when you press CTRL + C) or on SIGTERM (the standard signal for a process to terminate).

    +

With this small addition, our Node.js server will wait until all requests are finished, and then stop itself.

    +

    If we were not running Node.js in a containerized environment, we would probably want to include some additional code that actually kills the Node.js server after a couple of seconds, as it is technically possible that some requests are either taking very long or are never stopped. Because it is running in a containerized system, if the container is not stopped, Docker and Kubernetes will run a SIGKILL after a couple of seconds (usually 30) which cannot be handled by the process itself, so this is not a concern for us.
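For completeness, outside of a containerized environment such a safety net might look like this (a sketch; the 10 second grace period is an arbitrary assumption):

Graceful shutdown with forced timeout
const startGracefulShutdown = () => {
  console.log('Starting shutdown of express...');
  server.close(function () {
    console.log('Express shut down.');
  });
  // Force-exit if connections are still open after the grace period.
  const timeout = setTimeout(() => {
    console.error('Could not close connections in time, forcing shutdown.');
    process.exit(1);
  }, 10000);
  // Don't let this timer itself keep the process alive.
  timeout.unref();
}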

    +

    Part B: Yarn and NPM children spawning issues#

    +

    If we only implemented Part A, we would have a good experience. In the real world, many Node.js systems are built with Yarn or NPM, which provide not only package management systems to Node.js, but also script management.

    +

    With these script functionalities, we simplify the start of our application. We can see many package.json files that look like:

    +
    package.json
    {
    +  "name": "node",
    +  "version": "1.0.0",
    +  "main": "index.js",
    +  "license": "MIT",
    +  "dependencies": {
    +    "express": "^4.15.3"
    +  },
    +  "scripts": {
    +    "start": "node index.js"
    +  }
    +}
    +
    +

    and with the defined scripts section we can run our application just with:

    +
    Start application
    yarn start
    +
    +

    or

    +
    Start application
    npm start
    +
    +

    This is nice and makes the life of developers easier. So we also end up using the same within Dockerfiles:

    +
    .dockerfile
    CMD ["yarn", "start"]
    +
    +

    Unfortunately there is a big problem with this:

    +

If yarn or npm get a SIGINT or SIGTERM signal, they correctly forward the signal to the spawned child process (in this case node index.js). However, they do not wait for the child process to stop. Instead, yarn/npm immediately stop themselves. This signals to Docker/Kubernetes that the container is finished, and Docker/Kubernetes will kill all child processes immediately. There are issues open for Yarn and NPM, but unfortunately they are not solved yet.

    +

    The solution for the problem is to not use Yarn or NPM to start your application and instead use node directly:

    +
    .dockerfile
    CMD ["node", "index.js"]
    +
    +

    This allows Node.js to properly terminate and Docker/Kubernetes will wait for Node.js to be finished.

\ No newline at end of file
diff --git a/using-lagoon-advanced/phpinfo.png b/using-lagoon-advanced/phpinfo.png
new file mode 100644
index 0000000000..a5120762f0
Binary files /dev/null and b/using-lagoon-advanced/phpinfo.png differ
diff --git a/using-lagoon-advanced/private-repositories/index.html b/using-lagoon-advanced/private-repositories/index.html
new file mode 100644
index 0000000000..c08d4ee029
--- /dev/null
+++ b/using-lagoon-advanced/private-repositories/index.html
@@ -0,0 +1,2684 @@
+ Private Repositories - Lagoon Documentation

    Private Repositories#

    +
+
1. Give the deploy key access to the Git repositories in your GitHub/GitLab/Bitbucket.
2. Add ARG LAGOON_SSH_PRIVATE_KEY to your Dockerfile (before the step of the build process that needs the SSH key).
3. Add RUN /lagoon/entrypoints/05-ssh-key.sh to your Dockerfile (before the step of the build process that needs the SSH key), as in the example and the full sketch below.
+
Set up your private repository
    RUN /lagoon/entrypoints/05-ssh-key.sh && composer install && rm /home/.ssh/key
    +
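Putting the steps together, a Dockerfile fragment might look like this (a sketch; the base image and the composer install step are assumptions about your build, not part of the original example):

.dockerfile
# Assumption: any Lagoon PHP CLI base image works here.
FROM uselagoon/php-8.1-cli
# Make the Lagoon deploy key available to this build step.
ARG LAGOON_SSH_PRIVATE_KEY
# Write the key, install private dependencies, then remove the key again.
RUN /lagoon/entrypoints/05-ssh-key.sh && composer install && rm /home/.ssh/key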
\ No newline at end of file
diff --git a/using-lagoon-advanced/project-default-users-keys/index.html b/using-lagoon-advanced/project-default-users-keys/index.html
new file mode 100644
index 0000000000..c8747e4299
--- /dev/null
+++ b/using-lagoon-advanced/project-default-users-keys/index.html
@@ -0,0 +1,2728 @@
+ Project Default Users and SSH Keys - Lagoon Documentation

    Project Default Users and SSH Keys#

    +

When a Lagoon project is created, by default an associated SSH "project key" is generated, and the private key is made available inside the CLI pods of the project. A service account default-user@project is also created and given MAINTAINER access to the project. The SSH "project key" is attached to that default-user@project.

    +

    The result of this is that from inside the CLI pod of any environment it is possible to SSH to any other environment within the same project. This access is used for running tasks from the command line such as synchronizing databases between environments (e.g. drush sql-sync).

    +

    There is more information on the MAINTAINER role available in the RBAC documentation.

    +

    Specifying the project key#

    +

    It is possible to specify an SSH private key when creating a project, but this is not recommended as it has security implications.

\ No newline at end of file
diff --git a/using-lagoon-advanced/service-types/index.html b/using-lagoon-advanced/service-types/index.html
new file mode 100644
index 0000000000..90c0fb025b
--- /dev/null
+++ b/using-lagoon-advanced/service-types/index.html
@@ -0,0 +1,3866 @@
+ Service Types - Lagoon Documentation

    Service Types#

    +

The following lists all service types that can be defined via lagoon.type within a docker-compose.yml file.

    +
    +

    Warning

    +

    Once a lagoon.type is defined and the environment is deployed, changing it to a different type is not supported and could result in a broken environment.

    +
    +

    basic#

    +

Basic container, good to use for most applications that don't have an existing template. No persistent storage. The port can be changed using a label. If an autogenerated route is not required (e.g. for an internal-facing service), set lagoon.autogeneratedroute: false in the docker-compose.yml.

| Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter |
| --- | --- | --- | --- | --- |
| TCP connection on 3000 | 3000 | Yes | No | lagoon.service.port, lagoon.autogeneratedroute |
+
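As a sketch, a docker-compose.yml service using this type could look like the following (the service name app and the port are illustrative, not prescribed by Lagoon):

docker-compose.yml
services:
  app:
    build: .
    labels:
      lagoon.type: basic
      lagoon.service.port: 3000        # assumption: the app listens on port 3000
      lagoon.autogeneratedroute: false # internal-facing service, skip the route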

    basic-persistent#

    +

    Like basic. Will also generate persistent storage, defines mount location via lagoon.persistent.

| Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter |
| --- | --- | --- | --- | --- |
| TCP connection on 3000 | 3000 | Yes | Yes | lagoon.service.port, lagoon.autogeneratedroute, lagoon.persistent, lagoon.persistent.name, lagoon.persistent.size, lagoon.persistent.class |
+

    cli#

    +

    Use for any kind of CLI container (like PHP, Node.js, etc). Automatically gets the customer SSH private key that is mounted in /var/run/secrets/lagoon/sshkey/ssh-privatekey.

| Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter |
| --- | --- | --- | --- | --- |
| - | No | No | No | - |
+

    cli-persistent#

    +

Like cli, expects lagoon.persistent.name to be given the name of a service that has persistent storage, which will be mounted at the path defined in the lagoon.persistent label. Does NOT generate its own persistent storage; only used to mount another service's persistent storage.

| Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter |
| --- | --- | --- | --- | --- |
| - | No | No | Yes | lagoon.persistent.name, lagoon.persistent |
+
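For example (a sketch; the service names and the Drupal-style files path are assumptions), a cli-persistent service mounting the storage of an nginx-php-persistent service could be declared like this:

docker-compose.yml
services:
  nginx:
    build: .
    labels:
      lagoon.type: nginx-php-persistent
      lagoon.persistent: /app/web/sites/default/files/
  cli:
    build: .
    labels:
      lagoon.type: cli-persistent
      # Mount the nginx service's persistent storage at the same path.
      lagoon.persistent.name: nginx
      lagoon.persistent: /app/web/sites/default/files/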

    elasticsearch#

    +

    Elasticsearch container, will auto-generate persistent storage under /usr/share/elasticsearch/data.

| Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter |
| --- | --- | --- | --- | --- |
| HTTP on localhost:9200/_cluster/health?local=true | 9200 | No | Yes | lagoon.persistent.size |
+

    kibana#

    +

    Kibana container.

| Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter |
| --- | --- | --- | --- | --- |
| TCP connection on 5601 | 5601 | Yes | No | - |
+

    logstash#

    +

    Logstash container.

| Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter |
| --- | --- | --- | --- | --- |
| TCP connection on 9600 | 9600 | No | No | - |
+

    mariadb#

    +

    A meta-service which will tell Lagoon to automatically decide between mariadb-single and mariadb-dbaas.

| Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter |
| --- | --- | --- | --- | --- |
| - | - | - | - | - |
+

    mariadb-single#

    +

    MariaDB container. Creates cron job for backups running every 24h executing /lagoon/mysql-backup.sh 127.0.0.1.

| Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter |
| --- | --- | --- | --- | --- |
| TCP connection on 3306 | 3306 | No | Yes | lagoon.persistent.size |
+

    mariadb-dbaas#

    +

    Uses a shared MariaDB server via the DBaaS Operator.

| Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter |
| --- | --- | --- | --- | --- |
| Not Needed | 3306 | No | - | - |
+

    mongo#

    +

    A meta-service which will tell Lagoon to automatically decide between mongo-single and mongo-dbaas.

| Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter |
| --- | --- | --- | --- | --- |
| - | - | - | - | - |
+

    mongo-single#

    +

    MongoDB container, will generate persistent storage of min 1GB mounted at /data/db.

| Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter |
| --- | --- | --- | --- | --- |
| TCP connection on 27017 | 27017 | No | Yes | lagoon.persistent.size |
+

    mongo-dbaas#

    +

    Uses a shared MongoDB server via the DBaaS Operator.

| Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter |
| --- | --- | --- | --- | --- |
| Not Needed | 27017 | No | - | - |
+

    nginx#

    +

    NGINX container. No persistent storage.

| Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter |
| --- | --- | --- | --- | --- |
| localhost:50000/nginx_status | 8080 | Yes | No | lagoon.autogeneratedroute |
+

    nginx-php#

    +

    Like nginx, but additionally a php container.

| Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter |
| --- | --- | --- | --- | --- |
| NGINX: localhost:50000/nginx_status, PHP: /usr/sbin/check_fcgi | 8080 | Yes | No | lagoon.autogeneratedroute |
+

    nginx-php-persistent#

    +

    Like nginx-php. Will generate persistent storage, defines mount location via lagoon.persistent.

| Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter |
| --- | --- | --- | --- | --- |
| NGINX: localhost:50000/nginx_status, PHP: /usr/sbin/check_fcgi | http on 8080 | Yes | Yes | lagoon.autogeneratedroute, lagoon.persistent, lagoon.persistent.name, lagoon.persistent.size, lagoon.persistent.class |
+

    node#

    +

    Node.js container. No persistent storage.

| Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter |
| --- | --- | --- | --- | --- |
| TCP connection on 3000 | 3000 | Yes | No | lagoon.autogeneratedroute |
+

    node-persistent#

    +

    Like node. Will generate persistent storage, defines mount location via lagoon.persistent.

| Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter |
| --- | --- | --- | --- | --- |
| TCP connection on 3000 | 3000 | Yes | Yes | lagoon.autogeneratedroute, lagoon.persistent, lagoon.persistent.name, lagoon.persistent.size, lagoon.persistent.class |
+

    none#

    +

    Instructs Lagoon to completely ignore this service.

| Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter |
| --- | --- | --- | --- | --- |
| - | - | - | - | - |
+

    opensearch#

    +

    OpenSearch container, will auto-generate persistent storage under /usr/share/opensearch/data.

| Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter |
| --- | --- | --- | --- | --- |
| HTTP on localhost:9200/_cluster/health?local=true | 9200 | No | Yes | lagoon.persistent.size |
+

    postgres#

    +

    A meta-service which will tell Lagoon to automatically decide between postgres-single and postgres-dbaas.

| Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter |
| --- | --- | --- | --- | --- |
| - | - | - | - | - |
+

    postgres-single#

    +

    Postgres container. Creates cron job for backups running every 24h executing /lagoon/postgres-backup.sh localhost.

| Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter |
| --- | --- | --- | --- | --- |
| TCP connection on 5432 | 5432 | No | Yes | lagoon.persistent.size |
+

    postgres-dbaas#

    +

    Uses a shared PostgreSQL server via the DBaaS Operator.

| Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter |
| --- | --- | --- | --- | --- |
| Not Needed | 5432 | No | - | - |
+

    python#

    +

    Python container. No persistent storage.

| Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter |
| --- | --- | --- | --- | --- |
| HTTP connection on 8800 | 8800 | Yes | No | lagoon.autogeneratedroute |
+

    python-persistent#

    +

Python container with persistent storage.

Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter
HTTP connection on 8800 | 8800 | Yes | Yes | lagoon.autogeneratedroute
    +

    redis#

    +

    Redis container.

Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter
TCP connection on 6379 | 6379 | No | No | -
    +

    redis-persistent#

    +

    Redis container with auto-generated persistent storage mounted under /data.

Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter
TCP connection on 6379 | 6379 | No | Yes | lagoon.persistent.size
    +

    solr#

    +

    Solr container with auto-generated persistent storage mounted under /var/solr.

Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter
TCP connection on 8983 | 8983 | No | Yes | lagoon.persistent.size
    +

    varnish#

    +

    Varnish container.

Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter
HTTP request localhost:8080/varnish_status | 8080 | Yes | No | lagoon.autogeneratedroute
    +

    varnish-persistent#

    +

    Varnish container with auto-generated persistent storage mounted under /var/cache/varnish.

Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter
HTTP request localhost:8080/varnish_status | 8080 | Yes | Yes | lagoon.autogeneratedroute, lagoon.persistent.size
    +

    worker#

    +

    Use for any kind of worker container (like queue workers, etc.) where there is no exposed service port.

Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter
- | No | No | No | -
    +

    worker-persistent#

    +

Like worker, but expects lagoon.persistent.name to be given the name of a service that has persistent storage, which will be mounted under the path defined by the lagoon.persistent label. Does NOT generate its own persistent storage; it is only used to mount another service's persistent storage.

Healthcheck | Exposed Ports | Auto Generated Routes | Storage | Additional customization parameter
- | No | No | Yes | lagoon.persistent.name, lagoon.persistent
diff --git a/using-lagoon-advanced/setting-up-xdebug-with-lagoon/index.html (new file: Setting up Xdebug with Lagoon - Lagoon Documentation)

    Setting up Xdebug with Lagoon#

    +

    Enable Xdebug extension in the containers#

    +

    The Lagoon base images are pre-configured with Xdebug but, for performance +reasons, the extension is not loaded by default. To enable the extension, the +XDEBUG_ENABLE environment variable must be set to true:

    + +
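For example, in your docker-compose.yml (a minimal sketch; the x-environment block follows the usual Lagoon layout shown later in this documentation):

docker-compose.yml
x-environment:
  &default-environment
    # Set to "true" and restart the containers to load the Xdebug extension
    XDEBUG_ENABLE: "true"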

    Activate Xdebug Extension#

    +

    The default Xdebug configuration requires a "trigger" to activate the extension +and start a session. You can view the complete documentation +for activating the debugger but the most straightforward instructions are below.

    +

    CLI#

    +

    The php-cli image is configured to always activate Xdebug when it’s enabled, +so there is nothing else that needs to be done. Running any PHP script will +start a debugging session.

    +

    Web#

    +

    Install a browser extension +to set/unset an activation cookie.

    +

    Make sure the activation cookie is set for the website you want to start +debugging.

    +

    Configure PHPStorm#

    +
      +
1. PHPStorm is configured correctly by default.
2. Click the “Start Listening for PHP Debug Connections” icon in the toolbar.
3. Load a webpage or run a Drush command.
4. On first run, PHPStorm should pop up a window asking you to:
    1. Confirm path mappings.
    2. Select the correct file locally that was triggered on the server.
    +

    Configure Visual Studio Code#

    +
      +
1. Install the PHP Debug extension by Felix Becker.
2. Follow the instructions to create a basic launch.json for PHP.
3. Add correct path mappings. For a typical Drupal site, an example would be:

   launch.json
   "pathMappings": {
     "/app": "${workspaceFolder}",
   },

4. In the Run tab of Visual Studio Code, click the green arrow next to “Listen for Xdebug”.
5. Load a webpage or run a Drush command.
    +

    Troubleshooting#

    +
      +
    • Verify that Xdebug extension is loaded. The best way to do this on a Drupal + site is to check the PHP status page. You should find a section about Xdebug + and all its settings.
    • +
    +

    phpinfo results

    +
      +
    • Verify the following settings:
    • +
Directive | Local Value
xdebug.mode | debug
xdebug.client_host | host.docker.internal or your IP address
xdebug.client_port | 9003
    +
      +
    • Enable Xdebug logging within the running containers. All you need is an + environment variable named XDEBUG_LOG set to anything to enable logging. + Logs will be saved to /tmp/xdebug.log. If you are using the lagoon-examples + then you can uncomment some existing lines.
    • +
    • Verify you have the activation cookie set. You can use the browser tools in + Chrome or Firefox to check that a XDEBUG_SESSION cookie is set.
    • +
• Verify that Xdebug is activated and attempting to start a debug session with your computer. You can use the nc -l 9003 command line tool to open the Xdebug port. If everything is configured in PHP correctly, you should get an Xdebug init response when you load a webpage or run a Drush command.
    • +
    • Verify that the xdebug.client_host has been set correctly. For local + debugging with Docker for Mac, this value should be host.docker.internal. + For remote debugging this value should be your IP address. If this value was + not correctly determined, you can override it by setting the DOCKERHOST + environment variable.
    • +
    • When using Lando locally, in order to debug scripts run from the CLI you must + first SSH into the CLI container via lando ssh. You won’t be able to debug + things by running lando drush or lando php.
    • +
    +

    Mac specific troubleshooting#

    +
      +
    • +

      Verify that Docker for Mac networking is not broken. On your host machine, run + nc -l 9003, then in a new terminal window, run:

      +
      Verify Docker for Mac networking
      docker-compose run cli nc -zv host.docker.internal 9003
      +
      +

      You should see a message like: +host.docker.internal (192.168.65.2:9003) open.

      +
    • +
    +

    Linux specific troubleshooting#

    +
      +
    • +

Ensure the host host.docker.internal can be reached. If Docker has been installed manually (and not through Docker Desktop), this host will not resolve. You can force this to resolve with an additional snippet in your docker-compose.yml file (instructions taken from this blog post):

      +
      docker-compose.yml alterations for Linux
        services:
      +    cli:
      +      extra_hosts:
      +        host.docker.internal: host-gateway
      +    php:
      +      extra_hosts:
      +        host.docker.internal: host-gateway
      +
      +
    • +
    +

    Xdebug 2#

    +

If you're running older images you may still be using Xdebug version 2. All the information on this page still applies, but some of the configuration names and values have changed:

v3 | v2 | v2 Value
xdebug.mode | xdebug.remote_enabled | On
xdebug.client_host | xdebug.remote_host | host.docker.internal or your IP address
xdebug.client_port | xdebug.remote_port | 9000
diff --git a/using-lagoon-advanced/settings 2021-11-18 19-03-48.png (new binary file)
diff --git a/using-lagoon-advanced/simplesaml/index.html (new file: SimpleSAML - Lagoon Documentation)

    SimpleSAML#

    +

    SimpleSAMLphp#

    +

This is an example of how to add SimpleSAMLphp to your project and then modify its configuration to serve it via NGINX.

    +

    Requirements#

    +

    Add SimpleSAMLphp to your project:

    +
    Add SimpleSAMLphp to your project via Composer
    composer req simplesamlphp/simplesamlphp
    +
    +

    Modify configuration for SimpleSAMLphp#

    +

Copy authsources.php and config.php from vendor/simplesamlphp/simplesamlphp/config-templates to somewhere outside the vendor directory, such as conf/simplesamlphp. You also need saml20-idp-remote.php from vendor/simplesamlphp/simplesamlphp/metadata-templates.

    +

In config.php, set the following values for Lagoon:

    +

    Base URL path where SimpleSAMLphp is accessed:

    +
    config.php
      'baseurlpath' => 'https://YOUR_DOMAIN.TLD/simplesaml/',
    +
    +

    Store sessions to database:

    +
    config.php
      'store.type'                    => 'sql',
    +
    +  'store.sql.dsn'                 => vsprintf('mysql:host=%s;port=%s;dbname=%s', [
    +    getenv('MARIADB_HOST'),
    +    getenv('MARIADB_PORT'),
    +    getenv('MARIADB_DATABASE'),
    +  ]),
    +
    +

    Alter other settings to your liking:

    +
      +
    • Check the paths for logs and certs.
    • +
    • Secure SimpleSAMLphp dashboard.
    • +
    • Set up level of logging.
    • +
    • Set technicalcontact and timezone.
    • +
    +

    Add authsources (IdPs) to authsources.php, see example:

    +
    authsources.php
      'default-sp' => [
    +    'saml:SP',
    +
    +    // The entity ID of this SP.
    +    'entityID' => 'https://YOUR_DOMAIN.TLD',
    +
+    // The entity ID of the IdP this SP should contact.
    +    // Can be NULL/unset, in which case the user will be shown a list of available IdPs.
    +    'idp' => 'https://YOUR_IDP_DOMAIN.TLD',
    +
    +    // The URL to the discovery service.
    +    // Can be NULL/unset, in which case a builtin discovery service will be used.
    +    'discoURL' => null,
    +
    +    'NameIDFormat' => 'urn:oasis:names:tc:SAML:2.0:nameid-format:transient',
    +
    +    'certificate' => '/app/conf/simplesamlphp/certs/saml.crt',
    +    'privatekey' => '/app/conf/simplesamlphp/certs/saml.pem',
    +    'redirect.sign' => TRUE,
    +    'redirect.validate' => TRUE,
    +
    +    'authproc' => [
    +      50 => [
    +        'class' => 'core:AttributeCopy',
    +        'urn:oid:1.3.6.1.4.1.5923.1.1.1.6' => 'eduPersonPrincipalName',
    +      ],
    +      51 => [
    +        'class' => 'core:AttributeCopy',
    +        'urn:oid:2.5.4.42' => 'givenName',
    +      ],
    +      52 => [
    +        'class' => 'core:AttributeCopy',
    +        'urn:oid:2.5.4.4' => 'sn',
    +      ],
    +      53 => [
    +        'class' => 'core:AttributeCopy',
    +        'urn:oid:0.9.2342.19200300.100.1.3' => 'mail',
    +      ],
    +    ],
    +  ],
    +
    +

    Add IdP metadata to saml20-idp-remote.php, see example:

    +
    saml20-idp-remote.php
    <?php
    +/**
    + * SAML 2.0 remote IdP metadata for SimpleSAMLphp.
    + *
    + * Remember to remove the IdPs you don't use from this file.
    + *
    + * See: https://simplesamlphp.org/docs/stable/simplesamlphp-reference-idp-remote
    + */
    +
    +/**
    + * Some IdP.
    + */
    +$metadata['https://YOUR_IDP_DOMAIN.TLD'] = [
    +  'entityid' => 'https://YOUR_IDP_DOMAIN.TLD',
    +  'name' => [
    +    'en' => 'Some IdP',
    +  ],
    +  'description' => 'Some IdP',
    +
    +  ...
    +
    +];
    +
    +

In your build process, copy the configuration files into SimpleSAMLphp (a copy sketch follows this list):

    +
      +
    • vendor/simplesamlphp/simplesamlphp/config/authsources.php
    • +
    • vendor/simplesamlphp/simplesamlphp/config/config.php
    • +
    • vendor/simplesamlphp/simplesamlphp/metadata/saml20-idp-remote.php
    • +
    +
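A sketch of that copy step, assuming your customized files live in conf/simplesamlphp as suggested above (run these commands during your build, e.g. from the cli Dockerfile):

Copy SimpleSAMLphp configuration (sketch)
cp conf/simplesamlphp/authsources.php vendor/simplesamlphp/simplesamlphp/config/authsources.php
cp conf/simplesamlphp/config.php vendor/simplesamlphp/simplesamlphp/config/config.php
cp conf/simplesamlphp/saml20-idp-remote.php vendor/simplesamlphp/simplesamlphp/metadata/saml20-idp-remote.php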

    Create NGINX conf for SimpleSAMLphp#

    +

    Create file lagoon/nginx/location_prepend_simplesamlphp.conf:

    +
    location_prepend_simplesamlphp.conf
    location ^~ /simplesaml {
    +    alias /app/vendor/simplesamlphp/simplesamlphp/www;
    +
    +    location ~ ^(?<prefix>/simplesaml)(?<phpfile>.+?\.php)(?<pathinfo>/.*)?$ {
    +        include          fastcgi_params;
    +        fastcgi_pass     ${NGINX_FASTCGI_PASS:-php}:9000;
    +        fastcgi_param    SCRIPT_FILENAME $document_root$phpfile;
    +        # Must be prepended with the baseurlpath
    +        fastcgi_param    SCRIPT_NAME /simplesaml$phpfile;
    +        fastcgi_param    PATH_INFO $pathinfo if_not_empty;
    +    }
    +}
    +
    +

This will route /simplesaml URLs to SimpleSAMLphp in the vendor directory.

    +

    Add additional NGINX conf to NGINX image#

    +

    Modify nginx.dockerfile and add location_prepend_simplesamlphp.conf to the image:

    +
    nginx.dockerfile
    ARG CLI_IMAGE
    +FROM ${CLI_IMAGE} as cli
    +
    +FROM amazeeio/nginx-drupal
    +
    +COPY --from=cli /app /app
    +
    +COPY lagoon/nginx/location_prepend_simplesamlphp.conf /etc/nginx/conf.d/drupal/location_prepend_simplesamlphp.conf
    +RUN fix-permissions /etc/nginx/conf.d/drupal/location_prepend_simplesamlphp.conf
    +
    +# Define where the Drupal Root is located
    +ENV WEBROOT=public
    +
diff --git a/using-lagoon-advanced/ssh/index.html (new file: SSH - Lagoon Documentation)

    SSH#

    +

    Lagoon allows you to connect to your running containers via SSH. The containers themselves don't actually have an SSH server installed, but instead you connect via SSH to Lagoon, which then itself creates a remote shell connection via the Kubernetes API for you.

    +

    Ensure you are set up for SSH access#

    +

    Generating an SSH Key#

    +

    It is recommended to generate a separate SSH key for each device as opposed to sharing the same key between multiple computers. Instructions for generating an SSH key on various systems can be found below:

    +

    OSX (Mac)#

    +

    Mac

    +

    Linux (Ubuntu)#

    +

    Linux

    +

    Windows#

    +

    Windows

    +

    SSH Agent#

    +

    OSX (Mac)#

    +

OSX does not configure its SSH agent to load SSH keys at startup, which can cause some headaches. You can find a handy guide to configuring this capability here: https://www.backarapper.com/add-ssh-keys-to-ssh-agent-on-startup-in-macos/

    +

    Linux#

    +

Linux distributions vary in how they use the ssh-agent. You can find a general guide here: https://www.ssh.com/academy/ssh/agent

    +

    Windows#

    +

SSH key support in Windows has improved markedly in recent versions and is now supported natively. A handy guide to configuring the Windows 10 SSH agent can be found here: https://richardballard.co.uk/ssh-keys-on-windows-10/

    +

    Uploading SSH Keys#

    +

    Via the UI#

    +

    You can upload your SSH key(s) through the UI. Log in as you normally would.

    +

    In the upper right hand corner, click on Settings:

    +

    Click "Settings" in the upper right hand corner

    +

    You will then see a page where you can upload your SSH key(s), and it will show any uploaded keys. Paste your key into the text box, give it a name, and click "Add." That's it! Add additional keys as needed.

    +

    Paste your key into the text box.

    +

    Via Command Line#

    +

A general example of using the Lagoon API via GraphQL to add an SSH key to a user can be found here.

    +

    SSH into a pod#

    +

    Connection#

    +

Connecting is straightforward and follows this pattern:

    +
    SSH
    ssh -p [PORT] -t [PROJECT-ENVIRONMENT-NAME]@[HOST]
    +
    +
      +
    • PORT - The remote shell SSH endpoint port (for amazee.io: 32222).
    • +
• HOST - The remote shell SSH endpoint host (for amazee.io: ssh.lagoon.amazeeio.cloud).
    • +
    • PROJECT-ENVIRONMENT-NAME - The environment you want to connect to. This is most commonly in the pattern PROJECTNAME-ENVIRONMENT.
    • +
    +

    As an example:

    +
    SSH example
    ssh -p 32222 -t drupal-example-main@ssh.lagoon.amazeeio.cloud
    +
    +

    This will connect you to the project drupal-example on the environment main.

    +

    Pod/Service, Container Definition#

    +

    By default, the remote shell will try to connect you to the container defined with the type cli. If you would like to connect to another pod/service you can define it via:

    +
    SSH to another service
    ssh -p [PORT] -t [PROJECT-ENVIRONMENT-NAME]@[HOST] service=[SERVICE-NAME]
    +
    +

    If your pod/service contains multiple containers, Lagoon will connect you to the first defined container. You can also define the specific container to connect to via:

    +
    Define container
    ssh -p [PORT] -t [PROJECT-ENVIRONMENT-NAME]@[HOST] service=[SERVICE-NAME] container=[CONTAINER-NAME]
    +
    +

    For example, to connect to the php container within the nginx pod:

    +
    SSH to php container
    ssh -p 32222 -t drupal-example-main@ssh.lagoon.amazeeio.cloud service=nginx container=php
    +
    +

    Copying files#

    +

The common case of copying a file into your cli pod can be achieved with the usual SSH-compatible tools.

    +

    scp#

    +
    Copy file with scp
    scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -P 32222 [local_path] [project_name]-[environment_name]@ssh.lagoon.amazeeio.cloud:[remote_path]
    +
    +

    rsync#

    +
    Copy files with rsync
    rsync --rsh='ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -p 32222' [local_path] [project_name]-[environment_name]@ssh.lagoon.amazeeio.cloud:[remote_path]
    +
    +

    tar#

    +
    Bash
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -p 32222 [project_name]-[environment_name]@ssh.lagoon.amazeeio.cloud tar -zcf - [remote_path] | tar -zxf - -C /tmp/
    +
    +

    Specifying non-CLI pod/service#

    +

    In the rare case that you need to specify a non-CLI service you can specify the service=... and/or container=... arguments in the copy command.

    +

    Piping tar through the ssh connection is the simplest method, and can be used to copy a file or directory using the usual tar flags:

    +
    Bash
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -p 32222 [project_name]-[environment_name]@ssh.lagoon.amazeeio.cloud service=solr tar -zcf - [remote_path] | tar -zxf - -C /tmp/
    +
    +

    You can also use rsync with a wrapper script to reorder the arguments to ssh in the manner required by Lagoon's SSH service:

    +
    Bash
    #!/usr/bin/env sh
    +svc=$1 user=$3 host=$4
    +shift 4
    +exec ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -p 32222 -l "$user" "$host" "$svc" "$@"
    +
    +

    Put that in an executable shell script rsh.sh and specify the service=... in the rsync command:

    +
    rsync to non-CLI pod
    rsync --rsh="/path/to/rsh.sh service=cli" /tmp/foo [project_name]-[environment_name]@ssh.lagoon.amazeeio.cloud:/tmp/foo
    +
    +

The script could also be adjusted to handle a container=... argument.
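A sketch of that adjustment (the argument positions assume rsync invokes the script as rsh.sh service=... container=... -l user host ...):

rsh.sh with container argument (sketch)
#!/usr/bin/env sh
# First two arguments are the Lagoon selectors passed via --rsh,
# then rsync appends: -l <user> <host> <command...>
svc=$1 ctr=$2 user=$4 host=$5
shift 5
exec ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -p 32222 -l "$user" "$host" "$svc" "$ctr" "$@"

You would then call it via rsync --rsh="/path/to/rsh.sh service=nginx container=php" ....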

diff --git a/using-lagoon-advanced/task-yarn-audit.png (new binary file)
diff --git a/using-lagoon-advanced/triggering-deployments/index.html (new file: Triggering Deployments - Lagoon Documentation)

    Triggering Deployments#

    +

    Trigger a new deployment using Azure Pipelines#

    +

    In order to automatically trigger new deployments using Azure Pipelines follow these instructions:

    +
      +
1. Add your deployment SSH private key to Azure as a secure file as id_rsa_lagoon. For more information about secure files have a look at the Azure Documentation Site.
2. Add the following configuration to your azure-pipelines.yml:
    +
    azure-pipelines.yml
    pool:
    +  vmImage: 'ubuntu-latest'
    +
    +stages:
    +  # .. other stages
    +  - stage: Deploy
    +    condition: and(succeeded(), in(variables['Build.SourceBranch'], 'refs/heads/staging', 'refs/heads/develop'))
    +    jobs:
    +      - job: DeployLagoon
    +        steps:
    +        - task: DownloadSecureFile@1
    +          name: lagoonSshKey
    +          displayName: 'Download Lagoon SSH key'
    +          inputs:
    +            secureFile: id_rsa_lagoon
    +        - script: |
    +            curl -L "https://github.com/amazeeio/lagoon-cli/releases/download/0.9.2/lagoon-cli-0.9.2-linux-amd64" -o ./lagoon
    +            chmod +x ./lagoon
    +          displayName: 'Download lagoon-cli'
    +        - script: ./lagoon login -i $(lagoonSshKey.secureFilePath)
    +          displayName: 'Log into Lagoon'
    +        - script: ./lagoon deploy branch -e $(Build.SourceBranchName) -p my-awesome-project -b $(Build.SourceBranchName) --force
    +          displayName: 'Trigger deployment using lagoon-cli'
    +
    +

    This will trigger a new deployment whenever changes are made on the develop or staging branch. Adjust these values accordingly so they fit your deployment strategy and configuration.

    +

    Push without deploying#

    +

    There may be a case where you want to push without a deployment. Make sure your commit message contains "[skip deploy]" or "[deploy skip]" and Lagoon will not trigger a deployment from that commit.
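For example (a hypothetical commit message):

Bash
git commit -m "Update deployment docs [skip deploy]"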

diff --git a/using-lagoon-advanced/workflows/index.html (new file: Workflows - Lagoon Documentation)

    Workflows#

    +

    Lagoon tries to support any development workflow possible. It specifically does not enforce any workflows onto teams, so that each development team can define how they would like to develop and deploy their code.

    +

    Fixed Branches#

    +

The most straightforward workflows are deployments based on some fixed branches:

    +

You define which branches (like develop, staging and main, which as regular expressions would be ^(develop|staging|main)$) Lagoon should deploy, and it will do so. Done!
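If you manage your project via the GraphQL API, the branch regex lives on the project itself. A sketch (assuming your project has ID 1; check the schema of your Lagoon version for the exact fields):

GraphQL
mutation {
  updateProject(input: {
    id: 1
    patch: { branches: "^(develop|staging|main)$" }
  }) {
    id
    branches
  }
}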

    +

If you would like to test a new feature, merge it into a branch that you have set up locally and push, and Lagoon will deploy the feature so you can test it. When all is good, merge the branch into your production branch and push.

    +

    Feature Branches#

    +

A bit more advanced are feature branches. Since Lagoon supports the ability to define the branches you would like to deploy via regular expressions, you can also extend the above regular expression to this: ^feature\/|^(staging|main)$. This will instruct Lagoon to deploy all branches that start with feature/, plus the branches called staging and main. Our development workflow could be as follows:

    +
      +
    • Create a new branch from main called feature/myfeature and push feature/myfeature.
    • +
    • Lagoon will deploy the branch feature/myfeature as a new environment, where you can test your feature independently of any other features.
    • +
    • Merge feature/myfeature into the main branch and it will deploy to your production environment.
    • +
    +

    If you like, you can also merge feature/myfeature and any other feature branches into staging first, in order to test the functionality of multiple features together. After you have tested the features together on staging, you can merge the features into main.

    +

This workflow needs a high level of branch pruning and cleanliness in your Git repository. Since each feature branch will create its own Lagoon environment, you can very quickly generate a LOT of environments, all of which will use resources. Be sure to merge or delete unused branches.

    +

    Because of this, it could make sense to think about a pull request based workflow.

    +

    Pull requests#

    +

Even more advanced are workflows via pull requests. Such workflows need a Git hosting service which supports pull requests (also called merge requests). The idea behind pull request-based workflows is that you can test a feature together with a target branch, without actually needing to merge yet, as Lagoon will do the merging for you during the build.

    +

    In our example we would configure Lagoon to deploy the branches ^(staging|main)$ and the pull requests to .* (to deploy all pull requests). Now our workflow would be:

    +
      +
1. Create a new branch from main called feature/myfeature and push feature/myfeature (no deployment will happen yet because we have only specified staging and main as our branches to be deployed).
2. Create a pull request in your Git hosting from feature/myfeature into main.
3. Lagoon will now merge the feature/myfeature branch on top of the main branch and deploy that resulting code for you.
4. Now you can test the functionality of the feature/myfeature branch just as if it had been merged into main, so all changes that have happened in main since you created the feature/myfeature branch from it will be there, and you don't need to worry that you might have an older version of the main branch.
    1. If there is a merge conflict, the build will fail, Lagoon will stop and notify you.
5. After you have tested your pull request branch, you can go back to your Git hosting and actually merge the code into main. This will now trigger a deployment of main.
6. When the pull request is merged, it is automatically closed and Lagoon will remove the environment for the pull request automatically.
    +

    Some teams might opt to create the pull request against a shared staging branch and then merge the staging branch into the main branch via another pull request. This depends on the kind of Git workflow you're using.

    +

Additionally, in Lagoon you can define that only pull requests with a specific text in the title are deployed. [BUILD] defined as a regular expression will only deploy pull requests that have a title like [BUILD] My Pull Request, while a pull request with the title My other Pull Request is not automatically deployed. This helps to keep the amount of environments small and allows for pull requests that don't need an environment yet.

    +

    Automatic Database Sync for Pull requests#

    +

    Automatic pull request environments are a fantastic thing. But it would also be handy to have the database synced from another environment when those environments are created. Lagoon can handle that!

    +

    The following example will sync the staging database on the first rollout of the pull request environment:

    +
    .lagoon.yml
    tasks:
    +  post-rollout:
    +    - run:
    +        name: IF no Drupal installed & Pullrequest = Sync database from staging
    +        command: |
    +            if [[ -n ${LAGOON_PR_BASE_BRANCH} ]] && tables=$(drush sqlq 'show tables;') && [ -z "$tables" ]; then
    +                drush -y sql-sync @staging default
    +            fi
    +        service: cli
    +        shell: bash
    +
    +

    Promotion#

    +

    Another way of deploying your code into an environment is the promotion workflow.

    +

    The idea behind the promotion workflow comes from this (as an example):

    +

If you merge the branch staging into the main branch, and there are no other changes to main, then main and staging have the exact same code in Git. It could still technically be possible that the resulting Docker images are slightly different. This is because it's possible that between the last staging deployment and the current main deployment, some upstream Docker images may have changed, or dependencies loaded from the various package managers may have changed. This is a very small chance, but it's there.

    +

    For this situation, Lagoon understands the concept of promoting Lagoon images from one environment to another. This basically means that it will take the already built and deployed Docker images from one environment, and will use those exact same Docker images for another environment.

    +

    In our example, we want to promote the Docker images from the main environment to the production environment:

    +
      +
    • First, we need a regular deployed environment with the name main. Make sure that the environment has deployed successfully.
    • +
• Also, make sure that you don't have a branch called production in your Git repository. This could lead to weird confusion (like people pushing into this branch, etc.).
    • +
    • Now trigger a promotion deployment via this curl request:
    • +
    +
    Trigger a promotion deployment
      curl -X POST \
    +      https://rest.lagoon.amazeeio.cloud/promote \
    +      -H 'Content-Type: application/json' \
    +      -d '{
    +          "projectName":"myproject",
    +          "sourceEnvironmentName": "main",
    +          "branchName": "production"
    +      }'
    +
    +

    This tells Lagoon that you want to promote from the source main to the destination production (yes, it really uses branchName as destination, which is a bit unfortunate, but it will be fixed soon).

    +

    Lagoon will now do the following:

    +
      +
    • Check out the Git branch main in order to load the .lagoon.yml and docker-compose.yml files (Lagoon still needs these in order to fully work).
    • +
    • Create all Kubernetes/OpenShift objects for the defined services in docker-compose.yml , but with LAGOON_GIT_BRANCH=production as environment variable.
    • +
    • Copy the newest images from the main environment and use them (instead of building Images or tagging them from upstream).
    • +
    • Run all post-rollout tasks like a normal deployment.
    • +
    +

    You will receive the same notifications of success or failures like any other deployment.

diff --git a/using-lagoon-the-basics/bb_webhook_1.png (new binary file)
diff --git a/using-lagoon-the-basics/build-and-deploy-process/index.html (new file: Build and Deploy Process - Lagoon Documentation)

    Build and Deploy Process#

    +

    This document describes what actually happens during a Lagoon build and deployment. It is heavily simplified from what actually happens, but it will help you to understand what is happening under the hood every time that Lagoon deploys new code for you.

    +

    Watch the video below for a walk-through of the deployment process.

    + + +

    1. Set up OpenShift Project/Kubernetes Namespace for Environment#

    +

    First, Lagoon checks if the OpenShift project/Kubernetes namespace for the given environment exists and is correctly set up. It will make sure that we have the needed service accounts, create secrets, and will configure environment variables into a ConfigMap lagoon-env which is filled with information like the environment type and name, the Lagoon project name, and so on.
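If you have kubectl access to the cluster, you can inspect that ConfigMap yourself (the namespace below is a hypothetical example following the usual projectname-environment pattern):

Bash
kubectl -n drupal-example-main get configmap lagoon-env -o yaml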

    +

    2. Git Checkout & Merge#

    +

    Next, Lagoon will check out your code from Git. It needs that to be able to read the .lagoon.yml, docker-compose.yml and any .env files, but also to build the Docker images.

    +

    Note that Lagoon will only process these actions if the branch/PR matches the branch regex set in Lagoon. Based on how the deployment has been triggered, different things will happen:

    +

    Branch Webhook Push#

    +

    If the deployment is triggered automatically via a Git webhook and is for a single branch, Lagoon will check out the Git SHA which is included in the webhook payload. This will trigger a deployment for every Git SHA pushed.

    +

    Branch REST trigger#

    +

    If you trigger a branch deployment manually via the REST API (via the UI, or GraphQL) and do NOT define a SHA in the POST payload, Lagoon will just check out the latest commit in that branch and deploy it.

    +

    Pull Requests#

    +

    If the deployment is a pull request (PR) deployment, Lagoon will load the base and the HEAD branch and SHAs for the pull request and will:

    +
      +
    • Check out the base branch (the branch the PR points to).
    • +
    • Merge the HEAD branch (the branch that the PR originates from) on top of the base branch.
    • +
    • More specifically:
        +
      • Lagoon will check out and merge particular SHAs which were sent in the webhook. Those SHAs may or may not point to the branch heads. For example, if you make a new push to a GitHub pull request, it can happen that SHA of the base branch will not point to the current base branch HEAD.
      • +
      +
    • +
    +

    If the merge fails, Lagoon will also stop and inform you about this.

    +

    3. Build Image#

    +

For each service defined in the docker-compose.yml Lagoon will check if images need to be built or not. If they need to be built, this will happen now. The order of building is based on the order they are configured in docker-compose.yml, and some build arguments are injected:

    +
      +
    • LAGOON_GIT_SHA
    • +
    • LAGOON_GIT_BRANCH
    • +
    • LAGOON_PROJECT
    • +
    • LAGOON_BUILD_TYPE (either pullrequest, branch or promote)
    • +
• LAGOON_SSH_PRIVATE_KEY - The SSH private key that is used to clone the source repository. Use RUN /lagoon/entrypoints/05-ssh-key.sh to convert the build argument into an actual key at /home/.ssh/key which will be used by SSH and Git automatically. For safety, remove the key again via RUN rm /home/.ssh/key (see the Dockerfile sketch after this list).
    • +
    • LAGOON_GIT_SOURCE_REPOSITORY - The full Git URL of the source repository.
    • +
    +
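A Dockerfile sketch of that key handling (the composer install step is just a hypothetical example of a command that needs Git access):

Dockerfile
ARG LAGOON_SSH_PRIVATE_KEY
# Convert the build argument into an actual key at /home/.ssh/key
RUN /lagoon/entrypoints/05-ssh-key.sh
# Any command that needs SSH/Git access to private repositories
RUN composer install
# For safety, remove the key again
RUN rm /home/.ssh/key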

    Also, if this is a pull request build:

    +
      +
    • LAGOON_PR_HEAD_BRANCH
    • +
    • LAGOON_PR_HEAD_SHA
    • +
    • LAGOON_PR_BASE_BRANCH
    • +
    • LAGOON_PR_BASE_SHA
    • +
    • LAGOON_PR_TITLE
    • +
    +

    Additionally, for each already built image, its name is also injected. If your docker-compose.yml is configured to first build the cli image and then the nginx image, the name of the nginx image is injected as NGINX_IMAGE.

    +

    4. Configure Kubernetes or OpenShift Services and Routes#

    +

    Next, Lagoon will configure Kubernetes or OpenShift with all services and routes that are defined from the service types, plus possible additional custom routes that you have defined in .lagoon.yml.

    +

    In this step it will expose all defined routes in the LAGOON_ROUTES as comma separated URLs. It will also define one route as the "main" route, in this order:

    +
      +
1. If custom routes defined: the first defined custom route in .lagoon.yml.
2. The first auto-generated route from a service defined in docker-compose.yml.
3. None.
    +

    The "main" route is injected via the LAGOON_ROUTE environment variable.

    +

    5. Push and Tag Images#

    +

    Now it is time to push the previously built Docker images into the internal Docker image registry.

    +

Services that didn't specify a Dockerfile to be built in docker-compose.yml and only gave an image are also tagged, which causes the internal Docker image registry to know about the images so that they can be used in containers.

    +

    6. Persistent Storage#

    +

Lagoon will now create persistent storage (PVC) for each service that needs and has requested persistent storage.

    +

    7. Cron jobs#

    +

    For each service that requests a cron job (like MariaDB), plus for each custom cron job defined in .lagoon.yml, Lagoon will now generate the cron job environment variables which are later injected into the Deployment.

    +

    8. Run defined pre-rollout tasks#

    +

    Now Lagoon will check the .lagoon.yml file for defined tasks in pre-rollout and will run them one by one in the defined services. Note that these tasks are executed on the pods currently running (so cannot utilize features or scripts that only exist in the latest commit) and therefore they are also not run on first deployments.
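For example, a pre-rollout task that dumps the database before the rollout proceeds (a sketch using the same task syntax as post-rollout tasks; the drush command and file path are illustrative):

.lagoon.yml
tasks:
  pre-rollout:
    - run:
        name: backup database before rollout
        command: drush sql-dump --result-file=/app/web/sites/default/files/pre-rollout.sql
        service: cli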

    +

    If any of them fail, Lagoon will immediately stop and notify you, and the rollout will not proceed.

    +

    9. DeploymentConfigs, Statefulsets, Daemonsets#

    +

    This is probably the most important step. Based on the defined service type, Lagoon will create the Deployment, Statefulset or Daemonsets for the service. (Note that Deployments are analogous to DeploymentConfigs in OpenShift)

    +

    It will include all previously gathered information like the cron jobs, the location of persistent storage, the pushed images and so on.

    +

    Creation of these objects will also automatically cause Kubernetes or OpenShift to trigger new deployments of the pods if necessary, like when an environment variable has changed or an image has changed. But if there is no change, there will be no deployment! This means if you only update the PHP code in your application, the Varnish, Solr, MariaDB, Redis and any other service that is defined but does not include your code will not be deployed. This makes everything much much faster.

    +

    10. Wait for all rollouts to be done#

    +

    Now Lagoon waits! It waits for all of the just-triggered deployments of the new pods to be finished, as well as for their health checks to be successful.

    +

    If any of the deployments or health checks fail, the deployment will be stopped here, and you will be informed via the defined notification systems (like Slack) that the deployment has failed.

    +

    11. Run defined post-rollout tasks#

    +

    Now Lagoon will check the .lagoon.yml file for defined tasks in post-rollout and will run them one by one in the defined services.

    +

    If any of them fail, Lagoon will immediately stop and notify you.

    +

    12. Success#

    +

    If all went well and nothing threw any errors, Lagoon will mark this build as successful and inform you via defined notifications. ✅

diff --git a/using-lagoon-the-basics/configure-webhooks/index.html (new file: Configure Webhooks - Lagoon Documentation)

    Configure Webhooks#

    +

    Your Lagoon administrator will also give you the route to the webhook-handler. You will add this to your repository as an outgoing webhook, and choose which events to send to Lagoon. Typically, you will send all push and pull request events. In Lagoon it is possible to add a regular expression to determine which branches and pull requests actually result in a deploy, and your Lagoon administrator can set that up for you. For example, all branches that start with feature- could be deployed to Lagoon.

    + +
    +Info for amazee.io customers +

    If you are an amazee.io customer, the route to the webhook-handler is: https://hooks.lagoon.amazeeio.cloud. +

    +
    +
    +

    Danger

    +

    Managing the following settings will require you to have a high level of access to these repositories, which will be controlled by your organization. If you cannot access these settings, please contact your systems administrator or the appropriate person within your organization.

    +
    +

    GitHub#

    +
      +
1. Proceed to Settings -> Webhooks -> Add webhook in your GitHub repository. (Adding webhook in GitHub.)
2. The Payload URL is the route to the webhook-handler of your Lagoon instance, provided by your Lagoon administrator.
3. Set Content type to application/json. (Add the Payload URL and set the Content type.)
4. Choose "Let me select individual events."
5. Choose which events will trigger your webhook. We suggest that you send Push and Pull request events, and then filter further in the Lagoon configuration of your project. (Select the webhook event triggers in GitHub.)
6. Make sure the webhook is set to Active.
7. Click Add webhook to save your configuration.

    GitLab#

    +
      +
1. Navigate to Settings -> Integrations in your GitLab repository. (Go to Settings > Integrations in your GitLab repository.)
2. The URL is the route to the webhook-handler of your Lagoon instance, provided by your Lagoon administrator.
3. Select the Trigger events which will send a notification to Lagoon. We suggest that you send Push events and Merge request events, and then filter further in the Lagoon configuration of your project. (Selecting Trigger events in GitLab.)
4. Click Add webhook to save your configuration.
    +

    Bitbucket#

    +
      +
1. Navigate to Settings -> Webhooks -> Add new webhook in your repository.
2. Title is for your reference.
3. URL is the route to the webhook-handler of your Lagoon instance, provided by your Lagoon administrator.
4. Choose from a full list of triggers and select the following:
    • Repository
        • Push
    • Pull Request
        • Created
        • Updated
        • Approved
        • Approval removed
        • Merged
        • Declined
   (Select the Bitbucket Triggers for your webhook.)
5. Click Save to save the webhook configuration for Bitbucket.
diff --git a/using-lagoon-the-basics/docker-compose-yml/index.html (new file: docker-compose.yml - Lagoon Documentation)

    docker-compose.yml#

    +

    The docker-compose.yml file is used by Lagoon to:

    +
      +
    • Learn which services/containers should be deployed.
    • +
    • Define how the images for the containers are built.
    • +
    • Define additional configurations like persistent volumes.
    • +
    +

    Docker Compose (the tool) is very strict in validating the content of the YAML file, so we can only do configuration within labels of a service definition.

    +
    +

    Warning

    +

    Lagoon only reads the labels, service names, image names and build definitions from a docker-compose.yml file. Definitions like: ports, environment variables, volumes, networks, links, users, etc. are IGNORED.

    +
    +

    This is intentional, as the docker-compose file is there to define your local environment configuration. Lagoon learns from the lagoon.type the type of service you are deploying and from that knows about ports, networks and any additional configuration that this service might need.

    +

Here is a straightforward example of a docker-compose.yml file for Drupal:

    +
    docker-compose.yml
    version: '2.3'
    +
    +x-lagoon-project:
    +  # Lagoon project name (leave `&lagoon-project` when you edit this)
    +  &lagoon-project drupal-example
    +
    +x-volumes:
    +  &default-volumes
    +    # Define all volumes you would like to have real-time mounted into the docker containers
    +    volumes:
    +      - .:/app:delegated
    +
    +x-environment:
    +  &default-environment
    +    LAGOON_PROJECT: *lagoon-project
    +    # Route that should be used locally, if you are using pygmy, this route *must* end with .docker.amazee.io
    +    LAGOON_ROUTE: http://drupal-example.docker.amazee.io
    +    # Uncomment if you want to have the system behave as it will in production
    +    #LAGOON_ENVIRONMENT_TYPE: production
    +    # Uncomment to enable Xdebug and then restart via `docker-compose up -d`
    +    #XDEBUG_ENABLE: "true"
    +
    +x-user:
    +  &default-user
    +    # The default user under which the containers should run. Change this if you are on linux and run with another user than ID `1000`
    +    user: '1000'
    +
    +services:
    +
    +  nginx:
    +    build:
    +      context: .
    +      dockerfile: nginx.dockerfile
    +    labels:
    +      lagoon.type: nginx-php-persistent # (1)
    +      lagoon.persistent: /app/web/sites/default/files/
    +
    +  php:
    +    build:
    +      context: .
    +      dockerfile: php.dockerfile
    +    labels:
    +      lagoon.type: nginx-php-persistent # (2)
    +      lagoon.name: nginx
    +      lagoon.persistent: /app/web/sites/default/files/
    +
    +  mariadb:
    +    image: amazeeio/mariadb-drupal
    +    labels:
    +      lagoon.type: mariadb
    +
    +
      +
1. Note the multi-container pods here.
2. Note the multi-container pods here.
    +

    Basic settings#

    +

    x-lagoon-project:

    +

    This is the machine name of your project, define it here. We’ll use “drupal-example.”

    +

    x-volumes:

    +

    This tells Lagoon what to mount into the container. Your web application lives in /app, but you can add or change this if needed.

    +

    x-environment:

    +
      +
1. Here you can set your local development URL. If you are using pygmy, it must end with .docker.amazee.io.
2. If you want to exactly mimic the production environment, uncomment LAGOON_ENVIRONMENT_TYPE: production.
3. If you want to enable Xdebug, uncomment XDEBUG_ENABLE: "true".
    +

    x-user:

    +

    You are unlikely to need to change this, unless you are on Linux and would like to run with a user other than 1000.

    +

    services#

    +

This defines all the services you want to deploy. Unfortunately, Docker Compose calls them services, even though they are actually containers. Going forward we'll be calling them services throughout this documentation.

    +

    The name of the service (nginx, php, and mariadb in the example above) is used by Lagoon as the name of the Kubernetes pod (yet another term - again, we'll be calling them services) that is generated, plus also any additional Kubernetes objects that are created based on the defined lagoon.type, which could be things like services, routes, persistent storage, etc.

    +

Please note that service names must adhere to the RFC 1035 DNS label standard. Service names must:

    +
      +
    • contain at most 63 characters
    • +
    • contain only lowercase alphanumeric characters or '-'
    • +
    • start with an alphabetic character
    • +
    • end with an alphanumeric character
    • +
    +
    +

    Warning

    +

Once you have set the name of a service, do NOT rename it. This will cause all kinds of havoc in your containers and break things.

    +
    +

    Docker Images#

    +

    build#

    +

    If you want Lagoon to build a Dockerfile for your service during every deployment, you can define it here:

    +

    build

    +
      +
    • context
        +
      • The build context path that should be passed on into the docker build command.
      • +
      +
    • +
    • dockerfile:
        +
      • Location and name of the Dockerfile that should be built.
      • +
      +
    • +
    +
    +

    Warning

    +

    Lagoon does NOT support the short version of build: <Dockerfile> and will fail if it finds such a definition.

    +
    +

    image#

    +

If you don't need to build a Dockerfile and just want to use an existing image, define it via image.

    +

    Types#

    +

    Lagoon needs to know what type of service you are deploying in order to configure the correct Kubernetes or OpenShift objects.

    +

    This is done via the lagoon.type label. There are many different types to choose from. Check Service Types to see all of them and their additional configuration possibilities.

    +

    Skip/Ignore containers#

    +

    If you'd like Lagoon to ignore a service completely - for example, you need a container only during local development - give it the type none.
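For example, a mail-catching container that should only run during local development (the mailhog service is illustrative):

docker-compose.yml
  mailhog:
    image: mailhog/mailhog
    labels:
      lagoon.type: none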

    +

    Persistent Storage#

    +

Some containers need persistent storage. Lagoon allows each container to have a maximum of one persistent storage volume attached to it. You can configure the container to request its own persistent storage volume (which can then be mounted by other containers), or you can tell the container to mount the persistent storage created by another container.

    +

In many cases, Lagoon knows where that persistent storage needs to go. For example, for a MariaDB container, Lagoon knows that the persistent storage should be put into /var/lib/mysql, and puts it there automatically without any extra configuration. For some situations, though, Lagoon needs your help to know where to put the persistent storage (see the example after this list):

    +
      +
    • lagoon.persistent - The absolute path where the persistent storage should be mounted (the above example uses /app/web/sites/default/files/ which is where Drupal expects its persistent storage).
    • +
    • lagoon.persistent.name - Tells Lagoon to not create a new persistent storage for that service, but instead mounts the persistent storage of another defined service into this service.
    • +
    • lagoon.persistent.size - The size of persistent storage you require (Lagoon usually gives you minimum 5G of persistent storage, if you need more, define it here).
    • +
    • lagoon.persistent.class - By default Lagoon automatically assigns the right storage class for your service (like SSDs for MySQL, bulk storage for Nginx, etc.). If you need to overwrite this, you can do so here. This is highly dependent on the underlying Kubernetes/OpenShift that Lagoon runs on. Ask your Lagoon administrator about this.
    • +
    +
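A sketch combining these labels (the service name, mount path and size are illustrative):

docker-compose.yml
  files:
    build:
      context: .
      dockerfile: files.dockerfile
    labels:
      lagoon.type: basic-persistent
      lagoon.persistent: /app/files/
      lagoon.persistent.size: 10Gi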

    Auto-generated Routes#

    +

The docker-compose.yml file also supports per-service enabling and disabling of auto-generated routes:

    +
      +
• The lagoon.autogeneratedroute: false label will stop a route from being automatically created for that service. It can be applied to all services with auto-generated routes, but is mostly useful for the basic and basic-persistent service types when used to create an additional internal-facing service for a database service or similar. The inverse is also true - it will enable an auto-generated route for a service when the .lagoon.yml file disables them. (See the snippet after this list.)
    • +
    +
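A minimal sketch (the service name is hypothetical) of a basic service with its autogenerated route disabled:

docker-compose.yml
internal-api:
  labels:
    lagoon.type: basic
    lagoon.autogeneratedroute: false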

    Multi-Container Pods#

    +

Kubernetes and OpenShift don't deploy plain containers. Instead, they deploy pods, each with one or more containers. Usually Lagoon creates a single pod with one container inside for each defined docker-compose service. For some cases, we need to put two containers inside a single pod, as these containers are so dependent on each other that they should always stay together. An example of such a situation is the PHP and NGINX containers that both contain the PHP code of a web application like Drupal.

    +

For these cases, it is possible to tell Lagoon which services should stay together, which is done in the following way (remember that we are calling containers services because of docker-compose):

    +
1. Define both services with a lagoon.type that expects two services (in the example this is nginx-php-persistent, defined on the nginx and php services).
2. Link the second service to the first one by setting the label lagoon.name of the second one to the name of the first one (in the example this is done by defining lagoon.name: nginx).

    This will cause Lagoon to realize that the nginx and php containers are combined in a pod that will be called nginx.

    +
    +

    Warning

    +

Once you have set the lagoon.name of a service, do NOT rename it. This will cause all kinds of havoc in your containers and break things.

    +
    +

Lagoon still needs to understand which of the two services is the actual individual service type (nginx and php in this case). It does this by looking for services whose names match the names expected by the type, so nginx-php-persistent expects one service named nginx and one named php in the docker-compose.yml. If for any reason you want to use different names for the services, or you need more than one pod with the type nginx-php-persistent, there is an additional label lagoon.deployment.servicetype which can be used to define the actual service type.

    +

    An example:

docker-compose.yml
nginx:
    build:
      context: .
      dockerfile: nginx.dockerfile
    labels:
      lagoon.type: nginx-php-persistent
      lagoon.persistent: /app/web/sites/default/files/
      lagoon.name: nginx # If this isn't present, Lagoon will use the container name, which in this case is nginx.
      lagoon.deployment.servicetype: nginx
php:
    build:
      context: .
      dockerfile: php.dockerfile
    labels:
      lagoon.type: nginx-php-persistent
      lagoon.persistent: /app/web/sites/default/files/
      lagoon.name: nginx # We want this service to be part of the NGINX pod in Lagoon.
      lagoon.deployment.servicetype: php

    In the example above, the services are named nginx and php (but you can call them whatever you want). The lagoon.name tells Lagoon which services go together - all of the services with the same name go together.

    +

    In order for Lagoon to realize which one is the nginx and which one is the php service, we define it via lagoon.deployment.servicetype: nginx and lagoon.deployment.servicetype: php.

    +

    Helm Templates (Kubernetes only)#

    +

    Lagoon uses Helm for templating on Kubernetes. To do this, a series of Charts are included with the build-deploy-tool image.

    +

    Custom Rollout Monitor Types#

    +

By default, Lagoon expects that services from custom templates are rolled out via a DeploymentConfig object within Kubernetes or OpenShift, and it monitors the rollout based on this object. In some cases, services defined via custom templates need a different way of monitoring. This can be defined via lagoon.rollout:

• deploymentconfig - This is the default. Expects a DeploymentConfig object in the template for the service.
• statefulset - Expects a StatefulSet object in the template for the service.
• daemonset - Expects a DaemonSet object in the template for the service.
• false - Will not monitor any rollouts, and will just be happy if the template applies and does not throw any errors.

    You can also overwrite the rollout for just one specific environment. This is done in .lagoon.yml.

    +
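A minimal sketch of setting the rollout monitor via the lagoon.rollout label (the service, template file, and type are illustrative; this only applies to services deployed via custom templates):

docker-compose.yml
mariadb:
  labels:
    lagoon.type: mariadb
    lagoon.template: mariadb.deployment.yml
    lagoon.rollout: statefulset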

    BuildKit and Docker Compose v2#

    +

    BuildKit is a toolkit for converting source code to build artifacts in an efficient, expressive and repeatable manner.

    +

    With the release of Lagoon v2.11.0, Lagoon now provides support for BuildKit-based docker-compose builds. To enable BuildKit for your Project or Environment, add DOCKER_BUILDKIT=1 as a build-time variable.

    +
    +

    Bug

    +

    Note that while using BuildKit locally, you may experience some known issues.

    +
    +
• Failed to solve with frontend dockerfile.v0: failed to create LLB definition: pull access denied, repository does not exist or may require authorization: This message means that your build has tried to access a Docker image that hasn't been built yet. Because BuildKit builds in parallel, this can happen if you have a Docker image that inherits another one (as we do in Drupal with the CLI image). You can use the target field inside the build to reconfigure it as a multi-stage build.
• Issues with volumes_from in Docker Compose v2 - this directive (which provides SSH access into locally running containers) has been deprecated by Docker Compose. The section can be removed from your docker-compose.yml file if you don't require SSH access from inside your local environment, or can be worked around on a project-by-project basis - see https://github.com/pygmystack/pygmy/issues/333#issuecomment-1274091375 for more information.
diff --git a/using-lagoon-the-basics/first-deployment/index.html b/using-lagoon-the-basics/first-deployment/index.html

    First Deployment#

    +


    +
    +

    Note

    +

    If you are deploying a Drupal Project, skip this and read the Drupal-specific first deployment documentation.

    +
    +

    1. Make sure you are ready#

    +

In order to make your first deployment a successful one, please make sure that your project is Lagoonized and that you have set up the project in Lagoon. If not, or if you're not sure, don't worry: go back and follow the Step-by-Step Guides, which show you how this works, and then come back and deploy!

    +

    2. Push#

    +

    With Lagoon, you create a new deployment by pushing into a branch that is configured to be deployed.

    +

    If you don't have any new code to push, don't worry! Run:

    +
Git push
git commit --allow-empty -m "go, go! Power Rangers!"
git push

    This will trigger a push, and your Git hosting will inform Lagoon about this push via the configured webhook.

    +

    If all is correct, you should see a notification in your configured chat system (this has been configured by your friendly Lagoon administrator):

    +

    Slack notification that a push has been made in a Lagoonized repository.

    +

    This informs you that Lagoon has just started to deploy your code. Depending on the size of the code and amount of containers, this will take a couple of seconds. Just relax. If you want to know what's happening now, check out the Build and Deploy Process of Lagoon.

    +

    You can also check your Lagoon UI to see the progress of any deployment (your Lagoon administrator has the info).

    +

    3. It's done#

    +

As soon as Lagoon is done building and deploying, it will send a second notification to the chat system. Here's an example:

    +

    Slack notification of a successful Lagoon build and deployment.

    +

    It tells you:

    +
      +
    • Which project has been deployed.
    • +
    • Which branch and Git SHA have been deployed.
    • +
    • A link to the full logs of the build and deployment.
    • +
    • Links to all routes (URLs) where the environment can be reached.
    • +
    +

You can also quickly tell what kind of notification it is by the emoji at the beginning - whether it's just info that the build has started, a success, or a failure.

    +

That's it! We hope that wasn't too hard - making DevOps accessible is what we are striving for!

    +

    But wait, how about other branches or the production environment?#

    +

That's the beauty of Lagoon: it's exactly the same! Just push another branch that is configured to be deployed, and that branch will be deployed.

    +

    Failure? Don't worry#

    +

    Did the deployment fail? Oh no! But we're here to help:

    +
1. If you deployed a Drupal site, make sure to read the Drupal-specific first deployment documentation, which explains why this happens.
2. Click on the Logs link in the error notification; it will tell you where in the deployment process the failure happened.
3. If you can't figure it out, just ask your Lagoon support - we are here to help!
4. Reach out to us in your support channel or in the community Discord.
diff --git a/using-lagoon-the-basics/going-live/index.html b/using-lagoon-the-basics/going-live/index.html

    Going Live#

    +

    Congratulations, you're this close to going live with your website on Lagoon! In order to make this as seamless as possible, we've got this final checklist for you. It leads you through the last few things you should check before taking your site live.

    +

    Check your .lagoon.yml#

    +

    Routes / SSL#

    +

    Check to be sure that all routes have been set up in your .lagoon.yml. Be aware that if you don't point the domains towards Lagoon, you should disable Let's Encrypt (LE) certificate creation, as it will lead to issues. Domains not pointing towards Lagoon will be disabled after a while in order to not exceed the Let's Encrypt quotas.

    +

If you use Certificate Authority (CA) signed certificates, you can set tls-acme to false, but leave the insecure flag set to Allow or Redirect. In the case of CA certificates, let your Lagoon administrator know the routes and the SSL certificate that needs to be put in place.

    +
.lagoon.yml
environments:
  main:
    routes:
      - nginx:
        - example.com:
            tls-acme: 'false'
            insecure: Allow
        - www.example.com:
            tls-acme: 'false'
            insecure: Allow

As soon as the DNS entries point towards your Lagoon installation, you can switch the flags: set tls-acme to true and insecure to Redirect.

    +
.lagoon.yml
environments:
  main:
    routes:
      - nginx:
        - example.com:
            tls-acme: 'true'
            insecure: Redirect
        - www.example.com:
            tls-acme: 'true'
            insecure: Redirect
    +

    Note

    +

As checking every page of your website might be a bit of a tedious job, you can make use of mixed-content-scan. This will crawl the entire site and give you back pages that include assets from a non-HTTPS site.

    +
    +

    Redirects#

    +

    If you need non-www to www redirects, make sure you have them set up in the redirects-map.conf - see Documentation.

    +

    Cron jobs#

    +

    Check if your cron jobs have been set up for your production environment - see .lagoon.yml.

    +

    DNS#

    +

To make it as smooth as possible for you to get your site pointing to our servers, we have dedicated load-balancer DNS records. Those technical DNS resource records are used for getting your site linked to the amazee.io infrastructure and serve no other purpose. If you are in doubt about the CNAME record, ask your Lagoon administrator about the exact CNAME you need to set up.

    +

Example on amazee.io: <region-identifier>.amazee.io

    +

Before you switch your domain over to Lagoon, make sure you lower the Time-to-Live (TTL). This will ensure that the switch from the old to the new servers will go quickly. We usually advise a TTL of 300-600 seconds prior to the DNS switch. More information about TTL.

    + +

    The recommended method of pointing your domain's DNS records at Lagoon is via a CNAME record as shown below:

    + +

    CNAME: cdn.amazee.io

    +

    Alternate Settings for Fastly (A records):#

    +

    If your DNS provider does not support the use of CNAME records, you can use these A records instead. Please ensure you set up individual records for each IP listed below:

    +
      +
    • A: 151.101.2.191
    • +
    • A: 151.101.66.191
    • +
    • A: 151.101.130.191
    • +
    • A: 151.101.194.191
    • +
    +
    +

    Note

    +

We do not suggest configuring any static IP addresses in your DNS zones. The Lagoon load balancer infrastructure may change over time, which can impact your site's availability if you have configured a static IP address.

    +
    +

    Root Domains#

    +

Configuring the root domain (e.g. example.com) can be a bit tricky because the DNS specification does not allow the root domain to point to a CNAME entry. The name of the workaround record type differs depending on your DNS provider.

    + +

    If your DNS provider needs an IP address for the root domain, get in touch with your Lagoon administrator to give you the load balancer IP addresses.

    +

    Production environment#

    +

    Lagoon understands the concept of development and production environments. Development environments automatically send noindex and nofollow headers in order to prohibit indexing by search engines.

    +

    X-Robots-Tag: noindex, nofollow

    +

    During project setup, the production environment should already be defined. If that's omitted, your environment will run in development mode. You can check if the environment is set as production environment in the Lagoon user interface. If the production environment is not set, let your Lagoon administrator know, and they will configure the system accordingly.

    +

    The production environment is labelled in green on the left.

diff --git a/using-lagoon-the-basics/index.html b/using-lagoon-the-basics/index.html

    Overview#

    +

    Requirements#

    +

    Docker#

    +

    To run a Lagoon Project, your system must meet the requirements to run Docker. We suggest installing the latest version of Docker for your workstation. You can download Docker here. We also suggest allowing Docker at least 4 CPUs and 4 GB RAM.

    +

    Local Development Environments#

    +

    TL;DR: install and start pygmy:

    +
Bash
brew tap pygmystack/pygmy # (1)
brew install pygmy
pygmy up

1. Homebrew is the easiest way to install pygmy; see the docs for more info.

    Pygmy is a container stack for local development, developed collaboratively with the Lagoon team.

    +

    Learn more about Lagoon, pygmy, and Local Development Environments

    +

    Step by Step Guides#

    + +

    Overview of Lagoon Configuration Files#

    +

    .lagoon.yml#

    +

    This is the main file that will be used by Lagoon to understand what should be deployed, as well as many other things. See documentation for .lagoon.yml.

    +

    docker-compose.yml#

    +

    This file is used by Docker Compose to start your local development environment. Lagoon also uses it to understand which of the services should be deployed, which type, and how to build them. This happens via labels. See documentation for docker-compose.yml.

    +

    Dockerfiles#

    +

Some Docker images and containers need additional customizations beyond the provided images. There are usually two reasons for this:

    +
1. Application code: Containers like NGINX, PHP, Node.js, etc., need the actual programming code within their images. This is done during a Docker build step, which is configured in a Dockerfile. Lagoon has full support for Docker, and therefore also allows you full control over the resulting images via Dockerfile customizations.
2. Customization of images: Lagoon also allows you to customize the base images according to your needs. This can be to inject an additional environment variable, change a service configuration, or even install additional tools. We advise caution with installing additional tools to the Docker images, as you will need to maintain any adaptations in the future!

    Supported Services & Base Images by Lagoon#

Type        | Versions                   | Dockerfile
MariaDB     | 10.4, 10.5, 10.6, 10.11    | mariadb/Dockerfile
PostgreSQL  | 11, 12, 13, 14, 15         | postgres/Dockerfile
MongoDB     | 4                          | mongo/Dockerfile
NGINX       | openresty/1.21             | nginx/Dockerfile
Node.js     | 16, 18, 20                 | node/Dockerfile
PHP FPM     | 8.0, 8.1, 8.2              | php/fpm/Dockerfile
PHP CLI     | 8.0, 8.1, 8.2              | php/cli/Dockerfile
Python      | 3.7, 3.8, 3.9, 3.10, 3.11  | python/Dockerfile
Redis       | 5, 6, 7                    | redis/Dockerfile
Solr        | 7, 8                       | solr/Dockerfile
Varnish     | 5, 6, 7                    | varnish/Dockerfile
OpenSearch  | 2                          | opensearch/Dockerfiles
RabbitMQ    | 3.10                       | rabbitmq/Dockerfile
Ruby        | 3.0, 3.1, 3.2              | ruby/Dockerfile

    All images are pushed to https://hub.docker.com/u/uselagoon. We suggest always using the latest tag (like uselagoon/nginx:latest) as they are kept up to date in terms of features and security.

    +

    If you choose to use a specific Lagoon version of an image like uselagoon/nginx:20.10.0 or uselagoon/node-10:20.10.0 it is your own responsibility to upgrade the version of the images as soon as a new Lagoon version is released!

diff --git a/using-lagoon-the-basics/lagoon-yml/index.html b/using-lagoon-the-basics/lagoon-yml/index.html

    .lagoon.yml#

    +

The .lagoon.yml file is the central file to set up your project. It contains the configuration needed to define routes, tasks, environments, and the other settings described on this page.

    + +

    The .lagoon.yml file must be placed at the root of your Git repository.

    +

    General Settings#

    +

    docker-compose-yaml#

    +

    Tells the build script which Docker Compose YAML file should be used, in order to learn which services and containers should be deployed. This defaults to docker-compose.yml, but could be used for a specific Lagoon Docker Compose YAML file if needed.

    +
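For example, a sketch pointing Lagoon at a dedicated Compose file (the file name is illustrative):

.lagoon.yml
docker-compose-yaml: docker-compose.lagoon.yml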

    environment_variables.git_sha#

    +

    This setting allows you to enable injecting the deployed Git SHA into your project as an environment variable. By default this is disabled. Setting the value to true sets the SHA as the environment variable LAGOON_GIT_SHA.

    +
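Enabling it looks like this (the same setting also appears in the full example at the end of this page):

.lagoon.yml
environment_variables:
  git_sha: 'true'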

    Routes#

    +

Routes are used to direct traffic to services. Each service in an environment can have routes, in which the domain names are defined manually or automatically. The top-level routes section applies to all routes in all environments.

    +

    routes.autogenerate#

    +

This allows you to configure automatically created routes. Manual routes are defined per environment.

    +
• enabled: Set to false to disable autogenerated routes. Default is true.
• allowPullrequests: Set to true to override enabled: false for pull requests.

  .lagoon.yml
  routes:
    autogenerate:
      enabled: false
      allowPullrequests: true
• insecure: Configures HTTP connections. Default is Allow.
  • Allow: Route will respond to HTTP and HTTPS.
  • Redirect: Route will redirect any HTTP request to HTTPS.
• prefixes: Configure prefixes for the autogenerated routes of each environment. This is useful for things like language prefix domains, or a multi-domain site using the Drupal domain module.

  .lagoon.yml
  routes:
    autogenerate:
      prefixes:
      - www
      - de
      - fr
      - it

    Tasks#

    +

There are different types of tasks you can define, and they differ in when exactly they are executed in a build flow:

    +

    Pre-Rollout Tasks - pre_rollout.[i].run#

    +

    Here you can specify tasks which will run against your project after all images have been successfully built, but before:

    +
      +
    • Any running containers are updated with the newly built images.
    • +
    • Any other changes are made to your existing environment.
    • +
    +

This feature enables you to, for example, create a database dump before updating your application. This can make it easier to roll back in case of a problem with the deploy.

    +
    +

    Info

    +

    The pre-rollout tasks run in the existing pods before they are updated, which means:

    +
      +
    • Changes made to your Dockerfile since the last deploy will not be visible when pre-rollout tasks run.
    • +
    • If there are no existing containers (e.g. on the initial deployment of a new environment), pre-rollout tasks are skipped.
    • +
    +
    +
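A minimal sketch of a pre-rollout task (the command and dump path are illustrative), along the lines of the database dump in the full example at the end of this page:

.lagoon.yml
tasks:
  pre-rollout:
    - run:
        name: backup database
        command: drush sql-dump --gzip --result-file=/app/pre-deploy-dump.sql.gz
        service: cli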

    Post-Rollout Tasks - post_rollout.[i].run#

    +

    Here you can specify tasks which need to run against your project, after:

    +
• All images have been successfully built.
• All containers are updated with the new images.
• All containers are running and have passed their readiness checks.

    Common uses for post-rollout tasks include running drush updb, drush cim, or clearing various caches.

    +
• name
  • The name is an arbitrary label for making it easier to identify each task in the logs.
• command
  • Here you specify what command should run. These are run in the WORKDIR of each container; for Lagoon images this is /app. Keep this in mind if you need to cd into a specific location to run your task.
• service
  • The service in which to run the task. If following our Drupal example, this will be the CLI container, as it has all your site code, files, and a connection to the database. Typically you do not need to change this.
• container
  • If the service has multiple containers (e.g. nginx-php), you will need to specify which container in the pod to connect to (e.g. the php container within the nginx pod).
• shell
  • In which shell the task should be run. By default sh is used, but if the container also has other shells (like bash), you can define it here. This is useful if you want to run some small if/else bash scripts within the post-rollouts. See the example below to learn how to write a script with multiple lines.
• when
  • The "when" clause allows for the conditional running of tasks. It expects an expression that will evaluate to a true/false value, which determines whether the task should be run.

    Note: If you would like to temporarily disable pre/post-rollout tasks during a deployment, you can set either of the following environment variables in the API at the project or environment level (see how on Environment Variables).

    +
      +
    • LAGOON_PREROLLOUT_DISABLED=true
    • +
    • LAGOON_POSTROLLOUT_DISABLED=true
    • +
    +

    Example post-rollout tasks#

    +

    Here are some useful examples of post-rollout tasks that you may want to use or adapt for your projects.

    +

    Run only if Drupal not installed:

    +
.lagoon.yml
- run:
    name: IF no Drupal installed
    command: | # (1)
      if tables=$(drush sqlq "show tables like 'node';") && [ -z "$tables" ]; then
        #### whatever you like
      fi
    service: cli
    shell: bash

1. This shows how to create a multi-line command.

    Different tasks based on branch name:

    +
.lagoon.yml
- run:
    name: Different tasks based on branch name
    command: |
        ### Runs if current branch is not 'production'
    service: cli
    when: LAGOON_GIT_BRANCH != "production"

    Run shell script:

    +
.lagoon.yml
- run:
    name: Run Script
    command: './scripts/script.sh'
    service: cli

    Target specific container in pod:

    +
.lagoon.yml
- run:
    name: show php env variables
    command: env
    service: nginx
    container: php

    Drupal & Drush 9: Sync database & files from master environment:

    +
.lagoon.yml
- run:
    name: Sync DB and Files from master if we are not on master
    command: |
      # Only if we don't have a database yet
      if tables=$(drush sqlq 'show tables;') && [ -z "$tables" ]; then
          drush sql-sync @lagoon.master @self # (1)
          drush rsync @lagoon.master:%files @self:%files -- --omit-dir-times --no-perms --no-group --no-owner --chmod=ugo=rwX
      fi
    service: cli
    when: LAGOON_ENVIRONMENT_TYPE != "production"

1. Make sure to use the correct aliases for your project here.

    Backup Retention#

    +

    backup-retention.production.monthly#

    +

    Specify the number of monthly backups Lagoon should retain for your project's production environment(s).

    +

    The global default is 1 if this value is not specified.

    +

    backup-retention.production.weekly#

    +

    Specify the number of weekly backups Lagoon should retain for your project's production environment(s).

    +

    The global default is 6 if this value is not specified.

    +

    backup-retention.production.daily#

    +

    Specify the number of daily backups Lagoon should retain for your project's production environment(s).

    +

    The global default is 7 if this value is not specified.

    +

    backup-retention.production.hourly#

    +

    Specify the number of hourly backups Lagoon should retain for your project's production environment(s).

    +

    The global default is 0 if this value is not specified.

    +
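Putting the four retention keys together, a sketch that simply restates the global defaults:

.lagoon.yml
backup-retention:
  production:
    monthly: 1
    weekly: 6
    daily: 7
    hourly: 0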

    Backup Schedule#

    +

    backup-schedule.production#

    +

    Specify the backup schedule for this project. Accepts cron-compatible syntax with the notable exception that the Minute block must be the letter M. Any other value in the Minute block will cause the Lagoon build to fail. This allows Lagoon to randomly choose a specific minute for these backups to happen, while users can specify the remainder of the schedule down to the hour.

    +

    The global default is M H(22-2) * * * if this value is not specified. Take note that these backups will use the cluster's local timezone.

    +
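For example, a sketch restating the global default (a backup every day between 22:00 and 02:00, at a random minute):

.lagoon.yml
backup-schedule:
  production: "M H(22-2) * * *"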

    Environments#

    +

Environment names match your deployed branches or pull requests. This allows each environment to have a different configuration. In our example it will apply to the main and staging environments.

    +

    environments.[name].routes#

    + + +

Manual routes are domain names that are configured per environment to direct traffic to a service. Since all environments get automatically created routes by default, it is typical that manual routes are only set up for the production environment, using the main domain of the project's website, like www.example.com.

    +
    +

    Tip

    +

Since Lagoon has no control over the manual routes, you'll need to ensure the DNS records are configured properly at your DNS provider. You can likely set a CNAME record to point to the automatic route.

    +
    +

The first element after the environment is the target service, nginx in our example. This is how we identify which service incoming requests will be sent to.

    +

The simplest route is example.com, as seen in our example .lagoon.yml - you can see it has no additional configuration. This will assume that you want a Let's Encrypt certificate for your route and no redirect from HTTP to HTTPS.

    +

    In the "www.example.com" example below, we see three more options (also +notice the : at the end of the route and that the route is wrapped in ", +that's important!):

    +
.lagoon.yml
- "www.example.com":
    tls-acme: true
    insecure: Redirect
    hstsEnabled: true

    SSL Configuration tls-acme#

    +
• tls-acme: Configures automatic TLS certificate generation via Let's Encrypt. Default is true; set to false to disable automatic certificates.
• insecure: Configures HTTP connections. Default is Allow.
  • Allow: Route will respond to HTTP and HTTPS.
  • Redirect: Route will redirect any HTTP request to HTTPS.
• hstsEnabled: Adds the Strict-Transport-Security header. Default is false.
• hstsMaxAge: Configures the max-age directive. Default is 31536000 (1 year).
• hstsPreload: Sets the preload directive. Default is false.
• hstsIncludeSubdomains: Sets the includeSubDomains directive. Default is false.
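Putting these options together, a sketch of a route using all of the HSTS settings (the values are illustrative, not recommendations):

.lagoon.yml
- "www.example.com":
    tls-acme: true
    insecure: Redirect
    hstsEnabled: true
    hstsMaxAge: 63072000
    hstsPreload: true
    hstsIncludeSubdomains: true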
    +

    Info

    +

If you plan to switch from an SSL certificate signed by a Certificate Authority (CA) to a Let's Encrypt certificate, it's best to get in touch with your Lagoon administrator to oversee the transition. There are known issues during the transition. The workaround would be manually removing the CA certificate and then triggering the Let's Encrypt process.

    +
    +

    Monitoring a specific path#

    +

    When UptimeRobot is configured for your cluster (Kubernetes or OpenShift), Lagoon will inject annotations to each route/ingress for use by the stakater/IngressControllerMonitor. The default action is to monitor the homepage of the route. If you have a specific route to be monitored, this can be overridden by adding a monitoring-path to your route specification. A common use is to set up a path for monitoring which bypasses caching to give a more real-time monitoring of your site.

    +
.lagoon.yml
- "www.example.com":
    monitoring-path: "/bypass-cache"

    Ingress annotations#

    +
    +

    Warning

    +

    Route/Ingress annotations are only supported by projects that deploy into clusters that run nginx-ingress controllers! Check with your Lagoon administrator if this is supported.

    +
    + +

    Restrictions#

    +

Some annotations are disallowed or partially restricted in Lagoon. The table below describes these rules.

    +

    If your .lagoon.yml contains one of these annotations it will cause a build failure.

Annotation                                         | Notes
nginx.ingress.kubernetes.io/auth-snippet           | Disallowed
nginx.ingress.kubernetes.io/configuration-snippet  | Restricted to rewrite, add_header, set_real_ip, and more_set_headers directives.
nginx.ingress.kubernetes.io/modsecurity-snippet    | Disallowed
nginx.ingress.kubernetes.io/server-snippet         | Restricted to rewrite, add_header, set_real_ip, and more_set_headers directives.
nginx.ingress.kubernetes.io/stream-snippet         | Disallowed
nginx.ingress.kubernetes.io/use-regex              | Disallowed

    Ingress annotations redirects#

    +

In this example any requests to example.ch will be redirected to https://www.example.ch while keeping folders or query parameters intact (example.ch/folder?query -> https://www.example.ch/folder?query).

    +
.lagoon.yml
- "example.ch":
    annotations:
      nginx.ingress.kubernetes.io/permanent-redirect: https://www.example.ch$request_uri
- www.example.ch

You can of course also redirect to any other URL not hosted on Lagoon. This example directs requests for example.de to https://www.google.com:

    +
.lagoon.yml
- "example.de":
    annotations:
      nginx.ingress.kubernetes.io/permanent-redirect: https://www.google.com

    Trusted Reverse Proxies#

    +
    +

    Warning

    +

Kubernetes will only process a single nginx.ingress.kubernetes.io/server-snippet annotation. Please ensure that if you use this annotation on a non-production environment route, you also include the add_header X-Robots-Tag "noindex, nofollow"; annotation as part of your server-snippet. This is needed to stop robots from crawling development environments, as the default server-snippet in the ingress templates (which prevents this in development environments) will be overwritten by any server-snippet set in .lagoon.yml.

    +
    +

Some configurations involve a reverse proxy (like a CDN) in front of the Kubernetes clusters. In these configurations, the IP of the reverse proxy will appear in the REMOTE_ADDR, HTTP_X_REAL_IP, and HTTP_X_FORWARDED_FOR headers in your applications. The original IP of the requester can be found in the HTTP_X_ORIGINAL_FORWARDED_FOR header.

    +

If you want the original IP to appear in the REMOTE_ADDR, HTTP_X_REAL_IP, and HTTP_X_FORWARDED_FOR headers, you need to tell the ingress which reverse proxy IPs you want to trust:

    +
.lagoon.yml
- "example.ch":
    annotations:
      nginx.ingress.kubernetes.io/server-snippet: |
        set_real_ip_from 1.2.3.4/32;

This example would trust the CIDR 1.2.3.4/32 (the IP 1.2.3.4 in this case). Therefore, if a request is sent to the Kubernetes cluster from the IP 1.2.3.4, the X-Forwarded-For header is analyzed and its contents injected into the REMOTE_ADDR, HTTP_X_REAL_IP, and HTTP_X_FORWARDED_FOR headers.

    +

environments.[name].types#

    +

    The Lagoon build process checks the lagoon.type label from the docker-compose.yml file in order to learn what type of service should be deployed (read more about them in the documentation of docker-compose.yml).

    +

    Sometimes you might want to override the type just for a single environment, and not for all of them. For example, if you want a standalone MariaDB database (instead of letting the Service Broker/operator provision a shared one) for your non-production environment called develop:

    +

    service-name: service-type

    +
      +
    • service-name is the name of the service from docker-compose.yml you would like to override.
    • +
• service-type is the type of the service you would like to use in your override.
    • +
    +

Example for setting up a standalone MariaDB for the develop environment:

    +
.lagoon.yml
environments:
  develop:
    types:
      mariadb: mariadb-single

    environments.[name].templates#

    +

    The Lagoon build process checks the lagoon.template label from the docker-compose.yml file in order to check if the service needs a custom template file (read more about them in the documentation of docker-compose.yml).

    +

    Sometimes you might want to override the template just for a single environment, and not for all of them:

    +

    service-name: template-file

    +
      +
    • service-name is the name of the service from docker-compose.yml you would like to override.
    • +
    • template-file is the path and name of the template to use for this service in this environment.
    • +
    +

    Example Template Override#

    +
.lagoon.yml
environments:
  main:
    templates:
      mariadb: mariadb.main.deployment.yml

    environments.[name].rollouts#

    +

The Lagoon build process checks the lagoon.rollout label from the docker-compose.yml file in order to check if the service needs a special rollout type (read more about them in the documentation of docker-compose.yml).

    +

    Sometimes you might want to override the rollout type just for a single environment, especially if you also overwrote the template type for the environment:

    +

    service-name: rollout-type

    +
• service-name is the name of the service from docker-compose.yml you would like to override.
• rollout-type is the type of rollout. See the documentation of docker-compose.yml for possible values.

    Custom Rollout Type Example#

    +
.lagoon.yml
environments:
  main:
    rollouts:
      mariadb: statefulset

    environments.[name].autogenerateRoutes#

    +

    This allows for any environments to get autogenerated routes when route autogeneration is disabled.

    +
.lagoon.yml
routes:
  autogenerate:
    enabled: false
environments:
  develop:
    autogenerateRoutes: true

    environments.[name].cronjobs#

    + + +

Cron jobs must be defined explicitly for each environment, since it is typically not desirable to run the same ones for all environments. Depending on the defined schedule, cron jobs may run as a Kubernetes native CronJob or as an in-pod cron job via the crontab of the defined service.

    +

    Cron Job Example#

    +
.lagoon.yml
cronjobs:
  - name: Hourly Drupal Cron
    schedule: "M * * * *" # Once per hour, at a random minute.
    command: drush cron
    service: cli
  - name: Nightly Drupal Cron
    schedule: "M 0 * * *" # Once per day, at a random minute from 00:00 to 00:59.
    command: drush cron
    service: cli
      +
• name: Any name that will identify the purpose and distinguish it from other cron jobs.
    • +
    • +

schedule: The schedule for executing the cron job. Lagoon uses an extended version of the crontab format. If you're not sure about the syntax, use a crontab generator.

      +
        +
• You can specify M for the minute, and your cron job will run once per hour at a random minute (the same minute each hour), or M/15 to run it every 15 mins, but with a random offset from the hour (like 6,21,36,51). It is a good idea to spread out your cron jobs using this feature, rather than have them all fire off on minute 0.
      • +
• You can specify H for the hour, and your cron job will run once per day at a random hour (the same hour every day), or H(2-4) to run it once per day within the hours of 2-4.
      • +
      +
    • +
    +
    +

    Timezones:

    +
      +
    • The default timezone for cron jobs is UTC.
    • +
    • Native cron jobs use the timezone of the node, which is UTC.
    • +
• In-pod cron jobs use the timezone of the defined service, which can be configured to something other than UTC.
    • +
    +
    +
      +
• command: The command to execute. This executes in the WORKDIR of the service. For Lagoon images, this is /app.
    • +
    +
    +

    Warning

    +

Cron jobs may run in-pod, via crontab, which doesn't support multiline commands. If you need a complex or multiline cron command, you must put it in a script that can be used as the command. Consider whether a pre- or post-rollout task would work.

    +
    +
      +
• service: Which service of your project to run the command in. For most projects, this should be the cli service.
    • +
    +

    Polysite#

    +

In Lagoon, the same Git repository can be added to multiple projects, creating what is called a polysite. This allows you to run the same codebase, but with different, isolated databases and persistent files. In .lagoon.yml, we currently only support specifying custom routes for a polysite project. The key difference from a standard project is that the environments key becomes the second-level element, with the project name at the top level.

    +

    To utilize this, you will need to:

    +
      +
1. Create two (or more) projects in Lagoon, each configured with the same Git URL and production branch, named per your .lagoon.yml (e.g. poly-project1 and poly-project2 below).
2. Add the deploy keys from each project to the Git repository.
3. Configure the webhook for the repository (if required) - you can then push/deploy. Note that a push to the repository will simultaneously deploy all projects/branches for that Git URL.

    Polysite Example#

    +
.lagoon.yml
poly-project1:
  environments:
    main:
      routes:
        - nginx:
          - project1.com
poly-project2:
  environments:
    main:
      routes:
        - nginx:
          - project2.com

    Specials#

    +

    api#

    +
Info

    If you run directly on amazee.io hosted Lagoon you will not need this key set.

    +
    +

With the key api you can define another URL that should be used by the Lagoon CLI and drush to connect to the Lagoon GraphQL API. This needs to be a full URL with a scheme, like http://localhost:3000. This usually does not need to be changed, but there might be situations where your Lagoon administrator tells you to do so.

    +

    ssh#

    +
Info

    If you run directly on amazee.io hosted Lagoon you will not need this key set.

    +
    +

With the key ssh you can define another SSH endpoint that should be used by the Lagoon CLI and drush to connect to the Lagoon remote shell service. This needs to be a hostname and a port separated by a colon, like localhost:2020. This usually does not need to be changed, but there might be situations where your Lagoon administrator tells you to do so.

    +

    container-registries#

    +

    The container-registries block allows you to define your own private container registries to pull custom or private images. To use a private container registry, you will need a username, password, and optionally the url for your registry. If you don't specify a url in your YAML, it will default to using Docker Hub.

    +

There are two ways to define the password used for your registry user.

    +

    Create an environment variable in the Lagoon API with the type container_registry:

    +
      +
    • lagoon add variable -p <project_name> -N <registry_password_variable_name> -V <password_goes_here> -S container_registry
    • +
    • (see more on Environment Variables)
    • +
    +

    The name of the variable you create can then be set as the password:

    +
.lagoon.yml
container-registries:
  my-custom-registry:
    username: myownregistryuser
    password: <registry_password_variable_name>
    url: my.own.registry.com

    You can also define the password directly in the .lagoon.yml file in plain text:

    +
.lagoon.yml
container-registries:
  docker-hub:
    username: dockerhubuser
    password: MySecretPassword

    Consuming a custom or private container registry image#

    +

    To consume a custom or private container registry image, you need to update the service inside your docker-compose.yml file to use a build context instead of defining an image:

    +
docker-compose.yml
services:
  mariadb:
    build:
      context: .
      dockerfile: Dockerfile.mariadb

Once the docker-compose.yml file has been updated to use a build, you need to create the Dockerfile.<service> and then set your private image as the FROM <repo>/<name>:<tag>.

    +
Dockerfile.mariadb
FROM dockerhubuser/my-private-database:tag

    Example .lagoon.yml#

    +

    This is an example .lagoon.yml which showcases all possible settings. You will need to adapt it to your project.

    +
    .lagoon.yml
docker-compose-yaml: docker-compose.yml

environment_variables:
  git_sha: 'true'

tasks:
  pre-rollout:
    - run:
        name: drush sql-dump
        command: mkdir -p /app/web/sites/default/files/private/ && drush sql-dump --ordered-dump --gzip --result-file=/app/web/sites/default/files/private/pre-deploy-dump.sql.gz
        service: cli
  post-rollout:
    - run:
        name: drush cim
        command: drush -y cim
        service: cli
        shell: bash
    - run:
        name: drush cr
        command: drush -y cr
        service: cli

routes:
  autogenerate:
    insecure: Redirect

environments:
  main:
    routes:
      - nginx:
        - example.com
        - example.net
        - "www.example.com":
            tls-acme: true
            insecure: Redirect
            hstsEnabled: true
        - "example.ch":
            annotations:
              nginx.ingress.kubernetes.io/permanent-redirect: https://www.example.ch$request_uri
        - www.example.ch
    types:
      mariadb: mariadb
    templates:
      mariadb: mariadb.main.deployment.yml
    rollouts:
      mariadb: statefulset
    cronjobs:
      - name: drush cron
        schedule: "M * * * *" # This will run the cron once per hour.
        command: drush cron
        service: cli
  staging:
    cronjobs:
      - name: drush cron
        schedule: "M * * * *" # This will run the cron once per hour.
        command: drush cron
        service: cli
  feature/feature-branch:
    cronjobs:
      - name: drush cron
        schedule: "H * * * *" # This will run the cron once per hour.
        command: drush cron
        service: cli

    Deprecated#

    +

    These settings have been deprecated and should be removed from use in your .lagoon.yml.

    +
• routes.autogenerate.insecure
  The None option is equivalent to Redirect.
• environments.[name].monitoring_urls
• environments.[name].routes.[service].[route].hsts
• environments.[name].routes.[service].[route].insecure
  The None option is equivalent to Redirect.
diff --git a/using-lagoon-the-basics/local-development-environments/index.html b/using-lagoon-the-basics/local-development-environments/index.html

    Local Development Environments#

    +

Even though Lagoon has only a hard dependency on Docker and Docker Compose (which is mostly shipped with Docker), there are some things which are nice for local development that are not included in Docker:

    +
• An HTTP reverse proxy for nice URLs and HTTPS offloading.
• A DNS system so we don't have to remember IP addresses.
• SSH agents to use SSH keys within containers.
• A system that receives and displays mail locally.
Warning

You do not need to install Lagoon locally to use it locally! That sounds confusing, but follow the documentation: Lagoon is the system that deploys your local development environment to your production environment; it's not the environment itself.

    +
    +

    pygmy or Lando - the choice is yours#

    +

Lagoon has traditionally worked best with pygmy, which is the amazee.io flavored system of the above tools and works out of the box with Lagoon. It lives at https://github.com/pygmystack/pygmy.

    +

    pygmy is written in Golang, so to install it, run:

    +
Install with Homebrew
brew tap pygmystack/pygmy && brew install pygmy

    For detailed usage or installation info on pygmy, see its documentation.

    +

    As announced in our blog post, Lagoon is now also compatible with Lando! For more information, please see the documentation at https://docs.lando.dev/config/lagoon.html to get yourself up and running.

    +

    Lando's workflow for Lagoon will be familiar to users of Lando, and will also be the easiest way for Lagoon newcomers to get up and running. Pygmy presents a closer integration with Docker, which will lend itself better to more complex scenarios and use cases but will also require a deeper understanding.

    +

We have previously evaluated adding support for other systems like Docksal and Docker4Drupal, and while we may add support for these in the future, our current focus is on supporting Lando and pygmy. If you do have Lagoon running with one of these (or other) tools, we would love for you to submit a PR on GitHub!

diff --git a/using-lagoon-the-basics/setup-project/index.html b/using-lagoon-the-basics/setup-project/index.html

    Set Up a New Project#

    +
    +

    Note

    +

We are working hard on getting our CLI and GraphQL API set up to allow everyone using Lagoon to set up and configure their projects themselves. Right now, it needs more testing before we can release those features, so hold tight!

    +
    +

    Until then, the setup of a new project involves talking to your Lagoon administrator, which is ok, as they are much friendlier than APIs. 😊

    +

    Please have the following information ready for your Lagoon administrator:

    +
• A name you would like the project to be known by.
  • This name can only contain lowercase characters, numbers and dashes.
  • Double dashes (--) are not allowed within a project name.
• SSH public keys, email addresses and the names of everybody that will work on this project. Here are instructions for generating and copying SSH keys for GitHub, GitLab, and Bitbucket.
• The URL of the Git repository where your code is hosted (git@example.com:test/test.git).
• The name of the Git branch you would like to use for your production environment (see Environment Types for details about the environments).
• Which branches and pull requests you would like to deploy to your additional environments. With Lagoon, you can filter branches and pull requests by name with regular expressions, and your Lagoon administrator can get this set up for you.

    We suggest deploying specific important branches (like develop and main) and pull requests. But that's all up to you! (see Workflows for some more information)

    +

    1. Make sure your project is Lagoonized#

    +

    This means that the .lagoon.yml and docker-compose.yml files are available in your Git repository and configured accordingly.

    +

    If this is not the case, check out the list of Step-by-Step Guides on how to do so before proceeding.

    +

    2. Provide access to your code#

    +

    In order to deploy your code, Lagoon needs access to it. By design and for security, Lagoon only needs read access to your Git repository.

    +

    Your Lagoon administrator will tell you the SSH public key or the Git account to give read access to.

    +

    3. Configure Webhooks#

    +

    Lagoon needs to be informed about a couple of events that are happening to your Git repository. Currently these are pushes and pull requests, but we may add more in the future.

    +

    As Lagoon supports many different Git hosts, we have split off those instructions into this documentation: Configure Webhooks.

    +

    4. Next: First deployment#

    +

    Congratulations, you are now ready to run your first deployment.

diff --git a/using-lagoon-the-basics/webhooks-2020-01-23-12-40-16.png b/using-lagoon-the-basics/webhooks-2020-01-23-12-40-16.png
Binary files /dev/null and b/using-lagoon-the-basics/webhooks-2020-01-23-12-40-16.png differ