This is a Shared Jenkins Library. If included in a Jenkinsfile with `@Library('pipeline-lib') _`, then all the global variables described below will be available in that Jenkinsfile.
`deployer` supports branch deploys.[1]
To migrate an existing project out of the current orchestrated release flow and to adopt Continuous Deployment via branch deploys, follow these steps:

- Upgrade the project's Jenkinsfile to:
  - Include this library with `@Library('pipeline-lib') _`
  - Configure the job's properties
  - Call `deployer.deployOnCommentTrigger` after the code has been tested and a Docker image built
- Remove the project from all current release files [2]
- Add `continuous-integration/jenkins/pr-merge/deploy` as a required check in GitHub

Each step is explained in detail below.
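Taken together, a migrated Jenkinsfile might look roughly like the following sketch. The pod configuration, image name, and deployment name are placeholders; use whatever the project already has.

@Library('pipeline-lib') _

properties(deployer.wrapProperties())

inDockerAgent { // placeholder pod configuration
  checkout(scm)
  def image = docker.build('my-project') // placeholder image name

  // ... run tests and linting here ...

  deployer.deployOnCommentTrigger(
    image: image,
    kubernetesDeployment: 'my-project' // placeholder deployment name
  )
}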
Configuring the job's properties adds a build trigger, so that deploys can be started with `!deploy` PR comments. It also configures the Datadog plugin, so that we can collect metrics about deployments.
In a declarative pipeline, add a `properties` call before the `pipeline` directive as follows.
+properties(deployer.wrapProperties())
+
pipeline {
// ...
}
In a scripted pipeline, include the same `properties` call:
properties(deployer.wrapProperties())
Or wrap an existing `properties` call with the provided wrapper:
-properties([
+properties(deployer.wrapProperties([
parameters([
string(name: 'my-param', defaultValue: 'my-val', description: 'A description')
])
-])
+]))
The exact changes required depend on the project, but here’s an example.
// At the top level
@Library('pipeline-lib') _
+@Library('SaleMoveAcceptance') __ // (1)
// In podTemplate, inPod, or similar, after building a docker image
checkout(scm)
-def image = docker.build('call-router')
-imageScanner.scan(image)
+def image = deployer.buildImageIfDoesNotExist(name: 'call-router') { // (2)
+ def newImage = docker.build('call-router')
+ imageScanner.scan(newImage)
+ return newImage
+}
-def shortCommit = sh(returnStdout: true, script: 'git log -n 1 --pretty=format:"%h"').trim()
-docker.withRegistry(DOCKER_REGISTRY_URL, DOCKER_REGISTRY_CREDENTIALS_ID) {
- image.push(shortCommit) // (3)
-}
+deployer.deployOnCommentTrigger(
+ image: image,
+ kubernetesNamespace: 'default', // Optional, will deploy to `default` namespace if not specified
+ kubernetesDeployment: 'call-router',
+ lockGlobally: true, // Optional, defaults to `true` (4)
+ deploymentUpdateTimeout: [time: 10, unit: 'MINUTES'], // Optional, defaults to 10 minutes (5)
+ preDeploymentChecksFor: { env -> // Optional, allows running some code before deploying new version of the image
+ echo "Running pre-deployment checks for version ${env.version} in environment ${env.name}"
+
+ // env['runInKube'] provides a wrapper function around `kubectl run` command.
+ def exitStatus = env['runInKube'](
+ image: "662491802882.dkr.ecr.us-east-1.amazonaws.com/smoke-test-repo:${env.version}", // (6)
+ command: './preview-db-migration.sh',
+ name: 'call-router-deploy-test', // Optional. Default is based on kubernetesDeployment value
+ overwriteEntrypoint: true, // Optional, defaults to `false` (7)
+ additionalArgs: '--env="FOO=bar" --port=12345', // Optional (8)
+ returnStdout: false, // Optional (9)
+ returnStatus: true // Optional (9)
+ )
+
+ // `kubectl run` has its own limitations, so if you need more flexibility,
+ // you can use `env.kubectlCmd` variable to run arbitrary commands
+ // in the current Kubernetes environment:
+ def configValue = sh(
+ script: "${env.kubectlCmd} get configmap my-config -o jsonpath={.data.key}",
+ returnStdout: true
+ ).trim()
+
+ if (configValue != "OK") {
+ throw new Exception("Pre-deployment check failed")
+ }
+ },
+ automaticChecksFor: { env ->
+ env['runInKube'](
+ image: '662491802882.dkr.ecr.us-east-1.amazonaws.com/smoke-test-repo:latest', // (6)
+ command: './run_smoke_tests.sh',
+ name: 'call-router-deploy-test', // Optional. Default is based on kubernetesDeployment value
+ overwriteEntrypoint: true, // Optional, defaults to `false` (7)
+ additionalArgs: '--env="FOO=bar" --port=12345', // Optional (8)
+ returnStdout: false, // Optional (9)
+ returnStatus: true // Optional (9)
+ )
+ if (env.name == 'acceptance') {
+ runAcceptanceTests(
+ driver: 'chrome',
+ visitorApp: 'v2',
+ suite: 'acceptance_test_pattern[lib/engagement/omnicall/.*_spec.rb]', // (10)
+ slackChannel: '#tm-engage',
+ parallelTestProcessors: 1
+ )
+ }
+ },
+ checklistFor: { env ->
+ def dashboardURL = "https://app.datadoghq.com/dash/206940?&tpl_var_KubernetesCluster=${env.name}" // (11)
+ def logURL = "https://logs.${env.domainName}/app/kibana#/discover?_g=" + // (12)
+ '(time:(from:now-30m,mode:quick,to:now))&_a=' +
+ '(query:(language:lucene,query:\'application:call_router+AND+level:error\'))'
+
+ [[
+ name: 'dashboard', // (13)
+ description: "<a href=\"${dashboardURL}\">The project dashboard (${dashboardURL})</a> looks OK" // (14)
+ ], [
+ name: 'logs',
+ description: "No new errors in <a href=\"${logURL}\">the project logs (${logURL})</a>"
+ ]]
+ }
+)
-build(job: 'kubernetes-deploy', ...)
-
1. This is needed for running acceptance tests before deploying to other environments. If you already have a `@Library` import followed by two underscores, then change them to three underscores (`___`) or more, as required. The symbol has to be unique within the Jenkinsfile.
2. Wrapping the image building code with `buildImageIfDoesNotExist` is not required, but it can significantly speed up the deployment process. With it, the image will only be built if it doesn't already exist. By also putting test execution and linting into the same block as the image build, these steps can be skipped when deploying an image that already exists and has gone through these validations.
3. No need to push the image anywhere. Just build it and pass it to `deployOnCommentTrigger`, which tags and pushes as required.
4. Optional. Defaults to `true`. If set to `false`, then deploys of this project will not affect deploys of other projects. That is, this project can then be deployed at the same time as other projects. This should only be enabled if the project is completely isolated, so that its tests don't affect other projects, and other projects' tests and deploys don't affect this project. This can be overridden for individual PRs by triggering the deploy with a `!deploy no-global-lock` comment.
5. Optional. Defaults to 10 minutes. Allowed values for `unit` are listed in the Jenkins documentation for `timeout`.
6. The image defaults to the current version of the application image.
7. Optional. Defaults to `false`. If `true`, then `command` will overwrite the container's entrypoint, instead of being used as its arguments. In Kubernetes terms, the `command` will be specified as the `command` field for the container, instead of `args`.
8. Optional. Additional arguments to `kubectl run`.
9. Optional. These arguments are passed as is to the `sh` Jenkins pipeline step (see the sketch after this list).
10. The tests and the other checks run in acceptance obviously vary by project.
11. Use `env.name` to customize links for the specific environment. It's one of `acceptance`, `beta`, `prod-us`, and `prod-eu`.
12. Use `env.domainName` to customize URLs. For example, it's `beta.salemove.com` in beta and `salemove.com` in prod US.
13. This should be a simple keyword.
14. The Blue Ocean UI currently doesn't display links, while the old UI does. This means that links also have to be included in plain text, so that Blue Ocean UI users can see/access them.
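As an illustration of (9): since these flags are passed straight through to `sh`, a check can capture a command's output instead of its exit status. A sketch, reusing the hypothetical pre-deployment script from the example above:

preDeploymentChecksFor: { env ->
  def migrationPreview = env['runInKube'](
    image: "662491802882.dkr.ecr.us-east-1.amazonaws.com/smoke-test-repo:${env.version}",
    command: './preview-db-migration.sh', // hypothetical script
    overwriteEntrypoint: true,
    returnStdout: true // passed through to `sh`, so the command's output is returned
  )
  echo("Migration preview:\n${migrationPreview}")
}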
- Open the master branch settings for the project [3]
- Check Require status checks to pass before merging, if not already checked
- Check the `continuous-integration/jenkins/pr-merge/deploy` status [4] [5]
`deployer.publishAssets` uploads static assets to S3. It takes the following arguments:

- `folder`: The path to a folder with distribution-ready (compiled, minified, etc.) static assets. Relative to the current working directory.
- `s3Bucket`: Optional. The name and optional path of the S3 bucket to upload the files in `folder` to. Defaults to `libs.salemove.com`.
- `cacheMaxAge`: Optional. Cache-Control maximum age in seconds. Defaults to `31536000`.
Example:
deployer.publishAssets(
folder: 'dist',
s3Bucket: 'your.s3.bucket/path', // Optional
cacheMaxAge: 31536000 // Optional
)
`deployer.deployAssetsVersion` updates the version of a group of assets and optionally their integrities in the "static-assets" ConfigMap in acceptance. It takes the following arguments:

- `version`: The version to put into the ConfigMap, in the literal ConfigMap format.
- `integritiesFile`: Optional. Path to a JSON manifest of the assets, including their integrities (hashes). Relative to the current working directory.
Example:
deployer.deployAssetsVersion(
version: 'visitor-app.v1=507d427',
integritiesFile: 'integrities.json' // Optional
)
`inPod` is a thin wrapper around the Kubernetes plugin's `podTemplate` plus a nested `node` call. Every setting that can be provided to `podTemplate` can be provided to `inPod` and its derivatives (described below).

It provides default values for fields such as `cloud` and `name`, so that you don't need to worry about them. This makes creating a basic worker pod very simple. For example, let's say you want to build something in NodeJS. The following snippet is everything you need to achieve just that.
inPod(containers: [interactiveContainer(name: 'node', image: 'node:9-alpine')]) {
checkout(scm)
container('node') {
sh('npm install && npm test')
}
}
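Since every `podTemplate` setting is accepted, other configuration, such as volumes, can be passed alongside `containers`. Below is a sketch that assumes the Kubernetes plugin's `emptyDirVolume` helper; the mount path and cache usage are hypothetical.

inPod(
  containers: [interactiveContainer(name: 'node', image: 'node:9-alpine')],
  volumes: [emptyDirVolume(mountPath: '/cache')] // any other podTemplate setting works the same way
) {
  checkout(scm)
  container('node') {
    sh('npm install --cache /cache && npm test')
  }
}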
ℹ️
|
`inPod` and its derivatives also include a workaround for an issue with
the Kubernetes plugin, where the label has to be updated for changes to the
container or volume configurations to take effect. It's fixed by automatically
providing a unique suffix to the pod label using the hash of the provided
argument map.
|
❗
|
When using `inPod` or its derivatives, it's best to also use
`passiveContainer`, `interactiveContainer`, and `agentContainer`
instead of using `containerTemplate` directly.
This is because the `containerTemplate` wrappers provided by this library all
share the same `workingDir`, which makes them work nicely together.
|
A pod template for building Docker containers. Unlike `inPod`, `inDockerAgent` has an agent container [6] which supports building Docker images. So if you need to run `docker.build`, use `inDockerAgent` instead of `inPod`.

`inDockerAgent` accepts the argument `useBuildKit: true`, which forces Docker to use BuildKit for building images.
ℹ️
|
`inDockerAgent` is a derivative of `inPod`, so everything
that applies to `inPod` also applies to `inDockerAgent`.
|
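For example, a build using BuildKit might look like this (a sketch; the image name is a placeholder):

inDockerAgent(useBuildKit: true) {
  checkout(scm)
  def image = docker.build('my-project') // placeholder image name
}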
A pod template for building Ruby projects. It comes with an agent container with Ruby and Docker support, plus PostgreSQL and RabbitMQ containers. The Ruby version is configurable via the `rubyVersion` parameter and defaults to `2.4`. All available versions can be found in the Docker repository.
ℹ️
|
`inRubyBuildAgent` is a derivative of `inPod`, so everything
that applies to `inPod` also applies to `inRubyBuildAgent`.
|
Example:
inRubyBuildAgent(
rubyVersion: '2.5' // Optional, defaults to 2.4
)
A `containerTemplate` wrapper for databases and other services that will not have pipeline steps executed in them. The `name` and `image` fields are required.
Example:
inPod(
containers: [
passiveContainer(
name: 'db',
image: 'postgres:9.5-alpine',
envVars: [
envVar(key: 'POSTGRES_USER', value: 'myuser'),
envVar(key: 'POSTGRES_PASSWORD', value: 'mypass')
]
)
]
) {
// Access the PostgreSQL DB over its default port 5432 at localhost
}
❗
|
Only specify the `workingDir`, `command`, `args`, and/or
`ttyEnabled` fields for `passiveContainer` if you know what you're doing.
|
A `containerTemplate` wrapper for containers that will have pipeline steps executed in them. The `name` and `image` fields are required. Pipeline steps can be executed in the container by wrapping them with `container`.
Example:
inPod(containers: [interactiveContainer(name: 'ruby', image: 'ruby:2.5-alpine')]) {
checkout(scm)
container('ruby') {
sh('bundle install')
}
}
❗
|
Only specify the `workingDir`, `command`, `args`, and/or
`ttyEnabled` fields for `interactiveContainer` if you know what you're doing.
|
ℹ️
|
`interactiveContainer` specifies `/bin/sh -c cat` as the entrypoint
for the image, so that the image doesn't exit. This allows you to run
arbitrary commands with `container` + `sh` within the container.
|
A `containerTemplate` wrapper for agent containers. Only the `image` field is required. It replaces the default `jnlp` container with the one provided as the `image`. The specified image has to be a Jenkins slave agent.
Example:
inPod(containers: [agentContainer(image: 'salemove/jenkins-agent-ruby:2.4.1')]) {
checkout(scm)
sh('bundle install && rake') // (1)
docker.build('my-ruby-project')
}
1. Compared to the `interactiveContainer` example above, this doesn't have to be wrapped in a `container` block, because the agent itself supports Ruby.
❗
|
Only specify the `name`, `workingDir`, `command`, `args`, and/or
`ttyEnabled` fields for `agentContainer` if you know what you're doing.
|
A `containerTemplate` wrapper for a toolbox container that will have pipeline steps executed in it. Pipeline steps can be executed in the container by wrapping them with `container`. It contains the latest version of `salemove/jenkins-toolbox`.
Example:
inPod(containers: [toolboxContainer()]) {
checkout(scm)
container('toolbox') {
sh('kubectl exec -it <pod> <cmd>')
}
}
A scripted pipeline [7] wrapper that sends build status notifications to Slack and, optionally, email. Without specifying any arguments, it sends Slack notifications to the `#ci` channel whenever a master branch build status changes from success to failure or back. To send notifications to your team's channel, specify the `slackChannel` argument.
withResultReporting(slackChannel: '#tm-engage') {
inPod {
checkout(scm)
// Build
}
}
💡
|
If the main branch in a project is different from `master`, then reporting
can be enabled for that branch by specifying `mainBranch`. E.g.
`withResultReporting(mainBranch: 'develop')`.
|
For non-branch builds, such as cronjobs or manually started jobs, the above status reporting strategy does not make sense. In these cases a simpler `onFailure`, `onFailureAndRecovery`, or `always` strategy can be used.
properties([
pipelineTriggers([cron('30 10 * * 5')])
])
withResultReporting(slackChannel: '#tm-inf', strategy: 'onFailure') {
inPod {
// Do something
}
}
By default, `withResultReporting` only includes the build status (success/failure), the job name, and links to the build in the Slack message. Additional project-specific information can be included via the `customMessage` argument.
properties([
parameters([
string(name: 'buildParam', defaultValue: 'default', description: 'A parameter')
])
])
withResultReporting(customMessage: "Build was started with: ${params.buildParam}") {
inPod {
// Do something
}
}
If the `mailto` argument has been specified, then a notification is also sent to the specified email address. The wording is similar to that of the E-mail Notification post-build action of the Mailer plugin. For a failed build, the last 250 lines of the console log are also included in the notification (the length is configurable via the `maxLogLines` argument).
withResultReporting(
slackChannel: '#tm-inf',
strategy: 'onFailureAndRecovery',
mailto: '[email protected]'
) {
inPod {
// Do something
}
}
`deployer.publishDocs` publishes docs. It takes the following arguments:

- `source`: The source file, which must be in the working directory where this function is called. E.g. `quicksight/engagements.md`.
- `destination`: File name of the published asset. E.g. `quicksight/auto-generated/engagements.md`.
Example:
deployer.publishDocs(
source: 'quicksight/engagements.md',
destination: 'quicksight/auto-generated/engagements.md'
)
Two wrappers around the `ansiblePlaybook()` and `build()` pipeline steps, respectively, designed to obtain variables registered by the Ansible playbook.
def ansibleVars
def buildResult
(ansibleVars, buildResult) = buildWithResults(
job: 'ansible/run-playbook',
parameters: [
string(name: 'playbook', value: 'db-provision-hosts.yml'),
// other parameters
]
)
The above syntax expects that the `ansible/run-playbook` Jenkins job invokes `ansiblePlaybookWithResults()` when running the playbook (instead of the standard `ansiblePlaybook()`). This ensures the results are passed within build variables in serialised form and can later be deserialised by `buildWithResults()` and returned as the first element of the returned tuple. The second element is optional and left for backward compatibility.
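For reference, the invocation inside the `ansible/run-playbook` job might look roughly like the following sketch. It assumes `ansiblePlaybookWithResults()` accepts the same arguments as the standard `ansiblePlaybook()` step; the inventory path is hypothetical.

// Inside the ansible/run-playbook job (sketch)
ansiblePlaybookWithResults(
  playbook: params.playbook,
  inventory: 'inventory/hosts' // hypothetical inventory path
)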
Registering variables is done in Ansible by calling the `register-jenkins-variable` role. There are two ways of doing so: directly, or from a task.
roles:
- role: register-jenkins-variable
jenkins_var_name: 'var1'
jenkins_var_value: "value1"
tasks:
- name: Save IP of DB master for Jenkins
include_role:
name: register-jenkins-variable
vars:
jenkins_var_name: 'var2'
jenkins_var_value: "variable2"
With the given variables registered by the Ansible playbook, the following code will print `value1` and `value2`:
def ansibleVars
(ansibleVars) = buildWithResults(
// build parameters
)
echo(ansibleVars.var1)
echo(ansibleVars.var2)
Guard is used for providing a preview of the documentation. Run the following commands to open a preview of the rendered documentation in a browser. Unfortunately there's no live reload: just refresh the browser whenever you save changes to `README.adoc`.
bin/bundle install
bin/guard # (1)
open README.html # (2)
1. This doesn't exit, so the following commands have to be entered elsewhere.
2. Opens the preview in a browser. Manually refresh the browser as necessary.
[3] The call-router settings are linked here as an example. Click Settings → Branches → Edit master in GitHub to access them.

[6] `jnlp`, in which all commands will run by default, unless the container is changed with `container`.