A new deploy from scratch takes approximately 20 minutes.
All flags are optional. Configuration settings provided via flags will persist in later deployments unless explicitly overridden.
In order for flags to be parsed correctly, the name of your deployment should be placed at the end of your command.
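For example (using my-ci as a stand-in deployment name):

```sh
# Flags first, deployment name last, so the flags parse correctly
control-tower deploy --workers 2 my-ci
```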
Flag | Description | Environment Variable |
---|---|---|
--domain value | Domain to use as endpoint for Concourse web interface (eg: ci.myproject.com) | DOMAIN |
```sh
control-tower deploy --domain chimichanga.engineerbetter.com chimichanga
```
In the example above control-tower will search for a hosted zone that matches chimichanga.engineerbetter.com or engineerbetter.com and add a record to the longest match (chimichanga.engineerbetter.com in this example).
The domain you provide must fall within a hosted zone in the Cloud DNS of the GCP project or the Route 53 of the AWS account you are deploying to. For example, in our system tests we delegate gcp.engineerbetter.com to our GCP project (our root domain is managed on another DNS server) and then specify something like control-tower.gcp.engineerbetter.com as the domain.
Flag | Description | Environment Variable |
---|---|---|
--tls-cert value | TLS cert to use with Concourse endpoint | TLS_CERT |
--tls-key value | TLS private key to use with Concourse endpoint | TLS_KEY |
By default control-tower will generate a self-signed cert using the given domain. If you'd like to provide your own certificate instead, pass the cert and private key as strings using the --tls-cert and --tls-key flags respectively, e.g.:
```sh
control-tower deploy \
  --domain chimichanga.engineerbetter.com \
  --tls-cert "$(cat chimichanga.engineerbetter.com.crt)" \
  --tls-key "$(cat chimichanga.engineerbetter.com.key)" \
  chimichanga
```
Flag | Description | Environment Variable |
---|---|---|
--workers value | Number of Concourse worker instances to deploy (default: 1) | WORKERS |
--worker-type | Specify a worker type for AWS (m5, m5a, or m4) (default: "m4") | WORKER_TYPE |
--worker-size value | Size of Concourse workers. See table below for sizes (default: "xlarge") | WORKER_SIZE |
worker-type is an AWS-specific option.

AWS does not offer m5 or m5a instances in all regions, and even in regions that do offer m5 instances, not all zones within those regions may offer them. To complicate matters further, each AWS account is assigned AWS zones at random - for instance, eu-west-1a for one account may be the same as eu-west-1b in another account. If m5s are available in your chosen region but not in the zone Control Tower has chosen, create a new deployment, this time specifying another --zone.
--worker-size | AWS m4 Instance type | AWS m5 Instance type | AWS m5a Instance type | GCP Instance type |
---|---|---|---|---|
medium | t3.medium | t3.medium | | n1-standard-1 |
large | m4.large | m5.large | m5a.large | n1-standard-2 |
xlarge | m4.xlarge | m5.xlarge | m5a.xlarge | n1-standard-4 |
2xlarge | m4.2xlarge | m5.2xlarge | m5a.2xlarge | n1-standard-8 |
4xlarge | m4.4xlarge | m5.4xlarge | m5a.4xlarge | n1-standard-16 |
10xlarge | m4.10xlarge | | | n1-standard-32 |
12xlarge | | m5.12xlarge | m5a.12xlarge | |
16xlarge | m4.16xlarge | | | n1-standard-64 |
24xlarge | | m5.24xlarge | m5a.24xlarge | |
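For example, a sketch combining the three worker flags above (my-ci is a stand-in deployment name):

```sh
# Four large m5 workers, using values from the tables above
control-tower deploy \
  --workers 4 \
  --worker-type m5 \
  --worker-size large \
  my-ci
```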
Flag | Description | Environment Variable |
---|---|---|
--web-size value | Size of Concourse web node. See table below for sizes (default: "small") | WEB_SIZE |
--persistent-disk value | Size of Concourse web node persistent disk. See table below for sizes (default: "default") | PERSISTENT_DISK |
--web-size | AWS Instance type | GCP Instance type |
---|---|---|
small | t3.small | n1-standard-1 |
medium | t3.medium | n1-standard-2 |
large | t3.large | n1-standard-4 |
xlarge | t3.xlarge | n1-standard-8 |
2xlarge | t3.2xlarge | n1-standard-16 |
--persistent-disk | AWS size | GCP size |
---|---|---|
small | 20GB | 20GB |
default | 50GB | 50GB |
medium | 100GB | 100GB |
large | 200GB | 200GB |
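For example, a sketch picking a larger web node and disk from the tables above (my-ci is a stand-in deployment name):

```sh
# A large web node with a 100GB persistent disk
control-tower deploy \
  --web-size large \
  --persistent-disk medium \
  my-ci
```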
Flag | Description | Environment Variable |
---|---|---|
--db-size value | Size of Concourse Postgres instance. See table below for sizes (default: "small") | DB_SIZE |
Note that when changing the database size on an existing control-tower deployment, the SQL instance will be scaled by Terraform, resulting in approximately 3 minutes of downtime.
--db-size | AWS Instance type | GCP Instance type |
---|---|---|
small | db.t3.small | db-g1-small |
medium | db.t3.medium | db-custom-2-4096 |
large | db.m4.large | db-custom-2-8192 |
xlarge | db.m4.xlarge | db-custom-4-16384 |
2xlarge | db.m4.2xlarge | db-custom-8-32768 |
4xlarge | db.m4.4xlarge | db-custom-16-65536 |
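For example, to move an existing deployment (my-ci, a stand-in name) to a medium database, accepting the downtime noted above:

```sh
# Scales the SQL instance via Terraform; expect ~3 minutes of downtime
control-tower deploy --db-size medium my-ci
```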
IAAS | Service | Type | Version | Notes |
---|---|---|---|---|
GCP | CloudSQL | Postgres | v9.6 | CloudSQL does not currently offer an in-place upgrade from this version |
AWS | Amazon RDS | Postgres | v13 | - |
Flag | Description | Environment Variable |
---|---|---|
--enable-global-resources | Enable Global Resources in the Concourse cluster. Can be true/false. Default is false. | ENABLE_GLOBAL_RESOURCES |
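A sketch of turning this on, assuming the --flag=value boolean form used by --spot later in this section (my-ci is a stand-in deployment name):

```sh
# Enable Concourse Global Resources for the whole cluster
control-tower deploy --enable-global-resources=true my-ci
```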
Flag | Description | Environment Variable |
---|---|---|
--allow-ips value | Comma separated list of IP addresses or CIDR ranges to allow access to. Not applied to future manual deploys unless this flag is provided again (default: "0.0.0.0/0") | ALLOW_IPS |
allow-ips governs what can access Concourse but not what can access the control plane (i.e. the BOSH director). The control plane will be restricted to the IP that control-tower deploy was run from.
This flag overwrites the allowed IPs on every deploy. This means deploying with allow-ips and then deploying again without it will reset the allow list to 0.0.0.0/0. The self-update pipeline will maintain the allow-ips of the most recent deploy.
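For example (the CIDR and IP below are illustrative; my-ci is a stand-in deployment name):

```sh
# Allow an office range plus one extra host; remember to repeat this
# flag on every deploy or the allow list resets to 0.0.0.0/0
control-tower deploy --allow-ips "91.0.0.0/16,8.8.8.8" my-ci
```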
On GCP, database disk encryption is enabled by default. On AWS, we added the option to enable disk encryption too; by default it is disabled.
Note that you can only set this value during the initial deploy. It is not possible to change it for a running instance.
Flag | Description | Environment Variable |
---|---|---|
--rds-disk-encryption | Optional configuration to use an encrypted RDS disk on AWS. Not enabled by default! | RDSDiskEncryption |
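A sketch of enabling encryption on a brand-new AWS deployment (my-ci is a stand-in name; recall this cannot be changed later):

```sh
# Must be set on the initial deploy; it cannot be toggled afterwards
control-tower deploy --iaas aws --rds-disk-encryption my-ci
```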
Flag | Description | Environment Variable |
---|---|---|
--bitbucket-auth-client-id value | Client ID for a Bitbucket OAuth application - Used for Bitbucket Auth | BITBUCKET_AUTH_CLIENT_ID |
--bitbucket-auth-client-secret value | Client Secret for a Bitbucket OAuth application - Used for Bitbucket Auth | BITBUCKET_AUTH_CLIENT_SECRET |
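For example (some-id and some-secret are placeholders for your Bitbucket OAuth credentials; my-ci is a stand-in deployment name):

```sh
control-tower deploy \
  --bitbucket-auth-client-id some-id \
  --bitbucket-auth-client-secret some-secret \
  my-ci
```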
Flag | Description | Environment Variable |
---|---|---|
--github-auth-client-id value | Client ID for a GitHub OAuth application - Used for GitHub Auth | GITHUB_AUTH_CLIENT_ID |
--github-auth-client-secret value | Client Secret for a GitHub OAuth application - Used for GitHub Auth | GITHUB_AUTH_CLIENT_SECRET |
--github-auth-host | Host name (excluding protocol) for a GitHub Enterprise server to use instead of github.com - Used for GitHub Auth | GITHUB_AUTH_HOST |
--github-auth-ca-cert | Contents of a CA certificate for a GitHub Enterprise server (required if providing --github-auth-host) - Used for GitHub Auth | GITHUB_AUTH_CA_CERT |
See here for instructions on creating the necessary OAuth app on github.com. As per Concourse docs:
> Note that the client must be created under an organization if you want to authorize users based on organization/team membership. In addition, the GitHub application must have at least read access on the organization's members. If the client is created under a personal account, only individual users can be authorized.
Note that if you configure GitHub Auth, authenticated users will not be authorized to access pipelines in the main team unless one or more of the flags below is also provided.
Using any of the flags below without also setting GitHub Auth (above), or having done so on a previous deploy, will result in an error.
Flag | Description | Environment Variable |
---|---|---|
--main-team-github-users value | Comma separated list of GitHub users that are authorised for the main team | MAIN_TEAM_GITHUB_USERS |
--main-team-github-teams value | Comma separated list of GitHub teams that are authorised for the main team | MAIN_TEAM_GITHUB_TEAMS |
--main-team-github-orgs value | Comma separated list of GitHub orgs that are authorised for the main team | MAIN_TEAM_GITHUB_ORGS |
Example:
```sh
control-tower deploy \
  --iaas aws \
  --domain my-ci.engineerbetter.com \
  --github-auth-client-id some-id \
  --github-auth-client-secret some-secret \
  --main-team-github-users "a-user,b-user" \
  --main-team-github-teams foo:bar \
  --main-team-github-orgs EngineerBetter \
  my-ci
```
Results in:
```
fly -t my-ci teams -d
name/role   users                                      groups
main/owner  github:a-user,github:b-user,local:admin   github:engineerbetter,github:foo:bar
```
Flag | Description | Environment Variable |
---|---|---|
--microsoft-auth-client-id value | Client ID for a Microsoft OAuth application - Used for Microsoft Auth | MICROSOFT_AUTH_CLIENT_ID |
--microsoft-auth-client-secret value | Client Secret for a Microsoft OAuth application - Used for Microsoft Auth | MICROSOFT_AUTH_CLIENT_SECRET |
--microsoft-auth-tenant value | Tenant for a Microsoft OAuth application - Used for Microsoft Auth | MICROSOFT_AUTH_TENANT |
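For example (the credential values are placeholders; my-ci is a stand-in deployment name):

```sh
control-tower deploy \
  --microsoft-auth-client-id some-id \
  --microsoft-auth-client-secret some-secret \
  --microsoft-auth-tenant some-tenant \
  my-ci
```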
Flag | Description | Environment Variable |
---|---|---|
--add-tag key=value | Add a tag to the VMs that form your control-tower deployment. Can be used multiple times in a single deploy command | |
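For example (tag keys and values below are illustrative; my-ci is a stand-in deployment name):

```sh
# Repeat the flag once per tag
control-tower deploy \
  --add-tag team=platform \
  --add-tag environment=production \
  my-ci
```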
Flag | Description | Environment Variable |
---|---|---|
--spot=value | Use spot instances for workers. Can be true/false. Default is true | SPOT |
--preemptible=value | Use preemptible instances for workers. Can be true/false. Default is true | PREEMPTIBLE |
Control Tower uses spot/preemptible instances for workers by default as a cost saving measure. Users requiring lower risk may switch this feature off by setting --spot=false.
Be aware that preemptible instances will go down at least once every 24 hours, so deployments with only one worker will experience downtime with this feature enabled. BOSH will resurrect failed workers automatically.
spot and preemptible are interchangeable, so if either of them is set to false then interruptible instances will not be used regardless of your IaaS, i.e.:
```sh
# Results in an AWS deployment using non-spot workers
control-tower deploy --spot=true --preemptible=false <your-project-name>

# Results in an AWS deployment using non-spot workers
control-tower deploy --preemptible=false <your-project-name>

# Results in a GCP deployment using non-preemptible workers
control-tower deploy --iaas gcp --spot=false <your-project-name>
```
Flag | Description | Environment Variable |
---|---|---|
--zone | Specify an availability zone | ZONE |
This cannot be changed after the initial deployment.
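For example (the zone is illustrative; my-ci is a stand-in deployment name):

```sh
# Pin the deployment to a zone on the initial deploy; this cannot be changed later
control-tower deploy --zone eu-west-1a my-ci
```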
If any of the following five flags is set, all the required ones from this group need to be set (the rds ones are AWS-specific):
Flag | Description | Environment Variable |
---|---|---|
--vpc-network-range value | Customise the VPC network CIDR to deploy into (required for AWS) | VPC_NETWORK_RANGE |
--public-subnet-range value | Customise public network CIDR (if IAAS is AWS, must be within --vpc-network-range) (required) | PUBLIC_SUBNET_RANGE |
--private-subnet-range value | Customise private network CIDR (if IAAS is AWS, must be within --vpc-network-range) (required) | PRIVATE_SUBNET_RANGE |
--rds-subnet-range1 value | Customise first RDS network CIDR (must be within --vpc-network-range) (required for AWS) | RDS_SUBNET_RANGE1 |
--rds-subnet-range2 value | Customise second RDS network CIDR (must be within --vpc-network-range) (required for AWS) | RDS_SUBNET_RANGE2 |
All the ranges above should be in the CIDR format of IPv4/Mask. The sizes can vary as long as vpc-network-range is big enough to contain all the others (when the IAAS is AWS). The smallest CIDR for the public and private subnets is a /28. The smallest CIDR for the rds1 and rds2 subnets is a /29.
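A sketch of a custom AWS network layout that satisfies those constraints (every CIDR below is illustrative; my-ci is a stand-in deployment name):

```sh
# A /24 VPC containing /28 public/private subnets and /29 RDS subnets,
# all non-overlapping and inside the VPC range
control-tower deploy \
  --iaas aws \
  --vpc-network-range 10.0.0.0/24 \
  --public-subnet-range 10.0.0.0/28 \
  --private-subnet-range 10.0.0.16/28 \
  --rds-subnet-range1 10.0.0.32/29 \
  --rds-subnet-range2 10.0.0.40/29 \
  my-ci
```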
By default Control Tower colocates Grafana, Telegraf, and InfluxDB on the Concourse VMs. This can cause unnecessary resource usage if you don't use these features. It can be disabled with:
Flag | Description | Environment Variable |
---|---|---|
--no-metrics | Don't deploy the metrics stack colocated on the web VM (default: true) | NO_METRICS |
In order to re-enable metrics after using this flag, you need to deploy with --no-metrics=false.
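For example (my-ci is a stand-in deployment name):

```sh
# Deploy without the colocated metrics stack...
control-tower deploy --no-metrics my-ci

# ...and bring it back on a later deploy
control-tower deploy --no-metrics=false my-ci
```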