Commit

Added windmill
Alok G Singh [email protected] committed Sep 20, 2024
1 parent 2a7dc6e commit c378fba
Showing 12 changed files with 581 additions and 132 deletions.
140 changes: 9 additions & 131 deletions README.md
@@ -2,33 +2,14 @@

Infrastructure definition for CI/CD environments.

To add a new repo to CI,
- add the repo to `REPOS` in [wf-gen/common.zsh](wf-gen/common.zsh)
- run [wf-gen/prs.zsh](wf-gen/prs.zsh) for all repos
- get all your PRs merged

To add a new repo to CD,
- add it to the `local.repos` in [base/base.tf](base/base.tf)
- plan, apply there
- add a sensible default value to a new variable for the repo in [infra/variables.tf](infra/variables.tf)
- add it to `local.repos` in [infra/infra.tf](infra/infra.tf)
- plan, apply there
- add a definition of the service to [devenv/terraform](https://github.com/TykTechnologies/gromit/tree/master/devenv/terraform) in the gromit repo
- make new release of gromit
- let [the CD on this repo](.github/workflows) deploy the new gromit release

To manage the meta-automation which keeps the automation managed by `prs.zsh` in sync across release branches,
- review [wf-gen/meta.zsh](wf-gen/meta.zsh)
- run it

## Base
Contains the AWS Resources that require privileged access like IAM roles. These resources have a lifecycle separate from the infra and are stored in a separate state on [Terraform Cloud](https://app.terraform.io/app/Tyk/workspaces/base-euc1/states).
Contains the resources that require persistence or have a lifecycle separate from the infra. Stored in a separate state on [Terraform Cloud](https://app.terraform.io/app/Tyk/workspaces/base-euc1/states).

Contents:
- vpc
- ECR repos
- ECS Task roles
- EFS filesystems for Tyk config (`config`) and Tyk PKI (`certs`)
- Shared EFS filesystem
- RDS PostgreSQL

See [base/*.auto.tfvars](base/*.auto.tfvars) for the actual values being used right now.

@@ -42,119 +23,16 @@ Given a vpc cidr of 10.91.0.0/16, we create,
### ECR
[Registries](https://eu-central-1.console.aws.amazon.com/ecr/repositories?region=eu-central-1 "eu-central-1") are created with mutable tags and no automated scanning.

### IAM Users
IAM users are created per-repo and given just enough access, via an inline policy, to use their own repo. The users can log in, push and pull images for just their repo.

The access key\_ids and secrets are stored in the terraform state. Use `terraform output` to see the values.
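
For example, from the repo root (a minimal sketch; the exact output names are whatever [base/base.tf](base/base.tf) defines):

``` shellsession
% cd base
# list every output, including the per-repo access key ids and secrets
% terraform output
```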

### EFS
This is used to hold all the configuration data required for the services. It is mounted on the mongo instance as well as _all_ the containers. To repeat, the same fs is mounted on all containers.

## TODOs
- add a permission boundary on the IAM users (paranoia)

## Infra
Contains the components required to support a Tyk installation. These resources have a lifecycle separate from the developer environments and are stored in a separate state on [Terraform Cloud](https://app.terraform.io/app/Tyk/workspaces/dev-euc1/states).
Contains the ephemeral components. In theory, this could be deleted and re-created with no data loss. Imports the state from <base/> as a remote state.

### Bastion
Adds a bastion host in the public subnet with alok's key. The EFS filesystems are mounted here. The `tyk` group has access to the config directories in `/config`. Log in to the bastion to change the config or to create new config groups.

### Mongo
Adds the newest bitnami mongo image (4.2 in June 2020) on a `t3.micro` instance.

# Tyk PKI
In `certs`.

## Generating the CA

There is a self-signed root CA which is used for all resources and for revocations.

``` shellsession
% cd rootca
% cfssl gencert -initca csr.json | cfssljson -bare rootca
```

will generate `rootca-key.pem`, `rootca.pem`, and `rootca.csr` (for cross-signing).

Policies are defined in `rootca/config.json` for *server*, *peer*, and *client* roles. The authentication key can be generated with `openssl rand -hex 16` and set in `CFSSL_API_KEY`. A Dockerfile to provision the newest [cfssl](https://github.com/cloudflare/cfssl) is in `ca`.
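
For example, to mint a throwaway key for the current shell (a minimal sketch):

``` shellsession
# 16 random bytes, hex-encoded, exported for cfssl to pick up
% export CFSSL_API_KEY=$(openssl rand -hex 16)
```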

## Generating an intermediate CA

``` shellsession
% cd sshca
# Generate pair
% cfssl genkey -initca csr.json | cfssljson -bare ssh
# Sign cert with root CA
% cfssl sign -ca=../rootca/rootca.pem -ca-key=../rootca/rootca-key.pem -config=config.json -profile peer ssh.csr | cfssljson -bare ssh
```

## Generating an mTLS pair

### Server

Define the request in `csr.json`.

``` json
{
"CN": "cd.tyk.technologies",
"hosts": [
"cd.dev.tyk.technologies",
"localhost"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "UK",
"L": "London",
"O": "Tyk Technologies",
"OU": "Devops",
"ST": "Greater London"
}
]
}
```

Now request the cert pair to be signed with the server profile, using the config in `cfssl-sign.json`.
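
The repo's actual `cfssl-sign.json` is not reproduced here; purely as a hypothetical sketch, it wires the profiles to the remote signer and to the `CFSSL_API_KEY` auth key along these lines (the remote name, URL, and `env:` key reference are assumptions, not the real policy):

``` json
{
  "signing": {
    "profiles": {
      "server": { "remote": "ca", "auth_key": "default" },
      "client": { "remote": "ca", "auth_key": "default" }
    },
    "default": { "remote": "ca", "auth_key": "default" }
  },
  "auth_keys": {
    "default": { "type": "standard", "key": "env:CFSSL_API_KEY" }
  },
  "remotes": {
    "ca": "https://cfssl.dev.tyk.technology"
  }
}
```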

``` shell
% CFSSL_API_KEY=xxxxx cfssl gencert -profile=server -config cfssl-sign.json csr.json | cfssljson -bare server
2020/06/26 13:11:29 [DEBUG] loading configuration file from config.json
2020/06/26 13:11:29 [DEBUG] match remote in profile to remotes section
2020/06/26 13:11:29 [DEBUG] match auth key in profile to auth_keys section
2020/06/26 13:11:29 [DEBUG] validating configuration
2020/06/26 13:11:29 [DEBUG] validate remote profile
2020/06/26 13:11:29 [DEBUG] profile is valid
2020/06/26 13:11:29 [DEBUG] configuration ok
2020/06/26 13:11:29 [INFO] generate received request
2020/06/26 13:11:29 [INFO] received CSR
2020/06/26 13:11:29 [INFO] generating key: rsa-2048
2020/06/26 13:11:29 [DEBUG] generate key from request: algo=rsa, size=2048
2020/06/26 13:11:29 [INFO] encoded CSR
2020/06/26 13:11:29 [DEBUG] validating configuration
2020/06/26 13:11:29 [DEBUG] validate remote profile
2020/06/26 13:11:29 [DEBUG] profile is valid
2020/06/26 13:11:29 [DEBUG] validating configuration
2020/06/26 13:11:29 [DEBUG] validate remote profile
2020/06/26 13:11:29 [DEBUG] profile is valid
```

This will give you `server-key.pem`, `server.pem` and `server.csr`. Use these as you will.
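
To sanity-check what came back, stock openssl will do (a minimal sketch):

``` shellsession
# print the subject and validity period of the issued cert
% openssl x509 -in server.pem -noout -subject -dates
```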

### Client

The `csr.json` for the server can be re-used but you'll want to remove the hosts entries. Or start with `cfssl print-defaults csr > csr.json`.

Then, request for signing as above except use `-profile=client`.
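
For example, mirroring the server request above (`xxxxx` again stands in for the real API key):

``` shellsession
% CFSSL_API_KEY=xxxxx cfssl gencert -profile=client -config cfssl-sign.json csr.json | cfssljson -bare client
```

This leaves you with `client-key.pem`, `client.pem` and `client.csr`.
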
Adds a bastion host in the public subnet with alok's key. The EFS filesystems are mounted here.

# What's not checked in
### deptrack

- keys
DependencyTrack in ECS. It uses the shared RDS instance from <base>. Available at [https://deptrack.dev.tyk.technology](https://deptrack.dev.tyk.technology "deptrack").

# What's checked in
### windmill.dev

- certs
- CSR definitions in JSON form
- signing policy for `cfssl.dev.tyk.technology`
OSS version deployed on ECS on EC2. Available at [https://windmill.dev.tyk.technology](https://windmill.dev.tyk.technology "windmill").
2 changes: 1 addition & 1 deletion infra/gromit.tf
@@ -40,7 +40,7 @@ module "tui" {
  port      = 80,
  log_group = "internal",
  image     = var.gromit_image,
  command   = ["--textlogs=false", "policy", "serve", "--save=/shared/prod-variations.yml", "--port=:80"],
  command   = ["--textlogs=false", "policy", "serve", "--save=/shared", "--port=:80"],
  mounts = [
    { src = "shared", dest = "/shared", readonly = false },
  ],
1 change: 1 addition & 0 deletions infra/infra.tf
@@ -124,6 +124,7 @@ module "bastion" {
}

data "aws_ami" "al2023" {

  most_recent = true
  owners      = ["amazon"]
  filter {
File renamed without changes.
47 changes: 47 additions & 0 deletions windmill/README.md
@@ -0,0 +1,47 @@
Keeping windmill in its own state to minimise its blast radius. It has access to the base state outputs.

The database provisioning is manual. The reasons it was not maintained as IaC are:
- the terraform provider [requires the instance to be available](https://github.com/cyrilgdn/terraform-provider-postgresql/issues/81) from where the manifests are being processed
- `remote-exec` requires managing ssh access to the bastion.

So, if you have deleted the windmill db or role, you will have to create it _before_ running this manifest. The role password is expected in the SSM parameter `/windmill/db_pass`.
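
For example, the parameter could be seeded like this (a sketch; `supersekret` is the same placeholder password used in the psql session below):

```shellsession
$ aws ssm put-parameter --name /windmill/db_pass --type SecureString --value 'supersekret'
```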

## Creating DB objects
The RDS instance is not accessible from the internet. Use bastion.dev.tyk.technology which has psql installed. Add your key to the [cloudinit template](https://github.com/TykTechnologies/tyk-ci/blob/master/infra/bastion-cloudinit.yaml.tftpl#L19) or use the devacc key.

Obtain the DB host from the AWS console or from `tf output` in `../base`. The master password is in SSM Parameter Store as `/base-prod/rds/master`.
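
For example (a sketch, assuming AWS CLI access to the devacc account):

```shellsession
# the RDS endpoint is among the base outputs
$ terraform -chdir=../base output
# master password for the psql login below
$ aws ssm get-parameter --name /base-prod/rds/master --with-decryption --query Parameter.Value --output text
```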

```shellsession
$ psql -h postgres15.c1po6t6zkr9a.eu-central-1.rds.amazonaws.com -U master -W -d postgres
postgres=> create role windmill with nocreatedb nocreaterole login password 'supersekret';
CREATE ROLE
postgres=> create database windmill with owner windmill encoding 'UTF8';
ERROR: must be member of role "windmill"
postgres=> grant windmill to master;
GRANT ROLE
postgres=> create database windmill with owner windmill encoding 'UTF8';
CREATE DATABASE
postgres=> CREATE ROLE windmill_user;
CREATE ROLE
postgres=> GRANT ALL PRIVILEGES ON DATABASE windmill TO windmill_user;
GRANT
postgres=> CREATE ROLE windmill_admin WITH BYPASSRLS;
CREATE ROLE
postgres=> GRANT windmill_user TO windmill_admin;
GRANT ROLE
postgres=> grant windmill_admin to windmill;
GRANT ROLE
postgres=> grant windmill_user to windmill;
GRANT ROLE
```

`windmill` is the user used to connect to the database. `windmill_user` and `windmill_admin` are roles internal to windmill. The documentation assumes windmill has an RDS instance to itself; by creating these roles externally, the shared RDS instance in <../base> can be used.

Construct the URL used to access the DB and store it in SSM as a SecureString named `/windmill/db_url`.
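
For example (a sketch; the URL shape windmill expects is assumed to be a standard postgres connection string, using the host and placeholder password from the session above):

```shellsession
$ aws ssm put-parameter --name /windmill/db_url --type SecureString \
    --value 'postgres://windmill:supersekret@postgres15.c1po6t6zkr9a.eu-central-1.rds.amazonaws.com:5432/windmill'
```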

## Applying manifests
To apply the manifests from scratch, log in to AWS on your CLI. You will need at least PowerUser access to the devacc (754489498669) sub-account. Then use the usual incantation:

```
terraform init && terraform plan && terraform apply
```
8 changes: 8 additions & 0 deletions windmill/ecs-boot.tftpl
@@ -0,0 +1,8 @@
#!/bin/bash

cat <<'EOF' >> /etc/ecs/ecs.config
ECS_CLUSTER=${name}
#ECS_LOGLEVEL=debug
ECS_ENABLE_TASK_IAM_ROLE=true
ECS_ENABLE_SPOT_INSTANCE_DRAINING=true
EOF
53 changes: 53 additions & 0 deletions windmill/iam.tf
@@ -0,0 +1,53 @@
data "aws_iam_policy_document" "ecs_assume_role_policy" {
statement {
actions = ["sts:AssumeRole"]

principals {
type = "Service"
identifiers = ["ecs-tasks.amazonaws.com"]
}
}
}

# to decrypt secrets in SSM
data "aws_iam_policy_document" "ssm_decrypt" {
statement {
sid = "kms"
actions = [
"kms:Decrypt"
]

resources = [data.terraform_remote_state.base.outputs.kms]
}

statement {
sid = "ssm"
actions = [
"ssm:GetParameters"
]

resources = [data.aws_ssm_parameter.windmill_db_url.arn]
}

statement {
sid = "logs"
actions = [
"logs:CreateLogStream",
"logs:PutLogEvents"
]

resources = ["*"]
}
}

resource "aws_iam_role" "windmill" {
name = "windmill"
path = "/infra/windmill/"

inline_policy {
name = "ssm-decrypt"
policy = data.aws_iam_policy_document.ssm_decrypt.json
}
assume_role_policy = data.aws_iam_policy_document.ecs_assume_role_policy.json
}
