Include Architectural Changes from Downstream (#18)
* ECS and ADOT updates

---------

Co-authored-by: smohiudd <[email protected]>
Co-authored-by: ranchodeluxe <[email protected]>
3 people authored Mar 6, 2023
1 parent a35cc55 commit 7ebe3f9
Showing 38 changed files with 1,896 additions and 36 deletions.
41 changes: 39 additions & 2 deletions .github/workflows/deploy.yaml
@@ -1,12 +1,13 @@
-name: Deploy
+name: deploy
 
 on:
   push:
     branches:
       - main
 
 jobs:
-  deploy:
+  deploy_apigw_staging:
+    if: $APIGW_DEPLOY == 'true'
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v2
@@ -39,3 +40,39 @@ jobs:
       # Build and Deploy CDK application
       - name: Build & Deploy
         run: npm run cdk deploy tifeatures-timvt-staging -- --require-approval never
+
+  deploy_ecs_staging:
+    if: $ECS_DEPLOY == 'true'
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v2
+
+      - name: configure AWS credentials
+        uses: aws-actions/configure-aws-credentials@v1
+        with:
+          aws-access-key-id: ${{ secrets.DEPLOY_USER_AWS_ACCESS_KEY_ID }}
+          aws-secret-access-key: ${{ secrets.DEPLOY_USER_AWS_SECRET_ACCESS_KEY }}
+          aws-region: us-west-2
+
+      - name: docker build, tag, and push image to Amazon ECR
+        env:
+          ECR_REGISTRY: 853558080719.dkr.ecr.us-west-1.amazonaws.com/tf-veda-wfs3-registry-staging
+          IMAGE_TAG: latest
+        run: |
+          aws ecr get-login-password | docker login --username AWS --password-stdin $ECR_REGISTRY
+          docker build -t $ECR_REGISTRY:$IMAGE_TAG .
+          docker push $ECR_REGISTRY:$IMAGE_TAG
+          echo "::set-output name=image::$ECR_REGISTRY:$IMAGE_TAG"
+      - name: ECS refresh service
+        env:
+          ECS_SERVICE_NAME: tf-veda-wfs3-service-staging
+          AWS_ACCESS_KEY_ID: ${{ secrets.DEPLOY_USER_AWS_ACCESS_KEY_ID }}
+          AWS_SECRET_ACCESS_KEY: ${{ secrets.DEPLOY_USER_AWS_SECRET_ACCESS_KEY }}
+        run: |
+          aws ecs update-service \
+            --cluster $ECS_SERVICE_NAME \
+            --service $ECS_SERVICE_NAME \
+            --task-definition $ECS_SERVICE_NAME \
+            --force-new-deployment
2 changes: 1 addition & 1 deletion .github/workflows/tags.yaml
@@ -1,4 +1,4 @@
-name: Create tags
+name: tag
 
 on:
   push:
7 changes: 5 additions & 2 deletions .gitignore
@@ -2,8 +2,11 @@ node_modules
 cdk.out
 .idea
 
-.env
+.env*
 .ipynb_checkpoints
 data/
 
-.pgdata
+*/__pycache__
+
+.pgdata
+
76 changes: 76 additions & 0 deletions READMEV2.md
@@ -0,0 +1,76 @@
# VEDA Features API

Hosting and serving collections of vector data features for VEDA

---

## Implementation

* Storage: PostGIS
* WFS3 API with query support: [OGC API Features](https://ogcapi.ogc.org/features/) provided by [TiPG](https://github.com/developmentseed/tipg)
* Vector tiles API provided by [TiPG](https://github.com/developmentseed/tipg)

---

### Local Development in Docker

To run the site locally:

`docker-compose up`
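
Once it's running, you can sanity-check the API with a couple of requests. This is a sketch: the port comes from `docker-compose.yml`, and the `public.fireline` collection name is borrowed from a commented-out setting there, so substitute a table that actually exists in your PostGIS database.

```bash
# list the collections tipg discovered in PostGIS (standard OGC API - Features path)
curl http://127.0.0.1:8081/collections

# fetch a few features from one collection; "public.fireline" is illustrative
curl "http://127.0.0.1:8081/collections/public.fireline/items?limit=10"
```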

---

### Continuous Deployment for `staging` and `production`

Unless you're manually deploying a `dev` environment, all deploys happen through the CI/CD GitHub Actions, so please
grok `/.github/workflows/deploy.yaml`.

We use a third-party action to create tags: https://github.com/mathieudutour/github-tag-action

It follows the [conventional commit methodology](https://www.conventionalcommits.org/en/v1.0.0/) and bumps versions using the logic detailed [here](https://github.com/mathieudutour/github-tag-action#bumping).
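
For instance, under the action's default bump rules (a sketch of the conventional-commit conventions; our workflow may configure these differently):

```bash
# the commit-message prefix drives the version bump when the commit lands on main
git commit -m "fix: handle empty bbox param"     # patch bump, e.g. v1.2.3 -> v1.2.4
git commit -m "feat: add vector tile endpoint"   # minor bump, e.g. v1.2.3 -> v1.3.0
git commit -m "feat!: drop legacy WFS3 routes"   # major bump, e.g. v1.2.3 -> v2.0.0
```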

---

### Manual Deployments

[Manual Deployments Explained](./docs/DEPLOYDETAILED.md)

---

### Infrastructure Changes

Note that each `./terraform/veda-wfs3/vars/<environment>.tf` file targets a different region:
* `staging` and `production` deploys happen against `us-west-2`
* `dev` deploys happen against `us-west-1`

Steps:

* install `tfenv` to manage multiple versions: [https://github.com/tfutils/tfenv](https://github.com/tfutils/tfenv)
* our `init.tf` file has `required_version = "1.3.9"`, so install that:

```bash
$ tfenv list
1.1.5
1.1.4

$ tfenv install 1.3.9
$ tfenv use 1.3.9
```
* make sure you set up an `AWS_PROFILE` in your `~/.aws/config|credentials` files for the correct region
* make sure you `cp envtf.template .envtf.sh` and change the values in there for the secrets needed
* then `source .envtf.sh`
* then `cd ./terraform/veda-wfs3`
* then you can run `AWS_PROFILE=<profile> terraform init`
* then `AWS_PROFILE=<profile> terraform <plan|apply> -var-file=./vars/<environment>.tf` (a worked example follows)
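
Putting those steps together, a `dev` plan might look like this (a sketch; the profile name `uah-dev` is illustrative):

```bash
# one-time setup: secrets file and the pinned terraform version
cp envtf.template .envtf.sh   # then edit in the secret values
source .envtf.sh
tfenv use 1.3.9

# run terraform from the module directory against the dev var file
cd ./terraform/veda-wfs3
AWS_PROFILE=uah-dev terraform init
AWS_PROFILE=uah-dev terraform plan -var-file=./vars/dev.tf
```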

---

### Observability and Alarms

[See Observability and Monitoring](./docs/OBSERVABILITY.md)

---

### License
This project is licensed under **Apache 2**; see the [LICENSE](LICENSE) file for more details.

1 change: 1 addition & 0 deletions docker-compose.yml
@@ -14,6 +14,7 @@ services:
       - POSTGRES_HOST=database
       - POSTGRES_PORT=5432
       - DEBUG=TRUE
+      # - TIPG_TABLE_CONFIG__public_fireline__datetimecol=t
     ports:
       - "${MY_DOCKER_IP:-127.0.0.1}:8081:8081"
     depends_on:
175 changes: 175 additions & 0 deletions docs/DEPLOYDETAILED.md
@@ -0,0 +1,175 @@
## Building the Docker Image, Putting It on ECR, Forcing a Deployment

This verbose, manual document shows exactly how our CD pipeline works, but gives more
context by retrieving the AWS inputs from `aws-cli`. It can also be used to run deployments from a local setup.
Take note that we are using `grep` below to whittle down which project and environment
we are targeting from all the potential output.
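
For instance, the filtering works like this (an illustrative demo, not a real listing):

```bash
# the two chained greps keep just the entry matching both the project
# and the environment (these vars are exported in step 2 below)
export TARGET_PROJECT_NAME=veda-wfs3 TARGET_ENVIRONMENT=dev
printf '%s\n' \
  "tf-veda-wfs3-registry-staging" \
  "veda-wfs3-registry-dev" \
  | grep $TARGET_PROJECT_NAME | grep $TARGET_ENVIRONMENT
# -> veda-wfs3-registry-dev
```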

0. Install `jq` because it's awesome: https://formulae.brew.sh/formula/jq

1. Make sure you've built your local Docker image and it's up to date with any branch changes

```bash
$ docker build -t veda-wfs3-api:latest .
```

2. Export some OS env vars so we can use them to filter. Make sure they match the environment you want to work against

```bash
$ export TARGET_PROJECT_NAME=veda-wfs3
$ export TARGET_ENVIRONMENT=dev
```

3. Make sure you have an `AWS_PROFILE` set up that matches the AWS `region` you want to work with. In the examples below, `uah1` refers to the UAH account in `us-west-2`
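
   If you don't have such a profile yet, a minimal entry might look like this (illustrative values; use whatever auth mechanism your account actually requires):

```bash
# append an example profile to ~/.aws/config; name and region must match your target
cat >> ~/.aws/config <<'EOF'
[profile uah1]
region = us-west-2
EOF
```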

4. List existing ECR repositories using `aws-cli` and whittle down which one we want to talk to with the OS env vars:

```bash
$ AWS_PROFILE=uah1 aws ecr describe-repositories
{
"repositories": [
{
"repositoryArn": "arn:aws:ecr:us-west-2:359356595137:repository/veda-wfs3-registry-dev",
"registryId": "359356595137",
"repositoryName": "veda-wfs3-registry-dev",
"repositoryUri": "359356595137.dkr.ecr.us-west-2.amazonaws.com/veda-wfs3-registry-dev",
"createdAt": "2022-12-10T13:46:05-08:00",
"imageTagMutability": "MUTABLE",
"imageScanningConfiguration": {
"scanOnPush": false
},
"encryptionConfiguration": {
"encryptionType": "AES256"
}
}
]
}
$ AWS_PROFILE=uah1 aws ecr describe-repositories \
| jq '.repositories | map(.repositoryUri)' \
| grep $TARGET_PROJECT_NAME | grep $TARGET_ENVIRONMENT
"359356595137.dkr.ecr.us-west-2.amazonaws.com/veda-wfs3-registry-dev"
```

5. Login to ECR from awscli:

```bash
$ AWS_PROFILE=uah1 aws ecr describe-repositories \
| jq '.repositories | map(.repositoryUri)' \
| grep $TARGET_PROJECT_NAME | grep $TARGET_ENVIRONMENT \
| xargs -I {} bash -c "AWS_PROFILE=uah1 aws ecr get-login-password | docker login --username AWS --password-stdin {}"
```

6. Now re-tag the local image we built with the remote ECR repository and tag name:

```bash
$ AWS_PROFILE=uah1 aws ecr describe-repositories \
| jq '.repositories | map(.repositoryUri)' \
| grep $TARGET_PROJECT_NAME | grep $TARGET_ENVIRONMENT \
| xargs -I {} docker images --format "{{json . }}" {} \
| grep '"Tag":"latest"' \
| jq '"\(.Repository):\(.Tag)"' \
| xargs -I{} docker tag veda-wfs3-api:latest {}
# check your work locally
$ AWS_PROFILE=uah1 aws ecr describe-repositories \
| jq '.repositories | map(.repositoryUri)' \
| grep $TARGET_PROJECT_NAME | grep $TARGET_ENVIRONMENT \
| xargs -I {} docker images --format "{{json . }}" {} \
| grep '"Tag":"latest"' \
| jq '"\(.Repository):\(.Tag)"' \
| jq
{
"Containers": "N/A",
"CreatedAt": "2022-12-12 08:16:23 -0800 PST",
"CreatedSince": "9 minutes ago",
"Digest": "<none>",
"ID": "a0a6c57e40e8",
"Repository": "359356595137.dkr.ecr.us-west-2.amazonaws.com/veda-wfs3-registry-dev",
"SharedSize": "N/A",
"Size": "887MB",
"Tag": "latest",
"UniqueSize": "N/A",
"VirtualSize": "887.2MB"
}
```

7. Push the image from local to ECR:

```bash
$ AWS_PROFILE=uah1 aws ecr describe-repositories \
| jq '.repositories | map(.repositoryUri)' \
| grep $TARGET_PROJECT_NAME | grep $TARGET_ENVIRONMENT \
| xargs -I {} docker images --format "{{json . }}" {} \
| grep '"Tag":"latest"' \
| jq '"\(.Repository):\(.Tag)"' \
| xargs -I{} docker push {}
# check your remote work
$ AWS_PROFILE=uah1 aws ecr describe-repositories \
| jq '.repositories | map(.repositoryName)' \
| grep $TARGET_PROJECT_NAME | grep $TARGET_ENVIRONMENT \
| AWS_PROFILE=uah1 xargs -I {} aws ecr describe-images --repository-name={}
{
"imageDetails": [
{
"registryId": "359356595137",
"repositoryName": "veda-wfs3-registry-dev",
"imageDigest": "sha256:bf83dd6027aadbf190347529a317966656d875a2aa8b64bbd2cc2589466b68e7",
"imageTags": [
"latest"
],
"imageSizeInBytes": 325163652,
"imagePushedAt": "2022-12-12T08:35:14-08:00",
"imageManifestMediaType": "application/vnd.docker.distribution.manifest.v2+json",
"artifactMediaType": "application/vnd.docker.container.image.v1+json"
}
]
}
```

8. Show your existing clusters:

```bash
$ AWS_PROFILE=uah1 aws ecs list-clusters
{
"clusterArns": [
"arn:aws:ecs:us-west-2:359356595137:cluster/tf-veda-wfs3-service-dev"
]
}
$ AWS_PROFILE=uah1 aws ecs list-clusters \
| jq '.clusterArns[0]' \
| AWS_PROFILE=uah1 xargs -I{} aws ecs describe-clusters --cluster={}
{
"clusters": [
{
"clusterArn": "arn:aws:ecs:us-west-2:359356595137:cluster/tf-veda-wfs3-service-dev",
"clusterName": "tf-veda-wfs3-service-dev",
"status": "ACTIVE",
"registeredContainerInstancesCount": 0,
"runningTasksCount": 0,
"pendingTasksCount": 0,
"activeServicesCount": 1,
"statistics": [],
"tags": [],
"settings": [],
"capacityProviders": [],
"defaultCapacityProviderStrategy": []
}
],
"failures": []
}
```

9. Once it's there, we can force-update the ECS cluster/service/tasks to use it with:

```bash
$ AWS_PROFILE=uah1 aws ecs list-clusters \
| jq '.clusterArns[0]' \
| grep $TARGET_PROJECT_NAME | grep $TARGET_ENVIRONMENT \
| AWS_PROFILE=uah1 xargs -I{} aws ecs describe-clusters --cluster={} \
| jq '.clusters[0].clusterName' \
| AWS_PROFILE=uah1 xargs -I{} aws ecs update-service --cluster {} --service {} --task-definition {} --force-new-deployment > /dev/null
```