diff --git a/.readme/veda-backend.drawio.svg b/.readme/veda-backend.drawio.svg new file mode 100644 index 00000000..9a85b369 --- /dev/null +++ b/.readme/veda-backend.drawio.svg @@ -0,0 +1,4 @@ + + + +
[draw.io architecture diagram -- text labels: Data Store, PgSTAC, TiTiler, STAC-API, veda-stac-ingestor, veda-data-pipelines, Analysis Platform, VPC, Dashboard (veda-ui & veda-config), Scientists & Public, Scientists & Data Providers]
\ No newline at end of file diff --git a/.readme/veda-backend.drawio.xml b/.readme/veda-backend.drawio.xml new file mode 100644 index 00000000..43d3f2aa --- /dev/null +++ b/.readme/veda-backend.drawio.xml @@ -0,0 +1,2 @@ + +[compressed draw.io mxfile payload omitted] \ No newline at end of file diff --git a/README.md b/README.md index 586d2087..986a125c 100644 --- a/README.md +++ b/README.md @@ -9,9 +9,14 @@ The primary tools employed in the [eoAPI demo](https://github.com/developmentsee - [titiler](https://github.com/developmentseed/titiler) - [titiler-pgstac](https://github.com/stac-utils/titiler-pgstac) +## VEDA backend context +![architecture diagram](.readme/veda-backend.drawio.svg) + +Veda backend is the central index of the [VEDA ecosystem](#veda-ecosystem). This project provides the infrastructure for a PgSTAC database, STAC API, and TiTiler.
This infrastructure is used to discover, access, and visualize the Analysis Ready Cloud Optimized (ARCO) assets of the VEDA Data Store. + ## Deployment -This repo includes CDK scripts to deploy a PgStac AWS RDS database and other resources to support APIs maintained by the VEDA backend development team. +This project uses an AWS CDK [CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) stack to deploy a full AWS virtual private cloud environment with a database and supporting lambda function APIs. The deployment constructs, database, and API services are highly configurable. This section provides basic deployment instructions as well as support for customization. ### Tooling & supporting documentation @@ -20,38 +25,25 @@ This repo includes CDK scripts to deploy a PgStac AWS RDS database and other res ### Environment variables -An [.example.env](.example.env) template is supplied for for local deployments. If updating an existing deployment, it is essential to check the most current values for these variables by fetching these values from AWS Secrets Manager. The environment secrets are named `-backend/-env`, for example `veda-backend/dev-env`. +An [.example.env](.example.env) template is supplied for local deployments. If updating an existing deployment, it is essential to check the most current values for these variables by fetching them from AWS Secrets Manager. The environment secrets are named `<app-name>-<stage>-env`, for example `veda-backend-dev-env`. +> **Warning** The environment variables stored as AWS secrets are manually maintained and should be reviewed before deploying updates to existing stacks. ### Fetch environment variables using AWS CLI -To retrieve the variables for a stage that has been previously deployed, the secrets manager can be used to quickly populate an .env file. -> Note: The environment variables stored as AWS secrets are manually maintained and should be reviewed before using.
+To retrieve the variables for a previously deployed stage, use [scripts/sync-env-local.sh](scripts/sync-env-local.sh) to quickly populate an .env file from AWS Secrets Manager. ``` -export AWS_SECRET_ID=-backend/-env - -aws secretsmanager get-secret-value --secret-id ${AWS_SECRET_ID} --query SecretString --output text | jq -r 'to_entries|map("\(.key)=\(.value|tostring)")|.[]' > .env +./scripts/sync-env-local.sh <app-name>-<stage>-env ``` - +### Basic environment variables | Name | Explanation | | --- | --- | -| `APP_NAME` | Optional app name used to name stack and resources, defaults to `veda` | +| `APP_NAME` | Optional app name used to name stack and resources, defaults to `veda-backend` | | `STAGE` | **REQUIRED** Deployment stage used to name stack and resources, i.e. `dev`, `staging`, `prod` | -| `VPC_ID` | Optional resource identifier of VPC, if none a new VPC with public and private subnets will be provisioned. | -| `PERMISSIONS_BOUNDARY_POLICY_NAME` | Optional name of IAM policy to define stack permissions boundary | -| `CDK_DEFAULT_ACCOUNT` | When deploying from a local machine the AWS account id is required to deploy to an exiting VPC | -| `CDK_DEFAULT_REGION` | When deploying from a local machine the AWS region id is required to deploy to an exiting VPC | | `VEDA_DB_PGSTAC_VERSION` | **REQUIRED** version of PgStac database, i.e. 0.5 | | `VEDA_DB_SCHEMA_VERSION` | **REQUIRED** The version of the custom veda-backend schema, i.e. 0.1.1 | | `VEDA_DB_SNAPSHOT_ID` | **Once used always REQUIRED** Optional RDS snapshot identifier to initialize RDS from a snapshot | -| `VEDA_DB_PRIVATE_SUBNETS` | Optional boolean to deploy database to private subnet | -| `VEDA_DOMAIN_HOSTED_ZONE_ID` | Optional Route53 zone identifier if using a custom domain name | -| `VEDA_DOMAIN_HOSTED_ZONE_NAME` | Optional custom domain name, i.e.
veda-backend.xyz | -| `VEDA_DOMAIN_ALT_HOSTED_ZONE_ID` | Optional second Route53 zone identifier if using a custom domain name | -| `VEDA_DOMAIN_ALT_HOSTED_ZONE_NAME` | Optional second custom domain name, i.e. alt-veda-backend.xyz | -| `VEDA_DOMAIN_API_PREFIX` | Optional domain prefix override supports using a custom prefix instead of the STAGE variabe (an alternate version of the stack can be deployed with a unique STAGE=altprod and after testing prod API traffic can be cut over to the alternate version of the stack by setting the prefix to prod) | -| `VEDA_RASTER_ENABLE_MOSAIC_SEARCH` | Optional deploy the raster API with the mosaic/list endpoint TRUE/FALSE | -| `VEDA_RASTER_DATA_ACCESS_ROLE_ARN` | Optional arn of IAM Role to be assumed by raster-api for S3 bucket data access, if not provided default role for the lambda construct is used | +> **Note** See [Advanced Configuration](docs/advanced_configuration.md) for details about custom configuration options. ### Deploying to the cloud @@ -71,13 +63,9 @@ python3 -m pip install -e ".[dev,deploy,test]" ``` # Review what infrastructure changes your deployment will cause cdk diff -# Execute deployment, security changes will require approval for deployment +# Execute deployment and stand by--security changes will require approval for deployment cdk deploy ``` - -#### Check CloudFormation deployment status - -After logging in to the console at https://.signin.aws.amazon.com/console the status of the CloudFormation stack can be viewed here: https://.console.aws.amazon.com/cloudformation/home. ## Deleting the CloudFormation stack @@ -87,79 +75,11 @@ If this is a development stack that is safe to delete, you can delete the stack 2. Detach the Internet Gateway (IGW) from the VPC and delete it. 3. If this stack created a new VPC, delete the VPC (this should delete a subnet and security group too).
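The teardown above can also be scripted. A minimal sketch with the CDK CLI and AWS CLI, assuming the stack is named `<app-name>-<stage>` (confirm the actual name with `cdk list`; every `<...>` value below is a placeholder, not a real identifier):

```bash
# Tear down the CloudFormation stack created by this project
cdk destroy <app-name>-<stage>

# If deletion stalls on network resources, mirror the manual steps above
aws ec2 delete-nat-gateway --nat-gateway-id <nat-gateway-id>
aws ec2 detach-internet-gateway --internet-gateway-id <igw-id> --vpc-id <vpc-id>
aws ec2 delete-internet-gateway --internet-gateway-id <igw-id>
aws ec2 delete-vpc --vpc-id <vpc-id>
```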
-## Deployment to MCP and/or an existing VPC - -### MCP access - - At this time, this project requires that anyone deploying to the Mission Cloud Platform (MCP) environments should have gone through a NASA credentialing process and then submitted and gotten approval for access to the VEDA project on MCP. +## Custom deployments -### MCP and existing VPC endpoint requirements - -VPC interface endpoints must be configured to allow app components to connect to other services within the VPC and gateway endpoints need to be configured for external connections. - -| service-name | vpc-endpoint-type | comments | -| -- | -- | -- | -| secretsmanager | Interface | security group configuration recommendations below | -| logs | Interface | cloudwatch-logs, security group configuration recommendations below | -| s3 | Gateway | | -| dynamodb | Gateway | required if using DynamoDB streams | - -### Create `Interface` VPC endpoints -Create a security group for the VPC Interface endpoints ([AWS docs](https://docs.aws.amazon.com/cli/latest/userguide/cli-services-ec2-sg.html)) -```bash -aws ec2 create-security-group --vpc-id --group-name vpc-interface-endpoints --description "security group for vpc interface endpoints" -``` -Configure ingress policy for this SG (the egress is configured for 'free' when a new SG is created) -```bash -# Lookup CidrBlock -aws ec2 describe-vpcs --vpc-ids $VPC_ID | jq -r '.Vpcs[].CidrBlock' - -aws ec2 authorize-security-group-ingress --group-id --protocol tcp --port 443 --cidr -``` -Create VPC Interface endpoints -``` -# Choose private subnets (example subnet was generated by aws-cdk) -aws ec2 describe-subnets --filters Name=vpc-id,Values= Name=tag:aws-cdk:subnet-name,Values=private | jq -r '.Subnets[].SubnetId' - -# Secrets manager endpoint -aws ec2 create-vpc-endpoint \ ---vpc-id \ ---vpc-endpoint-type Interface \ ---service-name com.amazonaws.us-west-2.secretsmanager \ ---subnet-ids \ ---security-group-ids - -# Cloudwatch logs endpoint uses same security 
group cfg -aws ec2 create-vpc-endpoint \ ---vpc-id \ ---vpc-endpoint-type Interface \ ---service-name com.amazonaws.us-west-2.logs \ ---subnet-ids \ ---security-group-ids -``` - -### Create `Gateway` VPC endpoints -``` -# List route tables for VPC -aws ec2 describe-route-tables --filters Name=vpc-id,Values= | jq -r '.RouteTables[].RouteTableId' - -# Create Gateway endpoint for S3 -aws ec2 create-vpc-endpoint \ ---vpc-id \ ---vpc-endpoint-type Gateway \ ---service-name com.amazonaws.us-west-2.s3 \ ---route-table-ids - -# Optional create Gateway endpoint for DynamoDB -aws ec2 create-vpc-endpoint \ ---vpc-id \ ---vpc-endpoint-type Gateway \ ---service-name com.amazonaws.us-west-2.dynamodb \ ---route-table-ids -``` - -## [OPTIONAL] Deploy standalone base infrastructure -For convenience, [standalone base infrastructure](standalone_base_infrastructure/README.md#standalone-base-infrastructure) scripts are provided to deploy base infrastructure to simulate deployment in a controlled environment. +The default settings for this project generate a complete AWS environment including a VPC and gateways for the stack. See this guidance for adjusting the veda-backend stack for existing managed and/or shared AWS environments. +- [Deploy to an existing managed AWS environment](docs/deploying_to_existing_environments.md) +- [Creating a shared base VPC and AWS environment](docs/deploying_to_existing_environments.md#optional-deploy-standalone-base-infrastructure) ## Local Docker deployment @@ -174,15 +94,34 @@ docker compose down # Operations -## Ingesting metadata -STAC records should be loaded using [pypgstac](https://github.com/stac-utils/pgstac#pypgstac). The [cloud-optimized-data-pipelines](https://github.com/NASA-IMPACT/cloud-optimized-data-pipelines) project provides examples of cloud pipelines that use pypgstac to load data into a STAC catalog, as well as examples of transforming data to cloud optimized formats. 
+## Adding new data to veda-backend + +> **Warning** PgSTAC records should be loaded into the database using [pypgstac](https://github.com/stac-utils/pgstac#pypgstac) for proper indexing and partitioning. + +The VEDA ecosystem includes tools specifically created for loading PgSTAC records and optimizing data assets. The [veda-data-pipelines](https://github.com/NASA-IMPACT/veda-data-pipelines) project provides examples of cloud pipelines that transform data to cloud optimized formats, generate STAC metadata, and submit records for publication to the veda-backend database using the [veda-stac-ingestor](https://github.com/NASA-IMPACT/veda-stac-ingestor). ## Support scripts Support scripts are provided for manual system operations. - [Rotate pgstac password](support_scripts/README.md#rotate-pgstac-password) -## Usage examples: -https://github.com/NASA-IMPACT/veda-documentation +# VEDA ecosystem + +## Projects +| Name | Explanation | +| --- | --- | +| **veda-backend** | Central index (database) and APIs for recording, discovering, viewing, and using VEDA assets | +| [**veda-config**](https://github.com/NASA-IMPACT/veda-config) | Configuration for viewing VEDA assets in dashboard UI | +| [**veda-ui**](https://github.com/NASA-IMPACT/veda-ui) | Dashboard UI for viewing and analyzing VEDA assets | +| [**veda-stac-ingestor**](https://github.com/NASA-IMPACT/veda-stac-ingestor) | Entry-point for users/services to add new records to database | +| [**veda-data-pipelines**](https://github.com/NASA-IMPACT/veda-data-pipelines) | Cloud-optimize data assets and submit records for publication to veda-stac-ingestor | +| [**veda-documentation**](https://github.com/NASA-IMPACT/veda-documentation) | Documentation repository for end users of VEDA ecosystem data and tools | + +## VEDA usage examples + +### [VEDA documentation](https://nasa-impact.github.io/veda-documentation/) + +### [VEDA dashboard](https://www.earthdata.nasa.gov/dashboard) + # STAC community resources ## STAC browser diff
--git a/docs/advanced_configuration.md b/docs/advanced_configuration.md new file mode 100644 index 00000000..9ea34125 --- /dev/null +++ b/docs/advanced_configuration.md @@ -0,0 +1,24 @@ +# Advanced Configuration +The constructs and applications in this project are configured using [pydantic](https://docs.pydantic.dev/usage/settings/). The settings are defined in config.py files stored alongside the associated construct or application--for example the settings for the RDS PostgreSQL construct are defined in [database/infrastructure/config.py](../database/infrastructure/config.py); the settings for the TiTiler API are defined in [raster_api/runtime/src/config.py](../raster_api/runtime/src/config.py). For custom configuration, use environment variables to override the pydantic defaults. + +## Selected configuration variables +Environment variables for specific VEDA backend components are prefixed, for example database configuration variables are prefixed `VEDA_DB`. See the config.py file in each construct for the appropriate prefix. + +| Name | Explanation | +| --- | --- | +| `APP_NAME` | Optional app name used to name stack and resources, defaults to `veda` | +| `STAGE` | **REQUIRED** Deployment stage used to name stack and resources, i.e. `dev`, `staging`, `prod` | +| `VPC_ID` | Optional resource identifier of VPC, if none a new VPC with public and private subnets will be provisioned. | +| `PERMISSIONS_BOUNDARY_POLICY_NAME` | Optional name of IAM policy to define stack permissions boundary | +| `CDK_DEFAULT_ACCOUNT` | When deploying from a local machine the AWS account id is required to deploy to an existing VPC | +| `CDK_DEFAULT_REGION` | When deploying from a local machine the AWS region id is required to deploy to an existing VPC | +| `VEDA_DB_PGSTAC_VERSION` | **REQUIRED** version of PgStac database, i.e. 0.5 | +| `VEDA_DB_SCHEMA_VERSION` | **REQUIRED** The version of the custom veda-backend schema, i.e.
0.1.1 | +| `VEDA_DB_SNAPSHOT_ID` | **Once used always REQUIRED** Optional RDS snapshot identifier to initialize RDS from a snapshot | +| `VEDA_DB_PRIVATE_SUBNETS` | Optional boolean to deploy database to private subnet | +| `VEDA_DOMAIN_HOSTED_ZONE_ID` | Optional Route53 zone identifier if using a custom domain name | +| `VEDA_DOMAIN_HOSTED_ZONE_NAME` | Optional custom domain name, i.e. veda-backend.xyz | +| `VEDA_DOMAIN_ALT_HOSTED_ZONE_ID` | Optional second Route53 zone identifier if using a custom domain name | +| `VEDA_DOMAIN_ALT_HOSTED_ZONE_NAME` | Optional second custom domain name, i.e. alt-veda-backend.xyz | +| `VEDA_DOMAIN_API_PREFIX` | Optional domain prefix override supports using a custom prefix instead of the STAGE variable (an alternate version of the stack can be deployed with a unique STAGE=altprod and after testing prod API traffic can be cut over to the alternate version of the stack by setting the prefix to prod) | +| `VEDA_RASTER_ENABLE_MOSAIC_SEARCH` | Optionally deploy the raster API with the mosaic/list endpoint (TRUE/FALSE) | +| `VEDA_RASTER_DATA_ACCESS_ROLE_ARN` | Optional ARN of an IAM role to be assumed by raster-api for S3 bucket data access; if not provided, the default role for the lambda construct is used | \ No newline at end of file diff --git a/docs/deploying_to_existing_environments.md b/docs/deploying_to_existing_environments.md new file mode 100644 index 00000000..2f167cb5 --- /dev/null +++ b/docs/deploying_to_existing_environments.md @@ -0,0 +1,75 @@ +# Deploying to Existing Environments + +## Deployment to MCP and/or an existing VPC + +### MCP access + + At this time, anyone deploying to the Mission Cloud Platform (MCP) environments must have completed a NASA credentialing process and then requested and received approval for access to the VEDA project on MCP.
+ +### MCP and existing VPC endpoint requirements + +VPC interface endpoints must be configured to allow app components to connect to other services within the VPC, and gateway endpoints must be configured for external connections. + +| service-name | vpc-endpoint-type | comments | +| -- | -- | -- | +| secretsmanager | Interface | security group configuration recommendations below | +| logs | Interface | cloudwatch-logs, security group configuration recommendations below | +| s3 | Gateway | | +| dynamodb | Gateway | required if using DynamoDB streams | + +### Create `Interface` VPC endpoints +Create a security group for the VPC Interface endpoints ([AWS docs](https://docs.aws.amazon.com/cli/latest/userguide/cli-services-ec2-sg.html)) +```bash +aws ec2 create-security-group --vpc-id <vpc-id> --group-name vpc-interface-endpoints --description "security group for vpc interface endpoints" +``` +Configure ingress policy for this SG (the egress is configured for 'free' when a new SG is created) +```bash +# Lookup CidrBlock +aws ec2 describe-vpcs --vpc-ids $VPC_ID | jq -r '.Vpcs[].CidrBlock' + +aws ec2 authorize-security-group-ingress --group-id <endpoints-sg-id> --protocol tcp --port 443 --cidr <vpc-cidr-block> +``` +Create VPC Interface endpoints +```bash +# Choose private subnets (example subnet was generated by aws-cdk) +aws ec2 describe-subnets --filters Name=vpc-id,Values=<vpc-id> Name=tag:aws-cdk:subnet-name,Values=private | jq -r '.Subnets[].SubnetId' + +# Secrets manager endpoint +aws ec2 create-vpc-endpoint \ +--vpc-id <vpc-id> \ +--vpc-endpoint-type Interface \ +--service-name com.amazonaws.us-west-2.secretsmanager \ +--subnet-ids <private-subnet-ids> \ +--security-group-ids <endpoints-sg-id> + +# Cloudwatch logs endpoint uses same security group cfg +aws ec2 create-vpc-endpoint \ +--vpc-id <vpc-id> \ +--vpc-endpoint-type Interface \ +--service-name com.amazonaws.us-west-2.logs \ +--subnet-ids <private-subnet-ids> \ +--security-group-ids <endpoints-sg-id> +``` + +### Create `Gateway` VPC endpoints +```bash +# List route tables for VPC +aws ec2 describe-route-tables --filters Name=vpc-id,Values=<vpc-id> | jq -r
'.RouteTables[].RouteTableId' + +# Create Gateway endpoint for S3 +aws ec2 create-vpc-endpoint \ +--vpc-id <vpc-id> \ +--vpc-endpoint-type Gateway \ +--service-name com.amazonaws.us-west-2.s3 \ +--route-table-ids <route-table-ids> + +# Optional: create Gateway endpoint for DynamoDB +aws ec2 create-vpc-endpoint \ +--vpc-id <vpc-id> \ +--vpc-endpoint-type Gateway \ +--service-name com.amazonaws.us-west-2.dynamodb \ +--route-table-ids <route-table-ids> +``` + +## [OPTIONAL] Deploy standalone base infrastructure +For convenience, [standalone base infrastructure](standalone_base_infrastructure/README.md#standalone-base-infrastructure) scripts are provided to deploy base infrastructure to simulate deployment in a controlled environment. diff --git a/scripts/sync-env-local.sh b/scripts/sync-env-local.sh new file mode 100755 index 00000000..98c4ecaf --- /dev/null +++ b/scripts/sync-env-local.sh @@ -0,0 +1,5 @@ +#!/usr/bin/env bash +# Use this script to load environment variables for a deployment from AWS Secrets Manager + +echo "Loading environment secrets from $1" +aws secretsmanager get-secret-value --secret-id "$1" --query SecretString --output text | jq -r 'to_entries|map("\(.key)=\(.value|tostring)")|.[]' > .env
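The jq filter in the script flattens the secret's JSON key/value map into `KEY=value` lines for the .env file. It can be sanity-checked locally without AWS access (the JSON payload below is illustrative, not a real secret):

```shell
# Apply the same jq filter the script uses to a sample SecretString payload
echo '{"STAGE":"dev","APP_NAME":"veda-backend"}' \
  | jq -r 'to_entries|map("\(.key)=\(.value|tostring)")|.[]'
# Prints:
# STAGE=dev
# APP_NAME=veda-backend
```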