Commit

Edited the deploy file; updated the sidebar

ipopescu committed Sep 4, 2023
1 parent 4ed61c9 commit 2a28518

Showing 2 changed files with 127 additions and 96 deletions.
18 changes: 18 additions & 0 deletions config/sidebar.config.js
@@ -274,6 +274,24 @@ module.exports = {
},
items: ["operators/maintenance/archiving-and-restoring", "operators/maintenance/moving-node"],
},
{
type: "category",
label: "AWS Nodes",
collapsible: true,
collapsed: true,
link: {
type: "doc",
id: "operators/aws-nodes/index",
},
items: [
"operators/aws-nodes/deploying",
"operators/aws-nodes/connecting",
"operators/aws-nodes/modules",
"operators/aws-nodes/backup",
"operators/aws-nodes/open-vpn",
"operators/aws-nodes/troubleshooting",
],
},
],
resources: [
"resources/index",
205 changes: 109 additions & 96 deletions source/docs/casper/operators/aws-nodes/1-deploying.md
@@ -1,84 +1,110 @@
---
title: Deploy the Infrastructure
---

# Deploying the AWS Infrastructure

This section presents the development environment, which uses Docker to provide a clean, functional, immutable, and disposable development container.

## Prerequisites

The following programs are prerequisites.

| Program | Version |
| ------------ | --------|
| `terraform` | 1.1.9 |
| `terragrunt` | 0.38.9 |
| `aws-cli` | 2.5.1 |
| `jq` | 1.6 |
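
To confirm that the installed tools match these versions, you can query each program directly. These are the standard version flags for each tool:

```bash
# Print the installed versions to compare against the table above
terraform -version
terragrunt --version
aws --version
jq --version
```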


## Docker Container Structure

Using Docker Compose, create the following folder structure for the agnostic Docker container.

| Folder | Folder Tree |
| -------- | ----------- |
| .agnostic_devcontainer/ | .agnostic_devcontainer/<br/>├── docker-compose.yml<br/>└── Dockerfile |
| docker/ | docker/<br/>├── .bashrc<br/>├── .env<br/>├── requirements.txt<br/>└── entrypoint.sh |
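
As a minimal sketch, the folder layout above could be created with a few shell commands (the file contents themselves are described in the next table):

```bash
# Create the devcontainer and docker helper folders with empty placeholder files
mkdir -p .agnostic_devcontainer docker
touch .agnostic_devcontainer/docker-compose.yml .agnostic_devcontainer/Dockerfile
touch docker/.bashrc docker/.env docker/requirements.txt docker/entrypoint.sh
```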


### Container Files

| File | Description |
| ---- | ----------- |
| .agnostic_devcontainer/docker-compose.yml | Docker Compose configures the development container's setup variables and mounting points. |
| .agnostic_devcontainer/Dockerfile | A text document with instructions for installing the necessary tools. |
| docker/.bashrc | A custom `.bashrc` file that customizes the prompt. |
| docker/.env | Contains version numbers for the tools needed in the setup process. |
| docker/requirements.txt | Contains the `pip` packages and their version for the proposed solution. |
| docker/entrypoint.sh | A script that installs the `pre-commit` tool and configures the timezone inside the development container. |
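
The `.env` file uses the usual `KEY=value` format read by Docker Compose. The variable names and values below are only an illustration and are not the actual contents of the repository's file:

```bash
# Hypothetical docker/.env contents; names and versions are assumptions for illustration only
TERRAFORM_VERSION=1.1.9
TERRAGRUNT_VERSION=0.38.9
AWSCLI_VERSION=2.5.1
JQ_VERSION=1.6
TZ=UTC
```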


### Container Setup Instructions

1. In the repository's `.agnostic_devcontainer/` directory, create a container using the following command:

```bash
docker-compose --env-file ../docker/.env up --build -d
```

This command installs the tools and prerequisites necessary to execute the project in the container.

2. Modify the contents of the `.env` file and change the environment variables at your convenience.

3. Configure your AWS and SSH credentials inside the `~/.aws` and `~/.ssh` folders.

4. To stop the development environment, use the following command:

```bash
docker-compose down
```
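
Once the container is up, you can check its status and open a shell inside it. The service name `devcontainer` below is an assumption; use the service name defined in your `docker-compose.yml`:

```bash
# Run from .agnostic_devcontainer/; list the running service(s)
docker-compose --env-file ../docker/.env ps

# Open an interactive shell inside the development container (service name is assumed)
docker-compose --env-file ../docker/.env exec devcontainer bash
```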

## AWS Credentials

Please follow the instructions in the [Configuration Basics Guide to Configure AWS-CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html). After that, you must create and configure the `casper_{env}` profile, e.g., `casper_testnet`.

> *Note:* Before starting, ensure you have configured access to AWS. Administrator permissions are recommended for this step.

Below is a list of AWS services that the user and profile must be able to access:

| Service | Access Requirement |
| ---------- | ------------------ |
| EC2 | Full access |
| CloudWatch | Full access |
| VPC | Full access |
| S3 | Full access |
| Secrets Manager | Read-only access |
| IAM | Full access |
| SNS | Full access |
| Systems Manager features | Full access |
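
For example, the Testnet profile could be created and verified as follows. The profile name follows the `casper_{env}` convention described above; the prompts will ask for your access key, secret key, default region, and output format:

```bash
# Create the named AWS CLI profile interactively
aws configure --profile casper_testnet

# Confirm the profile resolves to the expected account and user
aws sts get-caller-identity --profile casper_testnet
```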


## Running the IaC

To run the infrastructure as code (IaC), follow these steps.

1. Configure the `terragrunt/environment/{env}/terragrunt.hcl` file based on the environment, replacing values that change per environment, such as `ips_allows`. Possible environments could be *testnet* or *mainnet*.

This file contains the following local variables, which can be modified according to the operator's needs.

| Variable | Type | Description |
| -------------------- | ----------- | ------------------------------------------------------------ |
| account_id | String | AWS account number. |
| aws_region | String | AWS region where to deploy. |
| environment | String | Name of the environment, e.g., `testnet` or `mainnet`. |
| owner | String | Name of the owner, e.g., `casper`. |
| project_name | String | Contains the owner and a short description. |
| vpc_cidr | String | The IPv4 CIDR block for the VPC, e.g., **10.60.0.0/16**. |
| cw_namespace | String | Contains the owner and environment separated with a dash, e.g., `casper-testnet`. |
| profile_settings_aws | String | The configured AWS profile, containing the owner and environment separated with an underscore, e.g., `casper_testnet`. |
| instance_type | String | Instance type. |
| nodes_count | Integer | Number of nodes to start. For this version, this value must be **1**. |
| ips_allows | List | List of IPv4 addresses allowed to connect to the node via SSH. `0.0.0.0/0` represents any source. This variable is valid only when the node is created without a VPN, using the setting `NOT_VPN=true`. |
| sns_notifications | map(object) | List of notification subscribers, which should be added as required. |


**Sample Configuration:**

```bash
locals {
@@ -90,84 +116,71 @@
vpc_cidr = "10.20.0.0/16"
on_codebuild = get_env("ON_CODEBUILD", false)
cw_namespace = "${local.owner}-${local.environment}" #Namespace to create specific metrics and global resource names.
profile_settings_aws = "${local.owner}_${local.environment}" #Specifies a named profile with long-term credentials that the AWS CLI can use to assume a role you specified with the role_arn parameter.
instance_type = "t3.2xlarge" # The T3 instance features 8 vCPUs and 32 GiB of memory.
nodes_count = 1
ips_allows = ["100.52.0.10/32", "100.52.0.11/32"] # List of IPv4 addresses allowed for SSH connection to the node. If you want from any source, enter 0.0.0.0/0. This variable is only valid when the node is created without VPN - NOT_VPN=true.
sns_notifications = {
email1 = {
protocol = "email" # The protocol to use. The possible values are sqs, sms, lambda, and application. HTTP or HTTPS are partially supported; see below. Email is an option but is unsupported.
endpoint = "[email protected]"
endpoint_auto_confirms = true
raw_message_delivery = false
},
email2 = {
protocol = "email" # The protocol to use. The possible values are sqs, sms, lambda, and application. HTTP or HTTPS are partially supported; see below. Email is an option but is unsupported.
endpoint = "[email protected]"
endpoint_auto_confirms = true
raw_message_delivery = false
}
}
}

```

2. Navigate to the directory that matches your environment.

* For Testnet, use:

```bash
cd terragrunt/environment/testnet/
```

* For Mainnet, use:

```bash
cd terragrunt/environment/mainnet/
```

3. Run the following command to plan and validate the Terragrunt configuration. This command prepares the environment, downloads all providers, modules, and dependencies, and reports the number of resources to be provisioned. Finally, it validates that the specified configuration is ready to apply. Depending on your Internet connection and local environment, this operation may take around 30 minutes.

```bash
terragrunt run-all plan
```

When running this command for the first time, answer `yes` to create the S3 bucket where the Terraform state files will be stored. Here is an example output:


```bash
Remote state S3 bucket casper-testnet-tfg-state does not exist or you don't have permission to access it. Would you like Terragrunt to create it? (y/n) yes
```

The command should display the following message:


```bash
Terraform has been successfully initialized!
```
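
If you are scripting this step and prefer to skip interactive prompts, Terragrunt offers a non-interactive flag that assumes `yes` for questions such as the state bucket creation. Availability may depend on your Terragrunt version, so treat this as an optional variant:

```bash
# Answer prompts (such as remote state bucket creation) automatically with "yes"
terragrunt run-all plan --terragrunt-non-interactive
```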

4. Apply the planned infrastructure using the `terragrunt run-all apply` command, either with or without a VPN server, as described in the two subsections below. This operation may take around 15 minutes, depending on your Internet speed and local compute resources.
### Applying the IaC with the VPN server

Run the following Terragrunt command to apply the configuration in all sub-folders:
```bash
terragrunt run-all apply --terragrunt-parallelism 1
```
<details>
<summary>Sample output</summary>
```bash
Group 1
@@ -203,26 +216,25 @@
Group 6
Are you sure you want to run 'terragrunt apply' in each folder of the stack described above? (y/n) Yes
```
</details>
<br></br>
If the command is successful, the result would look like this:
```bash
Apply complete! Resources: 8 added, 0 changed, 0 destroyed.
Outputs:
```

### Applying the IaC without a VPN server
Run the following Terragrunt command to apply the configuration in all sub-folders. Remember to change the ***ips_allows*** variable to the list of IPs that should be able to access the node.
```bash
NOT_VPN=true terragrunt run-all apply --terragrunt-exclude-dir compute/vpn --terragrunt-parallelism 1
```
<details>
<summary>Sample output</summary>
```bash
Group 1
@@ -257,18 +269,19 @@
Group 6
Are you sure you want to run 'terragrunt apply' in each folder of the stack described above? (y/n) Yes
```
</details>
<br></br>
If the command is successful, the result would look like this:
```bash
Apply complete! Resources: 8 added, 0 changed, 0 destroyed.
Outputs:
```
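
After a successful apply, you can re-read the stack outputs at any time from the same environment directory. This uses Terragrunt's standard output command rather than anything specific to this project:

```bash
# Print the outputs of every module in the stack
terragrunt run-all output
```
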
## Destroying the AWS Resources
To remove the AWS resources that were created, navigate to the directory of the appropriate environment, *testnet* or *mainnet*, and run the following `terragrunt run-all destroy` command:
```bash
cd terragrunt/environment/{env}/
terragrunt run-all destroy
```
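
If the environment was created without the VPN server, it seems reasonable to exclude the VPN module when destroying as well, mirroring the apply command shown earlier. This variant is an assumption rather than a command documented in this guide:

```bash
# Assumed counterpart to the NOT_VPN apply: skip the VPN module when destroying
NOT_VPN=true terragrunt run-all destroy --terragrunt-exclude-dir compute/vpn --terragrunt-parallelism 1
```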
