Clean up DNS parameters etc
cvlc committed Feb 12, 2022
1 parent 4e0c048 commit 5fea1b8
Showing 9 changed files with 41 additions and 45 deletions.
16 changes: 7 additions & 9 deletions README.md
@@ -48,9 +48,7 @@ The file `variables.tf` declares the Terraform variables required to run this st
```
module "etcd3-terraform" {
source = "github.com/ondat/etcd3-terraform"
key_pair_public_key = "ssh-rsa..."
ssh_cidrs = ["10.2.3.4/32"] # ssh jumpbox
dns = { "domain_name": "mycompany.int" }
dns = "mycompany.int"
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
@@ -93,7 +91,7 @@ module "etcd3-terraform" {
source = "github.com/ondat/etcd3-terraform"
key_pair_public_key = "ssh-rsa..."
ssh_cidrs = ["10.2.3.4/32"] # ssh jumpbox
dns = { "domain_name": "mycompany.int" }
dns = "mycompany.int"
client_cidrs = ["10.3.0.0/16"] # k8s cluster
@@ -149,7 +147,7 @@ etcd is configured with a 100GB data disk per node on Amazon EBS SSDs by default

For further details of what these values and settings mean, refer to [etcd's official documentation](https://etcd.io/docs/v3.5/op-guide/maintenance/).

When conducting upgrades, be aware that changes to the `cloud-init` configuration do not trigger re-creation of the nodes - this is a conscious decision taken to avoid inadvertently destroying quorum during an update. Make any necessary changes, then use `terraform destroy -target=...` and `terraform apply -target=...` on each ASG/launch-group individually to roll them in series without destroying quorum, checking each time that the new node has rejoined the cluster before deleting the old one.
When conducting upgrades or maintenance such as expanding storage, make any necessary changes, then use `terraform destroy -target=...` and `terraform apply -target=...` on each ASG/launch-group individually to roll them in series without destroying quorum, checking each time that the new node has rejoined the cluster before deleting the old one.
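A minimal sketch of rolling a single node this way, assuming the module is instantiated as `module "etcd3-terraform"` (as in the example above) and the per-node resources keep their default names from `asg.tf` (`aws_autoscaling_group.default` and `aws_launch_configuration.default`, indexed by node); adjust the addresses to match your own state:

```
# Roll node 0 only; repeat for each index in turn, never more than one at a time.
terraform destroy \
  -target='module.etcd3-terraform.aws_autoscaling_group.default[0]' \
  -target='module.etcd3-terraform.aws_launch_configuration.default[0]'

terraform apply \
  -target='module.etcd3-terraform.aws_autoscaling_group.default[0]' \
  -target='module.etcd3-terraform.aws_launch_configuration.default[0]'

# Verify the replacement peer has rejoined the cluster (e.g. `etcdctl member list`)
# before repeating with index 1, then 2.
```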

## How to run etcdctl 🔧
We presume that whatever system you choose to run these commands on can connect to the NLB (i.e. if you're using a private subnet, your client machine is within the VPC or connected via a VPN).
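For example, a connectivity sanity check might look like the sketch below. It assumes etcdctl v3 is installed, the CA and client certificate outputs have been saved locally (here as `./ca.pem`, `./client.pem` and `./client.key`; the latter two paths are hinted at by the output descriptions), Terraform is recent enough for `output -raw`, and the NLB listener uses the standard etcd client port 2379 - adjust the port if your listener differs:

```
# Illustrative only: the file paths and client port here are assumptions.
export ETCDCTL_API=3
etcdctl \
  --endpoints="https://$(terraform output -raw lb_address):2379" \
  --cacert=./ca.pem \
  --cert=./client.pem \
  --key=./client.key \
  endpoint health
```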
@@ -480,6 +478,7 @@ No requirements.
| [aws_cloudwatch_event_target.lambda-cloudwatch-dns-service-autoscaling](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_event_target) | resource |
| [aws_cloudwatch_event_target.lambda-cloudwatch-dns-service-ec2](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_event_target) | resource |
| [aws_iam_instance_profile.default](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_instance_profile) | resource |
| [aws_iam_policy.default](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy) | resource |
| [aws_iam_role.default](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role) | resource |
| [aws_iam_role.lambda-cloudwatch-dns-service](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role) | resource |
| [aws_iam_role_policy.lambda-cloudwatch-dns-service](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy) | resource |
@@ -529,13 +528,13 @@ No requirements.
| <a name="input_associate_public_ips"></a> [associate\_public\_ips](#input\_associate\_public\_ips) | Whether to associate public IPs with etcd instances (suggest false for security) | `string` | `"false"` | no |
| <a name="input_client_cidrs"></a> [client\_cidrs](#input\_client\_cidrs) | CIDRs to allow client access to etcd | `list` | <pre>[<br> "10.0.0.0/8"<br>]</pre> | no |
| <a name="input_cluster_size"></a> [cluster\_size](#input\_cluster\_size) | Number of etcd nodes to launch | `number` | `3` | no |
| <a name="input_dns"></a> [dns](#input\_dns) | Domain to install etcd | `map(string)` | <pre>{<br> "domain_name": "mycompany.int"<br>}</pre> | no |
| <a name="input_environment"></a> [environment](#input\_environment) | Target environment, used to apply tags | `string` | `"development"` | no |
| <a name="input_dns"></a> [dns](#input\_dns) | Private, internal domain name to generate for etcd | `string` | `"mycompany.int"` | no |
| <a name="input_ebs_bootstrap_binary_url"></a> [ebs\_bootstrap\_binary\_url](#input\_ebs\_bootstrap\_binary\_url) | Custom URL from which to download the ebs-bootstrap binary | `any` | `null` | no |
| <a name="input_environment"></a> [environment](#input\_environment) | Target environment, used to apply tags | `string` | `"development"` | no |
| <a name="input_etcd_url"></a> [etcd\_url](#input\_etcd\_url) | Custom URL from which to download the etcd tgz | `any` | `null` | no |
| <a name="input_etcd_version"></a> [etcd\_version](#input\_etcd\_version) | etcd version to install | `string` | `"3.5.1"` | no |
| <a name="input_instance_type"></a> [instance\_type](#input\_instance\_type) | AWS instance type, at least c5a.large is recommended. etcd suggest m4.large. | `string` | `"c5a.large"` | no |
| <a name="input_key_pair_public_key"></a> [key\_pair\_public\_key](#input\_key\_pair\_public\_key) | Public key for SSH access | `any` | n/a | yes |
| <a name="input_key_pair_public_key"></a> [key\_pair\_public\_key](#input\_key\_pair\_public\_key) | Public key for SSH access | `string` | `""` | no |
| <a name="input_nlb_internal"></a> [nlb\_internal](#input\_nlb\_internal) | 'true' to expose the NLB internally only, 'false' to expose it to the internet | `bool` | `true` | no |
| <a name="input_restore_snapshot_ids"></a> [restore\_snapshot\_ids](#input\_restore\_snapshot\_ids) | Map of of the snapshots to use to restore etcd data storage - eg. {0: "snap-abcdef", 1: "snap-fedcba", 2: "snap-012345"} | `map(string)` | `{}` | no |
| <a name="input_role"></a> [role](#input\_role) | Role name used for internal logic | `string` | `"etcd"` | no |
@@ -552,4 +551,3 @@ No requirements.
| <a name="output_client_cert"></a> [client\_cert](#output\_client\_cert) | Client certificate to use to authenticate with etcd (also see ./client.pem) |
| <a name="output_client_key"></a> [client\_key](#output\_client\_key) | Client private key to use to authenticate with etcd (also see ./client.key) |
| <a name="output_lb_address"></a> [lb\_address](#output\_lb\_address) | Load balancer address for use by clients |

22 changes: 11 additions & 11 deletions asg.tf
@@ -1,11 +1,11 @@
resource "aws_launch_configuration" "default" {
count = var.cluster_size
name_prefix = "peer-${count.index}.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns["domain_name"]}-"
name_prefix = "peer-${count.index}.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns}-"
image_id = var.ami != "" ? var.ami : data.aws_ami.ami.id
instance_type = var.instance_type
ebs_optimized = true
iam_instance_profile = aws_iam_instance_profile.default[count.index].id
key_name = aws_key_pair.default.key_name
key_name = var.key_pair_public_key == "" ? null : aws_key_pair.default[0].key_name
enable_monitoring = false
associate_public_ip_address = var.associate_public_ips
security_groups = [aws_security_group.default.id]
@@ -18,10 +18,10 @@ resource "aws_launch_configuration" "default" {

etcd_member_unit = templatefile("${path.module}/cloudinit/etcd_member_unit", {
peer_name = "peer-${count.index}"
discovery_domain_name = "${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns["domain_name"]}"
discovery_domain_name = "${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns}"
cluster_name = var.role
}),
etcd_endpoint = "peer-${count.index}.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns["domain_name"]}"
etcd_endpoint = "peer-${count.index}.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns}"
ca_file = tls_self_signed_cert.ca.cert_pem,
peer_cert_file = tls_locally_signed_cert.peer[count.index].cert_pem,
peer_key_file = tls_private_key.peer[count.index].private_key_pem,
@@ -33,13 +33,13 @@ resource "aws_launch_configuration" "default" {

lifecycle {
create_before_destroy = true
ignore_changes = [vpc_classic_link_security_groups, user_data]
ignore_changes = [vpc_classic_link_security_groups]
}
}

resource "aws_autoscaling_group" "default" {
count = var.cluster_size
name = "peer-${count.index}.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns["domain_name"]}"
name = "peer-${count.index}.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns}"
max_size = 1
min_size = 1
desired_capacity = 1
@@ -52,12 +52,12 @@ resource "aws_autoscaling_group" "default" {
wait_for_capacity_timeout = "0"
tag {
key = "Name"
value = "peer-${count.index}.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns["domain_name"]}"
value = "peer-${count.index}.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns}"
propagate_at_launch = true
}
tag {
key = "Group"
value = "peer.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns["domain_name"]}"
value = "peer.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns}"
propagate_at_launch = true
}
tag {
@@ -80,7 +80,7 @@ resource "aws_autoscaling_group" "default" {

tag {
key = "r53-domain-name"
value = "${var.environment}.${var.dns["domain_name"]}"
value = "${var.environment}.${var.dns}"
propagate_at_launch = false
}

@@ -100,11 +100,11 @@ data "aws_subnet" "target" {

module "attached-ebs" {
source = "github.com/ondat/etcd3-bootstrap/terraform/modules/attached_ebs"
group = "peer.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns["domain_name"]}"
group = "peer.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns}"
ebs_bootstrap_binary_url = var.ebs_bootstrap_binary_url
attached_ebs = {
for i in range(var.cluster_size) :
"data-${i}.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns["domain_name"]}" => {
"data-${i}.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns}" => {
size = var.ssd_size
availability_zone = element(data.aws_subnet.target, i)["availability_zone"]
encrypted = true
10 changes: 5 additions & 5 deletions certs.tf
@@ -35,11 +35,11 @@ resource "tls_private_key" "peer" {
resource "tls_cert_request" "peer" {
key_algorithm = "ECDSA"
private_key_pem = tls_private_key.peer[count.index].private_key_pem
dns_names = ["peer-${count.index}.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns["domain_name"]}", "${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns["domain_name"]}"]
dns_names = ["peer-${count.index}.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns}", "${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns}"]
count = var.cluster_size

subject {
common_name = "peer-${count.index}.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns["domain_name"]}"
common_name = "peer-${count.index}.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns}"
organization = "Automated via Terraform"
}
}
@@ -70,11 +70,11 @@ resource "tls_private_key" "server" {
resource "tls_cert_request" "server" {
key_algorithm = "ECDSA"
private_key_pem = tls_private_key.server[count.index].private_key_pem
dns_names = ["peer-${count.index}.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns["domain_name"]}", "${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns["domain_name"]}"]
dns_names = ["peer-${count.index}.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns}", "${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns}"]
count = var.cluster_size

subject {
common_name = "peer-${count.index}.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns["domain_name"]}"
common_name = "peer-${count.index}.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns}"
organization = "Automated via Terraform"
}
}
@@ -104,7 +104,7 @@ resource "tls_private_key" "client" {
resource "tls_cert_request" "client" {
key_algorithm = "ECDSA"
private_key_pem = tls_private_key.client.private_key_pem
dns_names = ["${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns["domain_name"]}"]
dns_names = ["${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns}"]

subject {
common_name = "client"
5 changes: 3 additions & 2 deletions iam.tf
@@ -1,17 +1,18 @@
resource "aws_key_pair" "default" {
count = var.key_pair_public_key == "" ? 0 : 1
key_name = var.role
public_key = var.key_pair_public_key
}

resource "aws_iam_policy" "default" {
name = "${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns["domain_name"]}"
name = "${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns}"
path = "/"
description = "Allow data volume management for instances"
policy = module.attached-ebs.iam_role_policy_document
}

resource "aws_iam_role" "default" {
name = "${count.index}.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns["domain_name"]}"
name = "${count.index}.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns}"
count = var.cluster_size

assume_role_policy = <<EOF
6 changes: 3 additions & 3 deletions lambda.tf
@@ -1,5 +1,5 @@
resource "aws_iam_role" "lambda-cloudwatch-dns-service" {
name = "lambda-dns-service.${data.aws_region.current.name}.i.${var.environment}.${var.dns["domain_name"]}"
name = "lambda-dns-service.${data.aws_region.current.name}.i.${var.environment}.${var.dns}"

assume_role_policy = <<EOF
{
@@ -19,7 +19,7 @@ EOF
}

resource "aws_iam_role_policy" "lambda-cloudwatch-dns-service" {
name = "lambda-cloudwatch-dns-service.${data.aws_region.current.name}.i.${var.environment}.${var.dns["domain_name"]}"
name = "lambda-cloudwatch-dns-service.${data.aws_region.current.name}.i.${var.environment}.${var.dns}"
role = aws_iam_role.lambda-cloudwatch-dns-service.name

lifecycle {
@@ -88,7 +88,7 @@ resource "aws_lambda_function" "cloudwatch-dns-service" {
environment {
variables = {
HOSTED_ZONE_ID = aws_route53_zone.default.id
DOMAIN = "i.${var.environment}.${var.dns["domain_name"]}"
DOMAIN = "i.${var.environment}.${var.dns}"
}
}
}
4 changes: 2 additions & 2 deletions nlb.tf
@@ -5,7 +5,7 @@ resource "aws_lb" "nlb" {
internal = var.nlb_internal

tags = {
Name = "${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns["domain_name"]}"
Name = "${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns}"
environment = var.environment
role = var.role
}
@@ -38,7 +38,7 @@ resource "aws_lb_target_group" "https" {

resource "aws_route53_record" "nlb" {
zone_id = aws_route53_zone.default.id
name = "${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns["domain_name"]}"
name = "${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns}"
type = "A"

alias {
8 changes: 4 additions & 4 deletions route53.tf
@@ -1,5 +1,5 @@
resource "aws_route53_zone" "default" {
name = "${var.environment}.${var.dns["domain_name"]}"
name = "${var.environment}.${var.dns}"
vpc {
vpc_id = data.aws_vpc.target.id
}
@@ -8,15 +8,15 @@ resource "aws_route53_zone" "default" {

resource "aws_route53_record" "defaultclient" {
zone_id = aws_route53_zone.default.id
name = "_etcd-client-ssl._tcp.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns["domain_name"]}"
name = "_etcd-client-ssl._tcp.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns}"
type = "SRV"
ttl = "1"
records = formatlist("0 0 2380 %s", aws_route53_record.peers.*.name)
}

resource "aws_route53_record" "defaultssl" {
zone_id = aws_route53_zone.default.id
name = "_etcd-server-ssl._tcp.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns["domain_name"]}"
name = "_etcd-server-ssl._tcp.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns}"
type = "SRV"
ttl = "1"
records = formatlist("0 0 2380 %s", aws_route53_record.peers.*.name)
@@ -25,7 +25,7 @@ resource "aws_route53_record" "defaultssl" {
resource "aws_route53_record" "peers" {
count = var.cluster_size
zone_id = aws_route53_zone.default.id
name = "peer-${count.index}.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns["domain_name"]}"
name = "peer-${count.index}.${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns}"
type = "A"
ttl = "1"
records = ["198.51.100.${count.index}"]
4 changes: 2 additions & 2 deletions sg.tf
@@ -1,10 +1,10 @@
resource "aws_security_group" "default" {
name = "${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns["domain_name"]}"
name = "${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns}"
description = "ASG-${var.role}"
vpc_id = data.aws_vpc.target.id

tags = {
Name = "${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns["domain_name"]}"
Name = "${var.role}.${data.aws_region.current.name}.i.${var.environment}.${var.dns}"
role = var.role
environment = var.environment
}
11 changes: 4 additions & 7 deletions variables.tf
@@ -96,17 +96,14 @@ variable "allow_download_from_cidrs" {
}

variable "dns" {
type = map(string)

default = {
domain_name = "mycompany.int"
}

description = "Domain to install etcd"
type = string
default = "mycompany.int"
description = "Private, internal domain name to generate for etcd"
}

variable "key_pair_public_key" {
description = "Public key for SSH access"
default = ""
}

variable "cluster_size" {
