
Releases: cloudposse/testing.cloudposse.co

0.7.0 Bump `geodesic` and `terraform` versions

29 Sep 01:11
4dfd5c5

what

  • Bump geodesic and terraform versions

why

  • Keep the modules up to date
  • The latest version of the AWS CLI is required for EKS cluster authentication (it is included in the latest geodesic version)
  • Terraform 0.12 is required for the EKS cluster (a quick check is sketched just below)
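
A quick way to confirm the new toolchain from inside the geodesic shell (a sketch; the cluster name is the one that appears in the EKS outputs further down and may differ per account):

# the AWS CLI must be new enough to provide the EKS authentication subcommand
aws --version
aws eks get-token --cluster-name cpco-testing-eks-cluster

# Terraform 0.12 is required by the EKS projects
terraform version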

0.6.0 Update `atlantis`. Update VPC and subnets modules for `atlantis`

06 Jun 16:49
84fbe00

what

  • Update atlantis
  • Update VPC and subnets modules for atlantis

why

  • Use the latest atlantis server version 0.8.0
  • Allow users to choose whether NAT Gateways or NAT Instances are deployed into the public subnets so that servers in the private subnets can access the Internet
  • In many cases NAT Instances are cheaper than NAT Gateways, so for some use cases (e.g. testing/demo infrastructure) they are the more appropriate choice to save on cost (see the sketch below the list)
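
A minimal sketch of toggling the NAT type, assuming the VPC/subnets module exposes inputs along the lines of nat_gateway_enabled and nat_instance_enabled (check the module's variables for the exact names) and that they are passed to Terraform as TF_VAR_* environment variables in geodesic:

# prefer cheaper NAT Instances over NAT Gateways in this testing account
# (variable names are assumptions; see the subnets module inputs for the real ones)
export TF_VAR_nat_gateway_enabled=false
export TF_VAR_nat_instance_enabled=true
terraform plan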


0.5.0: rename pipeline (#78)

04 Jun 02:28
b4ca97e
* rename pipeline

* redo pipelines

* redo build pipeline

* do not build kops manifest here

* Update badge

* fix typo

* Update codefresh/docker/build.yaml

Co-Authored-By: Andriy Knysh <[email protected]>

* Update codefresh/docker/release.yaml

Co-Authored-By: Andriy Knysh <[email protected]>

0.4.0: Upgrade geodesic to 0.114.0 (#77)

03 Jun 21:41
b0d14cc
* upgrade geodesic

* Update readme

0.0.0-test4

03 Jun 22:48
redo build pipeline

0.0.0-test3

30 Jan 16:57
Add override vars for backwards compatibility

These vars are needed for backwards compat with currently provisioned
$stage.cloudposse.co resources, where we didn’t build the parent_domain
name via $STAGE.$NAMESPACE as we have with newer accounts e.g. evenco
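
For illustration only, an override of this kind is typically passed as a TF_VAR_* environment variable so the legacy parent domain is pinned explicitly rather than derived from $STAGE.$NAMESPACE (the variable name below is hypothetical):

# hypothetical override: keep the already-provisioned parent domain instead of deriving it
export TF_VAR_parent_domain_name=testing.cloudposse.co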

0.0.0-test2

30 Jan 13:20
Add override vars for backwards compatibility

These vars are needed for backwards compat with currently provisioned
$stage.cloudposse.co resources, where we didn’t build the parent_domain
name via $STAGE.$NAMESPACE as we have with newer accounts e.g. evenco

0.0.0-test1

17 Jan 12:45
pin to latest release of geodesic

0.3.0

14 Jan 10:53

Upscale cluster to 3 nodes

0.2.0

01 Oct 18:05
c29f7c6

what

  • Bump module versions
  • Add eks module

why

  • The new geodesic version includes kubectl 1.10 and supports Kubernetes 1.10
  • The new terraform-root-modules version includes the EKS module
  • The new helmfiles version fixes the fluentd chart

provision EKS cluster
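
The outputs below were captured after applying the eks project. A rough sketch of the session inside geodesic (the project path is an assumption; any Makefile wrappers the project provides would work just as well):

# inside the geodesic shell, from the eks project directory (path is an assumption)
cd /conf/eks
terraform init
terraform plan
terraform apply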

Outputs:

config_map_aws_auth = # The EKS service does not provide a cluster-level API parameter or resource to automatically configure the underlying Kubernetes cluster to allow worker nodes to join the cluster via AWS IAM role authentication.
# This is a Kubernetes ConfigMap configuration for worker nodes to join the cluster
# https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html#required-kubernetes-configuration-to-join-worker-nodes

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::xxxxxxxxxxxxx:role/cpco-testing-eks-workers
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

eks_cluster_arn = arn:aws:eks:us-west-2:xxxxxxxxxxxxx:cluster/cpco-testing-eks-cluster
eks_cluster_certificate_authority_data = xxxxxxxxxxxxx=
eks_cluster_endpoint = https://xxxxxxxxxxxxx.sk1.us-west-2.eks.amazonaws.com
eks_cluster_id = cpco-testing-eks-cluster
eks_cluster_security_group_arn = arn:aws:ec2:us-west-2:xxxxxxxxxxxxx:security-group/sg-xxxxxxxxxxxxx
eks_cluster_security_group_id = sg-xxxxxxxxxxxxx
eks_cluster_security_group_name = cpco-testing-eks-cluster
eks_cluster_version = 1.10

kubeconfig = apiVersion: v1
kind: Config
preferences: {}

clusters:
- cluster:
    server: https://xxxxxxxxxxxxx.sk1.us-west-2.eks.amazonaws.com
    certificate-authority-data: xxxxxxxxxxxxx=
  name: cpco-testing-eks-cluster

contexts:
- context:
    cluster: cpco-testing-eks-cluster
    user: cpco-testing-eks-cluster
  name: cpco-testing-eks-cluster

current-context: cpco-testing-eks-cluster

users:
- name: cpco-testing-eks-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "cpco-testing-eks-cluster"

workers_autoscaling_group_arn = arn:aws:autoscaling:us-west-2:xxxxxxxxxxxxx:autoScalingGroup:f9c697c8-4d73-4645-85b7-90743c91fadb:autoScalingGroupName/cpco-testing-eks-20180926045735083000000008
workers_autoscaling_group_default_cooldown = 300
workers_autoscaling_group_desired_capacity = 2
workers_autoscaling_group_health_check_grace_period = 300
workers_autoscaling_group_health_check_type = EC2
workers_autoscaling_group_id = cpco-testing-eks-20180926045735083000000008
workers_autoscaling_group_max_size = 3
workers_autoscaling_group_min_size = 2
workers_autoscaling_group_name = cpco-testing-eks-20180926045735083000000008
workers_launch_template_arn = arn:aws:ec2:us-west-2:xxxxxxxxxxxxx:launch-template/lt-xxxxxxxxxxxxx
workers_launch_template_id = lt-xxxxxxxxxxxxx
workers_security_group_arn = arn:aws:ec2:us-west-2:xxxxxxxxxxxxx:security-group/sg-xxxxxxxxxxxxx
workers_security_group_id = sg-xxxxxxxxxxxxx
workers_security_group_name = cpco-testing-eks-workers
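
To reach the kubectl session below, the kubeconfig and config_map_aws_auth outputs are first written to files and the aws-auth ConfigMap is applied so the worker nodes can register (a sketch; only the kubeconfig file name is taken from the commands that follow, the ConfigMap file name is arbitrary):

# save the generated kubeconfig and the aws-auth ConfigMap from the Terraform outputs
terraform output kubeconfig > kubeconfig-cpco-testing-eks-cluster.yaml
terraform output config_map_aws_auth > config-map-aws-auth.yaml

# allow the worker nodes to join the cluster via their IAM role
kubectl apply -f config-map-aws-auth.yaml --kubeconfig kubeconfig-cpco-testing-eks-cluster.yaml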

✓   eks ⨠  kubectl get nodes  --kubeconfig kubeconfig-cpco-testing-eks-cluster.yaml
NAME                                           STATUS   ROLES    AGE   VERSION
ip-172-30-133-246.us-west-2.compute.internal   Ready    <none>   26s   v1.10.3
ip-172-30-161-108.us-west-2.compute.internal   Ready    <none>   21s   v1.10.3
✓  eks ⨠  kubectl get pods --all-namespaces --kubeconfig kubeconfig-cpco-testing-eks-cluster.yaml
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   aws-node-59z26             1/1     Running   1          1h
kube-system   aws-node-6nzhf             1/1     Running   1          1h
kube-system   kube-dns-7cc87d595-qhshm   3/3     Running   0          2d
kube-system   kube-proxy-2dx4f           1/1     Running   0          1h
kube-system   kube-proxy-jlpf5           1/1     Running   0          1h