AWS Infrastructure

These roles help you manage assets in AWS. Because they are AWS specific, you should not try to use them in a non-AWS environment. They also expect some specific variables to exist (illustrated in the example after this list):

  • _aws_region
  • _aws_profile
  • _aws_resource_name
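
A minimal sketch of how these might be set, for example in a group_vars file in your config repository (all values below are illustrative, not defaults):

# Illustrative values only - adjust to your own AWS account and naming.
_aws_region: eu-west-1
_aws_profile: example
_aws_resource_name: web1-example-com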

Hosts and groups handling

These roles assume you use the AWS EC2 inventory plugin (amazon.aws.aws_ec2) to build inventory automatically.

This can be loaded via your ansible.cfg file in your config repository.
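
A minimal sketch of the relevant ansible.cfg settings, assuming your inventory lives in a hosts directory (the paths are illustrative):

[defaults]
inventory = ./hosts

[inventory]
# Allow the amazon.aws.aws_ec2 plugin to parse hosts/aws_ec2.yml
enable_plugins = amazon.aws.aws_ec2, yaml, ini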

You should place a file called aws_ec2.yml in the hosts directory of your config repository. Our standard file looks like this:

plugin: amazon.aws.aws_ec2
filters:
  tag:Ansible: managed
keyed_groups:
  - key: tags.Name
    prefix: ""
  - key: tags.Env
    prefix: ""
  - key: tags.Profile
    prefix: ""
  - key: tags.Infra
    prefix: ""

How it works

The plugin loads all EC2 instances that are tagged with Ansible: managed and then groups them by the tags Name, Env, Profile and Infra. Any hyphens in tag values are automatically converted to underscores, and the prefixing convention is taken from the default behaviour of the ansible.builtin.constructed plugin - see in particular the leading_separator parameter in its documentation.

Consequently, because we group all infra by the Name tag, our inventory will always contain a group named after each machine, prefixed with an underscore. For example, the server named web1-example-com would end up in a group of one instance like this:

  |--@_web1_example_com:
  |  |--ec2-1-112-233-9.eu-west-1.compute.amazonaws.com

In this way we can act on a specific host or group of hosts by invoking its unique group. For example, you can use a line like this at the top of your infrastructure plays to load the target(s) by group name:

- hosts: "_{{ _aws_resource_name | regex_replace('-', '_') }}"
  become: true

Debugging and viewing hosts

You can view a graph of the default inventory from the command line of a controller, when logged in as the ce-provision user (usually controller), with a command like this:

ansible-inventory -i ~/ce-provision/hosts/aws_ec2.yml --graph

If you want to see the inventory for another boto profile, you need to set the AWS_PROFILE environment variable. For example, this would graph the acme profile's inventory:

AWS_PROFILE=acme ansible-inventory -i ~/ce-provision/hosts/aws_ec2.yml --graph

You will note there are other groupings too. For example, you can target all the _prod infrastructure because there is also a grouping against the Env tag, or all the _web servers because instances are also grouped by the Profile tag, and so on.
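
For example, assuming servers tagged with Env: prod and Profile: web, plays targeting those wider groups might begin like this (the group names are illustrative):

# Act on every server tagged Env: prod, regardless of infra.
- hosts: _prod
  become: true

# Act on every server tagged Profile: web.
- hosts: _web
  become: true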

Unmanaged infra

If you want a host that is not tagged with Ansible: managed in AWS, or indeed not in AWS at all, to be "known" to Ansible, you need to add it to hosts.yml in your config repo.
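
A minimal sketch of such an entry in hosts.yml, using the standard YAML inventory format (the hostname, group and connection variable are illustrative):

all:
  children:
    _legacy_example_com:
      hosts:
        legacy.example.com:
          ansible_user: ansible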

Using group_vars

Once you understand this, the group_vars directory within your config repository starts to make sense. You can set variables that apply to any group the inventory plugin creates automatically. For example, if you have a test infrastructure called test, you can have a hosts/group_vars/_test folder containing variables which apply to every server in the test infra and take precedence over the defaults, which you can define in hosts/group_vars/all. Similarly, we might have a _production folder containing variables for every server tagged with a production environment, regardless of infra.
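
As an illustration, the relevant part of a config repository might be laid out like this (the directory names are examples):

hosts/
  aws_ec2.yml          # the inventory plugin config shown above
  group_vars/
    all/               # defaults applied to every host
    _test/             # variables for everything in the test infra
    _production/       # variables for everything tagged with a production Env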

You can play with tags in your plugin config to create the combinations and groupings you need.
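
For example, if you added a hypothetical Role tag to your instances, you could have the plugin build matching groups by extending keyed_groups in aws_ec2.yml like this:

keyed_groups:
  - key: tags.Role
    prefix: ""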

Connection types

There are two different patterns for acting on AWS infrastructure. When you are connecting to an existing server and manipulating the standard packages, just as you would with any other server, you can make your playbook start like this for auto-discovery:

- hosts: "_{{ _aws_resource_name | regex_replace('-', '_') }}"
  become: true

However, when you are building AWS infrastructure and manipulating things via the AWS API, most of your actions need to occur on the controller, because your individual servers do not have the AWS API credentials. To achieve this, while retaining the necessary group variables, we use this pattern:

- hosts: "_{{ _aws_resource_name | regex_replace('-', '_') }}"
  connection: local
  become: false

The last two lines are very important: connection: local tells Ansible to stay on the controller, and become: false tells it to stay as the controller user, which has the AWS credentials available to it.

If you need to carry out tasks on the remote server(s) during an AWS infrastructure build, you will need to set connection: ssh at the task level so the action occurs on the intended target.
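
A sketch of what that might look like inside an infrastructure play (the module, package and task name are illustrative):

- name: Install a package on the newly built instance rather than the controller.
  ansible.builtin.apt:
    name: nginx
    state: present
  connection: ssh
  become: true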