Terraform module to generate well-formed JSON documents that are passed to the aws_ecs_task_definition Terraform resource as container definitions.
This project is part of our comprehensive "SweetOps" approach towards DevOps.
It's 100% Open Source and licensed under the APACHE2.
We literally have hundreds of terraform modules that are Open Source and well-maintained. Check them out!
IMPORTANT: The master branch is used in source just as an example. In your code, do not pin to master because there may be breaking changes between releases. Instead, pin to the release tag (e.g. ?ref=tags/x.y.z) of one of our latest releases.
This module is meant to be used for its outputs only: it generates container definition JSON that is consumed as a parameter by other Terraform resources or modules, most commonly the aws_ecs_task_definition resource.
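As a minimal sketch of that pattern (the release tag x.y.z, the container name, and the image below are placeholders rather than values taken from this repo), the module's output can be passed directly to aws_ecs_task_definition:

```hcl
# Minimal sketch: generate a container definition and feed it to a task definition.
# The ?ref=tags/x.y.z pin and the name/image values are illustrative placeholders.
module "container_definition" {
  source = "git::https://github.com/cloudposse/terraform-aws-ecs-container-definition.git?ref=tags/x.y.z"

  container_name  = "app"
  container_image = "nginx:latest"
}

resource "aws_ecs_task_definition" "default" {
  family                   = "app"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"

  # json_map_encoded_list is already a JSON-encoded list of container definitions
  container_definitions = module.container_definition.json_map_encoded_list
}
```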
For a complete example with automated tests, see examples/complete; the example is tested with bats and Terratest.
Available targets:

  help          Help screen
  help/all      Display help for all targets
  help/short    This help short screen
  lint          Lint terraform code
Name | Version |
---|---|
terraform | >= 0.12.0, < 0.14.0 |
local | ~> 1.2 |

No provider.
Name | Description | Type | Default | Required |
---|---|---|---|---|
command | The command that is passed to the container | list(string) | null | no |
container_cpu | The number of CPU units to reserve for the container. This is optional for tasks using the Fargate launch type, and the total amount of container_cpu of all containers in a task will need to be lower than the task-level cpu value | number | 0 | no |
container_definition | Container definition overrides which allows for extra keys or overriding existing keys | map | {} | no |
container_depends_on | The dependencies defined for container startup and shutdown. A container can contain multiple dependencies. When a dependency is defined for container startup, for container shutdown it is reversed. The condition can be one of START, COMPLETE, SUCCESS or HEALTHY | list(object({…})) | null | no |
container_image | The image used to start the container. Images in the Docker Hub registry are available by default | string | n/a | yes |
container_memory | The amount of memory (in MiB) to allow the container to use. This is a hard limit: if the container attempts to exceed the container_memory, the container is killed. This field is optional for the Fargate launch type, and the total amount of container_memory of all containers in a task will need to be lower than the task memory value | number | null | no |
container_memory_reservation | The amount of memory (in MiB) to reserve for the container. If the container needs to exceed this threshold, it can do so up to the set container_memory hard limit | number | null | no |
container_name | The name of the container. Up to 255 characters ([a-z], [A-Z], [0-9], -, _ allowed) | string | n/a | yes |
dns_search_domains | Container DNS search domains. A list of DNS search domains that are presented to the container | list(string) | null | no |
dns_servers | Container DNS servers. This is a list of strings specifying the IP addresses of the DNS servers | list(string) | null | no |
docker_labels | The configuration options to send to the docker_labels | map(string) | null | no |
entrypoint | The entry point that is passed to the container | list(string) | null | no |
environment | The environment variables to pass to the container. This is a list of maps. map_environment overrides environment | list(object({…})) | [] | no |
environment_files | One or more files containing the environment variables to pass to the container. This maps to the --env-file option to docker run. The file must be hosted in Amazon S3. This option is only available to tasks using the EC2 launch type. This is a list of maps | list(object({…})) | null | no |
essential | Determines whether all other containers in a task are stopped if this container fails or stops for any reason. Due to how Terraform type casts booleans in JSON, it is required to double quote this value | bool | true | no |
extra_hosts | A list of hostnames and IP address mappings to append to the /etc/hosts file on the container. This is a list of maps | list(object({…})) | null | no |
firelens_configuration | The FireLens configuration for the container. This is used to specify and configure a log router for container logs. For more details, see https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_FirelensConfiguration.html | object({…}) | null | no |
healthcheck | A map containing command (string), timeout, interval (duration in seconds), retries (1-10, number of times to retry before marking the container unhealthy), and startPeriod (0-300, optional grace period to wait, in seconds, before failed healthchecks count toward retries) | object({…}) | null | no |
links | List of container names this container can communicate with without port mappings | list(string) | null | no |
linux_parameters | Linux-specific modifications that are applied to the container, such as Linux kernel capabilities. For more details, see https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_LinuxParameters.html | object({…}) | null | no |
log_configuration | Log configuration options to send to a custom log driver for the container. For more details, see https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_LogConfiguration.html | any | null | no |
map_environment | The environment variables to pass to the container. This is a map of strings: {key: value}. map_environment overrides environment | map(string) | null | no |
mount_points | Container mount points. This is a list of maps, where each map should contain a containerPath and sourceVolume. The readOnly key is optional | list | [] | no |
port_mappings | The port mappings to configure for the container. This is a list of maps. Each map should contain "containerPort", "hostPort", and "protocol", where "protocol" is one of "tcp" or "udp". If using containers in a task with the awsvpc or host network mode, the hostPort can either be left blank or set to the same value as the containerPort | list(object({…})) | [] | no |
privileged | When this variable is true, the container is given elevated privileges on the host container instance (similar to the root user). This parameter is not supported for Windows containers or tasks using the Fargate launch type | bool | null | no |
readonly_root_filesystem | Determines whether a container is given read-only access to its root filesystem. Due to how Terraform type casts booleans in JSON, it is required to double quote this value | bool | false | no |
repository_credentials | Container repository credentials; required when using a private repo. This map currently supports a single key, "credentialsParameter", which should be the ARN of a Secrets Manager secret holding the credentials | map(string) | null | no |
secrets | The secrets to pass to the container. This is a list of maps | list(object({…})) | null | no |
start_timeout | Time duration (in seconds) to wait before giving up on resolving dependencies for a container | number | null | no |
stop_timeout | Time duration (in seconds) to wait before the container is forcefully killed if it doesn't exit normally on its own | number | null | no |
system_controls | A list of namespaced kernel parameters to set in the container, mapping to the --sysctl option to docker run. This is a list of maps: { namespace = "", value = "" } | list(map(string)) | null | no |
ulimits | Container ulimit settings. This is a list of maps, where each map should contain "name", "hardLimit" and "softLimit" | list(object({…})) | null | no |
user | The user to run as inside the container. Can be any of these formats: user, user:group, uid, uid:gid, user:gid, uid:group. The default (null) will use the container's configured USER directive or root if not set | string | null | no |
volumes_from | A list of VolumesFrom maps which contain "sourceContainer" (name of the container that has the volumes to mount) and "readOnly" (whether the container can write to the volume) | list(object({…})) | [] | no |
working_directory | The working directory to run commands inside the container | string | null | no |
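To make the shapes of the object-typed inputs above more concrete, here is a hedged sketch using placeholder values; the key names inside environment, port_mappings, and log_configuration follow the descriptions in the table and the linked ECS container definition documentation, not values taken from this repo:

```hcl
# Illustrative values only; pin source to a real release tag as described above.
module "container_definition" {
  source = "git::https://github.com/cloudposse/terraform-aws-ecs-container-definition.git?ref=tags/x.y.z"

  container_name  = "app"
  container_image = "myorg/app:1.2.3"

  # environment is a list of name/value pairs (map_environment would override it)
  environment = [
    {
      name  = "APP_ENV"
      value = "production"
    }
  ]

  # Each port mapping carries containerPort, hostPort and protocol
  port_mappings = [
    {
      containerPort = 8080
      hostPort      = 8080
      protocol      = "tcp"
    }
  ]

  # log_configuration is typed "any"; this shape matches the awslogs log driver
  log_configuration = {
    logDriver = "awslogs"
    options = {
      "awslogs-group"         = "/ecs/app"
      "awslogs-region"        = "us-east-1"
      "awslogs-stream-prefix" = "app"
    }
  }
}
```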
Name | Description |
---|---|
json_map_encoded | JSON string encoded container definitions for use with other terraform resources such as aws_ecs_task_definition |
json_map_encoded_list | JSON string encoded list of container definitions for use with other terraform resources such as aws_ecs_task_definition |
json_map_object | JSON map encoded container definition |
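When a task needs more than one container, one pattern (sketched here on the assumption that json_map_object exposes the definition as a plain Terraform object, while the two encoded outputs are already JSON strings) is to instantiate this module once per container and encode the combined list yourself. module.app and module.sidecar below are hypothetical instances of this module:

```hcl
# Sketch: combine two container definitions (from two instances of this module)
# into one task definition. jsonencode is only needed for the object output;
# json_map_encoded_list can be used directly for a single-container task.
resource "aws_ecs_task_definition" "multi" {
  family = "app-with-sidecar"

  container_definitions = jsonencode([
    module.app.json_map_object,
    module.sidecar.json_map_object,
  ])
}
```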
Like this project? Please give it a ★ on our GitHub! (it helps us a lot)
Are you using this project or any of our other projects? Consider leaving a testimonial. =)
Check out these related projects.
- terraform-aws-ecs-codepipeline - Terraform module for CI/CD with AWS Code Pipeline and Code Build for ECS
- terraform-aws-ecs-events - Provides a standard set of ECS events that notify an SNS topic
- terraform-aws-ecs-cloudwatch-autoscaling - Terraform module to autoscale ECS Service based on CloudWatch metrics
- terraform-aws-ecs-container-definition - Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource
- terraform-aws-ecs-launch-template - Terraform module for generating an AWS Launch Template for ECS that handles draining on Spot Termination Requests
- terraform-aws-ecs-web-app - Terraform module that implements a web app on ECS and supporting AWS resources
- terraform-aws-ecs-spot-fleet - Terraform module to create a diversified spot fleet for ECS clusters
- terraform-aws-ecs-cloudwatch-sns-alarms - Terraform module to create CloudWatch Alarms on ECS Service level metrics
- terraform-aws-ecs-alb-service-task - Terraform module which implements an ECS service which exposes a web service via ALB
Got a question? We got answers.
File a GitHub issue, send us an email or join our Slack Community.
We are a DevOps Accelerator. We'll help you build your cloud infrastructure from the ground up so you can own it. Then we'll show you how to operate it and stick around for as long as you need us.
Work directly with our team of DevOps experts via email, slack, and video conferencing.
We deliver 10x the value for a fraction of the cost of a full-time engineer. Our track record is not even funny. If you want things done right and you need it done FAST, then we're your best bet.
- Reference Architecture. You'll get everything you need from the ground up built using 100% infrastructure as code.
- Release Engineering. You'll have end-to-end CI/CD with unlimited staging environments.
- Site Reliability Engineering. You'll have total visibility into your apps and microservices.
- Security Baseline. You'll have built-in governance with accountability and audit logs for all changes.
- GitOps. You'll be able to operate your infrastructure via Pull Requests.
- Training. You'll receive hands-on training so your team can operate what we build.
- Questions. You'll have a direct line of communication between our teams via a Shared Slack channel.
- Troubleshooting. You'll get help to triage when things aren't working.
- Code Reviews. You'll receive constructive feedback on Pull Requests.
- Bug Fixes. We'll rapidly work with you to fix any bugs in our projects.
Join our Open Source Community on Slack. It's FREE for everyone! Our "SweetOps" community is where you get to talk with others who share a similar vision for how to rollout and manage infrastructure. This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build totally sweet infrastructure.
Participate in our Discourse Forums. Here you'll find answers to commonly asked questions. Most questions will be related to the enormous number of projects we support on our GitHub. Come here to collaborate on answers, find solutions, and get ideas about the products and services we value. It only takes a minute to get started! Just sign in with SSO using your GitHub account.
Sign up for our newsletter that covers everything on our technology radar. Receive updates on what we're up to on GitHub as well as awesome new projects we discover.
Join us every Wednesday via Zoom for our weekly "Lunch & Learn" sessions. It's FREE for everyone!
Please use the issue tracker to report any bugs or file feature requests.
If you are interested in being a contributor and want to get involved in developing this project or help out with our other projects, we would love to hear from you! Shoot us an email.
In general, PRs are welcome. We follow the typical "fork-and-pull" Git workflow.
- Fork the repo on GitHub
- Clone the project to your own machine
- Commit changes to your own branch
- Push your work back up to your fork
- Submit a Pull Request so that we can review your changes
NOTE: Be sure to merge the latest changes from "upstream" before making a pull request!
Copyright © 2017-2020 Cloud Posse, LLC
See LICENSE for full details.
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
All other trademarks referenced herein are the property of their respective owners.
This project is maintained and funded by Cloud Posse, LLC. Like it? Please let us know by leaving a testimonial!
We're a DevOps Professional Services company based in Los Angeles, CA. We ❤️ Open Source Software.
We offer paid support on all of our projects.
Check out our other projects, follow us on twitter, apply for a job, or hire us to help with your cloud strategy and implementation.
Erik Osterman | Sarkis Varozian | Andriy Knysh | Igor Rodionov |
---|---|---|---|