An AWS lambda to scan all regions for unattached EBS volumes and delete them.
This lambda will scan all regions active for the current account, looking for EBS volumes older than a specified age (default five minutes) that have a status of either `error` or `available` (aka 'unattached'). Any volumes with a `lambda-ebs-cleanup:ignore` tag set to `True` will be ignored. Once all matching volumes are found, deletes are issued for each.
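For orientation, the core logic amounts to something like the boto3 sketch below. This is a minimal illustration rather than the code shipped in this repo; the helper names (`clean_region`, `handler`, `min_age`) and the use of `CreateTime` to judge volume age are assumptions.

```python
from datetime import datetime, timedelta, timezone

import boto3

# Assumed defaults; the deployed lambda exposes the age threshold as a parameter.
MIN_AGE = timedelta(minutes=5)
IGNORE_TAG = "lambda-ebs-cleanup:ignore"


def clean_region(region, min_age=MIN_AGE):
    """Delete old, unattached EBS volumes in a single region."""
    ec2 = boto3.client("ec2", region_name=region)
    cutoff = datetime.now(timezone.utc) - min_age
    pages = ec2.get_paginator("describe_volumes").paginate(
        Filters=[{"Name": "status", "Values": ["error", "available"]}]
    )
    for page in pages:
        for volume in page["Volumes"]:
            tags = {t["Key"]: t["Value"] for t in volume.get("Tags", [])}
            if tags.get(IGNORE_TAG) == "True":
                continue  # volume explicitly opted out of cleanup
            if volume["CreateTime"] > cutoff:
                continue  # too new; may still be mid-provisioning
            ec2.delete_volume(VolumeId=volume["VolumeId"])


def handler(event, context):
    """Sweep every region enabled for the current account."""
    regions = boto3.client("ec2").describe_regions()["Regions"]
    for region in regions:
        clean_region(region["RegionName"])
```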
Contributions are welcome.
Install the following applications:
- pipenv
- pre-commit
Run `pipenv install --dev` to install both production and development requirements, and `pipenv shell` to activate the virtual environment. For more information see the pipenv docs.
After activating the virtual environment, run `pre-commit install` to install the pre-commit git hook.
First, make any needed updates to the base requirements in `Pipfile`, then use `pipenv` to regenerate both `Pipfile.lock` and `requirements.txt`.
$ pipenv update --dev
We use `pipenv` to control versions in testing, but `sam` relies on `requirements.txt` directly for building the lambda artifact, so we dynamically generate `requirements.txt` from `Pipfile.lock` before building the artifact. The file must be created in the `CodeUri` directory specified in `template.yaml`.
$ pipenv requirements > requirements.txt
Additionally, `pre-commit` manages its own requirements.
$ pre-commit autoupdate
Use a Lambda-like docker container to build the Lambda artifact:
$ sam build --use-container
Tests are defined in the `tests` folder in this project, and dependencies are managed with `pipenv`. Install the development dependencies and run the tests using `coverage`.
$ pipenv run coverage run -m pytest tests/ -svv
Automated testing will upload coverage results to Coveralls.
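As a reference point, a unit test for this kind of cleanup logic might look like the following sketch. It assumes `moto` is available as a development dependency and reuses the hypothetical `clean_region` helper from the sketch above; the real module and function names in this repo may differ.

```python
from datetime import timedelta

import boto3
from moto import mock_ec2  # moto < 5.0; newer releases expose moto.mock_aws

from ebs_cleanup.app import clean_region  # hypothetical module path


@mock_ec2
def test_available_volume_is_deleted():
    """An unattached volume older than the age threshold should be removed."""
    ec2 = boto3.client("ec2", region_name="us-east-1")
    vol_id = ec2.create_volume(AvailabilityZone="us-east-1a", Size=8)["VolumeId"]

    # A zero age threshold sidesteps the default five-minute grace period,
    # since the mocked volume was created just now.
    clean_region("us-east-1", min_age=timedelta(0))

    remaining = ec2.describe_volumes()["Volumes"]
    assert all(v["VolumeId"] != vol_id for v in remaining)
```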
Running integration tests requires `docker`:
$ sam local invoke EbsCleanupFunction --event events/event.json
Deployments are sent to the Sage cloudformation repository, which requires permissions to upload to the Sage `bootstrap-awss3cloudformationbucket-19qromfd235z9` and `essentials-awss3lambdaartifactsbucket-x29ftznj6pqw` buckets.
sam package --template-file .aws-sam/build/template.yaml \
--s3-bucket essentials-awss3lambdaartifactsbucket-x29ftznj6pqw \
--output-template-file .aws-sam/build/lambda-ebs-cleanup.yaml
aws s3 cp .aws-sam/build/lambda-ebs-cleanup.yaml s3://bootstrap-awss3cloudformationbucket-19qromfd235z9/lambda-ebs-cleanup/master/
Publishing the lambda makes it available in your AWS account. It will be accessible in the serverless application repository.
sam publish --template .aws-sam/build/lambda-ebs-cleanup.yaml
Making the lambda publicly accessible lists it in the global AWS Serverless Application Repository:
aws serverlessrepo put-application-policy \
--application-id <lambda ARN> \
--statements Principals=*,Actions=Deploy
Create the following sceptre file, `config/prod/lambda-ebs-cleanup.yaml`:
```yaml
template:
  type: http
  url: "https://bootstrap-awss3cloudformationbucket-19qromfd235z9.s3.amazonaws.com/lambda-ebs-cleanup/master/lambda-ebs-cleanup.yaml"
stack_name: "lambda-ebs-cleanup"
stack_tags:
  Department: "Platform"
  Project: "Infrastructure"
  OwnerEmail: "it@sagebase.org"
```
Install the lambda using sceptre:
sceptre --var "profile=my-profile" --var "region=us-east-1" launch prod/lambda-ebs-cleanup.yaml
Steps to deploy from the AWS console:
- Log in to AWS
- Access the serverless application repository -> Available Applications
- Select the application to install
- Enter the application settings
- Click Deploy
We have set up our CI to automate releases. To kick off the process, create a tag matching the current version in `template.yaml` and push it to the repo (e.g. `git tag 0.0.1 && git push origin 0.0.1`). Our CI will do the work of deploying and publishing the lambda.
By default the lambda is scheduled to run daily at 2AM. This can be configured with the `Schedule` parameter, which accepts the CloudWatch Events schedule expression format (e.g. `cron(0 2 * * ? *)` or `rate(1 day)`).
Once a released template is deployed as a cloudformation stack, locate the `EbsCleanupApi` output of the stack and make a simple GET request to the URL; for example:
curl https://RANDOM_STRING.execute-api.us-east-1.amazonaws.com/Prod/clean