The S3Objects macro adds a new resource type, AWS::S3::Object, which you can use to populate an S3 bucket.
You can either create new S3 objects or copy objects from other buckets that you have permissions to access.
As with any other CloudFormation resource, if you delete a stack containing S3 objects defined with this macro, those objects will be deleted.
A typical use case for this macro is populating an S3 website with static assets.
Contributions are welcome.
Install the following applications:
Run pipenv install --dev to install both production and development requirements, and pipenv shell to activate the virtual environment. For more information, see the pipenv docs.
After activating the virtual environment, run pre-commit install to install the pre-commit git hook.
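In short, the initial setup looks like this (assuming pipenv and pre-commit are already installed):

$ pipenv install --dev    # install production and development requirements
$ pipenv shell            # activate the virtual environment
$ pre-commit install      # install the pre-commit git hook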
First, make any needed updates to the base requirements in Pipfile, then use pipenv to regenerate both Pipfile.lock and requirements.txt.
$ pipenv update --dev
We use pipenv to control versions in testing, but sam relies on requirements.txt directly for building the Lambda artifact, so we dynamically generate requirements.txt from Pipfile.lock before building the artifact. The file must be created in the CodeUri directory specified in template.yaml.
$ pipenv requirements > s3objects/requirements.txt
Additionally, pre-commit manages its own requirements.
$ pre-commit autoupdate
Use a Lambda-like Docker container to build the Lambda artifact:
$ sam build --use-container
Tests are defined in the tests folder in this project, and dependencies are managed with pipenv. Install the development dependencies and run the tests using coverage.
$ pipenv run coverage run -m pytest tests/ -svv
Automated testing will upload coverage results to Coveralls.
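Coveralls handles the hosted report; if you want to inspect coverage locally first, one option (not part of the documented workflow) is:

$ pipenv run coverage report -m    # per-file coverage with missing line numbers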
- You will need an S3 bucket to store the CloudFormation artifacts. If you don't have one already, create one with:

    aws s3 mb s3://<bucket name>

- Package the CloudFormation template. The provided template uses the AWS Serverless Application Model, so it must be transformed before you can deploy it.

    aws cloudformation package \
        --template-file macro.template \
        --s3-bucket <your bucket name here> \
        --output-template-file packaged.template

- Deploy the packaged CloudFormation template to a CloudFormation stack:

    aws cloudformation deploy \
        --stack-name s3objects-macro \
        --template-file packaged.template \
        --capabilities CAPABILITY_IAM

- To test out the macro's capabilities, try launching the provided example template:

    aws cloudformation deploy \
        --stack-name s3objects-macro-example \
        --template-file example.template \
        --capabilities CAPABILITY_IAM
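As an optional sanity check (not part of the documented steps), you could look up the bucket created by the example stack and list the objects the macro added; the exact resource names depend on the example template:

    aws cloudformation describe-stack-resources \
        --stack-name s3objects-macro-example
    aws s3 ls s3://<bucket name from the output above>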
To make use of the macro, add Transform: S3Objects to the top level of your CloudFormation template.
Here is a trivial example template:
Transform: S3Objects
Resources:
  Bucket:
    Type: AWS::S3::Bucket

  Object:
    Type: AWS::S3::Object
    Properties:
      Target:
        Bucket: !Ref Bucket
        Key: README.md
        ContentType: text/plain
      Body: Hello, world!
To create a new S3 object, add an AWS::S3::Object resource to your template and specify the Target and Body properties. For example:
NewObject:
  Type: AWS::S3::Object
  Properties:
    Target:
      Bucket: !Ref TargetBucket
      Key: README.md
    Body: |
      # My text file
      This is my text file;
      there are many like it,
      but this one is mine.
The Target property has the following sub-properties:

- Bucket (REQUIRED): The name of the bucket that will store the new object
- Key (REQUIRED): The location within the bucket
- ACL (OPTIONAL - Default private): Sets a canned ACL for the new object

The following sub-properties also apply if you are creating a new object (but not if you are copying an object from another S3 bucket):

- ContentType (OPTIONAL): Sets a custom content type for the new object
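As an illustration of how these optional sub-properties combine on a new object (the resource name, key, and body below are invented for this example; TargetBucket is assumed to be defined elsewhere in your template):

PublicPage:
  Type: AWS::S3::Object
  Properties:
    Target:
      Bucket: !Ref TargetBucket   # assumed to be defined elsewhere in the template
      Key: hello.html
      ACL: public-read            # canned ACL; the default is private
      ContentType: text/html
    Body: <html><body><h1>Hello, world!</h1></body></html>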
The Body property simply takes a string, which will be used to populate the new object.
You can create a binary file by using the Base64Body property and supplying your content base64-encoded. For example:
SinglePixel:
  Type: AWS::S3::Object
  Properties:
    Target:
      Bucket: !Ref TargetBucket
      Key: 1pixel.gif
    Base64Body: R0lGODdhAQABAIABAP///0qIbCwAAAAAAQABAAACAkQBADs=
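If you need to produce the Base64Body value for a local file, one way (a suggestion; exact flags differ between platforms) is to encode it on the command line:

$ base64 -w0 1pixel.gif    # GNU coreutils; on macOS, plain "base64 1pixel.gif" already emits a single line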
To copy an S3 object, you need to specify the Source property as well as the Target. For example:
CopiedObject:
  Type: AWS::S3::Object
  Properties:
    Source:
      Bucket: !Ref SourceBucket
      Key: index.html
    Target:
      Bucket: !Ref TargetBucket
      Key: index.html
      ACL: public-read
The Source property has the following sub-properties:

- Bucket (REQUIRED): The bucket to copy from
- Key (REQUIRED): The key of the S3 object that will be copied
Steve Engledow, Senior Solutions Builder, Amazon Web Services