# Docker image with Alpine Linux, mongodump and awscli for backing up a MongoDB database to S3

This project is a fork of Drivetech/mongodump-s3, which is no longer maintained. This fork adds a number of features, most notably:
- Slack notification support
- Support for any S3-compatible storage endpoint
- Support for MongoDB 4.x versions
- Tests backed by GitHub Actions to ensure the backups actually work
NOTE: You can back up to any S3-compatible storage, not just AWS S3. Set the environment variable `AWS_S3_ENDPOINT=<s3-endpoint>` to specify the S3 endpoint you want to use.
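For example, a backup to a self-hosted MinIO server might look like the sketch below. The endpoint URL, credentials, and bucket name are placeholders for illustration, not defaults of this image:

```bash
docker run -d --name mongodump \
  -e "MONGO_URI=mongodb://user:pass@host:port/dbname" \
  -e "AWS_ACCESS_KEY_ID=your_minio_access_key" \
  -e "AWS_SECRET_ACCESS_KEY=your_minio_secret_key" \
  -e "AWS_DEFAULT_REGION=us-east-1" \
  -e "S3_BUCKET=your_bucket" \
  -e "AWS_S3_ENDPOINT=https://minio.example.com:9000" \
  vikasy/mongodump-s3
```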
## Run every day at 2 am

```bash
docker run -d --name mongodump \
  -e "MONGO_URI=mongodb://user:pass@host:port/dbname" \
  -e "AWS_ACCESS_KEY_ID=your_aws_access_key" \
  -e "AWS_SECRET_ACCESS_KEY=your_aws_secret_access_key" \
  -e "AWS_DEFAULT_REGION=us-west-1" \
  -e "S3_BUCKET=your_aws_bucket" \
  -e "BACKUP_CRON_SCHEDULE=0 2 * * *" \
  vikasy/mongodump-s3
```
## Run every day at 2 am with a full MongoDB backup

```bash
docker run -d --name mongodump \
  -e "MONGO_URI=mongodb://user:pass@host:port/dbname" \
  -e "AWS_ACCESS_KEY_ID=your_aws_access_key" \
  -e "AWS_SECRET_ACCESS_KEY=your_aws_secret_access_key" \
  -e "AWS_DEFAULT_REGION=us-west-1" \
  -e "S3_BUCKET=your_aws_bucket" \
  -e "BACKUP_CRON_SCHEDULE=0 2 * * *" \
  -e "MONGO_COMPLETE=true" \
  vikasy/mongodump-s3
```
## Run every day at 2 am with a full MongoDB backup, keeping the last 5 backups

```bash
docker run -d --name mongodump \
  -v /tmp/backup:/backup \
  -e "MONGO_URI=mongodb://user:pass@host:port/dbname" \
  -e "AWS_ACCESS_KEY_ID=your_aws_access_key" \
  -e "AWS_SECRET_ACCESS_KEY=your_aws_secret_access_key" \
  -e "AWS_DEFAULT_REGION=us-west-1" \
  -e "S3_BUCKET=your_aws_bucket" \
  -e "BACKUP_CRON_SCHEDULE=0 2 * * *" \
  -e "MONGO_COMPLETE=true" \
  -e "MAX_BACKUPS=5" \
  vikasy/mongodump-s3
```
## Run without a cron schedule

```bash
docker run -d --name mongodump \
  -e "MONGO_URI=mongodb://user:pass@host:port/dbname" \
  -e "AWS_ACCESS_KEY_ID=your_aws_access_key" \
  -e "AWS_SECRET_ACCESS_KEY=your_aws_secret_access_key" \
  -e "AWS_DEFAULT_REGION=us-west-1" \
  -e "S3_BUCKET=your_aws_bucket" \
  vikasy/mongodump-s3
```
## Run with Slack notifications

```bash
docker run -d --name mongodump \
  -e "MONGO_URI=mongodb://user:pass@host:port/dbname" \
  -e "AWS_ACCESS_KEY_ID=your_aws_access_key" \
  -e "AWS_SECRET_ACCESS_KEY=your_aws_secret_access_key" \
  -e "AWS_DEFAULT_REGION=us-west-1" \
  -e "S3_BUCKET=your_aws_bucket" \
  -e "SLACK_URI=your_slack_uri" \
  vikasy/mongodump-s3
```
## Required AWS permissions

You need to create an IAM user with the following policy. Be sure to replace `your_bucket` with the correct bucket name.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1412062044000",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::your_bucket/*"
      ]
    },
    {
      "Sid": "Stmt1412062128000",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::your_bucket"
      ]
    }
  ]
}
```
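If you manage IAM from the command line, one way to set this up is with the AWS CLI, saving the policy above as `policy.json` first. The user name and policy name below are just examples:

```bash
# Create a dedicated backup user and attach the inline policy above
aws iam create-user --user-name mongodump-backup
aws iam put-user-policy \
  --user-name mongodump-backup \
  --policy-name mongodump-s3-backup \
  --policy-document file://policy.json

# Generate the access key pair to pass to the container
aws iam create-access-key --user-name mongodump-backup
```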
## Environment variables

- `S3_PATH` - Default value is `mongodb`. Example: `s3://your_bucket/mongodb`.
- `MONGO_COMPLETE` - Default not set. If set, a full backup of the MongoDB instance is taken.
- `MAX_BACKUPS` - Default not set. If set, the last n backups are kept in `/backup`.
- `AWS_S3_ENDPOINT` - Default not set. Use this for backing up to other S3-compatible storage.
- `BACKUP_NAME` - Default is `$(date -u +%Y-%m-%d_%H-%M-%S)_UTC.gz`. If set, this is the name of the backup file. Useful when using S3 versioning. (Remember to include the `.gz` extension in your filename.)
- `EXTRA_OPTIONS` - Default not set. Extra command-line options passed to `mongodump`.
- `SLACK_URI` - Default not set. If set, sends a notification to this Slack Incoming Webhook via `curl`.
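As a quick sanity check, you can expand the default `BACKUP_NAME` pattern in a POSIX shell to see what the generated filenames look like:

```bash
# Default backup filename pattern, e.g. 2024-01-31_02-00-00_UTC.gz
BACKUP_NAME="$(date -u +%Y-%m-%d_%H-%M-%S)_UTC.gz"
echo "$BACKUP_NAME"
```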
## Features to be implemented

- [ ] Kubernetes YAML files and examples
- [x] Update CI to push images to Docker Hub
- [ ] Improve tests to cover more scenarios
## Troubleshooting

- If you get a SASL authentication failure, add `--authenticationDatabase=admin` to `EXTRA_OPTIONS`.
- If you get "Failed: error writing data for collection ... Unrecognized field 'snapshot'", add `--forceTableScan` to `EXTRA_OPTIONS`.
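If you hit both problems, the two flags can be combined in a single `EXTRA_OPTIONS` value; a sketch using the same placeholder values as the examples above:

```bash
docker run -d --name mongodump \
  -e "MONGO_URI=mongodb://user:pass@host:port/dbname" \
  -e "AWS_ACCESS_KEY_ID=your_aws_access_key" \
  -e "AWS_SECRET_ACCESS_KEY=your_aws_secret_access_key" \
  -e "AWS_DEFAULT_REGION=us-west-1" \
  -e "S3_BUCKET=your_aws_bucket" \
  -e "EXTRA_OPTIONS=--authenticationDatabase=admin --forceTableScan" \
  vikasy/mongodump-s3
```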