This repository has been archived by the owner on Apr 27, 2024. It is now read-only.

Update MySQL Database Backup Script for Improved Functionality and Cleanup #58

Open
wants to merge 1 commit into base: master

Conversation

hamdallahjodah

This pull request updates the existing MySQL database backup script to enhance its functionality and introduce file cleanup capabilities. The changes aim to make the backup process more efficient and organized.

Key Changes:

Backup Directory Structure: The script now creates a base directory (/tmp/full_database_backups/) for storing the individual database backups, giving them a more organized, predictable layout.
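As a rough sketch of that layout (the directory name comes from this description; the `dump_path` helper and the date-based file naming are illustrative assumptions, not the PR's actual code):

```shell
#!/bin/sh
# Base directory named in the PR description.
BACKUP_DIR="/tmp/full_database_backups"
mkdir -p "$BACKUP_DIR"

# Hypothetical helper: build the path for one database's dump file,
# e.g. /tmp/full_database_backups/mydb_2024-01-15.sql
dump_path() {
  printf '%s/%s_%s.sql\n' "$BACKUP_DIR" "$1" "$(date +%Y-%m-%d)"
}
```

A backup loop would then write each dump into the shared directory, e.g. `mysqldump "$db" > "$(dump_path "$db")"`.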

Clean Up Old Backups: The script introduces a new feature to clean up old backups based on the MAX_FILES_TO_KEEP variable. If MAX_FILES_TO_KEEP is set to a positive value, the script deletes the oldest backup files so that only the specified number of recent backups is retained.
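A minimal sketch of such a rotation step, assuming backups live in one flat directory with space-free file names (the function and parameter names are illustrative, not taken from the diff):

```shell
#!/bin/sh
# Keep only the newest $2 files in directory $1; a no-op when the
# limit is zero or negative (i.e. retention disabled).
cleanup_old_backups() {
  backup_dir="$1"
  max_files="$2"
  [ "$max_files" -gt 0 ] || return 0
  # ls -1t lists newest first; tail skips the files we want to keep
  # and passes only the older ones on to rm.
  ls -1t "$backup_dir" | tail -n +"$((max_files + 1))" | while read -r f; do
    rm -f "$backup_dir/$f"
  done
}
```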

Tarball Compression: Before uploading to the cloud provider, the script now compresses the individual database backups into a tarball (tar.gz). This reduces the size of backups and makes the upload process more efficient.
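The compression step can be sketched as follows (the helper name and paths are illustrative, not the PR's exact code):

```shell
#!/bin/sh
# Bundle every dump in $1 into a single gzip-compressed tarball at $2.
# -C makes the archive members relative to the backup directory, so
# extracting does not recreate the full /tmp/... path.
make_tarball() {
  src_dir="$1"
  out_file="$2"
  tar -czf "$out_file" -C "$src_dir" .
}
```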

Encryption with age: If an AGE_PUBLIC_KEY is provided, the script will encrypt the tarball using age before uploading to the cloud provider. This adds an extra layer of security to the backups.
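Assuming the standard `age` CLI, the optional encryption step would look roughly like this (a sketch with illustrative file names, not the PR's exact code):

```shell
# Encrypt the tarball only when a recipient key is configured.
if [ -n "$AGE_PUBLIC_KEY" ]; then
  # age -r encrypts to the given recipient (public key); -o names the output.
  age -r "$AGE_PUBLIC_KEY" -o backup.tar.gz.age backup.tar.gz
  UPLOAD_FILE="backup.tar.gz.age"
else
  UPLOAD_FILE="backup.tar.gz"
fi
```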

AWS S3 Endpoint Support: The script now supports custom AWS S3-compatible endpoints if AWS_S3_ENDPOINT is provided. This allows using alternative S3-compatible storage services.
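One way such a switch can work is to append the aws-cli `--endpoint-url` option only when `AWS_S3_ENDPOINT` is set. The helper below builds the command line as a string so the logic can be checked without AWS credentials; the function name and bucket are hypothetical:

```shell
#!/bin/sh
# Print the aws-cli command that would upload $1 to bucket $2, adding
# --endpoint-url only when AWS_S3_ENDPOINT is non-empty.
s3_upload_cmd() {
  file="$1"
  bucket="$2"
  if [ -n "${AWS_S3_ENDPOINT:-}" ]; then
    printf 'aws s3 cp %s s3://%s/ --endpoint-url %s\n' "$file" "$bucket" "$AWS_S3_ENDPOINT"
  else
    printf 'aws s3 cp %s s3://%s/\n' "$file" "$bucket"
  fi
}
```

The real script would execute the command rather than print it; `--endpoint-url` is the standard aws-cli option for pointing uploads at S3-compatible services such as MinIO.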

How to Test:

  1. Run the updated script locally and ensure it performs database backups as expected.
  2. Verify that the script correctly creates the base directory /tmp/full_database_backups/ for storing individual backups.
  3. Check that old backups are being cleaned up correctly based on the MAX_FILES_TO_KEEP variable.
  4. Confirm that the individual database backups are compressed into a tarball (tar.gz) before being processed for upload.
  5. Test the optional encryption feature by providing a valid AGE_PUBLIC_KEY.
  6. Verify that AWS S3 uploads work correctly, and custom endpoints are used if AWS_S3_ENDPOINT is provided.
  7. Test GCP uploads and ensure backups are successfully stored in the specified GCS bucket.

Additional Notes:

Please update any environment-specific variables (e.g., GCP_GCLOUD_AUTH, TARGET_DATABASE_USER, etc.) according to your setup.
If you have any custom requirements or use cases not covered by this update, please provide feedback so that we can consider further improvements.

Why This Change is Beneficial:

The updated script improves the backup process by organizing backups in a separate directory, introducing tarball compression, and providing options for encryption and custom S3 endpoints. The addition of cleanup functionality also ensures that old backups do not clutter the storage space. Overall, these enhancements make the database backup process more efficient, secure, and maintainable.

Kindly review this pull request and let me know if any further modifications are required. I'm happy to address any feedback or suggestions you may have. Thank you!

@clementnuss

This would be quite valuable. @benjamin-maynard, can you have a look at it?

@clementnuss

@hamdallahjodah I went another way for my backups; maybe this can help you too:
https://gist.github.com/clementnuss/d66ff435f11570944f646b4f8a1677be
