
Support compressing/decompressing files to s3 #20

Open
larryfenn opened this issue Jun 12, 2019 · 1 comment

Comments

@larryfenn
Contributor

Currently, the contents of the data directory are uploaded to S3 and retrieved exactly as they are. Since data files generally compress well, substantial storage, bandwidth, and time savings may be possible for larger projects if something like gzip were run on the data directory. This raises one problem, idiosyncratic to the AP's use case: we have publicly viewable HTML files in a subdirectory of data that are meant for people to look at directly.

I have identified three possible approaches:

Option 1

Compress all folders under data except reports (or some similarly named subfolder), which is explicitly left uncompressed before being uploaded to S3. Anything put in that subfolder remains accessible directly via its S3 path.

On s3, the data files would look like this after compression:

data/manual.gz
data/processed.gz
data/source.gz
data/reports/my_report.html
data/reports/some_image.png
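A minimal sketch of option 1, assuming each top-level subfolder of data is tarred and gzipped into a sibling `.tar.gz` while an exempt folder (hard-coded as `reports` here, which is an assumption) is left alone:

```python
import tarfile
from pathlib import Path

EXEMPT = {"reports"}  # assumption: the exempt subfolder name is fixed


def compress_data_dir(data_dir: str) -> list:
    """Archive each top-level subfolder of data/ into a .tar.gz,
    skipping exempt folders so they stay directly addressable on S3."""
    archives = []
    for sub in sorted(Path(data_dir).iterdir()):
        if not sub.is_dir() or sub.name in EXEMPT:
            continue
        archive = sub.parent / (sub.name + ".tar.gz")
        with tarfile.open(archive, "w:gz") as tar:
            tar.add(sub, arcname=sub.name)
        archives.append(archive)
    return archives
```

After running this, `data/manual.tar.gz` etc. would be uploaded in place of the raw folders, and `data/reports/` would upload file-by-file as it does today.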

Option 2

Support a 'protect' dotfile that marks a directory and all of its subfolders as compression-exempt. For example, data/reports would contain a file, data/reports/.nocompress, that stops it from being compressed before upload to S3. This yields the same overall S3 folder structure as above.
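The exemption check for option 2 could look like this sketch, which walks from a folder up to the data root looking for the marker file (the `.nocompress` name comes from the proposal; the function name is mine):

```python
from pathlib import Path

MARKER = ".nocompress"  # dotfile name proposed above


def is_compression_exempt(folder: Path, data_dir: Path) -> bool:
    """True if the folder, or any ancestor up to data/, contains the
    .nocompress marker -- so exemption covers all subfolders too."""
    folder = folder.resolve()
    data_dir = data_dir.resolve()
    current = folder
    while True:
        if (current / MARKER).exists():
            return True
        # Stop at the data root (or filesystem root, as a safety net).
        if current == data_dir or current.parent == current:
            return False
        current = current.parent
```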

Option 3

The datakit-data.json config file expands to include a whitelist of folders not to be compressed, with the default set to data/reports (or just reports).
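For option 3, reading the whitelist out of the config might look like the sketch below. The key name `compression_exempt` is an assumption, since the issue doesn't specify the schema:

```python
import json
from pathlib import Path

DEFAULT_EXEMPT = ["reports"]  # default proposed in the issue


def load_exempt_folders(config_path: str) -> list:
    """Read the compression whitelist from datakit-data.json,
    falling back to the default if the file or key is absent.
    The 'compression_exempt' key name is hypothetical."""
    try:
        config = json.loads(Path(config_path).read_text())
    except FileNotFoundError:
        return DEFAULT_EXEMPT
    return config.get("compression_exempt", DEFAULT_EXEMPT)
```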

@zstumgoren
Contributor

I like this idea in general and option 1 in particular. A related consideration on the price front would be transitioning data assets to S3 Glacier, either manually or automatically after some period of time. But that may be best discussed as part of a new ticket, and may ultimately be something specific to each user (e.g. some may want the cost savings at the expense of losing immediate access, whereas others always require immediate access to files).
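The automatic variant could use an S3 lifecycle rule rather than application code. A sketch of such a rule, assuming a 90-day transition window and a `data/` prefix (both numbers and names are placeholders, not a decided policy):

```python
# S3 lifecycle configuration that transitions objects under data/
# to the GLACIER storage class after 90 days (values are assumptions).
lifecycle_config = {
    "Rules": [
        {
            "ID": "glacierize-data",
            "Filter": {"Prefix": "data/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
        }
    ]
}

# It would be applied once per bucket, e.g. with boto3 (requires AWS
# credentials, so shown here as a comment):
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-bucket", LifecycleConfiguration=lifecycle_config)
```

The manual variant would instead copy individual objects with `StorageClass="GLACIER"` on a per-user decision, which matches the point that some users will want immediate access preserved.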
