Cassandra: enforce chunks size on compressed data #59

Open
sylvlecl opened this issue Oct 21, 2020 · 0 comments
sylvlecl commented Oct 21, 2020

  • Do you want to request a feature or report a bug?

Improvement.

  • What is the current behavior?

The Cassandra AFS implementation stores data by cutting it into chunks and gzipping each chunk.
As a result, the configured chunk size applies to the raw data and does not determine the size of the chunks which are actually stored.
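For illustration only (a minimal sketch, not the actual AFS code), the current order of operations is roughly:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.zip.GZIPOutputStream;

// Illustration only: the raw data is split first, then each chunk is gzipped
// independently, so the size of each stored (compressed) chunk is unpredictable.
static byte[] gzip(byte[] bytes) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
        gz.write(bytes);
    }
    return bos.toByteArray();
}

static List<byte[]> currentStoredChunks(byte[] data, int chunkSize) throws IOException {
    List<byte[]> stored = new ArrayList<>();
    for (int from = 0; from < data.length; from += chunkSize) {
        int to = Math.min(from + chunkSize, data.length);
        stored.add(gzip(Arrays.copyOfRange(data, from, to))); // size varies with compressibility
    }
    return stored;
}
```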

  • What is the expected behavior?

It would make more sense to enforce the chunk size on the data as actually stored, i.e. on the gzipped data.
This would give real control over the size of the data chunks stored in Cassandra.

Also, the chunking logic should be separated from the compression logic and from the underlying storage, to make it more testable. One possibility is a dedicated ChunkedOutputStream class, which would dump byte[] chunks of a given size to an arbitrary consumer (or, possibly, to a delegate output stream), as sketched below.
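A minimal sketch of such a class (the names and the Consumer-based API are just one possible design, not an existing implementation):

```java
import java.io.OutputStream;
import java.util.Arrays;
import java.util.function.Consumer;

// Sketch: buffers written bytes and hands off fixed-size byte[] chunks to an
// arbitrary consumer (e.g. a Cassandra writer), independently of compression
// and of the underlying storage.
public class ChunkedOutputStream extends OutputStream {

    private final byte[] buffer;
    private final Consumer<byte[]> chunkConsumer;
    private int pos;

    public ChunkedOutputStream(int chunkSize, Consumer<byte[]> chunkConsumer) {
        this.buffer = new byte[chunkSize];
        this.chunkConsumer = chunkConsumer;
    }

    @Override
    public void write(int b) {
        buffer[pos++] = (byte) b;
        if (pos == buffer.length) {
            flushChunk();
        }
    }

    @Override
    public void close() {
        if (pos > 0) {
            flushChunk(); // last chunk may be smaller than chunkSize
        }
    }

    private void flushChunk() {
        chunkConsumer.accept(Arrays.copyOf(buffer, pos));
        pos = 0;
    }
}
```

Wrapping a GZIPOutputStream around it would then make the chunk size apply to the compressed bytes:

```java
// Compression happens before chunking, so stored chunks have the configured size.
// storeChunk is a hypothetical storage callback.
try (OutputStream os = new GZIPOutputStream(new ChunkedOutputStream(chunkSize, chunk -> storeChunk(chunk)))) {
    os.write(data);
}
```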

  • What is the motivation / use case for changing the behavior?

Better control over the data stored in the database. It would also further separate the compression and chunking steps, which will help implement #58.
