The Cassandra AFS implementation stores data by splitting it into chunks and gzipping each chunk.
As a result, the configured chunk size does not determine the size of the chunks that are actually stored.
What is the expected behavior?
It would make more sense to enforce the chunk size on the data that is actually stored, i.e. on the gzipped data.
This would give real control over the size of the data chunks stored in Cassandra.
Also, the chunking logic should be separated from the compression logic and from the underlying storage, to make it more testable. One possibility is a dedicated `ChunkedOutputStream` class, which would hand `byte[]` chunks of a given size to an arbitrary consumer (or, possibly, to a delegate output stream). A sketch of this idea is shown below.
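A minimal sketch of what such a class could look like (the constructor signature, the `Consumer<byte[]>` callback and the `storeInCassandra` helper in the usage comment are assumptions, not the actual AFS API):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.Arrays;
import java.util.function.Consumer;
import java.util.zip.GZIPOutputStream;

/**
 * Hypothetical sketch: buffers written bytes and hands fixed-size
 * byte[] chunks to an arbitrary consumer (e.g. a Cassandra writer),
 * independently of any compression or storage concern.
 */
public class ChunkedOutputStream extends OutputStream {

    private final int chunkSize;
    private final Consumer<byte[]> chunkConsumer;
    private final byte[] buffer;
    private int position = 0;

    public ChunkedOutputStream(int chunkSize, Consumer<byte[]> chunkConsumer) {
        this.chunkSize = chunkSize;
        this.chunkConsumer = chunkConsumer;
        this.buffer = new byte[chunkSize];
    }

    @Override
    public void write(int b) {
        buffer[position++] = (byte) b;
        if (position == chunkSize) {
            flushChunk();
        }
    }

    @Override
    public void write(byte[] bytes, int off, int len) {
        for (int i = 0; i < len; i++) {
            write(bytes[off + i]);
        }
    }

    private void flushChunk() {
        // Emit a full chunk and reset the buffer.
        chunkConsumer.accept(Arrays.copyOf(buffer, position));
        position = 0;
    }

    @Override
    public void close() throws IOException {
        // Emit the trailing partial chunk, if any.
        if (position > 0) {
            flushChunk();
        }
        super.close();
    }

    // Usage sketch: gzip *into* the chunked stream, so the chunk size
    // applies to the compressed bytes actually stored in Cassandra.
    // storeInCassandra is a hypothetical storage callback.
    static void writeCompressed(byte[] data, int chunkSize, Consumer<byte[]> storeInCassandra) throws IOException {
        try (OutputStream os = new GZIPOutputStream(new ChunkedOutputStream(chunkSize, storeInCassandra))) {
            os.write(data);
        }
    }
}
```

Composing the streams in this order (compression wrapping chunking) is what makes the chunk size apply to the gzipped data, while keeping the chunking class testable on its own with a simple in-memory consumer.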
What is the motivation / use case for changing the behavior?
Better control over the data stored in the database. It would also further separate the compression and chunking steps, which will help implement #58.