Sub-tasks:

- [ ] Define a new `BatchObjectStore` trait. Start with just a single method: `get_batch(Vec<operations>) -> Stream`. (#32)
- [ ] `impl BatchObjectStore for IoUringLocal`. Implement just `get_batch`. For the `Item` type in the `Stream`, see "New crate: apply a user-supplied function to a `Stream` of buffers" (#26). (#33)
- [ ] Benchmark `IoUringLocal::get_batch` against other implementations. (#34)

**Why define a new `BatchObjectStore` trait?**

The alternative is to call an existing `ObjectStore` method in a loop. I have a hunch that, when we're dealing with millions of operations, it may be a significant overhead to create one `Future` per operation, and to wake those `Future`s.
The existing `ObjectStore` API also lacks some functionality that we need: `ObjectStore::get_ranges` only returns once all the byte_ranges have been read. We could instead call `ObjectStore::get_range` multiple times. But that limits LSIO's ability to optimise the reads.

Or maybe we could add an `IoUringLocal::submit` function, such that no operations are submitted to `io_uring` until `submit` is called? When `submit` is called, LSIO would first optimise all the operations submitted so far.
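A std-only sketch of the deferred-`submit` idea. `DeferredReader` is a made-up name, and the "optimisation" here is just sorting and merging adjacent or overlapping ranges; a real implementation would go on to submit the merged ranges to `io_uring` rather than returning them:

```rust
use std::collections::VecDeque;
use std::ops::Range;

/// Hypothetical sketch: operations are queued locally and nothing is
/// submitted to io_uring until `submit` is called.
struct DeferredReader {
    queued: VecDeque<Range<u64>>,
}

impl DeferredReader {
    fn new() -> Self {
        Self { queued: VecDeque::new() }
    }

    /// Queue a read; nothing is submitted yet.
    fn get_range(&mut self, byte_range: Range<u64>) {
        self.queued.push_back(byte_range);
    }

    /// Optimise the whole batch (sort, then merge adjacent/overlapping
    /// ranges), then — in a real implementation — submit to io_uring.
    fn submit(&mut self) -> Vec<Range<u64>> {
        let mut ops: Vec<Range<u64>> = self.queued.drain(..).collect();
        ops.sort_by_key(|r| r.start);
        let mut merged: Vec<Range<u64>> = Vec::new();
        for r in ops {
            if let Some(last) = merged.last_mut() {
                if r.start <= last.end {
                    last.end = last.end.max(r.end);
                    continue;
                }
            }
            merged.push(r);
        }
        merged
    }
}

fn main() {
    let mut reader = DeferredReader::new();
    reader.get_range(100..200);
    reader.get_range(0..4);
    reader.get_range(4..8);
    // The two small adjacent reads coalesce into one larger read.
    assert_eq!(reader.submit(), vec![0..8, 100..200]);
}
```

Deferring submission is exactly what lets LSIO see the whole batch at once: the same information a `get_batch(Vec<operations>)` call provides, delivered incrementally instead.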
Maybe the answer isn't a `BatchObjectStore` trait, but instead an `ObjectStoreWithBuffer` trait, which defines a bunch of `get_with_buffer` methods, plus a `WaitForSubmit` trait with a `submit` method?

Either way, we want a `Stream` (aka `AsyncIterator`) of buffers. Then we can have a separate crate which applies an arbitrary processing function (such as decompression) to a `Stream` of buffers, in parallel across CPU cores (see "New crate: apply a user-supplied function to a `Stream` of buffers", #26).
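A rough sketch of what that separate crate could do. It is std-only and thread-based rather than async, purely for illustration; `process_buffers` and `n_workers` are made-up names, and a real crate would consume a `Stream` rather than a `Vec`:

```rust
use std::collections::VecDeque;
use std::sync::{mpsc, Mutex};
use std::thread;

/// Apply a user-supplied function `f` to each buffer, in parallel across
/// `n_workers` threads, preserving the original order of the results.
fn process_buffers<F>(buffers: Vec<Vec<u8>>, n_workers: usize, f: F) -> Vec<Vec<u8>>
where
    F: Fn(Vec<u8>) -> Vec<u8> + Send + Sync,
{
    let n = buffers.len();
    // A shared work queue of (index, buffer) pairs.
    let queue: Mutex<VecDeque<(usize, Vec<u8>)>> =
        Mutex::new(buffers.into_iter().enumerate().collect());
    let (tx, rx) = mpsc::channel();
    thread::scope(|s| {
        for _ in 0..n_workers {
            let tx = tx.clone();
            let queue = &queue;
            let f = &f;
            s.spawn(move || loop {
                // Pop one job at a time so fast workers steal more work.
                let item = queue.lock().unwrap().pop_front();
                match item {
                    Some((i, buf)) => tx.send((i, f(buf))).unwrap(),
                    None => break,
                }
            });
        }
        drop(tx); // so `rx` ends once all workers finish
        let mut out: Vec<Option<Vec<u8>>> = vec![None; n];
        for (i, buf) in rx.iter() {
            out[i] = Some(buf);
        }
        out.into_iter().map(|b| b.unwrap()).collect()
    })
}

fn main() {
    let bufs = vec![vec![1u8, 2], vec![3], vec![4, 5, 6]];
    // Stand-in for real per-buffer work such as decompression.
    let doubled = process_buffers(bufs, 4, |b| b.iter().map(|x| x * 2).collect());
    assert_eq!(doubled, vec![vec![2u8, 4], vec![6], vec![8, 10, 12]]);
}
```

The point is that once IO yields a stream of buffers, the CPU-bound stage composes cleanly on top without the IO layer knowing anything about it.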
Or we could interleave compute with IO by applying processing to each `Future` as it completes (see code in "Try interleaving compute with IO", #37). And if we don't define a `BatchObjectStore` trait, maybe we can just write a utility function which takes a `Vec<Future>` and returns a `Stream`?
So, in conclusion, I think the main reason for wanting a new `BatchObjectStore` trait is that it might perform better. All the other reasons for wanting a `BatchObjectStore` trait can be achieved in a less intrusive fashion. So, I should implement an MVP `BatchObjectStore`, just to benchmark it.