For large files (100MB+), downloading and storing the entire file in memory is prohibitively expensive, and may not be needed if only small parts of the file are accessed.
It would be great to support HTTP range requests in both synchronous and asynchronous modes.
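For reference, a range request is just an ordinary fetch with a `Range` header; the server answers `206 Partial Content` with only the requested bytes. A minimal sketch (the function names here are illustrative, not existing BrowserFS API):

```typescript
// Read bytes [start, end] (inclusive) of a remote file via an HTTP range
// request. Assumes the server supports ranges and responds with 206.
async function readRange(url: string, start: number, end: number): Promise<Uint8Array> {
  const res = await fetch(url, { headers: { Range: `bytes=${start}-${end}` } });
  if (res.status !== 206) {
    throw new Error(`Server did not honor range request (status ${res.status})`);
  }
  return new Uint8Array(await res.arrayBuffer());
}

// Build the Range header value covering one fixed-size chunk.
function rangeFor(chunk: number, chunkSize: number): string {
  const start = chunk * chunkSize;
  return `bytes=${start}-${start + chunkSize - 1}`;
}
```

A server that ignores the header returns `200` with the whole body, so checking for `206` avoids silently buffering the entire file.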
Some important questions:

- How large should the chunks be? Should that be configurable?
- Should we prefetch chunks? What prefetch policies are useful?
- What configuration knobs should we expose to the user?
- What eviction policy should we use? FIFO? How much data should we keep in memory?
- Do browsers cache data retrieved via HTTP range requests? If so, re-acquiring previously evicted chunks would be much cheaper and would avoid draining the server over a long session.
I don't anticipate being able to work on this anytime soon, but I'm recording my thoughts here for now.
This requires some changes to the Fetch backend to support partial reads and writes of files, plus a configuration option to enable it. Beyond that, it is mostly a matter of sending the right HTTP headers. If you want to take a look at implementing the feature, I can help with any questions or comments you may have.
@jvilk:
jvilk/BrowserFS#219