#84 - clearing all blocks for a dataCid should remove the data from disk right now (#279)

`leveldb` does not seem to immediately delete data from disk when calling `clear` (or `del`). Instead, `leveldb` periodically removes non-referenced data when it determines that it needs to (or at an opportune moment, e.g. on `open`).

This can be tested manually (we can't do an automated test because `approximateSize` isn't available on all platforms) by doing the following:

1. change any of the `DataStore` tests involving large amounts of data (e.g. `10_000_000`) to be `it.only`
2. repeatedly run `npm run test:node`
3. observe the size of `TEST-DATASTORE` on disk

The size of `TEST-DATASTORE` should not go above ~3× the size of the data (e.g. `30_000_000`). Furthermore, if you then change that test to do nothing other than `open`, you should see the size of `TEST-DATASTORE` decrease on most runs until it reaches a value small enough that `leveldb` doesn't feel the need to do any compacting.

If instead we want the data deleted on `clear`, we need to leverage the additional `compactRange` method provided by `classic-level`. To make this the nicest API, we only allow it for a `sublevel`, since that's the easiest way to derive a range to pass to `compactRange` (based on the `prefix` of the `sublevel`). Also, for now we only use `compactRange` inside `clear`, as the current structure of `DataStoreLevel` leverages `clear` to delete data (and that's where the large majority of massive data will be).
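As a rough sketch of the idea, not the actual `DataStoreLevel` code: the sublevel name, the `clearAndCompact` helper, and the prefix-increment trick for the upper bound below are illustrative assumptions. The point is simply that the sublevel's `prefix` bounds every key it owns in the root store, which gives a ready-made range for `compactRange`.

```ts
import { ClassicLevel } from 'classic-level';

// root store on disk; key/value encodings are assumptions for this sketch
const root = new ClassicLevel<string, Buffer>('./TEST-DATASTORE', {
  keyEncoding: 'utf8',
  valueEncoding: 'buffer'
});

// hypothetical sublevel used for illustration
const data = root.sublevel<string, Buffer>('data', { valueEncoding: 'buffer' });

/**
 * Clears the sublevel, then asks leveldb to compact the key range the
 * sublevel occupies in the root store so the deleted bytes are reclaimed
 * on disk instead of waiting for a background compaction.
 */
async function clearAndCompact(): Promise<void> {
  await data.clear();

  // All of the sublevel's keys share its prefix (e.g. '!data!').
  // Incrementing the final separator character gives an upper bound
  // that covers that entire range (e.g. '!data"').
  const start = data.prefix;
  const end =
    start.slice(0, -1) +
    String.fromCharCode(start.charCodeAt(start.length - 1) + 1);

  await root.compactRange(start, end);
}
```

Note that the compaction runs on the root `ClassicLevel` instance rather than the sublevel, since `compactRange` is a `classic-level` extension and the range is expressed in the root store's (prefixed) key space.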