Parallel Hashing #495
Then could you use the chunked iterator to split and feed the hasher piece-sized
portions in parallel, somewhere around line 101, with rayon returning
the hashes with an index as each one finishes?
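For what it's worth, here is a minimal sketch of that idea, not imdl's actual API: the helper name `hash_pieces_parallel`, the `rayon` and `sha1` dependencies, and the decision to buffer the whole concatenated stream in memory are all assumptions made for illustration. Because `par_chunks` is an indexed parallel iterator, `collect` returns the hashes in piece order even though the hashing itself runs out of order.

```rust
// Sketch only; assumes the `rayon` and `sha1` crates as dependencies.
use rayon::prelude::*;
use sha1::{Digest, Sha1};
use std::{fs, io, path::Path};

/// Hypothetical helper (not imdl's actual API): treat the files as one
/// concatenated byte stream, split it into piece-sized chunks, and hash
/// the chunks in parallel. `par_chunks` is an indexed parallel iterator,
/// so `collect` preserves piece order.
fn hash_pieces_parallel(paths: &[&Path], piece_length: usize) -> io::Result<Vec<u8>> {
    // Reads everything up front for simplicity; a real implementation
    // would stream reads to bound memory use.
    let mut stream = Vec::new();
    for path in paths {
        stream.extend(fs::read(path)?);
    }

    // Hash each piece independently; the last piece may be shorter.
    let hashes: Vec<Vec<u8>> = stream
        .par_chunks(piece_length)
        .map(|piece| Sha1::digest(piece).to_vec())
        .collect();

    // Flatten into the `info.pieces` byte string.
    Ok(hashes.concat())
}
```

The catch, as the quoted reply below explains, is that imdl hashes files by path in order, so this sketch sidesteps the ordering problem only by buffering the entire concatenated stream before any hashing starts.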
…On Wed, 30 Jun 2021, 01:24 Casey Rodarmor wrote:
It's actually slightly more complex, since the hasher will produce a
different output depending on the order that files are passed to it.
In a torrent file, info.pieces contains the bytes of the SHA hashes of
the contents of the files, and often multiple files will contribute to a
single hash.
Consider a torrent with the following files:
a: "xyz",
b: "123",
If the piece size is 2, info.pieces will contain 3 hashes. The 3 hashes
will be hash("xy"), hash("z1"), and hash("23").
In imdl's implementation, Hasher::hash_file is called with the path to a,
which adds hash("xy") to the in-progress info.pieces. The next call to
Hasher::hash_file must then be passed the path to b, so that it picks up
the first byte of b, in this case "1", and can push hash("z1") into
info.pieces. Since these calls are order-sensitive, they can't be
parallelized with rayon without some additional refactoring.
Thanks for the suggestion though! I wish it were that simple T_T
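To make the worked example above concrete, here is a minimal sequential sketch (again hypothetical, not imdl's actual code, and assuming only the `sha1` crate) of how info.pieces is defined: each piece hash covers piece_length bytes of the concatenated file contents, so the middle piece straddles the boundary between a and b.

```rust
// Sketch only; assumes the `sha1` crate as a dependency.
use sha1::{Digest, Sha1};

/// Hash the concatenation of the files in piece-sized chunks, the way
/// `info.pieces` is defined. A piece can straddle a file boundary, which
/// is why the files must be fed to the hasher in order.
fn pieces(files: &[&[u8]], piece_length: usize) -> Vec<Vec<u8>> {
    let stream: Vec<u8> = files.concat();
    stream
        .chunks(piece_length)
        .map(|piece| Sha1::digest(piece).to_vec())
        .collect()
}

fn main() {
    // a = "xyz", b = "123", piece size 2: the pieces are "xy", "z1", "23".
    let hashes = pieces(&[b"xyz".as_slice(), b"123".as_slice()], 2);
    assert_eq!(hashes.len(), 3);
    // The middle piece takes the last byte of `a` and the first byte of `b`.
    assert_eq!(hashes[1], Sha1::digest(b"z1").to_vec());
}
```

The middle assertion is the crux: that piece's hash depends on bytes from both a and b, which is exactly why the Hasher::hash_file calls are order-sensitive.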
I suppose that there would have to be an iterator over file bytes first, and that iterator would need to be parallelized. One question is whether the current hashing algorithm is I/O or CPU bound, since that would suggest whether parallelizing reads or hashing should be the priority. This is discussed a bit in #26, but I think this issue is useful for tracking parallelization of hashing.
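One rough way to answer the I/O-versus-CPU question is a sketch like the following (a hypothetical standalone program, not part of imdl, assuming only the `sha1` crate): time how long it takes to read a large file into memory versus how long it takes to hash the bytes once they are there. If reading dominates, parallelizing or overlapping reads matters more; if hashing dominates, parallel hashing with rayon should pay off. Note the read timing is only meaningful on a cold page cache; repeated runs against the same file mostly measure memory copies.

```rust
// Sketch only; assumes the `sha1` crate as a dependency.
use sha1::{Digest, Sha1};
use std::{env, fs, time::Instant};

fn main() -> std::io::Result<()> {
    // Hypothetical usage: `cargo run --release -- <large file>`.
    let path = env::args().nth(1).expect("usage: bench <file>");

    // Time the read (I/O bound portion).
    let read_start = Instant::now();
    let data = fs::read(&path)?;
    let read_time = read_start.elapsed();

    // Time the hash (CPU bound portion), on bytes already in memory.
    let hash_start = Instant::now();
    let digest = Sha1::digest(&data);
    let hash_time = hash_start.elapsed();

    // Print a digest byte so the hash computation isn't optimized away.
    println!(
        "read {} bytes in {:?}, hashed in {:?} (digest[0] = {:02x})",
        data.len(),
        read_time,
        hash_time,
        digest[0]
    );
    Ok(())
}
```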
intermodal/src/hasher.rs (line 69 at 2346c30)
This may be something you've already ruled out, but I wanted to suggest it just in case.