Hello!
First of all, big thanks for this tool. You saved my life with it.
Now, an enhancement request: would it be possible to add `--offset` and `--size` options for the source file/drive? Possibly even a `--target-offset` as well!
Rationale: I have a 2 TB device I need to sync to a remote host. The internet connection is imperfect (and slow), so I sometimes need to restart the transfer.
On reconnect, rechecking even 30% of the 2 TB drive takes about 2 hours (limited by the device's actual read speed). That is 2 hours spent verifying data that is already known to be OK, or that could be quickly fixed by one last full sweep at the end of the transfer.
With `--offset` and `--size` parameters I could split the transfer into chunks of, say, 100 GB each: sync a chunk, check its integrity, and move on to the next. Checks after a reconnect would be much faster, and at the end I could simply concatenate the resulting files back into the full image and be done with it.
With `--target-offset` I wouldn't even need to concatenate the files; the chunks would end up in the right place automatically.
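For illustration, a minimal sketch of how the chunked workflow could look, assuming the proposed flags; `--offset`, `--size` and `--target-offset` do not exist in diskrsync today, and the device, host and paths below are made up:

```bash
#!/bin/bash
# Sketch only: --offset/--size/--target-offset are the flags proposed above,
# not options diskrsync currently supports; device, host and paths are hypothetical.
CHUNK=$((100 * 1024 * 1024 * 1024))        # 100 GiB per chunk
TOTAL=$((2 * 1024 * 1024 * 1024 * 1024))   # 2 TiB device

offset=0
while [ "$offset" -lt "$TOTAL" ]; do
    # Sync one window; after a dropped connection only this window
    # would need to be re-checked, not the whole 2 TB image.
    diskrsync --offset "$offset" --size "$CHUNK" --target-offset "$offset" \
        /dev/sdX user@remote-host:/backups/sdX.img
    offset=$((offset + CHUNK))
done
```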
I'm pretty positive that operating with offsets would be useful in other cases as well, basically turning diskrsync into something like `dd` over ssh.
Thanks again for the amazing tool!
Without actually checking the code, I'd guess that source and destination must be the same (in length and content), since the hashing wouldn't work otherwise.
The same offset for both src and tgt could work, or, as you seem to suggest, `--begin-pos` and `--end-pos` of the transferred data.
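For concreteness, that kind of invocation might look like this (both flag names are only the ones floated above, nothing diskrsync supports today):

```bash
# Hypothetical syntax: limit both source and target to the same byte range,
# so the block hashes on the two sides still line up.
diskrsync --begin-pos 300G --end-pos 400G /dev/sdX user@remote-host:/backups/sdX.img
```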