I would like to propose that we distribute the mainnet, and perhaps testnet, snapshots used for FastSync globally. (Motivation: slow download rates from the Contabo S3 bucket, the severed sea cable between Germany and Finland, and potentially slow downloads between the USA and Finland.)
Since the load-utxo script should validate the SHA-256 file hashes, distributing these files shouldn't pose any risk.
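To make the trust model concrete, here is a minimal sketch of the kind of hash check the download script would need before using a snapshot from an untrusted mirror. The function name and chunk size are illustrative, not the actual load-utxo implementation:

```python
import hashlib

def verify_snapshot(path: str, expected_sha256: str) -> bool:
    """Hash the downloaded snapshot file in 1 MiB chunks and compare
    against the trusted SHA-256 hex digest shipped with the release."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256.lower()
```

As long as the expected digest comes from a trusted channel (the repository itself, not the mirror), a tampered snapshot fails this check regardless of where it was downloaded from.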
As a first proof of concept, I've built server-based GeoIP logic that calculates the distance between the remote IP and the download servers and redirects to the nearest one. This approach is still not ideal: it doesn't know the actual download speeds, which would be more relevant.
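The server-side selection boils down to a great-circle distance comparison once GeoIP has resolved the client to coordinates. A sketch of that logic, with hypothetical mirror URLs and locations (not real endpoints):

```python
from math import asin, cos, radians, sin, sqrt

# Hypothetical mirror list: (url, latitude, longitude). Placeholder
# endpoints only -- the real mirror set would come from configuration.
MIRRORS = [
    ("https://eu.example.org/snapshot", 50.11, 8.68),    # Frankfurt
    ("https://us.example.org/snapshot", 39.04, -77.49),  # Ashburn
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def nearest_mirror(client_lat, client_lon):
    """Redirect target: the mirror with the smallest distance to the
    client's GeoIP-resolved coordinates."""
    return min(
        MIRRORS,
        key=lambda m: haversine_km(client_lat, client_lon, m[1], m[2]),
    )[0]
```

This illustrates the limitation mentioned above: geographic proximity is only a proxy for throughput, and the nearest mirror is not necessarily the fastest one.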
As an alternative, I would like to explore a client-based approach: download the first 100 bytes (potentially we need a bit more) from each mirror and measure the elapsed time.
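The client-side probe could be as simple as an HTTP Range request per mirror with wall-clock timing. A sketch under the assumption that the mirrors support Range requests; the function names and byte count are illustrative:

```python
import time
import urllib.request

def probe(url: str, nbytes: int = 100, timeout: float = 5.0) -> float:
    """Fetch the first `nbytes` of `url` via an HTTP Range request and
    return the elapsed time in seconds; float('inf') on any failure so
    unreachable mirrors sort last."""
    req = urllib.request.Request(
        url, headers={"Range": f"bytes=0-{nbytes - 1}"}
    )
    start = time.monotonic()
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            resp.read(nbytes)
    except OSError:
        return float("inf")
    return time.monotonic() - start

def fastest_mirror(urls):
    """Pick the mirror that served the probe quickest."""
    return min(urls, key=probe)
```

One caveat with such a small probe: it mostly measures latency (TCP/TLS handshake and round-trip time) rather than sustained throughput, which is why a somewhat larger range may be needed in practice.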
Before starting to build something for this issue, I would like the following questions clarified/discussed:
Is it safe to put mirroring of the snapshot files in the hands of third parties (like myself)? Are we safe against manipulation thanks to the trusted SHA-256 hashes?
Are there any established scripts/best practices for this problem?
(follow up to the discussion on #957)