[Bug]: Node AppHash'ing after v19.x.x upgrade #3344
Thank you for raising this concern, I'm sorry you are facing issues. Have you tried any other node (default, pruned) from QuickSync? We can try to get in contact with QuickSync and help them debug. It is strange if this is happening frequently, but we have had similar apphash reports before. If possible, it would be beneficial to try to replicate this on a smaller node (to reduce debugging time). If this issue happens with other QuickSync snapshots but not on Polkachu or NodeStake, it could point to a slight misconfiguration in QuickSync's export procedure that can be mitigated.
I forgot to mention: we downloaded Polkachu's pruned snapshot, and it's running fine with the same binary.
@MSalopek, did you get a chance to check with the QuickSync team?
@MSalopek Even I am getting similar issues.
ChainLayer has been contacted. Updates will be posted as they reach us.
The issue seems to be solved on QuickSync's end. Feel free to resync from the newest snapshot.
No, it's not resolved. We are running pruned nodes for now.
I can confirm that it's not resolved yet. I re-downloaded the archive for 2 of our RPC nodes after the message that it got fixed on QuickSync's end, but it keeps on failing.
Sorry to hear that this is still persisting. We could provide instructions for a stop-gap solution that you could execute. The solution would require syncing an old gaia node instance and performing upgrades at designated block heights. Unfortunately, we do not have other actions we can perform here other than checking in with QuickSync to help troubleshoot. I will keep this issue open and close all other related issues.
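For context on the stop-gap above: syncing an older gaia instance through designated upgrade heights is usually managed with cosmovisor, which swaps binaries at each scheduled height. A hedged sketch of the expected directory layout; the upgrade name is a placeholder, since the thread does not specify the exact versions or heights:

```
~/.gaia/cosmovisor/
├── genesis/
│   └── bin/gaiad            # old binary the node starts syncing with
└── upgrades/
    └── <upgrade-name>/
        └── bin/gaiad        # binary switched to at that upgrade's height
```

Each `<upgrade-name>` must match the name in the corresponding on-chain upgrade plan.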
I'll reach out to you if we decide to follow the stop-gap solution. Yesterday, once again, we downloaded the latest archival snapshot, but it failed after a while.
@a26nine In the downloaded snapshot, there was just one dir before `wasm`: `data`.
Is there an existing issue for this?
What happened?
Our `cosmoshub-4` archive nodes stopped progressing after the `v19` upgrade. So, we downloaded the archive snapshot from QuickSync. The nodes progressed smoothly for a while, but then they AppHash'd. We waited for a few days and downloaded another snapshot from the same source, and the results were the same: the node AppHash'd after some time. Once more, we waited a few days for a new snapshot, got it, and got AppHash'd again.

The most recent AppHash happened on `v19.2.0`.

I am not sure who/what is the culprit here: the snapshot, the binary, or something else?
We rolled back a few times and cleared the `wasm` directory before starting `gaiad`. We also tried running with the pre-built binaries supplied in the Releases section. But none of it helped.

Our build process: `make install` (current Go version is `1.22.6`)

Long Version:
Gaia Version
v19.2.0
How to reproduce?
1. Build the `gaiad` binary.
2. Start `gaiad`.
3. The node will AppHash after some time.
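When this failure hits, consensus halts with an AppHash mismatch in the node logs. A hedged sketch of the typical CometBFT error shape (the hashes here are placeholders, not values from this report):

```
ERR ... err="wrong Block.Header.AppHash.  Expected <hash from local state>, got <hash in block>"
```

Capturing this line, plus the height at which it occurs, is usually the first thing snapshot providers ask for when debugging.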