feat(rpc): submit chainlock signature if needed RPC #5765
Conversation
there could be an edge case where the CL could arrive at an EvoNode faster through the Platform quorum than through regular P2P propagation.
But why is this an issue exactly? Why do we want to submit the chainlock ourselves instead of, say, verifying it via verifychainlock and allowing the node to receive it the usual way?
Also, that's a lot of code duplication with verifychainlock, but for some reason the behaviour differs in some parts. Why is that?
What if we simplify this down to
I will try to answer all of your comments here.
Regarding Core alone, this is indeed not an issue. But if we see Core+Platform as a system, this is a race condition for the ChainLock signature. This PR will help Platform avoid waiting for Core to receive the CL through regular propagation.
Didn't know that. All right, will change it. That means that height can be optional, yes.
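For readers following along, here is a minimal, hedged sketch of the two approaches being weighed, written in the style of the functional tests; the node names, the submitchainlock parameter order (blockHash, signature, height, per the PR description) and any getbestchainlock fields beyond 'signature'/'known_block' are assumptions, not the merged API.

```python
# Hedged sketch, not code from this PR. Assumes `node0`/`node1` are test-framework
# RPC proxies and the parameter/field names noted in the lead-in.
def relay_chainlock_directly(node0, node1):
    best = node1.getbestchainlock()

    # verifychainlock only checks the signature against the quorum; node0 would
    # still have to receive the CLSIG via regular P2P before the block counts as
    # chainlocked locally.
    assert node0.verifychainlock(best['blockhash'], best['signature'], best['height'])

    # submitchainlock verifies AND processes/relays the chainlock on node0 itself,
    # which is what lets Platform avoid waiting for P2P propagation.
    return node0.submitchainlock(best['blockhash'], best['signature'], best['height'])
```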
@UdjinM6 Applied @PastaPastaPasta's suggestions.
Co-authored-by: UdjinM6 <[email protected]>
utACK
Not sure what's going on, but GitLab CI just won't start here for some reason... EDIT: pushed the branch manually https://gitlab.com/dashpay/dash/-/pipelines/1109351723
re-utACK
Thanks for approving your changes 😃
utACK for squash merge
Sorry; needs release notes first; otherwise looks good
LGTM
LGTM
Small typo fix in release notes
Co-authored-by: thephez <[email protected]>
👍
utACK
utACK for squash merge
So we have: is it really an error if the block is unknown? Shouldn't we just be returning false? From Platform's perspective this is not an error, and while we do vote down the proposal, this is not an "error" pathway.
Sounds reasonable.
Yes, @knst, could you make that happen?
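To make the requested contract concrete, here is a small hedged sketch of the caller side, assuming the functional-test import layout and a boolean return ("did this node know the block?") rather than an RPC error; the helper name is hypothetical and the return semantics reflect this discussion, not necessarily the merged API.

```python
# Illustrative only: the caller-side contract requested in this thread, assuming
# submitchainlock returns True/False for "did the node know the block?" while
# still accepting and relaying the chainlock either way.
from test_framework.authproxy import JSONRPCException


def submit_cl(node, block_hash, signature, height):
    """Hypothetical helper: distinguish 'unknown block' from a real RPC failure."""
    try:
        known_block = node.submitchainlock(block_hash, signature, height)
    except JSONRPCException as e:
        # Only a genuinely invalid chainlock (e.g. a bad signature) should land here.
        raise RuntimeError(f"chainlock rejected: {e}") from e
    # False just means "accepted, but the block has not been seen yet"; from
    # Platform's perspective that is not an error pathway.
    return known_block
```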
assert_equal(best_0['signature'], best_1['signature'])
assert_equal(best_0['known_block'], False)
self.reconnect_isolated_node(0, 1)
self.sync_all()
@QuantumExplorer it already behaves differently from what the PR description says.
If the block is unknown, the CL is still submitted and, if that succeeds, we return true. We do not return an error if the block is not known; we just quietly process it, verify it, re-transmit it to the network and add it to the known chainlocks.
Nothing needs to change here; there are even tests for it.
@knst Part of the requirement is that it return true or false based on whether the node had the block (so we don't have to call another RPC); right now it can never return false.
Talked to Sam; we want to change the API in this way. This provides two benefits; the first is short-circuiting when we know the proposed chainlock could not overwrite the current one (due to a lower height).
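As a purely conceptual, hedged sketch of that first benefit (the real logic lives in the node's C++ chainlock handler; all names and the return value here are illustrative, not the actual API):

```python
# Conceptual sketch only; not the actual Dash Core implementation or API.
def handle_submitted_chainlock(submitted_height, best_cl_height, verify_and_process):
    # Short circuit: a chainlock at or below the current best height can never
    # replace the current best chainlock, so skip signature verification entirely.
    if submitted_height <= best_cl_height:
        return False
    # Expensive path: verify the BLS signature, then store and relay the chainlock.
    return verify_and_process()
```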
Issue being fixed or feature implemented
Once Platform is live, there could be an edge case where the CL could arrive at an EvoNode faster through the Platform quorum than through regular P2P propagation.
What was done?
This PR introduces a new RPC, submitchainlock, with the following 3 mandatory parameters: blockHash, signature and height.
Besides some basic tests:
- if the block is unknown to the node, the RPC returns an error (this could happen if the node is stuck)
- otherwise the CL still proceeds, the RPC returns true and the CL is broadcast
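A hedged usage sketch of the new RPC from outside the node (for example, how a Platform component might forward a chainlock it already has); the URL, credentials and parameter order are illustrative assumptions based on the description above.

```python
# Hedged usage sketch, not code from this PR.
from test_framework.authproxy import AuthServiceProxy


def forward_chainlock(rpc_url, block_hash, signature, height):
    node = AuthServiceProxy(rpc_url)  # e.g. "http://rpcuser:rpcpassword@127.0.0.1:9998"
    # Verifies the chainlock signature and, if valid, processes and relays it locally
    # instead of waiting for regular P2P propagation.
    return node.submitchainlock(block_hash, signature, height)
```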
How Has This Been Tested?
feature_llmq_chainlocks.py was modified with the following scenario: get the best chainlock from a node that received it via regular propagation (getbestchainlock()), submit it to an isolated node that does not yet know the corresponding block, then check that node with getbestchainlock() and make sure the CL was processed and 'known_block' is false.
Breaking Changes
no
Checklist: