
Make RPC client try more #2996

Merged
cthulhu-rider merged 5 commits into master from make-morph-try-more on Nov 2, 2024
Conversation

roman-khimov
Member

No description provided.

Added by ddf1ac0, currently not affecting
anything.

Signed-off-by: Roman Khimov <[email protected]>
These channels are set once per Client lifetime, there is no need to
synchronize.

Signed-off-by: Roman Khimov <[email protected]>
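
As an illustrative sketch of that pattern (the field names here are hypothetical, not the actual ones in pkg/morph/client): channels that are created once in the constructor and never reassigned can be used from multiple goroutines without any extra synchronization of the fields themselves.

```go
package main

import "fmt"

// Client owns channels that are created exactly once in New and never
// reassigned afterwards, so no mutex is needed to read the fields themselves.
type Client struct {
	notifChan chan string // written by a producer goroutine, read by consumers
	closeChan chan struct{}
}

// New sets the channel fields a single time for the Client's lifetime.
func New() *Client {
	return &Client{
		notifChan: make(chan string, 16),
		closeChan: make(chan struct{}),
	}
}

func main() {
	c := New()
	go func() { c.notifChan <- "block added" }()
	fmt.Println(<-c.notifChan) // safe: the field is never rewritten after New
}
```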
We're not making a switch here, so we don't need a full lock.

Signed-off-by: Roman Khimov <[email protected]>
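
A rough sketch of the distinction (only switchLock is a real name from this package; the rest is illustrative): read-only call paths can take the shared lock, and the exclusive lock stays reserved for the code path that actually switches endpoints.

```go
package main

import (
	"fmt"
	"sync"
)

type client struct {
	switchLock sync.RWMutex
	endpoint   string
}

// call only observes the current endpoint, so a shared (read) lock suffices.
func (c *client) call() string {
	c.switchLock.RLock()
	defer c.switchLock.RUnlock()
	return c.endpoint
}

// switchEndpoint actually replaces the endpoint and needs the exclusive lock.
func (c *client) switchEndpoint(e string) {
	c.switchLock.Lock()
	defer c.switchLock.Unlock()
	c.endpoint = e
}

func main() {
	c := &client{endpoint: "https://rpc1.example.com"}
	fmt.Println(c.call())
	c.switchEndpoint("https://rpc2.example.com")
	fmt.Println(c.call())
}
```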
Keep "subscriptions" per-connection.

Signed-off-by: Roman Khimov <[email protected]>
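
A hedged sketch of the idea (types and fields are illustrative, not the actual ones): subscription state lives inside the per-connection object, so replacing the connection drops stale subscriptions automatically and the new connection re-subscribes from scratch.

```go
package main

import "fmt"

// connection bundles everything that only makes sense for one live RPC
// connection, including its subscription IDs; a reconnect builds a fresh one.
type connection struct {
	endpoint      string
	subscriptions map[string]struct{} // subscription IDs valid for this connection only
}

func dial(endpoint string) *connection {
	return &connection{
		endpoint:      endpoint,
		subscriptions: make(map[string]struct{}),
	}
}

func (c *connection) subscribe(id string) { c.subscriptions[id] = struct{}{} }

func main() {
	conn := dial("https://rpc1.example.com")
	conn.subscribe("new-block-sub")

	// Reconnecting replaces the whole connection; old subscriptions are not
	// carried over and must be re-created against the new connection.
	conn = dial("https://rpc2.example.com")
	fmt.Println(len(conn.subscriptions)) // 0
}
```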
codecov bot commented Nov 1, 2024

Codecov Report

Attention: Patch coverage is 0% with 278 lines in your changes missing coverage. Please review.

Project coverage is 22.91%. Comparing base (838126c) to head (5ec3d5a).
Report is 11 commits behind head on master.

Files with missing lines          | Patch % | Lines
pkg/morph/client/client.go        | 0.00%   | 81 Missing ⚠️
pkg/morph/client/constructor.go   | 0.00%   | 60 Missing ⚠️
pkg/morph/client/connection.go    | 0.00%   | 40 Missing ⚠️
pkg/morph/client/notary.go        | 0.00%   | 34 Missing ⚠️
pkg/morph/client/notifications.go | 0.00%   | 34 Missing ⚠️
pkg/morph/client/multi.go         | 0.00%   | 20 Missing ⚠️
pkg/morph/client/nns.go           | 0.00%   | 9 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #2996      +/-   ##
==========================================
+ Coverage   22.88%   22.91%   +0.03%     
==========================================
  Files         785      786       +1     
  Lines       58806    58709      -97     
==========================================
- Hits        13457    13454       -3     
+ Misses      44469    44374      -95     
- Partials      880      881       +1     


Member
@carpawell carpawell left a comment

I do not mind the code, but I am not sure I understand why this PR is so big: "Reconnect indefinitely." could be solved with a single-line change, while the other functional changes just made this client less flexible (this client is used in SN and IR, and it used to be used in an external client, or maybe still is).

@@ -493,28 +451,26 @@ func (c *Client) TxHalt(h util.Uint256) (res bool, err error) {

// TxHeight returns true if transaction has been successfully executed and persisted.
func (c *Client) TxHeight(h util.Uint256) (res uint32, err error) {
c.switchLock.RLock()
defer c.switchLock.RUnlock()
var conn = c.conn.Load()
Member
so many vars at the first line of the func...

Member Author

Love Pascal.


var err error
var act *actor.Actor
var (
Member

even here? more parentheses, more nesting, more lines

Member Author
@roman-khimov roman-khimov Nov 2, 2024

I can add even more of them into this var () block for you. But hey, we're defining some variables; var () is exactly for that, and it's preferred over a series of var ... var ... var ....

Member

As always, I prefer :=
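
For reference, the three declaration styles being debated look like this (a standalone illustration, not code from the PR):

```go
package main

import "fmt"

func main() {
	// Grouped declaration, as used in the PR.
	var (
		a int
		b string
	)

	// Equivalent series of single declarations.
	var c int
	var d string

	// Short variable declarations, as preferred by the second reviewer.
	e, f := 0, ""

	fmt.Println(a, b, c, d, e, f)
}
```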

@roman-khimov
Member Author

The other change here is that the client will fail requests immediately instead of blocking when it's disconnected. The effect is harder to predict, but it seems fairer: associated requests will fail quickly instead of waiting for an unknown amount of time.
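
A minimal sketch of that behavior, assuming an atomically stored connection pointer (errConnectionLost and the other names are illustrative): when there is no live connection, a call returns an error right away instead of blocking until a reconnect happens.

```go
package main

import (
	"errors"
	"fmt"
	"sync/atomic"
)

var errConnectionLost = errors.New("connection to the RPC node has been lost")

type connection struct{ endpoint string }

type client struct {
	conn atomic.Pointer[connection] // nil while disconnected
}

// txHeight fails fast when there is no connection instead of waiting for one.
func (c *client) txHeight() (uint32, error) {
	conn := c.conn.Load()
	if conn == nil {
		return 0, errConnectionLost
	}
	return 42, nil // would query conn here
}

func main() {
	var c client
	if _, err := c.txHeight(); err != nil {
		fmt.Println("request failed quickly:", err) // no blocking
	}
	c.conn.Store(&connection{endpoint: "https://rpc1.example.com"})
	h, _ := c.txHeight()
	fmt.Println("height:", h)
}
```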

Contributor
@cthulhu-rider cthulhu-rider left a comment

Worth being changelogged.

Reconnect indefinitely. After the initial probe this should be the behavior
for the client: we cannot work without blockchain data, so we must wait for it
as long as needed, not entering any "inactive" state where the node pretends
to be alive but can't do anything.

This makes the client context (including everything belonging to it) an
atomically updated thing; regular calls fetch it when needed and can fail if
there is no current connection. Meanwhile the notification processing thread
can reconnect when needed and restore functionality.

Signed-off-by: Roman Khimov <[email protected]>
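
A minimal sketch of the design described above (types and names are illustrative, not the actual implementation): the connection is an atomically swapped pointer, regular calls just load it, and a background routine retries the dial forever and publishes the new connection once it succeeds.

```go
package main

import (
	"errors"
	"fmt"
	"sync/atomic"
	"time"
)

type connection struct{ endpoint string }

type client struct {
	conn atomic.Pointer[connection]
}

// reconnectLoop retries indefinitely: the client cannot work without
// blockchain data, so it never gives up and never enters an "inactive" state.
func (c *client) reconnectLoop(dial func() (*connection, error)) {
	for {
		conn, err := dial()
		if err != nil {
			time.Sleep(100 * time.Millisecond) // back off and try again
			continue
		}
		c.conn.Store(conn) // atomically publish the new connection
		return
	}
}

func main() {
	var c client
	attempts := 0
	c.reconnectLoop(func() (*connection, error) {
		attempts++
		if attempts < 3 {
			return nil, errors.New("dial failed")
		}
		return &connection{endpoint: "https://rpc1.example.com"}, nil
	})
	fmt.Println("connected after", attempts, "attempts to", c.conn.Load().endpoint)
}
```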
@cthulhu-rider cthulhu-rider merged commit ed2a5e0 into master Nov 2, 2024
19 of 20 checks passed
@cthulhu-rider cthulhu-rider deleted the make-morph-try-more branch November 2, 2024 12:35
roman-khimov added a commit that referenced this pull request Nov 7, 2024
internalErr has some buffer to accept multiple errors, but it only processes
one of them, so if we have some cascading failures with multiple components
reporting an error, some of them will end up panicking because of a closed
channel. It's not very critical since the node is going to die anyway, but
it's nothing good either.

This particular case shouldn't really happen after #2996, but leaving this
channel open is better than panic in case some other events lead to multiple
sends to it.

Signed-off-by: Roman Khimov <[email protected]>
carpawell added a commit that referenced this pull request Nov 7, 2024
internalErr has some buffer to accept multiple errors, but it only
processes one of them, so if we have some cascading failures with
multiple components reporting an error, some of them will end up
panicking because of a closed channel. It's not very critical since the
node is going to die anyway, but it's nothing good either.

This particular case shouldn't really happen after #2996, but leaving
this channel open is better than panic in case some other events lead to
multiple sends to it.
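
A small sketch of the failure mode described in these commit messages (standalone and illustrative): sending on a closed channel always panics, so keeping the buffered error channel open lets late reporters complete their sends harmlessly.

```go
package main

import "fmt"

func main() {
	// Buffered channel, as with internalErr: it can accept several errors,
	// but the consumer acts on the first one only.
	internalErr := make(chan error, 2)

	internalErr <- fmt.Errorf("component A failed")
	fmt.Println("shutting down after:", <-internalErr)

	// If the consumer closed the channel here, a second reporter would panic:
	// a send on a closed channel always panics. Leaving it open (and relying
	// on the buffer) keeps late reporters from crashing the process.
	internalErr <- fmt.Errorf("component B failed during shutdown") // safe: channel still open
	fmt.Println("late error buffered, not panicking")
}
```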