[4/4] - multi: integrate new rbf coop close FSM into the existing peer flow #8453
base: master
Conversation
Force-pushed 8692815 to f189dda
Force-pushed f189dda to ae5fd0d
Repurposing this PR to hold the commits that integrate the new state machine into the daemon. A new commit set is coming shortly. Finalizing the itests, then I'll remove this from draft.
Force-pushed ae5fd0d to 9d76f2f
Force-pushed cbf3350 to fd59d13
Force-pushed 9d76f2f to b941ef2
Pushed a series of new commits that include an e2e itest for the new RBF flow. Both sides can increase the fee rate for their version until one of them finally confirms.
Ok, so this does work. But I'm afraid I'm rather not fond of the dynamic nature of the message router, and of a lot of the switch code that handles multiplexing the different coop close protocols.
Right now it seems like we have made the peer responsible for managing the channel closure and I'm really not sure that's the right call. We have now introduced a new thread of control with respect to channel id message serialization and I think that can be problematic.
Protofsms always launch new threads afaict, and so now we have the main peer thread, the link thread, and the pfsm ccv2 thread all competing with one another for message ordering.
It occurs to me that the main weakness of protofsm is this requirement of always having a new thread to launch it. I find myself wanting a means of defining state machines that is composable such that the composition still shares the same control thread, lest we create more and more opportunities for concurrency issues.
Overall, I can't find any issues with the actual implementation of the CCV2 protocol here. The tests look good. Some of the edges need to be sanded down. I also can't fully endorse the protofsm approach more broadly based on what I see here.
We don't return an error on broadcast fail as the broadcast might have failed due to insufficient fees, or inability to be replaced, which may happen when one side attempts to unnecessarily bump their coop close fee.
This'll be useful to communicate what the new fee rate is to an RPC caller.
If we go to close while the channel is already flushed, we might get an extra event, so we can safely ignore it and do a self state transition.
This fixes some existing race conditions, as the `finalizeChanClosure` function was being called from outside the main event loop.
If we hit an error, we want to wipe the state machine state, which also includes removing the old endpoint.
This'll allow us to notify the caller each time a new coop close transaction with a higher fee rate is signed.
Resp is always nil, so we actually need to log event.Update here.
In this commit, we extend `CloseChannelAssertPending` with new args that return the raw close status update (as we have more things we'd like to assert), and also allow us to pass in a custom fee rate.
Both these messages now carry the address of both parties, so you can update an address without needing to send shutdown again.
In this commit, we implement the latest version of the RBF loop as described in the spec. We remove the self loop back based on sending or receiving shutdown. Instead, from the ClosePending state, we can trigger a new loop by sending SendOfferEvent (we bump), or OfferReceivedEvent (they bump). We also update the rbf state machine with the new close addr logic. This logic ensures that the remote party always sends our current address, and that if they send a new address, we'll update our view of it and counter-sign the correct transaction. We also add a CloseErr state. With this new state, we can ensure that we're able to properly report errors back to the RPC client, and also optionally force a reconnection or send a warning to the remote party.
In this commit, we implement a special case for OP_RETURN scripts outlined in the spec. If a party decides that its output will be too small even after the dust check, then they can opt to set it to zero by sending an `OP_RETURN` as their script.
We'll properly handle a protocol error due to user input by halting, and sending the error back to the user. When a user goes to issue a new update, based on which state we're in, we'll either kick off the shutdown, or attempt a new offer. This matches the new spec update where we'll only send `Shutdown` once per connection.
In this commit, we alter the existing co-op close flow to enable RBF bumps after reconnection. With the new RBF close flow, it's possible that after a successful round _and_ a reconnection, either side wants to do another fee bump. Typically we route these requests through the switch, but in this case the link no longer exists in the switch, so any request to fee bump again would find that the link doesn't exist. In this commit, we implement a workaround wherein if we have an RBF chan closer active and the link isn't in the switch, then we just route the request directly to the chan closer via the peer. Once we have the chan closer, we can use the exact same flow as prior.
The itest has both sides try to close multiple times, each time with increasing fee rates. We also test the reconnection case, bad RBF updates, and instances where the local party can't actually pay for fees.
With this commit, we make sure we set the right height hint, even if the channel is a zero conf channel.
In this commit, we update `chooseDeliveryScript` to generate a new script if needed. This allows us to fold in a few other lines that always followed this function into this expanded function. The tests have been updated accordingly.
Force-pushed d6e5be3 to c0dde0e
In this commit, we update the RBF state machine to handle early offer cases. This can happen if, after we send out shutdown (to kick things off), the remote party sends their offer early. This can also happen if their outgoing shutdown (to ACK ours) was delayed for whatever reason, and we get their offer first. The alternative was to modify the state machine itself, but we feel that handling this early case is better in line with the robustness principle.
This PR integrates the new RBF coop close FSM into the existing control flow in the peer struct. With the way the new state machine works in concert with the msg router, we actually need to create and register the new state machine for eligible channels as soon as the peer connection is established (`loadActiveChannels`). This is required since these messages won't be part of the existing static switch in the `readHandler`, so if the `MsgEndpoint` isn't registered from the very start, we'll fail to handle the messages (or they'll erroneously try to create the existing negotiation state machine). This PR can be divided into roughly 3 parts:
One point of discussion is that as is, in the main database, we'll only store the last coop close transaction we signed. Once confirmed, the wallet will know of the canonical version, but do we also want to store the complete series in the database as well? I think no, but thought it was worth explicitly calling out.
RPC-wise, as long as the initial gRPC client that requested the coop close is still active, we'll now send a new update event for each new RBF transaction signed. As is, we also send an update for both the local and remote coop close transactions (each side can now have an entirely distinct close txn). In contrast, the existing coop close flow only ever sends a single update once the coop close is published, then another one after final confirmation.
TODO:
- Add opt-out CLI args
- Add itests