
x/gravity: refactor state #319

Closed
wants to merge 81 commits into from

Conversation

@fedekunze (Collaborator) commented Apr 7, 2021

TODO

  • Init and ExportGenesis
  • finish slashing logic
  • finish confirm logic
  • Unit tests (/types) @mvid
  • Unit tests (/keeper)
  • cleanup types/keys.go
  • gRPC queries @marbar3778
  • address TODOs and FIXMEs
  • write comments (godoc)
  • update event attributes
  • add logs on each operation
  • add metrics for msg server
  • add metrics for EndBlock
  • document all major changes in the refactor and define the required changes for updating the orchestrator and smart contract

@fedekunze changed the title from fedekunze/state to x/gravity: refactor state on Apr 7, 2021
@fedekunze changed the base branch from main to marko/api_any on Apr 7, 2021
lgtm-com bot commented Apr 7, 2021

This pull request introduces 1 alert when merging 380409d into da9da06 - view on LGTM.com

new alerts:

  • 1 for Useless assignment to local variable

lgtm-com bot commented Apr 8, 2021

This pull request introduces 2 alerts when merging 992a758 into da9da06 - view on LGTM.com

new alerts:

  • 2 for Useless assignment to local variable

lgtm-com bot commented Apr 8, 2021

This pull request introduces 2 alerts when merging df951d0 into da9da06 - view on LGTM.com

new alerts:

  • 2 for Useless assignment to local variable

```go
k.IterateOutgoingTXBatches(ctx, func(key []byte, iter_batch *types.OutgoingTxBatch) bool {

// Iterate through remaining batches with the same token contract
// TODO: still not getting why we need to cancel the other batch txs?
```
@jtremback (Contributor) commented May 4, 2021

It is impossible for an earlier batch to be executed because its nonce is too low. Cancelling earlier batches gives the transactions in them a chance to be included in new batches or to be cancelled.

A contributor commented:

This was also discussed here: https://blog.althea.net/how-gravity-works

After EthBlockDelay blocks on Ethereum, the validators submit the event to the Gravity module. When this event is fully observed (as detailed above), the batch is removed from the batch pool, since it is complete. Any batches of the same token with a lower batch nonce are also removed from the batch pool, but their transactions are also put back into the transaction pool. This is because we can be sure that it is impossible to submit them, since their batch nonces are too low. This mechanism allows transactions that were assembled into unprofitable batches to have a chance to make it into a profitable batch.
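
To make that concrete, here is a minimal sketch of the cancellation pass described above. It is not the PR's actual keeper code: `cancelStaleBatches`, `CancelOutgoingTXBatch`, `TokenContract`, and `BatchNonce` are assumed names for illustration, loosely modeled on the `IterateOutgoingTXBatches` iterator shown in the diff, with `Keeper` and `types` taken to be this module's own.

```go
// Hypothetical sketch, not the module's implementation. Once the batch
// with nonce `executedNonce` for `tokenContract` has been observed on
// Ethereum, every remaining batch for that contract with a lower nonce
// can never execute (its nonce is too low), so each one is cancelled and
// its transactions are returned to the pool for re-batching.
func cancelStaleBatches(ctx sdk.Context, k Keeper, tokenContract string, executedNonce uint64) {
	k.IterateOutgoingTXBatches(ctx, func(key []byte, batch *types.OutgoingTxBatch) bool {
		if batch.TokenContract == tokenContract && batch.BatchNonce < executedNonce {
			// Assumed helper: deletes the batch and releases its
			// transactions back into the unbatched tx pool.
			k.CancelOutgoingTXBatch(ctx, batch)
		}
		return false // false = keep iterating
	})
}
```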


```go
// lastBatch may be nil if there are no existing batches, we only need
```
A contributor commented:

Why remove the logic which ensures that only more profitable batches are created? Has it been moved somewhere else in the code? I've attached documentation of the feature that this PR appears to remove:

Only create more profitable batches

This is an important design criterion for permissionless batch creation.

Suppose we have an infinite stream of low fee spam transactions going
into the Gravity tx pool. Any given batch creation event will then lock
up high fee 'good transactions' with dozens of spam transactions.

In this case no relayer will ever choose to relay a batch, because all of
them will be unprofitable.

In order to ensure that profitable batches are eventually created, we
must avoid locking up the high fee 'good transactions' in obviously
bad batches. To add to the difficulty, we don't actually know what any
token in this process is worth or what ETH gas costs.

The solution this patch provides is to ensure that a new batch always
has higher fees than the last batch. This means that even with infinite
spam transactions, and infinite spamming of the permissionless
create-batch request, batch creation will halt long enough for enough
good transactions to build up in the pool and a new, more profitable
batch to be created.

Over a long enough timescale, either the old batches time out,
returning profitable transactions to the pool to create a new, even
better batch, or the profitability of the latest batch continues to
increase until one is relayed, also freeing the previous good
transactions to go into the next batch.

So given this condition, we can say that no matter the inputs, a
successful batch will eventually be created.

@jtremback (Contributor) commented May 4, 2021

I searched for all the places this function is called in the code, and it doesn't seem that the "only more profitable" logic was moved; it was simply removed. Is there an ADR for this logic change? How will this affect the performance and spam-resistance attributes discussed above?

A contributor commented:

This logic is also discussed here: https://blog.althea.net/how-gravity-works

As mentioned above, the Gravity module will only assemble a new batch if there are no batches already in the batch pool with a higher sum of fees. This is to prevent the following scenario: If there were many transactions in the transaction pool with an unprofitably low fee, and a few coming in every block with a high fee, each high fee transaction might end up in a batch with a bunch of unprofitable transactions. These batches would not be profitable to submit, and so the profitable transactions would end up in unprofitable batches, and not be submitted.

By making it so that every new batch must be more profitable than any other batch that is waiting to be submitted, it gives the few profitable transactions that come in every block the chance to build up and form a batch profitable enough to submit.
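
As a concrete illustration, here is a minimal sketch of that profitability rule, assuming hypothetical names (`batchIsMoreProfitable`, `Transactions`, `Erc20Fee`) around the module's `OutgoingTxBatch` and `OutgoingTransferTx` types; it is not the removed implementation itself.

```go
// Hypothetical sketch of the removed "only more profitable" check; the
// field names below are assumptions for illustration. A new batch for a
// token contract is only created when the sum of its fees strictly
// exceeds the fee sum of the newest existing batch for that contract.
func batchIsMoreProfitable(lastBatch *types.OutgoingTxBatch, newTxs []*types.OutgoingTransferTx) bool {
	// lastBatch may be nil if there are no existing batches for this
	// token contract; in that case any new batch may be created.
	if lastBatch == nil {
		return true
	}
	sum := func(txs []*types.OutgoingTransferTx) sdk.Int {
		total := sdk.ZeroInt()
		for _, tx := range txs {
			total = total.Add(tx.Erc20Fee.Amount)
		}
		return total
	}
	return sum(newTxs).GT(sum(lastBatch.Transactions))
}
```

Under this rule, the spam scenario from the quote resolves itself: a stream of low-fee transactions never beats the newest batch's fee sum, so batch creation pauses until enough high-fee transactions accumulate in the pool to form a strictly more profitable batch.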

```go
// upon receiving?
// FIXME: this will fail if the cosmos tokens are relayed for the first time and they don't have a counterparty contract
// Fede: Also there's the question of how do we handle IBC denominations from a security perspective. Do we assign them the same
// contract? My guess is that each new contract assigned to a cosmos coin should be approved by governance
```
A contributor commented:

This seems incredibly cumbersome.

@okwme closed this Aug 27, 2021