
Fix several sync issues #583

Closed · wants to merge 37 commits

Conversation

@vcastellm commented Feb 16, 2024

What does this PR do?

  • Removed code that performed a check that was never exercised in real scenarios and was causing an error.
  • Claim transactions are now filtered by the destination network.
  • Unified all E2E tests under a single workflow using a matrix.
  • Added support for running 2 {bridge service, zkevm node, prover} in the Makefile and docker-compose.
  • Consolidated all the DBs under a single docker instance.
  • Updated the L1 docker image to one with 2 rollups already attached, one of them with a custom native gas token.
  • Updated zkevm-node to a more recent version in docker-compose.
  • Restructured the test config files under test/config to support having 2 services.
  • etherman.NewL1Client now receives a URL instead of the whole config struct, as this was the only param that was used (see the sketch after this list).
  • Changed the primary key of sync.claim from (network_id, index) to (index, rollup_index, mainnet_flag), as the old key was causing conflicts in the tests. Those conflicts could be avoided by ignoring the claims that happen on L1 from CDKs we don't sync, as long as each bridge service only syncs one CDK. Obviously we want the service to (eventually) support syncing many CDKs.
  • Added helper functions on etherman to support CDKs with a custom native gas token.
  • Added a new mock smart contract to support how the custom native gas token used on rollup 2 is minted.
  • Many improvements to operations/manager.go, mostly related to using 2 networks and a custom native gas token. But also cleaned up a bunch of stuff... still lots of work to do here.

Left a bunch of TODOs all over the place. Would be nice to transform them into issues before this hits develop.
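
A minimal sketch of the NewL1Client change above, assuming the client simply dials the L1 RPC with go-ethereum's ethclient (the exact body is not shown in this thread):

// NewL1Client connects to the L1 node. It now takes only the RPC URL,
// the single field of the config struct that was actually used.
// Sketch under the stated assumption; not necessarily the PR's exact code.
func NewL1Client(url string) (*ethclient.Client, error) {
	return ethclient.Dial(url) // import "github.com/ethereum/go-ethereum/ethclient"
}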

@vcastellm marked this pull request as draft February 16, 2024 19:09
@vcastellm (Author):

Converting to draft as this is still WIP

@cla-bot added the cla-signed label Feb 29, 2024
@arnaubennassar (Member) left a comment:

Added some comments to help the reviewing process

ALTER TABLE sync.deposit ADD COLUMN IF NOT EXISTS origin_rollup_id BIGINT DEFAULT 0; -- table name inferred from the INSERT below

ALTER TABLE sync.claim DROP CONSTRAINT claim_pkey;
ALTER TABLE sync.claim ADD PRIMARY KEY (index, rollup_index, mainnet_flag);

The PK used to be (index, network_id). This is wrong because network_id references the destination network, so there can be many claims with the same index (deposit count) and destination network. Example:

  • Rollup 1 does the first deposit to L1
  • Rollup 2 does the first deposit to L1

Once both deposits are claimed on L1, they will have the same index and network ID.

OTOH, index is unique per origin network, as it's a counter (deposit count). And the way to identify the origin network is with the rollup index and mainnet flag, as explained in this comment:

// origin rollup ID is calculated as follows:
//   if mainnet_flag: 0
//   else: rollup_index + 1
// destination rollup ID == network_id: the network that has received the claim,
// therefore the destination rollup ID of the claim
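
A minimal Go sketch of that rule (hypothetical helper, not code from this PR):

// originRollupID derives a claim's origin rollup ID from the two new
// primary-key columns, following the rule in the comment above.
func originRollupID(mainnetFlag bool, rollupIndex uint) uint {
	if mainnetFlag {
		return 0 // the claimed deposit originated on L1
	}
	return rollupIndex + 1 // rollup IDs start at 1; rollup_index is 0-based
}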

const addDepositSQL = "INSERT INTO sync.deposit (leaf_type, network_id, orig_net, orig_addr, amount, dest_net, dest_addr, block_id, deposit_cnt, tx_hash, metadata) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11) RETURNING id"
const addDepositSQL = `
INSERT INTO sync.deposit (
	leaf_type, network_id, orig_net, orig_addr, amount, dest_net, dest_addr, block_id, deposit_cnt, tx_hash, metadata, origin_rollup_id
) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12) RETURNING id`

Added origin_rollup_id

@@ -198,13 +216,50 @@ func (p *PostgresStorage) AddTrustedGlobalExitRoot(ctx context.Context, trustedE
}

// GetClaim gets a specific claim from the storage.
func (p *PostgresStorage) GetClaim(ctx context.Context, depositCount, networkID uint, dbTx pgx.Tx) (*etherman.Claim, error) {
func (p *PostgresStorage) GetClaim(ctx context.Context, depositCount, originNetworkID, destNetworkID uint, dbTx pgx.Tx) (*etherman.Claim, error) {

This function now receives the origin and destination network, to make sure we retrieve the correct claim. Although now I realise that the origin network and index (deposit count) would be enough. The code is still correct; it used to be incorrect, as it was asking for {index, network_id}.
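
A hedged sketch of the lookup under the new key; the helper and the SQL are illustrative, not this PR's exact code:

// claimPK maps an origin network ID to the (rollup_index, mainnet_flag)
// key columns, the inverse of the origin rollup ID rule quoted above.
// Illustrative only.
func claimPK(originNetworkID uint) (rollupIndex uint, mainnetFlag bool) {
	if originNetworkID == 0 {
		return 0, true // origin is L1
	}
	return originNetworkID - 1, false // network ID N is rollup index N-1
}

// Illustrative query by the new primary key:
const getClaimByPKSQL = `
SELECT index, orig_net, orig_addr, amount, dest_addr, block_id, network_id, tx_hash
FROM sync.claim
WHERE index = $1 AND rollup_index = $2 AND mainnet_flag = $3`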

@@ -218,8 +273,28 @@ func (p *PostgresStorage) GetDeposit(ctx context.Context, depositCounterUser uin
deposit etherman.Deposit
amount string
)
const getDepositSQL = "SELECT leaf_type, orig_net, orig_addr, amount, dest_net, dest_addr, deposit_cnt, block_id, b.block_num, d.network_id, tx_hash, metadata, ready_for_claim FROM sync.deposit as d INNER JOIN sync.block as b ON d.network_id = b.network_id AND d.block_id = b.id WHERE d.network_id = $1 AND deposit_cnt = $2"
err := p.getExecQuerier(dbTx).QueryRow(ctx, getDepositSQL, networkID, depositCounterUser).Scan(&deposit.LeafType, &deposit.OriginalNetwork, &deposit.OriginalAddress, &amount, &deposit.DestinationNetwork, &deposit.DestinationAddress, &deposit.DepositCount, &deposit.BlockID, &deposit.BlockNumber, &deposit.NetworkID, &deposit.TxHash, &deposit.Metadata, &deposit.ReadyForClaim)
const getDepositSQL = `
SELECT leaf_type, orig_net, orig_addr, amount, dest_net, dest_addr, deposit_cnt, block_id, b.block_num, d.network_id, tx_hash, metadata, ready_for_claim, origin_rollup_id
FROM sync.deposit as d INNER JOIN sync.block as b ON d.network_id = b.network_id AND d.block_id = b.id
WHERE d.network_id = $1 AND deposit_cnt = $2`

Modified to also select origin_rollup_id

@@ -516,7 +591,11 @@ func (p *PostgresStorage) GetClaims(ctx context.Context, destAddr string, limit

// GetDeposits gets the deposit list which be smaller than depositCount.
func (p *PostgresStorage) GetDeposits(ctx context.Context, destAddr string, limit uint, offset uint, dbTx pgx.Tx) ([]*etherman.Deposit, error) {
const getDepositsSQL = "SELECT leaf_type, orig_net, orig_addr, amount, dest_net, dest_addr, deposit_cnt, block_id, b.block_num, d.network_id, tx_hash, metadata, ready_for_claim FROM sync.deposit as d INNER JOIN sync.block as b ON d.network_id = b.network_id AND d.block_id = b.id WHERE dest_addr = $1 ORDER BY d.block_id DESC, d.deposit_cnt DESC LIMIT $2 OFFSET $3"
const getDepositsSQL = `
SELECT leaf_type, orig_net, orig_addr, amount, dest_net, dest_addr, deposit_cnt, block_id, b.block_num, d.network_id, tx_hash, metadata, ready_for_claim, origin_rollup_id
FROM sync.deposit as d INNER JOIN sync.block as b ON d.network_id = b.network_id AND d.block_id = b.id
WHERE dest_addr = $1
ORDER BY d.block_id DESC, d.deposit_cnt DESC LIMIT $2 OFFSET $3`

Modified to also select origin_rollup_id

@@ -434,6 +441,7 @@ func (etherMan *Client) depositEvent(ctx context.Context, vLog types.Log, blocks
deposit.TxHash = vLog.TxHash
deposit.Metadata = d.Metadata
deposit.LeafType = d.LeafType
deposit.OriginRollupID = etherMan.GetRollupID()

The only reliable way to know where a deposit comes from is contextual: since we are querying an RPC, we know that a deposit event comes from that RPC, and we know the rollup ID associated with that RPC.
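
A minimal sketch of that idea, assuming each etherman client is bound to a single RPC endpoint (names are illustrative):

// Client is bound to one RPC endpoint, so every event it decodes is known
// to come from that network. Illustrative sketch, not the PR's exact code.
type Client struct {
	rollupID uint // rollup ID of the network behind this client's RPC
}

func (c *Client) GetRollupID() uint { return c.rollupID }

// While decoding a deposit event, the origin is stamped from the client:
//   deposit.OriginRollupID = c.GetRollupID()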

@@ -527,46 +527,12 @@ func (s *ClientSynchronizer) checkReorg(latestBlock *etherman.Block) (*etherman.
}

func (s *ClientSynchronizer) processVerifyBatch(verifyBatch etherman.VerifiedBatch, blockID uint64, dbTx pgx.Tx) error {
if verifyBatch.RollupID == s.etherMan.GetRollupID()-1 {

Please see issue #586


-- +migrate Down

ALTER TABLE sync.claim DROP CONSTRAINT claim_pkey;
@vcastellm (Author):

Should this be ADD PRIMARY KEY?
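
For reference, a hedged guess at a symmetric Down migration, i.e. restoring the previous key (an assumption, not this PR's actual migration file):

// Hypothetical symmetric Down migration; assumption only.
const migrationDownSQL = `
ALTER TABLE sync.claim DROP CONSTRAINT claim_pkey;
ALTER TABLE sync.claim ADD PRIMARY KEY (network_id, index);
ALTER TABLE sync.deposit DROP COLUMN IF EXISTS origin_rollup_id;`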

sonarqubecloud bot commented Mar 6, 2024

Quality Gate passed

Issues
14 New issues
2 Accepted issues

Measures
0 Security Hotspots
No data about Coverage
1.7% Duplication on New Code

See analysis details on SonarCloud

@ARR552 (Contributor) commented Sep 7, 2024

too old

@ARR552 closed this Sep 7, 2024