Fix several sync issues #583
Conversation
Converting to draft as this is still WIP
- Unified all E2E tests under a single workflow using a matrix
- Added support for running 2 {bridge service, zkevm node, prover} on the Makefile and docker-compose
- Consolidated all the DBs under a single docker instance
- Updated the L1 docker image to one with 2 rollups already attached, one of them using a custom native gas token
- Updated zkevm-node to a more recent version on docker-compose
- Re-structured the test config files under `test/config` to support having 2 services
- `etherman.NewL1Client` receives a URL instead of the whole config struct, as the URL was the only param being used (see the sketch below)
- Changed the primary key of `sync.claim` from `(network_id, index)` to `(index, rollup_index, mainnet_flag)`, as the old key was causing conflicts in the tests. Those conflicts could be avoided by ignoring claims that happen on L1 from CDKs we don't sync, as long as each bridge service only syncs one CDK; eventually, though, we want the service to support syncing many CDKs
- Added helper functions on etherman to support CDKs with a custom native gas token
- Added a new mock smart contract to support how the custom native gas token used on rollup 2 is minted
- Many improvements to `operations/manager.go`, mostly related to using 2 networks and a custom native gas token, plus some general cleanup... still lots of work to do here

Left a bunch of TODOs all over the place. Would be nice to transform them into issues before this hits develop.
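For the `etherman.NewL1Client` change listed above, a minimal sketch of what the simplified constructor could look like, assuming it only needs the L1 node URL (the return type here is an assumption for illustration, not the actual bridge-service code):

```go
package etherman

import "github.com/ethereum/go-ethereum/ethclient"

// NewL1Client takes just the L1 node URL instead of the whole config struct,
// since the URL was the only field actually used. Returning go-ethereum's
// ethclient.Client is an assumption made for this sketch.
func NewL1Client(url string) (*ethclient.Client, error) {
	return ethclient.Dial(url)
}
```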
Added some comments to help the reviewing process
db/pgstorage/migrations/0009.sql
Outdated
ADD COLUMN IF NOT EXISTS origin_rollup_id BIGINT DEFAULT 0;

ALTER TABLE sync.claim DROP CONSTRAINT claim_pkey;
ALTER TABLE sync.claim ADD PRIMARY KEY (index, rollup_index, mainnet_flag);
The PK used to be `(index, network_id)`. This is wrong because `network_id` references the destination network, so there can be many claims with the same index (deposit count) and destination network. Example:
- Rollup 1 does the first deposit to L1
- Rollup 2 does the first deposit to L1

Once both deposits are claimed on L1, they will have the same index and network ID.

OTOH, the index is unique per origin network, since it's a counter (deposit count). And the way to identify the origin network is with the rollup index and mainnet flag, as explained in this comment:

// origin rollup ID is calculated as follows:
//   if mainnet_flag: 0
//   else: rollup_index + 1
// destination rollup ID == network_id: network that has received the claim, therefore the destination rollup ID of the claim
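A small, self-contained Go sketch of that reasoning (names are illustrative, not the actual bridge-service code): the old `(index, network_id)` key collides for those two first deposits, the new `(index, rollup_index, mainnet_flag)` key does not, and the origin rollup ID falls out of the mainnet flag and rollup index as quoted above.

```go
package main

import "fmt"

// originRollupID mirrors the derivation quoted in the comment above:
// 0 for mainnet, rollup_index + 1 otherwise.
func originRollupID(mainnetFlag bool, rollupIndex uint) uint {
	if mainnetFlag {
		return 0
	}
	return rollupIndex + 1
}

// oldClaimKey mirrors the previous primary key (index, network_id).
type oldClaimKey struct {
	index     uint
	networkID uint
}

// newClaimKey mirrors the new primary key (index, rollup_index, mainnet_flag).
type newClaimKey struct {
	index       uint
	rollupIndex uint
	mainnetFlag bool
}

func main() {
	// Rollup 1 and rollup 2 each make their first deposit (index 0) and both
	// claims land on L1 (network_id 0).
	oldKeys := map[oldClaimKey]string{}
	oldKeys[oldClaimKey{index: 0, networkID: 0}] = "claim of rollup 1's deposit"
	oldKeys[oldClaimKey{index: 0, networkID: 0}] = "claim of rollup 2's deposit" // same key: collision

	newKeys := map[newClaimKey]string{}
	newKeys[newClaimKey{index: 0, rollupIndex: 0, mainnetFlag: false}] = "claim of rollup 1's deposit"
	newKeys[newClaimKey{index: 0, rollupIndex: 1, mainnetFlag: false}] = "claim of rollup 2's deposit"

	fmt.Println(len(oldKeys), len(newKeys)) // 1 2: the old key lost a claim, the new one did not

	fmt.Println(originRollupID(true, 0))  // 0: deposit originated on mainnet
	fmt.Println(originRollupID(false, 1)) // 2: deposit originated on the rollup at index 1
}
```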
db/pgstorage/pgstorage.go
Outdated
const addDepositSQL = "INSERT INTO sync.deposit (leaf_type, network_id, orig_net, orig_addr, amount, dest_net, dest_addr, block_id, deposit_cnt, tx_hash, metadata) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11) RETURNING id"
const addDepositSQL = `
	INSERT INTO sync.deposit (
		leaf_type, network_id, orig_net, orig_addr, amount, dest_net, dest_addr, block_id, deposit_cnt, tx_hash, metadata, origin_rollup_id
Added origin_rollup_id
@@ -198,13 +216,50 @@ func (p *PostgresStorage) AddTrustedGlobalExitRoot(ctx context.Context, trustedE
}

// GetClaim gets a specific claim from the storage.
func (p *PostgresStorage) GetClaim(ctx context.Context, depositCount, networkID uint, dbTx pgx.Tx) (*etherman.Claim, error) {
func (p *PostgresStorage) GetClaim(ctx context.Context, depositCount, originNetworkID, destNetworkID uint, dbTx pgx.Tx) (*etherman.Claim, error) {
This function now receives the origin and destination network, to make sure we retrieve the correct claim. Although now I realise that the origin network and index (deposit count) would be enough. The code is still correct, and it used to be incorrect as it was asking for {index, network_id}.
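A hedged, self-contained sketch of that point, with stand-in types instead of the real `PostgresStorage`/`etherman.Claim` (and the `pgx.Tx` parameter omitted for brevity): the store is keyed by origin network and deposit count alone, which is already enough to identify a claim uniquely, even though the new signature also passes the destination network.

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// claim stands in for etherman.Claim.
type claim struct {
	originNetworkID uint
	index           uint
}

// memStorage is keyed by {originNetworkID, depositCount}: per the comment
// above, that pair is already unique, so destNetworkID is not needed for
// the lookup itself.
type memStorage map[[2]uint]*claim

// GetClaim mirrors the parameter order of the new signature shown in the
// diff above (minus the pgx transaction).
func (m memStorage) GetClaim(_ context.Context, depositCount, originNetworkID, _ uint) (*claim, error) {
	if c, ok := m[[2]uint{originNetworkID, depositCount}]; ok {
		return c, nil
	}
	return nil, errors.New("claim not found")
}

func main() {
	s := memStorage{
		{1, 0}: {originNetworkID: 1, index: 0}, // first claim originating on network 1
		{2, 0}: {originNetworkID: 2, index: 0}, // first claim originating on network 2
	}
	c, err := s.GetClaim(context.Background(), 0, 2, 0)
	fmt.Println(c.originNetworkID, c.index, err) // 2 0 <nil>
}
```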
@@ -218,8 +273,28 @@ func (p *PostgresStorage) GetDeposit(ctx context.Context, depositCounterUser uin
	deposit etherman.Deposit
	amount  string
)
const getDepositSQL = "SELECT leaf_type, orig_net, orig_addr, amount, dest_net, dest_addr, deposit_cnt, block_id, b.block_num, d.network_id, tx_hash, metadata, ready_for_claim FROM sync.deposit as d INNER JOIN sync.block as b ON d.network_id = b.network_id AND d.block_id = b.id WHERE d.network_id = $1 AND deposit_cnt = $2"
err := p.getExecQuerier(dbTx).QueryRow(ctx, getDepositSQL, networkID, depositCounterUser).Scan(&deposit.LeafType, &deposit.OriginalNetwork, &deposit.OriginalAddress, &amount, &deposit.DestinationNetwork, &deposit.DestinationAddress, &deposit.DepositCount, &deposit.BlockID, &deposit.BlockNumber, &deposit.NetworkID, &deposit.TxHash, &deposit.Metadata, &deposit.ReadyForClaim)
const getDepositSQL = `
Modified to also select origin_rollup_id
@@ -516,7 +591,11 @@ func (p *PostgresStorage) GetClaims(ctx context.Context, destAddr string, limit

// GetDeposits gets the deposit list which be smaller than depositCount.
func (p *PostgresStorage) GetDeposits(ctx context.Context, destAddr string, limit uint, offset uint, dbTx pgx.Tx) ([]*etherman.Deposit, error) {
const getDepositsSQL = "SELECT leaf_type, orig_net, orig_addr, amount, dest_net, dest_addr, deposit_cnt, block_id, b.block_num, d.network_id, tx_hash, metadata, ready_for_claim FROM sync.deposit as d INNER JOIN sync.block as b ON d.network_id = b.network_id AND d.block_id = b.id WHERE dest_addr = $1 ORDER BY d.block_id DESC, d.deposit_cnt DESC LIMIT $2 OFFSET $3"
const getDepositsSQL = `
Modified to also select origin_rollup_id
etherman/etherman.go
Outdated
@@ -434,6 +441,7 @@ func (etherMan *Client) depositEvent(ctx context.Context, vLog types.Log, blocks
	deposit.TxHash = vLog.TxHash
	deposit.Metadata = d.Metadata
	deposit.LeafType = d.LeafType
	deposit.OriginRollupID = etherMan.GetRollupID()
The only reliable way to know where a deposit comes from is contextual: since we are doing queries to an RPC, we know that a deposit event comes from that RPC, and we know the rollup ID associated to that RPC.
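A minimal sketch of that contextual attribution, with illustrative stand-ins for the etherman client and deposit struct (the URL and field names are assumptions, not the real bridge-service types):

```go
package main

import "fmt"

// deposit stands in for the decoded bridge event; only the field relevant
// here is shown.
type deposit struct {
	originRollupID uint
}

// ethermanClient stands in for the real etherman client: it is bound to a
// single RPC endpoint, and the rollup ID behind that endpoint is known from
// configuration.
type ethermanClient struct {
	rpcURL   string
	rollupID uint
}

func (c *ethermanClient) GetRollupID() uint { return c.rollupID }

func (c *ethermanClient) decodeDepositEvent() deposit {
	var d deposit
	// The on-chain event itself does not say which network emitted it; the
	// only reliable source is the context, i.e. which RPC we queried.
	d.originRollupID = c.GetRollupID()
	return d
}

func main() {
	rollup2 := &ethermanClient{rpcURL: "http://localhost:8124", rollupID: 2} // URL is illustrative
	fmt.Println(rollup2.decodeDepositEvent().originRollupID)                 // 2
}
```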
@@ -527,46 +527,12 @@ func (s *ClientSynchronizer) checkReorg(latestBlock *etherman.Block) (*etherman.
}

func (s *ClientSynchronizer) processVerifyBatch(verifyBatch etherman.VerifiedBatch, blockID uint64, dbTx pgx.Tx) error {
	if verifyBatch.RollupID == s.etherMan.GetRollupID()-1 {
Please see this issue #586
db/pgstorage/migrations/0009.sql
Outdated
-- +migrate Down

ALTER TABLE sync.claim DROP CONSTRAINT claim_pkey;
Should this be ADD PRIMARY KEY?