Fix several sync issues #583

Closed · wants to merge 37 commits

Commits (37)
a79d440 Remove unused check (vcastellm, Feb 16, 2024)
a48ab4f Fix the claim transaction (vcastellm, Feb 16, 2024)
edbac37 Fix wrong relation of claim_tx_hash (arnaubennassar, Feb 19, 2024)
355b9ab WIP (arnaubennassar, Feb 19, 2024)
bcf26f3 looks to be working (arnaubennassar, Feb 19, 2024)
f2c1706 Merge pull request #585 from 0xPolygonHermez/abf/fix (vcastellm, Feb 20, 2024)
a3c1bdb Remove commented Local Exit root existance check (vcastellm, Feb 20, 2024)
118699a Restore previous (vcastellm, Feb 20, 2024)
7d63f89 typo (vcastellm, Feb 20, 2024)
bcba97f Rename (vcastellm, Feb 20, 2024)
f04e8a6 Merge remote-tracking branch 'origin' into vcastellm/fix-sync (vcastellm, Feb 20, 2024)
fc9ff4b Refactor logic (vcastellm, Feb 20, 2024)
ff6003f Refactor claim.RollupIndex (vcastellm, Feb 20, 2024)
a7a1162 Fix test (vcastellm, Feb 20, 2024)
9921711 Pass ref (vcastellm, Feb 20, 2024)
726fed7 Fix UTs (#587) (arnaubennassar, Feb 20, 2024)
87db383 reverse negated logic for rollup id (arnaubennassar, Feb 20, 2024)
fd3b3ed remove unused added field (arnaubennassar, Feb 20, 2024)
8426a6b Fix linter (arnaubennassar, Feb 20, 2024)
c7ba4ff Fix e2e tests (vcastellm, Feb 20, 2024)
c9660c3 Merge branch 'vcastellm/fix-sync' of github.com:0xPolygonHermez/zkevm… (vcastellm, Feb 20, 2024)
f989590 Remove test (vcastellm, Feb 20, 2024)
b6e663a Test/multiple rollups (#591) (arnaubennassar, Feb 29, 2024)
4163e21 Remove _XXX from Makefile (arnaubennassar, Feb 29, 2024)
ac15bcc Fix conflicts (arnaubennassar, Feb 29, 2024)
d60edb2 add migration test (arnaubennassar, Feb 29, 2024)
4eed41e add migration test (arnaubennassar, Feb 29, 2024)
8b1a5d7 Fix linter (arnaubennassar, Feb 29, 2024)
7aa650c Split migration file (arnaubennassar, Mar 1, 2024)
c00878c TODOs for Monday :) (arnaubennassar, Mar 1, 2024)
1481244 Fix getting claims from deposits (arnaubennassar, Mar 4, 2024)
762ea6f Fix UT (arnaubennassar, Mar 4, 2024)
7516f21 Fix e2e (arnaubennassar, Mar 4, 2024)
2e78d3c Fix lint (arnaubennassar, Mar 4, 2024)
e7656e4 network_id is bigint (arnaubennassar, Mar 6, 2024)
e7eba0e network_id is bigint (arnaubennassar, Mar 6, 2024)
21cf8cd network_id is bigint (arnaubennassar, Mar 6, 2024)
3 changes: 2 additions & 1 deletion .github/workflows/test-e2e.yml
@@ -13,6 +13,7 @@ jobs:
matrix:
go-version: [ 1.21.x ]
goarch: [ "amd64" ]
test: ["e2e", "edge", "multirollup"]
runs-on: ubuntu-latest
steps:
- name: Checkout code
@@ -24,4 +25,4 @@ jobs:
env:
GOARCH: ${{ matrix.goarch }}
- name: Test
run: make test-full
run: make test-${{ matrix.test }}
27 changes: 0 additions & 27 deletions .github/workflows/test-edge.yml

This file was deleted.

1 change: 1 addition & 0 deletions .gitignore
@@ -18,3 +18,4 @@

config/config.mainnet.toml
config/config.testnet.toml
**__debug**
116 changes: 56 additions & 60 deletions Makefile
@@ -1,45 +1,44 @@
include version.mk

DOCKER_COMPOSE := docker-compose -f docker-compose.yml
DOCKER_COMPOSE_STATE_DB := zkevm-state-db
DOCKER_COMPOSE_POOL_DB := zkevm-pool-db
DOCKER_COMPOSE_RPC_DB := zkevm-rpc-db
DOCKER_COMPOSE_BRIDGE_DB := zkevm-bridge-db
DOCKER_COMPOSE_ZKEVM_NODE := zkevm-node
DOCKER_COMPOSE_DB := zkevm-db
DOCKER_COMPOSE_ZKEVM_NODE-1 := zkevm-node-1
DOCKER_COMPOSE_ZKEVM_NODE-2 := zkevm-node-2
DOCKER_COMPOSE_ZKEVM_NODE_V1TOV2 := zkevm-node-v1tov2
DOCKER_COMPOSE_ZKEVM_AGGREGATOR_V1TOV2 := zkevm-aggregator-v1tov2
DOCKER_COMPOSE_L1_NETWORK := zkevm-mock-l1-network
DOCKER_COMPOSE_L1_NETWORK_V1TOV2 := zkevm-v1tov2-l1-network
DOCKER_COMPOSE_ZKPROVER := zkevm-prover
DOCKER_COMPOSE_ZKPROVER-1 := zkevm-prover-1
DOCKER_COMPOSE_ZKPROVER-2 := zkevm-prover-2
DOCKER_COMPOSE_ZKPROVER_V1TOV2 := zkevm-prover-v1tov2
DOCKER_COMPOSE_BRIDGE := zkevm-bridge-service
DOCKER_COMPOSE_BRIDGE-1 := zkevm-bridge-service-1
DOCKER_COMPOSE_BRIDGE-2 := zkevm-bridge-service-2
DOCKER_COMPOSE_BRIDGE_V1TOV2 := zkevm-bridge-service-v1tov2

RUN_STATE_DB := $(DOCKER_COMPOSE) up -d $(DOCKER_COMPOSE_STATE_DB)
RUN_POOL_DB := $(DOCKER_COMPOSE) up -d $(DOCKER_COMPOSE_POOL_DB)
RUN_BRIDGE_DB := $(DOCKER_COMPOSE) up -d $(DOCKER_COMPOSE_BRIDGE_DB)
RUN_DBS := ${RUN_BRIDGE_DB} && ${RUN_STATE_DB} && ${RUN_POOL_DB}
RUN_NODE := $(DOCKER_COMPOSE) up -d $(DOCKER_COMPOSE_ZKEVM_NODE)
RUN_DB := $(DOCKER_COMPOSE) up -d $(DOCKER_COMPOSE_DB)
RUN_NODE_1 := $(DOCKER_COMPOSE) up -d $(DOCKER_COMPOSE_ZKEVM_NODE-1)
RUN_NODE_2 := $(DOCKER_COMPOSE) up -d $(DOCKER_COMPOSE_ZKEVM_NODE-2)
RUN_NODE_V1TOV2 := $(DOCKER_COMPOSE) up -d $(DOCKER_COMPOSE_ZKEVM_NODE_V1TOV2)
RUN_AGGREGATOR_V1TOV2 := $(DOCKER_COMPOSE) up -d $(DOCKER_COMPOSE_ZKEVM_AGGREGATOR_V1TOV2)
RUN_L1_NETWORK := $(DOCKER_COMPOSE) up -d $(DOCKER_COMPOSE_L1_NETWORK)
RUN_L1_NETWORK_V1TOV2 := $(DOCKER_COMPOSE) up -d $(DOCKER_COMPOSE_L1_NETWORK_V1TOV2)
RUN_ZKPROVER := $(DOCKER_COMPOSE) up -d $(DOCKER_COMPOSE_ZKPROVER)
RUN_ZKPROVER_1 := $(DOCKER_COMPOSE) up -d $(DOCKER_COMPOSE_ZKPROVER-1)
RUN_ZKPROVER_2 := $(DOCKER_COMPOSE) up -d $(DOCKER_COMPOSE_ZKPROVER-2)
RUN_ZKPROVER_V1TOV2 := $(DOCKER_COMPOSE) up -d $(DOCKER_COMPOSE_ZKPROVER_V1TOV2)
RUN_BRIDGE := $(DOCKER_COMPOSE) up -d $(DOCKER_COMPOSE_BRIDGE)
RUN_BRIDGE_1 := $(DOCKER_COMPOSE) up -d $(DOCKER_COMPOSE_BRIDGE-1)
RUN_BRIDGE_2 := $(DOCKER_COMPOSE) up -d $(DOCKER_COMPOSE_BRIDGE-2)
RUN_BRIDGE_V1TOV2 := $(DOCKER_COMPOSE) up -d $(DOCKER_COMPOSE_BRIDGE_V1TOV2)

STOP_NODE_DB := $(DOCKER_COMPOSE) stop $(DOCKER_COMPOSE_NODE_DB) && $(DOCKER_COMPOSE) rm -f $(DOCKER_COMPOSE_NODE_DB)
STOP_BRIDGE_DB := $(DOCKER_COMPOSE) stop $(DOCKER_COMPOSE_BRIDGE_DB) && $(DOCKER_COMPOSE) rm -f $(DOCKER_COMPOSE_BRIDGE_DB)
STOP_DBS := ${STOP_NODE_DB} && ${STOP_BRIDGE_DB}
STOP_NODE := $(DOCKER_COMPOSE) stop $(DOCKER_COMPOSE_ZKEVM_NODE) && $(DOCKER_COMPOSE) rm -f $(DOCKER_COMPOSE_ZKEVM_NODE)
STOP_DB := $(DOCKER_COMPOSE) stop $(DOCKER_COMPOSE_DB) && $(DOCKER_COMPOSE) rm -f $(DOCKER_COMPOSE_DB)
STOP_NODE := $(DOCKER_COMPOSE) stop $(DOCKER_COMPOSE_ZKEVM_NODE-1) && $(DOCKER_COMPOSE) rm -f $(DOCKER_COMPOSE_ZKEVM_NODE-1)
STOP_NODE_V1TOV2 := $(DOCKER_COMPOSE) stop $(DOCKER_COMPOSE_ZKEVM_NODE_V1TOV2) && $(DOCKER_COMPOSE) rm -f $(DOCKER_COMPOSE_ZKEVM_NODE_V1TOV2)
STOP_AGGREGATOR_V1TOV2 := $(DOCKER_COMPOSE) stop $(DOCKER_COMPOSE_ZKEVM_AGGREGATOR_V1TOV2) && $(DOCKER_COMPOSE) rm -f $(DOCKER_COMPOSE_ZKEVM_AGGREGATOR_V1TOV2)
STOP_NETWORK := $(DOCKER_COMPOSE) stop $(DOCKER_COMPOSE_L1_NETWORK) && $(DOCKER_COMPOSE) rm -f $(DOCKER_COMPOSE_L1_NETWORK)
STOP_NETWORK_V1TOV2 := $(DOCKER_COMPOSE) stop $(DOCKER_COMPOSE_L1_NETWORK_V1TOV2) && $(DOCKER_COMPOSE) rm -f $(DOCKER_COMPOSE_L1_NETWORK_V1TOV2)
STOP_ZKPROVER := $(DOCKER_COMPOSE) stop $(DOCKER_COMPOSE_ZKPROVER) && $(DOCKER_COMPOSE) rm -f $(DOCKER_COMPOSE_ZKPROVER)
STOP_ZKPROVER_1 := $(DOCKER_COMPOSE) stop $(DOCKER_COMPOSE_ZKPROVER-1) && $(DOCKER_COMPOSE) rm -f $(DOCKER_COMPOSE_ZKPROVER-1)
STOP_ZKPROVER_2 := $(DOCKER_COMPOSE) stop $(DOCKER_COMPOSE_ZKPROVER-2) && $(DOCKER_COMPOSE) rm -f $(DOCKER_COMPOSE_ZKPROVER-2)
STOP_ZKPROVER_V1TOV2 := $(DOCKER_COMPOSE) stop $(DOCKER_COMPOSE_ZKPROVER_V1TOV2) && $(DOCKER_COMPOSE) rm -f $(DOCKER_COMPOSE_ZKPROVER_V1TOV2)
STOP_BRIDGE := $(DOCKER_COMPOSE) stop $(DOCKER_COMPOSE_BRIDGE) && $(DOCKER_COMPOSE) rm -f $(DOCKER_COMPOSE_BRIDGE)
STOP_BRIDGE := $(DOCKER_COMPOSE) stop $(DOCKER_COMPOSE_BRIDGE-1) && $(DOCKER_COMPOSE) rm -f $(DOCKER_COMPOSE_BRIDGE-1)
STOP_BRIDGE_V1TOV2 := $(DOCKER_COMPOSE) stop $(DOCKER_COMPOSE_BRIDGE_V1TOV2) && $(DOCKER_COMPOSE) rm -f $(DOCKER_COMPOSE_BRIDGE_V1TOV2)
STOP := $(DOCKER_COMPOSE) down --remove-orphans

@@ -71,9 +70,9 @@ install-git-hooks: ## Moves hook files to the .git/hooks directory

.PHONY: test
test: ## Runs only short tests without checking race conditions
$(STOP_BRIDGE_DB) || true
$(RUN_BRIDGE_DB); sleep 3
trap '$(STOP_BRIDGE_DB)' EXIT; go test --cover -short -p 1 ./...
$(STOP_DB) || true
$(RUN_DB); sleep 3
trap '$(STOP_DB)' EXIT; go test --cover -short -p 1 ./...

.PHONY: install-linter
install-linter: ## Installs the linter
@@ -83,33 +82,17 @@ install-linter: ## Installs the linter
build-docker: ## Builds a docker image with the zkevm bridge binary
docker build -t zkevm-bridge-service -f ./Dockerfile .

.PHONY: run-db-node
run-db-node: ## Runs the node database
$(RUN_NODE_DB)
.PHONY: run-db
run-db: ## Runs the node database
$(RUN_DB)

.PHONY: stop-db-node
stop-db-node: ## Stops the node database
$(STOP_NODE_DB)

.PHONY: run-db-bridge
run-db-bridge: ## Runs the node database
$(RUN_BRIDGE_DB)

.PHONY: stop-db-bridge
stop-db-bridge: ## Stops the node database
$(STOP_BRIDGE_DB)

.PHONY: run-dbs
run-dbs: ## Runs the node database
$(RUN_DBS)

.PHONY: stop-dbs
stop-dbs: ## Stops the node database
$(STOP_DBS)
.PHONY: stop-db
stop-db: ## Stops the node database
$(STOP_DB)

.PHONY: run-node
run-node: ## Runs the node
$(RUN_NODE)
$(RUN_NODE_1)

.PHONY: stop-node
stop-node: ## Stops the node
@@ -149,11 +132,11 @@ stop-network-v1tov2: ## Stops the l1 network

.PHONY: run-prover
run-prover: ## Runs the zk prover
$(RUN_ZKPROVER)
$(RUN_ZKPROVER_1)

.PHONY: stop-prover
stop-prover: ## Stops the zk prover
$(STOP_ZKPROVER)
$(STOP_ZKPROVER_1)

.PHONY: run-prover-v1tov2
run-prover-v1tov2: ## Runs the zk prover
@@ -165,7 +148,7 @@ stop-prover-v1tov2: ## Stops the zk prover

.PHONY: run-bridge
run-bridge: ## Runs the bridge service
$(RUN_BRIDGE)
$(RUN_BRIDGE_1)

.PHONY: stop-bridge
stop-bridge: ## Stops the bridge service
@@ -188,18 +171,26 @@ restart: stop run ## Executes `make stop` and `make run` commands

.PHONY: run
run: stop ## runs all services
$(RUN_DBS)
$(RUN_DB)
$(RUN_L1_NETWORK)
sleep 5
$(RUN_ZKPROVER)
$(RUN_ZKPROVER_1)
sleep 3
$(RUN_NODE)
sleep 7
$(RUN_BRIDGE)
$(RUN_NODE_1)
sleep 25
$(RUN_BRIDGE_1)

.PHONY: run-2Rollups
run-2Rollups: run
$(RUN_ZKPROVER_2)
sleep 3
$(RUN_NODE_2)
sleep 25
$(RUN_BRIDGE_2)

.PHONY: run-v1tov2
run-v1tov2: stop ## runs all services
$(RUN_DBS)
$(RUN_DB)
$(RUN_L1_NETWORK_V1TOV2)
sleep 5
$(RUN_ZKPROVER_V1TOV2)
@@ -224,9 +215,9 @@ stop-mockserver: ## Stops the mock bridge service

.PHONY: bench
bench: ## benchmark test
$(STOP_BRIDGE_DB) || true
$(RUN_BRIDGE_DB); sleep 3
trap '$(STOP_BRIDGE_DB)' EXIT; go test -run=NOTEST -timeout=30m -bench=Small ./test/benchmark/...
$(STOP_DB) || true
$(RUN_DB); sleep 3
trap '$(STOP_DB)' EXIT; go test -run=NOTEST -timeout=30m -bench=Small ./test/benchmark/...

.PHONY: bench-full
bench-full: export ZKEVM_BRIDGE_DATABASE_PORT = 5432
@@ -236,8 +227,8 @@ bench-full: ## benchmark full test
go test -run=NOTEST -bench=Medium . && \
go test -run=NOTEST -timeout=30m -bench=Large .

.PHONY: test-full
test-full: build-docker stop run ## Runs all tests checking race conditions
.PHONY: test-e2e
test-e2e: build-docker stop run ## Runs all tests checking race conditions
sleep 3
trap '$(STOP)' EXIT; MallocNanoZone=0 go test -v -failfast -race -p 1 -timeout 2400s ./test/e2e/... -count 1 -tags='e2e'

@@ -246,6 +237,11 @@ test-edge: build-docker stop run ## Runs all tests checking race conditions
sleep 3
trap '$(STOP)' EXIT; MallocNanoZone=0 go test -v -failfast -race -p 1 -timeout 2400s ./test/e2e/... -count 1 -tags='edge'

.PHONY: test-multirollup
test-multirollup: build-docker stop run-2Rollups ## Runs all tests checking race conditions
sleep 3
trap '$(STOP)' EXIT; MallocNanoZone=0 go test -v -failfast -race -p 1 -timeout 2400s ./test/e2e/... -count 1 -tags='multirollup'

.PHONY: validate
validate: lint build test-full ## Validates the whole integrity of the code base

19 changes: 13 additions & 6 deletions claimtxman/claimtxman.go
@@ -85,6 +85,7 @@ func (tm *ClaimTxManager) Start() {
for {
select {
case <-tm.ctx.Done():
ticker.Stop()
return
case netID := <-tm.chSynced:
if netID == tm.l2NetworkID && !tm.synced {
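The added ticker.Stop() releases the ticker's resources when the manager's context is cancelled. A minimal standalone sketch of the same pattern (names are hypothetical, not part of this PR), using defer so the cleanup covers every exit path:

package main

import (
	"context"
	"time"
)

// runPeriodic calls work at the given interval until ctx is cancelled,
// stopping the ticker on the way out, as the change above does.
func runPeriodic(ctx context.Context, interval time.Duration, work func()) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			work()
		}
	}
}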
@@ -154,46 +155,52 @@ func (tm *ClaimTxManager) processDepositStatus(ger *etherman.GlobalExitRoot, dbT
return err
}
for _, deposit := range deposits {
if tm.l2NetworkID != deposit.DestinationNetwork {
log.Infof("Ignoring deposit: %d: dest_net: %d, we are:%d", deposit.DepositCount, deposit.DestinationNetwork, tm.l2NetworkID)
continue
}

claimHash, err := tm.bridgeService.GetDepositStatus(tm.ctx, deposit.DepositCount, deposit.DestinationNetwork)
claimHash, err := tm.bridgeService.GetDepositStatus(tm.ctx, deposit.DepositCount, deposit.OriginalNetwork, deposit.DestinationNetwork)
if err != nil {
log.Errorf("error getting deposit status for deposit %d. Error: %v", deposit.DepositCount, err)
return err
}

if len(claimHash) > 0 || deposit.LeafType == LeafTypeMessage && !tm.isDepositMessageAllowed(deposit) {
log.Infof("Ignoring deposit: %d, leafType: %d, claimHash: %s, deposit.OriginalAddress: %s", deposit.DepositCount, deposit.LeafType, claimHash, deposit.OriginalAddress.String())
continue
}

if tm.l2NetworkID != deposit.DestinationNetwork {
log.Debugf("Ignoring deposit: %d", deposit.DepositCount)
continue
}

Review comment (Member): we need to review this after all the changes to rollup/network ID

log.Infof("create the claim tx for the deposit %d", deposit.DepositCount)
ger, proof, rollupProof, err := tm.bridgeService.GetClaimProof(deposit.DepositCount, deposit.NetworkID, dbTx)
if err != nil {
log.Errorf("error getting Claim Proof for deposit %d. Error: %v", deposit.DepositCount, err)
return err
}

var (
mtProof [mtHeight][keyLen]byte
mtRollupProof [mtHeight][keyLen]byte
)

for i := 0; i < mtHeight; i++ {
mtProof[i] = proof[i]
mtRollupProof[i] = rollupProof[i]
}

tx, err := tm.l2Node.BuildSendClaim(tm.ctx, deposit, mtProof, mtRollupProof,
&etherman.GlobalExitRoot{
ExitRoots: []common.Hash{
ger.ExitRoots[0],
ger.ExitRoots[1],
}}, 1, 1, 1, tm.rollupID,
tm.auth)

if err != nil {
log.Errorf("error BuildSendClaim tx for deposit %d. Error: %v", deposit.DepositCount, err)
return err
}

if err = tm.addClaimTx(deposit.DepositCount, tm.auth.From, tx.To(), nil, tx.Data(), dbTx); err != nil {
log.Errorf("error adding claim tx for deposit %d. Error: %v", deposit.DepositCount, err)
return err
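For context on the copy loop above: GetClaimProof returns the Merkle proofs as slices, while BuildSendClaim takes fixed-size [mtHeight][keyLen]byte arrays, hence the element-wise copy. A defensive variant of that conversion (a sketch only; mtHeight and keyLen are assumed to be 32, matching the constants used in claimtxman, and the helper name is hypothetical):

import "fmt"

const (
	mtHeight = 32 // assumed Merkle tree height
	keyLen   = 32 // assumed key length in bytes (bridgectrl.KeyLen)
)

// toFixedProof copies a slice-backed proof into the fixed-size array that
// BuildSendClaim expects, returning an error instead of panicking when the
// proof has fewer levels than the tree height.
func toFixedProof(proof [][keyLen]byte) ([mtHeight][keyLen]byte, error) {
	var out [mtHeight][keyLen]byte
	if len(proof) < mtHeight {
		return out, fmt.Errorf("proof has %d levels, expected %d", len(proof), mtHeight)
	}
	for i := 0; i < mtHeight; i++ {
		out[i] = proof[i]
	}
	return out, nil
}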
2 changes: 1 addition & 1 deletion claimtxman/interfaces.go
@@ -24,5 +24,5 @@ type storageInterface interface {

type bridgeServiceInterface interface {
GetClaimProof(depositCnt, networkID uint, dbTx pgx.Tx) (*etherman.GlobalExitRoot, [][bridgectrl.KeyLen]byte, [][bridgectrl.KeyLen]byte, error)
GetDepositStatus(ctx context.Context, depositCount uint, destNetworkID uint) (string, error)
GetDepositStatus(ctx context.Context, depositCount uint, originNetworkID, destNetworkID uint) (string, error)
}
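The extra parameter mirrors the rationale in the migration review comment below: a deposit count is only unique per origin network, so the destination network alone no longer identifies a deposit. A sketch of a caller against the new signature (illustrative only, assuming a deposit record with OriginalNetwork and DestinationNetwork fields as used in claimtxman above):

// Hypothetical call site for the updated interface. The origin network
// disambiguates deposits that share a deposit count and destination.
claimHash, err := bridgeService.GetDepositStatus(ctx,
	deposit.DepositCount,
	deposit.OriginalNetwork,    // newly required origin network ID
	deposit.DestinationNetwork, // destination network ID, as before
)
if err != nil {
	return fmt.Errorf("getting deposit status for deposit %d: %w", deposit.DepositCount, err)
}
// claimtxman treats a non-empty claimHash as already handled and skips
// creating a new claim tx for that deposit.
_ = claimHash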
16 changes: 12 additions & 4 deletions cmd/run.go
@@ -95,7 +95,9 @@ func start(ctx *cli.Context) error {
log.Error(err)
return err
}
rollupID := l1Etherman.GetRollupID()

// TODO: this works because the service only supports one L2 network
rollupID := l2Ethermans[0].GetRollupID()
bridgeService := server.NewBridgeService(c.BridgeServer, c.BridgeController.Height, networkIDs, apiStorage, rollupID)
err = server.RunServer(c.BridgeServer, bridgeService)
if err != nil {
@@ -153,17 +155,23 @@ func setupLog(c log.Config) {
}

func newEthermans(c *config.Config) (*etherman.Client, []*etherman.Client, error) {
l1Etherman, err := etherman.NewClient(c.Etherman, c.NetworkConfig.PolygonBridgeAddress, c.NetworkConfig.PolygonZkEVMGlobalExitRootAddress, c.NetworkConfig.PolygonRollupManagerAddress, c.NetworkConfig.PolygonZkEvmAddress)
l1Etherman, err := etherman.NewL1Client(c.Etherman.L1URL, c.NetworkConfig.PolygonBridgeAddress, c.NetworkConfig.PolygonZkEVMGlobalExitRootAddress, c.NetworkConfig.PolygonRollupManagerAddress, c.NetworkConfig.PolygonZkEvmAddress)
if err != nil {
log.Error("L1 etherman error: ", err)
return nil, nil, err
}
if len(c.L2PolygonBridgeAddresses) != len(c.Etherman.L2URLs) {
log.Fatal("environment configuration error. zkevm bridge addresses and zkevm node urls mismatch")
}
if len(c.L2PolygonBridgeAddresses) != 1 {
return nil, nil, fmt.Errorf(
"the bridge service only supports working with a single L2, but %d were provided",
len(c.L2PolygonBridgeAddresses),
)
}
var l2Ethermans []*etherman.Client
for i, addr := range c.L2PolygonBridgeAddresses {
l2Etherman, err := etherman.NewL2Client(c.Etherman.L2URLs[i], addr)
for i, bridgeAddr := range c.L2PolygonBridgeAddresses {
l2Etherman, err := etherman.NewL2Client(c.Etherman.L2URLs[i], bridgeAddr)
if err != nil {
log.Error("L2 etherman ", i, c.Etherman.L2URLs[i], ", error: ", err)
return l1Etherman, nil, err
15 changes: 15 additions & 0 deletions db/pgstorage/migrations/0009.sql
@@ -0,0 +1,15 @@
-- +migrate Up

ALTER TABLE sync.deposit
ADD COLUMN IF NOT EXISTS origin_rollup_id BIGINT DEFAULT 0;

ALTER TABLE sync.claim DROP CONSTRAINT claim_pkey;
ALTER TABLE sync.claim ADD PRIMARY KEY (index, rollup_index, mainnet_flag);
Review comment (Member): The PK used to be (index, network_id). This is wrong because network_id references the destination network, so there can be many claims with the same index (deposit count) and destination network. Example:

  • Rollup 1 does the first deposit to L1
  • Rollup 2 does the first deposit to L1

Once both deposits are claimed on L1, they will have the same index and network ID.

On the other hand, index is unique per origin network: it is a counter (deposit count). The way to index the origin network is with the rollup index and the mainnet flag, as explained in this code comment:

// origin rollup ID is calculated as follows:
// // if mainnet_flag: 0
// // else: rollup_index + 1
// destination rollup ID == network_id: network that has received the claim, therefore, the destination rollupID of the claim
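A minimal sketch of that mapping as a Go helper (hypothetical, for illustration only; not part of the migration or the PR):

// originRollupID implements the rule quoted above: claims originating on
// mainnet map to 0, claims originating on a rollup map to rollup_index + 1.
func originRollupID(mainnetFlag bool, rollupIndex uint) uint {
	if mainnetFlag {
		return 0
	}
	return rollupIndex + 1
}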


-- +migrate Down

ALTER TABLE sync.claim DROP CONSTRAINT claim_pkey;
Review comment (Author): Should this be ADD PRIMARY KEY?

ALTER TABLE sync.claim ADD PRIMARY KEY (network_id, index);

ALTER TABLE sync.deposit
DROP COLUMN IF EXISTS origin_rollup_id;