From c3d2c2689a7c3bf393424cc59017d0da10cfbc89 Mon Sep 17 00:00:00 2001
From: Manav Darji
Date: Mon, 20 Feb 2023 18:37:21 +0530
Subject: [PATCH 01/29] Merge qa to master (#750)

* Added checks to RPC requests and introduced new flags to customise the parameters (#657)

  * added a check to reject rpc requests with batch size > the one set using a newly added flag (rpcbatchlimit)
  * added a check to reject rpc requests whose result size > the one set using a newly added flag (rpcreturndatalimit)
  * updated the config files and docs

* chg : trieTimeout from 60 to 10 mins (#692)

  * chg : trieTimeout from 60 to 10 mins
  * chg : cache.timout to 10m from 1h in configs

* internal/cli/server : fix : added triesInMemory in config (#691)

* changed version from 0.3.0 to 0.3.4-beta (#693)

* fix nil state-sync issue, increase grpc limit (#695)

  * Increase grpc message size limit in pprof
  * consensus/bor/bor.go : stateSyncs init fixed [Fix #686]
  * eth/filters: handle nil state-sync before notify
  * eth/filters: update check

  Co-authored-by: Jerry
  Co-authored-by: Daniil

* core, tests/bor: add more tests for state-sync validation (#710)

  * core: add get state sync function for tests
  * tests/bor: add validation for state sync events post consensus

* Arpit/temp bor sync (#701)

  * Increase grpc message size limit in pprof
  * ReadBorReceipts improvements
  * use internal function
  * fix tests
  * fetch geth upstread for ReadBorReceiptRLP
  * Only query bor receipt when the query index is equal to # tx in block body

    This change reduces the frequency of calling ReadBorReceipt and ReadBorTransaction, which are CPU and db intensive.

  * Revert "fetch geth upstread for ReadBorReceiptRLP"

    This reverts commit 2e838a6b1313d26674f3a8df4b044e35dcbf35a0.

  * Restore ReadBorReceiptRLP
  * fix bor receipts
  * remove unused
  * fix lints

  ---------

  Co-authored-by: Jerry
  Co-authored-by: Manav Darji
  Co-authored-by: Evgeny Danienko <6655321@bk.ru>

* Revert "chg : trieTimeout from 60 to 10 mins (#692)" (#720)

  This reverts commit 241843c7e7bb18e64d2e157fd6fbbd665f6ce9d9.
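For reference, the two request limits introduced in #657 surface as plain TOML keys at the top level of config.toml (their defaults appear in the builder and packaging config changes further down in this patch) and as bor server flags of the same name. A minimal sketch, assuming the defaults shipped with this merge:

    # config.toml (top level)
    "rpc.batchlimit" = 100          # reject JSON-RPC batches with more than 100 messages (0 = no limit)
    "rpc.returndatalimit" = 100000  # cap the result size of an rpc request at 100000 bytes (0 = no limit)

Setting either value to 0 disables the corresponding check.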
* Arpit/add execution pool 2 (#719)

  * initial
  * linters
  * linters
  * remove timeout
  * update pool
  * change pool size function
  * check nil
  * check nil
  * fix tests
  * Use execution pool from server in all handlers
  * simplify things
  * test fix
  * add support for cli, config
  * add to cli and config
  * merge base branch
  * debug statements
  * fix bug
  * atomic pointer timeout
  * add apis
  * update workerpool
  * fix issues
  * change params
  * fix issues
  * fix ipc issue
  * remove execution pool from IPC
  * revert
  * fix tests
  * mutex
  * refactor flag and value names
  * ordering fix
  * refactor flag and value names
  * update default ep size to 40
  * fix bor start issues
  * revert file changes
  * debug statements
  * fix bug
  * update workerpool
  * atomic pointer timeout
  * add apis
  * Merge branch 'add-execution-pool' of github.com:maticnetwork/bor into arpit/add-execution-pool
  * fix issues
  * change params
  * fix issues
  * fix ipc issue
  * remove execution pool from IPC
  * revert
  * merge base branch
  * Merge branch 'add-execution-pool' of github.com:maticnetwork/bor into arpit/add-execution-pool
  * mutex
  * fix tests
  * Merge branch 'arpit/add-execution-pool' of github.com:maticnetwork/bor into arpit/add-execution-pool
  * Change default size of execution pool to 40
  * refactor flag and value names
  * fix merge conflicts
  * ordering fix
  * refactor flag and value names
  * update default ep size to 40
  * fix bor start issues
  * revert file changes
  * fix linters
  * fix go.mod
  * change sec to ms
  * change default value for ep timeout
  * fix node api calls
  * comment setter for ep timeout

  ---------

  Co-authored-by: Evgeny Danienko <6655321@bk.ru>
  Co-authored-by: Jerry
  Co-authored-by: Manav Darji

* version change (#721)

* Event based pprof (#732)

  * feature
  * Save pprof to /tmp

  ---------

  Co-authored-by: Jerry

* Cherry-pick changes from develop (#738)

  * Check if block is nil to prevent panic (#736)
  * miner: use env for tracing instead of block object (#728)

  ---------

  Co-authored-by: Dmitry <46797839+dkeysil@users.noreply.github.com>

* add max code init size check in txpool (#739)

* Revert "Event based pprof" and update version (#742)

  * Revert "Event based pprof (#732)"

    This reverts commit 22fa4033e8fabb51c44e8d2a8c6bb695a6e9285e.
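For reference, the execution pool added in #719 is configured independently for the HTTP and WS transports. The example configs further down in this patch document the new keys with a default pool of 40 workers and a disabled per-request timeout; a minimal sketch of the corresponding config.toml sections, assuming those defaults:

    [jsonrpc.http]
      ep-size = 40              # workers in the HTTP JSON-RPC execution pool
      ep-requesttimeout = "0s"  # per-request timeout for the pool ("0s" = disabled)
    [jsonrpc.ws]
      ep-size = 40              # workers in the WS JSON-RPC execution pool
      ep-requesttimeout = "0s"

The pool sizes can also be changed at runtime through the admin_setHttpExecutionPoolSize and admin_setWSExecutionPoolSize methods added to internal/web3ext; the matching request-timeout setters are present in the patch but commented out.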
* params: update version to 0.3.4-beta3 * packaging/templates: update bor version * params, packaging/templates: update bor version --------- Co-authored-by: SHIVAM SHARMA Co-authored-by: Pratik Patil Co-authored-by: Jerry Co-authored-by: Daniil Co-authored-by: Arpit Temani Co-authored-by: Evgeny Danienko <6655321@bk.ru> Co-authored-by: Dmitry <46797839+dkeysil@users.noreply.github.com> --- builder/files/config.toml | 6 ++ cmd/clef/main.go | 2 +- consensus/bor/bor.go | 2 +- core/blockchain.go | 2 +- core/blockchain_reader.go | 4 + core/bor_blockchain.go | 2 +- core/error.go | 4 + core/rawdb/bor_receipt.go | 64 +++++------- core/tx_pool.go | 6 ++ docs/cli/example_config.toml | 25 +++-- docs/cli/server.md | 12 +++ eth/api_backend.go | 4 + eth/ethconfig/config.go | 14 ++- eth/filters/bor_api.go | 7 +- eth/filters/test_backend.go | 2 +- eth/tracers/api.go | 2 +- eth/tracers/api_test.go | 4 + go.mod | 3 + go.sum | 4 + internal/cli/debug_pprof.go | 3 +- internal/cli/dumpconfig.go | 2 + internal/cli/server/config.go | 67 +++++++++---- internal/cli/server/flags.go | 40 ++++++++ internal/ethapi/api.go | 33 +++++-- internal/ethapi/backend.go | 9 +- internal/web3ext/web3ext.go | 28 ++++++ les/api_backend.go | 4 + miner/worker.go | 6 +- node/api.go | 89 +++++++++++++++++ node/config.go | 9 ++ node/node.go | 31 +++--- node/rpcstack.go | 21 +++- node/rpcstack_test.go | 2 +- .../templates/mainnet-v1/archive/config.toml | 6 ++ .../mainnet-v1/sentry/sentry/bor/config.toml | 6 ++ .../sentry/validator/bor/config.toml | 6 ++ .../mainnet-v1/without-sentry/bor/config.toml | 6 ++ packaging/templates/package_scripts/control | 2 +- .../templates/package_scripts/control.arm64 | 2 +- .../package_scripts/control.profile.amd64 | 2 +- .../package_scripts/control.profile.arm64 | 2 +- .../package_scripts/control.validator | 2 +- .../package_scripts/control.validator.arm64 | 2 +- .../templates/testnet-v4/archive/config.toml | 6 ++ .../testnet-v4/sentry/sentry/bor/config.toml | 6 ++ .../sentry/validator/bor/config.toml | 6 ++ .../testnet-v4/without-sentry/bor/config.toml | 6 ++ params/config.go | 4 + params/protocol_params.go | 3 +- params/version.go | 2 +- rpc/client.go | 2 +- rpc/client_test.go | 7 +- rpc/endpoints.go | 2 +- rpc/execution_pool.go | 99 +++++++++++++++++++ rpc/handler.go | 38 ++++--- rpc/http_test.go | 2 +- rpc/inproc.go | 8 +- rpc/ipc.go | 6 +- rpc/server.go | 56 ++++++++++- rpc/server_test.go | 2 +- rpc/subscription_test.go | 2 +- rpc/testservice_test.go | 2 +- rpc/websocket_test.go | 2 +- tests/bor/bor_test.go | 13 +++ tests/bor/helper.go | 2 +- 65 files changed, 671 insertions(+), 154 deletions(-) create mode 100644 rpc/execution_pool.go diff --git a/builder/files/config.toml b/builder/files/config.toml index 0f2919807f..1b8d915b7b 100644 --- a/builder/files/config.toml +++ b/builder/files/config.toml @@ -8,6 +8,8 @@ chain = "mainnet" datadir = "/var/lib/bor/data" # ancient = "" # keystore = "/var/lib/bor/keystore" +# "rpc.batchlimit" = 100 +# "rpc.returndatalimit" = 100000 syncmode = "full" # gcmode = "full" # snapshot = true @@ -73,6 +75,8 @@ syncmode = "full" # api = ["eth", "net", "web3", "txpool", "bor"] # vhosts = ["*"] # corsdomain = ["*"] +# ep-size = 40 +# ep-requesttimeout = "0s" # [jsonrpc.ws] # enabled = false # port = 8546 @@ -80,6 +84,8 @@ syncmode = "full" # host = "localhost" # api = ["web3", "net"] # origins = ["*"] +# ep-size = 40 +# ep-requesttimeout = "0s" # [jsonrpc.graphql] # enabled = false # port = 0 diff --git a/cmd/clef/main.go b/cmd/clef/main.go index f7c3adebc4..1bfb2610e5 
100644 --- a/cmd/clef/main.go +++ b/cmd/clef/main.go @@ -656,7 +656,7 @@ func signer(c *cli.Context) error { vhosts := utils.SplitAndTrim(c.GlobalString(utils.HTTPVirtualHostsFlag.Name)) cors := utils.SplitAndTrim(c.GlobalString(utils.HTTPCORSDomainFlag.Name)) - srv := rpc.NewServer() + srv := rpc.NewServer(0, 0) err := node.RegisterApis(rpcAPI, []string{"account"}, srv, false) if err != nil { utils.Fatalf("Could not register API: %w", err) diff --git a/consensus/bor/bor.go b/consensus/bor/bor.go index 1b4ddec45d..5b32263762 100644 --- a/consensus/bor/bor.go +++ b/consensus/bor/bor.go @@ -1161,7 +1161,7 @@ func (c *Bor) CommitStates( processStart := time.Now() totalGas := 0 /// limit on gas for state sync per block chainID := c.chainConfig.ChainID.String() - stateSyncs := make([]*types.StateSyncData, len(eventRecords)) + stateSyncs := make([]*types.StateSyncData, 0, len(eventRecords)) var gasUsed uint64 diff --git a/core/blockchain.go b/core/blockchain.go index 74fd4bfeda..cbcf02fef4 100644 --- a/core/blockchain.go +++ b/core/blockchain.go @@ -2059,7 +2059,7 @@ func (bc *BlockChain) collectLogs(hash common.Hash, removed bool) []*types.Log { receipts := rawdb.ReadReceipts(bc.db, hash, *number, bc.chainConfig) // Append bor receipt - borReceipt := rawdb.ReadBorReceipt(bc.db, hash, *number) + borReceipt := rawdb.ReadBorReceipt(bc.db, hash, *number, bc.chainConfig) if borReceipt != nil { receipts = append(receipts, borReceipt) } diff --git a/core/blockchain_reader.go b/core/blockchain_reader.go index f61f930496..8405d4a54c 100644 --- a/core/blockchain_reader.go +++ b/core/blockchain_reader.go @@ -422,6 +422,10 @@ func (bc *BlockChain) SetStateSync(stateData []*types.StateSyncData) { bc.stateSyncData = stateData } +func (bc *BlockChain) GetStateSync() []*types.StateSyncData { + return bc.stateSyncData +} + // SubscribeStateSyncEvent registers a subscription of StateSyncEvent. func (bc *BlockChain) SubscribeStateSyncEvent(ch chan<- StateSyncEvent) event.Subscription { return bc.scope.Track(bc.stateSyncFeed.Subscribe(ch)) diff --git a/core/bor_blockchain.go b/core/bor_blockchain.go index ae2cdf3c6f..49973421bd 100644 --- a/core/bor_blockchain.go +++ b/core/bor_blockchain.go @@ -19,7 +19,7 @@ func (bc *BlockChain) GetBorReceiptByHash(hash common.Hash) *types.Receipt { } // read bor reciept by hash and number - receipt := rawdb.ReadBorReceipt(bc.db, hash, *number) + receipt := rawdb.ReadBorReceipt(bc.db, hash, *number, bc.chainConfig) if receipt == nil { return nil } diff --git a/core/error.go b/core/error.go index 51ebefc137..234620ee4b 100644 --- a/core/error.go +++ b/core/error.go @@ -63,6 +63,10 @@ var ( // have enough funds for transfer(topmost call only). ErrInsufficientFundsForTransfer = errors.New("insufficient funds for transfer") + // ErrMaxInitCodeSizeExceeded is returned if creation transaction provides the init code bigger + // than init code size limit. + ErrMaxInitCodeSizeExceeded = errors.New("max initcode size exceeded") + // ErrInsufficientFunds is returned if the total cost of executing a transaction // is higher than the balance of the user's account. 
ErrInsufficientFunds = errors.New("insufficient funds for gas * price + value") diff --git a/core/rawdb/bor_receipt.go b/core/rawdb/bor_receipt.go index e225083741..0739c67a9f 100644 --- a/core/rawdb/bor_receipt.go +++ b/core/rawdb/bor_receipt.go @@ -7,6 +7,7 @@ import ( "github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/ethdb" "github.com/ethereum/go-ethereum/log" + "github.com/ethereum/go-ethereum/params" "github.com/ethereum/go-ethereum/rlp" ) @@ -33,49 +34,28 @@ func borTxLookupKey(hash common.Hash) []byte { return append(borTxLookupPrefix, hash.Bytes()...) } -// HasBorReceipt verifies the existence of all block receipt belonging -// to a block. -func HasBorReceipt(db ethdb.Reader, hash common.Hash, number uint64) bool { - if has, err := db.Ancient(freezerHashTable, number); err == nil && common.BytesToHash(has) == hash { - return true - } - - if has, err := db.Has(borReceiptKey(number, hash)); !has || err != nil { - return false - } +func ReadBorReceiptRLP(db ethdb.Reader, hash common.Hash, number uint64) rlp.RawValue { + var data []byte - return true -} + err := db.ReadAncients(func(reader ethdb.AncientReader) error { + // Check if the data is in ancients + if isCanon(reader, number, hash) { + data, _ = reader.Ancient(freezerBorReceiptTable, number) -// ReadBorReceiptRLP retrieves the block receipt belonging to a block in RLP encoding. -func ReadBorReceiptRLP(db ethdb.Reader, hash common.Hash, number uint64) rlp.RawValue { - // First try to look up the data in ancient database. Extra hash - // comparison is necessary since ancient database only maintains - // the canonical data. - data, _ := db.Ancient(freezerBorReceiptTable, number) - if len(data) > 0 { - h, _ := db.Ancient(freezerHashTable, number) - if common.BytesToHash(h) == hash { - return data - } - } - // Then try to look up the data in leveldb. - data, _ = db.Get(borReceiptKey(number, hash)) - if len(data) > 0 { - return data - } - // In the background freezer is moving data from leveldb to flatten files. - // So during the first check for ancient db, the data is not yet in there, - // but when we reach into leveldb, the data was already moved. That would - // result in a not found error. - data, _ = db.Ancient(freezerBorReceiptTable, number) - if len(data) > 0 { - h, _ := db.Ancient(freezerHashTable, number) - if common.BytesToHash(h) == hash { - return data + return nil } + + // If not, try reading from leveldb + data, _ = db.Get(borReceiptKey(number, hash)) + + return nil + }) + + if err != nil { + log.Warn("during ReadBorReceiptRLP", "number", number, "hash", hash, "err", err) } - return nil // Can't find the data anywhere. + + return data } // ReadRawBorReceipt retrieves the block receipt belonging to a block. @@ -101,7 +81,11 @@ func ReadRawBorReceipt(db ethdb.Reader, hash common.Hash, number uint64) *types. // ReadBorReceipt retrieves all the bor block receipts belonging to a block, including // its correspoinding metadata fields. If it is unable to populate these metadata // fields then nil is returned. 
-func ReadBorReceipt(db ethdb.Reader, hash common.Hash, number uint64) *types.Receipt { +func ReadBorReceipt(db ethdb.Reader, hash common.Hash, number uint64, config *params.ChainConfig) *types.Receipt { + if config != nil && config.Bor != nil && config.Bor.Sprint != nil && !config.Bor.IsSprintStart(number) { + return nil + } + // We're deriving many fields from the block body, retrieve beside the receipt borReceipt := ReadRawBorReceipt(db, hash, number) if borReceipt == nil { diff --git a/core/tx_pool.go b/core/tx_pool.go index 7648668688..3d3f01eecb 100644 --- a/core/tx_pool.go +++ b/core/tx_pool.go @@ -18,6 +18,7 @@ package core import ( "errors" + "fmt" "math" "math/big" "sort" @@ -604,6 +605,11 @@ func (pool *TxPool) validateTx(tx *types.Transaction, local bool) error { if uint64(tx.Size()) > txMaxSize { return ErrOversizedData } + // Check whether the init code size has been exceeded. + // (TODO): Add a hardfork check here while pulling upstream changes. + if tx.To() == nil && len(tx.Data()) > params.MaxInitCodeSize { + return fmt.Errorf("%w: code size %v limit %v", ErrMaxInitCodeSizeExceeded, len(tx.Data()), params.MaxInitCodeSize) + } // Transactions can't be negative. This may never happen using RLP decoded // transactions but may occur if you create a transaction using the RPC. if tx.Value().Sign() < 0 { diff --git a/docs/cli/example_config.toml b/docs/cli/example_config.toml index 64ef60ae12..c32c40e2c6 100644 --- a/docs/cli/example_config.toml +++ b/docs/cli/example_config.toml @@ -8,6 +8,8 @@ log-level = "INFO" # Set log level for the server datadir = "var/lib/bor" # Path of the data directory to store information ancient = "" # Data directory for ancient chain segments (default = inside chaindata) keystore = "" # Path of the directory where keystores are located +"rpc.batchlimit" = 100 # Maximum number of messages in a batch (default=100, use 0 for no limits) +"rpc.returndatalimit" = 100000 # Maximum size (in bytes) a result of an rpc request could have (default=100000, use 0 for no limits) syncmode = "full" # Blockchain sync mode (only "full" sync supported) gcmode = "full" # Blockchain garbage collection mode ("full", "archive") snapshot = true # Enables the snapshot-database mode @@ -72,18 +74,22 @@ ethstats = "" # Reporting URL of a ethstats service (nodename:sec api = ["eth", "net", "web3", "txpool", "bor"] # API's offered over the HTTP-RPC interface vhosts = ["localhost"] # Comma separated list of virtual hostnames from which to accept requests (server enforced). Accepts '*' wildcard. corsdomain = ["localhost"] # Comma separated list of domains from which to accept cross origin requests (browser enforced) + ep-size = 40 # Maximum size of workers to run in rpc execution pool for HTTP requests (default: 40) + ep-requesttimeout = "0s" # Request Timeout for rpc execution pool for HTTP requests (default: 0s, 0s = disabled) [jsonrpc.ws] - enabled = false # Enable the WS-RPC server - port = 8546 # WS-RPC server listening port - prefix = "" # HTTP path prefix on which JSON-RPC is served. Use '/' to serve on all paths. - host = "localhost" # ws.addr - api = ["net", "web3"] # API's offered over the WS-RPC interface - origins = ["localhost"] # Origins from which to accept websockets requests + enabled = false # Enable the WS-RPC server + port = 8546 # WS-RPC server listening port + prefix = "" # HTTP path prefix on which JSON-RPC is served. Use '/' to serve on all paths. 
+ host = "localhost" # ws.addr + api = ["net", "web3"] # API's offered over the WS-RPC interface + origins = ["localhost"] # Origins from which to accept websockets requests + ep-size = 40 # Maximum size of workers to run in rpc execution pool for WS requests (default: 40) + ep-requesttimeout = "0s" # Request Timeout for rpc execution pool for WS requests (default: 0s, 0s = disabled) [jsonrpc.graphql] enabled = false # Enable GraphQL on the HTTP-RPC server. Note that GraphQL can only be started if an HTTP server is started as well. - port = 0 # - prefix = "" # - host = "" # + port = 0 # + prefix = "" # + host = "" # vhosts = ["localhost"] # Comma separated list of virtual hostnames from which to accept requests (server enforced). Accepts '*' wildcard. corsdomain = ["localhost"] # Comma separated list of domains from which to accept cross origin requests (browser enforced) [jsonrpc.timeouts] @@ -91,6 +97,7 @@ ethstats = "" # Reporting URL of a ethstats service (nodename:sec write = "30s" idle = "2m0s" + [gpo] blocks = 20 # Number of recent blocks to check for gas prices percentile = 60 # Suggested gas price is the given percentile of a set of recent transaction gas prices diff --git a/docs/cli/server.md b/docs/cli/server.md index 5bc0ff1024..b91b000eb6 100644 --- a/docs/cli/server.md +++ b/docs/cli/server.md @@ -16,6 +16,10 @@ The ```bor server``` command runs the Bor client. - ```keystore```: Path of the directory where keystores are located +- ```rpc.batchlimit```: Maximum number of messages in a batch (default=100, use 0 for no limits) (default: 100) + +- ```rpc.returndatalimit```: Maximum size (in bytes) a result of an rpc request could have (default=100000, use 0 for no limits) (default: 100000) + - ```config```: File for the config file - ```syncmode```: Blockchain sync mode (only "full" sync supported) (default: full) @@ -116,6 +120,10 @@ The ```bor server``` command runs the Bor client. - ```http.api```: API's offered over the HTTP-RPC interface (default: eth,net,web3,txpool,bor) +- ```http.ep-size```: Maximum size of workers to run in rpc execution pool for HTTP requests (default: 40) + +- ```http.ep-requesttimeout```: Request Timeout for rpc execution pool for HTTP requests (default: 0s) + - ```ws```: Enable the WS-RPC server (default: false) - ```ws.addr```: WS-RPC server listening interface (default: localhost) @@ -126,6 +134,10 @@ The ```bor server``` command runs the Bor client. - ```ws.api```: API's offered over the WS-RPC interface (default: net,web3) +- ```ws.ep-size```: Maximum size of workers to run in rpc execution pool for WS requests (default: 40) + +- ```ws.ep-requesttimeout```: Request Timeout for rpc execution pool for WS requests (default: 0s) + - ```graphql```: Enable GraphQL on the HTTP-RPC server. Note that GraphQL can only be started if an HTTP server is started as well. 
(default: false) ### P2P Options diff --git a/eth/api_backend.go b/eth/api_backend.go index c33f3cf6f2..60aea7527e 100644 --- a/eth/api_backend.go +++ b/eth/api_backend.go @@ -317,6 +317,10 @@ func (b *EthAPIBackend) RPCGasCap() uint64 { return b.eth.config.RPCGasCap } +func (b *EthAPIBackend) RPCRpcReturnDataLimit() uint64 { + return b.eth.config.RPCReturnDataLimit +} + func (b *EthAPIBackend) RPCEVMTimeout() time.Duration { return b.eth.config.RPCEVMTimeout } diff --git a/eth/ethconfig/config.go b/eth/ethconfig/config.go index c9272758ab..68cf733cc6 100644 --- a/eth/ethconfig/config.go +++ b/eth/ethconfig/config.go @@ -94,11 +94,12 @@ var Defaults = Config{ GasPrice: big.NewInt(params.GWei), Recommit: 125 * time.Second, }, - TxPool: core.DefaultTxPoolConfig, - RPCGasCap: 50000000, - RPCEVMTimeout: 5 * time.Second, - GPO: FullNodeGPO, - RPCTxFeeCap: 5, // 5 matic + TxPool: core.DefaultTxPoolConfig, + RPCGasCap: 50000000, + RPCReturnDataLimit: 100000, + RPCEVMTimeout: 5 * time.Second, + GPO: FullNodeGPO, + RPCTxFeeCap: 5, // 5 matic } func init() { @@ -199,6 +200,9 @@ type Config struct { // RPCGasCap is the global gas cap for eth-call variants. RPCGasCap uint64 + // Maximum size (in bytes) a result of an rpc request could have + RPCReturnDataLimit uint64 + // RPCEVMTimeout is the global timeout for eth-call. RPCEVMTimeout time.Duration diff --git a/eth/filters/bor_api.go b/eth/filters/bor_api.go index db13c95959..aeb370d6be 100644 --- a/eth/filters/bor_api.go +++ b/eth/filters/bor_api.go @@ -1,7 +1,6 @@ package filters import ( - "bytes" "context" "errors" @@ -19,7 +18,7 @@ func (api *PublicFilterAPI) SetChainConfig(chainConfig *params.ChainConfig) { func (api *PublicFilterAPI) GetBorBlockLogs(ctx context.Context, crit FilterCriteria) ([]*types.Log, error) { if api.chainConfig == nil { - return nil, errors.New("No chain config found. Proper PublicFilterAPI initialization required") + return nil, errors.New("no chain config found. 
Proper PublicFilterAPI initialization required") } // get sprint from bor config @@ -67,8 +66,8 @@ func (api *PublicFilterAPI) NewDeposits(ctx context.Context, crit ethereum.State for { select { case h := <-stateSyncData: - if crit.ID == h.ID || bytes.Compare(crit.Contract.Bytes(), h.Contract.Bytes()) == 0 || - (crit.ID == 0 && crit.Contract == common.Address{}) { + if h != nil && (crit.ID == h.ID || crit.Contract == h.Contract || + (crit.ID == 0 && crit.Contract == common.Address{})) { notifier.Notify(rpcSub.ID, h) } case <-rpcSub.Err(): diff --git a/eth/filters/test_backend.go b/eth/filters/test_backend.go index 979ed3efb6..8b2ef4a7f2 100644 --- a/eth/filters/test_backend.go +++ b/eth/filters/test_backend.go @@ -38,7 +38,7 @@ func (b *TestBackend) GetBorBlockReceipt(ctx context.Context, hash common.Hash) return &types.Receipt{}, nil } - receipt := rawdb.ReadBorReceipt(b.DB, hash, *number) + receipt := rawdb.ReadBorReceipt(b.DB, hash, *number, nil) if receipt == nil { return &types.Receipt{}, nil } diff --git a/eth/tracers/api.go b/eth/tracers/api.go index 3fce91ac9c..13f5c627cd 100644 --- a/eth/tracers/api.go +++ b/eth/tracers/api.go @@ -177,7 +177,7 @@ func (api *API) getAllBlockTransactions(ctx context.Context, block *types.Block) stateSyncPresent := false - borReceipt := rawdb.ReadBorReceipt(api.backend.ChainDb(), block.Hash(), block.NumberU64()) + borReceipt := rawdb.ReadBorReceipt(api.backend.ChainDb(), block.Hash(), block.NumberU64(), api.backend.ChainConfig()) if borReceipt != nil { txHash := types.GetDerivedBorTxHash(types.BorReceiptKey(block.Number().Uint64(), block.Hash())) if txHash != (common.Hash{}) { diff --git a/eth/tracers/api_test.go b/eth/tracers/api_test.go index 6dd94e4870..d394e4fbe3 100644 --- a/eth/tracers/api_test.go +++ b/eth/tracers/api_test.go @@ -126,6 +126,10 @@ func (b *testBackend) RPCGasCap() uint64 { return 25000000 } +func (b *testBackend) RPCRpcReturnDataLimit() uint64 { + return 100000 +} + func (b *testBackend) ChainConfig() *params.ChainConfig { return b.chainConfig } diff --git a/go.mod b/go.mod index 36595ca307..f55b2f9aa7 100644 --- a/go.mod +++ b/go.mod @@ -6,6 +6,7 @@ require ( github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v0.3.0 github.com/BurntSushi/toml v1.1.0 github.com/JekaMas/go-grpc-net-conn v0.0.0-20220708155319-6aff21f2d13d + github.com/JekaMas/workerpool v1.1.5 github.com/VictoriaMetrics/fastcache v1.6.0 github.com/aws/aws-sdk-go-v2 v1.2.0 github.com/aws/aws-sdk-go-v2/config v1.1.1 @@ -84,6 +85,8 @@ require ( pgregory.net/rapid v0.4.8 ) +require github.com/gammazero/deque v0.2.1 // indirect + require ( github.com/Azure/azure-sdk-for-go/sdk/azcore v0.21.1 // indirect github.com/Azure/azure-sdk-for-go/sdk/internal v0.8.3 // indirect diff --git a/go.sum b/go.sum index 96fa9d3f04..4b312ccfb1 100644 --- a/go.sum +++ b/go.sum @@ -31,6 +31,8 @@ github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym github.com/DATA-DOG/go-sqlmock v1.3.3/go.mod h1:f/Ixk793poVmq4qj/V1dPUg2JEAKC73Q5eFN3EC/SaM= github.com/JekaMas/go-grpc-net-conn v0.0.0-20220708155319-6aff21f2d13d h1:RO27lgfZF8s9lZ3pWyzc0gCE0RZC+6/PXbRjAa0CNp8= github.com/JekaMas/go-grpc-net-conn v0.0.0-20220708155319-6aff21f2d13d/go.mod h1:romz7UPgSYhfJkKOalzEEyV6sWtt/eAEm0nX2aOrod0= +github.com/JekaMas/workerpool v1.1.5 h1:xmrx2Zyft95CEGiEqzDxiawptCIRZQ0zZDhTGDFOCaw= +github.com/JekaMas/workerpool v1.1.5/go.mod h1:IoDWPpwMcA27qbuugZKeBslDrgX09lVmksuh9sjzbhc= github.com/Masterminds/goutils v1.1.0 h1:zukEsf/1JZwCMgHiK3GZftabmxiCw4apj3a28RPBiVg= 
github.com/Masterminds/goutils v1.1.0/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU= github.com/Masterminds/semver v1.5.0 h1:H65muMkzWKEuNDnfl9d70GUjFniHKHRbFPGBuZ3QEww= @@ -157,6 +159,8 @@ github.com/fogleman/gg v1.2.1-0.20190220221249-0403632d5b90/go.mod h1:R/bRT+9gY/ github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4= github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ= +github.com/gammazero/deque v0.2.1 h1:qSdsbG6pgp6nL7A0+K/B7s12mcCY/5l5SIUpMOl+dC0= +github.com/gammazero/deque v0.2.1/go.mod h1:LFroj8x4cMYCukHJDbxFCkT+r9AndaJnFMuZDV34tuU= github.com/gballet/go-libpcsclite v0.0.0-20190607065134-2772fd86a8ff h1:tY80oXqGNY4FhTFhk+o9oFHGINQ/+vhlm8HFzi6znCI= github.com/gballet/go-libpcsclite v0.0.0-20190607065134-2772fd86a8ff/go.mod h1:x7DCsMOv1taUwEWCzT4cmDeAkigA5/QCwUodaVOe8Ww= github.com/getkin/kin-openapi v0.53.0/go.mod h1:7Yn5whZr5kJi6t+kShccXS8ae1APpYTW6yheSwk8Yi4= diff --git a/internal/cli/debug_pprof.go b/internal/cli/debug_pprof.go index 01698719e5..4cbe989408 100644 --- a/internal/cli/debug_pprof.go +++ b/internal/cli/debug_pprof.go @@ -7,6 +7,7 @@ import ( "fmt" "strings" + "google.golang.org/grpc" empty "google.golang.org/protobuf/types/known/emptypb" "github.com/ethereum/go-ethereum/internal/cli/flagset" @@ -103,7 +104,7 @@ func (d *DebugPprofCommand) Run(args []string) int { req.Profile = profile } - stream, err := clt.DebugPprof(ctx, req) + stream, err := clt.DebugPprof(ctx, req, grpc.MaxCallRecvMsgSize(1024*1024*1024)) if err != nil { return err diff --git a/internal/cli/dumpconfig.go b/internal/cli/dumpconfig.go index a748af3357..787eab2d13 100644 --- a/internal/cli/dumpconfig.go +++ b/internal/cli/dumpconfig.go @@ -55,6 +55,8 @@ func (c *DumpconfigCommand) Run(args []string) int { userConfig.JsonRPC.HttpTimeout.ReadTimeoutRaw = userConfig.JsonRPC.HttpTimeout.ReadTimeout.String() userConfig.JsonRPC.HttpTimeout.WriteTimeoutRaw = userConfig.JsonRPC.HttpTimeout.WriteTimeout.String() userConfig.JsonRPC.HttpTimeout.IdleTimeoutRaw = userConfig.JsonRPC.HttpTimeout.IdleTimeout.String() + userConfig.JsonRPC.Http.ExecutionPoolRequestTimeoutRaw = userConfig.JsonRPC.Http.ExecutionPoolRequestTimeout.String() + userConfig.JsonRPC.Ws.ExecutionPoolRequestTimeoutRaw = userConfig.JsonRPC.Ws.ExecutionPoolRequestTimeout.String() userConfig.TxPool.RejournalRaw = userConfig.TxPool.Rejournal.String() userConfig.TxPool.LifeTimeRaw = userConfig.TxPool.LifeTime.String() userConfig.Sealer.GasPriceRaw = userConfig.Sealer.GasPrice.String() diff --git a/internal/cli/server/config.go b/internal/cli/server/config.go index 35d7e19359..ca7a235ace 100644 --- a/internal/cli/server/config.go +++ b/internal/cli/server/config.go @@ -60,6 +60,12 @@ type Config struct { // KeyStoreDir is the directory to store keystores KeyStoreDir string `hcl:"keystore,optional" toml:"keystore,optional"` + // Maximum number of messages in a batch (default=100, use 0 for no limits) + RPCBatchLimit uint64 `hcl:"rpc.batchlimit,optional" toml:"rpc.batchlimit,optional"` + + // Maximum size (in bytes) a result of an rpc request could have (default=100000, use 0 for no limits) + RPCReturnDataLimit uint64 `hcl:"rpc.returndatalimit,optional" toml:"rpc.returndatalimit,optional"` + // SyncMode selects the sync protocol SyncMode string `hcl:"syncmode,optional" toml:"syncmode,optional"` @@ -275,6 +281,13 @@ type APIConfig struct { // Origins is the list of endpoints to accept 
requests from (only consumed for websockets) Origins []string `hcl:"origins,optional" toml:"origins,optional"` + + // ExecutionPoolSize is max size of workers to be used for rpc execution + ExecutionPoolSize uint64 `hcl:"ep-size,optional" toml:"ep-size,optional"` + + // ExecutionPoolRequestTimeout is timeout used by execution pool for rpc execution + ExecutionPoolRequestTimeout time.Duration `hcl:"-,optional" toml:"-"` + ExecutionPoolRequestTimeoutRaw string `hcl:"ep-requesttimeout,optional" toml:"ep-requesttimeout,optional"` } // Used from rpc.HTTPTimeouts @@ -435,12 +448,14 @@ type DeveloperConfig struct { func DefaultConfig() *Config { return &Config{ - Chain: "mainnet", - Identity: Hostname(), - RequiredBlocks: map[string]string{}, - LogLevel: "INFO", - DataDir: DefaultDataDir(), - Ancient: "", + Chain: "mainnet", + Identity: Hostname(), + RequiredBlocks: map[string]string{}, + LogLevel: "INFO", + DataDir: DefaultDataDir(), + Ancient: "", + RPCBatchLimit: 100, + RPCReturnDataLimit: 100000, P2P: &P2PConfig{ MaxPeers: 50, MaxPendPeers: 50, @@ -499,21 +514,25 @@ func DefaultConfig() *Config { GasCap: ethconfig.Defaults.RPCGasCap, TxFeeCap: ethconfig.Defaults.RPCTxFeeCap, Http: &APIConfig{ - Enabled: false, - Port: 8545, - Prefix: "", - Host: "localhost", - API: []string{"eth", "net", "web3", "txpool", "bor"}, - Cors: []string{"localhost"}, - VHost: []string{"localhost"}, + Enabled: false, + Port: 8545, + Prefix: "", + Host: "localhost", + API: []string{"eth", "net", "web3", "txpool", "bor"}, + Cors: []string{"localhost"}, + VHost: []string{"localhost"}, + ExecutionPoolSize: 40, + ExecutionPoolRequestTimeout: 0, }, Ws: &APIConfig{ - Enabled: false, - Port: 8546, - Prefix: "", - Host: "localhost", - API: []string{"net", "web3"}, - Origins: []string{"localhost"}, + Enabled: false, + Port: 8546, + Prefix: "", + Host: "localhost", + API: []string{"net", "web3"}, + Origins: []string{"localhost"}, + ExecutionPoolSize: 40, + ExecutionPoolRequestTimeout: 0, }, Graphql: &APIConfig{ Enabled: false, @@ -620,6 +639,8 @@ func (c *Config) fillTimeDurations() error { {"jsonrpc.timeouts.read", &c.JsonRPC.HttpTimeout.ReadTimeout, &c.JsonRPC.HttpTimeout.ReadTimeoutRaw}, {"jsonrpc.timeouts.write", &c.JsonRPC.HttpTimeout.WriteTimeout, &c.JsonRPC.HttpTimeout.WriteTimeoutRaw}, {"jsonrpc.timeouts.idle", &c.JsonRPC.HttpTimeout.IdleTimeout, &c.JsonRPC.HttpTimeout.IdleTimeoutRaw}, + {"jsonrpc.ws.ep-requesttimeout", &c.JsonRPC.Ws.ExecutionPoolRequestTimeout, &c.JsonRPC.Ws.ExecutionPoolRequestTimeoutRaw}, + {"jsonrpc.http.ep-requesttimeout", &c.JsonRPC.Http.ExecutionPoolRequestTimeout, &c.JsonRPC.Http.ExecutionPoolRequestTimeoutRaw}, {"txpool.lifetime", &c.TxPool.LifeTime, &c.TxPool.LifeTimeRaw}, {"txpool.rejournal", &c.TxPool.Rejournal, &c.TxPool.RejournalRaw}, {"cache.rejournal", &c.Cache.Rejournal, &c.Cache.RejournalRaw}, @@ -883,6 +904,7 @@ func (c *Config) buildEth(stack *node.Node, accountManager *accounts.Manager) (* n.Preimages = c.Cache.Preimages n.TxLookupLimit = c.Cache.TxLookupLimit n.TrieTimeout = c.Cache.TrieTimeout + n.TriesInMemory = c.Cache.TriesInMemory } n.RPCGasCap = c.JsonRPC.GasCap @@ -936,6 +958,8 @@ func (c *Config) buildEth(stack *node.Node, accountManager *accounts.Manager) (* n.BorLogs = c.BorLogs n.DatabaseHandles = dbHandles + n.RPCReturnDataLimit = c.RPCReturnDataLimit + if c.Ancient != "" { n.DatabaseFreezer = c.Ancient } @@ -986,6 +1010,11 @@ func (c *Config) buildNode() (*node.Config, error) { WriteTimeout: c.JsonRPC.HttpTimeout.WriteTimeout, IdleTimeout: 
c.JsonRPC.HttpTimeout.IdleTimeout, }, + RPCBatchLimit: c.RPCBatchLimit, + WSJsonRPCExecutionPoolSize: c.JsonRPC.Ws.ExecutionPoolSize, + WSJsonRPCExecutionPoolRequestTimeout: c.JsonRPC.Ws.ExecutionPoolRequestTimeout, + HTTPJsonRPCExecutionPoolSize: c.JsonRPC.Http.ExecutionPoolSize, + HTTPJsonRPCExecutionPoolRequestTimeout: c.JsonRPC.Http.ExecutionPoolRequestTimeout, } // dev mode diff --git a/internal/cli/server/flags.go b/internal/cli/server/flags.go index 822bb81aef..abf5fa3465 100644 --- a/internal/cli/server/flags.go +++ b/internal/cli/server/flags.go @@ -46,6 +46,18 @@ func (c *Command) Flags() *flagset.Flagset { Usage: "Path of the directory where keystores are located", Value: &c.cliConfig.KeyStoreDir, }) + f.Uint64Flag(&flagset.Uint64Flag{ + Name: "rpc.batchlimit", + Usage: "Maximum number of messages in a batch (default=100, use 0 for no limits)", + Value: &c.cliConfig.RPCBatchLimit, + Default: c.cliConfig.RPCBatchLimit, + }) + f.Uint64Flag(&flagset.Uint64Flag{ + Name: "rpc.returndatalimit", + Usage: "Maximum size (in bytes) a result of an rpc request could have (default=100000, use 0 for no limits)", + Value: &c.cliConfig.RPCReturnDataLimit, + Default: c.cliConfig.RPCReturnDataLimit, + }) f.StringFlag(&flagset.StringFlag{ Name: "config", Usage: "File for the config file", @@ -432,6 +444,20 @@ func (c *Command) Flags() *flagset.Flagset { Default: c.cliConfig.JsonRPC.Http.API, Group: "JsonRPC", }) + f.Uint64Flag(&flagset.Uint64Flag{ + Name: "http.ep-size", + Usage: "Maximum size of workers to run in rpc execution pool for HTTP requests", + Value: &c.cliConfig.JsonRPC.Http.ExecutionPoolSize, + Default: c.cliConfig.JsonRPC.Http.ExecutionPoolSize, + Group: "JsonRPC", + }) + f.DurationFlag(&flagset.DurationFlag{ + Name: "http.ep-requesttimeout", + Usage: "Request Timeout for rpc execution pool for HTTP requests", + Value: &c.cliConfig.JsonRPC.Http.ExecutionPoolRequestTimeout, + Default: c.cliConfig.JsonRPC.Http.ExecutionPoolRequestTimeout, + Group: "JsonRPC", + }) // ws options f.BoolFlag(&flagset.BoolFlag{ @@ -469,6 +495,20 @@ func (c *Command) Flags() *flagset.Flagset { Default: c.cliConfig.JsonRPC.Ws.API, Group: "JsonRPC", }) + f.Uint64Flag(&flagset.Uint64Flag{ + Name: "ws.ep-size", + Usage: "Maximum size of workers to run in rpc execution pool for WS requests", + Value: &c.cliConfig.JsonRPC.Ws.ExecutionPoolSize, + Default: c.cliConfig.JsonRPC.Ws.ExecutionPoolSize, + Group: "JsonRPC", + }) + f.DurationFlag(&flagset.DurationFlag{ + Name: "ws.ep-requesttimeout", + Usage: "Request Timeout for rpc execution pool for WS requests", + Value: &c.cliConfig.JsonRPC.Ws.ExecutionPoolRequestTimeout, + Default: c.cliConfig.JsonRPC.Ws.ExecutionPoolRequestTimeout, + Group: "JsonRPC", + }) // graphql options f.BoolFlag(&flagset.BoolFlag{ diff --git a/internal/ethapi/api.go b/internal/ethapi/api.go index 2fd148c7c6..49b1610987 100644 --- a/internal/ethapi/api.go +++ b/internal/ethapi/api.go @@ -627,6 +627,10 @@ func (s *PublicBlockChainAPI) GetTransactionReceiptsByBlock(ctx context.Context, return nil, err } + if block == nil { + return nil, errors.New("block not found") + } + receipts, err := s.b.GetReceipts(ctx, block.Hash()) if err != nil { return nil, err @@ -636,7 +640,7 @@ func (s *PublicBlockChainAPI) GetTransactionReceiptsByBlock(ctx context.Context, var txHash common.Hash - borReceipt := rawdb.ReadBorReceipt(s.b.ChainDb(), block.Hash(), block.NumberU64()) + borReceipt := rawdb.ReadBorReceipt(s.b.ChainDb(), block.Hash(), block.NumberU64(), s.b.ChainConfig()) if borReceipt != nil { receipts = 
append(receipts, borReceipt) txHash = types.GetDerivedBorTxHash(types.BorReceiptKey(block.Number().Uint64(), block.Hash())) @@ -1078,6 +1082,11 @@ func (s *PublicBlockChainAPI) Call(ctx context.Context, args TransactionArgs, bl if err != nil { return nil, err } + + if int(s.b.RPCRpcReturnDataLimit()) > 0 && len(result.ReturnData) > int(s.b.RPCRpcReturnDataLimit()) { + return nil, fmt.Errorf("call returned result of length %d exceeding limit %d", len(result.ReturnData), int(s.b.RPCRpcReturnDataLimit())) + } + // If the result contains a revert reason, try to unpack and return it. if len(result.Revert()) > 0 { return nil, newRevertError(result) @@ -1448,15 +1457,23 @@ func newRPCPendingTransaction(tx *types.Transaction, current *types.Header, conf func newRPCTransactionFromBlockIndex(b *types.Block, index uint64, config *params.ChainConfig, db ethdb.Database) *RPCTransaction { txs := b.Transactions() - borReceipt := rawdb.ReadBorReceipt(db, b.Hash(), b.NumberU64()) - if borReceipt != nil { - tx, _, _, _ := rawdb.ReadBorTransaction(db, borReceipt.TxHash) + if index >= uint64(len(txs)+1) { + return nil + } - if tx != nil { - txs = append(txs, tx) + // If the index out of the range of transactions defined in block body, it means that the transaction is a bor state sync transaction, and we need to fetch it from the database + if index == uint64(len(txs)) { + borReceipt := rawdb.ReadBorReceipt(db, b.Hash(), b.NumberU64(), config) + if borReceipt != nil { + tx, _, _, _ := rawdb.ReadBorTransaction(db, borReceipt.TxHash) + + if tx != nil { + txs = append(txs, tx) + } } } + // If the index is still out of the range after checking bor state sync transaction, it means that the transaction index is invalid if index >= uint64(len(txs)) { return nil } @@ -1597,7 +1614,7 @@ func (api *PublicTransactionPoolAPI) getAllBlockTransactions(ctx context.Context stateSyncPresent := false - borReceipt := rawdb.ReadBorReceipt(api.b.ChainDb(), block.Hash(), block.NumberU64()) + borReceipt := rawdb.ReadBorReceipt(api.b.ChainDb(), block.Hash(), block.NumberU64(), api.b.ChainConfig()) if borReceipt != nil { txHash := types.GetDerivedBorTxHash(types.BorReceiptKey(block.Number().Uint64(), block.Hash())) if txHash != (common.Hash{}) { @@ -1767,7 +1784,7 @@ func (s *PublicTransactionPoolAPI) GetTransactionReceipt(ctx context.Context, ha if borTx { // Fetch bor block receipt - receipt = rawdb.ReadBorReceipt(s.b.ChainDb(), blockHash, blockNumber) + receipt = rawdb.ReadBorReceipt(s.b.ChainDb(), blockHash, blockNumber, s.b.ChainConfig()) } else { receipts, err := s.b.GetReceipts(ctx, blockHash) if err != nil { diff --git a/internal/ethapi/backend.go b/internal/ethapi/backend.go index 1287640b83..14ddbba70e 100644 --- a/internal/ethapi/backend.go +++ b/internal/ethapi/backend.go @@ -48,10 +48,11 @@ type Backend interface { ChainDb() ethdb.Database AccountManager() *accounts.Manager ExtRPCEnabled() bool - RPCGasCap() uint64 // global gas cap for eth_call over rpc: DoS protection - RPCEVMTimeout() time.Duration // global timeout for eth_call over rpc: DoS protection - RPCTxFeeCap() float64 // global tx fee cap for all transaction related APIs - UnprotectedAllowed() bool // allows only for EIP155 transactions. 
+ RPCGasCap() uint64 // global gas cap for eth_call over rpc: DoS protection + RPCRpcReturnDataLimit() uint64 // Maximum size (in bytes) a result of an rpc request could have + RPCEVMTimeout() time.Duration // global timeout for eth_call over rpc: DoS protection + RPCTxFeeCap() float64 // global tx fee cap for all transaction related APIs + UnprotectedAllowed() bool // allows only for EIP155 transactions. // Blockchain API SetHead(number uint64) diff --git a/internal/web3ext/web3ext.go b/internal/web3ext/web3ext.go index dcdd5baf23..c823f096d6 100644 --- a/internal/web3ext/web3ext.go +++ b/internal/web3ext/web3ext.go @@ -192,6 +192,34 @@ web3._extend({ name: 'stopWS', call: 'admin_stopWS' }), + new web3._extend.Method({ + name: 'getExecutionPoolSize', + call: 'admin_getExecutionPoolSize' + }), + new web3._extend.Method({ + name: 'getExecutionPoolRequestTimeout', + call: 'admin_getExecutionPoolRequestTimeout' + }), + // new web3._extend.Method({ + // name: 'setWSExecutionPoolRequestTimeout', + // call: 'admin_setWSExecutionPoolRequestTimeout', + // params: 1 + // }), + // new web3._extend.Method({ + // name: 'setHttpExecutionPoolRequestTimeout', + // call: 'admin_setHttpExecutionPoolRequestTimeout', + // params: 1 + // }), + new web3._extend.Method({ + name: 'setWSExecutionPoolSize', + call: 'admin_setWSExecutionPoolSize', + params: 1 + }), + new web3._extend.Method({ + name: 'setHttpExecutionPoolSize', + call: 'admin_setHttpExecutionPoolSize', + params: 1 + }), ], properties: [ new web3._extend.Property({ diff --git a/les/api_backend.go b/les/api_backend.go index c716a3967f..786e77ed46 100644 --- a/les/api_backend.go +++ b/les/api_backend.go @@ -294,6 +294,10 @@ func (b *LesApiBackend) RPCGasCap() uint64 { return b.eth.config.RPCGasCap } +func (b *LesApiBackend) RPCRpcReturnDataLimit() uint64 { + return b.eth.config.RPCReturnDataLimit +} + func (b *LesApiBackend) RPCEVMTimeout() time.Duration { return b.eth.config.RPCEVMTimeout } diff --git a/miner/worker.go b/miner/worker.go index 797e7ea980..30809cd558 100644 --- a/miner/worker.go +++ b/miner/worker.go @@ -1314,9 +1314,9 @@ func (w *worker) commit(ctx context.Context, env *environment, interval func(), tracing.SetAttributes( span, - attribute.Int("number", int(block.Number().Uint64())), - attribute.String("hash", block.Hash().String()), - attribute.String("sealhash", w.engine.SealHash(block.Header()).String()), + attribute.Int("number", int(env.header.Number.Uint64())), + attribute.String("hash", env.header.Hash().String()), + attribute.String("sealhash", w.engine.SealHash(env.header).String()), attribute.Int("len of env.txs", len(env.txs)), attribute.Bool("error", err != nil), ) diff --git a/node/api.go b/node/api.go index 1b32399f63..f8e7f944a6 100644 --- a/node/api.go +++ b/node/api.go @@ -20,6 +20,7 @@ import ( "context" "fmt" "strings" + "time" "github.com/ethereum/go-ethereum/common/hexutil" "github.com/ethereum/go-ethereum/crypto" @@ -342,3 +343,91 @@ func (s *publicWeb3API) ClientVersion() string { func (s *publicWeb3API) Sha3(input hexutil.Bytes) hexutil.Bytes { return crypto.Keccak256(input) } + +type ExecutionPoolSize struct { + HttpLimit int + WSLimit int +} + +type ExecutionPoolRequestTimeout struct { + HttpLimit time.Duration + WSLimit time.Duration +} + +func (api *privateAdminAPI) GetExecutionPoolSize() *ExecutionPoolSize { + var httpLimit int + if api.node.http.host != "" { + httpLimit = api.node.http.httpHandler.Load().(*rpcHandler).server.GetExecutionPoolSize() + } + + var wsLimit int + if api.node.ws.host != "" { + 
wsLimit = api.node.ws.wsHandler.Load().(*rpcHandler).server.GetExecutionPoolSize() + } + + executionPoolSize := &ExecutionPoolSize{ + HttpLimit: httpLimit, + WSLimit: wsLimit, + } + + return executionPoolSize +} + +func (api *privateAdminAPI) GetExecutionPoolRequestTimeout() *ExecutionPoolRequestTimeout { + var httpLimit time.Duration + if api.node.http.host != "" { + httpLimit = api.node.http.httpHandler.Load().(*rpcHandler).server.GetExecutionPoolRequestTimeout() + } + + var wsLimit time.Duration + if api.node.ws.host != "" { + wsLimit = api.node.ws.wsHandler.Load().(*rpcHandler).server.GetExecutionPoolRequestTimeout() + } + + executionPoolRequestTimeout := &ExecutionPoolRequestTimeout{ + HttpLimit: httpLimit, + WSLimit: wsLimit, + } + + return executionPoolRequestTimeout +} + +// func (api *privateAdminAPI) SetWSExecutionPoolRequestTimeout(n int) *ExecutionPoolRequestTimeout { +// if api.node.ws.host != "" { +// api.node.ws.wsConfig.executionPoolRequestTimeout = time.Duration(n) * time.Millisecond +// api.node.ws.wsHandler.Load().(*rpcHandler).server.SetExecutionPoolRequestTimeout(time.Duration(n) * time.Millisecond) +// log.Warn("updating ws execution pool request timeout", "timeout", n) +// } + +// return api.GetExecutionPoolRequestTimeout() +// } + +// func (api *privateAdminAPI) SetHttpExecutionPoolRequestTimeout(n int) *ExecutionPoolRequestTimeout { +// if api.node.http.host != "" { +// api.node.http.httpConfig.executionPoolRequestTimeout = time.Duration(n) * time.Millisecond +// api.node.http.httpHandler.Load().(*rpcHandler).server.SetExecutionPoolRequestTimeout(time.Duration(n) * time.Millisecond) +// log.Warn("updating http execution pool request timeout", "timeout", n) +// } + +// return api.GetExecutionPoolRequestTimeout() +// } + +func (api *privateAdminAPI) SetWSExecutionPoolSize(n int) *ExecutionPoolSize { + if api.node.ws.host != "" { + api.node.ws.wsConfig.executionPoolSize = uint64(n) + api.node.ws.wsHandler.Load().(*rpcHandler).server.SetExecutionPoolSize(n) + log.Warn("updating ws execution pool size", "threads", n) + } + + return api.GetExecutionPoolSize() +} + +func (api *privateAdminAPI) SetHttpExecutionPoolSize(n int) *ExecutionPoolSize { + if api.node.http.host != "" { + api.node.http.httpConfig.executionPoolSize = uint64(n) + api.node.http.httpHandler.Load().(*rpcHandler).server.SetExecutionPoolSize(n) + log.Warn("updating http execution pool size", "threads", n) + } + + return api.GetExecutionPoolSize() +} diff --git a/node/config.go b/node/config.go index 853190c95f..c8f40c1062 100644 --- a/node/config.go +++ b/node/config.go @@ -25,6 +25,7 @@ import ( "runtime" "strings" "sync" + "time" "github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/crypto" @@ -204,6 +205,14 @@ type Config struct { // JWTSecret is the hex-encoded jwt secret. 
JWTSecret string `toml:",omitempty"` + + // Maximum number of messages in a batch + RPCBatchLimit uint64 `toml:",omitempty"` + // Configs for RPC execution pool + WSJsonRPCExecutionPoolSize uint64 `toml:",omitempty"` + WSJsonRPCExecutionPoolRequestTimeout time.Duration `toml:",omitempty"` + HTTPJsonRPCExecutionPoolSize uint64 `toml:",omitempty"` + HTTPJsonRPCExecutionPoolRequestTimeout time.Duration `toml:",omitempty"` } // IPCEndpoint resolves an IPC endpoint based on a configured value, taking into diff --git a/node/node.go b/node/node.go index e12bcf6675..5cf233d17a 100644 --- a/node/node.go +++ b/node/node.go @@ -105,7 +105,7 @@ func New(conf *Config) (*Node, error) { node := &Node{ config: conf, - inprocHandler: rpc.NewServer(), + inprocHandler: rpc.NewServer(0, 0), eventmux: new(event.TypeMux), log: conf.Logger, stop: make(chan struct{}), @@ -113,6 +113,9 @@ func New(conf *Config) (*Node, error) { databases: make(map[*closeTrackingDB]struct{}), } + // set RPC batch limit + node.inprocHandler.SetRPCBatchLimit(conf.RPCBatchLimit) + // Register built-in APIs. node.rpcAPIs = append(node.rpcAPIs, node.apis()...) @@ -153,10 +156,10 @@ func New(conf *Config) (*Node, error) { } // Configure RPC servers. - node.http = newHTTPServer(node.log, conf.HTTPTimeouts) - node.httpAuth = newHTTPServer(node.log, conf.HTTPTimeouts) - node.ws = newHTTPServer(node.log, rpc.DefaultHTTPTimeouts) - node.wsAuth = newHTTPServer(node.log, rpc.DefaultHTTPTimeouts) + node.http = newHTTPServer(node.log, conf.HTTPTimeouts, conf.RPCBatchLimit) + node.httpAuth = newHTTPServer(node.log, conf.HTTPTimeouts, conf.RPCBatchLimit) + node.ws = newHTTPServer(node.log, rpc.DefaultHTTPTimeouts, conf.RPCBatchLimit) + node.wsAuth = newHTTPServer(node.log, rpc.DefaultHTTPTimeouts, conf.RPCBatchLimit) node.ipc = newIPCServer(node.log, conf.IPCEndpoint()) return node, nil @@ -402,10 +405,12 @@ func (n *Node) startRPC() error { return err } if err := server.enableRPC(apis, httpConfig{ - CorsAllowedOrigins: n.config.HTTPCors, - Vhosts: n.config.HTTPVirtualHosts, - Modules: n.config.HTTPModules, - prefix: n.config.HTTPPathPrefix, + CorsAllowedOrigins: n.config.HTTPCors, + Vhosts: n.config.HTTPVirtualHosts, + Modules: n.config.HTTPModules, + prefix: n.config.HTTPPathPrefix, + executionPoolSize: n.config.HTTPJsonRPCExecutionPoolSize, + executionPoolRequestTimeout: n.config.HTTPJsonRPCExecutionPoolRequestTimeout, }); err != nil { return err } @@ -419,9 +424,11 @@ func (n *Node) startRPC() error { return err } if err := server.enableWS(n.rpcAPIs, wsConfig{ - Modules: n.config.WSModules, - Origins: n.config.WSOrigins, - prefix: n.config.WSPathPrefix, + Modules: n.config.WSModules, + Origins: n.config.WSOrigins, + prefix: n.config.WSPathPrefix, + executionPoolSize: n.config.WSJsonRPCExecutionPoolSize, + executionPoolRequestTimeout: n.config.WSJsonRPCExecutionPoolRequestTimeout, }); err != nil { return err } diff --git a/node/rpcstack.go b/node/rpcstack.go index eabf1dcae7..cba9a22f6f 100644 --- a/node/rpcstack.go +++ b/node/rpcstack.go @@ -28,6 +28,7 @@ import ( "strings" "sync" "sync/atomic" + "time" "github.com/rs/cors" @@ -42,6 +43,10 @@ type httpConfig struct { Vhosts []string prefix string // path prefix on which to mount http handler jwtSecret []byte // optional JWT secret + + // Execution pool config + executionPoolSize uint64 + executionPoolRequestTimeout time.Duration } // wsConfig is the JSON-RPC/Websocket configuration @@ -50,6 +55,10 @@ type wsConfig struct { Modules []string prefix string // path prefix on which to mount ws 
handler jwtSecret []byte // optional JWT secret + + // Execution pool config + executionPoolSize uint64 + executionPoolRequestTimeout time.Duration } type rpcHandler struct { @@ -81,10 +90,12 @@ type httpServer struct { port int handlerNames map[string]string + + RPCBatchLimit uint64 } -func newHTTPServer(log log.Logger, timeouts rpc.HTTPTimeouts) *httpServer { - h := &httpServer{log: log, timeouts: timeouts, handlerNames: make(map[string]string)} +func newHTTPServer(log log.Logger, timeouts rpc.HTTPTimeouts, rpcBatchLimit uint64) *httpServer { + h := &httpServer{log: log, timeouts: timeouts, handlerNames: make(map[string]string), RPCBatchLimit: rpcBatchLimit} h.httpHandler.Store((*rpcHandler)(nil)) h.wsHandler.Store((*rpcHandler)(nil)) @@ -282,7 +293,8 @@ func (h *httpServer) enableRPC(apis []rpc.API, config httpConfig) error { } // Create RPC server and handler. - srv := rpc.NewServer() + srv := rpc.NewServer(config.executionPoolSize, config.executionPoolRequestTimeout) + srv.SetRPCBatchLimit(h.RPCBatchLimit) if err := RegisterApis(apis, config.Modules, srv, false); err != nil { return err } @@ -313,7 +325,8 @@ func (h *httpServer) enableWS(apis []rpc.API, config wsConfig) error { return fmt.Errorf("JSON-RPC over WebSocket is already enabled") } // Create RPC server and handler. - srv := rpc.NewServer() + srv := rpc.NewServer(config.executionPoolSize, config.executionPoolRequestTimeout) + srv.SetRPCBatchLimit(h.RPCBatchLimit) if err := RegisterApis(apis, config.Modules, srv, false); err != nil { return err } diff --git a/node/rpcstack_test.go b/node/rpcstack_test.go index 60fcab5a90..49db8435ac 100644 --- a/node/rpcstack_test.go +++ b/node/rpcstack_test.go @@ -234,7 +234,7 @@ func Test_checkPath(t *testing.T) { func createAndStartServer(t *testing.T, conf *httpConfig, ws bool, wsConf *wsConfig) *httpServer { t.Helper() - srv := newHTTPServer(testlog.Logger(t, log.LvlDebug), rpc.DefaultHTTPTimeouts) + srv := newHTTPServer(testlog.Logger(t, log.LvlDebug), rpc.DefaultHTTPTimeouts, 100) assert.NoError(t, srv.enableRPC(nil, *conf)) if ws { assert.NoError(t, srv.enableWS(nil, *wsConf)) diff --git a/packaging/templates/mainnet-v1/archive/config.toml b/packaging/templates/mainnet-v1/archive/config.toml index 9eaafd3bee..5491c784ef 100644 --- a/packaging/templates/mainnet-v1/archive/config.toml +++ b/packaging/templates/mainnet-v1/archive/config.toml @@ -4,6 +4,8 @@ chain = "mainnet" datadir = "/var/lib/bor/data" # ancient = "" # keystore = "" +# "rpc.batchlimit" = 100 +# "rpc.returndatalimit" = 100000 syncmode = "full" gcmode = "archive" # snapshot = true @@ -65,6 +67,8 @@ gcmode = "archive" vhosts = ["*"] corsdomain = ["*"] # prefix = "" + # ep-size = 40 + # ep-requesttimeout = "0s" [jsonrpc.ws] enabled = true port = 8546 @@ -72,6 +76,8 @@ gcmode = "archive" # host = "localhost" # api = ["web3", "net"] origins = ["*"] + # ep-size = 40 + # ep-requesttimeout = "0s" # [jsonrpc.graphql] # enabled = false # port = 0 diff --git a/packaging/templates/mainnet-v1/sentry/sentry/bor/config.toml b/packaging/templates/mainnet-v1/sentry/sentry/bor/config.toml index 94dd6634f0..90df84dc07 100644 --- a/packaging/templates/mainnet-v1/sentry/sentry/bor/config.toml +++ b/packaging/templates/mainnet-v1/sentry/sentry/bor/config.toml @@ -4,6 +4,8 @@ chain = "mainnet" datadir = "/var/lib/bor/data" # ancient = "" # keystore = "" +# "rpc.batchlimit" = 100 +# "rpc.returndatalimit" = 100000 syncmode = "full" # gcmode = "full" # snapshot = true @@ -65,6 +67,8 @@ syncmode = "full" vhosts = ["*"] corsdomain = ["*"] # prefix 
= "" + # ep-size = 40 +# ep-requesttimeout = "0s" # [jsonrpc.ws] # enabled = false # port = 8546 @@ -72,6 +76,8 @@ syncmode = "full" # host = "localhost" # api = ["web3", "net"] # origins = ["*"] + # ep-size = 40 +# ep-requesttimeout = "0s" # [jsonrpc.graphql] # enabled = false # port = 0 diff --git a/packaging/templates/mainnet-v1/sentry/validator/bor/config.toml b/packaging/templates/mainnet-v1/sentry/validator/bor/config.toml index 9c55683c96..9e2d80fd2a 100644 --- a/packaging/templates/mainnet-v1/sentry/validator/bor/config.toml +++ b/packaging/templates/mainnet-v1/sentry/validator/bor/config.toml @@ -6,6 +6,8 @@ chain = "mainnet" datadir = "/var/lib/bor/data" # ancient = "" # keystore = "$BOR_DIR/keystore" +# "rpc.batchlimit" = 100 +# "rpc.returndatalimit" = 100000 syncmode = "full" # gcmode = "full" # snapshot = true @@ -67,6 +69,8 @@ syncmode = "full" vhosts = ["*"] corsdomain = ["*"] # prefix = "" + # ep-size = 40 +# ep-requesttimeout = "0s" # [jsonrpc.ws] # enabled = false # port = 8546 @@ -74,6 +78,8 @@ syncmode = "full" # host = "localhost" # api = ["web3", "net"] # origins = ["*"] + # ep-size = 40 +# ep-requesttimeout = "0s" # [jsonrpc.graphql] # enabled = false # port = 0 diff --git a/packaging/templates/mainnet-v1/without-sentry/bor/config.toml b/packaging/templates/mainnet-v1/without-sentry/bor/config.toml index 573f1f3be8..1e5fd67762 100644 --- a/packaging/templates/mainnet-v1/without-sentry/bor/config.toml +++ b/packaging/templates/mainnet-v1/without-sentry/bor/config.toml @@ -6,6 +6,8 @@ chain = "mainnet" datadir = "/var/lib/bor/data" # ancient = "" # keystore = "$BOR_DIR/keystore" +# "rpc.batchlimit" = 100 +# "rpc.returndatalimit" = 100000 syncmode = "full" # gcmode = "full" # snapshot = true @@ -67,6 +69,8 @@ syncmode = "full" vhosts = ["*"] corsdomain = ["*"] # prefix = "" + # ep-size = 40 +# ep-requesttimeout = "0s" # [jsonrpc.ws] # enabled = false # port = 8546 @@ -74,6 +78,8 @@ syncmode = "full" # host = "localhost" # api = ["web3", "net"] # origins = ["*"] + # ep-size = 40 +# ep-requesttimeout = "0s" # [jsonrpc.graphql] # enabled = false # port = 0 diff --git a/packaging/templates/package_scripts/control b/packaging/templates/package_scripts/control index cb62165a5e..c036378b0e 100644 --- a/packaging/templates/package_scripts/control +++ b/packaging/templates/package_scripts/control @@ -1,5 +1,5 @@ Source: bor -Version: 0.3.3 +Version: 0.3.4-stable Section: develop Priority: standard Maintainer: Polygon diff --git a/packaging/templates/package_scripts/control.arm64 b/packaging/templates/package_scripts/control.arm64 index 56276cb43a..3f45d5564f 100644 --- a/packaging/templates/package_scripts/control.arm64 +++ b/packaging/templates/package_scripts/control.arm64 @@ -1,5 +1,5 @@ Source: bor -Version: 0.3.3 +Version: 0.3.4-stable Section: develop Priority: standard Maintainer: Polygon diff --git a/packaging/templates/package_scripts/control.profile.amd64 b/packaging/templates/package_scripts/control.profile.amd64 index 4ddd8424ff..e3261e3f57 100644 --- a/packaging/templates/package_scripts/control.profile.amd64 +++ b/packaging/templates/package_scripts/control.profile.amd64 @@ -1,5 +1,5 @@ Source: bor-profile -Version: 0.3.3 +Version: 0.3.4-stable Section: develop Priority: standard Maintainer: Polygon diff --git a/packaging/templates/package_scripts/control.profile.arm64 b/packaging/templates/package_scripts/control.profile.arm64 index 9f9301c925..93c2206770 100644 --- a/packaging/templates/package_scripts/control.profile.arm64 +++ 
b/packaging/templates/package_scripts/control.profile.arm64 @@ -1,5 +1,5 @@ Source: bor-profile -Version: 0.3.3 +Version: 0.3.4-stable Section: develop Priority: standard Maintainer: Polygon diff --git a/packaging/templates/package_scripts/control.validator b/packaging/templates/package_scripts/control.validator index d43250c891..3c5633d11f 100644 --- a/packaging/templates/package_scripts/control.validator +++ b/packaging/templates/package_scripts/control.validator @@ -1,5 +1,5 @@ Source: bor-profile -Version: 0.3.3 +Version: 0.3.4-stable Section: develop Priority: standard Maintainer: Polygon diff --git a/packaging/templates/package_scripts/control.validator.arm64 b/packaging/templates/package_scripts/control.validator.arm64 index 5a50f8cb39..e475d91bd3 100644 --- a/packaging/templates/package_scripts/control.validator.arm64 +++ b/packaging/templates/package_scripts/control.validator.arm64 @@ -1,5 +1,5 @@ Source: bor-profile -Version: 0.3.3 +Version: 0.3.4-stable Section: develop Priority: standard Maintainer: Polygon diff --git a/packaging/templates/testnet-v4/archive/config.toml b/packaging/templates/testnet-v4/archive/config.toml index 1762fdf117..fb9ffd0a17 100644 --- a/packaging/templates/testnet-v4/archive/config.toml +++ b/packaging/templates/testnet-v4/archive/config.toml @@ -4,6 +4,8 @@ chain = "mumbai" datadir = "/var/lib/bor/data" # ancient = "" # keystore = "" +# "rpc.batchlimit" = 100 +# "rpc.returndatalimit" = 100000 syncmode = "full" gcmode = "archive" # snapshot = true @@ -65,6 +67,8 @@ gcmode = "archive" vhosts = ["*"] corsdomain = ["*"] # prefix = "" + # ep-size = 40 + # ep-requesttimeout = "0s" [jsonrpc.ws] enabled = true port = 8546 @@ -72,6 +76,8 @@ gcmode = "archive" # host = "localhost" # api = ["web3", "net"] origins = ["*"] + # ep-size = 40 + # ep-requesttimeout = "0s" # [jsonrpc.graphql] # enabled = false # port = 0 diff --git a/packaging/templates/testnet-v4/sentry/sentry/bor/config.toml b/packaging/templates/testnet-v4/sentry/sentry/bor/config.toml index ae191cec2c..9884c0eccc 100644 --- a/packaging/templates/testnet-v4/sentry/sentry/bor/config.toml +++ b/packaging/templates/testnet-v4/sentry/sentry/bor/config.toml @@ -4,6 +4,8 @@ chain = "mumbai" datadir = "/var/lib/bor/data" # ancient = "" # keystore = "" +# "rpc.batchlimit" = 100 +# "rpc.returndatalimit" = 100000 syncmode = "full" # gcmode = "full" # snapshot = true @@ -65,6 +67,8 @@ syncmode = "full" vhosts = ["*"] corsdomain = ["*"] # prefix = "" + # ep-size = 40 + # ep-requesttimeout = "0s" # [jsonrpc.ws] # enabled = false # port = 8546 @@ -72,6 +76,8 @@ syncmode = "full" # host = "localhost" # api = ["web3", "net"] # origins = ["*"] + # ep-size = 40 + # ep-requesttimeout = "0s" # [jsonrpc.graphql] # enabled = false # port = 0 diff --git a/packaging/templates/testnet-v4/sentry/validator/bor/config.toml b/packaging/templates/testnet-v4/sentry/validator/bor/config.toml index b441cc137d..49c47fedd4 100644 --- a/packaging/templates/testnet-v4/sentry/validator/bor/config.toml +++ b/packaging/templates/testnet-v4/sentry/validator/bor/config.toml @@ -6,6 +6,8 @@ chain = "mumbai" datadir = "/var/lib/bor/data" # ancient = "" # keystore = "$BOR_DIR/keystore" +# "rpc.batchlimit" = 100 +# "rpc.returndatalimit" = 100000 syncmode = "full" # gcmode = "full" # snapshot = true @@ -67,6 +69,8 @@ syncmode = "full" vhosts = ["*"] corsdomain = ["*"] # prefix = "" + # ep-size = 40 +# ep-requesttimeout = "0s" # [jsonrpc.ws] # enabled = false # port = 8546 @@ -74,6 +78,8 @@ syncmode = "full" # host = "localhost" # api = ["web3", 
"net"] # origins = ["*"] + # ep-size = 40 +# ep-requesttimeout = "0s" # [jsonrpc.graphql] # enabled = false # port = 0 diff --git a/packaging/templates/testnet-v4/without-sentry/bor/config.toml b/packaging/templates/testnet-v4/without-sentry/bor/config.toml index 05a254e184..2fb83a6ae2 100644 --- a/packaging/templates/testnet-v4/without-sentry/bor/config.toml +++ b/packaging/templates/testnet-v4/without-sentry/bor/config.toml @@ -6,6 +6,8 @@ chain = "mumbai" datadir = "/var/lib/bor/data" # ancient = "" # keystore = "$BOR_DIR/keystore" +# "rpc.batchlimit" = 100 +# "rpc.returndatalimit" = 100000 syncmode = "full" # gcmode = "full" # snapshot = true @@ -67,6 +69,8 @@ syncmode = "full" vhosts = ["*"] corsdomain = ["*"] # prefix = "" + # ep-size = 40 + # ep-requesttimeout = "0s" # [jsonrpc.ws] # enabled = false # port = 8546 @@ -74,6 +78,8 @@ syncmode = "full" # host = "localhost" # api = ["web3", "net"] # origins = ["*"] + # ep-size = 40 + # ep-requesttimeout = "0s" # [jsonrpc.graphql] # enabled = false # port = 0 diff --git a/params/config.go b/params/config.go index 94729224bb..9833c9eac5 100644 --- a/params/config.go +++ b/params/config.go @@ -617,6 +617,10 @@ func (c *BorConfig) IsDelhi(number *big.Int) bool { return isForked(c.DelhiBlock, number) } +func (c *BorConfig) IsSprintStart(number uint64) bool { + return number%c.CalculateSprint(number) == 0 +} + func (c *BorConfig) calculateBorConfigHelper(field map[string]uint64, number uint64) uint64 { keys := make([]string, 0, len(field)) for k := range field { diff --git a/params/protocol_params.go b/params/protocol_params.go index d468af5d3c..103266caff 100644 --- a/params/protocol_params.go +++ b/params/protocol_params.go @@ -125,7 +125,8 @@ const ( ElasticityMultiplier = 2 // Bounds the maximum gas limit an EIP-1559 block may have. InitialBaseFee = 1000000000 // Initial base fee for EIP-1559 blocks. 
- MaxCodeSize = 24576 // Maximum bytecode to permit for a contract + MaxCodeSize = 24576 // Maximum bytecode to permit for a contract + MaxInitCodeSize = 2 * MaxCodeSize // Maximum initcode to permit in a creation transaction and create instructions // Precompiled contract gas prices diff --git a/params/version.go b/params/version.go index 199e49095f..37b67a87e9 100644 --- a/params/version.go +++ b/params/version.go @@ -23,7 +23,7 @@ import ( const ( VersionMajor = 0 // Major version component of the current release VersionMinor = 3 // Minor version component of the current release - VersionPatch = 3 // Patch version component of the current release + VersionPatch = 4 // Patch version component of the current release VersionMeta = "stable" // Version metadata to append to the version string ) diff --git a/rpc/client.go b/rpc/client.go index d3ce029775..fc286fe8dc 100644 --- a/rpc/client.go +++ b/rpc/client.go @@ -112,7 +112,7 @@ func (c *Client) newClientConn(conn ServerCodec) *clientConn { ctx := context.Background() ctx = context.WithValue(ctx, clientContextKey{}, c) ctx = context.WithValue(ctx, peerInfoContextKey{}, conn.peerInfo()) - handler := newHandler(ctx, conn, c.idgen, c.services) + handler := newHandler(ctx, conn, c.idgen, c.services, NewExecutionPool(100, 0)) return &clientConn{conn, handler} } diff --git a/rpc/client_test.go b/rpc/client_test.go index fa6010bb19..1bebd27677 100644 --- a/rpc/client_test.go +++ b/rpc/client_test.go @@ -33,12 +33,14 @@ import ( "time" "github.com/davecgh/go-spew/spew" + "github.com/ethereum/go-ethereum/log" ) func TestClientRequest(t *testing.T) { server := newTestServer() defer server.Stop() + client := DialInProc(server) defer client.Close() @@ -46,6 +48,7 @@ func TestClientRequest(t *testing.T) { if err := client.Call(&resp, "test_echo", "hello", 10, &echoArgs{"world"}); err != nil { t.Fatal(err) } + if !reflect.DeepEqual(resp, echoResult{"hello", 10, &echoArgs{"world"}}) { t.Errorf("incorrect result %#v", resp) } @@ -407,7 +410,7 @@ func TestClientSubscriptionUnsubscribeServer(t *testing.T) { t.Parallel() // Create the server. - srv := NewServer() + srv := NewServer(0, 0) srv.RegisterName("nftest", new(notificationTestService)) p1, p2 := net.Pipe() recorder := &unsubscribeRecorder{ServerCodec: NewCodec(p1)} @@ -443,7 +446,7 @@ func TestClientSubscriptionChannelClose(t *testing.T) { t.Parallel() var ( - srv = NewServer() + srv = NewServer(0, 0) httpsrv = httptest.NewServer(srv.WebsocketHandler(nil)) wsURL = "ws:" + strings.TrimPrefix(httpsrv.URL, "http:") ) diff --git a/rpc/endpoints.go b/rpc/endpoints.go index d78ebe2858..2a539d4fc5 100644 --- a/rpc/endpoints.go +++ b/rpc/endpoints.go @@ -27,7 +27,7 @@ import ( func StartIPCEndpoint(ipcEndpoint string, apis []API) (net.Listener, *Server, error) { // Register all the APIs exposed by the services. 
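The new MaxInitCodeSize constant above is simply twice MaxCodeSize (2 * 24576 = 49152 bytes) and, per its comment, bounds the initcode allowed in creation transactions and CREATE-style instructions. A minimal, self-contained sketch of the kind of check such a limit enables; this is illustrative only and not code taken from this patch series:

    package main

    import (
        "errors"
        "fmt"
    )

    const (
        maxCodeSize     = 24576           // mirrors params.MaxCodeSize
        maxInitCodeSize = 2 * maxCodeSize // mirrors params.MaxInitCodeSize (49152 bytes)
    )

    // checkInitCode is a hypothetical helper: reject contract-creation payloads
    // whose initcode is larger than the configured maximum.
    func checkInitCode(isCreate bool, initCode []byte) error {
        if isCreate && len(initCode) > maxInitCodeSize {
            return errors.New("max initcode size exceeded")
        }
        return nil
    }

    func main() {
        fmt.Println(checkInitCode(true, make([]byte, maxInitCodeSize+1))) // max initcode size exceeded
        fmt.Println(checkInitCode(true, make([]byte, 1024)))              // <nil>
    }
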
var ( - handler = NewServer() + handler = NewServer(0, 0) regMap = make(map[string]struct{}) registered []string ) diff --git a/rpc/execution_pool.go b/rpc/execution_pool.go new file mode 100644 index 0000000000..d0f5ab5daa --- /dev/null +++ b/rpc/execution_pool.go @@ -0,0 +1,99 @@ +package rpc + +import ( + "context" + "sync" + "sync/atomic" + "time" + + "github.com/JekaMas/workerpool" +) + +type SafePool struct { + executionPool *atomic.Pointer[workerpool.WorkerPool] + + sync.RWMutex + + timeout time.Duration + size int + + // Skip sending task to execution pool + fastPath bool +} + +func NewExecutionPool(initialSize int, timeout time.Duration) *SafePool { + sp := &SafePool{ + size: initialSize, + timeout: timeout, + } + + if initialSize == 0 { + sp.fastPath = true + + return sp + } + + var ptr atomic.Pointer[workerpool.WorkerPool] + + p := workerpool.New(initialSize) + ptr.Store(p) + sp.executionPool = &ptr + + return sp +} + +func (s *SafePool) Submit(ctx context.Context, fn func() error) (<-chan error, bool) { + if s.fastPath { + go func() { + _ = fn() + }() + + return nil, true + } + + if s.executionPool == nil { + return nil, false + } + + pool := s.executionPool.Load() + if pool == nil { + return nil, false + } + + return pool.Submit(ctx, fn, s.Timeout()), true +} + +func (s *SafePool) ChangeSize(n int) { + oldPool := s.executionPool.Swap(workerpool.New(n)) + + if oldPool != nil { + go func() { + oldPool.StopWait() + }() + } + + s.Lock() + s.size = n + s.Unlock() +} + +func (s *SafePool) ChangeTimeout(n time.Duration) { + s.Lock() + defer s.Unlock() + + s.timeout = n +} + +func (s *SafePool) Timeout() time.Duration { + s.RLock() + defer s.RUnlock() + + return s.timeout +} + +func (s *SafePool) Size() int { + s.RLock() + defer s.RUnlock() + + return s.size +} diff --git a/rpc/handler.go b/rpc/handler.go index 488a29300a..f1fb555c00 100644 --- a/rpc/handler.go +++ b/rpc/handler.go @@ -34,21 +34,20 @@ import ( // // The entry points for incoming messages are: // -// h.handleMsg(message) -// h.handleBatch(message) +// h.handleMsg(message) +// h.handleBatch(message) // // Outgoing calls use the requestOp struct. Register the request before sending it // on the connection: // -// op := &requestOp{ids: ...} -// h.addRequestOp(op) +// op := &requestOp{ids: ...} +// h.addRequestOp(op) // // Now send the request, then wait for the reply to be delivered through handleMsg: // -// if err := op.wait(...); err != nil { -// h.removeRequestOp(op) // timeout, etc. -// } -// +// if err := op.wait(...); err != nil { +// h.removeRequestOp(op) // timeout, etc. 
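The new rpc/execution_pool.go above exposes a small API: NewExecutionPool builds the pool, Submit hands a task to it (or, at size 0, just spawns a goroutine), and ChangeSize/ChangeTimeout adjust it at runtime. A minimal sketch of driving SafePool directly; the pool size, timeout and error handling are chosen purely for illustration:

    package main

    import (
        "context"
        "fmt"
        "time"

        "github.com/ethereum/go-ethereum/rpc"
    )

    func main() {
        // 40 workers and a 5s per-task timeout are illustrative values, not
        // defaults taken from this patch.
        pool := rpc.NewExecutionPool(40, 5*time.Second)

        resCh, ok := pool.Submit(context.Background(), func() error {
            // the actual RPC handler work would run here
            return nil
        })
        if !ok {
            fmt.Println("execution pool not initialised")
            return
        }
        // Submit returns a nil channel on the size-0 fast path, where the task
        // simply runs in its own goroutine.
        if resCh != nil {
            fmt.Println("task result:", <-resCh)
        }
    }
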
+// } type handler struct { reg *serviceRegistry unsubscribeCb *callback @@ -64,6 +63,8 @@ type handler struct { subLock sync.Mutex serverSubs map[ID]*Subscription + + executionPool *SafePool } type callProc struct { @@ -71,7 +72,7 @@ type callProc struct { notifiers []*Notifier } -func newHandler(connCtx context.Context, conn jsonWriter, idgen func() ID, reg *serviceRegistry) *handler { +func newHandler(connCtx context.Context, conn jsonWriter, idgen func() ID, reg *serviceRegistry, pool *SafePool) *handler { rootCtx, cancelRoot := context.WithCancel(connCtx) h := &handler{ reg: reg, @@ -84,11 +85,13 @@ func newHandler(connCtx context.Context, conn jsonWriter, idgen func() ID, reg * allowSubscribe: true, serverSubs: make(map[ID]*Subscription), log: log.Root(), + executionPool: pool, } if conn.remoteAddr() != "" { h.log = h.log.New("conn", conn.remoteAddr()) } h.unsubscribeCb = newCallback(reflect.Value{}, reflect.ValueOf(h.unsubscribe)) + return h } @@ -219,12 +222,16 @@ func (h *handler) cancelServerSubscriptions(err error) { // startCallProc runs fn in a new goroutine and starts tracking it in the h.calls wait group. func (h *handler) startCallProc(fn func(*callProc)) { h.callWG.Add(1) - go func() { - ctx, cancel := context.WithCancel(h.rootCtx) + + ctx, cancel := context.WithCancel(h.rootCtx) + + h.executionPool.Submit(context.Background(), func() error { defer h.callWG.Done() defer cancel() fn(&callProc{ctx: ctx}) - }() + + return nil + }) } // handleImmediate executes non-call messages. It returns false if the message is a @@ -261,6 +268,7 @@ func (h *handler) handleSubscriptionResult(msg *jsonrpcMessage) { // handleResponse processes method call responses. func (h *handler) handleResponse(msg *jsonrpcMessage) { + op := h.respWait[string(msg.ID)] if op == nil { h.log.Debug("Unsolicited RPC response", "reqid", idForLog{msg.ID}) @@ -281,7 +289,11 @@ func (h *handler) handleResponse(msg *jsonrpcMessage) { return } if op.err = json.Unmarshal(msg.Result, &op.sub.subid); op.err == nil { - go op.sub.run() + h.executionPool.Submit(context.Background(), func() error { + op.sub.run() + return nil + }) + h.clientSubs[op.sub.subid] = op.sub } } diff --git a/rpc/http_test.go b/rpc/http_test.go index c84d7705f2..9737e64e91 100644 --- a/rpc/http_test.go +++ b/rpc/http_test.go @@ -103,7 +103,7 @@ func TestHTTPResponseWithEmptyGet(t *testing.T) { func TestHTTPRespBodyUnlimited(t *testing.T) { const respLength = maxRequestContentLength * 3 - s := NewServer() + s := NewServer(0, 0) defer s.Stop() s.RegisterName("test", largeRespService{respLength}) ts := httptest.NewServer(s) diff --git a/rpc/inproc.go b/rpc/inproc.go index fbe9a40cec..29af5507b9 100644 --- a/rpc/inproc.go +++ b/rpc/inproc.go @@ -26,7 +26,13 @@ func DialInProc(handler *Server) *Client { initctx := context.Background() c, _ := newClient(initctx, func(context.Context) (ServerCodec, error) { p1, p2 := net.Pipe() - go handler.ServeCodec(NewCodec(p1), 0) + + //nolint:contextcheck + handler.executionPool.Submit(initctx, func() error { + handler.ServeCodec(NewCodec(p1), 0) + return nil + }) + return NewCodec(p2), nil }) return c diff --git a/rpc/ipc.go b/rpc/ipc.go index 07a211c627..76fbd13f92 100644 --- a/rpc/ipc.go +++ b/rpc/ipc.go @@ -35,7 +35,11 @@ func (s *Server) ServeListener(l net.Listener) error { return err } log.Trace("Accepted RPC connection", "conn", conn.RemoteAddr()) - go s.ServeCodec(NewCodec(conn), 0) + + s.executionPool.Submit(context.Background(), func() error { + s.ServeCodec(NewCodec(conn), 0) + return nil + }) } } diff 
--git a/rpc/server.go b/rpc/server.go index babc5688e2..04ee2dc87b 100644 --- a/rpc/server.go +++ b/rpc/server.go @@ -18,10 +18,13 @@ package rpc import ( "context" + "fmt" "io" "sync/atomic" + "time" mapset "github.com/deckarep/golang-set" + "github.com/ethereum/go-ethereum/log" ) @@ -47,11 +50,20 @@ type Server struct { idgen func() ID run int32 codecs mapset.Set + + BatchLimit uint64 + executionPool *SafePool } // NewServer creates a new server instance with no registered handlers. -func NewServer() *Server { - server := &Server{idgen: randomIDGenerator(), codecs: mapset.NewSet(), run: 1} +func NewServer(executionPoolSize uint64, executionPoolRequesttimeout time.Duration) *Server { + server := &Server{ + idgen: randomIDGenerator(), + codecs: mapset.NewSet(), + run: 1, + executionPool: NewExecutionPool(int(executionPoolSize), executionPoolRequesttimeout), + } + // Register the default service providing meta information about the RPC service such // as the services and methods it offers. rpcService := &RPCService{server} @@ -59,6 +71,26 @@ func NewServer() *Server { return server } +func (s *Server) SetRPCBatchLimit(batchLimit uint64) { + s.BatchLimit = batchLimit +} + +func (s *Server) SetExecutionPoolSize(n int) { + s.executionPool.ChangeSize(n) +} + +func (s *Server) SetExecutionPoolRequestTimeout(n time.Duration) { + s.executionPool.ChangeTimeout(n) +} + +func (s *Server) GetExecutionPoolRequestTimeout() time.Duration { + return s.executionPool.Timeout() +} + +func (s *Server) GetExecutionPoolSize() int { + return s.executionPool.Size() +} + // RegisterName creates a service for the given receiver type under the given name. When no // methods on the given receiver match the criteria to be either a RPC method or a // subscription an error is returned. 
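Taken together, the reworked Server constructor and the new setters above give callers three knobs: the execution pool size, the per-request timeout, and the RPC batch limit. A short sketch, with illustrative values, of how they might be wired up; passing (0, 0), as the updated tests do, keeps the old goroutine-per-request behaviour:

    package main

    import (
        "time"

        "github.com/ethereum/go-ethereum/rpc"
    )

    func main() {
        // A pool of 40 workers and no initial request timeout; the numbers are
        // illustrative only.
        srv := rpc.NewServer(40, 0)
        defer srv.Stop()

        srv.SetRPCBatchLimit(100)                            // reject batches with more than 100 requests
        srv.SetExecutionPoolSize(64)                         // resize the pool at runtime
        srv.SetExecutionPoolRequestTimeout(30 * time.Second) // per-task timeout

        _ = srv.GetExecutionPoolSize()           // 64
        _ = srv.GetExecutionPoolRequestTimeout() // 30s
    }
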
Otherwise a new service is created and added to the @@ -98,20 +130,34 @@ func (s *Server) serveSingleRequest(ctx context.Context, codec ServerCodec) { return } - h := newHandler(ctx, codec, s.idgen, &s.services) + h := newHandler(ctx, codec, s.idgen, &s.services, s.executionPool) + h.allowSubscribe = false defer h.close(io.EOF, nil) reqs, batch, err := codec.readBatch() if err != nil { if err != io.EOF { - codec.writeJSON(ctx, errorMessage(&invalidMessageError{"parse error"})) + if err1 := codec.writeJSON(ctx, err); err1 != nil { + log.Warn("WARNING - error in reading batch", "err", err1) + return + } } return } + if batch { - h.handleBatch(reqs) + if s.BatchLimit > 0 && len(reqs) > int(s.BatchLimit) { + if err1 := codec.writeJSON(ctx, errorMessage(fmt.Errorf("batch limit %d exceeded: %d requests given", s.BatchLimit, len(reqs)))); err1 != nil { + log.Warn("WARNING - requests given exceeds the batch limit", "err", err1) + log.Debug("batch limit %d exceeded: %d requests given", s.BatchLimit, len(reqs)) + } + } else { + //nolint:contextcheck + h.handleBatch(reqs) + } } else { + //nolint:contextcheck h.handleMsg(reqs[0]) } } diff --git a/rpc/server_test.go b/rpc/server_test.go index e67893710d..166956681b 100644 --- a/rpc/server_test.go +++ b/rpc/server_test.go @@ -29,7 +29,7 @@ import ( ) func TestServerRegisterName(t *testing.T) { - server := NewServer() + server := NewServer(0, 0) service := new(testService) if err := server.RegisterName("test", service); err != nil { diff --git a/rpc/subscription_test.go b/rpc/subscription_test.go index 54a053dba8..cfca1b24b9 100644 --- a/rpc/subscription_test.go +++ b/rpc/subscription_test.go @@ -53,7 +53,7 @@ func TestSubscriptions(t *testing.T) { subCount = len(namespaces) notificationCount = 3 - server = NewServer() + server = NewServer(0, 0) clientConn, serverConn = net.Pipe() out = json.NewEncoder(clientConn) in = json.NewDecoder(clientConn) diff --git a/rpc/testservice_test.go b/rpc/testservice_test.go index 253e263289..2285821779 100644 --- a/rpc/testservice_test.go +++ b/rpc/testservice_test.go @@ -26,7 +26,7 @@ import ( ) func newTestServer() *Server { - server := NewServer() + server := NewServer(0, 0) server.idgen = sequentialIDGenerator() if err := server.RegisterName("test", new(testService)); err != nil { panic(err) diff --git a/rpc/websocket_test.go b/rpc/websocket_test.go index f74b7fd08b..b805ed2023 100644 --- a/rpc/websocket_test.go +++ b/rpc/websocket_test.go @@ -203,7 +203,7 @@ func TestClientWebsocketPing(t *testing.T) { // This checks that the websocket transport can deal with large messages. 
func TestClientWebsocketLargeMessage(t *testing.T) { var ( - srv = NewServer() + srv = NewServer(0, 0) httpsrv = httptest.NewServer(srv.WebsocketHandler(nil)) wsURL = "ws:" + strings.TrimPrefix(httpsrv.URL, "http:") ) diff --git a/tests/bor/bor_test.go b/tests/bor/bor_test.go index d059956e6a..2dc20a915e 100644 --- a/tests/bor/bor_test.go +++ b/tests/bor/bor_test.go @@ -7,6 +7,7 @@ import ( "context" "crypto/ecdsa" "encoding/hex" + "fmt" "io" "math/big" "os" @@ -458,9 +459,21 @@ func TestFetchStateSyncEvents(t *testing.T) { _bor.SetHeimdallClient(h) block = buildNextBlock(t, _bor, chain, block, nil, init.genesis.Config.Bor, nil, res.Result.ValidatorSet.Validators) + + // Validate the state sync transactions set by consensus + validateStateSyncEvents(t, eventRecords, chain.GetStateSync()) + insertNewBlock(t, chain, block) } +func validateStateSyncEvents(t *testing.T, expected []*clerk.EventRecordWithTime, got []*types.StateSyncData) { + require.Equal(t, len(expected), len(got), "number of state sync events should be equal") + + for i := 0; i < len(expected); i++ { + require.Equal(t, expected[i].ID, got[i].ID, fmt.Sprintf("state sync ids should be equal - index: %d, expected: %d, got: %d", i, expected[i].ID, got[i].ID)) + } +} + func TestFetchStateSyncEvents_2(t *testing.T) { init := buildEthereumInstance(t, rawdb.NewMemoryDatabase()) chain := init.ethereum.BlockChain() diff --git a/tests/bor/helper.go b/tests/bor/helper.go index 64d5c299ac..e28076a3b1 100644 --- a/tests/bor/helper.go +++ b/tests/bor/helper.go @@ -360,7 +360,7 @@ func generateFakeStateSyncEvents(sample *clerk.EventRecordWithTime, count int) [ *events[0] = event for i := 1; i < count; i++ { - event.ID = uint64(i) + event.ID = uint64(i + 1) event.Time = event.Time.Add(1 * time.Second) events[i] = &clerk.EventRecordWithTime{} *events[i] = event From 9859f2d43ab22351db091ae6418ca6b648e7829a Mon Sep 17 00:00:00 2001 From: Manav Darji Date: Thu, 23 Feb 2023 18:20:03 +0530 Subject: [PATCH 02/29] core, miner: add sub-spans for tracing (#753) * core, miner: add sub-spans for tracing * fix linters * core: add logs for debugging * core: add more logs to print tdd while reorg * fix linters * core: minor fix * core: remove debug logs * core: use different span for write block and set head --- core/blockchain.go | 80 +++++++++++++++++++++++++++++++++++++--------- miner/worker.go | 4 +-- 2 files changed, 67 insertions(+), 17 deletions(-) diff --git a/core/blockchain.go b/core/blockchain.go index cbcf02fef4..8fb4035f4b 100644 --- a/core/blockchain.go +++ b/core/blockchain.go @@ -18,6 +18,7 @@ package core import ( + "context" "errors" "fmt" "io" @@ -28,11 +29,14 @@ import ( "time" lru "github.com/hashicorp/golang-lru" + "go.opentelemetry.io/otel/attribute" + "go.opentelemetry.io/otel/trace" "github.com/ethereum/go-ethereum" "github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common/mclock" "github.com/ethereum/go-ethereum/common/prque" + "github.com/ethereum/go-ethereum/common/tracing" "github.com/ethereum/go-ethereum/consensus" "github.com/ethereum/go-ethereum/core/rawdb" "github.com/ethereum/go-ethereum/core/state" @@ -1349,43 +1353,89 @@ func (bc *BlockChain) writeBlockWithState(block *types.Block, receipts []*types. } // WriteBlockWithState writes the block and all associated state to the database. 
-func (bc *BlockChain) WriteBlockAndSetHead(block *types.Block, receipts []*types.Receipt, logs []*types.Log, state *state.StateDB, emitHeadEvent bool) (status WriteStatus, err error) { +func (bc *BlockChain) WriteBlockAndSetHead(ctx context.Context, block *types.Block, receipts []*types.Receipt, logs []*types.Log, state *state.StateDB, emitHeadEvent bool) (status WriteStatus, err error) { if !bc.chainmu.TryLock() { return NonStatTy, errChainStopped } defer bc.chainmu.Unlock() - return bc.writeBlockAndSetHead(block, receipts, logs, state, emitHeadEvent) + return bc.writeBlockAndSetHead(ctx, block, receipts, logs, state, emitHeadEvent) } // writeBlockAndSetHead writes the block and all associated state to the database, // and also it applies the given block as the new chain head. This function expects // the chain mutex to be held. -func (bc *BlockChain) writeBlockAndSetHead(block *types.Block, receipts []*types.Receipt, logs []*types.Log, state *state.StateDB, emitHeadEvent bool) (status WriteStatus, err error) { +func (bc *BlockChain) writeBlockAndSetHead(ctx context.Context, block *types.Block, receipts []*types.Receipt, logs []*types.Log, state *state.StateDB, emitHeadEvent bool) (status WriteStatus, err error) { + writeBlockAndSetHeadCtx, span := tracing.StartSpan(ctx, "blockchain.writeBlockAndSetHead") + defer tracing.EndSpan(span) + var stateSyncLogs []*types.Log - if stateSyncLogs, err = bc.writeBlockWithState(block, receipts, logs, state); err != nil { + + tracing.Exec(writeBlockAndSetHeadCtx, "blockchain.writeBlockWithState", func(_ context.Context, span trace.Span) { + stateSyncLogs, err = bc.writeBlockWithState(block, receipts, logs, state) + tracing.SetAttributes( + span, + attribute.Int("number", int(block.Number().Uint64())), + attribute.Bool("error", err != nil), + ) + }) + + if err != nil { return NonStatTy, err } + currentBlock := bc.CurrentBlock() - reorg, err := bc.forker.ReorgNeeded(currentBlock.Header(), block.Header()) + + var reorg bool + + tracing.Exec(ctx, "blockchain.ReorgNeeded", func(_ context.Context, span trace.Span) { + reorg, err = bc.forker.ReorgNeeded(currentBlock.Header(), block.Header()) + tracing.SetAttributes( + span, + attribute.Int("number", int(block.Number().Uint64())), + attribute.Int("current block", int(currentBlock.Number().Uint64())), + attribute.Bool("reorg needed", reorg), + attribute.Bool("error", err != nil), + ) + }) if err != nil { return NonStatTy, err } - if reorg { - // Reorganise the chain if the parent is not the head block - if block.ParentHash() != currentBlock.Hash() { - if err := bc.reorg(currentBlock, block); err != nil { - return NonStatTy, err + + tracing.Exec(ctx, "blockchain.reorg", func(_ context.Context, span trace.Span) { + if reorg { + // Reorganise the chain if the parent is not the head block + if block.ParentHash() != currentBlock.Hash() { + if err = bc.reorg(currentBlock, block); err != nil { + status = NonStatTy + } } + status = CanonStatTy + } else { + status = SideStatTy } - status = CanonStatTy - } else { - status = SideStatTy + + tracing.SetAttributes( + span, + attribute.Int("number", int(block.Number().Uint64())), + attribute.Int("current block", int(currentBlock.Number().Uint64())), + attribute.Bool("reorg needed", reorg), + attribute.Bool("error", err != nil), + attribute.String("status", string(status)), + ) + }) + + if status == NonStatTy { + return } + // Set new head. 
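The tracing pattern used throughout this hunk is: open a parent span with tracing.StartSpan, wrap each sub-step in tracing.Exec against the parent context, and attach attributes to the sub-span. A minimal sketch of that pattern, with placeholder span names and attribute values:

    package main

    import (
        "context"

        "go.opentelemetry.io/otel/attribute"
        "go.opentelemetry.io/otel/trace"

        "github.com/ethereum/go-ethereum/common/tracing"
    )

    func main() {
        // Open the parent span and make sure it is closed when we return.
        parentCtx, span := tracing.StartSpan(context.Background(), "example.parent")
        defer tracing.EndSpan(span)

        // Each sub-step gets its own span nested under the parent context.
        tracing.Exec(parentCtx, "example.step", func(_ context.Context, span trace.Span) {
            // ... the work for this step ...
            tracing.SetAttributes(
                span,
                attribute.Int("number", 42),
                attribute.Bool("error", false),
            )
        })
    }
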
if status == CanonStatTy { - bc.writeHeadBlock(block) + tracing.Exec(ctx, "blockchain.writeHeadBlock", func(_ context.Context, _ trace.Span) { + bc.writeHeadBlock(block) + }) } + bc.futureBlocks.Remove(block.Hash()) if status == CanonStatTy { @@ -1782,7 +1832,7 @@ func (bc *BlockChain) insertChain(chain types.Blocks, verifySeals, setHead bool) // Don't set the head, only insert the block _, err = bc.writeBlockWithState(block, receipts, logs, statedb) } else { - status, err = bc.writeBlockAndSetHead(block, receipts, logs, statedb, false) + status, err = bc.writeBlockAndSetHead(context.Background(), block, receipts, logs, statedb, false) } atomic.StoreUint32(&followupInterrupt, 1) if err != nil { diff --git a/miner/worker.go b/miner/worker.go index 30809cd558..08acc56d55 100644 --- a/miner/worker.go +++ b/miner/worker.go @@ -782,8 +782,8 @@ func (w *worker) resultLoop() { } // Commit block and state to database. - tracing.ElapsedTime(ctx, span, "WriteBlockAndSetHead time taken", func(_ context.Context, _ trace.Span) { - _, err = w.chain.WriteBlockAndSetHead(block, receipts, logs, task.state, true) + tracing.Exec(ctx, "resultLoop.WriteBlockAndSetHead", func(ctx context.Context, span trace.Span) { + _, err = w.chain.WriteBlockAndSetHead(ctx, block, receipts, logs, task.state, true) }) tracing.SetAttributes( From 4ca527255b760fd19ddfaadf06c2ee65c189934c Mon Sep 17 00:00:00 2001 From: Manav Darji Date: Fri, 24 Feb 2023 12:52:45 +0530 Subject: [PATCH 03/29] core: use internal context for sending traces (#755) --- core/blockchain.go | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/core/blockchain.go b/core/blockchain.go index 8fb4035f4b..a66b02065b 100644 --- a/core/blockchain.go +++ b/core/blockchain.go @@ -1388,7 +1388,7 @@ func (bc *BlockChain) writeBlockAndSetHead(ctx context.Context, block *types.Blo var reorg bool - tracing.Exec(ctx, "blockchain.ReorgNeeded", func(_ context.Context, span trace.Span) { + tracing.Exec(writeBlockAndSetHeadCtx, "blockchain.ReorgNeeded", func(_ context.Context, span trace.Span) { reorg, err = bc.forker.ReorgNeeded(currentBlock.Header(), block.Header()) tracing.SetAttributes( span, @@ -1402,7 +1402,7 @@ func (bc *BlockChain) writeBlockAndSetHead(ctx context.Context, block *types.Blo return NonStatTy, err } - tracing.Exec(ctx, "blockchain.reorg", func(_ context.Context, span trace.Span) { + tracing.Exec(writeBlockAndSetHeadCtx, "blockchain.reorg", func(_ context.Context, span trace.Span) { if reorg { // Reorganise the chain if the parent is not the head block if block.ParentHash() != currentBlock.Hash() { @@ -1431,7 +1431,7 @@ func (bc *BlockChain) writeBlockAndSetHead(ctx context.Context, block *types.Blo // Set new head. 
if status == CanonStatTy { - tracing.Exec(ctx, "blockchain.writeHeadBlock", func(_ context.Context, _ trace.Span) { + tracing.Exec(writeBlockAndSetHeadCtx, "blockchain.writeHeadBlock", func(_ context.Context, _ trace.Span) { bc.writeHeadBlock(block) }) } From f973926ef0cf7aff74491819cb337acdbea9e739 Mon Sep 17 00:00:00 2001 From: SHIVAM SHARMA Date: Sun, 26 Feb 2023 21:28:12 +0530 Subject: [PATCH 04/29] core: add : impossible reorg block dump (#754) * add : impossible reorg block dump * chg : 3 seperate files for impossoble reorg dump * add : use exportBlocks method and RLP blocks before writing * chg : small changes --- core/blockchain.go | 71 ++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 71 insertions(+) diff --git a/core/blockchain.go b/core/blockchain.go index a66b02065b..9ff04c4cf8 100644 --- a/core/blockchain.go +++ b/core/blockchain.go @@ -18,12 +18,16 @@ package core import ( + "compress/gzip" "context" "errors" "fmt" "io" "math/big" + "os" + "path/filepath" "sort" + "strings" "sync" "sync/atomic" "time" @@ -2241,6 +2245,35 @@ func (bc *BlockChain) reorg(oldBlock, newBlock *types.Block) error { } else { // len(newChain) == 0 && len(oldChain) > 0 // rewind the canonical chain to a lower point. + + home, err := os.UserHomeDir() + if err != nil { + fmt.Println("Impossible reorg : Unable to get user home dir", "Error", err) + } + outPath := filepath.Join(home, "impossible-reorgs", fmt.Sprintf("%v-impossibleReorg", time.Now().Format(time.RFC3339))) + + if _, err := os.Stat(outPath); errors.Is(err, os.ErrNotExist) { + err := os.MkdirAll(outPath, os.ModePerm) + if err != nil { + log.Error("Impossible reorg : Unable to create Dir", "Error", err) + } + } else { + err = ExportBlocks(oldChain, filepath.Join(outPath, "oldChain.gz")) + if err != nil { + log.Error("Impossible reorg : Unable to export oldChain", "Error", err) + } + + err = ExportBlocks([]*types.Block{oldBlock}, filepath.Join(outPath, "oldBlock.gz")) + if err != nil { + log.Error("Impossible reorg : Unable to export oldBlock", "Error", err) + } + + err = ExportBlocks([]*types.Block{newBlock}, filepath.Join(outPath, "newBlock.gz")) + if err != nil { + log.Error("Impossible reorg : Unable to export newBlock", "Error", err) + } + } + log.Error("Impossible reorg, please file an issue", "oldnum", oldBlock.Number(), "oldhash", oldBlock.Hash(), "oldblocks", len(oldChain), "newnum", newBlock.Number(), "newhash", newBlock.Hash(), "newblocks", len(newChain)) } // Insert the new chain(except the head block(reverse order)), @@ -2293,6 +2326,44 @@ func (bc *BlockChain) reorg(oldBlock, newBlock *types.Block) error { return nil } +// ExportBlocks exports blocks into the specified file, truncating any data +// already present in the file. +func ExportBlocks(blocks []*types.Block, fn string) error { + log.Info("Exporting blockchain", "file", fn) + + // Open the file handle and potentially wrap with a gzip stream + fh, err := os.OpenFile(fn, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, os.ModePerm) + if err != nil { + return err + } + defer fh.Close() + + var writer io.Writer = fh + if strings.HasSuffix(fn, ".gz") { + writer = gzip.NewWriter(writer) + defer writer.(*gzip.Writer).Close() + } + // Iterate over the blocks and export them + if err := ExportN(writer, blocks); err != nil { + return err + } + + log.Info("Exported blocks", "file", fn) + + return nil +} + +// ExportBlock writes a block to the given writer. 
+func ExportN(w io.Writer, blocks []*types.Block) error { + for _, block := range blocks { + if err := block.EncodeRLP(w); err != nil { + return err + } + } + + return nil +} + // InsertBlockWithoutSetHead executes the block, runs the necessary verification // upon it and then persist the block and the associate state into the database. // The key difference between the InsertChain is it won't do the canonical chain From 4561012af9a31d20c2715ce26e4b39ca93420b8b Mon Sep 17 00:00:00 2001 From: SHIVAM SHARMA Date: Fri, 3 Mar 2023 14:21:26 +0530 Subject: [PATCH 05/29] bump : go version from 1.19 to 1.20.1 (#761) --- .github/workflows/ci.yml | 11 ++-- .github/workflows/packager.yml | 2 +- .github/workflows/release.yml | 2 +- .golangci.yml | 4 +- .travis.yml | 22 +++---- Dockerfile.alltools | 2 +- Makefile | 2 +- build/ci.go | 23 ++++---- go.mod | 2 +- p2p/dnsdisc/client_test.go | 103 +++++++++++++++++++-------------- p2p/dnsdisc/tree_test.go | 17 ++++-- 11 files changed, 106 insertions(+), 84 deletions(-) diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index b5bb62a8bb..8ae09f603d 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -29,7 +29,7 @@ jobs: - uses: actions/setup-go@v3 with: - go-version: 1.19.x + go-version: 1.20.x - name: Install dependencies on Linux if: runner.os == 'Linux' @@ -51,9 +51,10 @@ jobs: - name: Build run: make all - - name: Lint - if: runner.os == 'Linux' - run: make lint + # # TODO: fix LINT not working with go 1.20.x + # - name: Lint + # if: runner.os == 'Linux' + # run: make lint - name: Test run: make test @@ -98,7 +99,7 @@ jobs: - uses: actions/setup-go@v3 with: - go-version: 1.18.x + go-version: 1.20.x - name: Checkout matic-cli uses: actions/checkout@v3 diff --git a/.github/workflows/packager.yml b/.github/workflows/packager.yml index 7485aca976..bb2c00b4c0 100644 --- a/.github/workflows/packager.yml +++ b/.github/workflows/packager.yml @@ -21,7 +21,7 @@ jobs: - name: Set up Go uses: actions/setup-go@master with: - go-version: 1.19 + go-version: 1.20 - name: Adding TAG to ENV run: echo "GIT_TAG=`echo $(git describe --tags --abbrev=0)`" >> $GITHUB_ENV diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml index 2ceda3d2ee..41c6ba47b0 100644 --- a/.github/workflows/release.yml +++ b/.github/workflows/release.yml @@ -21,7 +21,7 @@ jobs: - name: Set up Go uses: actions/setup-go@master with: - go-version: 1.19.x + go-version: 1.20.x - name: Prepare id: prepare diff --git a/.golangci.yml b/.golangci.yml index 89eebfe9fe..94f1bedc41 100644 --- a/.golangci.yml +++ b/.golangci.yml @@ -1,7 +1,7 @@ # This file configures github.com/golangci/golangci-lint. run: - go: '1.18' + go: '1.20' timeout: 20m tests: true # default is true. 
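The gzipped RLP dumps written by the ExportBlocks/ExportN helpers in the impossible-reorg patch above (oldChain.gz, oldBlock.gz and newBlock.gz under ~/impossible-reorgs/<timestamp>-impossibleReorg) can be read back with go-ethereum's RLP stream decoder. A sketch, assuming the dump was written with a .gz suffix as in that patch; the path is illustrative:

    package main

    import (
        "compress/gzip"
        "fmt"
        "io"
        "log"
        "os"

        "github.com/ethereum/go-ethereum/core/types"
        "github.com/ethereum/go-ethereum/rlp"
    )

    func main() {
        fh, err := os.Open("oldChain.gz") // illustrative path to one of the dumped files
        if err != nil {
            log.Fatal(err)
        }
        defer fh.Close()

        gz, err := gzip.NewReader(fh)
        if err != nil {
            log.Fatal(err)
        }

        // Decode blocks one by one until the stream is exhausted.
        stream := rlp.NewStream(gz, 0)
        for {
            var block types.Block
            if err := stream.Decode(&block); err == io.EOF {
                break
            } else if err != nil {
                log.Fatal(err)
            }
            fmt.Println("block", block.NumberU64(), block.Hash().Hex())
        }
    }
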
Enables skipping of directories: @@ -185,4 +185,4 @@ issues: max-issues-per-linter: 0 max-same-issues: 0 #new: true - new-from-rev: origin/master \ No newline at end of file + new-from-rev: origin/master diff --git a/.travis.yml b/.travis.yml index 5bd7beed83..b170a3e282 100644 --- a/.travis.yml +++ b/.travis.yml @@ -16,7 +16,7 @@ jobs: - stage: lint os: linux dist: bionic - go: 1.18.x + go: 1.20.x env: - lint git: @@ -31,7 +31,7 @@ jobs: os: linux arch: amd64 dist: bionic - go: 1.18.x + go: 1.20.x env: - docker services: @@ -48,7 +48,7 @@ jobs: os: linux arch: arm64 dist: bionic - go: 1.18.x + go: 1.20.x env: - docker services: @@ -65,7 +65,7 @@ jobs: if: type = push os: linux dist: bionic - go: 1.18.x + go: 1.20.x env: - ubuntu-ppa - GO111MODULE=on @@ -90,7 +90,7 @@ jobs: os: linux dist: bionic sudo: required - go: 1.18.x + go: 1.20.x env: - azure-linux - GO111MODULE=on @@ -148,7 +148,7 @@ jobs: - sdkmanager "platform-tools" "platforms;android-15" "platforms;android-19" "platforms;android-24" "ndk-bundle" # Install Go to allow building with - - curl https://dl.google.com/go/go1.18.linux-amd64.tar.gz | tar -xz + - curl https://dl.google.com/go/go1.20.linux-amd64.tar.gz | tar -xz - export PATH=`pwd`/go/bin:$PATH - export GOROOT=`pwd`/go - export GOPATH=$HOME/go @@ -162,7 +162,7 @@ jobs: - stage: build if: type = push os: osx - go: 1.18.x + go: 1.20.x env: - azure-osx - azure-ios @@ -194,7 +194,7 @@ jobs: os: linux arch: amd64 dist: bionic - go: 1.18.x + go: 1.20.x env: - GO111MODULE=on script: @@ -205,7 +205,7 @@ jobs: os: linux arch: arm64 dist: bionic - go: 1.18.x + go: 1.20.x env: - GO111MODULE=on script: @@ -225,7 +225,7 @@ jobs: if: type = cron os: linux dist: bionic - go: 1.18.x + go: 1.20.x env: - azure-purge - GO111MODULE=on @@ -239,7 +239,7 @@ jobs: if: type = cron os: linux dist: bionic - go: 1.18.x + go: 1.20.x env: - GO111MODULE=on script: diff --git a/Dockerfile.alltools b/Dockerfile.alltools index 1c4437e251..880cc6e255 100644 --- a/Dockerfile.alltools +++ b/Dockerfile.alltools @@ -1,5 +1,5 @@ # Build Geth in a stock Go builder container -FROM golang:1.18.1-alpine as builder +FROM golang:1.20-alpine as builder RUN apk add --no-cache make gcc musl-dev linux-headers git diff --git a/Makefile b/Makefile index 242435df76..f8307efda0 100644 --- a/Makefile +++ b/Makefile @@ -196,7 +196,7 @@ geth-windows-amd64: @ls -ld $(GOBIN)/geth-windows-* | grep amd64 PACKAGE_NAME := github.com/maticnetwork/bor -GOLANG_CROSS_VERSION ?= v1.19.1 +GOLANG_CROSS_VERSION ?= v1.20.1 .PHONY: release-dry-run release-dry-run: diff --git a/build/ci.go b/build/ci.go index c3dccfc588..3f00753a14 100644 --- a/build/ci.go +++ b/build/ci.go @@ -24,19 +24,18 @@ Usage: go run build/ci.go Available commands are: - install [ -arch architecture ] [ -cc compiler ] [ packages... ] -- builds packages and executables - test [ -coverage ] [ packages... 
] -- runs the tests - lint -- runs certain pre-selected linters - archive [ -arch architecture ] [ -type zip|tar ] [ -signer key-envvar ] [ -signify key-envvar ] [ -upload dest ] -- archives build artifacts - importkeys -- imports signing keys from env - debsrc [ -signer key-id ] [ -upload dest ] -- creates a debian source package - nsis -- creates a Windows NSIS installer - aar [ -local ] [ -sign key-id ] [-deploy repo] [ -upload dest ] -- creates an Android archive - xcode [ -local ] [ -sign key-id ] [-deploy repo] [ -upload dest ] -- creates an iOS XCode framework - purge [ -store blobstore ] [ -days threshold ] -- purges old archives from the blobstore + install [ -arch architecture ] [ -cc compiler ] [ packages... ] -- builds packages and executables + test [ -coverage ] [ packages... ] -- runs the tests + lint -- runs certain pre-selected linters + archive [ -arch architecture ] [ -type zip|tar ] [ -signer key-envvar ] [ -signify key-envvar ] [ -upload dest ] -- archives build artifacts + importkeys -- imports signing keys from env + debsrc [ -signer key-id ] [ -upload dest ] -- creates a debian source package + nsis -- creates a Windows NSIS installer + aar [ -local ] [ -sign key-id ] [-deploy repo] [ -upload dest ] -- creates an Android archive + xcode [ -local ] [ -sign key-id ] [-deploy repo] [ -upload dest ] -- creates an iOS XCode framework + purge [ -store blobstore ] [ -days threshold ] -- purges old archives from the blobstore For all commands, -n prevents execution of external programs (dry run mode). - */ package main @@ -148,7 +147,7 @@ var ( // This is the version of go that will be downloaded by // // go run ci.go install -dlgo - dlgoVersion = "1.18" + dlgoVersion = "1.20.1" ) var GOBIN, _ = filepath.Abs(filepath.Join("build", "bin")) diff --git a/go.mod b/go.mod index f55b2f9aa7..4015d41bfe 100644 --- a/go.mod +++ b/go.mod @@ -1,6 +1,6 @@ module github.com/ethereum/go-ethereum -go 1.19 +go 1.20 require ( github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v0.3.0 diff --git a/p2p/dnsdisc/client_test.go b/p2p/dnsdisc/client_test.go index 9320dd667a..7d8b89ff38 100644 --- a/p2p/dnsdisc/client_test.go +++ b/p2p/dnsdisc/client_test.go @@ -20,12 +20,12 @@ import ( "context" "crypto/ecdsa" "errors" - "math/rand" "reflect" "testing" "time" "github.com/davecgh/go-spew/spew" + "github.com/ethereum/go-ethereum/common/hexutil" "github.com/ethereum/go-ethereum/common/mclock" "github.com/ethereum/go-ethereum/crypto" "github.com/ethereum/go-ethereum/internal/testlog" @@ -34,23 +34,25 @@ import ( "github.com/ethereum/go-ethereum/p2p/enr" ) -const ( - signingKeySeed = 0x111111 - nodesSeed1 = 0x2945237 - nodesSeed2 = 0x4567299 -) +var signingKeyForTesting, _ = crypto.ToECDSA(hexutil.MustDecode("0xdc599867fc513f8f5e2c2c9c489cde5e71362d1d9ec6e693e0de063236ed1240")) func TestClientSyncTree(t *testing.T) { + + nodes := []string{ + "enr:-HW4QOFzoVLaFJnNhbgMoDXPnOvcdVuj7pDpqRvh6BRDO68aVi5ZcjB3vzQRZH2IcLBGHzo8uUN3snqmgTiE56CH3AMBgmlkgnY0iXNlY3AyNTZrMaECC2_24YYkYHEgdzxlSNKQEnHhuNAbNlMlWJxrJxbAFvA", + "enr:-HW4QAggRauloj2SDLtIHN1XBkvhFZ1vtf1raYQp9TBW2RD5EEawDzbtSmlXUfnaHcvwOizhVYLtr7e6vw7NAf6mTuoCgmlkgnY0iXNlY3AyNTZrMaECjrXI8TLNXU0f8cthpAMxEshUyQlK-AM0PW2wfrnacNI", + "enr:-HW4QLAYqmrwllBEnzWWs7I5Ev2IAs7x_dZlbYdRdMUx5EyKHDXp7AV5CkuPGUPdvbv1_Ms1CPfhcGCvSElSosZmyoqAgmlkgnY0iXNlY3AyNTZrMaECriawHKWdDRk2xeZkrOXBQ0dfMFLHY4eENZwdufn1S1o", + } + r := mapResolver{ "n": "enrtree-root:v1 e=JWXYDBPXYWG6FX3GMDIBFA6CJ4 l=C7HRFPF3BLGF3YR4DY5KX3SMBE seq=1 
sig=o908WmNp7LibOfPsr4btQwatZJ5URBr2ZAuxvK4UWHlsB9sUOTJQaGAlLPVAhM__XJesCHxLISo94z5Z2a463gA", "C7HRFPF3BLGF3YR4DY5KX3SMBE.n": "enrtree://AM5FCQLWIZX2QFPNJAP7VUERCCRNGRHWZG3YYHIUV7BVDQ5FDPRT2@morenodes.example.org", "JWXYDBPXYWG6FX3GMDIBFA6CJ4.n": "enrtree-branch:2XS2367YHAXJFGLZHVAWLQD4ZY,H4FHT4B454P6UXFD7JCYQ5PWDY,MHTDO6TMUBRIA2XWG5LUDACK24", - "2XS2367YHAXJFGLZHVAWLQD4ZY.n": "enr:-HW4QOFzoVLaFJnNhbgMoDXPnOvcdVuj7pDpqRvh6BRDO68aVi5ZcjB3vzQRZH2IcLBGHzo8uUN3snqmgTiE56CH3AMBgmlkgnY0iXNlY3AyNTZrMaECC2_24YYkYHEgdzxlSNKQEnHhuNAbNlMlWJxrJxbAFvA", - "H4FHT4B454P6UXFD7JCYQ5PWDY.n": "enr:-HW4QAggRauloj2SDLtIHN1XBkvhFZ1vtf1raYQp9TBW2RD5EEawDzbtSmlXUfnaHcvwOizhVYLtr7e6vw7NAf6mTuoCgmlkgnY0iXNlY3AyNTZrMaECjrXI8TLNXU0f8cthpAMxEshUyQlK-AM0PW2wfrnacNI", - "MHTDO6TMUBRIA2XWG5LUDACK24.n": "enr:-HW4QLAYqmrwllBEnzWWs7I5Ev2IAs7x_dZlbYdRdMUx5EyKHDXp7AV5CkuPGUPdvbv1_Ms1CPfhcGCvSElSosZmyoqAgmlkgnY0iXNlY3AyNTZrMaECriawHKWdDRk2xeZkrOXBQ0dfMFLHY4eENZwdufn1S1o", - } + "2XS2367YHAXJFGLZHVAWLQD4ZY.n": nodes[0], + "H4FHT4B454P6UXFD7JCYQ5PWDY.n": nodes[1], + "MHTDO6TMUBRIA2XWG5LUDACK24.n": nodes[2]} var ( - wantNodes = testNodes(0x29452, 3) + wantNodes = sortByID(parseNodes(nodes)) wantLinks = []string{"enrtree://AM5FCQLWIZX2QFPNJAP7VUERCCRNGRHWZG3YYHIUV7BVDQ5FDPRT2@morenodes.example.org"} wantSeq = uint(1) ) @@ -60,7 +62,7 @@ func TestClientSyncTree(t *testing.T) { if err != nil { t.Fatal("sync error:", err) } - if !reflect.DeepEqual(sortByID(stree.Nodes()), sortByID(wantNodes)) { + if !reflect.DeepEqual(sortByID(stree.Nodes()), wantNodes) { t.Errorf("wrong nodes in synced tree:\nhave %v\nwant %v", spew.Sdump(stree.Nodes()), spew.Sdump(wantNodes)) } if !reflect.DeepEqual(stree.Links(), wantLinks) { @@ -99,9 +101,13 @@ func TestClientSyncTreeBadNode(t *testing.T) { // This test checks that randomIterator finds all entries. func TestIterator(t *testing.T) { - nodes := testNodes(nodesSeed1, 30) - tree, url := makeTestTree("n", nodes, nil) - r := mapResolver(tree.ToTXT("n")) + var ( + keys = testKeys(30) + nodes = testNodes(keys) + tree, url = makeTestTree("n", nodes, nil) + r = mapResolver(tree.ToTXT("n")) + ) + c := NewClient(Config{ Resolver: r, Logger: testlog.Logger(t, log.LvlTrace), @@ -132,8 +138,11 @@ func TestIteratorCloseWithoutNext(t *testing.T) { // This test checks if closing randomIterator races. func TestIteratorClose(t *testing.T) { - nodes := testNodes(nodesSeed1, 500) - tree1, url1 := makeTestTree("t1", nodes, nil) + var ( + keys = testKeys(500) + nodes = testNodes(keys) + tree1, url1 = makeTestTree("t1", nodes, nil) + ) c := NewClient(Config{Resolver: newMapResolver(tree1.ToTXT("t1"))}) it, err := c.NewIterator(url1) if err != nil { @@ -155,9 +164,12 @@ func TestIteratorClose(t *testing.T) { // This test checks that randomIterator traverses linked trees as well as explicitly added trees. 
func TestIteratorLinks(t *testing.T) { - nodes := testNodes(nodesSeed1, 40) - tree1, url1 := makeTestTree("t1", nodes[:10], nil) - tree2, url2 := makeTestTree("t2", nodes[10:], []string{url1}) + var ( + keys = testKeys(40) + nodes = testNodes(keys) + tree1, url1 = makeTestTree("t1", nodes[:10], nil) + tree2, url2 = makeTestTree("t2", nodes[10:], []string{url1}) + ) c := NewClient(Config{ Resolver: newMapResolver(tree1.ToTXT("t1"), tree2.ToTXT("t2")), Logger: testlog.Logger(t, log.LvlTrace), @@ -176,7 +188,8 @@ func TestIteratorLinks(t *testing.T) { func TestIteratorNodeUpdates(t *testing.T) { var ( clock = new(mclock.Simulated) - nodes = testNodes(nodesSeed1, 30) + keys = testKeys(30) + nodes = testNodes(keys) resolver = newMapResolver() c = NewClient(Config{ Resolver: resolver, @@ -197,7 +210,7 @@ func TestIteratorNodeUpdates(t *testing.T) { checkIterator(t, it, nodes[:25]) // Ensure RandomNode returns the new nodes after the tree is updated. - updateSomeNodes(nodesSeed1, nodes) + updateSomeNodes(keys, nodes) tree2, _ := makeTestTree("n", nodes, nil) resolver.clear() resolver.add(tree2.ToTXT("n")) @@ -213,7 +226,8 @@ func TestIteratorNodeUpdates(t *testing.T) { func TestIteratorRootRecheckOnFail(t *testing.T) { var ( clock = new(mclock.Simulated) - nodes = testNodes(nodesSeed1, 30) + keys = testKeys(30) + nodes = testNodes(keys) resolver = newMapResolver() c = NewClient(Config{ Resolver: resolver, @@ -237,7 +251,7 @@ func TestIteratorRootRecheckOnFail(t *testing.T) { checkIterator(t, it, nodes[:25]) // Ensure RandomNode returns the new nodes after the tree is updated. - updateSomeNodes(nodesSeed1, nodes) + updateSomeNodes(keys, nodes) tree2, _ := makeTestTree("n", nodes, nil) resolver.clear() resolver.add(tree2.ToTXT("n")) @@ -250,7 +264,8 @@ func TestIteratorRootRecheckOnFail(t *testing.T) { func TestIteratorEmptyTree(t *testing.T) { var ( clock = new(mclock.Simulated) - nodes = testNodes(nodesSeed1, 1) + keys = testKeys(1) + nodes = testNodes(keys) resolver = newMapResolver() c = NewClient(Config{ Resolver: resolver, @@ -294,8 +309,7 @@ func TestIteratorEmptyTree(t *testing.T) { } // updateSomeNodes applies ENR updates to some of the given nodes. -func updateSomeNodes(keySeed int64, nodes []*enode.Node) { - keys := testKeys(nodesSeed1, len(nodes)) +func updateSomeNodes(keys []*ecdsa.PrivateKey, nodes []*enode.Node) { for i, n := range nodes[:len(nodes)/2] { r := n.Record() r.Set(enr.IP{127, 0, 0, 1}) @@ -311,7 +325,8 @@ func updateSomeNodes(keySeed int64, nodes []*enode.Node) { func TestIteratorLinkUpdates(t *testing.T) { var ( clock = new(mclock.Simulated) - nodes = testNodes(nodesSeed1, 30) + keys = testKeys(30) + nodes = testNodes(keys) resolver = newMapResolver() c = NewClient(Config{ Resolver: resolver, @@ -384,7 +399,7 @@ func makeTestTree(domain string, nodes []*enode.Node, links []string) (*Tree, st if err != nil { panic(err) } - url, err := tree.Sign(testKey(signingKeySeed), domain) + url, err := tree.Sign(signingKeyForTesting, domain) if err != nil { panic(err) } @@ -392,11 +407,10 @@ func makeTestTree(domain string, nodes []*enode.Node, links []string) (*Tree, st } // testKeys creates deterministic private keys for testing. 
-func testKeys(seed int64, n int) []*ecdsa.PrivateKey { - rand := rand.New(rand.NewSource(seed)) +func testKeys(n int) []*ecdsa.PrivateKey { keys := make([]*ecdsa.PrivateKey, n) for i := 0; i < n; i++ { - key, err := ecdsa.GenerateKey(crypto.S256(), rand) + key, err := crypto.GenerateKey() if err != nil { panic("can't generate key: " + err.Error()) } @@ -405,13 +419,8 @@ func testKeys(seed int64, n int) []*ecdsa.PrivateKey { return keys } -func testKey(seed int64) *ecdsa.PrivateKey { - return testKeys(seed, 1)[0] -} - -func testNodes(seed int64, n int) []*enode.Node { - keys := testKeys(seed, n) - nodes := make([]*enode.Node, n) +func testNodes(keys []*ecdsa.PrivateKey) []*enode.Node { + nodes := make([]*enode.Node, len(keys)) for i, key := range keys { record := new(enr.Record) record.SetSeq(uint64(i)) @@ -425,10 +434,6 @@ func testNodes(seed int64, n int) []*enode.Node { return nodes } -func testNode(seed int64) *enode.Node { - return testNodes(seed, 1)[0] -} - type mapResolver map[string]string func newMapResolver(maps ...map[string]string) mapResolver { @@ -457,3 +462,15 @@ func (mr mapResolver) LookupTXT(ctx context.Context, name string) ([]string, err } return nil, errors.New("not found") } + +func parseNodes(rec []string) []*enode.Node { + var ns []*enode.Node + for _, r := range rec { + var n enode.Node + if err := n.UnmarshalText([]byte(r)); err != nil { + panic(err) + } + ns = append(ns, &n) + } + return ns +} diff --git a/p2p/dnsdisc/tree_test.go b/p2p/dnsdisc/tree_test.go index 4048c35d63..4450aac830 100644 --- a/p2p/dnsdisc/tree_test.go +++ b/p2p/dnsdisc/tree_test.go @@ -61,7 +61,8 @@ func TestParseRoot(t *testing.T) { } func TestParseEntry(t *testing.T) { - testkey := testKey(signingKeySeed) + testENRs := []string{"enr:-HW4QES8QIeXTYlDzbfr1WEzE-XKY4f8gJFJzjJL-9D7TC9lJb4Z3JPRRz1lP4pL_N_QpT6rGQjAU9Apnc-C1iMP36OAgmlkgnY0iXNlY3AyNTZrMaED5IdwfMxdmR8W37HqSFdQLjDkIwBd4Q_MjxgZifgKSdM"} + testNodes := parseNodes(testENRs) tests := []struct { input string e entry @@ -91,8 +92,11 @@ func TestParseEntry(t *testing.T) { // Links { input: "enrtree://AKPYQIUQIL7PSIACI32J7FGZW56E5FKHEFCCOFHILBIMW3M6LWXS2@nodes.example.org", - e: &linkEntry{"AKPYQIUQIL7PSIACI32J7FGZW56E5FKHEFCCOFHILBIMW3M6LWXS2@nodes.example.org", "nodes.example.org", &testkey.PublicKey}, - }, + e: &linkEntry{ + str: "AKPYQIUQIL7PSIACI32J7FGZW56E5FKHEFCCOFHILBIMW3M6LWXS2@nodes.example.org", + domain: "nodes.example.org", + pubkey: &signingKeyForTesting.PublicKey, + }}, { input: "enrtree://nodes.example.org", err: entryError{"link", errNoPubkey}, @@ -107,8 +111,8 @@ func TestParseEntry(t *testing.T) { }, // ENRs { - input: "enr:-HW4QES8QIeXTYlDzbfr1WEzE-XKY4f8gJFJzjJL-9D7TC9lJb4Z3JPRRz1lP4pL_N_QpT6rGQjAU9Apnc-C1iMP36OAgmlkgnY0iXNlY3AyNTZrMaED5IdwfMxdmR8W37HqSFdQLjDkIwBd4Q_MjxgZifgKSdM", - e: &enrEntry{node: testNode(nodesSeed1)}, + input: testENRs[0], + e: &enrEntry{node: testNodes[0]}, }, { input: "enr:-HW4QLZHjM4vZXkbp-5xJoHsKSbE7W39FPC8283X-y8oHcHPTnDDlIlzL5ArvDUlHZVDPgmFASrh7cWgLOLxj4wprRkHgmlkgnY0iXNlY3AyNTZrMaEC3t2jLMhDpCDX5mbSEwDn4L3iUfyXzoO8G28XvjGRkrAg=", @@ -132,7 +136,8 @@ func TestParseEntry(t *testing.T) { } func TestMakeTree(t *testing.T) { - nodes := testNodes(nodesSeed2, 50) + keys := testKeys(50) + nodes := testNodes(keys) tree, err := MakeTree(2, nodes, nil) if err != nil { t.Fatal(err) From a7bf4ba6bdaf98e1522fb1cdcb504dc8c8f1c906 Mon Sep 17 00:00:00 2001 From: Jerry Date: Mon, 6 Mar 2023 09:08:46 -0800 Subject: [PATCH 06/29] Revert "bump : go version from 1.19 to 1.20.1 (#761)" This reverts commit 
4561012af9a31d20c2715ce26e4b39ca93420b8b. --- .github/workflows/ci.yml | 11 ++-- .github/workflows/packager.yml | 2 +- .github/workflows/release.yml | 2 +- .golangci.yml | 4 +- .travis.yml | 22 +++---- Dockerfile.alltools | 2 +- Makefile | 2 +- build/ci.go | 23 ++++---- go.mod | 2 +- p2p/dnsdisc/client_test.go | 103 ++++++++++++++------------------- p2p/dnsdisc/tree_test.go | 17 ++---- 11 files changed, 84 insertions(+), 106 deletions(-) diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 8ae09f603d..b5bb62a8bb 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -29,7 +29,7 @@ jobs: - uses: actions/setup-go@v3 with: - go-version: 1.20.x + go-version: 1.19.x - name: Install dependencies on Linux if: runner.os == 'Linux' @@ -51,10 +51,9 @@ jobs: - name: Build run: make all - # # TODO: fix LINT not working with go 1.20.x - # - name: Lint - # if: runner.os == 'Linux' - # run: make lint + - name: Lint + if: runner.os == 'Linux' + run: make lint - name: Test run: make test @@ -99,7 +98,7 @@ jobs: - uses: actions/setup-go@v3 with: - go-version: 1.20.x + go-version: 1.18.x - name: Checkout matic-cli uses: actions/checkout@v3 diff --git a/.github/workflows/packager.yml b/.github/workflows/packager.yml index bb2c00b4c0..7485aca976 100644 --- a/.github/workflows/packager.yml +++ b/.github/workflows/packager.yml @@ -21,7 +21,7 @@ jobs: - name: Set up Go uses: actions/setup-go@master with: - go-version: 1.20 + go-version: 1.19 - name: Adding TAG to ENV run: echo "GIT_TAG=`echo $(git describe --tags --abbrev=0)`" >> $GITHUB_ENV diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml index 41c6ba47b0..2ceda3d2ee 100644 --- a/.github/workflows/release.yml +++ b/.github/workflows/release.yml @@ -21,7 +21,7 @@ jobs: - name: Set up Go uses: actions/setup-go@master with: - go-version: 1.20.x + go-version: 1.19.x - name: Prepare id: prepare diff --git a/.golangci.yml b/.golangci.yml index 94f1bedc41..89eebfe9fe 100644 --- a/.golangci.yml +++ b/.golangci.yml @@ -1,7 +1,7 @@ # This file configures github.com/golangci/golangci-lint. run: - go: '1.20' + go: '1.18' timeout: 20m tests: true # default is true. 
Enables skipping of directories: @@ -185,4 +185,4 @@ issues: max-issues-per-linter: 0 max-same-issues: 0 #new: true - new-from-rev: origin/master + new-from-rev: origin/master \ No newline at end of file diff --git a/.travis.yml b/.travis.yml index b170a3e282..5bd7beed83 100644 --- a/.travis.yml +++ b/.travis.yml @@ -16,7 +16,7 @@ jobs: - stage: lint os: linux dist: bionic - go: 1.20.x + go: 1.18.x env: - lint git: @@ -31,7 +31,7 @@ jobs: os: linux arch: amd64 dist: bionic - go: 1.20.x + go: 1.18.x env: - docker services: @@ -48,7 +48,7 @@ jobs: os: linux arch: arm64 dist: bionic - go: 1.20.x + go: 1.18.x env: - docker services: @@ -65,7 +65,7 @@ jobs: if: type = push os: linux dist: bionic - go: 1.20.x + go: 1.18.x env: - ubuntu-ppa - GO111MODULE=on @@ -90,7 +90,7 @@ jobs: os: linux dist: bionic sudo: required - go: 1.20.x + go: 1.18.x env: - azure-linux - GO111MODULE=on @@ -148,7 +148,7 @@ jobs: - sdkmanager "platform-tools" "platforms;android-15" "platforms;android-19" "platforms;android-24" "ndk-bundle" # Install Go to allow building with - - curl https://dl.google.com/go/go1.20.linux-amd64.tar.gz | tar -xz + - curl https://dl.google.com/go/go1.18.linux-amd64.tar.gz | tar -xz - export PATH=`pwd`/go/bin:$PATH - export GOROOT=`pwd`/go - export GOPATH=$HOME/go @@ -162,7 +162,7 @@ jobs: - stage: build if: type = push os: osx - go: 1.20.x + go: 1.18.x env: - azure-osx - azure-ios @@ -194,7 +194,7 @@ jobs: os: linux arch: amd64 dist: bionic - go: 1.20.x + go: 1.18.x env: - GO111MODULE=on script: @@ -205,7 +205,7 @@ jobs: os: linux arch: arm64 dist: bionic - go: 1.20.x + go: 1.18.x env: - GO111MODULE=on script: @@ -225,7 +225,7 @@ jobs: if: type = cron os: linux dist: bionic - go: 1.20.x + go: 1.18.x env: - azure-purge - GO111MODULE=on @@ -239,7 +239,7 @@ jobs: if: type = cron os: linux dist: bionic - go: 1.20.x + go: 1.18.x env: - GO111MODULE=on script: diff --git a/Dockerfile.alltools b/Dockerfile.alltools index 880cc6e255..1c4437e251 100644 --- a/Dockerfile.alltools +++ b/Dockerfile.alltools @@ -1,5 +1,5 @@ # Build Geth in a stock Go builder container -FROM golang:1.20-alpine as builder +FROM golang:1.18.1-alpine as builder RUN apk add --no-cache make gcc musl-dev linux-headers git diff --git a/Makefile b/Makefile index f8307efda0..242435df76 100644 --- a/Makefile +++ b/Makefile @@ -196,7 +196,7 @@ geth-windows-amd64: @ls -ld $(GOBIN)/geth-windows-* | grep amd64 PACKAGE_NAME := github.com/maticnetwork/bor -GOLANG_CROSS_VERSION ?= v1.20.1 +GOLANG_CROSS_VERSION ?= v1.19.1 .PHONY: release-dry-run release-dry-run: diff --git a/build/ci.go b/build/ci.go index 3f00753a14..c3dccfc588 100644 --- a/build/ci.go +++ b/build/ci.go @@ -24,18 +24,19 @@ Usage: go run build/ci.go Available commands are: - install [ -arch architecture ] [ -cc compiler ] [ packages... ] -- builds packages and executables - test [ -coverage ] [ packages... 
] -- runs the tests - lint -- runs certain pre-selected linters - archive [ -arch architecture ] [ -type zip|tar ] [ -signer key-envvar ] [ -signify key-envvar ] [ -upload dest ] -- archives build artifacts - importkeys -- imports signing keys from env - debsrc [ -signer key-id ] [ -upload dest ] -- creates a debian source package - nsis -- creates a Windows NSIS installer - aar [ -local ] [ -sign key-id ] [-deploy repo] [ -upload dest ] -- creates an Android archive - xcode [ -local ] [ -sign key-id ] [-deploy repo] [ -upload dest ] -- creates an iOS XCode framework - purge [ -store blobstore ] [ -days threshold ] -- purges old archives from the blobstore + install [ -arch architecture ] [ -cc compiler ] [ packages... ] -- builds packages and executables + test [ -coverage ] [ packages... ] -- runs the tests + lint -- runs certain pre-selected linters + archive [ -arch architecture ] [ -type zip|tar ] [ -signer key-envvar ] [ -signify key-envvar ] [ -upload dest ] -- archives build artifacts + importkeys -- imports signing keys from env + debsrc [ -signer key-id ] [ -upload dest ] -- creates a debian source package + nsis -- creates a Windows NSIS installer + aar [ -local ] [ -sign key-id ] [-deploy repo] [ -upload dest ] -- creates an Android archive + xcode [ -local ] [ -sign key-id ] [-deploy repo] [ -upload dest ] -- creates an iOS XCode framework + purge [ -store blobstore ] [ -days threshold ] -- purges old archives from the blobstore For all commands, -n prevents execution of external programs (dry run mode). + */ package main @@ -147,7 +148,7 @@ var ( // This is the version of go that will be downloaded by // // go run ci.go install -dlgo - dlgoVersion = "1.20.1" + dlgoVersion = "1.18" ) var GOBIN, _ = filepath.Abs(filepath.Join("build", "bin")) diff --git a/go.mod b/go.mod index 4015d41bfe..f55b2f9aa7 100644 --- a/go.mod +++ b/go.mod @@ -1,6 +1,6 @@ module github.com/ethereum/go-ethereum -go 1.20 +go 1.19 require ( github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v0.3.0 diff --git a/p2p/dnsdisc/client_test.go b/p2p/dnsdisc/client_test.go index 7d8b89ff38..9320dd667a 100644 --- a/p2p/dnsdisc/client_test.go +++ b/p2p/dnsdisc/client_test.go @@ -20,12 +20,12 @@ import ( "context" "crypto/ecdsa" "errors" + "math/rand" "reflect" "testing" "time" "github.com/davecgh/go-spew/spew" - "github.com/ethereum/go-ethereum/common/hexutil" "github.com/ethereum/go-ethereum/common/mclock" "github.com/ethereum/go-ethereum/crypto" "github.com/ethereum/go-ethereum/internal/testlog" @@ -34,25 +34,23 @@ import ( "github.com/ethereum/go-ethereum/p2p/enr" ) -var signingKeyForTesting, _ = crypto.ToECDSA(hexutil.MustDecode("0xdc599867fc513f8f5e2c2c9c489cde5e71362d1d9ec6e693e0de063236ed1240")) +const ( + signingKeySeed = 0x111111 + nodesSeed1 = 0x2945237 + nodesSeed2 = 0x4567299 +) func TestClientSyncTree(t *testing.T) { - - nodes := []string{ - "enr:-HW4QOFzoVLaFJnNhbgMoDXPnOvcdVuj7pDpqRvh6BRDO68aVi5ZcjB3vzQRZH2IcLBGHzo8uUN3snqmgTiE56CH3AMBgmlkgnY0iXNlY3AyNTZrMaECC2_24YYkYHEgdzxlSNKQEnHhuNAbNlMlWJxrJxbAFvA", - "enr:-HW4QAggRauloj2SDLtIHN1XBkvhFZ1vtf1raYQp9TBW2RD5EEawDzbtSmlXUfnaHcvwOizhVYLtr7e6vw7NAf6mTuoCgmlkgnY0iXNlY3AyNTZrMaECjrXI8TLNXU0f8cthpAMxEshUyQlK-AM0PW2wfrnacNI", - "enr:-HW4QLAYqmrwllBEnzWWs7I5Ev2IAs7x_dZlbYdRdMUx5EyKHDXp7AV5CkuPGUPdvbv1_Ms1CPfhcGCvSElSosZmyoqAgmlkgnY0iXNlY3AyNTZrMaECriawHKWdDRk2xeZkrOXBQ0dfMFLHY4eENZwdufn1S1o", - } - r := mapResolver{ "n": "enrtree-root:v1 e=JWXYDBPXYWG6FX3GMDIBFA6CJ4 l=C7HRFPF3BLGF3YR4DY5KX3SMBE seq=1 
sig=o908WmNp7LibOfPsr4btQwatZJ5URBr2ZAuxvK4UWHlsB9sUOTJQaGAlLPVAhM__XJesCHxLISo94z5Z2a463gA", "C7HRFPF3BLGF3YR4DY5KX3SMBE.n": "enrtree://AM5FCQLWIZX2QFPNJAP7VUERCCRNGRHWZG3YYHIUV7BVDQ5FDPRT2@morenodes.example.org", "JWXYDBPXYWG6FX3GMDIBFA6CJ4.n": "enrtree-branch:2XS2367YHAXJFGLZHVAWLQD4ZY,H4FHT4B454P6UXFD7JCYQ5PWDY,MHTDO6TMUBRIA2XWG5LUDACK24", - "2XS2367YHAXJFGLZHVAWLQD4ZY.n": nodes[0], - "H4FHT4B454P6UXFD7JCYQ5PWDY.n": nodes[1], - "MHTDO6TMUBRIA2XWG5LUDACK24.n": nodes[2]} + "2XS2367YHAXJFGLZHVAWLQD4ZY.n": "enr:-HW4QOFzoVLaFJnNhbgMoDXPnOvcdVuj7pDpqRvh6BRDO68aVi5ZcjB3vzQRZH2IcLBGHzo8uUN3snqmgTiE56CH3AMBgmlkgnY0iXNlY3AyNTZrMaECC2_24YYkYHEgdzxlSNKQEnHhuNAbNlMlWJxrJxbAFvA", + "H4FHT4B454P6UXFD7JCYQ5PWDY.n": "enr:-HW4QAggRauloj2SDLtIHN1XBkvhFZ1vtf1raYQp9TBW2RD5EEawDzbtSmlXUfnaHcvwOizhVYLtr7e6vw7NAf6mTuoCgmlkgnY0iXNlY3AyNTZrMaECjrXI8TLNXU0f8cthpAMxEshUyQlK-AM0PW2wfrnacNI", + "MHTDO6TMUBRIA2XWG5LUDACK24.n": "enr:-HW4QLAYqmrwllBEnzWWs7I5Ev2IAs7x_dZlbYdRdMUx5EyKHDXp7AV5CkuPGUPdvbv1_Ms1CPfhcGCvSElSosZmyoqAgmlkgnY0iXNlY3AyNTZrMaECriawHKWdDRk2xeZkrOXBQ0dfMFLHY4eENZwdufn1S1o", + } var ( - wantNodes = sortByID(parseNodes(nodes)) + wantNodes = testNodes(0x29452, 3) wantLinks = []string{"enrtree://AM5FCQLWIZX2QFPNJAP7VUERCCRNGRHWZG3YYHIUV7BVDQ5FDPRT2@morenodes.example.org"} wantSeq = uint(1) ) @@ -62,7 +60,7 @@ func TestClientSyncTree(t *testing.T) { if err != nil { t.Fatal("sync error:", err) } - if !reflect.DeepEqual(sortByID(stree.Nodes()), wantNodes) { + if !reflect.DeepEqual(sortByID(stree.Nodes()), sortByID(wantNodes)) { t.Errorf("wrong nodes in synced tree:\nhave %v\nwant %v", spew.Sdump(stree.Nodes()), spew.Sdump(wantNodes)) } if !reflect.DeepEqual(stree.Links(), wantLinks) { @@ -101,13 +99,9 @@ func TestClientSyncTreeBadNode(t *testing.T) { // This test checks that randomIterator finds all entries. func TestIterator(t *testing.T) { - var ( - keys = testKeys(30) - nodes = testNodes(keys) - tree, url = makeTestTree("n", nodes, nil) - r = mapResolver(tree.ToTXT("n")) - ) - + nodes := testNodes(nodesSeed1, 30) + tree, url := makeTestTree("n", nodes, nil) + r := mapResolver(tree.ToTXT("n")) c := NewClient(Config{ Resolver: r, Logger: testlog.Logger(t, log.LvlTrace), @@ -138,11 +132,8 @@ func TestIteratorCloseWithoutNext(t *testing.T) { // This test checks if closing randomIterator races. func TestIteratorClose(t *testing.T) { - var ( - keys = testKeys(500) - nodes = testNodes(keys) - tree1, url1 = makeTestTree("t1", nodes, nil) - ) + nodes := testNodes(nodesSeed1, 500) + tree1, url1 := makeTestTree("t1", nodes, nil) c := NewClient(Config{Resolver: newMapResolver(tree1.ToTXT("t1"))}) it, err := c.NewIterator(url1) if err != nil { @@ -164,12 +155,9 @@ func TestIteratorClose(t *testing.T) { // This test checks that randomIterator traverses linked trees as well as explicitly added trees. 
func TestIteratorLinks(t *testing.T) { - var ( - keys = testKeys(40) - nodes = testNodes(keys) - tree1, url1 = makeTestTree("t1", nodes[:10], nil) - tree2, url2 = makeTestTree("t2", nodes[10:], []string{url1}) - ) + nodes := testNodes(nodesSeed1, 40) + tree1, url1 := makeTestTree("t1", nodes[:10], nil) + tree2, url2 := makeTestTree("t2", nodes[10:], []string{url1}) c := NewClient(Config{ Resolver: newMapResolver(tree1.ToTXT("t1"), tree2.ToTXT("t2")), Logger: testlog.Logger(t, log.LvlTrace), @@ -188,8 +176,7 @@ func TestIteratorLinks(t *testing.T) { func TestIteratorNodeUpdates(t *testing.T) { var ( clock = new(mclock.Simulated) - keys = testKeys(30) - nodes = testNodes(keys) + nodes = testNodes(nodesSeed1, 30) resolver = newMapResolver() c = NewClient(Config{ Resolver: resolver, @@ -210,7 +197,7 @@ func TestIteratorNodeUpdates(t *testing.T) { checkIterator(t, it, nodes[:25]) // Ensure RandomNode returns the new nodes after the tree is updated. - updateSomeNodes(keys, nodes) + updateSomeNodes(nodesSeed1, nodes) tree2, _ := makeTestTree("n", nodes, nil) resolver.clear() resolver.add(tree2.ToTXT("n")) @@ -226,8 +213,7 @@ func TestIteratorNodeUpdates(t *testing.T) { func TestIteratorRootRecheckOnFail(t *testing.T) { var ( clock = new(mclock.Simulated) - keys = testKeys(30) - nodes = testNodes(keys) + nodes = testNodes(nodesSeed1, 30) resolver = newMapResolver() c = NewClient(Config{ Resolver: resolver, @@ -251,7 +237,7 @@ func TestIteratorRootRecheckOnFail(t *testing.T) { checkIterator(t, it, nodes[:25]) // Ensure RandomNode returns the new nodes after the tree is updated. - updateSomeNodes(keys, nodes) + updateSomeNodes(nodesSeed1, nodes) tree2, _ := makeTestTree("n", nodes, nil) resolver.clear() resolver.add(tree2.ToTXT("n")) @@ -264,8 +250,7 @@ func TestIteratorRootRecheckOnFail(t *testing.T) { func TestIteratorEmptyTree(t *testing.T) { var ( clock = new(mclock.Simulated) - keys = testKeys(1) - nodes = testNodes(keys) + nodes = testNodes(nodesSeed1, 1) resolver = newMapResolver() c = NewClient(Config{ Resolver: resolver, @@ -309,7 +294,8 @@ func TestIteratorEmptyTree(t *testing.T) { } // updateSomeNodes applies ENR updates to some of the given nodes. -func updateSomeNodes(keys []*ecdsa.PrivateKey, nodes []*enode.Node) { +func updateSomeNodes(keySeed int64, nodes []*enode.Node) { + keys := testKeys(nodesSeed1, len(nodes)) for i, n := range nodes[:len(nodes)/2] { r := n.Record() r.Set(enr.IP{127, 0, 0, 1}) @@ -325,8 +311,7 @@ func updateSomeNodes(keys []*ecdsa.PrivateKey, nodes []*enode.Node) { func TestIteratorLinkUpdates(t *testing.T) { var ( clock = new(mclock.Simulated) - keys = testKeys(30) - nodes = testNodes(keys) + nodes = testNodes(nodesSeed1, 30) resolver = newMapResolver() c = NewClient(Config{ Resolver: resolver, @@ -399,7 +384,7 @@ func makeTestTree(domain string, nodes []*enode.Node, links []string) (*Tree, st if err != nil { panic(err) } - url, err := tree.Sign(signingKeyForTesting, domain) + url, err := tree.Sign(testKey(signingKeySeed), domain) if err != nil { panic(err) } @@ -407,10 +392,11 @@ func makeTestTree(domain string, nodes []*enode.Node, links []string) (*Tree, st } // testKeys creates deterministic private keys for testing. 
-func testKeys(n int) []*ecdsa.PrivateKey { +func testKeys(seed int64, n int) []*ecdsa.PrivateKey { + rand := rand.New(rand.NewSource(seed)) keys := make([]*ecdsa.PrivateKey, n) for i := 0; i < n; i++ { - key, err := crypto.GenerateKey() + key, err := ecdsa.GenerateKey(crypto.S256(), rand) if err != nil { panic("can't generate key: " + err.Error()) } @@ -419,8 +405,13 @@ func testKeys(n int) []*ecdsa.PrivateKey { return keys } -func testNodes(keys []*ecdsa.PrivateKey) []*enode.Node { - nodes := make([]*enode.Node, len(keys)) +func testKey(seed int64) *ecdsa.PrivateKey { + return testKeys(seed, 1)[0] +} + +func testNodes(seed int64, n int) []*enode.Node { + keys := testKeys(seed, n) + nodes := make([]*enode.Node, n) for i, key := range keys { record := new(enr.Record) record.SetSeq(uint64(i)) @@ -434,6 +425,10 @@ func testNodes(keys []*ecdsa.PrivateKey) []*enode.Node { return nodes } +func testNode(seed int64) *enode.Node { + return testNodes(seed, 1)[0] +} + type mapResolver map[string]string func newMapResolver(maps ...map[string]string) mapResolver { @@ -462,15 +457,3 @@ func (mr mapResolver) LookupTXT(ctx context.Context, name string) ([]string, err } return nil, errors.New("not found") } - -func parseNodes(rec []string) []*enode.Node { - var ns []*enode.Node - for _, r := range rec { - var n enode.Node - if err := n.UnmarshalText([]byte(r)); err != nil { - panic(err) - } - ns = append(ns, &n) - } - return ns -} diff --git a/p2p/dnsdisc/tree_test.go b/p2p/dnsdisc/tree_test.go index 4450aac830..4048c35d63 100644 --- a/p2p/dnsdisc/tree_test.go +++ b/p2p/dnsdisc/tree_test.go @@ -61,8 +61,7 @@ func TestParseRoot(t *testing.T) { } func TestParseEntry(t *testing.T) { - testENRs := []string{"enr:-HW4QES8QIeXTYlDzbfr1WEzE-XKY4f8gJFJzjJL-9D7TC9lJb4Z3JPRRz1lP4pL_N_QpT6rGQjAU9Apnc-C1iMP36OAgmlkgnY0iXNlY3AyNTZrMaED5IdwfMxdmR8W37HqSFdQLjDkIwBd4Q_MjxgZifgKSdM"} - testNodes := parseNodes(testENRs) + testkey := testKey(signingKeySeed) tests := []struct { input string e entry @@ -92,11 +91,8 @@ func TestParseEntry(t *testing.T) { // Links { input: "enrtree://AKPYQIUQIL7PSIACI32J7FGZW56E5FKHEFCCOFHILBIMW3M6LWXS2@nodes.example.org", - e: &linkEntry{ - str: "AKPYQIUQIL7PSIACI32J7FGZW56E5FKHEFCCOFHILBIMW3M6LWXS2@nodes.example.org", - domain: "nodes.example.org", - pubkey: &signingKeyForTesting.PublicKey, - }}, + e: &linkEntry{"AKPYQIUQIL7PSIACI32J7FGZW56E5FKHEFCCOFHILBIMW3M6LWXS2@nodes.example.org", "nodes.example.org", &testkey.PublicKey}, + }, { input: "enrtree://nodes.example.org", err: entryError{"link", errNoPubkey}, @@ -111,8 +107,8 @@ func TestParseEntry(t *testing.T) { }, // ENRs { - input: testENRs[0], - e: &enrEntry{node: testNodes[0]}, + input: "enr:-HW4QES8QIeXTYlDzbfr1WEzE-XKY4f8gJFJzjJL-9D7TC9lJb4Z3JPRRz1lP4pL_N_QpT6rGQjAU9Apnc-C1iMP36OAgmlkgnY0iXNlY3AyNTZrMaED5IdwfMxdmR8W37HqSFdQLjDkIwBd4Q_MjxgZifgKSdM", + e: &enrEntry{node: testNode(nodesSeed1)}, }, { input: "enr:-HW4QLZHjM4vZXkbp-5xJoHsKSbE7W39FPC8283X-y8oHcHPTnDDlIlzL5ArvDUlHZVDPgmFASrh7cWgLOLxj4wprRkHgmlkgnY0iXNlY3AyNTZrMaEC3t2jLMhDpCDX5mbSEwDn4L3iUfyXzoO8G28XvjGRkrAg=", @@ -136,8 +132,7 @@ func TestParseEntry(t *testing.T) { } func TestMakeTree(t *testing.T) { - keys := testKeys(50) - nodes := testNodes(keys) + nodes := testNodes(nodesSeed2, 50) tree, err := MakeTree(2, nodes, nil) if err != nil { t.Fatal(err) From 7303e59f52b64e7194fb2384d02e1e61f646bb46 Mon Sep 17 00:00:00 2001 From: Martin Holst Swende Date: Thu, 27 Oct 2022 10:39:01 +0200 Subject: [PATCH 07/29] core/vm: use optimized bigint (#26021) --- core/vm/contracts.go | 27 
+++++++++++++++++---------- 1 file changed, 17 insertions(+), 10 deletions(-) diff --git a/core/vm/contracts.go b/core/vm/contracts.go index 9210f5486c..1314c24386 100644 --- a/core/vm/contracts.go +++ b/core/vm/contracts.go @@ -29,8 +29,7 @@ import ( "github.com/ethereum/go-ethereum/crypto/bls12381" "github.com/ethereum/go-ethereum/crypto/bn256" "github.com/ethereum/go-ethereum/params" - - //lint:ignore SA1019 Needed for precompile + big2 "github.com/holiman/big" "golang.org/x/crypto/ripemd160" ) @@ -266,9 +265,10 @@ var ( // modexpMultComplexity implements bigModexp multComplexity formula, as defined in EIP-198 // // def mult_complexity(x): -// if x <= 64: return x ** 2 -// elif x <= 1024: return x ** 2 // 4 + 96 * x - 3072 -// else: return x ** 2 // 16 + 480 * x - 199680 +// +// if x <= 64: return x ** 2 +// elif x <= 1024: return x ** 2 // 4 + 96 * x - 3072 +// else: return x ** 2 // 16 + 480 * x - 199680 // // where is x is max(length_of_MODULUS, length_of_BASE) func modexpMultComplexity(x *big.Int) *big.Int { @@ -379,15 +379,22 @@ func (c *bigModExp) Run(input []byte) ([]byte, error) { } // Retrieve the operands and execute the exponentiation var ( - base = new(big.Int).SetBytes(getData(input, 0, baseLen)) - exp = new(big.Int).SetBytes(getData(input, baseLen, expLen)) - mod = new(big.Int).SetBytes(getData(input, baseLen+expLen, modLen)) + base = new(big2.Int).SetBytes(getData(input, 0, baseLen)) + exp = new(big2.Int).SetBytes(getData(input, baseLen, expLen)) + mod = new(big2.Int).SetBytes(getData(input, baseLen+expLen, modLen)) + v []byte ) - if mod.BitLen() == 0 { + switch { + case mod.BitLen() == 0: // Modulo 0 is undefined, return zero return common.LeftPadBytes([]byte{}, int(modLen)), nil + case base.BitLen() == 1: // a bit length of 1 means it's 1 (or -1). 
+ //If base == 1, then we can just return base % mod (if mod >= 1, which it is) + v = base.Mod(base, mod).Bytes() + default: + v = base.Exp(base, exp, mod).Bytes() } - return common.LeftPadBytes(base.Exp(base, exp, mod).Bytes(), int(modLen)), nil + return common.LeftPadBytes(v, int(modLen)), nil } // newCurvePoint unmarshals a binary blob into a bn256 elliptic curve point, From c7cc8c08ed242c85d8d0ff03d6fa43b3a29bf836 Mon Sep 17 00:00:00 2001 From: Jerry Date: Fri, 3 Mar 2023 08:15:58 -0800 Subject: [PATCH 08/29] Add holiman/big --- go.mod | 7 +++++-- go.sum | 4 ++++ 2 files changed, 9 insertions(+), 2 deletions(-) diff --git a/go.mod b/go.mod index f55b2f9aa7..6a3ad71ca6 100644 --- a/go.mod +++ b/go.mod @@ -71,7 +71,7 @@ require ( go.uber.org/goleak v1.1.12 golang.org/x/crypto v0.0.0-20220507011949-2cf3adece122 golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4 - golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10 + golang.org/x/sys v0.0.0-20221013171732-95e765b1cc43 golang.org/x/text v0.3.7 golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba golang.org/x/tools v0.1.12 @@ -85,7 +85,10 @@ require ( pgregory.net/rapid v0.4.8 ) -require github.com/gammazero/deque v0.2.1 // indirect +require ( + github.com/gammazero/deque v0.2.1 // indirect + github.com/holiman/big v0.0.0-20221017200358-a027dc42d04e // indirect +) require ( github.com/Azure/azure-sdk-for-go/sdk/azcore v0.21.1 // indirect diff --git a/go.sum b/go.sum index 4b312ccfb1..f817a8bbbe 100644 --- a/go.sum +++ b/go.sum @@ -275,6 +275,8 @@ github.com/hashicorp/golang-lru v0.5.5-0.20210104140557-80c98217689d h1:dg1dEPuW github.com/hashicorp/golang-lru v0.5.5-0.20210104140557-80c98217689d/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4= github.com/hashicorp/hcl/v2 v2.10.1 h1:h4Xx4fsrRE26ohAk/1iGF/JBqRQbyUqu5Lvj60U54ys= github.com/hashicorp/hcl/v2 v2.10.1/go.mod h1:FwWsfWEjyV/CMj8s/gqAuiviY72rJ1/oayI9WftqcKg= +github.com/holiman/big v0.0.0-20221017200358-a027dc42d04e h1:pIYdhNkDh+YENVNi3gto8n9hAmRxKxoar0iE6BLucjw= +github.com/holiman/big v0.0.0-20221017200358-a027dc42d04e/go.mod h1:j9cQbcqHQujT0oKJ38PylVfqohClLr3CvDC+Qcg+lhU= github.com/holiman/bloomfilter/v2 v2.0.3 h1:73e0e/V0tCydx14a0SCYS/EWCxgwLZ18CZcZKVu0fao= github.com/holiman/bloomfilter/v2 v2.0.3/go.mod h1:zpoh+gs7qcpqrHr3dB55AMiJwo0iURXE7ZOP9L9hSkA= github.com/holiman/uint256 v1.2.0 h1:gpSYcPLWGv4sG43I2mVLiDZCNDh/EpGjSk8tmtxitHM= @@ -660,6 +662,8 @@ golang.org/x/sys v0.0.0-20210423185535-09eb48e85fd7/go.mod h1:h1NjWce9XRLGQEsW7w golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10 h1:WIoqL4EROvwiPdUtaip4VcDdpZ4kha7wBWZrbVKCIZg= golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20221013171732-95e765b1cc43 h1:OK7RB6t2WQX54srQQYSXMW8dF5C6/8+oA/s5QBmmto4= +golang.org/x/sys v0.0.0-20221013171732-95e765b1cc43/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 h1:JGgROgKl9N8DuW20oFS5gxc+lE67/N3FcwmBPMe7ArY= From 8698da045fee4ef203e2581bbe257015c67fd127 Mon Sep 17 00:00:00 2001 From: Jerry Date: Fri, 3 Mar 2023 08:39:10 -0800 Subject: [PATCH 09/29] Fix linter --- core/vm/contracts.go | 3 +++ 1 file changed, 3 insertions(+) diff 
--git a/core/vm/contracts.go b/core/vm/contracts.go index 1314c24386..d7d9eeb525 100644 --- a/core/vm/contracts.go +++ b/core/vm/contracts.go @@ -29,6 +29,7 @@ import ( "github.com/ethereum/go-ethereum/crypto/bls12381" "github.com/ethereum/go-ethereum/crypto/bn256" "github.com/ethereum/go-ethereum/params" + big2 "github.com/holiman/big" "golang.org/x/crypto/ripemd160" ) @@ -384,6 +385,7 @@ func (c *bigModExp) Run(input []byte) ([]byte, error) { mod = new(big2.Int).SetBytes(getData(input, baseLen+expLen, modLen)) v []byte ) + switch { case mod.BitLen() == 0: // Modulo 0 is undefined, return zero @@ -394,6 +396,7 @@ func (c *bigModExp) Run(input []byte) ([]byte, error) { default: v = base.Exp(base, exp, mod).Bytes() } + return common.LeftPadBytes(v, int(modLen)), nil } From 4671d97f311ff76ddc84d74070d91b06d4588faa Mon Sep 17 00:00:00 2001 From: Jerry Date: Mon, 6 Mar 2023 09:14:44 -0800 Subject: [PATCH 10/29] Bump version to v0.3.5 --- packaging/templates/package_scripts/control | 2 +- packaging/templates/package_scripts/control.arm64 | 2 +- packaging/templates/package_scripts/control.profile.amd64 | 2 +- packaging/templates/package_scripts/control.profile.arm64 | 2 +- packaging/templates/package_scripts/control.validator | 2 +- packaging/templates/package_scripts/control.validator.arm64 | 2 +- params/version.go | 2 +- 7 files changed, 7 insertions(+), 7 deletions(-) diff --git a/packaging/templates/package_scripts/control b/packaging/templates/package_scripts/control index c036378b0e..4c50800fcf 100644 --- a/packaging/templates/package_scripts/control +++ b/packaging/templates/package_scripts/control @@ -1,5 +1,5 @@ Source: bor -Version: 0.3.4-stable +Version: 0.3.5-stable Section: develop Priority: standard Maintainer: Polygon diff --git a/packaging/templates/package_scripts/control.arm64 b/packaging/templates/package_scripts/control.arm64 index 3f45d5564f..11645c582d 100644 --- a/packaging/templates/package_scripts/control.arm64 +++ b/packaging/templates/package_scripts/control.arm64 @@ -1,5 +1,5 @@ Source: bor -Version: 0.3.4-stable +Version: 0.3.5-stable Section: develop Priority: standard Maintainer: Polygon diff --git a/packaging/templates/package_scripts/control.profile.amd64 b/packaging/templates/package_scripts/control.profile.amd64 index e3261e3f57..2bfa82a23a 100644 --- a/packaging/templates/package_scripts/control.profile.amd64 +++ b/packaging/templates/package_scripts/control.profile.amd64 @@ -1,5 +1,5 @@ Source: bor-profile -Version: 0.3.4-stable +Version: 0.3.5-stable Section: develop Priority: standard Maintainer: Polygon diff --git a/packaging/templates/package_scripts/control.profile.arm64 b/packaging/templates/package_scripts/control.profile.arm64 index 93c2206770..504c6d3ea2 100644 --- a/packaging/templates/package_scripts/control.profile.arm64 +++ b/packaging/templates/package_scripts/control.profile.arm64 @@ -1,5 +1,5 @@ Source: bor-profile -Version: 0.3.4-stable +Version: 0.3.5-stable Section: develop Priority: standard Maintainer: Polygon diff --git a/packaging/templates/package_scripts/control.validator b/packaging/templates/package_scripts/control.validator index 3c5633d11f..3d11d78542 100644 --- a/packaging/templates/package_scripts/control.validator +++ b/packaging/templates/package_scripts/control.validator @@ -1,5 +1,5 @@ Source: bor-profile -Version: 0.3.4-stable +Version: 0.3.5-stable Section: develop Priority: standard Maintainer: Polygon diff --git a/packaging/templates/package_scripts/control.validator.arm64 
b/packaging/templates/package_scripts/control.validator.arm64 index e475d91bd3..a6e8753812 100644 --- a/packaging/templates/package_scripts/control.validator.arm64 +++ b/packaging/templates/package_scripts/control.validator.arm64 @@ -1,5 +1,5 @@ Source: bor-profile -Version: 0.3.4-stable +Version: 0.3.5-stable Section: develop Priority: standard Maintainer: Polygon diff --git a/params/version.go b/params/version.go index 37b67a87e9..2054cd18c9 100644 --- a/params/version.go +++ b/params/version.go @@ -23,7 +23,7 @@ import ( const ( VersionMajor = 0 // Major version component of the current release VersionMinor = 3 // Minor version component of the current release - VersionPatch = 4 // Patch version component of the current release + VersionPatch = 5 // Patch version component of the current release VersionMeta = "stable" // Version metadata to append to the version string ) From 318b7fadc9d2eb81de5640248b8c9469736d1b18 Mon Sep 17 00:00:00 2001 From: Pratik Patil Date: Mon, 6 Mar 2023 15:25:38 +0530 Subject: [PATCH 11/29] fix lints from develop (few lints decided to appear from code that was untouched, weird) --- p2p/dnsdisc/client_test.go | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/p2p/dnsdisc/client_test.go b/p2p/dnsdisc/client_test.go index 9320dd667a..99c3aac0ea 100644 --- a/p2p/dnsdisc/client_test.go +++ b/p2p/dnsdisc/client_test.go @@ -57,15 +57,19 @@ func TestClientSyncTree(t *testing.T) { c := NewClient(Config{Resolver: r, Logger: testlog.Logger(t, log.LvlTrace)}) stree, err := c.SyncTree("enrtree://AKPYQIUQIL7PSIACI32J7FGZW56E5FKHEFCCOFHILBIMW3M6LWXS2@n") + if err != nil { t.Fatal("sync error:", err) } + if !reflect.DeepEqual(sortByID(stree.Nodes()), sortByID(wantNodes)) { t.Errorf("wrong nodes in synced tree:\nhave %v\nwant %v", spew.Sdump(stree.Nodes()), spew.Sdump(wantNodes)) } + if !reflect.DeepEqual(stree.Links(), wantLinks) { t.Errorf("wrong links in synced tree: %v", stree.Links()) } + if stree.Seq() != wantSeq { t.Errorf("synced tree has wrong seq: %d", stree.Seq()) } @@ -295,7 +299,7 @@ func TestIteratorEmptyTree(t *testing.T) { // updateSomeNodes applies ENR updates to some of the given nodes. 
func updateSomeNodes(keySeed int64, nodes []*enode.Node) { - keys := testKeys(nodesSeed1, len(nodes)) + keys := testKeys(keySeed, len(nodes)) for i, n := range nodes[:len(nodes)/2] { r := n.Record() r.Set(enr.IP{127, 0, 0, 1}) @@ -384,10 +388,12 @@ func makeTestTree(domain string, nodes []*enode.Node, links []string) (*Tree, st if err != nil { panic(err) } + url, err := tree.Sign(testKey(signingKeySeed), domain) if err != nil { panic(err) } + return tree, url } From d817857e5205473ac40e3b6f5da19fe8f63e75a6 Mon Sep 17 00:00:00 2001 From: Arpit Temani Date: Thu, 9 Mar 2023 15:16:14 +0530 Subject: [PATCH 12/29] upgrade crypto lib version (#770) --- go.mod | 16 +++++++--------- go.sum | 21 ++++++++++----------- 2 files changed, 17 insertions(+), 20 deletions(-) diff --git a/go.mod b/go.mod index 6a3ad71ca6..49f9261b7b 100644 --- a/go.mod +++ b/go.mod @@ -37,6 +37,7 @@ require ( github.com/hashicorp/go-bexpr v0.1.10 github.com/hashicorp/golang-lru v0.5.5-0.20210104140557-80c98217689d github.com/hashicorp/hcl/v2 v2.10.1 + github.com/holiman/big v0.0.0-20221017200358-a027dc42d04e github.com/holiman/bloomfilter/v2 v2.0.3 github.com/holiman/uint256 v1.2.0 github.com/huin/goupnp v1.0.3-0.20220313090229-ca81a64b4204 @@ -69,10 +70,10 @@ require ( go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.2.0 go.opentelemetry.io/otel/sdk v1.2.0 go.uber.org/goleak v1.1.12 - golang.org/x/crypto v0.0.0-20220507011949-2cf3adece122 + golang.org/x/crypto v0.1.0 golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4 - golang.org/x/sys v0.0.0-20221013171732-95e765b1cc43 - golang.org/x/text v0.3.7 + golang.org/x/sys v0.1.0 + golang.org/x/text v0.4.0 golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba golang.org/x/tools v0.1.12 gonum.org/v1/gonum v0.11.0 @@ -85,10 +86,7 @@ require ( pgregory.net/rapid v0.4.8 ) -require ( - github.com/gammazero/deque v0.2.1 // indirect - github.com/holiman/big v0.0.0-20221017200358-a027dc42d04e // indirect -) +require github.com/gammazero/deque v0.2.1 // indirect require ( github.com/Azure/azure-sdk-for-go/sdk/azcore v0.21.1 // indirect @@ -140,8 +138,8 @@ require ( go.opentelemetry.io/proto/otlp v0.10.0 // indirect golang.org/x/exp v0.0.0-20220722155223-a9213eeb770e // indirect golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 // indirect - golang.org/x/net v0.0.0-20220728030405-41545e8bf201 // indirect - golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 // indirect + golang.org/x/net v0.1.0 // indirect + golang.org/x/term v0.1.0 // indirect golang.org/x/xerrors v0.0.0-20220411194840-2f41105eb62f // indirect google.golang.org/genproto v0.0.0-20220725144611-272f38e5d71b // indirect gopkg.in/yaml.v2 v2.4.0 // indirect diff --git a/go.sum b/go.sum index f817a8bbbe..ba5a3eec35 100644 --- a/go.sum +++ b/go.sum @@ -537,8 +537,8 @@ golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPh golang.org/x/crypto v0.0.0-20200820211705-5c72a883971a/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20201221181555-eec23a3978ad/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I= golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4= -golang.org/x/crypto v0.0.0-20220507011949-2cf3adece122 h1:NvGWuYG8dkDHFSKksI1P9faiVJ9rayE6l0+ouWVIDs8= -golang.org/x/crypto v0.0.0-20220507011949-2cf3adece122/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= +golang.org/x/crypto v0.1.0 h1:MDRAIl0xIo9Io2xV565hzXHw3zVseKrJKodhohM5CjU= +golang.org/x/crypto v0.1.0/go.mod 
h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw= golang.org/x/exp v0.0.0-20180321215751-8460e604b9de/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20180807140117-3d87b88a115f/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= @@ -602,8 +602,8 @@ golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM= golang.org/x/net v0.0.0-20210610132358-84b48f89b13b/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= -golang.org/x/net v0.0.0-20220728030405-41545e8bf201 h1:bvOltf3SADAfG05iRml8lAB3qjoEX5RCyN4K6G5v3N0= -golang.org/x/net v0.0.0-20220728030405-41545e8bf201/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk= +golang.org/x/net v0.1.0 h1:hZ/3BUoy5aId7sCpA/Tc5lt8DkFgdVS2onTpJsZ/fl0= +golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= @@ -660,14 +660,12 @@ golang.org/x/sys v0.0.0-20210420205809-ac73e9fd8988/go.mod h1:h1NjWce9XRLGQEsW7w golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210423185535-09eb48e85fd7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10 h1:WIoqL4EROvwiPdUtaip4VcDdpZ4kha7wBWZrbVKCIZg= -golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20221013171732-95e765b1cc43 h1:OK7RB6t2WQX54srQQYSXMW8dF5C6/8+oA/s5QBmmto4= -golang.org/x/sys v0.0.0-20221013171732-95e765b1cc43/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.1.0 h1:kunALQeHf1/185U1i0GOB/fy1IPRDDpuoOOqRReG57U= +golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= -golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 h1:JGgROgKl9N8DuW20oFS5gxc+lE67/N3FcwmBPMe7ArY= -golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= +golang.org/x/term v0.1.0 h1:g6Z6vPFA9dYBAF7DWcH6sCcOntplXsDKcliusYijMlw= +golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= @@ -675,8 +673,9 @@ golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.5/go.mod 
h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= -golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk= golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= +golang.org/x/text v0.4.0 h1:BrVqGRd7+k1DiOgtnFvAkoQEWQvBc25ouMJM6429SFg= +golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20201208040808-7e3f01d25324/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= From fa489cab3f803cb36afb3ec9c5f55a327d97e299 Mon Sep 17 00:00:00 2001 From: SHIVAM SHARMA Date: Thu, 9 Mar 2023 19:51:38 +0530 Subject: [PATCH 13/29] bump dep : github.com/Masterminds/goutils to v1.1.1 (#769) --- go.mod | 2 +- go.sum | 2 ++ 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/go.mod b/go.mod index 49f9261b7b..28669aceaa 100644 --- a/go.mod +++ b/go.mod @@ -91,7 +91,7 @@ require github.com/gammazero/deque v0.2.1 // indirect require ( github.com/Azure/azure-sdk-for-go/sdk/azcore v0.21.1 // indirect github.com/Azure/azure-sdk-for-go/sdk/internal v0.8.3 // indirect - github.com/Masterminds/goutils v1.1.0 // indirect + github.com/Masterminds/goutils v1.1.1 // indirect github.com/Masterminds/semver v1.5.0 // indirect github.com/Masterminds/sprig v2.22.0+incompatible // indirect github.com/StackExchange/wmi v0.0.0-20180116203802-5d049714c4a6 // indirect diff --git a/go.sum b/go.sum index ba5a3eec35..6591bdb2bd 100644 --- a/go.sum +++ b/go.sum @@ -35,6 +35,8 @@ github.com/JekaMas/workerpool v1.1.5 h1:xmrx2Zyft95CEGiEqzDxiawptCIRZQ0zZDhTGDFO github.com/JekaMas/workerpool v1.1.5/go.mod h1:IoDWPpwMcA27qbuugZKeBslDrgX09lVmksuh9sjzbhc= github.com/Masterminds/goutils v1.1.0 h1:zukEsf/1JZwCMgHiK3GZftabmxiCw4apj3a28RPBiVg= github.com/Masterminds/goutils v1.1.0/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU= +github.com/Masterminds/goutils v1.1.1 h1:5nUrii3FMTL5diU80unEVvNevw1nH4+ZV4DSLVJLSYI= +github.com/Masterminds/goutils v1.1.1/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU= github.com/Masterminds/semver v1.5.0 h1:H65muMkzWKEuNDnfl9d70GUjFniHKHRbFPGBuZ3QEww= github.com/Masterminds/semver v1.5.0/go.mod h1:MB6lktGJrhw8PrUyiEoblNEGEQ+RzHPF078ddwwvV3Y= github.com/Masterminds/sprig v2.22.0+incompatible h1:z4yfnGrZ7netVz+0EDJ0Wi+5VZCSYp4Z0m2dk6cEM60= From a96622e86e34fe2e35a17425db0a0cb4b7586cbb Mon Sep 17 00:00:00 2001 From: marcello33 Date: Thu, 9 Mar 2023 15:22:54 +0100 Subject: [PATCH 14/29] mardizzone/pos-1313: bump crypto dependency (#772) * dev: chg: bumd net dependency * dev: chg: bump crypto dependency * dev: chg: bump crypto dependency --- go.mod | 2 +- go.sum | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/go.mod b/go.mod index 28669aceaa..6a3a7a74d2 100644 --- a/go.mod +++ b/go.mod @@ -138,7 +138,7 @@ require ( go.opentelemetry.io/proto/otlp v0.10.0 // indirect golang.org/x/exp v0.0.0-20220722155223-a9213eeb770e // indirect golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 // indirect - golang.org/x/net v0.1.0 // indirect + golang.org/x/net v0.1.1-0.20221104162952-702349b0e862 // indirect golang.org/x/term v0.1.0 // indirect golang.org/x/xerrors v0.0.0-20220411194840-2f41105eb62f // indirect google.golang.org/genproto v0.0.0-20220725144611-272f38e5d71b // indirect diff --git 
a/go.sum b/go.sum index 6591bdb2bd..c729fe9d8f 100644 --- a/go.sum +++ b/go.sum @@ -604,8 +604,8 @@ golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM= golang.org/x/net v0.0.0-20210610132358-84b48f89b13b/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= -golang.org/x/net v0.1.0 h1:hZ/3BUoy5aId7sCpA/Tc5lt8DkFgdVS2onTpJsZ/fl0= -golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco= +golang.org/x/net v0.1.1-0.20221104162952-702349b0e862 h1:KrLJ+iz8J6j6VVr/OCfULAcK+xozUmWE43fKpMR4MlI= +golang.org/x/net v0.1.1-0.20221104162952-702349b0e862/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= From f0d8a9d24da93098f5c56023cffe6c7466ecf204 Mon Sep 17 00:00:00 2001 From: SHIVAM SHARMA Date: Thu, 9 Mar 2023 22:42:51 +0530 Subject: [PATCH 15/29] bump dep : golang.org/x/net to v0.8.0 (#771) --- go.mod | 14 +++++++------- go.sum | 30 ++++++++++++++++++++++-------- 2 files changed, 29 insertions(+), 15 deletions(-) diff --git a/go.mod b/go.mod index 6a3a7a74d2..6090fcb59b 100644 --- a/go.mod +++ b/go.mod @@ -71,11 +71,11 @@ require ( go.opentelemetry.io/otel/sdk v1.2.0 go.uber.org/goleak v1.1.12 golang.org/x/crypto v0.1.0 - golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4 - golang.org/x/sys v0.1.0 - golang.org/x/text v0.4.0 + golang.org/x/sync v0.1.0 + golang.org/x/sys v0.6.0 + golang.org/x/text v0.8.0 golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba - golang.org/x/tools v0.1.12 + golang.org/x/tools v0.6.0 gonum.org/v1/gonum v0.11.0 google.golang.org/grpc v1.48.0 google.golang.org/protobuf v1.28.0 @@ -137,9 +137,9 @@ require ( go.opentelemetry.io/otel/trace v1.2.0 go.opentelemetry.io/proto/otlp v0.10.0 // indirect golang.org/x/exp v0.0.0-20220722155223-a9213eeb770e // indirect - golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 // indirect - golang.org/x/net v0.1.1-0.20221104162952-702349b0e862 // indirect - golang.org/x/term v0.1.0 // indirect + golang.org/x/mod v0.8.0 // indirect + golang.org/x/net v0.8.0 // indirect + golang.org/x/term v0.6.0 // indirect golang.org/x/xerrors v0.0.0-20220411194840-2f41105eb62f // indirect google.golang.org/genproto v0.0.0-20220725144611-272f38e5d71b // indirect gopkg.in/yaml.v2 v2.4.0 // indirect diff --git a/go.sum b/go.sum index c729fe9d8f..e28a4deb1f 100644 --- a/go.sum +++ b/go.sum @@ -575,6 +575,8 @@ golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 h1:6zppjxzCulZykYSLyVDYbneBfbaBIQPYMevg0bEwv2s= golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= +golang.org/x/mod v0.8.0 h1:LUYupSeNrTNCGzR/hVBk2NHZO4hXcVaW1k4Qx7rjPx8= +golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net 
v0.0.0-20180811021610-c39426892332/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= @@ -604,8 +606,10 @@ golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM= golang.org/x/net v0.0.0-20210610132358-84b48f89b13b/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= -golang.org/x/net v0.1.1-0.20221104162952-702349b0e862 h1:KrLJ+iz8J6j6VVr/OCfULAcK+xozUmWE43fKpMR4MlI= -golang.org/x/net v0.1.1-0.20221104162952-702349b0e862/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco= +golang.org/x/net v0.0.0-20220728030405-41545e8bf201 h1:bvOltf3SADAfG05iRml8lAB3qjoEX5RCyN4K6G5v3N0= +golang.org/x/net v0.0.0-20220728030405-41545e8bf201/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk= +golang.org/x/net v0.8.0 h1:Zrh2ngAOFYneWTAIAPethzeaQLuHwhuBkuV6ZiRnUaQ= +golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= @@ -621,6 +625,8 @@ golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJ golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4 h1:uVc8UZUe6tr40fFVnUP5Oj+veunVezqYl9z7DYw9xzw= golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.1.0 h1:wsuoTGHzEhffawBOhz5CYhcrV4IdKZbEyZjBMuTp12o= +golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= @@ -662,12 +668,18 @@ golang.org/x/sys v0.0.0-20210420205809-ac73e9fd8988/go.mod h1:h1NjWce9XRLGQEsW7w golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210423185535-09eb48e85fd7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.1.0 h1:kunALQeHf1/185U1i0GOB/fy1IPRDDpuoOOqRReG57U= -golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10 h1:WIoqL4EROvwiPdUtaip4VcDdpZ4kha7wBWZrbVKCIZg= +golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20221013171732-95e765b1cc43 h1:OK7RB6t2WQX54srQQYSXMW8dF5C6/8+oA/s5QBmmto4= +golang.org/x/sys v0.0.0-20221013171732-95e765b1cc43/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.6.0 h1:MVltZSvRTcU2ljQOhs94SXPftV6DCNnZViHeQps87pQ= +golang.org/x/sys v0.6.0/go.mod 
h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= -golang.org/x/term v0.1.0 h1:g6Z6vPFA9dYBAF7DWcH6sCcOntplXsDKcliusYijMlw= -golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= +golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 h1:JGgROgKl9N8DuW20oFS5gxc+lE67/N3FcwmBPMe7ArY= +golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= +golang.org/x/term v0.6.0 h1:clScbb1cHjoCkyRbWwBEUZ5H/tIFu5TAXIqaZD0Gcjw= +golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= @@ -676,8 +688,8 @@ golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= -golang.org/x/text v0.4.0 h1:BrVqGRd7+k1DiOgtnFvAkoQEWQvBc25ouMJM6429SFg= -golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= +golang.org/x/text v0.8.0 h1:57P1ETyNKtuIjB4SRd15iJxuhj8Gc416Y78H3qgMh68= +golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20201208040808-7e3f01d25324/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= @@ -716,6 +728,8 @@ golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= golang.org/x/tools v0.1.12 h1:VveCTK38A2rkS8ZqFY25HIDFscX5X9OoEhJd3quQmXU= golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= +golang.org/x/tools v0.6.0 h1:BOw41kyTf3PuCW1pVQf8+Cyg8pMlkYB1oo9iJ6D/lKM= +golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= From d0a9e0db2a5e51437ddcf04043be487accfb5447 Mon Sep 17 00:00:00 2001 From: Jerry Date: Mon, 13 Mar 2023 02:38:04 -0700 Subject: [PATCH 16/29] Verify validator set against local contract on receiving an end-of-sprint block (#768) * Verify validator set against local contract on receiving an end-of-sprint block * Fix tests * Respect error returned by ParseValidators * Keep going back until a parent block presents --- consensus/bor/bor.go | 45 ++++++++++++++++++++++++++ consensus/bor/heimdall/span/spanner.go | 2 +- tests/bor/bor_test.go | 20 ++++++++++++ tests/bor/helper.go | 10 ++++++ 4 files changed, 76 insertions(+), 1 deletion(-) diff --git a/consensus/bor/bor.go 
b/consensus/bor/bor.go index 5b32263762..cffe692843 100644 --- a/consensus/bor/bor.go +++ b/consensus/bor/bor.go @@ -298,6 +298,14 @@ func (c *Bor) VerifyHeader(chain consensus.ChainHeaderReader, header *types.Head return c.verifyHeader(chain, header, nil) } +func (c *Bor) GetSpanner() Spanner { + return c.spanner +} + +func (c *Bor) SetSpanner(spanner Spanner) { + c.spanner = spanner +} + // VerifyHeaders is similar to VerifyHeader, but verifies a batch of headers. The // method returns a quit channel to abort the operations and a results channel to // retrieve the async verifications (the order is that of the input slice). @@ -454,6 +462,43 @@ func (c *Bor) verifyCascadingFields(chain consensus.ChainHeaderReader, header *t return err } + // Verify the validator list match the local contract + if IsSprintStart(number+1, c.config.CalculateSprint(number)) { + parentBlockNumber := number - 1 + + var newValidators []*valset.Validator + + var err error + + for newValidators, err = c.spanner.GetCurrentValidators(context.Background(), chain.GetHeaderByNumber(parentBlockNumber).Hash(), number+1); err != nil && parentBlockNumber > 0; { + parentBlockNumber-- + } + + if err != nil { + return errUnknownValidators + } + + sort.Sort(valset.ValidatorsByAddress(newValidators)) + + headerVals, err := valset.ParseValidators(header.Extra[extraVanity : len(header.Extra)-extraSeal]) + + if err != nil { + return err + } + + sort.Sort(valset.ValidatorsByAddress(headerVals)) + + if len(newValidators) != len(headerVals) { + return errInvalidSpanValidators + } + + for i, val := range newValidators { + if !bytes.Equal(val.HeaderBytes(), headerVals[i].HeaderBytes()) { + return errInvalidSpanValidators + } + } + } + // verify the validator list in the last sprint block if IsSprintStart(number, c.config.CalculateSprint(number)) { parentValidatorBytes := parent.Extra[extraVanity : len(parent.Extra)-extraSeal] diff --git a/consensus/bor/heimdall/span/spanner.go b/consensus/bor/heimdall/span/spanner.go index e0f2d66c6b..5001b0f738 100644 --- a/consensus/bor/heimdall/span/spanner.go +++ b/consensus/bor/heimdall/span/spanner.go @@ -116,7 +116,7 @@ func (c *ChainSpanner) GetCurrentValidators(ctx context.Context, headerHash comm Data: &msgData, }, blockNr, nil) if err != nil { - panic(err) + return nil, err } var ( diff --git a/tests/bor/bor_test.go b/tests/bor/bor_test.go index 2dc20a915e..e6e8188ce0 100644 --- a/tests/bor/bor_test.go +++ b/tests/bor/bor_test.go @@ -392,12 +392,18 @@ func TestInsertingSpanSizeBlocks(t *testing.T) { currentValidators := []*valset.Validator{valset.NewValidator(addr, 10)} + spanner := getMockedSpanner(t, currentValidators) + _bor.SetSpanner(spanner) + // Insert sprintSize # of blocks so that span is fetched at the start of a new sprint for i := uint64(1); i <= spanSize; i++ { block = buildNextBlock(t, _bor, chain, block, nil, init.genesis.Config.Bor, nil, currentValidators) insertNewBlock(t, chain, block) } + spanner = getMockedSpanner(t, currentSpan.ValidatorSet.Validators) + _bor.SetSpanner(spanner) + validators, err := _bor.GetCurrentValidators(context.Background(), block.Hash(), spanSize) // check validator set at the first block of new span if err != nil { t.Fatalf("%s", err) @@ -427,6 +433,9 @@ func TestFetchStateSyncEvents(t *testing.T) { currentValidators := []*valset.Validator{valset.NewValidator(addr, 10)} + spanner := getMockedSpanner(t, currentValidators) + _bor.SetSpanner(spanner) + // Insert sprintSize # of blocks so that span is fetched at the start of a new sprint for i := 
uint64(1); i < sprintSize; i++ { if IsSpanEnd(i) { @@ -528,6 +537,9 @@ func TestFetchStateSyncEvents_2(t *testing.T) { currentValidators = []*valset.Validator{valset.NewValidator(addr, 10)} } + spanner := getMockedSpanner(t, currentValidators) + _bor.SetSpanner(spanner) + block = buildNextBlock(t, _bor, chain, block, nil, init.genesis.Config.Bor, nil, currentValidators) insertNewBlock(t, chain, block) } @@ -554,6 +566,9 @@ func TestFetchStateSyncEvents_2(t *testing.T) { currentValidators = []*valset.Validator{valset.NewValidator(addr, 10)} } + spanner := getMockedSpanner(t, currentValidators) + _bor.SetSpanner(spanner) + block = buildNextBlock(t, _bor, chain, block, nil, init.genesis.Config.Bor, nil, res.Result.ValidatorSet.Validators) insertNewBlock(t, chain, block) } @@ -580,6 +595,8 @@ func TestOutOfTurnSigning(t *testing.T) { h.EXPECT().Close().AnyTimes() + spanner := getMockedSpanner(t, heimdallSpan.ValidatorSet.Validators) + _bor.SetSpanner(spanner) _bor.SetHeimdallClient(h) db := init.ethereum.ChainDb() @@ -1082,6 +1099,9 @@ func TestJaipurFork(t *testing.T) { res, _ := loadSpanFromFile(t) + spanner := getMockedSpanner(t, res.Result.ValidatorSet.Validators) + _bor.SetSpanner(spanner) + for i := uint64(1); i < sprintSize; i++ { block = buildNextBlock(t, _bor, chain, block, nil, init.genesis.Config.Bor, nil, res.Result.ValidatorSet.Validators) insertNewBlock(t, chain, block) diff --git a/tests/bor/helper.go b/tests/bor/helper.go index e28076a3b1..e1b83858b3 100644 --- a/tests/bor/helper.go +++ b/tests/bor/helper.go @@ -352,6 +352,16 @@ func getMockedHeimdallClient(t *testing.T, heimdallSpan *span.HeimdallSpan) (*mo return h, ctrl } +func getMockedSpanner(t *testing.T, validators []*valset.Validator) *bor.MockSpanner { + t.Helper() + + spanner := bor.NewMockSpanner(gomock.NewController(t)) + spanner.EXPECT().GetCurrentValidators(gomock.Any(), gomock.Any(), gomock.Any()).Return(validators, nil).AnyTimes() + spanner.EXPECT().GetCurrentSpan(gomock.Any(), gomock.Any()).Return(&span.Span{0, 0, 0}, nil).AnyTimes() + spanner.EXPECT().CommitSpan(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(nil).AnyTimes() + return spanner +} + func generateFakeStateSyncEvents(sample *clerk.EventRecordWithTime, count int) []*clerk.EventRecordWithTime { events := make([]*clerk.EventRecordWithTime, count) event := *sample From 0d29c36734c5737f42e77c7e4dd541c22c76e147 Mon Sep 17 00:00:00 2001 From: Manav Darji Date: Tue, 14 Mar 2023 13:20:21 +0530 Subject: [PATCH 17/29] core/txpool: implement DoS defenses from geth (#778) --- core/tx_list.go | 51 ++++++++-- core/tx_pool.go | 85 ++++++++++++++-- core/tx_pool_test.go | 31 ++++-- core/txpool2_test.go | 229 +++++++++++++++++++++++++++++++++++++++++++ 4 files changed, 371 insertions(+), 25 deletions(-) create mode 100644 core/txpool2_test.go diff --git a/core/tx_list.go b/core/tx_list.go index f141a03bbd..14c409b88e 100644 --- a/core/tx_list.go +++ b/core/tx_list.go @@ -253,17 +253,19 @@ type txList struct { strict bool // Whether nonces are strictly continuous or not txs *txSortedMap // Heap indexed sorted hash map of the transactions - costcap *big.Int // Price of the highest costing transaction (reset only if exceeds balance) - gascap uint64 // Gas limit of the highest spending transaction (reset only if exceeds block limit) + costcap *big.Int // Price of the highest costing transaction (reset only if exceeds balance) + gascap uint64 // Gas limit of the highest spending transaction (reset only if exceeds block limit) + totalcost 
*big.Int // Total cost of all transactions in the list } // newTxList create a new transaction list for maintaining nonce-indexable fast, // gapped, sortable transaction lists. func newTxList(strict bool) *txList { return &txList{ - strict: strict, - txs: newTxSortedMap(), - costcap: new(big.Int), + strict: strict, + txs: newTxSortedMap(), + costcap: new(big.Int), + totalcost: new(big.Int), } } @@ -301,7 +303,13 @@ func (l *txList) Add(tx *types.Transaction, priceBump uint64) (bool, *types.Tran if tx.GasFeeCapIntCmp(thresholdFeeCap) < 0 || tx.GasTipCapIntCmp(thresholdTip) < 0 { return false, nil } + // Old is being replaced, subtract old cost + l.subTotalCost([]*types.Transaction{old}) } + + // Add new tx cost to totalcost + l.totalcost.Add(l.totalcost, tx.Cost()) + // Otherwise overwrite the old transaction with the current one l.txs.Put(tx) if cost := tx.Cost(); l.costcap.Cmp(cost) < 0 { @@ -317,7 +325,10 @@ func (l *txList) Add(tx *types.Transaction, priceBump uint64) (bool, *types.Tran // provided threshold. Every removed transaction is returned for any post-removal // maintenance. func (l *txList) Forward(threshold uint64) types.Transactions { - return l.txs.Forward(threshold) + txs := l.txs.Forward(threshold) + l.subTotalCost(txs) + + return txs } // Filter removes all transactions from the list with a cost or gas limit higher @@ -356,6 +367,9 @@ func (l *txList) Filter(costLimit *big.Int, gasLimit uint64) (types.Transactions } invalids = l.txs.filter(func(tx *types.Transaction) bool { return tx.Nonce() > lowest }) } + // Reset total cost + l.subTotalCost(removed) + l.subTotalCost(invalids) l.txs.reheap() return removed, invalids } @@ -363,7 +377,10 @@ func (l *txList) Filter(costLimit *big.Int, gasLimit uint64) (types.Transactions // Cap places a hard limit on the number of items, returning all transactions // exceeding that limit. func (l *txList) Cap(threshold int) types.Transactions { - return l.txs.Cap(threshold) + txs := l.txs.Cap(threshold) + l.subTotalCost(txs) + + return txs } // Remove deletes a transaction from the maintained list, returning whether the @@ -375,9 +392,14 @@ func (l *txList) Remove(tx *types.Transaction) (bool, types.Transactions) { if removed := l.txs.Remove(nonce); !removed { return false, nil } + + l.subTotalCost([]*types.Transaction{tx}) // In strict mode, filter out non-executable transactions if l.strict { - return true, l.txs.Filter(func(tx *types.Transaction) bool { return tx.Nonce() > nonce }) + txs := l.txs.Filter(func(tx *types.Transaction) bool { return tx.Nonce() > nonce }) + l.subTotalCost(txs) + + return true, txs } return true, nil } @@ -390,7 +412,10 @@ func (l *txList) Remove(tx *types.Transaction) (bool, types.Transactions) { // prevent getting into and invalid state. This is not something that should ever // happen but better to be self correcting than failing! func (l *txList) Ready(start uint64) types.Transactions { - return l.txs.Ready(start) + txs := l.txs.Ready(start) + l.subTotalCost(txs) + + return txs } // Len returns the length of the transaction list. @@ -416,6 +441,14 @@ func (l *txList) LastElement() *types.Transaction { return l.txs.LastElement() } +// subTotalCost subtracts the cost of the given transactions from the +// total cost of all transactions. 
+func (l *txList) subTotalCost(txs []*types.Transaction) { + for _, tx := range txs { + l.totalcost.Sub(l.totalcost, tx.Cost()) + } +} + // priceHeap is a heap.Interface implementation over transactions for retrieving // price-sorted transactions to discard when the pool fills up. If baseFee is set // then the heap is sorted based on the effective tip based on the given base fee. diff --git a/core/tx_pool.go b/core/tx_pool.go index 3d3f01eecb..424f1d67dd 100644 --- a/core/tx_pool.go +++ b/core/tx_pool.go @@ -17,6 +17,7 @@ package core import ( + "container/heap" "errors" "fmt" "math" @@ -86,6 +87,14 @@ var ( // than some meaningful limit a user might use. This is not a consensus error // making the transaction invalid, rather a DOS protection. ErrOversizedData = errors.New("oversized data") + + // ErrFutureReplacePending is returned if a future transaction replaces a pending + // transaction. Future transactions should only be able to replace other future transactions. + ErrFutureReplacePending = errors.New("future transaction tries to replace pending") + + // ErrOverdraft is returned if a transaction would cause the senders balance to go negative + // thus invalidating a potential large number of transactions. + ErrOverdraft = errors.New("transaction would cause overdraft") ) var ( @@ -645,9 +654,24 @@ func (pool *TxPool) validateTx(tx *types.Transaction, local bool) error { } // Transactor should have enough funds to cover the costs // cost == V + GP * GL - if pool.currentState.GetBalance(from).Cmp(tx.Cost()) < 0 { + balance := pool.currentState.GetBalance(from) + if balance.Cmp(tx.Cost()) < 0 { return ErrInsufficientFunds } + // Verify that replacing transactions will not result in overdraft + list := pool.pending[from] + if list != nil { // Sender already has pending txs + sum := new(big.Int).Add(tx.Cost(), list.totalcost) + if repl := list.txs.Get(tx.Nonce()); repl != nil { + // Deduct the cost of a transaction replaced by this + sum.Sub(sum, repl.Cost()) + } + + if balance.Cmp(sum) < 0 { + log.Trace("Replacing transactions would overdraft", "sender", from, "balance", pool.currentState.GetBalance(from), "required", sum) + return ErrOverdraft + } + } // Ensure the transaction has more gas than the basic tx fee. intrGas, err := IntrinsicGas(tx.Data(), tx.AccessList(), tx.To() == nil, true, pool.istanbul) if err != nil { @@ -684,6 +708,10 @@ func (pool *TxPool) add(tx *types.Transaction, local bool) (replaced bool, err e invalidTxMeter.Mark(1) return false, err } + + // already validated by this point + from, _ := types.Sender(pool.signer, tx) + // If the transaction pool is full, discard underpriced transactions if uint64(pool.all.Slots()+numSlots(tx)) > pool.config.GlobalSlots+pool.config.GlobalQueue { // If the new transaction is underpriced, don't accept it @@ -692,6 +720,7 @@ func (pool *TxPool) add(tx *types.Transaction, local bool) (replaced bool, err e underpricedTxMeter.Mark(1) return false, ErrUnderpriced } + // We're about to replace a transaction. The reorg does a more thorough // analysis of what to remove and how, but it runs async. 
We don't want to // do too many replacements between reorg-runs, so we cap the number of @@ -712,17 +741,39 @@ func (pool *TxPool) add(tx *types.Transaction, local bool) (replaced bool, err e overflowedTxMeter.Mark(1) return false, ErrTxPoolOverflow } - // Bump the counter of rejections-since-reorg - pool.changesSinceReorg += len(drop) + // If the new transaction is a future transaction it should never churn pending transactions + if pool.isFuture(from, tx) { + var replacesPending bool + + for _, dropTx := range drop { + dropSender, _ := types.Sender(pool.signer, dropTx) + if list := pool.pending[dropSender]; list != nil && list.Overlaps(dropTx) { + replacesPending = true + break + } + } + // Add all transactions back to the priced queue + if replacesPending { + for _, dropTx := range drop { + heap.Push(&pool.priced.urgent, dropTx) + } + + log.Trace("Discarding future transaction replacing pending tx", "hash", hash) + + return false, ErrFutureReplacePending + } + } // Kick out the underpriced remote transactions. for _, tx := range drop { log.Trace("Discarding freshly underpriced transaction", "hash", tx.Hash(), "gasTipCap", tx.GasTipCap(), "gasFeeCap", tx.GasFeeCap()) underpricedTxMeter.Mark(1) - pool.removeTx(tx.Hash(), false) + + dropped := pool.removeTx(tx.Hash(), false) + pool.changesSinceReorg += dropped } } + // Try to replace an existing transaction in the pending pool - from, _ := types.Sender(pool.signer, tx) // already validated if list := pool.pending[from]; list != nil && list.Overlaps(tx) { // Nonce already pending, check if required price bump is met inserted, old := list.Add(tx, pool.config.PriceBump) @@ -766,6 +817,20 @@ func (pool *TxPool) add(tx *types.Transaction, local bool) (replaced bool, err e return replaced, nil } +// isFuture reports whether the given transaction is immediately executable. +func (pool *TxPool) isFuture(from common.Address, tx *types.Transaction) bool { + list := pool.pending[from] + if list == nil { + return pool.pendingNonces.get(from) != tx.Nonce() + } + // Sender has pending transactions. + if old := list.txs.Get(tx.Nonce()); old != nil { + return false // It replaces a pending transaction. + } + // Not replacing, check if parent nonce exists in pending. + return list.txs.Get(tx.Nonce()-1) == nil +} + // enqueueTx inserts a new transaction into the non-executable transaction queue. // // Note, this method assumes the pool lock is held! @@ -1013,11 +1078,12 @@ func (pool *TxPool) Has(hash common.Hash) bool { // removeTx removes a single transaction from the queue, moving all subsequent // transactions back to the future queue. -func (pool *TxPool) removeTx(hash common.Hash, outofbound bool) { +// Returns the number of transactions removed from the pending queue. +func (pool *TxPool) removeTx(hash common.Hash, outofbound bool) int { // Fetch the transaction we wish to delete tx := pool.all.Get(hash) if tx == nil { - return + return 0 } addr, _ := types.Sender(pool.signer, tx) // already validated during insertion @@ -1045,7 +1111,8 @@ func (pool *TxPool) removeTx(hash common.Hash, outofbound bool) { pool.pendingNonces.setIfLower(addr, tx.Nonce()) // Reduce the pending counter pendingGauge.Dec(int64(1 + len(invalids))) - return + + return 1 + len(invalids) } } // Transaction is in the future queue @@ -1059,6 +1126,8 @@ func (pool *TxPool) removeTx(hash common.Hash, outofbound bool) { delete(pool.beats, addr) } } + + return 0 } // requestReset requests a pool reset to the new head block. 
diff --git a/core/tx_pool_test.go b/core/tx_pool_test.go index 664ca6c9d4..883a65a23f 100644 --- a/core/tx_pool_test.go +++ b/core/tx_pool_test.go @@ -170,6 +170,10 @@ func validateTxPoolInternals(pool *TxPool) error { if nonce := pool.pendingNonces.get(addr); nonce != last+1 { return fmt.Errorf("pending nonce mismatch: have %v, want %v", nonce, last+1) } + + if txs.totalcost.Cmp(common.Big0) < 0 { + return fmt.Errorf("totalcost went negative: %v", txs.totalcost) + } } return nil } @@ -1128,7 +1132,7 @@ func TestTransactionPendingLimiting(t *testing.T) { defer pool.Stop() account := crypto.PubkeyToAddress(key.PublicKey) - testAddBalance(pool, account, big.NewInt(1000000)) + testAddBalance(pool, account, big.NewInt(1000000000000)) // Keep track of transaction events to ensure all executables get announced events := make(chan NewTxsEvent, testTxPoolConfig.AccountQueue+5) @@ -1607,7 +1611,7 @@ func TestTransactionPoolRepricingKeepsLocals(t *testing.T) { keys := make([]*ecdsa.PrivateKey, 3) for i := 0; i < len(keys); i++ { keys[i], _ = crypto.GenerateKey() - testAddBalance(pool, crypto.PubkeyToAddress(keys[i].PublicKey), big.NewInt(1000*1000000)) + testAddBalance(pool, crypto.PubkeyToAddress(keys[i].PublicKey), big.NewInt(100000*1000000)) } // Create transaction (both pending and queued) with a linearly growing gasprice for i := uint64(0); i < 500; i++ { @@ -1686,7 +1690,7 @@ func TestTransactionPoolUnderpricing(t *testing.T) { defer sub.Unsubscribe() // Create a number of test accounts and fund them - keys := make([]*ecdsa.PrivateKey, 4) + keys := make([]*ecdsa.PrivateKey, 5) for i := 0; i < len(keys); i++ { keys[i], _ = crypto.GenerateKey() testAddBalance(pool, crypto.PubkeyToAddress(keys[i].PublicKey), big.NewInt(1000000)) @@ -1722,6 +1726,10 @@ func TestTransactionPoolUnderpricing(t *testing.T) { if err := pool.AddRemote(pricedTransaction(0, 100000, big.NewInt(1), keys[1])); err != ErrUnderpriced { t.Fatalf("adding underpriced pending transaction error mismatch: have %v, want %v", err, ErrUnderpriced) } + // Replace a future transaction with a future transaction + if err := pool.AddRemote(pricedTransaction(1, 100000, big.NewInt(2), keys[1])); err != nil { // +K1:1 => -K1:1 => Pend K0:0, K0:1, K2:0; Que K1:1 + t.Fatalf("failed to add well priced transaction: %v", err) + } // Ensure that adding high priced transactions drops cheap ones, but not own if err := pool.AddRemote(pricedTransaction(0, 100000, big.NewInt(3), keys[1])); err != nil { // +K1:0 => -K1:1 => Pend K0:0, K0:1, K1:0, K2:0; Que - t.Fatalf("failed to add well priced transaction: %v", err) @@ -1732,6 +1740,10 @@ func TestTransactionPoolUnderpricing(t *testing.T) { if err := pool.AddRemote(pricedTransaction(3, 100000, big.NewInt(5), keys[1])); err != nil { // +K1:3 => -K0:1 => Pend K1:0, K2:0; Que K1:2 K1:3 t.Fatalf("failed to add well priced transaction: %v", err) } + // Ensure that replacing a pending transaction with a future transaction fails + if err := pool.AddRemote(pricedTransaction(5, 100000, big.NewInt(6), keys[1])); err != ErrFutureReplacePending { + t.Fatalf("adding future replace transaction error mismatch: have %v, want %v", err, ErrFutureReplacePending) + } pending, queued = pool.Stats() if pending != 2 { t.Fatalf("pending transactions mismatched: have %d, want %d", pending, 2) @@ -1739,7 +1751,8 @@ func TestTransactionPoolUnderpricing(t *testing.T) { if queued != 2 { t.Fatalf("queued transactions mismatched: have %d, want %d", queued, 2) } - if err := validateEvents(events, 1); err != nil { + + if err := 
validateEvents(events, 2); err != nil { t.Fatalf("additional event firing failed: %v", err) } if err := validateTxPoolInternals(pool); err != nil { @@ -1901,11 +1914,12 @@ func TestTransactionPoolUnderpricingDynamicFee(t *testing.T) { t.Fatalf("failed to add well priced transaction: %v", err) } - tx = pricedTransaction(2, 100000, big.NewInt(3), keys[1]) + tx = pricedTransaction(1, 100000, big.NewInt(3), keys[1]) if err := pool.AddRemote(tx); err != nil { // +K1:2, -K0:1 => Pend K0:0 K1:0, K2:0; Que K1:2 t.Fatalf("failed to add well priced transaction: %v", err) } - tx = dynamicFeeTx(3, 100000, big.NewInt(4), big.NewInt(1), keys[1]) + + tx = dynamicFeeTx(2, 100000, big.NewInt(4), big.NewInt(1), keys[1]) if err := pool.AddRemote(tx); err != nil { // +K1:3, -K1:0 => Pend K0:0 K2:0; Que K1:2 K1:3 t.Fatalf("failed to add well priced transaction: %v", err) } @@ -1916,7 +1930,8 @@ func TestTransactionPoolUnderpricingDynamicFee(t *testing.T) { if queued != 2 { t.Fatalf("queued transactions mismatched: have %d, want %d", queued, 2) } - if err := validateEvents(events, 1); err != nil { + + if err := validateEvents(events, 2); err != nil { t.Fatalf("additional event firing failed: %v", err) } if err := validateTxPoolInternals(pool); err != nil { @@ -2510,7 +2525,7 @@ func benchmarkPoolBatchInsert(b *testing.B, size int, local bool) { defer pool.Stop() account := crypto.PubkeyToAddress(key.PublicKey) - testAddBalance(pool, account, big.NewInt(1000000)) + testAddBalance(pool, account, big.NewInt(1000000000000000000)) batches := make([]types.Transactions, b.N) for i := 0; i < b.N; i++ { diff --git a/core/txpool2_test.go b/core/txpool2_test.go new file mode 100644 index 0000000000..45f784f343 --- /dev/null +++ b/core/txpool2_test.go @@ -0,0 +1,229 @@ +// Copyright 2023 The go-ethereum Authors +// This file is part of the go-ethereum library. +// +// The go-ethereum library is free software: you can redistribute it and/or modify +// it under the terms of the GNU Lesser General Public License as published by +// the Free Software Foundation, either version 3 of the License, or +// (at your option) any later version. +// +// The go-ethereum library is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU Lesser General Public License for more details. +// +// You should have received a copy of the GNU Lesser General Public License +// along with the go-ethereum library. If not, see . 
+package core + +import ( + "crypto/ecdsa" + "math/big" + "testing" + + "github.com/ethereum/go-ethereum/common" + "github.com/ethereum/go-ethereum/core/rawdb" + "github.com/ethereum/go-ethereum/core/state" + "github.com/ethereum/go-ethereum/core/types" + "github.com/ethereum/go-ethereum/crypto" + "github.com/ethereum/go-ethereum/event" +) + +func pricedValuedTransaction(nonce uint64, value int64, gaslimit uint64, gasprice *big.Int, key *ecdsa.PrivateKey) *types.Transaction { + tx, _ := types.SignTx(types.NewTransaction(nonce, common.Address{}, big.NewInt(value), gaslimit, gasprice, nil), types.HomesteadSigner{}, key) + return tx +} + +func count(t *testing.T, pool *TxPool) (pending int, queued int) { + t.Helper() + + pending, queued = pool.stats() + + if err := validateTxPoolInternals(pool); err != nil { + t.Fatalf("pool internal state corrupted: %v", err) + } + + return pending, queued +} + +func fillPool(t *testing.T, pool *TxPool) { + t.Helper() + // Create a number of test accounts, fund them and make transactions + executableTxs := types.Transactions{} + nonExecutableTxs := types.Transactions{} + + for i := 0; i < 384; i++ { + key, _ := crypto.GenerateKey() + pool.currentState.AddBalance(crypto.PubkeyToAddress(key.PublicKey), big.NewInt(10000000000)) + // Add executable ones + for j := 0; j < int(pool.config.AccountSlots); j++ { + executableTxs = append(executableTxs, pricedTransaction(uint64(j), 100000, big.NewInt(300), key)) + } + } + // Import the batch and verify that limits have been enforced + pool.AddRemotesSync(executableTxs) + pool.AddRemotesSync(nonExecutableTxs) + pending, queued := pool.Stats() + slots := pool.all.Slots() + // sanity-check that the test prerequisites are ok (pending full) + if have, want := pending, slots; have != want { + t.Fatalf("have %d, want %d", have, want) + } + + if have, want := queued, 0; have != want { + t.Fatalf("have %d, want %d", have, want) + } + + t.Logf("pool.config: GlobalSlots=%d, GlobalQueue=%d\n", pool.config.GlobalSlots, pool.config.GlobalQueue) + t.Logf("pending: %d queued: %d, all: %d\n", pending, queued, slots) +} + +// Tests that if a batch high-priced of non-executables arrive, they do not kick out +// executable transactions +func TestTransactionFutureAttack(t *testing.T) { + t.Parallel() + + // Create the pool to test the limit enforcement with + statedb, _ := state.New(common.Hash{}, state.NewDatabase(rawdb.NewMemoryDatabase()), nil) + blockchain := &testBlockChain{1000000, statedb, new(event.Feed)} + config := testTxPoolConfig + config.GlobalQueue = 100 + config.GlobalSlots = 100 + pool := NewTxPool(config, eip1559Config, blockchain) + + defer pool.Stop() + fillPool(t, pool) + pending, _ := pool.Stats() + // Now, future transaction attack starts, let's add a bunch of expensive non-executables, and see if the pending-count drops + { + key, _ := crypto.GenerateKey() + pool.currentState.AddBalance(crypto.PubkeyToAddress(key.PublicKey), big.NewInt(100000000000)) + futureTxs := types.Transactions{} + for j := 0; j < int(pool.config.GlobalSlots+pool.config.GlobalQueue); j++ { + futureTxs = append(futureTxs, pricedTransaction(1000+uint64(j), 100000, big.NewInt(500), key)) + } + for i := 0; i < 5; i++ { + pool.AddRemotesSync(futureTxs) + newPending, newQueued := count(t, pool) + t.Logf("pending: %d queued: %d, all: %d\n", newPending, newQueued, pool.all.Slots()) + } + } + + newPending, _ := pool.Stats() + // Pending should not have been touched + if have, want := newPending, pending; have < want { + t.Errorf("wrong pending-count, 
have %d, want %d (GlobalSlots: %d)", + have, want, pool.config.GlobalSlots) + } +} + +// Tests that if a batch high-priced of non-executables arrive, they do not kick out +// executable transactions +func TestTransactionFuture1559(t *testing.T) { + t.Parallel() + // Create the pool to test the pricing enforcement with + statedb, _ := state.New(common.Hash{}, state.NewDatabase(rawdb.NewMemoryDatabase()), nil) + blockchain := &testBlockChain{1000000, statedb, new(event.Feed)} + pool := NewTxPool(testTxPoolConfig, eip1559Config, blockchain) + + defer pool.Stop() + + // Create a number of test accounts, fund them and make transactions + fillPool(t, pool) + pending, _ := pool.Stats() + + // Now, future transaction attack starts, let's add a bunch of expensive non-executables, and see if the pending-count drops + { + key, _ := crypto.GenerateKey() + pool.currentState.AddBalance(crypto.PubkeyToAddress(key.PublicKey), big.NewInt(100000000000)) + futureTxs := types.Transactions{} + for j := 0; j < int(pool.config.GlobalSlots+pool.config.GlobalQueue); j++ { + futureTxs = append(futureTxs, dynamicFeeTx(1000+uint64(j), 100000, big.NewInt(200), big.NewInt(101), key)) + } + pool.AddRemotesSync(futureTxs) + } + + newPending, _ := pool.Stats() + // Pending should not have been touched + if have, want := newPending, pending; have != want { + t.Errorf("Wrong pending-count, have %d, want %d (GlobalSlots: %d)", + have, want, pool.config.GlobalSlots) + } +} + +// Tests that if a batch of balance-overdraft txs arrive, they do not kick out +// executable transactions +func TestTransactionZAttack(t *testing.T) { + t.Parallel() + // Create the pool to test the pricing enforcement with + statedb, _ := state.New(common.Hash{}, state.NewDatabase(rawdb.NewMemoryDatabase()), nil) + blockchain := &testBlockChain{1000000, statedb, new(event.Feed)} + pool := NewTxPool(testTxPoolConfig, eip1559Config, blockchain) + + defer pool.Stop() + + // Create a number of test accounts, fund them and make transactions + fillPool(t, pool) + + countInvalidPending := func() int { + t.Helper() + + var ivpendingNum int + + pendingtxs, _ := pool.Content() + + for account, txs := range pendingtxs { + cur_balance := new(big.Int).Set(pool.currentState.GetBalance(account)) + for _, tx := range txs { + if cur_balance.Cmp(tx.Value()) <= 0 { + ivpendingNum++ + } else { + cur_balance.Sub(cur_balance, tx.Value()) + } + } + } + + if err := validateTxPoolInternals(pool); err != nil { + t.Fatalf("pool internal state corrupted: %v", err) + } + + return ivpendingNum + } + ivPending := countInvalidPending() + t.Logf("invalid pending: %d\n", ivPending) + + // Now, DETER-Z attack starts, let's add a bunch of expensive non-executables (from N accounts) along with balance-overdraft txs (from one account), and see if the pending-count drops + for j := 0; j < int(pool.config.GlobalQueue); j++ { + futureTxs := types.Transactions{} + key, _ := crypto.GenerateKey() + pool.currentState.AddBalance(crypto.PubkeyToAddress(key.PublicKey), big.NewInt(100000000000)) + futureTxs = append(futureTxs, pricedTransaction(1000+uint64(j), 21000, big.NewInt(500), key)) + pool.AddRemotesSync(futureTxs) + } + + overDraftTxs := types.Transactions{} + { + key, _ := crypto.GenerateKey() + pool.currentState.AddBalance(crypto.PubkeyToAddress(key.PublicKey), big.NewInt(100000000000)) + for j := 0; j < int(pool.config.GlobalSlots); j++ { + overDraftTxs = append(overDraftTxs, pricedValuedTransaction(uint64(j), 60000000000, 21000, big.NewInt(500), key)) + } + } + 
pool.AddRemotesSync(overDraftTxs) + pool.AddRemotesSync(overDraftTxs) + pool.AddRemotesSync(overDraftTxs) + pool.AddRemotesSync(overDraftTxs) + pool.AddRemotesSync(overDraftTxs) + + newPending, newQueued := count(t, pool) + newIvPending := countInvalidPending() + + t.Logf("pool.all.Slots(): %d\n", pool.all.Slots()) + t.Logf("pending: %d queued: %d, all: %d\n", newPending, newQueued, pool.all.Slots()) + t.Logf("invalid pending: %d\n", newIvPending) + + // Pending should not have been touched + if newIvPending != ivPending { + t.Errorf("Wrong invalid pending-count, have %d, want %d (GlobalSlots: %d, queued: %d)", + newIvPending, ivPending, pool.config.GlobalSlots, newQueued) + } +} From 3ed22568fd39b4e690ed45a90d9cd44370d01e26 Mon Sep 17 00:00:00 2001 From: marcello33 Date: Tue, 14 Mar 2023 12:40:23 +0100 Subject: [PATCH 18/29] Hotfixes and deps bump (#776) * dev: chg: bump deps * internal/cli/server, rpc: lower down http readtimeout to 10s * dev: chg: get p2p adapter * dev: chg: lower down jsonrpc readtimeout to 10s * cherry-pick txpool optimisation changes * add check for empty lists in txpool (#704) * add check * linters * core, miner: add empty instrumentation name for tracing --------- Co-authored-by: Raneet Debnath Co-authored-by: SHIVAM SHARMA Co-authored-by: Evgeny Danilenko <6655321@bk.ru> Co-authored-by: Manav Darji --- Makefile | 7 +- builder/files/config.toml | 2 +- cmd/evm/internal/t8ntool/transaction.go | 3 +- common/debug/debug.go | 24 + common/math/big.go | 29 +- common/math/uint.go | 23 + common/time.go | 9 + common/tracing/context.go | 10 +- consensus/bor/bor.go | 8 +- consensus/misc/eip1559.go | 53 + core/blockchain.go | 8 +- core/tx_journal.go | 17 +- core/tx_list.go | 236 ++- core/tx_list_test.go | 9 +- core/tx_pool.go | 1067 ++++++++--- core/tx_pool_test.go | 1683 ++++++++++++++++- core/types/access_list_tx.go | 65 +- core/types/dynamic_fee_tx.go | 71 +- core/types/legacy_tx.go | 60 +- core/types/transaction.go | 180 +- core/types/transaction_signing.go | 13 +- core/types/transaction_test.go | 32 +- docs/cli/example_config.toml | 2 +- eth/api_backend.go | 11 +- eth/bor_checkpoint_verifier.go | 1 + eth/handler.go | 3 +- eth/handler_test.go | 3 +- eth/sync.go | 7 +- go.mod | 4 +- go.sum | 19 +- internal/cli/server/config.go | 2 +- internal/cli/server/pprof/pprof.go | 22 + internal/ethapi/api.go | 16 + internal/web3ext/web3ext.go | 5 + les/handler_test.go | 2 +- les/server_requests.go | 22 +- miner/worker.go | 194 +- .../templates/mainnet-v1/archive/config.toml | 2 +- .../mainnet-v1/sentry/sentry/bor/config.toml | 2 +- .../sentry/validator/bor/config.toml | 2 +- .../mainnet-v1/without-sentry/bor/config.toml | 2 +- .../templates/testnet-v4/archive/config.toml | 2 +- .../testnet-v4/sentry/sentry/bor/config.toml | 2 +- .../sentry/validator/bor/config.toml | 2 +- .../testnet-v4/without-sentry/bor/config.toml | 2 +- rpc/http.go | 2 +- tests/init_test.go | 3 - 47 files changed, 3418 insertions(+), 525 deletions(-) create mode 100644 common/math/uint.go create mode 100644 common/time.go diff --git a/Makefile b/Makefile index 242435df76..a8a4b66e8d 100644 --- a/Makefile +++ b/Makefile @@ -59,7 +59,10 @@ ios: @echo "Import \"$(GOBIN)/Geth.framework\" to use the library." 
test: - $(GOTEST) --timeout 5m -shuffle=on -cover -coverprofile=cover.out $(TESTALL) + $(GOTEST) --timeout 5m -shuffle=on -cover -short -coverprofile=cover.out -covermode=atomic $(TESTALL) + +test-txpool-race: + $(GOTEST) -run=TestPoolMiningDataRaces --timeout 600m -race -v ./core/ test-race: $(GOTEST) --timeout 15m -race -shuffle=on $(TESTALL) @@ -75,7 +78,7 @@ lint: lintci-deps: rm -f ./build/bin/golangci-lint - curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b ./build/bin v1.48.0 + curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b ./build/bin v1.50.1 goimports: goimports -local "$(PACKAGE)" -w . diff --git a/builder/files/config.toml b/builder/files/config.toml index 1b8d915b7b..cb790f371c 100644 --- a/builder/files/config.toml +++ b/builder/files/config.toml @@ -94,7 +94,7 @@ syncmode = "full" # vhosts = ["*"] # corsdomain = ["*"] # [jsonrpc.timeouts] -# read = "30s" +# read = "10s" # write = "30s" # idle = "2m0s" diff --git a/cmd/evm/internal/t8ntool/transaction.go b/cmd/evm/internal/t8ntool/transaction.go index 6f1c964ada..cf2039b66c 100644 --- a/cmd/evm/internal/t8ntool/transaction.go +++ b/cmd/evm/internal/t8ntool/transaction.go @@ -24,6 +24,8 @@ import ( "os" "strings" + "gopkg.in/urfave/cli.v1" + "github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common/hexutil" "github.com/ethereum/go-ethereum/core" @@ -32,7 +34,6 @@ import ( "github.com/ethereum/go-ethereum/params" "github.com/ethereum/go-ethereum/rlp" "github.com/ethereum/go-ethereum/tests" - "gopkg.in/urfave/cli.v1" ) type result struct { diff --git a/common/debug/debug.go b/common/debug/debug.go index 6a677e495d..056ebe2fa7 100644 --- a/common/debug/debug.go +++ b/common/debug/debug.go @@ -1,6 +1,7 @@ package debug import ( + "fmt" "runtime" ) @@ -26,3 +27,26 @@ func Callers(show int) []string { return callers } + +func CodeLine() (string, string, int) { + pc, filename, line, _ := runtime.Caller(1) + return runtime.FuncForPC(pc).Name(), filename, line +} + +func CodeLineStr() string { + pc, filename, line, _ := runtime.Caller(1) + return fmt.Sprintf("%s:%d - %s", filename, line, runtime.FuncForPC(pc).Name()) +} + +func Stack(all bool) []byte { + buf := make([]byte, 4096) + + for { + n := runtime.Stack(buf, all) + if n < len(buf) { + return buf[:n] + } + + buf = make([]byte, 2*len(buf)) + } +} diff --git a/common/math/big.go b/common/math/big.go index 1af5b4d879..4ccf89e38c 100644 --- a/common/math/big.go +++ b/common/math/big.go @@ -20,6 +20,8 @@ package math import ( "fmt" "math/big" + + "github.com/holiman/uint256" ) // Various big integer limit values. @@ -132,6 +134,7 @@ func MustParseBig256(s string) *big.Int { // BigPow returns a ** b as a big integer. func BigPow(a, b int64) *big.Int { r := big.NewInt(a) + return r.Exp(r, big.NewInt(b), nil) } @@ -140,6 +143,15 @@ func BigMax(x, y *big.Int) *big.Int { if x.Cmp(y) < 0 { return y } + + return x +} + +func BigMaxUint(x, y *uint256.Int) *uint256.Int { + if x.Lt(y) { + return y + } + return x } @@ -148,6 +160,15 @@ func BigMin(x, y *big.Int) *big.Int { if x.Cmp(y) > 0 { return y } + + return x +} + +func BigMinUint256(x, y *uint256.Int) *uint256.Int { + if x.Gt(y) { + return y + } + return x } @@ -227,10 +248,10 @@ func U256Bytes(n *big.Int) []byte { // S256 interprets x as a two's complement number. // x must not exceed 256 bits (the result is undefined if it does) and is not modified. 
// -// S256(0) = 0 -// S256(1) = 1 -// S256(2**255) = -2**255 -// S256(2**256-1) = -1 +// S256(0) = 0 +// S256(1) = 1 +// S256(2**255) = -2**255 +// S256(2**256-1) = -1 func S256(x *big.Int) *big.Int { if x.Cmp(tt255) < 0 { return x diff --git a/common/math/uint.go b/common/math/uint.go new file mode 100644 index 0000000000..96b8261884 --- /dev/null +++ b/common/math/uint.go @@ -0,0 +1,23 @@ +package math + +import ( + "math/big" + + "github.com/holiman/uint256" +) + +var ( + U0 = uint256.NewInt(0) + U1 = uint256.NewInt(1) + U100 = uint256.NewInt(100) +) + +func U256LTE(a, b *uint256.Int) bool { + return a.Lt(b) || a.Eq(b) +} + +func FromBig(v *big.Int) *uint256.Int { + u, _ := uint256.FromBig(v) + + return u +} diff --git a/common/time.go b/common/time.go new file mode 100644 index 0000000000..6c7662e04c --- /dev/null +++ b/common/time.go @@ -0,0 +1,9 @@ +package common + +import "time" + +const TimeMilliseconds = "15:04:05.000" + +func NowMilliseconds() string { + return time.Now().Format(TimeMilliseconds) +} diff --git a/common/tracing/context.go b/common/tracing/context.go index 510e45d775..c3c6342502 100644 --- a/common/tracing/context.go +++ b/common/tracing/context.go @@ -4,6 +4,7 @@ import ( "context" "time" + "go.opentelemetry.io/otel" "go.opentelemetry.io/otel/attribute" "go.opentelemetry.io/otel/trace" ) @@ -51,11 +52,16 @@ func Trace(ctx context.Context, spanName string) (context.Context, trace.Span) { return tr.Start(ctx, spanName) } -func Exec(ctx context.Context, spanName string, opts ...Option) { +func Exec(ctx context.Context, instrumentationName, spanName string, opts ...Option) { var span trace.Span tr := FromContext(ctx) + if tr == nil && len(instrumentationName) != 0 { + tr = otel.GetTracerProvider().Tracer(instrumentationName) + ctx = WithTracer(ctx, tr) + } + if tr != nil { ctx, span = tr.Start(ctx, spanName) } @@ -85,7 +91,7 @@ func ElapsedTime(ctx context.Context, span trace.Span, msg string, fn func(conte fn(ctx, span) if span != nil { - span.SetAttributes(attribute.Int(msg, int(time.Since(now).Milliseconds()))) + span.SetAttributes(attribute.Int(msg, int(time.Since(now).Microseconds()))) } } diff --git a/consensus/bor/bor.go b/consensus/bor/bor.go index cffe692843..a640526f22 100644 --- a/consensus/bor/bor.go +++ b/consensus/bor/bor.go @@ -866,7 +866,7 @@ func (c *Bor) FinalizeAndAssemble(ctx context.Context, chain consensus.ChainHead if IsSprintStart(headerNumber, c.config.CalculateSprint(headerNumber)) { cx := statefull.ChainContext{Chain: chain, Bor: c} - tracing.Exec(finalizeCtx, "bor.checkAndCommitSpan", func(ctx context.Context, span trace.Span) { + tracing.Exec(finalizeCtx, "", "bor.checkAndCommitSpan", func(ctx context.Context, span trace.Span) { // check and commit span err = c.checkAndCommitSpan(finalizeCtx, state, header, cx) }) @@ -877,7 +877,7 @@ func (c *Bor) FinalizeAndAssemble(ctx context.Context, chain consensus.ChainHead } if c.HeimdallClient != nil { - tracing.Exec(finalizeCtx, "bor.checkAndCommitSpan", func(ctx context.Context, span trace.Span) { + tracing.Exec(finalizeCtx, "", "bor.checkAndCommitSpan", func(ctx context.Context, span trace.Span) { // commit states stateSyncData, err = c.CommitStates(finalizeCtx, state, header, cx) }) @@ -889,7 +889,7 @@ func (c *Bor) FinalizeAndAssemble(ctx context.Context, chain consensus.ChainHead } } - tracing.Exec(finalizeCtx, "bor.changeContractCodeIfNeeded", func(ctx context.Context, span trace.Span) { + tracing.Exec(finalizeCtx, "", "bor.changeContractCodeIfNeeded", func(ctx context.Context, span 
trace.Span) { err = c.changeContractCodeIfNeeded(headerNumber, state) }) @@ -899,7 +899,7 @@ func (c *Bor) FinalizeAndAssemble(ctx context.Context, chain consensus.ChainHead } // No block rewards in PoA, so the state remains as it is - tracing.Exec(finalizeCtx, "bor.IntermediateRoot", func(ctx context.Context, span trace.Span) { + tracing.Exec(finalizeCtx, "", "bor.IntermediateRoot", func(ctx context.Context, span trace.Span) { header.Root = state.IntermediateRoot(chain.Config().IsEIP158(header.Number)) }) diff --git a/consensus/misc/eip1559.go b/consensus/misc/eip1559.go index 193a5b84e2..00a8ab5b58 100644 --- a/consensus/misc/eip1559.go +++ b/consensus/misc/eip1559.go @@ -20,6 +20,8 @@ import ( "fmt" "math/big" + "github.com/holiman/uint256" + "github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common/math" "github.com/ethereum/go-ethereum/core/types" @@ -92,3 +94,54 @@ func CalcBaseFee(config *params.ChainConfig, parent *types.Header) *big.Int { ) } } + +// CalcBaseFee calculates the basefee of the header. +func CalcBaseFeeUint(config *params.ChainConfig, parent *types.Header) *uint256.Int { + var ( + initialBaseFeeUint = uint256.NewInt(params.InitialBaseFee) + baseFeeChangeDenominatorUint64 = params.BaseFeeChangeDenominator(config.Bor, parent.Number) + baseFeeChangeDenominatorUint = uint256.NewInt(baseFeeChangeDenominatorUint64) + ) + + // If the current block is the first EIP-1559 block, return the InitialBaseFee. + if !config.IsLondon(parent.Number) { + return initialBaseFeeUint.Clone() + } + + var ( + parentGasTarget = parent.GasLimit / params.ElasticityMultiplier + parentGasTargetBig = uint256.NewInt(parentGasTarget) + ) + + // If the parent gasUsed is the same as the target, the baseFee remains unchanged. + if parent.GasUsed == parentGasTarget { + return math.FromBig(parent.BaseFee) + } + + if parent.GasUsed > parentGasTarget { + // If the parent block used more gas than its target, the baseFee should increase. + gasUsedDelta := uint256.NewInt(parent.GasUsed - parentGasTarget) + + parentBaseFee := math.FromBig(parent.BaseFee) + x := gasUsedDelta.Mul(parentBaseFee, gasUsedDelta) + y := x.Div(x, parentGasTargetBig) + baseFeeDelta := math.BigMaxUint( + x.Div(y, baseFeeChangeDenominatorUint), + math.U1, + ) + + return x.Add(parentBaseFee, baseFeeDelta) + } + + // Otherwise if the parent block used less gas than its target, the baseFee should decrease. 
+ gasUsedDelta := uint256.NewInt(parentGasTarget - parent.GasUsed) + parentBaseFee := math.FromBig(parent.BaseFee) + x := gasUsedDelta.Mul(parentBaseFee, gasUsedDelta) + y := x.Div(x, parentGasTargetBig) + baseFeeDelta := x.Div(y, baseFeeChangeDenominatorUint) + + return math.BigMaxUint( + x.Sub(parentBaseFee, baseFeeDelta), + math.U0.Clone(), + ) +} diff --git a/core/blockchain.go b/core/blockchain.go index 9ff04c4cf8..680cb7dce6 100644 --- a/core/blockchain.go +++ b/core/blockchain.go @@ -1375,7 +1375,7 @@ func (bc *BlockChain) writeBlockAndSetHead(ctx context.Context, block *types.Blo var stateSyncLogs []*types.Log - tracing.Exec(writeBlockAndSetHeadCtx, "blockchain.writeBlockWithState", func(_ context.Context, span trace.Span) { + tracing.Exec(writeBlockAndSetHeadCtx, "", "blockchain.writeBlockWithState", func(_ context.Context, span trace.Span) { stateSyncLogs, err = bc.writeBlockWithState(block, receipts, logs, state) tracing.SetAttributes( span, @@ -1392,7 +1392,7 @@ func (bc *BlockChain) writeBlockAndSetHead(ctx context.Context, block *types.Blo var reorg bool - tracing.Exec(writeBlockAndSetHeadCtx, "blockchain.ReorgNeeded", func(_ context.Context, span trace.Span) { + tracing.Exec(writeBlockAndSetHeadCtx, "", "blockchain.ReorgNeeded", func(_ context.Context, span trace.Span) { reorg, err = bc.forker.ReorgNeeded(currentBlock.Header(), block.Header()) tracing.SetAttributes( span, @@ -1406,7 +1406,7 @@ func (bc *BlockChain) writeBlockAndSetHead(ctx context.Context, block *types.Blo return NonStatTy, err } - tracing.Exec(writeBlockAndSetHeadCtx, "blockchain.reorg", func(_ context.Context, span trace.Span) { + tracing.Exec(writeBlockAndSetHeadCtx, "", "blockchain.reorg", func(_ context.Context, span trace.Span) { if reorg { // Reorganise the chain if the parent is not the head block if block.ParentHash() != currentBlock.Hash() { @@ -1435,7 +1435,7 @@ func (bc *BlockChain) writeBlockAndSetHead(ctx context.Context, block *types.Blo // Set new head. if status == CanonStatTy { - tracing.Exec(writeBlockAndSetHeadCtx, "blockchain.writeHeadBlock", func(_ context.Context, _ trace.Span) { + tracing.Exec(writeBlockAndSetHeadCtx, "", "blockchain.writeHeadBlock", func(_ context.Context, _ trace.Span) { bc.writeHeadBlock(block) }) } diff --git a/core/tx_journal.go b/core/tx_journal.go index d282126a08..980bdb9864 100644 --- a/core/tx_journal.go +++ b/core/tx_journal.go @@ -61,11 +61,13 @@ func (journal *txJournal) load(add func([]*types.Transaction) []error) error { if _, err := os.Stat(journal.path); os.IsNotExist(err) { return nil } + // Open the journal for loading any past transactions input, err := os.Open(journal.path) if err != nil { return err } + defer input.Close() // Temporarily discard any journal additions (don't double add on load) @@ -80,29 +82,35 @@ func (journal *txJournal) load(add func([]*types.Transaction) []error) error { // appropriate progress counters. Then use this method to load all the // journaled transactions in small-ish batches. 
loadBatch := func(txs types.Transactions) { + errs := add(txs) + + dropped = len(errs) + for _, err := range add(txs) { - if err != nil { - log.Debug("Failed to add journaled transaction", "err", err) - dropped++ - } + log.Debug("Failed to add journaled transaction", "err", err) } } var ( failure error batch types.Transactions ) + for { // Parse the next transaction and terminate on error tx := new(types.Transaction) + if err = stream.Decode(tx); err != nil { if err != io.EOF { failure = err } + if batch.Len() > 0 { loadBatch(batch) } + break } + // New transaction parsed, queue up for later, import if threshold is reached total++ @@ -111,6 +119,7 @@ func (journal *txJournal) load(add func([]*types.Transaction) []error) error { batch = batch[:0] } } + log.Info("Loaded local transaction journal", "transactions", total, "dropped", dropped) return failure diff --git a/core/tx_list.go b/core/tx_list.go index 14c409b88e..f7c3d08295 100644 --- a/core/tx_list.go +++ b/core/tx_list.go @@ -25,7 +25,10 @@ import ( "sync/atomic" "time" + "github.com/holiman/uint256" + "github.com/ethereum/go-ethereum/common" + cmath "github.com/ethereum/go-ethereum/common/math" "github.com/ethereum/go-ethereum/core/types" ) @@ -54,36 +57,67 @@ func (h *nonceHeap) Pop() interface{} { type txSortedMap struct { items map[uint64]*types.Transaction // Hash map storing the transaction data index *nonceHeap // Heap of nonces of all the stored transactions (non-strict mode) - cache types.Transactions // Cache of the transactions already sorted + m sync.RWMutex + + cache types.Transactions // Cache of the transactions already sorted + isEmpty bool + cacheMu sync.RWMutex } // newTxSortedMap creates a new nonce-sorted transaction map. func newTxSortedMap() *txSortedMap { return &txSortedMap{ - items: make(map[uint64]*types.Transaction), - index: new(nonceHeap), + items: make(map[uint64]*types.Transaction), + index: new(nonceHeap), + isEmpty: true, } } // Get retrieves the current transactions associated with the given nonce. func (m *txSortedMap) Get(nonce uint64) *types.Transaction { + m.m.RLock() + defer m.m.RUnlock() + return m.items[nonce] } +func (m *txSortedMap) Has(nonce uint64) bool { + if m == nil { + return false + } + + m.m.RLock() + defer m.m.RUnlock() + + return m.items[nonce] != nil +} + // Put inserts a new transaction into the map, also updating the map's nonce // index. If a transaction already exists with the same nonce, it's overwritten. func (m *txSortedMap) Put(tx *types.Transaction) { + m.m.Lock() + defer m.m.Unlock() + nonce := tx.Nonce() if m.items[nonce] == nil { heap.Push(m.index, nonce) } - m.items[nonce], m.cache = tx, nil + + m.items[nonce] = tx + + m.cacheMu.Lock() + m.isEmpty = true + m.cache = nil + m.cacheMu.Unlock() } // Forward removes all transactions from the map with a nonce lower than the // provided threshold. Every removed transaction is returned for any post-removal // maintenance. 
func (m *txSortedMap) Forward(threshold uint64) types.Transactions { + m.m.Lock() + defer m.m.Unlock() + var removed types.Transactions // Pop off heap items until the threshold is reached @@ -92,10 +126,15 @@ func (m *txSortedMap) Forward(threshold uint64) types.Transactions { removed = append(removed, m.items[nonce]) delete(m.items, nonce) } + // If we had a cached order, shift the front + m.cacheMu.Lock() if m.cache != nil { + hitCacheCounter.Inc(1) m.cache = m.cache[len(removed):] } + m.cacheMu.Unlock() + return removed } @@ -105,6 +144,9 @@ func (m *txSortedMap) Forward(threshold uint64) types.Transactions { // If you want to do several consecutive filterings, it's therefore better to first // do a .filter(func1) followed by .Filter(func2) or reheap() func (m *txSortedMap) Filter(filter func(*types.Transaction) bool) types.Transactions { + m.m.Lock() + defer m.m.Unlock() + removed := m.filter(filter) // If transactions were removed, the heap and cache are ruined if len(removed) > 0 { @@ -115,11 +157,19 @@ func (m *txSortedMap) Filter(filter func(*types.Transaction) bool) types.Transac func (m *txSortedMap) reheap() { *m.index = make([]uint64, 0, len(m.items)) + for nonce := range m.items { *m.index = append(*m.index, nonce) } + heap.Init(m.index) + + m.cacheMu.Lock() m.cache = nil + m.isEmpty = true + m.cacheMu.Unlock() + + resetCacheGauge.Inc(1) } // filter is identical to Filter, but **does not** regenerate the heap. This method @@ -135,7 +185,12 @@ func (m *txSortedMap) filter(filter func(*types.Transaction) bool) types.Transac } } if len(removed) > 0 { + m.cacheMu.Lock() m.cache = nil + m.isEmpty = true + m.cacheMu.Unlock() + + resetCacheGauge.Inc(1) } return removed } @@ -143,45 +198,66 @@ func (m *txSortedMap) filter(filter func(*types.Transaction) bool) types.Transac // Cap places a hard limit on the number of items, returning all transactions // exceeding that limit. func (m *txSortedMap) Cap(threshold int) types.Transactions { + m.m.Lock() + defer m.m.Unlock() + // Short circuit if the number of items is under the limit if len(m.items) <= threshold { return nil } + // Otherwise gather and drop the highest nonce'd transactions var drops types.Transactions sort.Sort(*m.index) + for size := len(m.items); size > threshold; size-- { drops = append(drops, m.items[(*m.index)[size-1]]) delete(m.items, (*m.index)[size-1]) } + *m.index = (*m.index)[:threshold] heap.Init(m.index) // If we had a cache, shift the back + m.cacheMu.Lock() if m.cache != nil { m.cache = m.cache[:len(m.cache)-len(drops)] } + m.cacheMu.Unlock() + return drops } // Remove deletes a transaction from the maintained map, returning whether the // transaction was found. func (m *txSortedMap) Remove(nonce uint64) bool { + m.m.Lock() + defer m.m.Unlock() + // Short circuit if no transaction is present _, ok := m.items[nonce] if !ok { return false } + // Otherwise delete the transaction and fix the heap index for i := 0; i < m.index.Len(); i++ { if (*m.index)[i] == nonce { heap.Remove(m.index, i) + break } } + delete(m.items, nonce) + + m.cacheMu.Lock() m.cache = nil + m.isEmpty = true + m.cacheMu.Unlock() + + resetCacheGauge.Inc(1) return true } @@ -194,55 +270,129 @@ func (m *txSortedMap) Remove(nonce uint64) bool { // prevent getting into and invalid state. This is not something that should ever // happen but better to be self correcting than failing! 
func (m *txSortedMap) Ready(start uint64) types.Transactions { + m.m.Lock() + defer m.m.Unlock() + // Short circuit if no transactions are available if m.index.Len() == 0 || (*m.index)[0] > start { return nil } + // Otherwise start accumulating incremental transactions var ready types.Transactions + for next := (*m.index)[0]; m.index.Len() > 0 && (*m.index)[0] == next; next++ { ready = append(ready, m.items[next]) delete(m.items, next) heap.Pop(m.index) } + + m.cacheMu.Lock() m.cache = nil + m.isEmpty = true + m.cacheMu.Unlock() + + resetCacheGauge.Inc(1) return ready } // Len returns the length of the transaction map. func (m *txSortedMap) Len() int { + m.m.RLock() + defer m.m.RUnlock() + return len(m.items) } func (m *txSortedMap) flatten() types.Transactions { // If the sorting was not cached yet, create and cache it - if m.cache == nil { - m.cache = make(types.Transactions, 0, len(m.items)) + m.cacheMu.Lock() + defer m.cacheMu.Unlock() + + if m.isEmpty { + m.isEmpty = false // to simulate sync.Once + + m.cacheMu.Unlock() + + m.m.RLock() + + cache := make(types.Transactions, 0, len(m.items)) + for _, tx := range m.items { - m.cache = append(m.cache, tx) + cache = append(cache, tx) } - sort.Sort(types.TxByNonce(m.cache)) + + m.m.RUnlock() + + // exclude sorting from locks + sort.Sort(types.TxByNonce(cache)) + + m.cacheMu.Lock() + m.cache = cache + + reinitCacheGauge.Inc(1) + missCacheCounter.Inc(1) + } else { + hitCacheCounter.Inc(1) } + return m.cache } +func (m *txSortedMap) lastElement() *types.Transaction { + // If the sorting was not cached yet, create and cache it + m.cacheMu.Lock() + defer m.cacheMu.Unlock() + + cache := m.cache + + if m.isEmpty { + m.isEmpty = false // to simulate sync.Once + + m.cacheMu.Unlock() + + m.m.RLock() + cache = make(types.Transactions, 0, len(m.items)) + + for _, tx := range m.items { + cache = append(cache, tx) + } + + m.m.RUnlock() + + // exclude sorting from locks + sort.Sort(types.TxByNonce(cache)) + + m.cacheMu.Lock() + m.cache = cache + + reinitCacheGauge.Inc(1) + missCacheCounter.Inc(1) + } else { + hitCacheCounter.Inc(1) + } + + ln := len(cache) + if ln == 0 { + return nil + } + + return cache[len(cache)-1] +} + // Flatten creates a nonce-sorted slice of transactions based on the loosely // sorted internal representation. The result of the sorting is cached in case // it's requested again before any modifications are made to the contents. 
func (m *txSortedMap) Flatten() types.Transactions { // Copy the cache to prevent accidental modifications - cache := m.flatten() - txs := make(types.Transactions, len(cache)) - copy(txs, cache) - return txs + return m.flatten() } // LastElement returns the last element of a flattened list, thus, the // transaction with the highest nonce func (m *txSortedMap) LastElement() *types.Transaction { - cache := m.flatten() - return cache[len(cache)-1] + return m.lastElement() } // txList is a "list" of transactions belonging to an account, sorted by account @@ -253,9 +403,9 @@ type txList struct { strict bool // Whether nonces are strictly continuous or not txs *txSortedMap // Heap indexed sorted hash map of the transactions - costcap *big.Int // Price of the highest costing transaction (reset only if exceeds balance) - gascap uint64 // Gas limit of the highest spending transaction (reset only if exceeds block limit) - totalcost *big.Int // Total cost of all transactions in the list + costcap *uint256.Int // Price of the highest costing transaction (reset only if exceeds balance) + gascap uint64 // Gas limit of the highest spending transaction (reset only if exceeds block limit) + totalcost *big.Int // Total cost of all transactions in the list } // newTxList create a new transaction list for maintaining nonce-indexable fast, @@ -264,7 +414,6 @@ func newTxList(strict bool) *txList { return &txList{ strict: strict, txs: newTxSortedMap(), - costcap: new(big.Int), totalcost: new(big.Int), } } @@ -287,20 +436,21 @@ func (l *txList) Add(tx *types.Transaction, priceBump uint64) (bool, *types.Tran if old.GasFeeCapCmp(tx) >= 0 || old.GasTipCapCmp(tx) >= 0 { return false, nil } + // thresholdFeeCap = oldFC * (100 + priceBump) / 100 - a := big.NewInt(100 + int64(priceBump)) - aFeeCap := new(big.Int).Mul(a, old.GasFeeCap()) - aTip := a.Mul(a, old.GasTipCap()) + a := uint256.NewInt(100 + priceBump) + aFeeCap := uint256.NewInt(0).Mul(a, old.GasFeeCapUint()) + aTip := a.Mul(a, old.GasTipCapUint()) // thresholdTip = oldTip * (100 + priceBump) / 100 - b := big.NewInt(100) + b := cmath.U100 thresholdFeeCap := aFeeCap.Div(aFeeCap, b) thresholdTip := aTip.Div(aTip, b) // We have to ensure that both the new fee cap and tip are higher than the // old ones as well as checking the percentage threshold to ensure that // this is accurate for low (Wei-level) gas price replacements. - if tx.GasFeeCapIntCmp(thresholdFeeCap) < 0 || tx.GasTipCapIntCmp(thresholdTip) < 0 { + if tx.GasFeeCapUIntLt(thresholdFeeCap) || tx.GasTipCapUIntLt(thresholdTip) { return false, nil } // Old is being replaced, subtract old cost @@ -312,12 +462,15 @@ func (l *txList) Add(tx *types.Transaction, priceBump uint64) (bool, *types.Tran // Otherwise overwrite the old transaction with the current one l.txs.Put(tx) - if cost := tx.Cost(); l.costcap.Cmp(cost) < 0 { + + if cost := tx.CostUint(); l.costcap == nil || l.costcap.Lt(cost) { l.costcap = cost } + if gas := tx.Gas(); l.gascap < gas { l.gascap = gas } + return true, old } @@ -340,17 +493,20 @@ func (l *txList) Forward(threshold uint64) types.Transactions { // a point in calculating all the costs or if the balance covers all. If the threshold // is lower than the costgas cap, the caps will be reset to a new high after removing // the newly invalidated transactions. 
-func (l *txList) Filter(costLimit *big.Int, gasLimit uint64) (types.Transactions, types.Transactions) { +func (l *txList) Filter(costLimit *uint256.Int, gasLimit uint64) (types.Transactions, types.Transactions) { // If all transactions are below the threshold, short circuit - if l.costcap.Cmp(costLimit) <= 0 && l.gascap <= gasLimit { + if cmath.U256LTE(l.costcap, costLimit) && l.gascap <= gasLimit { return nil, nil } - l.costcap = new(big.Int).Set(costLimit) // Lower the caps to the thresholds + + l.costcap = costLimit.Clone() // Lower the caps to the thresholds l.gascap = gasLimit // Filter out all the transactions above the account's funds + cost := uint256.NewInt(0) removed := l.txs.Filter(func(tx *types.Transaction) bool { - return tx.Gas() > gasLimit || tx.Cost().Cmp(costLimit) > 0 + cost.SetFromBig(tx.Cost()) + return tx.Gas() > gasLimit || cost.Gt(costLimit) }) if len(removed) == 0 { @@ -441,6 +597,10 @@ func (l *txList) LastElement() *types.Transaction { return l.txs.LastElement() } +func (l *txList) Has(nonce uint64) bool { + return l != nil && l.txs.items[nonce] != nil +} + // subTotalCost subtracts the cost of the given transactions from the // total cost of all transactions. func (l *txList) subTotalCost(txs []*types.Transaction) { @@ -454,8 +614,9 @@ func (l *txList) subTotalCost(txs []*types.Transaction) { // then the heap is sorted based on the effective tip based on the given base fee. // If baseFee is nil then the sorting is based on gasFeeCap. type priceHeap struct { - baseFee *big.Int // heap should always be re-sorted after baseFee is changed - list []*types.Transaction + baseFee *uint256.Int // heap should always be re-sorted after baseFee is changed + list []*types.Transaction + baseFeeMu sync.RWMutex } func (h *priceHeap) Len() int { return len(h.list) } @@ -473,16 +634,24 @@ func (h *priceHeap) Less(i, j int) bool { } func (h *priceHeap) cmp(a, b *types.Transaction) int { + h.baseFeeMu.RLock() + if h.baseFee != nil { // Compare effective tips if baseFee is specified - if c := a.EffectiveGasTipCmp(b, h.baseFee); c != 0 { + if c := a.EffectiveGasTipTxUintCmp(b, h.baseFee); c != 0 { + h.baseFeeMu.RUnlock() + return c } } + + h.baseFeeMu.RUnlock() + // Compare fee caps if baseFee is not specified or effective tips are equal if c := a.GasFeeCapCmp(b); c != 0 { return c } + // Compare tips if effective tips and fee caps are equal return a.GasTipCapCmp(b) } @@ -662,7 +831,10 @@ func (l *txPricedList) Reheap() { // SetBaseFee updates the base fee and triggers a re-heap. Note that Removed is not // necessary to call right before SetBaseFee when processing a new block. 
-func (l *txPricedList) SetBaseFee(baseFee *big.Int) { +func (l *txPricedList) SetBaseFee(baseFee *uint256.Int) { + l.urgent.baseFeeMu.Lock() l.urgent.baseFee = baseFee + l.urgent.baseFeeMu.Unlock() + l.Reheap() } diff --git a/core/tx_list_test.go b/core/tx_list_test.go index ef49cae1dd..80b8c1ef32 100644 --- a/core/tx_list_test.go +++ b/core/tx_list_test.go @@ -17,10 +17,11 @@ package core import ( - "math/big" "math/rand" "testing" + "github.com/holiman/uint256" + "github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/crypto" ) @@ -59,11 +60,15 @@ func BenchmarkTxListAdd(b *testing.B) { for i := 0; i < len(txs); i++ { txs[i] = transaction(uint64(i), 0, key) } + // Insert the transactions in a random order - priceLimit := big.NewInt(int64(DefaultTxPoolConfig.PriceLimit)) + priceLimit := uint256.NewInt(DefaultTxPoolConfig.PriceLimit) b.ResetTimer() + b.ReportAllocs() + for i := 0; i < b.N; i++ { list := newTxList(true) + for _, v := range rand.Perm(len(txs)) { list.Add(txs[v], DefaultTxPoolConfig.PriceBump) list.Filter(priceLimit, DefaultTxPoolConfig.PriceBump) diff --git a/core/tx_pool.go b/core/tx_pool.go index 424f1d67dd..ce73aa26ac 100644 --- a/core/tx_pool.go +++ b/core/tx_pool.go @@ -18,6 +18,7 @@ package core import ( "container/heap" + "context" "errors" "fmt" "math" @@ -27,8 +28,12 @@ import ( "sync/atomic" "time" + "github.com/holiman/uint256" + "go.opentelemetry.io/otel/attribute" + "go.opentelemetry.io/otel/trace" + "github.com/ethereum/go-ethereum/common" - "github.com/ethereum/go-ethereum/common/prque" + "github.com/ethereum/go-ethereum/common/tracing" "github.com/ethereum/go-ethereum/consensus/misc" "github.com/ethereum/go-ethereum/core/state" "github.com/ethereum/go-ethereum/core/types" @@ -136,6 +141,11 @@ var ( localGauge = metrics.NewRegisteredGauge("txpool/local", nil) slotsGauge = metrics.NewRegisteredGauge("txpool/slots", nil) + resetCacheGauge = metrics.NewRegisteredGauge("txpool/resetcache", nil) + reinitCacheGauge = metrics.NewRegisteredGauge("txpool/reinittcache", nil) + hitCacheCounter = metrics.NewRegisteredCounter("txpool/cachehit", nil) + missCacheCounter = metrics.NewRegisteredCounter("txpool/cachemiss", nil) + reheapTimer = metrics.NewRegisteredTimer("txpool/reheap", nil) ) @@ -241,14 +251,17 @@ func (config *TxPoolConfig) sanitize() TxPoolConfig { // current state) and future transactions. Transactions move between those // two states over time as they are received and processed. type TxPool struct { - config TxPoolConfig - chainconfig *params.ChainConfig - chain blockChain - gasPrice *big.Int - txFeed event.Feed - scope event.SubscriptionScope - signer types.Signer - mu sync.RWMutex + config TxPoolConfig + chainconfig *params.ChainConfig + chain blockChain + gasPrice *big.Int + gasPriceUint *uint256.Int + gasPriceMu sync.RWMutex + + txFeed event.Feed + scope event.SubscriptionScope + signer types.Signer + mu sync.RWMutex istanbul bool // Fork indicator whether we are in the istanbul stage. eip2718 bool // Fork indicator whether we are using EIP-2718 type transactions. 
@@ -261,11 +274,13 @@ type TxPool struct { locals *accountSet // Set of local transaction to exempt from eviction rules journal *txJournal // Journal of local transaction to back up to disk - pending map[common.Address]*txList // All currently processable transactions - queue map[common.Address]*txList // Queued but non-processable transactions - beats map[common.Address]time.Time // Last heartbeat from each known account - all *txLookup // All transactions to allow lookups - priced *txPricedList // All transactions sorted by price + pending map[common.Address]*txList // All currently processable transactions + pendingCount int + pendingMu sync.RWMutex + queue map[common.Address]*txList // Queued but non-processable transactions + beats map[common.Address]time.Time // Last heartbeat from each known account + all *txLookup // All transactions to allow lookups + priced *txPricedList // All transactions sorted by price chainHeadCh chan ChainHeadEvent chainHeadSub event.Subscription @@ -310,6 +325,7 @@ func NewTxPool(config TxPoolConfig, chainconfig *params.ChainConfig, chain block reorgShutdownCh: make(chan struct{}), initDoneCh: make(chan struct{}), gasPrice: new(big.Int).SetUint64(config.PriceLimit), + gasPriceUint: uint256.NewInt(config.PriceLimit), } pool.locals = newAccountSet(pool.signer) @@ -386,9 +402,7 @@ func (pool *TxPool) loop() { // Handle stats reporting ticks case <-report.C: - pool.mu.RLock() pending, queued := pool.stats() - pool.mu.RUnlock() stales := int(atomic.LoadInt64(&pool.priced.stales)) if pending != prevPending || queued != prevQueued || stales != prevStales { @@ -398,22 +412,45 @@ func (pool *TxPool) loop() { // Handle inactive account transaction eviction case <-evict.C: - pool.mu.Lock() + now := time.Now() + + var ( + list types.Transactions + tx *types.Transaction + toRemove []common.Hash + ) + + pool.mu.RLock() for addr := range pool.queue { // Skip local transactions from the eviction mechanism if pool.locals.contains(addr) { continue } + // Any non-locals old enough should be removed - if time.Since(pool.beats[addr]) > pool.config.Lifetime { - list := pool.queue[addr].Flatten() - for _, tx := range list { - pool.removeTx(tx.Hash(), true) + if now.Sub(pool.beats[addr]) > pool.config.Lifetime { + list = pool.queue[addr].Flatten() + for _, tx = range list { + toRemove = append(toRemove, tx.Hash()) } + queuedEvictionMeter.Mark(int64(len(list))) } } - pool.mu.Unlock() + + pool.mu.RUnlock() + + if len(toRemove) > 0 { + pool.mu.Lock() + + var hash common.Hash + + for _, hash = range toRemove { + pool.removeTx(hash, true) + } + + pool.mu.Unlock() + } // Handle local transaction journal rotation case <-journal.C: @@ -451,27 +488,45 @@ func (pool *TxPool) SubscribeNewTxsEvent(ch chan<- NewTxsEvent) event.Subscripti // GasPrice returns the current gas price enforced by the transaction pool. func (pool *TxPool) GasPrice() *big.Int { - pool.mu.RLock() - defer pool.mu.RUnlock() + pool.gasPriceMu.RLock() + defer pool.gasPriceMu.RUnlock() return new(big.Int).Set(pool.gasPrice) } +func (pool *TxPool) GasPriceUint256() *uint256.Int { + pool.gasPriceMu.RLock() + defer pool.gasPriceMu.RUnlock() + + return pool.gasPriceUint.Clone() +} + // SetGasPrice updates the minimum price required by the transaction pool for a // new transaction, and drops all transactions below this threshold. 
func (pool *TxPool) SetGasPrice(price *big.Int) { - pool.mu.Lock() - defer pool.mu.Unlock() + pool.gasPriceMu.Lock() + defer pool.gasPriceMu.Unlock() old := pool.gasPrice pool.gasPrice = price + + if pool.gasPriceUint == nil { + pool.gasPriceUint, _ = uint256.FromBig(price) + } else { + pool.gasPriceUint.SetFromBig(price) + } + // if the min miner fee increased, remove transactions below the new threshold if price.Cmp(old) > 0 { + pool.mu.Lock() + defer pool.mu.Unlock() + // pool.priced is sorted by GasFeeCap, so we have to iterate through pool.all instead drop := pool.all.RemotesBelowTip(price) for _, tx := range drop { pool.removeTx(tx.Hash(), false) } + pool.priced.Removed(len(drop)) } @@ -490,9 +545,6 @@ func (pool *TxPool) Nonce(addr common.Address) uint64 { // Stats retrieves the current pool stats, namely the number of pending and the // number of queued (non-executable) transactions. func (pool *TxPool) Stats() (int, int) { - pool.mu.RLock() - defer pool.mu.RUnlock() - return pool.stats() } @@ -500,47 +552,69 @@ func (pool *TxPool) Stats() (int, int) { // number of queued (non-executable) transactions. func (pool *TxPool) stats() (int, int) { pending := 0 + + pool.pendingMu.RLock() for _, list := range pool.pending { pending += list.Len() } + pool.pendingMu.RUnlock() + + pool.mu.RLock() + queued := 0 for _, list := range pool.queue { queued += list.Len() } + + pool.mu.RUnlock() + return pending, queued } // Content retrieves the data content of the transaction pool, returning all the // pending as well as queued transactions, grouped by account and sorted by nonce. func (pool *TxPool) Content() (map[common.Address]types.Transactions, map[common.Address]types.Transactions) { - pool.mu.Lock() - defer pool.mu.Unlock() - pending := make(map[common.Address]types.Transactions) + + pool.pendingMu.RLock() for addr, list := range pool.pending { pending[addr] = list.Flatten() } + pool.pendingMu.RUnlock() + queued := make(map[common.Address]types.Transactions) + + pool.mu.RLock() + for addr, list := range pool.queue { queued[addr] = list.Flatten() } + + pool.mu.RUnlock() + return pending, queued } // ContentFrom retrieves the data content of the transaction pool, returning the // pending as well as queued transactions of this address, grouped by nonce. func (pool *TxPool) ContentFrom(addr common.Address) (types.Transactions, types.Transactions) { - pool.mu.RLock() - defer pool.mu.RUnlock() - var pending types.Transactions + + pool.pendingMu.RLock() if list, ok := pool.pending[addr]; ok { pending = list.Flatten() } + pool.pendingMu.RUnlock() + + pool.mu.RLock() + var queued types.Transactions if list, ok := pool.queue[addr]; ok { queued = list.Flatten() } + + pool.mu.RUnlock() + return pending, queued } @@ -551,35 +625,74 @@ func (pool *TxPool) ContentFrom(addr common.Address) (types.Transactions, types. // The enforceTips parameter can be used to do an extra filtering on the pending // transactions and only return those whose **effective** tip is large enough in // the next pending execution environment. 
-func (pool *TxPool) Pending(enforceTips bool) map[common.Address]types.Transactions { - pool.mu.Lock() - defer pool.mu.Unlock() +// +//nolint:gocognit +func (pool *TxPool) Pending(ctx context.Context, enforceTips bool) map[common.Address]types.Transactions { + pending := make(map[common.Address]types.Transactions, 10) - pending := make(map[common.Address]types.Transactions) - for addr, list := range pool.pending { - txs := list.Flatten() + tracing.Exec(ctx, "TxpoolPending", "txpool.Pending()", func(ctx context.Context, span trace.Span) { + tracing.ElapsedTime(ctx, span, "txpool.Pending.RLock()", func(ctx context.Context, s trace.Span) { + pool.pendingMu.RLock() + }) - // If the miner requests tip enforcement, cap the lists now - if enforceTips && !pool.locals.contains(addr) { - for i, tx := range txs { - if tx.EffectiveGasTipIntCmp(pool.gasPrice, pool.priced.urgent.baseFee) < 0 { - txs = txs[:i] - break + defer pool.pendingMu.RUnlock() + + pendingAccounts := len(pool.pending) + + var pendingTxs int + + tracing.ElapsedTime(ctx, span, "Loop", func(ctx context.Context, s trace.Span) { + gasPriceUint := uint256.NewInt(0) + baseFee := uint256.NewInt(0) + + for addr, list := range pool.pending { + txs := list.Flatten() + + // If the miner requests tip enforcement, cap the lists now + if enforceTips && !pool.locals.contains(addr) { + for i, tx := range txs { + pool.pendingMu.RUnlock() + + pool.gasPriceMu.RLock() + if pool.gasPriceUint != nil { + gasPriceUint.Set(pool.gasPriceUint) + } + + pool.priced.urgent.baseFeeMu.Lock() + if pool.priced.urgent.baseFee != nil { + baseFee.Set(pool.priced.urgent.baseFee) + } + pool.priced.urgent.baseFeeMu.Unlock() + + pool.gasPriceMu.RUnlock() + + pool.pendingMu.RLock() + + if tx.EffectiveGasTipUintLt(gasPriceUint, baseFee) { + txs = txs[:i] + break + } + } + } + + if len(txs) > 0 { + pending[addr] = txs + pendingTxs += len(txs) } } - } - if len(txs) > 0 { - pending[addr] = txs - } - } + + tracing.SetAttributes(span, + attribute.Int("pending-transactions", pendingTxs), + attribute.Int("pending-accounts", pendingAccounts), + ) + }) + }) + return pending } // Locals retrieves the accounts currently considered local by the pool. func (pool *TxPool) Locals() []common.Address { - pool.mu.Lock() - defer pool.mu.Unlock() - return pool.locals.flatten() } @@ -588,14 +701,22 @@ func (pool *TxPool) Locals() []common.Address { // freely modified by calling code. func (pool *TxPool) local() map[common.Address]types.Transactions { txs := make(map[common.Address]types.Transactions) + + pool.locals.m.RLock() + defer pool.locals.m.RUnlock() + for addr := range pool.locals.accounts { + pool.pendingMu.RLock() if pending := pool.pending[addr]; pending != nil { txs[addr] = append(txs[addr], pending.Flatten()...) } + pool.pendingMu.RUnlock() + if queued := pool.queue[addr]; queued != nil { txs[addr] = append(txs[addr], queued.Flatten()...) } } + return txs } @@ -606,10 +727,12 @@ func (pool *TxPool) validateTx(tx *types.Transaction, local bool) error { if !pool.eip2718 && tx.Type() != types.LegacyTxType { return ErrTxTypeNotSupported } + // Reject dynamic fee transactions until EIP-1559 activates. 
if !pool.eip1559 && tx.Type() == types.DynamicFeeTxType { return ErrTxTypeNotSupported } + // Reject transactions over defined size to prevent DOS attacks if uint64(tx.Size()) > txMaxSize { return ErrOversizedData @@ -624,34 +747,52 @@ func (pool *TxPool) validateTx(tx *types.Transaction, local bool) error { if tx.Value().Sign() < 0 { return ErrNegativeValue } + // Ensure the transaction doesn't exceed the current block limit gas. if pool.currentMaxGas < tx.Gas() { return ErrGasLimit } + // Sanity check for extremely large numbers - if tx.GasFeeCap().BitLen() > 256 { + gasFeeCap := tx.GasFeeCapRef() + if gasFeeCap.BitLen() > 256 { return ErrFeeCapVeryHigh } - if tx.GasTipCap().BitLen() > 256 { + + // do NOT use uint256 here. results vs *big.Int are different + gasTipCap := tx.GasTipCapRef() + if gasTipCap.BitLen() > 256 { return ErrTipVeryHigh } + // Ensure gasFeeCap is greater than or equal to gasTipCap. - if tx.GasFeeCapIntCmp(tx.GasTipCap()) < 0 { + gasTipCapU, _ := uint256.FromBig(gasTipCap) + if tx.GasFeeCapUIntLt(gasTipCapU) { return ErrTipAboveFeeCap } + // Make sure the transaction is signed properly. from, err := types.Sender(pool.signer, tx) if err != nil { return ErrInvalidSender } + // Drop non-local transactions under our own minimal accepted gas price or tip - if !local && tx.GasTipCapIntCmp(pool.gasPrice) < 0 { + pool.gasPriceMu.RLock() + + if !local && tx.GasTipCapUIntLt(pool.gasPriceUint) { + pool.gasPriceMu.RUnlock() + return ErrUnderpriced } + + pool.gasPriceMu.RUnlock() + // Ensure the transaction adheres to nonce ordering if pool.currentState.GetNonce(from) > tx.Nonce() { return ErrNonceTooLow } + // Transactor should have enough funds to cover the costs // cost == V + GP * GL balance := pool.currentState.GetBalance(from) @@ -677,9 +818,11 @@ func (pool *TxPool) validateTx(tx *types.Transaction, local bool) error { if err != nil { return err } + if tx.Gas() < intrGas { return ErrIntrinsicGas } + return nil } @@ -716,7 +859,7 @@ func (pool *TxPool) add(tx *types.Transaction, local bool) (replaced bool, err e if uint64(pool.all.Slots()+numSlots(tx)) > pool.config.GlobalSlots+pool.config.GlobalQueue { // If the new transaction is underpriced, don't accept it if !isLocal && pool.priced.Underpriced(tx) { - log.Trace("Discarding underpriced transaction", "hash", hash, "gasTipCap", tx.GasTipCap(), "gasFeeCap", tx.GasFeeCap()) + log.Trace("Discarding underpriced transaction", "hash", hash, "gasTipCap", tx.GasTipCapUint(), "gasFeeCap", tx.GasFeeCapUint()) underpricedTxMeter.Mark(1) return false, ErrUnderpriced } @@ -765,7 +908,7 @@ func (pool *TxPool) add(tx *types.Transaction, local bool) (replaced bool, err e } // Kick out the underpriced remote transactions. 
for _, tx := range drop { - log.Trace("Discarding freshly underpriced transaction", "hash", tx.Hash(), "gasTipCap", tx.GasTipCap(), "gasFeeCap", tx.GasFeeCap()) + log.Trace("Discarding freshly underpriced transaction", "hash", tx.Hash(), "gasTipCap", tx.GasTipCapUint(), "gasFeeCap", tx.GasFeeCapUint()) underpricedTxMeter.Mark(1) dropped := pool.removeTx(tx.Hash(), false) @@ -774,19 +917,28 @@ func (pool *TxPool) add(tx *types.Transaction, local bool) (replaced bool, err e } // Try to replace an existing transaction in the pending pool - if list := pool.pending[from]; list != nil && list.Overlaps(tx) { + pool.pendingMu.RLock() + + list := pool.pending[from] + + if list != nil && list.Overlaps(tx) { // Nonce already pending, check if required price bump is met inserted, old := list.Add(tx, pool.config.PriceBump) + pool.pendingCount++ + pool.pendingMu.RUnlock() + if !inserted { pendingDiscardMeter.Mark(1) return false, ErrReplaceUnderpriced } + // New transaction is better, replace old one if old != nil { pool.all.Remove(old.Hash()) pool.priced.Removed(1) pendingReplaceMeter.Mark(1) } + pool.all.Add(tx, isLocal) pool.priced.Put(tx, isLocal) pool.journalTx(from, tx) @@ -795,8 +947,13 @@ func (pool *TxPool) add(tx *types.Transaction, local bool) (replaced bool, err e // Successful promotion, bump the heartbeat pool.beats[from] = time.Now() + return old != nil, nil } + + // it is not an unlocking of unlocked because of the return in previous 'if' + pool.pendingMu.RUnlock() + // New transaction isn't replacing a pending one, push into queue replaced, err = pool.enqueueTx(hash, tx, isLocal, true) if err != nil { @@ -900,19 +1057,25 @@ func (pool *TxPool) promoteTx(addr common.Address, hash common.Hash, tx *types.T }() // Try to insert the transaction into the pending queue + pool.pendingMu.Lock() if pool.pending[addr] == nil { pool.pending[addr] = newTxList(true) } list := pool.pending[addr] inserted, old := list.Add(tx, pool.config.PriceBump) + pool.pendingCount++ + pool.pendingMu.Unlock() + if !inserted { // An older transaction was better, discard this pool.all.Remove(hash) pool.priced.Removed(1) pendingDiscardMeter.Mark(1) + return false } + // Otherwise discard any previous transaction and mark this if old != nil { pool.all.Remove(old.Hash()) @@ -922,11 +1085,13 @@ func (pool *TxPool) promoteTx(addr common.Address, hash common.Hash, tx *types.T // Nothing was replaced, bump the pending counter pendingGauge.Inc(1) } + // Set the potentially new pending nonce and notify any subsystems of the new tx pool.pendingNonces.set(addr, tx.Nonce()+1) // Successful promotion, bump the heartbeat pool.beats[addr] = time.Now() + return true } @@ -942,8 +1107,7 @@ func (pool *TxPool) AddLocals(txs []*types.Transaction) []error { // AddLocal enqueues a single local transaction into the pool if it is valid. This is // a convenience wrapper aroundd AddLocals. func (pool *TxPool) AddLocal(tx *types.Transaction) error { - errs := pool.AddLocals([]*types.Transaction{tx}) - return errs[0] + return pool.addTx(tx, !pool.config.NoLocals, true) } // AddRemotes enqueues a batch of transactions into the pool if they are valid. If the @@ -960,108 +1124,216 @@ func (pool *TxPool) AddRemotesSync(txs []*types.Transaction) []error { return pool.addTxs(txs, false, true) } +func (pool *TxPool) AddRemoteSync(txs *types.Transaction) error { + return pool.addTx(txs, false, true) +} + // This is like AddRemotes with a single transaction, but waits for pool reorganization. Tests use this method. 
func (pool *TxPool) addRemoteSync(tx *types.Transaction) error { - errs := pool.AddRemotesSync([]*types.Transaction{tx}) - return errs[0] + return pool.AddRemoteSync(tx) } // AddRemote enqueues a single transaction into the pool if it is valid. This is a convenience // wrapper around AddRemotes. -// -// Deprecated: use AddRemotes func (pool *TxPool) AddRemote(tx *types.Transaction) error { - errs := pool.AddRemotes([]*types.Transaction{tx}) - return errs[0] + return pool.addTx(tx, false, false) } // addTxs attempts to queue a batch of transactions if they are valid. func (pool *TxPool) addTxs(txs []*types.Transaction, local, sync bool) []error { // Filter out known ones without obtaining the pool lock or recovering signatures var ( - errs = make([]error, len(txs)) + errs []error news = make([]*types.Transaction, 0, len(txs)) + err error + + hash common.Hash ) - for i, tx := range txs { + + for _, tx := range txs { // If the transaction is known, pre-set the error slot - if pool.all.Get(tx.Hash()) != nil { - errs[i] = ErrAlreadyKnown + hash = tx.Hash() + + if pool.all.Get(hash) != nil { + errs = append(errs, ErrAlreadyKnown) knownTxMeter.Mark(1) + continue } + // Exclude transactions with invalid signatures as soon as // possible and cache senders in transactions before // obtaining lock - _, err := types.Sender(pool.signer, tx) + _, err = types.Sender(pool.signer, tx) if err != nil { - errs[i] = ErrInvalidSender + errs = append(errs, ErrInvalidSender) invalidTxMeter.Mark(1) + continue } + // Accumulate all unknown transactions for deeper processing news = append(news, tx) } + if len(news) == 0 { return errs } // Process all the new transaction and merge any errors into the original slice pool.mu.Lock() - newErrs, dirtyAddrs := pool.addTxsLocked(news, local) + errs, dirtyAddrs := pool.addTxsLocked(news, local) pool.mu.Unlock() - var nilSlot = 0 - for _, err := range newErrs { - for errs[nilSlot] != nil { - nilSlot++ + // Reorg the pool internals if needed and return + done := pool.requestPromoteExecutables(dirtyAddrs) + if sync { + <-done + } + + return errs +} + +// addTxs attempts to queue a batch of transactions if they are valid. +func (pool *TxPool) addTx(tx *types.Transaction, local, sync bool) error { + // Filter out known ones without obtaining the pool lock or recovering signatures + var ( + err error + hash common.Hash + ) + + func() { + // If the transaction is known, pre-set the error slot + hash = tx.Hash() + + if pool.all.Get(hash) != nil { + err = ErrAlreadyKnown + + knownTxMeter.Mark(1) + + return + } + + // Exclude transactions with invalid signatures as soon as + // possible and cache senders in transactions before + // obtaining lock + _, err = types.Sender(pool.signer, tx) + if err != nil { + invalidTxMeter.Mark(1) + + return } - errs[nilSlot] = err - nilSlot++ + }() + + if err != nil { + return err } + + var dirtyAddrs *accountSet + + // Process all the new transaction and merge any errors into the original slice + pool.mu.Lock() + err, dirtyAddrs = pool.addTxLocked(tx, local) + pool.mu.Unlock() + // Reorg the pool internals if needed and return done := pool.requestPromoteExecutables(dirtyAddrs) if sync { <-done } - return errs + + return err } // addTxsLocked attempts to queue a batch of transactions if they are valid. // The transaction pool lock must be held. 
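Note the changed contract of addTxs above: the returned error slice is no longer positional (one slot per submitted transaction) but only accumulates the failures that actually occurred, and the new single-transaction addTx performs the same known/invalid-signature pre-checks before taking the pool lock. The deduplication test later in this file is adjusted to expect an empty slice on full success. A small self-contained illustration of the non-positional error collection, with placeholder names:

package main

import (
	"errors"
	"fmt"
)

var errAlreadyKnown = errors.New("already known")

// addAll mimics the new behaviour: the result holds only the errors that
// occurred, in submission order, rather than one entry per input.
func addAll(items []int, add func(int) error) []error {
	var errs []error

	for _, it := range items {
		if err := add(it); err != nil {
			errs = append(errs, err)
		}
	}

	return errs
}

func main() {
	seen := map[int]bool{}
	add := func(i int) error {
		if seen[i] {
			return errAlreadyKnown
		}
		seen[i] = true
		return nil
	}

	fmt.Println(len(addAll([]int{1, 2, 2, 3}, add))) // 1: only the duplicate produces an error
}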
func (pool *TxPool) addTxsLocked(txs []*types.Transaction, local bool) ([]error, *accountSet) { dirty := newAccountSet(pool.signer) - errs := make([]error, len(txs)) - for i, tx := range txs { - replaced, err := pool.add(tx, local) - errs[i] = err + + var ( + replaced bool + errs []error + ) + + for _, tx := range txs { + var err error + + replaced, err = pool.add(tx, local) if err == nil && !replaced { dirty.addTx(tx) } + + if err != nil { + errs = append(errs, err) + } } + validTxMeter.Mark(int64(len(dirty.accounts))) + return errs, dirty } +func (pool *TxPool) addTxLocked(tx *types.Transaction, local bool) (error, *accountSet) { + dirty := newAccountSet(pool.signer) + + var ( + replaced bool + err error + ) + + replaced, err = pool.add(tx, local) + if err == nil && !replaced { + dirty.addTx(tx) + } + + validTxMeter.Mark(int64(len(dirty.accounts))) + + return err, dirty +} + // Status returns the status (unknown/pending/queued) of a batch of transactions // identified by their hashes. func (pool *TxPool) Status(hashes []common.Hash) []TxStatus { status := make([]TxStatus, len(hashes)) + + var ( + txList *txList + isPending bool + ) + for i, hash := range hashes { tx := pool.Get(hash) if tx == nil { continue } + from, _ := types.Sender(pool.signer, tx) // already validated - pool.mu.RLock() - if txList := pool.pending[from]; txList != nil && txList.txs.items[tx.Nonce()] != nil { + + pool.pendingMu.RLock() + + if txList = pool.pending[from]; txList != nil && txList.txs.Has(tx.Nonce()) { status[i] = TxStatusPending - } else if txList := pool.queue[from]; txList != nil && txList.txs.items[tx.Nonce()] != nil { - status[i] = TxStatusQueued + isPending = true + } else { + isPending = false + } + + pool.pendingMu.RUnlock() + + if !isPending { + pool.mu.RLock() + + if txList := pool.queue[from]; txList != nil && txList.txs.Has(tx.Nonce()) { + status[i] = TxStatusQueued + } + + pool.mu.RUnlock() } + // implicit else: the tx may have been included into a block between // checking pool.Get and obtaining the lock. In that case, TxStatusUnknown is correct - pool.mu.RUnlock() } + return status } @@ -1085,6 +1357,7 @@ func (pool *TxPool) removeTx(hash common.Hash, outofbound bool) int { if tx == nil { return 0 } + addr, _ := types.Sender(pool.signer, tx) // already validated during insertion // Remove it from the list of known transactions @@ -1092,35 +1365,52 @@ func (pool *TxPool) removeTx(hash common.Hash, outofbound bool) int { if outofbound { pool.priced.Removed(1) } + if pool.locals.contains(addr) { localGauge.Dec(1) } + // Remove the transaction from the pending lists and reset the account nonce + pool.pendingMu.Lock() + if pending := pool.pending[addr]; pending != nil { if removed, invalids := pending.Remove(tx); removed { + pool.pendingCount-- + // If no more pending transactions are left, remove the list if pending.Empty() { delete(pool.pending, addr) } + + pool.pendingMu.Unlock() + // Postpone any invalidated transactions for _, tx := range invalids { // Internal shuffle shouldn't touch the lookup set. 
pool.enqueueTx(tx.Hash(), tx, false, false) } + // Update the account nonce if needed pool.pendingNonces.setIfLower(addr, tx.Nonce()) + // Reduce the pending counter pendingGauge.Dec(int64(1 + len(invalids))) return 1 + len(invalids) } + + pool.pendingMu.TryLock() } + + pool.pendingMu.Unlock() + // Transaction is in the future queue if future := pool.queue[addr]; future != nil { if removed, _ := future.Remove(tx); removed { // Reduce the queued counter queuedGauge.Dec(1) } + if future.Empty() { delete(pool.queue, addr) delete(pool.beats, addr) @@ -1178,8 +1468,10 @@ func (pool *TxPool) scheduleReorgLoop() { for { // Launch next background reorg if needed if curDone == nil && launchNextRun { + ctx := context.Background() + // Run the background reorg and announcements - go pool.runReorg(nextDone, reset, dirtyAccounts, queuedEvents) + go pool.runReorg(ctx, nextDone, reset, dirtyAccounts, queuedEvents) // Prepare everything for the next round of reorg curDone, nextDone = nextDone, make(chan struct{}) @@ -1234,86 +1526,178 @@ func (pool *TxPool) scheduleReorgLoop() { } // runReorg runs reset and promoteExecutables on behalf of scheduleReorgLoop. -func (pool *TxPool) runReorg(done chan struct{}, reset *txpoolResetRequest, dirtyAccounts *accountSet, events map[common.Address]*txSortedMap) { - defer func(t0 time.Time) { - reorgDurationTimer.Update(time.Since(t0)) - }(time.Now()) - defer close(done) - - var promoteAddrs []common.Address - if dirtyAccounts != nil && reset == nil { - // Only dirty accounts need to be promoted, unless we're resetting. - // For resets, all addresses in the tx queue will be promoted and - // the flatten operation can be avoided. - promoteAddrs = dirtyAccounts.flatten() - } - pool.mu.Lock() - if reset != nil { - // Reset from the old head to the new, rescheduling any reorged transactions - pool.reset(reset.oldHead, reset.newHead) - - // Nonces were reset, discard any events that became stale - for addr := range events { - events[addr].Forward(pool.pendingNonces.get(addr)) - if events[addr].Len() == 0 { - delete(events, addr) +// +//nolint:gocognit +func (pool *TxPool) runReorg(ctx context.Context, done chan struct{}, reset *txpoolResetRequest, dirtyAccounts *accountSet, events map[common.Address]*txSortedMap) { + tracing.Exec(ctx, "TxPoolReorg", "txpool-reorg", func(ctx context.Context, span trace.Span) { + defer func(t0 time.Time) { + reorgDurationTimer.Update(time.Since(t0)) + }(time.Now()) + + defer close(done) + + var promoteAddrs []common.Address + + tracing.ElapsedTime(ctx, span, "01 dirty accounts flattening", func(_ context.Context, innerSpan trace.Span) { + if dirtyAccounts != nil && reset == nil { + // Only dirty accounts need to be promoted, unless we're resetting. + // For resets, all addresses in the tx queue will be promoted and + // the flatten operation can be avoided. 
+ promoteAddrs = dirtyAccounts.flatten() } + + tracing.SetAttributes( + innerSpan, + attribute.Int("promoteAddresses-flatten", len(promoteAddrs)), + ) + }) + + tracing.ElapsedTime(ctx, span, "02 obtaining pool.WMutex", func(_ context.Context, _ trace.Span) { + pool.mu.Lock() + }) + + if reset != nil { + tracing.ElapsedTime(ctx, span, "03 reset-head reorg", func(_ context.Context, innerSpan trace.Span) { + + // Reset from the old head to the new, rescheduling any reorged transactions + tracing.ElapsedTime(ctx, innerSpan, "04 reset-head-itself reorg", func(_ context.Context, innerSpan trace.Span) { + pool.reset(reset.oldHead, reset.newHead) + }) + + tracing.SetAttributes( + innerSpan, + attribute.Int("events-reset-head", len(events)), + ) + + // Nonces were reset, discard any events that became stale + for addr := range events { + events[addr].Forward(pool.pendingNonces.get(addr)) + + if events[addr].Len() == 0 { + delete(events, addr) + } + } + + // Reset needs promote for all addresses + promoteAddrs = make([]common.Address, 0, len(pool.queue)) + for addr := range pool.queue { + promoteAddrs = append(promoteAddrs, addr) + } + + tracing.SetAttributes( + innerSpan, + attribute.Int("promoteAddresses-reset-head", len(promoteAddrs)), + ) + }) } - // Reset needs promote for all addresses - promoteAddrs = make([]common.Address, 0, len(pool.queue)) - for addr := range pool.queue { - promoteAddrs = append(promoteAddrs, addr) + + // Check for pending transactions for every account that sent new ones + var promoted []*types.Transaction + + tracing.ElapsedTime(ctx, span, "05 promoteExecutables", func(_ context.Context, _ trace.Span) { + promoted = pool.promoteExecutables(promoteAddrs) + }) + + tracing.SetAttributes( + span, + attribute.Int("count.promoteAddresses-reset-head", len(promoteAddrs)), + attribute.Int("count.all", pool.all.Count()), + attribute.Int("count.pending", len(pool.pending)), + attribute.Int("count.queue", len(pool.queue)), + ) + + // If a new block appeared, validate the pool of pending transactions. This will + // remove any transaction that has been included in the block or was invalidated + // because of another transaction (e.g. higher gas price). 
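runReorg is now executed inside a root span via tracing.Exec, and every phase, from flattening the dirty accounts to the nonce reset, is wrapped in tracing.ElapsedTime with counters attached through tracing.SetAttributes; the new-block validation that the comment above describes follows the same scheme in the hunk below. A reduced sketch of that instrumentation pattern, reusing only the helpers visible in this diff; the import path of the tracing package is an assumption and the phase body is a placeholder:

package main

import (
	"context"
	"time"

	"github.com/ethereum/go-ethereum/common/tracing" // assumed location of bor's tracing helpers
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/trace"
)

// instrumentedPhase runs a single named phase under a root span, mirroring the
// numbered "01 ...", "02 ..." steps of runReorg above.
func instrumentedPhase(ctx context.Context) {
	tracing.Exec(ctx, "TxPoolReorgSketch", "txpool-reorg-sketch", func(ctx context.Context, span trace.Span) {
		var promoted int

		tracing.ElapsedTime(ctx, span, "01 placeholder work", func(_ context.Context, _ trace.Span) {
			time.Sleep(10 * time.Millisecond) // stand-in for promoteExecutables and friends
			promoted = 3
		})

		tracing.SetAttributes(
			span,
			attribute.Int("count.promoted", promoted),
		)
	})
}

func main() {
	instrumentedPhase(context.Background())
}

With the default no-op OpenTelemetry tracer this should cost little, while a real exporter turns each reorg into a span tree whose step durations and pool sizes can be inspected directly.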
+ + //nolint:nestif + if reset != nil { + tracing.ElapsedTime(ctx, span, "new block", func(_ context.Context, innerSpan trace.Span) { + + tracing.ElapsedTime(ctx, innerSpan, "06 demoteUnexecutables", func(_ context.Context, _ trace.Span) { + pool.demoteUnexecutables() + }) + + var nonces map[common.Address]uint64 + + tracing.ElapsedTime(ctx, innerSpan, "07 set_base_fee", func(_ context.Context, _ trace.Span) { + if reset.newHead != nil { + if pool.chainconfig.IsLondon(new(big.Int).Add(reset.newHead.Number, big.NewInt(1))) { + // london fork enabled, reset given the base fee + pendingBaseFee := misc.CalcBaseFeeUint(pool.chainconfig, reset.newHead) + pool.priced.SetBaseFee(pendingBaseFee) + } else { + // london fork not enabled, reheap to "reset" the priced list + pool.priced.Reheap() + } + } + + // Update all accounts to the latest known pending nonce + nonces = make(map[common.Address]uint64, len(pool.pending)) + }) + + tracing.ElapsedTime(ctx, innerSpan, "08 obtaining pendingMu.RMutex", func(_ context.Context, _ trace.Span) { + pool.pendingMu.RLock() + }) + + var highestPending *types.Transaction + + tracing.ElapsedTime(ctx, innerSpan, "09 fill nonces", func(_ context.Context, innerSpan trace.Span) { + for addr, list := range pool.pending { + highestPending = list.LastElement() + if highestPending != nil { + nonces[addr] = highestPending.Nonce() + 1 + } + } + }) + + pool.pendingMu.RUnlock() + + tracing.ElapsedTime(ctx, innerSpan, "10 reset nonces", func(_ context.Context, _ trace.Span) { + pool.pendingNonces.setAll(nonces) + }) + }) } - } - // Check for pending transactions for every account that sent new ones - promoted := pool.promoteExecutables(promoteAddrs) - - // If a new block appeared, validate the pool of pending transactions. This will - // remove any transaction that has been included in the block or was invalidated - // because of another transaction (e.g. higher gas price). - if reset != nil { - pool.demoteUnexecutables() - if reset.newHead != nil { - if pool.chainconfig.IsLondon(new(big.Int).Add(reset.newHead.Number, big.NewInt(1))) { - // london fork enabled, reset given the base fee - pendingBaseFee := misc.CalcBaseFee(pool.chainconfig, reset.newHead) - pool.priced.SetBaseFee(pendingBaseFee) - } else { - // london fork not enabled, reheap to "reset" the priced list - pool.priced.Reheap() + + // Ensure pool.queue and pool.pending sizes stay within the configured limits. + tracing.ElapsedTime(ctx, span, "11 truncatePending", func(_ context.Context, _ trace.Span) { + pool.truncatePending() + }) + + tracing.ElapsedTime(ctx, span, "12 truncateQueue", func(_ context.Context, _ trace.Span) { + pool.truncateQueue() + }) + + dropBetweenReorgHistogram.Update(int64(pool.changesSinceReorg)) + pool.changesSinceReorg = 0 // Reset change counter + + pool.mu.Unlock() + + // Notify subsystems for newly added transactions + tracing.ElapsedTime(ctx, span, "13 notify about new transactions", func(_ context.Context, _ trace.Span) { + for _, tx := range promoted { + addr, _ := types.Sender(pool.signer, tx) + + if _, ok := events[addr]; !ok { + events[addr] = newTxSortedMap() + } + + events[addr].Put(tx) } - } - // Update all accounts to the latest known pending nonce - nonces := make(map[common.Address]uint64, len(pool.pending)) - for addr, list := range pool.pending { - highestPending := list.LastElement() - nonces[addr] = highestPending.Nonce() + 1 - } - pool.pendingNonces.setAll(nonces) - } - // Ensure pool.queue and pool.pending sizes stay within the configured limits. 
- pool.truncatePending() - pool.truncateQueue() + }) - dropBetweenReorgHistogram.Update(int64(pool.changesSinceReorg)) - pool.changesSinceReorg = 0 // Reset change counter - pool.mu.Unlock() + if len(events) > 0 { + tracing.ElapsedTime(ctx, span, "14 txFeed", func(_ context.Context, _ trace.Span) { + var txs []*types.Transaction - // Notify subsystems for newly added transactions - for _, tx := range promoted { - addr, _ := types.Sender(pool.signer, tx) - if _, ok := events[addr]; !ok { - events[addr] = newTxSortedMap() - } - events[addr].Put(tx) - } - if len(events) > 0 { - var txs []*types.Transaction - for _, set := range events { - txs = append(txs, set.Flatten()...) + for _, set := range events { + txs = append(txs, set.Flatten()...) + } + + pool.txFeed.Send(NewTxsEvent{txs}) + }) } - pool.txFeed.Send(NewTxsEvent{txs}) - } + }) } // reset retrieves the current state of the blockchain and ensures the content @@ -1412,64 +1796,100 @@ func (pool *TxPool) reset(oldHead, newHead *types.Header) { // invalidated transactions (low nonce, low balance) are deleted. func (pool *TxPool) promoteExecutables(accounts []common.Address) []*types.Transaction { // Track the promoted transactions to broadcast them at once - var promoted []*types.Transaction + var ( + promoted []*types.Transaction + promotedLen int + forwards types.Transactions + forwardsLen int + caps types.Transactions + capsLen int + drops types.Transactions + dropsLen int + list *txList + hash common.Hash + readies types.Transactions + readiesLen int + ) + + balance := uint256.NewInt(0) // Iterate over all accounts and promote any executable transactions for _, addr := range accounts { - list := pool.queue[addr] + list = pool.queue[addr] if list == nil { continue // Just in case someone calls with a non existing account } + // Drop all transactions that are deemed too old (low nonce) - forwards := list.Forward(pool.currentState.GetNonce(addr)) + forwards = list.Forward(pool.currentState.GetNonce(addr)) + forwardsLen = len(forwards) + for _, tx := range forwards { - hash := tx.Hash() + hash = tx.Hash() pool.all.Remove(hash) } - log.Trace("Removed old queued transactions", "count", len(forwards)) + + log.Trace("Removed old queued transactions", "count", forwardsLen) + // Drop all transactions that are too costly (low balance or out of gas) - drops, _ := list.Filter(pool.currentState.GetBalance(addr), pool.currentMaxGas) + balance.SetFromBig(pool.currentState.GetBalance(addr)) + + drops, _ = list.Filter(balance, pool.currentMaxGas) + dropsLen = len(drops) + for _, tx := range drops { - hash := tx.Hash() + hash = tx.Hash() pool.all.Remove(hash) } - log.Trace("Removed unpayable queued transactions", "count", len(drops)) - queuedNofundsMeter.Mark(int64(len(drops))) + + log.Trace("Removed unpayable queued transactions", "count", dropsLen) + queuedNofundsMeter.Mark(int64(dropsLen)) // Gather all executable transactions and promote them - readies := list.Ready(pool.pendingNonces.get(addr)) + readies = list.Ready(pool.pendingNonces.get(addr)) + readiesLen = len(readies) + for _, tx := range readies { - hash := tx.Hash() + hash = tx.Hash() if pool.promoteTx(addr, hash, tx) { promoted = append(promoted, tx) } } - log.Trace("Promoted queued transactions", "count", len(promoted)) - queuedGauge.Dec(int64(len(readies))) + + log.Trace("Promoted queued transactions", "count", promotedLen) + queuedGauge.Dec(int64(readiesLen)) // Drop all transactions over the allowed limit - var caps types.Transactions if !pool.locals.contains(addr) { caps = 
list.Cap(int(pool.config.AccountQueue)) + capsLen = len(caps) + for _, tx := range caps { - hash := tx.Hash() + hash = tx.Hash() pool.all.Remove(hash) + log.Trace("Removed cap-exceeding queued transaction", "hash", hash) } - queuedRateLimitMeter.Mark(int64(len(caps))) + + queuedRateLimitMeter.Mark(int64(capsLen)) } + // Mark all the items dropped as removed - pool.priced.Removed(len(forwards) + len(drops) + len(caps)) - queuedGauge.Dec(int64(len(forwards) + len(drops) + len(caps))) + pool.priced.Removed(forwardsLen + dropsLen + capsLen) + + queuedGauge.Dec(int64(forwardsLen + dropsLen + capsLen)) + if pool.locals.contains(addr) { - localGauge.Dec(int64(len(forwards) + len(drops) + len(caps))) + localGauge.Dec(int64(forwardsLen + dropsLen + capsLen)) } + // Delete the entire queue entry if it became empty. if list.Empty() { delete(pool.queue, addr) delete(pool.beats, addr) } } + return promoted } @@ -1477,86 +1897,162 @@ func (pool *TxPool) promoteExecutables(accounts []common.Address) []*types.Trans // pending limit. The algorithm tries to reduce transaction counts by an approximately // equal number for all for accounts with many pending transactions. func (pool *TxPool) truncatePending() { - pending := uint64(0) - for _, list := range pool.pending { - pending += uint64(list.Len()) - } + pending := uint64(pool.pendingCount) if pending <= pool.config.GlobalSlots { return } pendingBeforeCap := pending + + var listLen int + + type pair struct { + address common.Address + value int64 + } + // Assemble a spam order to penalize large transactors first - spammers := prque.New(nil) + spammers := make([]pair, 0, 8) + count := 0 + + var ok bool + + pool.pendingMu.RLock() for addr, list := range pool.pending { // Only evict transactions from high rollers - if !pool.locals.contains(addr) && uint64(list.Len()) > pool.config.AccountSlots { - spammers.Push(addr, int64(list.Len())) + listLen = len(list.txs.items) + + pool.pendingMu.RUnlock() + + pool.locals.m.RLock() + + if uint64(listLen) > pool.config.AccountSlots { + if _, ok = pool.locals.accounts[addr]; ok { + pool.locals.m.RUnlock() + + pool.pendingMu.RLock() + + continue + } + + count++ + + spammers = append(spammers, pair{addr, int64(listLen)}) } + + pool.locals.m.RUnlock() + + pool.pendingMu.RLock() } + + pool.pendingMu.RUnlock() + // Gradually drop transactions from offenders - offenders := []common.Address{} - for pending > pool.config.GlobalSlots && !spammers.Empty() { + offenders := make([]common.Address, 0, len(spammers)) + sort.Slice(spammers, func(i, j int) bool { + return spammers[i].value < spammers[j].value + }) + + var ( + offender common.Address + caps types.Transactions + capsLen int + list *txList + hash common.Hash + ) + + // todo: metrics: spammers, offenders, total loops + for len(spammers) != 0 && pending > pool.config.GlobalSlots { // Retrieve the next offender if not local address - offender, _ := spammers.Pop() - offenders = append(offenders, offender.(common.Address)) + offender, spammers = spammers[len(spammers)-1].address, spammers[:len(spammers)-1] + offenders = append(offenders, offender) + + var threshold int // Equalize balances until all the same or below threshold if len(offenders) > 1 { // Calculate the equalization threshold for all current offenders - threshold := pool.pending[offender.(common.Address)].Len() + pool.pendingMu.RLock() + threshold = len(pool.pending[offender].txs.items) // Iteratively reduce all offenders until below limit or threshold reached for pending > pool.config.GlobalSlots && 
pool.pending[offenders[len(offenders)-2]].Len() > threshold { for i := 0; i < len(offenders)-1; i++ { - list := pool.pending[offenders[i]] + list = pool.pending[offenders[i]] + + caps = list.Cap(len(list.txs.items) - 1) + capsLen = len(caps) + + pool.pendingMu.RUnlock() - caps := list.Cap(list.Len() - 1) for _, tx := range caps { // Drop the transaction from the global pools too - hash := tx.Hash() + hash = tx.Hash() pool.all.Remove(hash) // Update the account nonce to the dropped transaction pool.pendingNonces.setIfLower(offenders[i], tx.Nonce()) log.Trace("Removed fairness-exceeding pending transaction", "hash", hash) } - pool.priced.Removed(len(caps)) - pendingGauge.Dec(int64(len(caps))) + + pool.priced.Removed(capsLen) + + pendingGauge.Dec(int64(capsLen)) if pool.locals.contains(offenders[i]) { - localGauge.Dec(int64(len(caps))) + localGauge.Dec(int64(capsLen)) } + pending-- + + pool.pendingMu.RLock() } } + + pool.pendingMu.RUnlock() } } // If still above threshold, reduce to limit or min allowance if pending > pool.config.GlobalSlots && len(offenders) > 0 { + + pool.pendingMu.RLock() + for pending > pool.config.GlobalSlots && uint64(pool.pending[offenders[len(offenders)-1]].Len()) > pool.config.AccountSlots { for _, addr := range offenders { - list := pool.pending[addr] + list = pool.pending[addr] + + caps = list.Cap(len(list.txs.items) - 1) + capsLen = len(caps) + + pool.pendingMu.RUnlock() - caps := list.Cap(list.Len() - 1) for _, tx := range caps { // Drop the transaction from the global pools too - hash := tx.Hash() + hash = tx.Hash() pool.all.Remove(hash) // Update the account nonce to the dropped transaction pool.pendingNonces.setIfLower(addr, tx.Nonce()) log.Trace("Removed fairness-exceeding pending transaction", "hash", hash) } - pool.priced.Removed(len(caps)) - pendingGauge.Dec(int64(len(caps))) - if pool.locals.contains(addr) { - localGauge.Dec(int64(len(caps))) + + pool.priced.Removed(capsLen) + + pendingGauge.Dec(int64(capsLen)) + + if _, ok = pool.locals.accounts[addr]; ok { + localGauge.Dec(int64(capsLen)) } + pending-- + + pool.pendingMu.RLock() } } + + pool.pendingMu.RUnlock() } + pendingRateLimitMeter.Mark(int64(pendingBeforeCap - pending)) } @@ -1579,27 +2075,52 @@ func (pool *TxPool) truncateQueue() { } sort.Sort(addresses) + var ( + tx *types.Transaction + txs types.Transactions + list *txList + addr addressByHeartbeat + size uint64 + ) + // Drop transactions until the total is below the limit or only locals remain for drop := queued - pool.config.GlobalQueue; drop > 0 && len(addresses) > 0; { - addr := addresses[len(addresses)-1] - list := pool.queue[addr.address] + addr = addresses[len(addresses)-1] + list = pool.queue[addr.address] addresses = addresses[:len(addresses)-1] + var ( + listFlatten types.Transactions + isSet bool + ) + // Drop all transactions if they are less than the overflow - if size := uint64(list.Len()); size <= drop { - for _, tx := range list.Flatten() { + if size = uint64(list.Len()); size <= drop { + listFlatten = list.Flatten() + isSet = true + + for _, tx = range listFlatten { pool.removeTx(tx.Hash(), true) } + drop -= size queuedRateLimitMeter.Mark(int64(size)) + continue } + // Otherwise drop only last few transactions - txs := list.Flatten() + if !isSet { + listFlatten = list.Flatten() + } + + txs = listFlatten for i := len(txs) - 1; i >= 0 && drop > 0; i-- { pool.removeTx(txs[i].Hash(), true) + drop-- + queuedRateLimitMeter.Mark(1) } } @@ -1613,56 +2134,98 @@ func (pool *TxPool) truncateQueue() { // is always explicitly triggered by 
SetBaseFee and it would be unnecessary and wasteful // to trigger a re-heap is this function func (pool *TxPool) demoteUnexecutables() { + balance := uint256.NewInt(0) + + var ( + olds types.Transactions + oldsLen int + hash common.Hash + drops types.Transactions + dropsLen int + invalids types.Transactions + invalidsLen int + gapped types.Transactions + gappedLen int + ) + // Iterate over all accounts and demote any non-executable transactions + pool.pendingMu.RLock() + for addr, list := range pool.pending { nonce := pool.currentState.GetNonce(addr) // Drop all transactions that are deemed too old (low nonce) - olds := list.Forward(nonce) + olds = list.Forward(nonce) + oldsLen = len(olds) + for _, tx := range olds { - hash := tx.Hash() + hash = tx.Hash() pool.all.Remove(hash) log.Trace("Removed old pending transaction", "hash", hash) } + // Drop all transactions that are too costly (low balance or out of gas), and queue any invalids back for later - drops, invalids := list.Filter(pool.currentState.GetBalance(addr), pool.currentMaxGas) + balance.SetFromBig(pool.currentState.GetBalance(addr)) + drops, invalids = list.Filter(balance, pool.currentMaxGas) + dropsLen = len(drops) + invalidsLen = len(invalids) + for _, tx := range drops { - hash := tx.Hash() + hash = tx.Hash() + log.Trace("Removed unpayable pending transaction", "hash", hash) + pool.all.Remove(hash) } - pendingNofundsMeter.Mark(int64(len(drops))) + + pendingNofundsMeter.Mark(int64(dropsLen)) for _, tx := range invalids { - hash := tx.Hash() + hash = tx.Hash() + log.Trace("Demoting pending transaction", "hash", hash) // Internal shuffle shouldn't touch the lookup set. pool.enqueueTx(hash, tx, false, false) } - pendingGauge.Dec(int64(len(olds) + len(drops) + len(invalids))) + + pendingGauge.Dec(int64(oldsLen + dropsLen + invalidsLen)) + if pool.locals.contains(addr) { - localGauge.Dec(int64(len(olds) + len(drops) + len(invalids))) + localGauge.Dec(int64(oldsLen + dropsLen + invalidsLen)) } // If there's a gap in front, alert (should never happen) and postpone all transactions if list.Len() > 0 && list.txs.Get(nonce) == nil { - gapped := list.Cap(0) + gapped = list.Cap(0) + gappedLen = len(gapped) + for _, tx := range gapped { - hash := tx.Hash() + hash = tx.Hash() log.Error("Demoting invalidated transaction", "hash", hash) // Internal shuffle shouldn't touch the lookup set. pool.enqueueTx(hash, tx, false, false) } - pendingGauge.Dec(int64(len(gapped))) + + pendingGauge.Dec(int64(gappedLen)) // This might happen in a reorg, so log it to the metering - blockReorgInvalidatedTx.Mark(int64(len(gapped))) + blockReorgInvalidatedTx.Mark(int64(gappedLen)) } + // Delete the entire pending entry if it became empty. if list.Empty() { + pool.pendingMu.RUnlock() + pool.pendingMu.Lock() + + pool.pendingCount -= pool.pending[addr].Len() delete(pool.pending, addr) + + pool.pendingMu.Unlock() + pool.pendingMu.RLock() } } + + pool.pendingMu.RUnlock() } // addressByHeartbeat is an account address tagged with its last activity timestamp. @@ -1680,9 +2243,10 @@ func (a addressesByHeartbeat) Swap(i, j int) { a[i], a[j] = a[j], a[i] } // accountSet is simply a set of addresses to check for existence, and a signer // capable of deriving addresses from transactions. 
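The accountSet rework below drops the lazily rebuilt cache pointer in favour of a flattened slice (accountsFlatted) that is maintained on every insert, and adds an RWMutex because the set is now touched from several goroutines; flatten therefore becomes a cheap read instead of an allocation, and the original warning that the returned slice must not be modified still applies. A stand-alone sketch of the same idea, using string keys and illustrative names:

package main

import (
	"fmt"
	"sync"
)

// addrSet maintains the flattened view incrementally instead of invalidating a
// cache on every add, and guards all access with an RWMutex.
type addrSet struct {
	mu        sync.RWMutex
	members   map[string]struct{}
	flattened []string
}

func newAddrSet() *addrSet {
	return &addrSet{members: make(map[string]struct{})}
}

func (s *addrSet) add(a string) {
	s.mu.Lock()
	defer s.mu.Unlock()

	if _, ok := s.members[a]; !ok {
		s.flattened = append(s.flattened, a)
	}
	s.members[a] = struct{}{}
}

func (s *addrSet) contains(a string) bool {
	s.mu.RLock()
	defer s.mu.RUnlock()

	_, ok := s.members[a]
	return ok
}

// flatten returns the cached slice; callers must treat it as read-only.
func (s *addrSet) flatten() []string {
	s.mu.RLock()
	defer s.mu.RUnlock()

	return s.flattened
}

func main() {
	s := newAddrSet()
	s.add("0xabc")
	s.add("0xabc") // duplicate: both the set and the flattened view stay at one entry
	fmt.Println(s.contains("0xabc"), len(s.flatten())) // true 1
}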
type accountSet struct { - accounts map[common.Address]struct{} - signer types.Signer - cache *[]common.Address + accounts map[common.Address]struct{} + accountsFlatted []common.Address + signer types.Signer + m sync.RWMutex } // newAccountSet creates a new address set with an associated signer for sender @@ -1700,17 +2264,26 @@ func newAccountSet(signer types.Signer, addrs ...common.Address) *accountSet { // contains checks if a given address is contained within the set. func (as *accountSet) contains(addr common.Address) bool { + as.m.RLock() + defer as.m.RUnlock() + _, exist := as.accounts[addr] return exist } func (as *accountSet) empty() bool { + as.m.RLock() + defer as.m.RUnlock() + return len(as.accounts) == 0 } // containsTx checks if the sender of a given tx is within the set. If the sender // cannot be derived, this method returns false. func (as *accountSet) containsTx(tx *types.Transaction) bool { + as.m.RLock() + defer as.m.RUnlock() + if addr, err := types.Sender(as.signer, tx); err == nil { return as.contains(addr) } @@ -1719,8 +2292,14 @@ func (as *accountSet) containsTx(tx *types.Transaction) bool { // add inserts a new address into the set to track. func (as *accountSet) add(addr common.Address) { + as.m.Lock() + defer as.m.Unlock() + + if _, ok := as.accounts[addr]; !ok { + as.accountsFlatted = append(as.accountsFlatted, addr) + } + as.accounts[addr] = struct{}{} - as.cache = nil } // addTx adds the sender of tx into the set. @@ -1733,22 +2312,25 @@ func (as *accountSet) addTx(tx *types.Transaction) { // flatten returns the list of addresses within this set, also caching it for later // reuse. The returned slice should not be changed! func (as *accountSet) flatten() []common.Address { - if as.cache == nil { - accounts := make([]common.Address, 0, len(as.accounts)) - for account := range as.accounts { - accounts = append(accounts, account) - } - as.cache = &accounts - } - return *as.cache + as.m.RLock() + defer as.m.RUnlock() + + return as.accountsFlatted } // merge adds all addresses from the 'other' set into 'as'. 
func (as *accountSet) merge(other *accountSet) { + var ok bool + + as.m.Lock() + defer as.m.Unlock() + for addr := range other.accounts { + if _, ok = as.accounts[addr]; !ok { + as.accountsFlatted = append(as.accountsFlatted, addr) + } as.accounts[addr] = struct{}{} } - as.cache = nil } // txLookup is used internally by TxPool to track transactions while allowing @@ -1904,7 +2486,10 @@ func (t *txLookup) RemoteToLocals(locals *accountSet) int { var migrated int for hash, tx := range t.remotes { if locals.containsTx(tx) { + locals.m.Lock() t.locals[hash] = tx + locals.m.Unlock() + delete(t.remotes, hash) migrated += 1 } diff --git a/core/tx_pool_test.go b/core/tx_pool_test.go index 883a65a23f..13fa4ff20d 100644 --- a/core/tx_pool_test.go +++ b/core/tx_pool_test.go @@ -21,6 +21,7 @@ import ( "crypto/ecdsa" "errors" "fmt" + "io" "io/ioutil" "math/big" "math/rand" @@ -32,11 +33,15 @@ import ( "testing" "time" + "github.com/holiman/uint256" + "go.uber.org/goleak" "gonum.org/v1/gonum/floats" "gonum.org/v1/gonum/stat" "pgregory.net/rapid" "github.com/ethereum/go-ethereum/common" + "github.com/ethereum/go-ethereum/common/debug" + "github.com/ethereum/go-ethereum/common/leak" "github.com/ethereum/go-ethereum/core/rawdb" "github.com/ethereum/go-ethereum/core/state" "github.com/ethereum/go-ethereum/core/types" @@ -98,7 +103,7 @@ func transaction(nonce uint64, gaslimit uint64, key *ecdsa.PrivateKey) *types.Tr } func pricedTransaction(nonce uint64, gaslimit uint64, gasprice *big.Int, key *ecdsa.PrivateKey) *types.Transaction { - tx, _ := types.SignTx(types.NewTransaction(nonce, common.Address{}, big.NewInt(100), gaslimit, gasprice, nil), types.HomesteadSigner{}, key) + tx, _ := types.SignTx(types.NewTransaction(nonce, common.Address{0x01}, big.NewInt(100), gaslimit, gasprice, nil), types.HomesteadSigner{}, key) return tx } @@ -153,12 +158,17 @@ func validateTxPoolInternals(pool *TxPool) error { if total := pool.all.Count(); total != pending+queued { return fmt.Errorf("total transaction count %d != %d pending + %d queued", total, pending, queued) } + pool.priced.Reheap() priced, remote := pool.priced.urgent.Len()+pool.priced.floating.Len(), pool.all.RemoteCount() if priced != remote { return fmt.Errorf("total priced transaction count %d != %d", priced, remote) } + // Ensure the next nonce to assign is the correct one + pool.pendingMu.RLock() + defer pool.pendingMu.RUnlock() + for addr, txs := range pool.pending { // Find the last transaction var last uint64 @@ -167,6 +177,7 @@ func validateTxPoolInternals(pool *TxPool) error { last = nonce } } + if nonce := pool.pendingNonces.get(addr); nonce != last+1 { return fmt.Errorf("pending nonce mismatch: have %v, want %v", nonce, last+1) } @@ -175,6 +186,7 @@ func validateTxPoolInternals(pool *TxPool) error { return fmt.Errorf("totalcost went negative: %v", txs.totalcost) } } + return nil } @@ -329,10 +341,18 @@ func TestInvalidTransactions(t *testing.T) { } tx = transaction(1, 100000, key) + + pool.gasPriceMu.Lock() + pool.gasPrice = big.NewInt(1000) - if err := pool.AddRemote(tx); err != ErrUnderpriced { + pool.gasPriceUint = uint256.NewInt(1000) + + pool.gasPriceMu.Unlock() + + if err := pool.AddRemote(tx); !errors.Is(err, ErrUnderpriced) { t.Error("expected", ErrUnderpriced, "got", err) } + if err := pool.AddLocal(tx); err != nil { t.Error("expected", nil, "got", err) } @@ -351,9 +371,12 @@ func TestTransactionQueue(t *testing.T) { pool.enqueueTx(tx.Hash(), tx, false, true) <-pool.requestPromoteExecutables(newAccountSet(pool.signer, from)) + + 
pool.pendingMu.RLock() if len(pool.pending) != 1 { t.Error("expected valid txs to be 1 is", len(pool.pending)) } + pool.pendingMu.RUnlock() tx = transaction(1, 100, key) from, _ = deriveSender(tx) @@ -361,9 +384,13 @@ func TestTransactionQueue(t *testing.T) { pool.enqueueTx(tx.Hash(), tx, false, true) <-pool.requestPromoteExecutables(newAccountSet(pool.signer, from)) + + pool.pendingMu.RLock() if _, ok := pool.pending[from].txs.items[tx.Nonce()]; ok { t.Error("expected transaction to be in tx pool") } + pool.pendingMu.RUnlock() + if len(pool.queue) > 0 { t.Error("expected transaction queue to be empty. is", len(pool.queue)) } @@ -387,9 +414,13 @@ func TestTransactionQueue2(t *testing.T) { pool.enqueueTx(tx3.Hash(), tx3, false, true) pool.promoteExecutables([]common.Address{from}) + + pool.pendingMu.RLock() if len(pool.pending) != 1 { t.Error("expected pending length to be 1, got", len(pool.pending)) } + pool.pendingMu.RUnlock() + if pool.queue[from].Len() != 2 { t.Error("expected len(queue) == 2, got", pool.queue[from].Len()) } @@ -403,8 +434,10 @@ func TestTransactionNegativeValue(t *testing.T) { tx, _ := types.SignTx(types.NewTransaction(0, common.Address{}, big.NewInt(-1), 100, big.NewInt(1), nil), types.HomesteadSigner{}, key) from, _ := deriveSender(tx) + testAddBalance(pool, from, big.NewInt(1)) - if err := pool.AddRemote(tx); err != ErrNegativeValue { + + if err := pool.AddRemote(tx); !errors.Is(err, ErrNegativeValue) { t.Error("expected", ErrNegativeValue, "got", err) } } @@ -417,7 +450,7 @@ func TestTransactionTipAboveFeeCap(t *testing.T) { tx := dynamicFeeTx(0, 100, big.NewInt(1), big.NewInt(2), key) - if err := pool.AddRemote(tx); err != ErrTipAboveFeeCap { + if err := pool.AddRemote(tx); !errors.Is(err, ErrTipAboveFeeCap) { t.Error("expected", ErrTipAboveFeeCap, "got", err) } } @@ -432,12 +465,12 @@ func TestTransactionVeryHighValues(t *testing.T) { veryBigNumber.Lsh(veryBigNumber, 300) tx := dynamicFeeTx(0, 100, big.NewInt(1), veryBigNumber, key) - if err := pool.AddRemote(tx); err != ErrTipVeryHigh { + if err := pool.AddRemote(tx); !errors.Is(err, ErrTipVeryHigh) { t.Error("expected", ErrTipVeryHigh, "got", err) } tx2 := dynamicFeeTx(0, 100, veryBigNumber, big.NewInt(1), key) - if err := pool.AddRemote(tx2); err != ErrFeeCapVeryHigh { + if err := pool.AddRemote(tx2); !errors.Is(err, ErrFeeCapVeryHigh) { t.Error("expected", ErrFeeCapVeryHigh, "got", err) } } @@ -499,23 +532,32 @@ func TestTransactionDoubleNonce(t *testing.T) { if replace, err := pool.add(tx2, false); err != nil || !replace { t.Errorf("second transaction insert failed (%v) or not reported replacement (%v)", err, replace) } + <-pool.requestPromoteExecutables(newAccountSet(signer, addr)) + + pool.pendingMu.RLock() if pool.pending[addr].Len() != 1 { t.Error("expected 1 pending transactions, got", pool.pending[addr].Len()) } if tx := pool.pending[addr].txs.items[0]; tx.Hash() != tx2.Hash() { t.Errorf("transaction mismatch: have %x, want %x", tx.Hash(), tx2.Hash()) } + pool.pendingMu.RUnlock() // Add the third transaction and ensure it's not saved (smaller price) pool.add(tx3, false) + <-pool.requestPromoteExecutables(newAccountSet(signer, addr)) + + pool.pendingMu.RLock() if pool.pending[addr].Len() != 1 { t.Error("expected 1 pending transactions, got", pool.pending[addr].Len()) } if tx := pool.pending[addr].txs.items[0]; tx.Hash() != tx2.Hash() { t.Errorf("transaction mismatch: have %x, want %x", tx.Hash(), tx2.Hash()) } + pool.pendingMu.RUnlock() + // Ensure the total transaction count is correct if 
pool.all.Count() != 1 { t.Error("expected 1 total transactions, got", pool.all.Count()) @@ -534,9 +576,13 @@ func TestTransactionMissingNonce(t *testing.T) { if _, err := pool.add(tx, false); err != nil { t.Error("didn't expect error", err) } + + pool.pendingMu.RLock() if len(pool.pending) != 0 { t.Error("expected 0 pending transactions, got", len(pool.pending)) } + pool.pendingMu.RUnlock() + if pool.queue[addr].Len() != 1 { t.Error("expected 1 queued transaction, got", pool.queue[addr].Len()) } @@ -607,19 +653,27 @@ func TestTransactionDropping(t *testing.T) { pool.enqueueTx(tx12.Hash(), tx12, false, true) // Check that pre and post validations leave the pool as is + pool.pendingMu.RLock() if pool.pending[account].Len() != 3 { t.Errorf("pending transaction mismatch: have %d, want %d", pool.pending[account].Len(), 3) } + pool.pendingMu.RUnlock() + if pool.queue[account].Len() != 3 { t.Errorf("queued transaction mismatch: have %d, want %d", pool.queue[account].Len(), 3) } if pool.all.Count() != 6 { t.Errorf("total transaction mismatch: have %d, want %d", pool.all.Count(), 6) } + <-pool.requestReset(nil, nil) + + pool.pendingMu.RLock() if pool.pending[account].Len() != 3 { t.Errorf("pending transaction mismatch: have %d, want %d", pool.pending[account].Len(), 3) } + pool.pendingMu.RUnlock() + if pool.queue[account].Len() != 3 { t.Errorf("queued transaction mismatch: have %d, want %d", pool.queue[account].Len(), 3) } @@ -630,6 +684,7 @@ func TestTransactionDropping(t *testing.T) { testAddBalance(pool, account, big.NewInt(-650)) <-pool.requestReset(nil, nil) + pool.pendingMu.RLock() if _, ok := pool.pending[account].txs.items[tx0.Nonce()]; !ok { t.Errorf("funded pending transaction missing: %v", tx0) } @@ -639,6 +694,8 @@ func TestTransactionDropping(t *testing.T) { if _, ok := pool.pending[account].txs.items[tx2.Nonce()]; ok { t.Errorf("out-of-fund pending transaction present: %v", tx1) } + pool.pendingMu.RUnlock() + if _, ok := pool.queue[account].txs.items[tx10.Nonce()]; !ok { t.Errorf("funded queued transaction missing: %v", tx10) } @@ -655,12 +712,15 @@ func TestTransactionDropping(t *testing.T) { atomic.StoreUint64(&pool.chain.(*testBlockChain).gasLimit, 100) <-pool.requestReset(nil, nil) + pool.pendingMu.RLock() if _, ok := pool.pending[account].txs.items[tx0.Nonce()]; !ok { t.Errorf("funded pending transaction missing: %v", tx0) } if _, ok := pool.pending[account].txs.items[tx1.Nonce()]; ok { t.Errorf("over-gased pending transaction present: %v", tx1) } + pool.pendingMu.RUnlock() + if _, ok := pool.queue[account].txs.items[tx10.Nonce()]; !ok { t.Errorf("funded queued transaction missing: %v", tx10) } @@ -715,19 +775,27 @@ func TestTransactionPostponing(t *testing.T) { } } // Check that pre and post validations leave the pool as is + pool.pendingMu.RLock() if pending := pool.pending[accs[0]].Len() + pool.pending[accs[1]].Len(); pending != len(txs) { t.Errorf("pending transaction mismatch: have %d, want %d", pending, len(txs)) } + pool.pendingMu.RUnlock() + if len(pool.queue) != 0 { t.Errorf("queued accounts mismatch: have %d, want %d", len(pool.queue), 0) } if pool.all.Count() != len(txs) { t.Errorf("total transaction mismatch: have %d, want %d", pool.all.Count(), len(txs)) } + <-pool.requestReset(nil, nil) + + pool.pendingMu.RLock() if pending := pool.pending[accs[0]].Len() + pool.pending[accs[1]].Len(); pending != len(txs) { t.Errorf("pending transaction mismatch: have %d, want %d", pending, len(txs)) } + pool.pendingMu.RUnlock() + if len(pool.queue) != 0 { t.Errorf("queued accounts 
mismatch: have %d, want %d", len(pool.queue), 0) } @@ -742,12 +810,17 @@ func TestTransactionPostponing(t *testing.T) { // The first account's first transaction remains valid, check that subsequent // ones are either filtered out, or queued up for later. + pool.pendingMu.RLock() if _, ok := pool.pending[accs[0]].txs.items[txs[0].Nonce()]; !ok { t.Errorf("tx %d: valid and funded transaction missing from pending pool: %v", 0, txs[0]) } + pool.pendingMu.RUnlock() + if _, ok := pool.queue[accs[0]].txs.items[txs[0].Nonce()]; ok { t.Errorf("tx %d: valid and funded transaction present in future queue: %v", 0, txs[0]) } + + pool.pendingMu.RLock() for i, tx := range txs[1:100] { if i%2 == 1 { if _, ok := pool.pending[accs[0]].txs.items[tx.Nonce()]; ok { @@ -765,11 +838,16 @@ func TestTransactionPostponing(t *testing.T) { } } } + pool.pendingMu.RUnlock() + // The second account's first transaction got invalid, check that all transactions // are either filtered out, or queued up for later. + pool.pendingMu.RLock() if pool.pending[accs[1]] != nil { t.Errorf("invalidated account still has pending transactions") } + pool.pendingMu.RUnlock() + for i, tx := range txs[100:] { if i%2 == 1 { if _, ok := pool.queue[accs[1]].txs.items[tx.Nonce()]; !ok { @@ -858,9 +936,13 @@ func TestTransactionQueueAccountLimiting(t *testing.T) { if err := pool.addRemoteSync(transaction(i, 100000, key)); err != nil { t.Fatalf("tx %d: failed to add transaction: %v", i, err) } + + pool.pendingMu.RLock() if len(pool.pending) != 0 { t.Errorf("tx %d: pending pool size mismatch: have %d, want %d", i, len(pool.pending), 0) } + pool.pendingMu.RUnlock() + if i <= testTxPoolConfig.AccountQueue { if pool.queue[account].Len() != int(i) { t.Errorf("tx %d: queue size mismatch: have %d, want %d", i, pool.queue[account].Len(), i) @@ -939,6 +1021,7 @@ func testTransactionQueueGlobalLimiting(t *testing.T, nolocals bool) { for i := uint64(0); i < 3*config.GlobalQueue; i++ { txs = append(txs, transaction(i+1, 100000, local)) } + pool.AddLocals(txs) // If locals are disabled, the previous eviction algorithm should apply here too @@ -1116,6 +1199,7 @@ func testTransactionQueueTimeLimiting(t *testing.T, nolocals bool) { t.Fatalf("queued transactions mismatched: have %d, want %d", queued, 1) } } + if err := validateTxPoolInternals(pool); err != nil { t.Fatalf("pool internal state corrupted: %v", err) } @@ -1144,9 +1228,13 @@ func TestTransactionPendingLimiting(t *testing.T) { if err := pool.addRemoteSync(transaction(i, 100000, key)); err != nil { t.Fatalf("tx %d: failed to add transaction: %v", i, err) } + + pool.pendingMu.RLock() if pool.pending[account].Len() != int(i)+1 { t.Errorf("tx %d: pending pool size mismatch: have %d, want %d", i, pool.pending[account].Len(), i+1) } + pool.pendingMu.RUnlock() + if len(pool.queue) != 0 { t.Errorf("tx %d: queue size mismatch: have %d, want %d", i, pool.queue[account].Len(), 0) } @@ -1199,9 +1287,13 @@ func TestTransactionPendingGlobalLimiting(t *testing.T) { pool.AddRemotesSync(txs) pending := 0 + + pool.pendingMu.RLock() for _, list := range pool.pending { pending += list.Len() } + pool.pendingMu.RUnlock() + if pending > int(config.GlobalSlots) { t.Fatalf("total pending transactions overflow allowance: %d > %d", pending, config.GlobalSlots) } @@ -1334,11 +1426,14 @@ func TestTransactionPendingMinimumAllowance(t *testing.T) { // Import the batch and verify that limits have been enforced pool.AddRemotesSync(txs) + pool.pendingMu.RLock() for addr, list := range pool.pending { if list.Len() != 
int(config.AccountSlots) { t.Errorf("addr %x: total pending transactions mismatch: have %d, want %d", addr, list.Len(), config.AccountSlots) } } + pool.pendingMu.RUnlock() + if err := validateTxPoolInternals(pool); err != nil { t.Fatalf("pool internal state corrupted: %v", err) } @@ -1395,15 +1490,19 @@ func TestTransactionPoolRepricing(t *testing.T) { if pending != 7 { t.Fatalf("pending transactions mismatched: have %d, want %d", pending, 7) } + if queued != 3 { t.Fatalf("queued transactions mismatched: have %d, want %d", queued, 3) } + if err := validateEvents(events, 7); err != nil { t.Fatalf("original event firing failed: %v", err) } + if err := validateTxPoolInternals(pool); err != nil { t.Fatalf("pool internal state corrupted: %v", err) } + // Reprice the pool and check that underpriced transactions get dropped pool.SetGasPrice(big.NewInt(2)) @@ -1411,58 +1510,76 @@ func TestTransactionPoolRepricing(t *testing.T) { if pending != 2 { t.Fatalf("pending transactions mismatched: have %d, want %d", pending, 2) } + if queued != 5 { t.Fatalf("queued transactions mismatched: have %d, want %d", queued, 5) } + if err := validateEvents(events, 0); err != nil { t.Fatalf("reprice event firing failed: %v", err) } + if err := validateTxPoolInternals(pool); err != nil { t.Fatalf("pool internal state corrupted: %v", err) } + // Check that we can't add the old transactions back - if err := pool.AddRemote(pricedTransaction(1, 100000, big.NewInt(1), keys[0])); err != ErrUnderpriced { + if err := pool.AddRemote(pricedTransaction(1, 100000, big.NewInt(1), keys[0])); !errors.Is(err, ErrUnderpriced) { t.Fatalf("adding underpriced pending transaction error mismatch: have %v, want %v", err, ErrUnderpriced) } - if err := pool.AddRemote(pricedTransaction(0, 100000, big.NewInt(1), keys[1])); err != ErrUnderpriced { + + if err := pool.AddRemote(pricedTransaction(0, 100000, big.NewInt(1), keys[1])); !errors.Is(err, ErrUnderpriced) { t.Fatalf("adding underpriced pending transaction error mismatch: have %v, want %v", err, ErrUnderpriced) } - if err := pool.AddRemote(pricedTransaction(2, 100000, big.NewInt(1), keys[2])); err != ErrUnderpriced { + + if err := pool.AddRemote(pricedTransaction(2, 100000, big.NewInt(1), keys[2])); !errors.Is(err, ErrUnderpriced) { t.Fatalf("adding underpriced queued transaction error mismatch: have %v, want %v", err, ErrUnderpriced) } + if err := validateEvents(events, 0); err != nil { t.Fatalf("post-reprice event firing failed: %v", err) } + if err := validateTxPoolInternals(pool); err != nil { t.Fatalf("pool internal state corrupted: %v", err) } + // However we can add local underpriced transactions tx := pricedTransaction(1, 100000, big.NewInt(1), keys[3]) + if err := pool.AddLocal(tx); err != nil { t.Fatalf("failed to add underpriced local transaction: %v", err) } + if pending, _ = pool.Stats(); pending != 3 { t.Fatalf("pending transactions mismatched: have %d, want %d", pending, 3) } + if err := validateEvents(events, 1); err != nil { t.Fatalf("post-reprice local event firing failed: %v", err) } + if err := validateTxPoolInternals(pool); err != nil { t.Fatalf("pool internal state corrupted: %v", err) } + // And we can fill gaps with properly priced transactions if err := pool.AddRemote(pricedTransaction(1, 100000, big.NewInt(2), keys[0])); err != nil { t.Fatalf("failed to add pending transaction: %v", err) } + if err := pool.AddRemote(pricedTransaction(0, 100000, big.NewInt(2), keys[1])); err != nil { t.Fatalf("failed to add pending transaction: %v", err) } + if err := 
pool.AddRemote(pricedTransaction(2, 100000, big.NewInt(2), keys[2])); err != nil { t.Fatalf("failed to add queued transaction: %v", err) } + if err := validateEvents(events, 5); err != nil { t.Fatalf("post-reprice event firing failed: %v", err) } + if err := validateTxPoolInternals(pool); err != nil { t.Fatalf("pool internal state corrupted: %v", err) } @@ -1491,6 +1608,7 @@ func TestTransactionPoolRepricingDynamicFee(t *testing.T) { keys[i], _ = crypto.GenerateKey() testAddBalance(pool, crypto.PubkeyToAddress(keys[i].PublicKey), big.NewInt(1000000)) } + // Generate and queue a batch of transactions, both pending and queued txs := types.Transactions{} @@ -1516,15 +1634,19 @@ func TestTransactionPoolRepricingDynamicFee(t *testing.T) { if pending != 7 { t.Fatalf("pending transactions mismatched: have %d, want %d", pending, 7) } + if queued != 3 { t.Fatalf("queued transactions mismatched: have %d, want %d", queued, 3) } + if err := validateEvents(events, 7); err != nil { t.Fatalf("original event firing failed: %v", err) } + if err := validateTxPoolInternals(pool); err != nil { t.Fatalf("pool internal state corrupted: %v", err) } + // Reprice the pool and check that underpriced transactions get dropped pool.SetGasPrice(big.NewInt(2)) @@ -1532,64 +1654,87 @@ func TestTransactionPoolRepricingDynamicFee(t *testing.T) { if pending != 2 { t.Fatalf("pending transactions mismatched: have %d, want %d", pending, 2) } + if queued != 5 { t.Fatalf("queued transactions mismatched: have %d, want %d", queued, 5) } + if err := validateEvents(events, 0); err != nil { t.Fatalf("reprice event firing failed: %v", err) } + if err := validateTxPoolInternals(pool); err != nil { t.Fatalf("pool internal state corrupted: %v", err) } + // Check that we can't add the old transactions back tx := pricedTransaction(1, 100000, big.NewInt(1), keys[0]) - if err := pool.AddRemote(tx); err != ErrUnderpriced { + + if err := pool.AddRemote(tx); !errors.Is(err, ErrUnderpriced) { t.Fatalf("adding underpriced pending transaction error mismatch: have %v, want %v", err, ErrUnderpriced) } + tx = dynamicFeeTx(0, 100000, big.NewInt(2), big.NewInt(1), keys[1]) - if err := pool.AddRemote(tx); err != ErrUnderpriced { + + if err := pool.AddRemote(tx); !errors.Is(err, ErrUnderpriced) { t.Fatalf("adding underpriced pending transaction error mismatch: have %v, want %v", err, ErrUnderpriced) } + tx = dynamicFeeTx(2, 100000, big.NewInt(1), big.NewInt(1), keys[2]) - if err := pool.AddRemote(tx); err != ErrUnderpriced { + if err := pool.AddRemote(tx); !errors.Is(err, ErrUnderpriced) { t.Fatalf("adding underpriced queued transaction error mismatch: have %v, want %v", err, ErrUnderpriced) } + if err := validateEvents(events, 0); err != nil { t.Fatalf("post-reprice event firing failed: %v", err) } + if err := validateTxPoolInternals(pool); err != nil { t.Fatalf("pool internal state corrupted: %v", err) } + // However we can add local underpriced transactions tx = dynamicFeeTx(1, 100000, big.NewInt(1), big.NewInt(1), keys[3]) + if err := pool.AddLocal(tx); err != nil { t.Fatalf("failed to add underpriced local transaction: %v", err) } + if pending, _ = pool.Stats(); pending != 3 { t.Fatalf("pending transactions mismatched: have %d, want %d", pending, 3) } + if err := validateEvents(events, 1); err != nil { t.Fatalf("post-reprice local event firing failed: %v", err) } + if err := validateTxPoolInternals(pool); err != nil { t.Fatalf("pool internal state corrupted: %v", err) } + // And we can fill gaps with properly priced transactions tx = 
pricedTransaction(1, 100000, big.NewInt(2), keys[0]) + if err := pool.AddRemote(tx); err != nil { t.Fatalf("failed to add pending transaction: %v", err) } + tx = dynamicFeeTx(0, 100000, big.NewInt(3), big.NewInt(2), keys[1]) + if err := pool.AddRemote(tx); err != nil { t.Fatalf("failed to add pending transaction: %v", err) } + tx = dynamicFeeTx(2, 100000, big.NewInt(2), big.NewInt(2), keys[2]) + if err := pool.AddRemote(tx); err != nil { t.Fatalf("failed to add queued transaction: %v", err) } + if err := validateEvents(events, 5); err != nil { t.Fatalf("post-reprice event firing failed: %v", err) } + if err := validateTxPoolInternals(pool); err != nil { t.Fatalf("pool internal state corrupted: %v", err) } @@ -1723,7 +1868,7 @@ func TestTransactionPoolUnderpricing(t *testing.T) { t.Fatalf("pool internal state corrupted: %v", err) } // Ensure that adding an underpriced transaction on block limit fails - if err := pool.AddRemote(pricedTransaction(0, 100000, big.NewInt(1), keys[1])); err != ErrUnderpriced { + if err := pool.AddRemote(pricedTransaction(0, 100000, big.NewInt(1), keys[1])); !errors.Is(err, ErrUnderpriced) { t.Fatalf("adding underpriced pending transaction error mismatch: have %v, want %v", err, ErrUnderpriced) } // Replace a future transaction with a future transaction @@ -1904,7 +2049,7 @@ func TestTransactionPoolUnderpricingDynamicFee(t *testing.T) { // Ensure that adding an underpriced transaction fails tx := dynamicFeeTx(0, 100000, big.NewInt(2), big.NewInt(1), keys[1]) - if err := pool.AddRemote(tx); err != ErrUnderpriced { // Pend K0:0, K0:1, K2:0; Que K1:1 + if err := pool.AddRemote(tx); !errors.Is(err, ErrUnderpriced) { // Pend K0:0, K0:1, K2:0; Que K1:1 t.Fatalf("adding underpriced pending transaction error mismatch: have %v, want %v", err, ErrUnderpriced) } @@ -2006,7 +2151,7 @@ func TestDualHeapEviction(t *testing.T) { add(false) for baseFee = 0; baseFee <= 1000; baseFee += 100 { - pool.priced.SetBaseFee(big.NewInt(int64(baseFee))) + pool.priced.SetBaseFee(uint256.NewInt(uint64(baseFee))) add(true) check(highCap, "fee cap") add(false) @@ -2035,49 +2180,65 @@ func TestTransactionDeduplication(t *testing.T) { // Create a batch of transactions and add a few of them txs := make([]*types.Transaction, 16) + for i := 0; i < len(txs); i++ { txs[i] = pricedTransaction(uint64(i), 100000, big.NewInt(1), key) } + var firsts []*types.Transaction + for i := 0; i < len(txs); i += 2 { firsts = append(firsts, txs[i]) } + errs := pool.AddRemotesSync(firsts) - if len(errs) != len(firsts) { - t.Fatalf("first add mismatching result count: have %d, want %d", len(errs), len(firsts)) + + if len(errs) != 0 { + t.Fatalf("first add mismatching result count: have %d, want %d", len(errs), 0) } + for i, err := range errs { if err != nil { t.Errorf("add %d failed: %v", i, err) } } + pending, queued := pool.Stats() + if pending != 1 { t.Fatalf("pending transactions mismatched: have %d, want %d", pending, 1) } + if queued != len(txs)/2-1 { t.Fatalf("queued transactions mismatched: have %d, want %d", queued, len(txs)/2-1) } + // Try to add all of them now and ensure previous ones error out as knowns errs = pool.AddRemotesSync(txs) - if len(errs) != len(txs) { - t.Fatalf("all add mismatching result count: have %d, want %d", len(errs), len(txs)) + if len(errs) != 0 { + t.Fatalf("all add mismatching result count: have %d, want %d", len(errs), 0) } + for i, err := range errs { if i%2 == 0 && err == nil { t.Errorf("add %d succeeded, should have failed as known", i) } + if i%2 == 1 && err != nil { 
t.Errorf("add %d failed: %v", i, err) } } + pending, queued = pool.Stats() + if pending != len(txs) { t.Fatalf("pending transactions mismatched: have %d, want %d", pending, len(txs)) } + if queued != 0 { t.Fatalf("queued transactions mismatched: have %d, want %d", queued, 0) } + if err := validateTxPoolInternals(pool); err != nil { t.Fatalf("pool internal state corrupted: %v", err) } @@ -2111,12 +2272,15 @@ func TestTransactionReplacement(t *testing.T) { if err := pool.addRemoteSync(pricedTransaction(0, 100000, big.NewInt(1), key)); err != nil { t.Fatalf("failed to add original cheap pending transaction: %v", err) } - if err := pool.AddRemote(pricedTransaction(0, 100001, big.NewInt(1), key)); err != ErrReplaceUnderpriced { + + if err := pool.AddRemote(pricedTransaction(0, 100001, big.NewInt(1), key)); !errors.Is(err, ErrReplaceUnderpriced) { t.Fatalf("original cheap pending transaction replacement error mismatch: have %v, want %v", err, ErrReplaceUnderpriced) } + if err := pool.AddRemote(pricedTransaction(0, 100000, big.NewInt(2), key)); err != nil { t.Fatalf("failed to replace original cheap pending transaction: %v", err) } + if err := validateEvents(events, 2); err != nil { t.Fatalf("cheap replacement event firing failed: %v", err) } @@ -2124,12 +2288,15 @@ func TestTransactionReplacement(t *testing.T) { if err := pool.addRemoteSync(pricedTransaction(0, 100000, big.NewInt(price), key)); err != nil { t.Fatalf("failed to add original proper pending transaction: %v", err) } - if err := pool.AddRemote(pricedTransaction(0, 100001, big.NewInt(threshold-1), key)); err != ErrReplaceUnderpriced { + + if err := pool.AddRemote(pricedTransaction(0, 100001, big.NewInt(threshold-1), key)); !errors.Is(err, ErrReplaceUnderpriced) { t.Fatalf("original proper pending transaction replacement error mismatch: have %v, want %v", err, ErrReplaceUnderpriced) } + if err := pool.AddRemote(pricedTransaction(0, 100000, big.NewInt(threshold), key)); err != nil { t.Fatalf("failed to replace original proper pending transaction: %v", err) } + if err := validateEvents(events, 2); err != nil { t.Fatalf("proper replacement event firing failed: %v", err) } @@ -2138,9 +2305,11 @@ func TestTransactionReplacement(t *testing.T) { if err := pool.AddRemote(pricedTransaction(2, 100000, big.NewInt(1), key)); err != nil { t.Fatalf("failed to add original cheap queued transaction: %v", err) } - if err := pool.AddRemote(pricedTransaction(2, 100001, big.NewInt(1), key)); err != ErrReplaceUnderpriced { + + if err := pool.AddRemote(pricedTransaction(2, 100001, big.NewInt(1), key)); !errors.Is(err, ErrReplaceUnderpriced) { t.Fatalf("original cheap queued transaction replacement error mismatch: have %v, want %v", err, ErrReplaceUnderpriced) } + if err := pool.AddRemote(pricedTransaction(2, 100000, big.NewInt(2), key)); err != nil { t.Fatalf("failed to replace original cheap queued transaction: %v", err) } @@ -2148,9 +2317,11 @@ func TestTransactionReplacement(t *testing.T) { if err := pool.AddRemote(pricedTransaction(2, 100000, big.NewInt(price), key)); err != nil { t.Fatalf("failed to add original proper queued transaction: %v", err) } - if err := pool.AddRemote(pricedTransaction(2, 100001, big.NewInt(threshold-1), key)); err != ErrReplaceUnderpriced { + + if err := pool.AddRemote(pricedTransaction(2, 100001, big.NewInt(threshold-1), key)); !errors.Is(err, ErrReplaceUnderpriced) { t.Fatalf("original proper queued transaction replacement error mismatch: have %v, want %v", err, ErrReplaceUnderpriced) } + if err := 
pool.AddRemote(pricedTransaction(2, 100000, big.NewInt(threshold), key)); err != nil { t.Fatalf("failed to replace original proper queued transaction: %v", err) } @@ -2158,6 +2329,7 @@ func TestTransactionReplacement(t *testing.T) { if err := validateEvents(events, 0); err != nil { t.Fatalf("queued replacement event firing failed: %v", err) } + if err := validateTxPoolInternals(pool); err != nil { t.Fatalf("pool internal state corrupted: %v", err) } @@ -2212,7 +2384,7 @@ func TestTransactionReplacementDynamicFee(t *testing.T) { } // 2. Don't bump tip or feecap => discard tx = dynamicFeeTx(nonce, 100001, big.NewInt(2), big.NewInt(1), key) - if err := pool.AddRemote(tx); err != ErrReplaceUnderpriced { + if err := pool.AddRemote(tx); !errors.Is(err, ErrReplaceUnderpriced) { t.Fatalf("original cheap %s transaction replacement error mismatch: have %v, want %v", stage, err, ErrReplaceUnderpriced) } // 3. Bump both more than min => accept @@ -2235,22 +2407,22 @@ func TestTransactionReplacementDynamicFee(t *testing.T) { } // 6. Bump tip max allowed so it's still underpriced => discard tx = dynamicFeeTx(nonce, 100000, big.NewInt(gasFeeCap), big.NewInt(tipThreshold-1), key) - if err := pool.AddRemote(tx); err != ErrReplaceUnderpriced { + if err := pool.AddRemote(tx); !errors.Is(err, ErrReplaceUnderpriced) { t.Fatalf("original proper %s transaction replacement error mismatch: have %v, want %v", stage, err, ErrReplaceUnderpriced) } // 7. Bump fee cap max allowed so it's still underpriced => discard tx = dynamicFeeTx(nonce, 100000, big.NewInt(feeCapThreshold-1), big.NewInt(gasTipCap), key) - if err := pool.AddRemote(tx); err != ErrReplaceUnderpriced { + if err := pool.AddRemote(tx); !errors.Is(err, ErrReplaceUnderpriced) { t.Fatalf("original proper %s transaction replacement error mismatch: have %v, want %v", stage, err, ErrReplaceUnderpriced) } // 8. Bump tip min for acceptance => accept tx = dynamicFeeTx(nonce, 100000, big.NewInt(gasFeeCap), big.NewInt(tipThreshold), key) - if err := pool.AddRemote(tx); err != ErrReplaceUnderpriced { + if err := pool.AddRemote(tx); !errors.Is(err, ErrReplaceUnderpriced) { t.Fatalf("original proper %s transaction replacement error mismatch: have %v, want %v", stage, err, ErrReplaceUnderpriced) } // 9. Bump fee cap min for acceptance => accept tx = dynamicFeeTx(nonce, 100000, big.NewInt(feeCapThreshold), big.NewInt(gasTipCap), key) - if err := pool.AddRemote(tx); err != ErrReplaceUnderpriced { + if err := pool.AddRemote(tx); !errors.Is(err, ErrReplaceUnderpriced) { t.Fatalf("original proper %s transaction replacement error mismatch: have %v, want %v", stage, err, ErrReplaceUnderpriced) } // 10. Check events match expected (3 new executable txs during pending, 0 during queue) @@ -2480,6 +2652,7 @@ func benchmarkPendingDemotion(b *testing.B, size int) { } // Benchmark the speed of pool validation b.ResetTimer() + b.ReportAllocs() for i := 0; i < b.N; i++ { pool.demoteUnexecutables() } @@ -2511,15 +2684,7 @@ func benchmarkFuturePromotion(b *testing.B, size int) { } // Benchmarks the speed of batched transaction insertion. 
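// The replacement tests above only accept a replacement transaction when it
// bids at least `threshold`, i.e. the old price plus the configured bump;
// anything lower is rejected with ErrReplaceUnderpriced. A small illustrative
// helper, assuming a percentage bump like the pool's PriceBump setting (10 in
// the upstream defaults); this is a sketch, not the pool's own logic
// (imports: "math/big"):

// replacementThreshold returns the minimum gas price a replacement must carry
// to displace a transaction priced at oldPrice, given priceBump in percent.
func replacementThreshold(oldPrice *big.Int, priceBump uint64) *big.Int {
	threshold := new(big.Int).Mul(oldPrice, big.NewInt(int64(100+priceBump)))
	return threshold.Div(threshold, big.NewInt(100))
}

// Example: with priceBump = 10, a pending tx at 100 wei/gas is only displaced
// by a replacement bidding at least 110 wei/gas; 109 fails as underpriced.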
-func BenchmarkPoolBatchInsert100(b *testing.B) { benchmarkPoolBatchInsert(b, 100, false) } -func BenchmarkPoolBatchInsert1000(b *testing.B) { benchmarkPoolBatchInsert(b, 1000, false) } -func BenchmarkPoolBatchInsert10000(b *testing.B) { benchmarkPoolBatchInsert(b, 10000, false) } - -func BenchmarkPoolBatchLocalInsert100(b *testing.B) { benchmarkPoolBatchInsert(b, 100, true) } -func BenchmarkPoolBatchLocalInsert1000(b *testing.B) { benchmarkPoolBatchInsert(b, 1000, true) } -func BenchmarkPoolBatchLocalInsert10000(b *testing.B) { benchmarkPoolBatchInsert(b, 10000, true) } - -func benchmarkPoolBatchInsert(b *testing.B, size int, local bool) { +func BenchmarkPoolBatchInsert(b *testing.B) { // Generate a batch of transactions to enqueue into the pool pool, key := setupTxPool() defer pool.Stop() @@ -2527,21 +2692,153 @@ func benchmarkPoolBatchInsert(b *testing.B, size int, local bool) { account := crypto.PubkeyToAddress(key.PublicKey) testAddBalance(pool, account, big.NewInt(1000000000000000000)) - batches := make([]types.Transactions, b.N) - for i := 0; i < b.N; i++ { - batches[i] = make(types.Transactions, size) - for j := 0; j < size; j++ { - batches[i][j] = transaction(uint64(size*i+j), 100000, key) - } + const format = "size %d, is local %t" + + cases := []struct { + name string + size int + isLocal bool + }{ + {size: 100, isLocal: false}, + {size: 1000, isLocal: false}, + {size: 10000, isLocal: false}, + + {size: 100, isLocal: true}, + {size: 1000, isLocal: true}, + {size: 10000, isLocal: true}, } + + for i := range cases { + cases[i].name = fmt.Sprintf(format, cases[i].size, cases[i].isLocal) + } + // Benchmark importing the transactions into the queue - b.ResetTimer() - for _, batch := range batches { - if local { - pool.AddLocals(batch) - } else { - pool.AddRemotes(batch) - } + + for _, testCase := range cases { + singleCase := testCase + + b.Run(singleCase.name, func(b *testing.B) { + batches := make([]types.Transactions, b.N) + + for i := 0; i < b.N; i++ { + batches[i] = make(types.Transactions, singleCase.size) + + for j := 0; j < singleCase.size; j++ { + batches[i][j] = transaction(uint64(singleCase.size*i+j), 100000, key) + } + } + + b.ResetTimer() + b.ReportAllocs() + + for _, batch := range batches { + if testCase.isLocal { + pool.AddLocals(batch) + } else { + pool.AddRemotes(batch) + } + } + }) + } +} + +func BenchmarkPoolMining(b *testing.B) { + const format = "size %d" + + cases := []struct { + name string + size int + }{ + {size: 1}, + {size: 5}, + {size: 10}, + {size: 20}, + } + + for i := range cases { + cases[i].name = fmt.Sprintf(format, cases[i].size) + } + + const blockGasLimit = 30_000_000 + + // Benchmark importing the transactions into the queue + + for _, testCase := range cases { + singleCase := testCase + + b.Run(singleCase.name, func(b *testing.B) { + // Generate a batch of transactions to enqueue into the pool + pendingAddedCh := make(chan struct{}, 1024) + + pool, localKey := setupTxPoolWithConfig(params.TestChainConfig, testTxPoolConfig, txPoolGasLimit, MakeWithPromoteTxCh(pendingAddedCh)) + defer pool.Stop() + + localKeyPub := localKey.PublicKey + account := crypto.PubkeyToAddress(localKeyPub) + + const balanceStr = "1_000_000_000" + balance, ok := big.NewInt(0).SetString(balanceStr, 0) + if !ok { + b.Fatal("incorrect initial balance", balanceStr) + } + + testAddBalance(pool, account, balance) + + signer := types.NewEIP155Signer(big.NewInt(1)) + baseFee := uint256.NewInt(1) + + const batchesSize = 100 + + batches := make([]types.Transactions, batchesSize) + 
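// The rewrite above folds the six fixed-size benchmarks into one table-driven
// benchmark that spawns sub-benchmarks via b.Run. The same pattern in
// miniature, with a trivial stand-in workload so the example is self-contained
// (illustrative only; imports: "fmt", "testing"):

func BenchmarkWorkSizes(b *testing.B) {
	// stand-in workload; the real benchmarks above build transaction batches instead
	work := func(n int) (s int) {
		for i := 0; i < n; i++ {
			s += i
		}
		return s
	}

	for _, size := range []int{100, 1000, 10000} {
		size := size // capture the loop variable, mirroring `singleCase := testCase` above

		b.Run(fmt.Sprintf("size %d", size), func(b *testing.B) {
			b.ReportAllocs()
			b.ResetTimer()

			for i := 0; i < b.N; i++ {
				_ = work(size)
			}
		})
	}
}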
+ for i := 0; i < batchesSize; i++ { + batches[i] = make(types.Transactions, singleCase.size) + + for j := 0; j < singleCase.size; j++ { + batches[i][j] = transaction(uint64(singleCase.size*i+j), 100_000, localKey) + } + + for _, batch := range batches { + pool.AddRemotes(batch) + } + } + + var promoted int + + for range pendingAddedCh { + promoted++ + + if promoted >= batchesSize*singleCase.size/2 { + break + } + } + + var total int + + b.ResetTimer() + b.ReportAllocs() + + pendingDurations := make([]time.Duration, b.N) + + var added int + + for i := 0; i < b.N; i++ { + added, pendingDurations[i], _ = mining(b, pool, signer, baseFee, blockGasLimit, i) + total += added + } + + b.StopTimer() + + pendingDurationsFloat := make([]float64, len(pendingDurations)) + + for i, v := range pendingDurations { + pendingDurationsFloat[i] = float64(v.Nanoseconds()) + } + + mean, stddev := stat.MeanStdDev(pendingDurationsFloat, nil) + b.Logf("[%s] pending mean %v, stdev %v, %v-%v", + common.NowMilliseconds(), time.Duration(mean), time.Duration(stddev), time.Duration(floats.Min(pendingDurationsFloat)), time.Duration(floats.Max(pendingDurationsFloat))) + }) } } @@ -2581,79 +2878,372 @@ func BenchmarkInsertRemoteWithAllLocals(b *testing.B) { } // Benchmarks the speed of batch transaction insertion in case of multiple accounts. -func BenchmarkPoolMultiAccountBatchInsert(b *testing.B) { +func BenchmarkPoolAccountMultiBatchInsert(b *testing.B) { // Generate a batch of transactions to enqueue into the pool pool, _ := setupTxPool() defer pool.Stop() - b.ReportAllocs() + batches := make(types.Transactions, b.N) + for i := 0; i < b.N; i++ { key, _ := crypto.GenerateKey() account := crypto.PubkeyToAddress(key.PublicKey) + pool.currentState.AddBalance(account, big.NewInt(1000000)) + tx := transaction(uint64(0), 100000, key) + batches[i] = tx } + // Benchmark importing the transactions into the queue + b.ReportAllocs() b.ResetTimer() + for _, tx := range batches { pool.AddRemotesSync([]*types.Transaction{tx}) } } -type acc struct { - nonce uint64 - key *ecdsa.PrivateKey - account common.Address -} - -type testTx struct { - tx *types.Transaction - idx int - isLocal bool -} +func BenchmarkPoolAccountMultiBatchInsertRace(b *testing.B) { + // Generate a batch of transactions to enqueue into the pool + pool, _ := setupTxPool() + defer pool.Stop() -const localIdx = 0 + batches := make(types.Transactions, b.N) -func getTransactionGen(t *rapid.T, keys []*acc, nonces []uint64, localKey *acc, gasPriceMin, gasPriceMax, gasLimitMin, gasLimitMax uint64) *testTx { - idx := rapid.IntRange(0, len(keys)-1).Draw(t, "accIdx").(int) + for i := 0; i < b.N; i++ { + key, _ := crypto.GenerateKey() + account := crypto.PubkeyToAddress(key.PublicKey) + tx := transaction(uint64(0), 100000, key) - var ( - isLocal bool - key *ecdsa.PrivateKey - ) + pool.currentState.AddBalance(account, big.NewInt(1000000)) - if idx == localIdx { - isLocal = true - key = localKey.key - } else { - key = keys[idx].key + batches[i] = tx } - nonces[idx]++ + done := make(chan struct{}) - gasPriceUint := rapid.Uint64Range(gasPriceMin, gasPriceMax).Draw(t, "gasPrice").(uint64) - gasPrice := big.NewInt(0).SetUint64(gasPriceUint) - gasLimit := rapid.Uint64Range(gasLimitMin, gasLimitMax).Draw(t, "gasLimit").(uint64) + go func() { + t := time.NewTicker(time.Microsecond) + defer t.Stop() - return &testTx{ - tx: pricedTransaction(nonces[idx]-1, gasLimit, gasPrice, key), - idx: idx, - isLocal: isLocal, - } -} + var pending map[common.Address]types.Transactions -type 
transactionBatches struct { - txs []*testTx - totalTxs int -} + loop: + for { + select { + case <-t.C: + pending = pool.Pending(context.Background(), true) + case <-done: + break loop + } + } -func transactionsGen(keys []*acc, nonces []uint64, localKey *acc, minTxs int, maxTxs int, gasPriceMin, gasPriceMax, gasLimitMin, gasLimitMax uint64, caseParams *strings.Builder) func(t *rapid.T) *transactionBatches { - return func(t *rapid.T) *transactionBatches { - totalTxs := rapid.IntRange(minTxs, maxTxs).Draw(t, "totalTxs").(int) - txs := make([]*testTx, totalTxs) + fmt.Fprint(io.Discard, pending) + }() - gasValues := make([]float64, totalTxs) + b.ReportAllocs() + b.ResetTimer() + + for _, tx := range batches { + pool.AddRemotesSync([]*types.Transaction{tx}) + } + + close(done) +} + +func BenchmarkPoolAccountMultiBatchInsertNoLockRace(b *testing.B) { + // Generate a batch of transactions to enqueue into the pool + pendingAddedCh := make(chan struct{}, 1024) + + pool, localKey := setupTxPoolWithConfig(params.TestChainConfig, testTxPoolConfig, txPoolGasLimit, MakeWithPromoteTxCh(pendingAddedCh)) + defer pool.Stop() + + _ = localKey + + batches := make(types.Transactions, b.N) + + for i := 0; i < b.N; i++ { + key, _ := crypto.GenerateKey() + account := crypto.PubkeyToAddress(key.PublicKey) + tx := transaction(uint64(0), 100000, key) + + pool.currentState.AddBalance(account, big.NewInt(1000000)) + + batches[i] = tx + } + + done := make(chan struct{}) + + go func() { + t := time.NewTicker(time.Microsecond) + defer t.Stop() + + var pending map[common.Address]types.Transactions + + for range t.C { + pending = pool.Pending(context.Background(), true) + + if len(pending) >= b.N/2 { + close(done) + + return + } + } + }() + + b.ReportAllocs() + b.ResetTimer() + + for _, tx := range batches { + pool.AddRemotes([]*types.Transaction{tx}) + } + + <-done +} + +func BenchmarkPoolAccountsBatchInsert(b *testing.B) { + // Generate a batch of transactions to enqueue into the pool + pool, _ := setupTxPool() + defer pool.Stop() + + batches := make(types.Transactions, b.N) + + for i := 0; i < b.N; i++ { + key, _ := crypto.GenerateKey() + account := crypto.PubkeyToAddress(key.PublicKey) + + pool.currentState.AddBalance(account, big.NewInt(1000000)) + + tx := transaction(uint64(0), 100000, key) + + batches[i] = tx + } + + // Benchmark importing the transactions into the queue + b.ReportAllocs() + b.ResetTimer() + + for _, tx := range batches { + _ = pool.AddRemoteSync(tx) + } +} + +func BenchmarkPoolAccountsBatchInsertRace(b *testing.B) { + // Generate a batch of transactions to enqueue into the pool + pool, _ := setupTxPool() + defer pool.Stop() + + batches := make(types.Transactions, b.N) + + for i := 0; i < b.N; i++ { + key, _ := crypto.GenerateKey() + account := crypto.PubkeyToAddress(key.PublicKey) + tx := transaction(uint64(0), 100000, key) + + pool.currentState.AddBalance(account, big.NewInt(1000000)) + + batches[i] = tx + } + + done := make(chan struct{}) + + go func() { + t := time.NewTicker(time.Microsecond) + defer t.Stop() + + var pending map[common.Address]types.Transactions + + loop: + for { + select { + case <-t.C: + pending = pool.Pending(context.Background(), true) + case <-done: + break loop + } + } + + fmt.Fprint(io.Discard, pending) + }() + + b.ReportAllocs() + b.ResetTimer() + + for _, tx := range batches { + _ = pool.AddRemoteSync(tx) + } + + close(done) +} + +func BenchmarkPoolAccountsBatchInsertNoLockRace(b *testing.B) { + // Generate a batch of transactions to enqueue into the pool + 
pendingAddedCh := make(chan struct{}, 1024) + + pool, localKey := setupTxPoolWithConfig(params.TestChainConfig, testTxPoolConfig, txPoolGasLimit, MakeWithPromoteTxCh(pendingAddedCh)) + defer pool.Stop() + + _ = localKey + + batches := make(types.Transactions, b.N) + + for i := 0; i < b.N; i++ { + key, _ := crypto.GenerateKey() + account := crypto.PubkeyToAddress(key.PublicKey) + tx := transaction(uint64(0), 100000, key) + + pool.currentState.AddBalance(account, big.NewInt(1000000)) + + batches[i] = tx + } + + done := make(chan struct{}) + + go func() { + t := time.NewTicker(time.Microsecond) + defer t.Stop() + + var pending map[common.Address]types.Transactions + + for range t.C { + pending = pool.Pending(context.Background(), true) + + if len(pending) >= b.N/2 { + close(done) + + return + } + } + }() + + b.ReportAllocs() + b.ResetTimer() + + for _, tx := range batches { + _ = pool.AddRemote(tx) + } + + <-done +} + +func TestPoolMultiAccountBatchInsertRace(t *testing.T) { + t.Parallel() + + // Generate a batch of transactions to enqueue into the pool + pool, _ := setupTxPool() + defer pool.Stop() + + const n = 5000 + + batches := make(types.Transactions, n) + batchesSecond := make(types.Transactions, n) + + for i := 0; i < n; i++ { + batches[i] = newTxs(pool) + batchesSecond[i] = newTxs(pool) + } + + done := make(chan struct{}) + + go func() { + t := time.NewTicker(time.Microsecond) + defer t.Stop() + + var ( + pending map[common.Address]types.Transactions + total int + ) + + for range t.C { + pending = pool.Pending(context.Background(), true) + total = len(pending) + + _ = pool.Locals() + + if total >= n { + close(done) + + return + } + } + }() + + for _, tx := range batches { + pool.AddRemotesSync([]*types.Transaction{tx}) + } + + for _, tx := range batchesSecond { + pool.AddRemotes([]*types.Transaction{tx}) + } + + <-done +} + +func newTxs(pool *TxPool) *types.Transaction { + key, _ := crypto.GenerateKey() + account := crypto.PubkeyToAddress(key.PublicKey) + tx := transaction(uint64(0), 100000, key) + + pool.currentState.AddBalance(account, big.NewInt(1_000_000_000)) + + return tx +} + +type acc struct { + nonce uint64 + key *ecdsa.PrivateKey + account common.Address +} + +type testTx struct { + tx *types.Transaction + idx int + isLocal bool +} + +const localIdx = 0 + +func getTransactionGen(t *rapid.T, keys []*acc, nonces []uint64, localKey *acc, gasPriceMin, gasPriceMax, gasLimitMin, gasLimitMax uint64) *testTx { + idx := rapid.IntRange(0, len(keys)-1).Draw(t, "accIdx").(int) + + var ( + isLocal bool + key *ecdsa.PrivateKey + ) + + if idx == localIdx { + isLocal = true + key = localKey.key + } else { + key = keys[idx].key + } + + nonces[idx]++ + + gasPriceUint := rapid.Uint64Range(gasPriceMin, gasPriceMax).Draw(t, "gasPrice").(uint64) + gasPrice := big.NewInt(0).SetUint64(gasPriceUint) + gasLimit := rapid.Uint64Range(gasLimitMin, gasLimitMax).Draw(t, "gasLimit").(uint64) + + return &testTx{ + tx: pricedTransaction(nonces[idx]-1, gasLimit, gasPrice, key), + idx: idx, + isLocal: isLocal, + } +} + +type transactionBatches struct { + txs []*testTx + totalTxs int +} + +func transactionsGen(keys []*acc, nonces []uint64, localKey *acc, minTxs int, maxTxs int, gasPriceMin, gasPriceMax, gasLimitMin, gasLimitMax uint64, caseParams *strings.Builder) func(t *rapid.T) *transactionBatches { + return func(t *rapid.T) *transactionBatches { + totalTxs := rapid.IntRange(minTxs, maxTxs).Draw(t, "totalTxs").(int) + txs := make([]*testTx, totalTxs) + + gasValues := make([]float64, totalTxs) 
fmt.Fprintf(caseParams, " totalTxs = %d;", totalTxs) @@ -2893,20 +3483,20 @@ func testPoolBatchInsert(t *testing.T, cfg txPoolRapidConfig) { wg.Wait() var ( - addIntoTxPool func(tx []*types.Transaction) []error + addIntoTxPool func(tx *types.Transaction) error totalInBatch int ) for _, tx := range txs.txs { - addIntoTxPool = pool.AddRemotesSync + addIntoTxPool = pool.AddRemoteSync if tx.isLocal { - addIntoTxPool = pool.AddLocals + addIntoTxPool = pool.AddLocal } - err := addIntoTxPool([]*types.Transaction{tx.tx}) - if len(err) != 0 && err[0] != nil { - rt.Log("on adding a transaction to the tx pool", err[0], tx.tx.Gas(), tx.tx.GasPrice(), pool.GasPrice(), getBalance(pool, keys[tx.idx].account)) + err := addIntoTxPool(tx.tx) + if err != nil { + rt.Log("on adding a transaction to the tx pool", err, tx.tx.Gas(), tx.tx.GasPrice(), pool.GasPrice(), getBalance(pool, keys[tx.idx].account)) } } @@ -2945,7 +3535,7 @@ func testPoolBatchInsert(t *testing.T, cfg txPoolRapidConfig) { // check if txPool got stuck if currentTxPoolStats == lastTxPoolStats { - stuckBlocks++ //todo: переписать + stuckBlocks++ //todo: need something better then that } else { stuckBlocks = 0 lastTxPoolStats = currentTxPoolStats @@ -2953,7 +3543,7 @@ func testPoolBatchInsert(t *testing.T, cfg txPoolRapidConfig) { // copy-paste start := time.Now() - pending := pool.Pending(true) + pending := pool.Pending(context.Background(), true) locals := pool.Locals() // from fillTransactions @@ -2971,7 +3561,7 @@ func testPoolBatchInsert(t *testing.T, cfg txPoolRapidConfig) { // check for nonce gaps var lastNonce, currentNonce int - pending = pool.Pending(true) + pending = pool.Pending(context.Background(), true) for txAcc, pendingTxs := range pending { lastNonce = int(pool.Nonce(txAcc)) - len(pendingTxs) - 1 @@ -3041,7 +3631,7 @@ func fillTransactions(ctx context.Context, pool *TxPool, locals []common.Address signer := types.NewLondonSigner(big.NewInt(1)) // fake baseFee - baseFee := big.NewInt(1) + baseFee := uint256.NewInt(1) blockGasLimit := gasLimit @@ -3098,7 +3688,10 @@ func commitTransactions(pool *TxPool, txs *types.TransactionsByPriceAndNonce, bl if tx.Gas() <= blockGasLimit { blockGasLimit -= tx.Gas() + + pool.mu.Lock() pool.removeTx(tx.Hash(), false) + pool.mu.Unlock() txCount++ } else { @@ -3113,3 +3706,885 @@ func MakeWithPromoteTxCh(ch chan struct{}) func(*TxPool) { pool.promoteTxCh = ch } } + +//nolint:thelper +func mining(tb testing.TB, pool *TxPool, signer types.Signer, baseFee *uint256.Int, blockGasLimit uint64, totalBlocks int) (int, time.Duration, time.Duration) { + var ( + localTxsCount int + remoteTxsCount int + localTxs = make(map[common.Address]types.Transactions) + remoteTxs map[common.Address]types.Transactions + total int + ) + + start := time.Now() + + pending := pool.Pending(context.Background(), true) + + pendingDuration := time.Since(start) + + remoteTxs = pending + + locals := pool.Locals() + + pendingLen, queuedLen := pool.Stats() + + for _, account := range locals { + if txs := remoteTxs[account]; len(txs) > 0 { + delete(remoteTxs, account) + + localTxs[account] = txs + } + } + + localTxsCount = len(localTxs) + remoteTxsCount = len(remoteTxs) + + var txLocalCount int + + if localTxsCount > 0 { + txs := types.NewTransactionsByPriceAndNonce(signer, localTxs, baseFee) + + blockGasLimit, txLocalCount = commitTransactions(pool, txs, blockGasLimit) + + total += txLocalCount + } + + var txRemoteCount int + + if remoteTxsCount > 0 { + txs := types.NewTransactionsByPriceAndNonce(signer, remoteTxs, baseFee) + + _, 
txRemoteCount = commitTransactions(pool, txs, blockGasLimit) + + total += txRemoteCount + } + + miningDuration := time.Since(start) + + tb.Logf("[%s] mining block. block %d. total %d: pending %d(added %d), local %d(added %d), queued %d, localTxsCount %d, remoteTxsCount %d, pending %v, mining %v", + common.NowMilliseconds(), totalBlocks, total, pendingLen, txRemoteCount, localTxsCount, txLocalCount, queuedLen, localTxsCount, remoteTxsCount, pendingDuration, miningDuration) + + return total, pendingDuration, miningDuration +} + +//nolint:paralleltest +func TestPoolMiningDataRaces(t *testing.T) { + if testing.Short() { + t.Skip("only for data race testing") + } + + const format = "size %d, txs ticker %v, api ticker %v" + + cases := []struct { + name string + size int + txsTickerDuration time.Duration + apiTickerDuration time.Duration + }{ + { + size: 1, + txsTickerDuration: 200 * time.Millisecond, + apiTickerDuration: 10 * time.Millisecond, + }, + { + size: 1, + txsTickerDuration: 400 * time.Millisecond, + apiTickerDuration: 10 * time.Millisecond, + }, + { + size: 1, + txsTickerDuration: 600 * time.Millisecond, + apiTickerDuration: 10 * time.Millisecond, + }, + { + size: 1, + txsTickerDuration: 800 * time.Millisecond, + apiTickerDuration: 10 * time.Millisecond, + }, + + { + size: 5, + txsTickerDuration: 200 * time.Millisecond, + apiTickerDuration: 10 * time.Millisecond, + }, + { + size: 5, + txsTickerDuration: 400 * time.Millisecond, + apiTickerDuration: 10 * time.Millisecond, + }, + { + size: 5, + txsTickerDuration: 600 * time.Millisecond, + apiTickerDuration: 10 * time.Millisecond, + }, + { + size: 5, + txsTickerDuration: 800 * time.Millisecond, + apiTickerDuration: 10 * time.Millisecond, + }, + + { + size: 10, + txsTickerDuration: 200 * time.Millisecond, + apiTickerDuration: 10 * time.Millisecond, + }, + { + size: 10, + txsTickerDuration: 400 * time.Millisecond, + apiTickerDuration: 10 * time.Millisecond, + }, + { + size: 10, + txsTickerDuration: 600 * time.Millisecond, + apiTickerDuration: 10 * time.Millisecond, + }, + { + size: 10, + txsTickerDuration: 800 * time.Millisecond, + apiTickerDuration: 10 * time.Millisecond, + }, + + { + size: 20, + txsTickerDuration: 200 * time.Millisecond, + apiTickerDuration: 10 * time.Millisecond, + }, + { + size: 20, + txsTickerDuration: 400 * time.Millisecond, + apiTickerDuration: 10 * time.Millisecond, + }, + { + size: 20, + txsTickerDuration: 600 * time.Millisecond, + apiTickerDuration: 10 * time.Millisecond, + }, + { + size: 20, + txsTickerDuration: 800 * time.Millisecond, + apiTickerDuration: 10 * time.Millisecond, + }, + + { + size: 30, + txsTickerDuration: 200 * time.Millisecond, + apiTickerDuration: 10 * time.Millisecond, + }, + { + size: 30, + txsTickerDuration: 400 * time.Millisecond, + apiTickerDuration: 10 * time.Millisecond, + }, + { + size: 30, + txsTickerDuration: 600 * time.Millisecond, + apiTickerDuration: 10 * time.Millisecond, + }, + { + size: 30, + txsTickerDuration: 800 * time.Millisecond, + apiTickerDuration: 10 * time.Millisecond, + }, + } + + for i := range cases { + cases[i].name = fmt.Sprintf(format, cases[i].size, cases[i].txsTickerDuration, cases[i].apiTickerDuration) + } + + //nolint:paralleltest + for _, testCase := range cases { + singleCase := testCase + + t.Run(singleCase.name, func(t *testing.T) { + defer goleak.VerifyNone(t, leak.IgnoreList()...) 
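// Each sub-test above installs goleak so the run fails if goroutines are still
// alive when the test returns; leak.IgnoreList() is a repo-local helper that
// presumably bundles the allow-listed entries. A minimal standalone sketch of
// the same idea using go.uber.org/goleak directly (the IgnoreTopFunction name
// below is illustrative, not taken from this patch; imports: "testing",
// "go.uber.org/goleak"):

func TestNoGoroutineLeaks(t *testing.T) {
	defer goleak.VerifyNone(t,
		// ignore a long-lived background goroutine that is expected to outlive the test
		goleak.IgnoreTopFunction("github.com/example/project/metrics.(*Collector).loop"),
	)

	// ... exercise the code under test ...
}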
+ + const ( + blocks = 300 + blockGasLimit = 40_000_000 + blockPeriod = time.Second + threads = 10 + batchesSize = 10_000 + timeoutDuration = 10 * blockPeriod + + balanceStr = "1_000_000_000_000" + ) + + apiWithMining(t, balanceStr, batchesSize, singleCase, timeoutDuration, threads, blockPeriod, blocks, blockGasLimit) + }) + } +} + +//nolint:gocognit,thelper +func apiWithMining(tb testing.TB, balanceStr string, batchesSize int, singleCase struct { + name string + size int + txsTickerDuration time.Duration + apiTickerDuration time.Duration +}, timeoutDuration time.Duration, threads int, blockPeriod time.Duration, blocks int, blockGasLimit uint64) { + done := make(chan struct{}) + + var wg sync.WaitGroup + + defer func() { + close(done) + + tb.Logf("[%s] finishing apiWithMining", common.NowMilliseconds()) + + wg.Wait() + + tb.Logf("[%s] apiWithMining finished", common.NowMilliseconds()) + }() + + // Generate a batch of transactions to enqueue into the pool + pendingAddedCh := make(chan struct{}, 1024) + + pool, localKey := setupTxPoolWithConfig(params.TestChainConfig, testTxPoolConfig, txPoolGasLimit, MakeWithPromoteTxCh(pendingAddedCh)) + defer pool.Stop() + + localKeyPub := localKey.PublicKey + account := crypto.PubkeyToAddress(localKeyPub) + + balance, ok := big.NewInt(0).SetString(balanceStr, 0) + if !ok { + tb.Fatal("incorrect initial balance", balanceStr) + } + + testAddBalance(pool, account, balance) + + signer := types.NewEIP155Signer(big.NewInt(1)) + baseFee := uint256.NewInt(1) + + batchesLocal := make([]types.Transactions, batchesSize) + batchesRemote := make([]types.Transactions, batchesSize) + batchesRemotes := make([]types.Transactions, batchesSize) + batchesRemoteSync := make([]types.Transactions, batchesSize) + batchesRemotesSync := make([]types.Transactions, batchesSize) + + for i := 0; i < batchesSize; i++ { + batchesLocal[i] = make(types.Transactions, singleCase.size) + + for j := 0; j < singleCase.size; j++ { + batchesLocal[i][j] = pricedTransaction(uint64(singleCase.size*i+j), 100_000, big.NewInt(int64(i+1)), localKey) + } + + batchesRemote[i] = make(types.Transactions, singleCase.size) + + remoteKey, _ := crypto.GenerateKey() + remoteAddr := crypto.PubkeyToAddress(remoteKey.PublicKey) + testAddBalance(pool, remoteAddr, balance) + + for j := 0; j < singleCase.size; j++ { + batchesRemote[i][j] = pricedTransaction(uint64(j), 100_000, big.NewInt(int64(i+1)), remoteKey) + } + + batchesRemotes[i] = make(types.Transactions, singleCase.size) + + remotesKey, _ := crypto.GenerateKey() + remotesAddr := crypto.PubkeyToAddress(remotesKey.PublicKey) + testAddBalance(pool, remotesAddr, balance) + + for j := 0; j < singleCase.size; j++ { + batchesRemotes[i][j] = pricedTransaction(uint64(j), 100_000, big.NewInt(int64(i+1)), remotesKey) + } + + batchesRemoteSync[i] = make(types.Transactions, singleCase.size) + + remoteSyncKey, _ := crypto.GenerateKey() + remoteSyncAddr := crypto.PubkeyToAddress(remoteSyncKey.PublicKey) + testAddBalance(pool, remoteSyncAddr, balance) + + for j := 0; j < singleCase.size; j++ { + batchesRemoteSync[i][j] = pricedTransaction(uint64(j), 100_000, big.NewInt(int64(i+1)), remoteSyncKey) + } + + batchesRemotesSync[i] = make(types.Transactions, singleCase.size) + + remotesSyncKey, _ := crypto.GenerateKey() + remotesSyncAddr := crypto.PubkeyToAddress(remotesSyncKey.PublicKey) + testAddBalance(pool, remotesSyncAddr, balance) + + for j := 0; j < singleCase.size; j++ { + batchesRemotesSync[i][j] = pricedTransaction(uint64(j), 100_000, big.NewInt(int64(i+1)), 
remotesSyncKey) + } + } + + tb.Logf("[%s] starting goroutines", common.NowMilliseconds()) + + txsTickerDuration := singleCase.txsTickerDuration + apiTickerDuration := singleCase.apiTickerDuration + + // locals + wg.Add(1) + + go func() { + defer func() { + tb.Logf("[%s] stopping AddLocal(s)", common.NowMilliseconds()) + + wg.Done() + + tb.Logf("[%s] stopped AddLocal(s)", common.NowMilliseconds()) + }() + + tb.Logf("[%s] starting AddLocal(s)", common.NowMilliseconds()) + + for _, batch := range batchesLocal { + batch := batch + + select { + case <-done: + return + default: + } + + if rand.Int()%2 == 0 { + runWithTimeout(tb, func(_ chan struct{}) { + errs := pool.AddLocals(batch) + if len(errs) != 0 { + tb.Logf("[%s] AddLocals error, %v", common.NowMilliseconds(), errs) + } + }, done, "AddLocals", timeoutDuration, 0, 0) + } else { + for _, tx := range batch { + tx := tx + + runWithTimeout(tb, func(_ chan struct{}) { + err := pool.AddLocal(tx) + if err != nil { + tb.Logf("[%s] AddLocal error %s", common.NowMilliseconds(), err) + } + }, done, "AddLocal", timeoutDuration, 0, 0) + + time.Sleep(txsTickerDuration) + } + } + + time.Sleep(txsTickerDuration) + } + }() + + // remotes + wg.Add(1) + + go func() { + defer func() { + tb.Logf("[%s] stopping AddRemotes", common.NowMilliseconds()) + + wg.Done() + + tb.Logf("[%s] stopped AddRemotes", common.NowMilliseconds()) + }() + + addTransactionsBatches(tb, batchesRemotes, getFnForBatches(pool.AddRemotes), done, timeoutDuration, txsTickerDuration, "AddRemotes", 0) + }() + + // remote + wg.Add(1) + + go func() { + defer func() { + tb.Logf("[%s] stopping AddRemote", common.NowMilliseconds()) + + wg.Done() + + tb.Logf("[%s] stopped AddRemote", common.NowMilliseconds()) + }() + + addTransactions(tb, batchesRemote, pool.AddRemote, done, timeoutDuration, txsTickerDuration, "AddRemote", 0) + }() + + // sync + // remotes + wg.Add(1) + + go func() { + defer func() { + tb.Logf("[%s] stopping AddRemotesSync", common.NowMilliseconds()) + + wg.Done() + + tb.Logf("[%s] stopped AddRemotesSync", common.NowMilliseconds()) + }() + + addTransactionsBatches(tb, batchesRemotesSync, getFnForBatches(pool.AddRemotesSync), done, timeoutDuration, txsTickerDuration, "AddRemotesSync", 0) + }() + + // remote + wg.Add(1) + + go func() { + defer func() { + tb.Logf("[%s] stopping AddRemoteSync", common.NowMilliseconds()) + + wg.Done() + + tb.Logf("[%s] stopped AddRemoteSync", common.NowMilliseconds()) + }() + + addTransactions(tb, batchesRemoteSync, pool.AddRemoteSync, done, timeoutDuration, txsTickerDuration, "AddRemoteSync", 0) + }() + + // tx pool API + for i := 0; i < threads; i++ { + i := i + + wg.Add(1) + + go func() { + defer func() { + tb.Logf("[%s] stopping Pending-no-tips, thread %d", common.NowMilliseconds(), i) + + wg.Done() + + tb.Logf("[%s] stopped Pending-no-tips, thread %d", common.NowMilliseconds(), i) + }() + + runWithTicker(tb, func(_ chan struct{}) { + p := pool.Pending(context.Background(), false) + fmt.Fprint(io.Discard, p) + }, done, "Pending-no-tips", apiTickerDuration, timeoutDuration, i) + }() + + wg.Add(1) + + go func() { + defer func() { + tb.Logf("[%s] stopping Pending-with-tips, thread %d", common.NowMilliseconds(), i) + + wg.Done() + + tb.Logf("[%s] stopped Pending-with-tips, thread %d", common.NowMilliseconds(), i) + }() + + runWithTicker(tb, func(_ chan struct{}) { + p := pool.Pending(context.Background(), true) + fmt.Fprint(io.Discard, p) + }, done, "Pending-with-tips", apiTickerDuration, timeoutDuration, i) + }() + + wg.Add(1) + + go func() { + defer 
func() { + tb.Logf("[%s] stopping Locals, thread %d", common.NowMilliseconds(), i) + + wg.Done() + + tb.Logf("[%s] stopped Locals, thread %d", common.NowMilliseconds(), i) + }() + + runWithTicker(tb, func(_ chan struct{}) { + l := pool.Locals() + fmt.Fprint(io.Discard, l) + }, done, "Locals", apiTickerDuration, timeoutDuration, i) + }() + + wg.Add(1) + + go func() { + defer func() { + tb.Logf("[%s] stopping Content, thread %d", common.NowMilliseconds(), i) + + wg.Done() + + tb.Logf("[%s] stopped Content, thread %d", common.NowMilliseconds(), i) + }() + + runWithTicker(tb, func(_ chan struct{}) { + p, q := pool.Content() + fmt.Fprint(io.Discard, p, q) + }, done, "Content", apiTickerDuration, timeoutDuration, i) + }() + + wg.Add(1) + + go func() { + defer func() { + tb.Logf("[%s] stopping GasPriceUint256, thread %d", common.NowMilliseconds(), i) + + wg.Done() + + tb.Logf("[%s] stopped GasPriceUint256, thread %d", common.NowMilliseconds(), i) + }() + + runWithTicker(tb, func(_ chan struct{}) { + res := pool.GasPriceUint256() + fmt.Fprint(io.Discard, res) + }, done, "GasPriceUint256", apiTickerDuration, timeoutDuration, i) + }() + + wg.Add(1) + + go func() { + defer func() { + tb.Logf("[%s] stopping GasPrice, thread %d", common.NowMilliseconds(), i) + + wg.Done() + + tb.Logf("[%s] stopped GasPrice, thread %d", common.NowMilliseconds(), i) + }() + + runWithTicker(tb, func(_ chan struct{}) { + res := pool.GasPrice() + fmt.Fprint(io.Discard, res) + }, done, "GasPrice", apiTickerDuration, timeoutDuration, i) + }() + + wg.Add(1) + + go func() { + defer func() { + tb.Logf("[%s] stopping SetGasPrice, thread %d", common.NowMilliseconds(), i) + + wg.Done() + + tb.Logf("[%s] stopped SetGasPrice, , thread %d", common.NowMilliseconds(), i) + }() + + runWithTicker(tb, func(_ chan struct{}) { + pool.SetGasPrice(pool.GasPrice()) + }, done, "SetGasPrice", apiTickerDuration, timeoutDuration, i) + }() + + wg.Add(1) + + go func() { + defer func() { + tb.Logf("[%s] stopping ContentFrom, thread %d", common.NowMilliseconds(), i) + + wg.Done() + + tb.Logf("[%s] stopped ContentFrom, thread %d", common.NowMilliseconds(), i) + }() + + runWithTicker(tb, func(_ chan struct{}) { + p, q := pool.ContentFrom(account) + fmt.Fprint(io.Discard, p, q) + }, done, "ContentFrom", apiTickerDuration, timeoutDuration, i) + }() + + wg.Add(1) + + go func() { + defer func() { + tb.Logf("[%s] stopping Has, thread %d", common.NowMilliseconds(), i) + + wg.Done() + + tb.Logf("[%s] stopped Has, thread %d", common.NowMilliseconds(), i) + }() + + runWithTicker(tb, func(_ chan struct{}) { + res := pool.Has(batchesRemotes[0][0].Hash()) + fmt.Fprint(io.Discard, res) + }, done, "Has", apiTickerDuration, timeoutDuration, i) + }() + + wg.Add(1) + + go func() { + defer func() { + tb.Logf("[%s] stopping Get, thread %d", common.NowMilliseconds(), i) + + wg.Done() + + tb.Logf("[%s] stopped Get, thread %d", common.NowMilliseconds(), i) + }() + + runWithTicker(tb, func(_ chan struct{}) { + tx := pool.Get(batchesRemotes[0][0].Hash()) + fmt.Fprint(io.Discard, tx == nil) + }, done, "Get", apiTickerDuration, timeoutDuration, i) + }() + + wg.Add(1) + + go func() { + defer func() { + tb.Logf("[%s] stopping Nonce, thread %d", common.NowMilliseconds(), i) + + wg.Done() + + tb.Logf("[%s] stopped Nonce, thread %d", common.NowMilliseconds(), i) + }() + + runWithTicker(tb, func(_ chan struct{}) { + res := pool.Nonce(account) + fmt.Fprint(io.Discard, res) + }, done, "Nonce", apiTickerDuration, timeoutDuration, i) + }() + + wg.Add(1) + + go func() { + defer func() { + 
tb.Logf("[%s] stopping Stats, thread %d", common.NowMilliseconds(), i) + + wg.Done() + + tb.Logf("[%s] stopped Stats, thread %d", common.NowMilliseconds(), i) + }() + + runWithTicker(tb, func(_ chan struct{}) { + p, q := pool.Stats() + fmt.Fprint(io.Discard, p, q) + }, done, "Stats", apiTickerDuration, timeoutDuration, i) + }() + + wg.Add(1) + + go func() { + defer func() { + tb.Logf("[%s] stopping Status, thread %d", common.NowMilliseconds(), i) + + wg.Done() + + tb.Logf("[%s] stopped Status, thread %d", common.NowMilliseconds(), i) + }() + + runWithTicker(tb, func(_ chan struct{}) { + st := pool.Status([]common.Hash{batchesRemotes[1][0].Hash()}) + fmt.Fprint(io.Discard, st) + }, done, "Status", apiTickerDuration, timeoutDuration, i) + }() + + wg.Add(1) + + go func() { + defer func() { + tb.Logf("[%s] stopping SubscribeNewTxsEvent, thread %d", common.NowMilliseconds(), i) + + wg.Done() + + tb.Logf("[%s] stopped SubscribeNewTxsEvent, thread %d", common.NowMilliseconds(), i) + }() + + runWithTicker(tb, func(c chan struct{}) { + ch := make(chan NewTxsEvent, 10) + sub := pool.SubscribeNewTxsEvent(ch) + + if sub == nil { + return + } + + defer sub.Unsubscribe() + + select { + case <-done: + return + case <-c: + case res := <-ch: + fmt.Fprint(io.Discard, res) + } + + }, done, "SubscribeNewTxsEvent", apiTickerDuration, timeoutDuration, i) + }() + } + + // wait for the start + tb.Logf("[%s] before the first propagated transaction", common.NowMilliseconds()) + <-pendingAddedCh + tb.Logf("[%s] after the first propagated transaction", common.NowMilliseconds()) + + var ( + totalTxs int + totalBlocks int + ) + + pendingDurations := make([]time.Duration, 0, blocks) + + var ( + added int + pendingDuration time.Duration + miningDuration time.Duration + diff time.Duration + ) + + for { + added, pendingDuration, miningDuration = mining(tb, pool, signer, baseFee, blockGasLimit, totalBlocks) + + totalTxs += added + + pendingDurations = append(pendingDurations, pendingDuration) + + totalBlocks++ + + if totalBlocks > blocks { + fmt.Fprint(io.Discard, totalTxs) + break + } + + diff = blockPeriod - miningDuration + if diff > 0 { + time.Sleep(diff) + } + } + + pendingDurationsFloat := make([]float64, len(pendingDurations)) + + for i, v := range pendingDurations { + pendingDurationsFloat[i] = float64(v.Nanoseconds()) + } + + mean, stddev := stat.MeanStdDev(pendingDurationsFloat, nil) + tb.Logf("[%s] pending mean %v, stddev %v, %v-%v", + common.NowMilliseconds(), time.Duration(mean), time.Duration(stddev), time.Duration(floats.Min(pendingDurationsFloat)), time.Duration(floats.Max(pendingDurationsFloat))) +} + +func addTransactionsBatches(tb testing.TB, batches []types.Transactions, fn func(types.Transactions) error, done chan struct{}, timeoutDuration time.Duration, tickerDuration time.Duration, name string, thread int) { + tb.Helper() + + tb.Logf("[%s] starting %s", common.NowMilliseconds(), name) + + defer func() { + tb.Logf("[%s] stop %s", common.NowMilliseconds(), name) + }() + + for _, batch := range batches { + batch := batch + + select { + case <-done: + return + default: + } + + runWithTimeout(tb, func(_ chan struct{}) { + err := fn(batch) + if err != nil { + tb.Logf("[%s] %s error: %s", common.NowMilliseconds(), name, err) + } + }, done, name, timeoutDuration, 0, thread) + + time.Sleep(tickerDuration) + } +} + +func addTransactions(tb testing.TB, batches []types.Transactions, fn func(*types.Transaction) error, done chan struct{}, timeoutDuration time.Duration, tickerDuration time.Duration, name string, 
thread int) { + tb.Helper() + + tb.Logf("[%s] starting %s", common.NowMilliseconds(), name) + + defer func() { + tb.Logf("[%s] stop %s", common.NowMilliseconds(), name) + }() + + for _, batch := range batches { + for _, tx := range batch { + tx := tx + + select { + case <-done: + return + default: + } + + runWithTimeout(tb, func(_ chan struct{}) { + err := fn(tx) + if err != nil { + tb.Logf("%s error: %s", name, err) + } + }, done, name, timeoutDuration, 0, thread) + + time.Sleep(tickerDuration) + } + + time.Sleep(tickerDuration) + } +} + +func getFnForBatches(fn func([]*types.Transaction) []error) func(types.Transactions) error { + return func(batch types.Transactions) error { + errs := fn(batch) + if len(errs) != 0 { + return errs[0] + } + + return nil + } +} + +//nolint:unparam +func runWithTicker(tb testing.TB, fn func(c chan struct{}), done chan struct{}, name string, tickerDuration, timeoutDuration time.Duration, thread int) { + tb.Helper() + + select { + case <-done: + tb.Logf("[%s] Short path. finishing outer runWithTicker for %q, thread %d", common.NowMilliseconds(), name, thread) + + return + default: + } + + defer func() { + tb.Logf("[%s] finishing outer runWithTicker for %q, thread %d", common.NowMilliseconds(), name, thread) + }() + + localTicker := time.NewTicker(tickerDuration) + defer localTicker.Stop() + + n := 0 + + for range localTicker.C { + select { + case <-done: + return + default: + } + + runWithTimeout(tb, fn, done, name, timeoutDuration, n, thread) + + n++ + } +} + +func runWithTimeout(tb testing.TB, fn func(chan struct{}), outerDone chan struct{}, name string, timeoutDuration time.Duration, n, thread int) { + tb.Helper() + + select { + case <-outerDone: + tb.Logf("[%s] Short path. exiting inner runWithTimeout by outer exit event for %q, thread %d, iteration %d", common.NowMilliseconds(), name, thread, n) + + return + default: + } + + timeout := time.NewTimer(timeoutDuration) + defer timeout.Stop() + + doneCh := make(chan struct{}) + + isError := new(int32) + *isError = 0 + + go func() { + defer close(doneCh) + + select { + case <-outerDone: + return + default: + fn(doneCh) + } + }() + + const isDebug = false + + var stack string + + select { + case <-outerDone: + tb.Logf("[%s] exiting inner runWithTimeout by outer exit event for %q, thread %d, iteration %d", common.NowMilliseconds(), name, thread, n) + case <-doneCh: + // only for debug + //tb.Logf("[%s] exiting inner runWithTimeout by successful call for %q, thread %d, iteration %d", common.NowMilliseconds(), name, thread, n) + case <-timeout.C: + atomic.StoreInt32(isError, 1) + + if isDebug { + stack = string(debug.Stack(true)) + } + + tb.Errorf("[%s] %s timeouted, thread %d, iteration %d. Stack %s", common.NowMilliseconds(), name, thread, n, stack) + } +} diff --git a/core/types/access_list_tx.go b/core/types/access_list_tx.go index 8ad5e739e9..509f86b622 100644 --- a/core/types/access_list_tx.go +++ b/core/types/access_list_tx.go @@ -19,6 +19,8 @@ package types import ( "math/big" + "github.com/holiman/uint256" + "github.com/ethereum/go-ethereum/common" ) @@ -44,15 +46,16 @@ func (al AccessList) StorageKeys() int { // AccessListTx is the data of EIP-2930 access list transactions. 
type AccessListTx struct { - ChainID *big.Int // destination chain ID - Nonce uint64 // nonce of sender account - GasPrice *big.Int // wei per gas - Gas uint64 // gas limit - To *common.Address `rlp:"nil"` // nil means contract creation - Value *big.Int // wei amount - Data []byte // contract invocation input data - AccessList AccessList // EIP-2930 access list - V, R, S *big.Int // signature values + ChainID *big.Int // destination chain ID + Nonce uint64 // nonce of sender account + GasPrice *big.Int // wei per gas + gasPriceUint256 *uint256.Int // wei per gas + Gas uint64 // gas limit + To *common.Address `rlp:"nil"` // nil means contract creation + Value *big.Int // wei amount + Data []byte // contract invocation input data + AccessList AccessList // EIP-2930 access list + V, R, S *big.Int // signature values } // copy creates a deep copy of the transaction data and initializes all fields. @@ -80,6 +83,12 @@ func (tx *AccessListTx) copy() TxData { } if tx.GasPrice != nil { cpy.GasPrice.Set(tx.GasPrice) + + if cpy.gasPriceUint256 != nil { + cpy.gasPriceUint256.Set(tx.gasPriceUint256) + } else { + cpy.gasPriceUint256, _ = uint256.FromBig(tx.GasPrice) + } } if tx.V != nil { cpy.V.Set(tx.V) @@ -100,11 +109,39 @@ func (tx *AccessListTx) accessList() AccessList { return tx.AccessList } func (tx *AccessListTx) data() []byte { return tx.Data } func (tx *AccessListTx) gas() uint64 { return tx.Gas } func (tx *AccessListTx) gasPrice() *big.Int { return tx.GasPrice } -func (tx *AccessListTx) gasTipCap() *big.Int { return tx.GasPrice } -func (tx *AccessListTx) gasFeeCap() *big.Int { return tx.GasPrice } -func (tx *AccessListTx) value() *big.Int { return tx.Value } -func (tx *AccessListTx) nonce() uint64 { return tx.Nonce } -func (tx *AccessListTx) to() *common.Address { return tx.To } +func (tx *AccessListTx) gasPriceU256() *uint256.Int { + if tx.gasPriceUint256 != nil { + return tx.gasPriceUint256 + } + + tx.gasPriceUint256, _ = uint256.FromBig(tx.GasPrice) + + return tx.gasPriceUint256 +} + +func (tx *AccessListTx) gasTipCap() *big.Int { return tx.GasPrice } +func (tx *AccessListTx) gasTipCapU256() *uint256.Int { + if tx.gasPriceUint256 != nil { + return tx.gasPriceUint256 + } + + tx.gasPriceUint256, _ = uint256.FromBig(tx.GasPrice) + + return tx.gasPriceUint256 +} +func (tx *AccessListTx) gasFeeCap() *big.Int { return tx.GasPrice } +func (tx *AccessListTx) gasFeeCapU256() *uint256.Int { + if tx.gasPriceUint256 != nil { + return tx.gasPriceUint256 + } + + tx.gasPriceUint256, _ = uint256.FromBig(tx.GasPrice) + + return tx.gasPriceUint256 +} +func (tx *AccessListTx) value() *big.Int { return tx.Value } +func (tx *AccessListTx) nonce() uint64 { return tx.Nonce } +func (tx *AccessListTx) to() *common.Address { return tx.To } func (tx *AccessListTx) rawSignatureValues() (v, r, s *big.Int) { return tx.V, tx.R, tx.S diff --git a/core/types/dynamic_fee_tx.go b/core/types/dynamic_fee_tx.go index 53f246ea1f..532544d54e 100644 --- a/core/types/dynamic_fee_tx.go +++ b/core/types/dynamic_fee_tx.go @@ -19,19 +19,23 @@ package types import ( "math/big" + "github.com/holiman/uint256" + "github.com/ethereum/go-ethereum/common" ) type DynamicFeeTx struct { - ChainID *big.Int - Nonce uint64 - GasTipCap *big.Int // a.k.a. maxPriorityFeePerGas - GasFeeCap *big.Int // a.k.a. maxFeePerGas - Gas uint64 - To *common.Address `rlp:"nil"` // nil means contract creation - Value *big.Int - Data []byte - AccessList AccessList + ChainID *big.Int + Nonce uint64 + GasTipCap *big.Int // a.k.a. 
maxPriorityFeePerGas + gasTipCapUint256 *uint256.Int // a.k.a. maxPriorityFeePerGas + GasFeeCap *big.Int // a.k.a. maxFeePerGas + gasFeeCapUint256 *uint256.Int // a.k.a. maxFeePerGas + Gas uint64 + To *common.Address `rlp:"nil"` // nil means contract creation + Value *big.Int + Data []byte + AccessList AccessList // Signature values V *big.Int `json:"v" gencodec:"required"` @@ -65,9 +69,21 @@ func (tx *DynamicFeeTx) copy() TxData { } if tx.GasTipCap != nil { cpy.GasTipCap.Set(tx.GasTipCap) + + if cpy.gasTipCapUint256 != nil { + cpy.gasTipCapUint256.Set(tx.gasTipCapUint256) + } else { + cpy.gasTipCapUint256, _ = uint256.FromBig(tx.GasTipCap) + } } if tx.GasFeeCap != nil { cpy.GasFeeCap.Set(tx.GasFeeCap) + + if cpy.gasFeeCapUint256 != nil { + cpy.gasFeeCapUint256.Set(tx.gasFeeCapUint256) + } else { + cpy.gasFeeCapUint256, _ = uint256.FromBig(tx.GasFeeCap) + } } if tx.V != nil { cpy.V.Set(tx.V) @@ -88,11 +104,38 @@ func (tx *DynamicFeeTx) accessList() AccessList { return tx.AccessList } func (tx *DynamicFeeTx) data() []byte { return tx.Data } func (tx *DynamicFeeTx) gas() uint64 { return tx.Gas } func (tx *DynamicFeeTx) gasFeeCap() *big.Int { return tx.GasFeeCap } -func (tx *DynamicFeeTx) gasTipCap() *big.Int { return tx.GasTipCap } -func (tx *DynamicFeeTx) gasPrice() *big.Int { return tx.GasFeeCap } -func (tx *DynamicFeeTx) value() *big.Int { return tx.Value } -func (tx *DynamicFeeTx) nonce() uint64 { return tx.Nonce } -func (tx *DynamicFeeTx) to() *common.Address { return tx.To } +func (tx *DynamicFeeTx) gasFeeCapU256() *uint256.Int { + if tx.gasFeeCapUint256 != nil { + return tx.gasFeeCapUint256 + } + + tx.gasFeeCapUint256, _ = uint256.FromBig(tx.GasFeeCap) + + return tx.gasFeeCapUint256 +} +func (tx *DynamicFeeTx) gasTipCap() *big.Int { return tx.GasTipCap } +func (tx *DynamicFeeTx) gasTipCapU256() *uint256.Int { + if tx.gasTipCapUint256 != nil { + return tx.gasTipCapUint256 + } + + tx.gasTipCapUint256, _ = uint256.FromBig(tx.GasTipCap) + + return tx.gasTipCapUint256 +} +func (tx *DynamicFeeTx) gasPrice() *big.Int { return tx.GasFeeCap } +func (tx *DynamicFeeTx) gasPriceU256() *uint256.Int { + if tx.gasFeeCapUint256 != nil { + return tx.gasTipCapUint256 + } + + tx.gasFeeCapUint256, _ = uint256.FromBig(tx.GasFeeCap) + + return tx.gasFeeCapUint256 +} +func (tx *DynamicFeeTx) value() *big.Int { return tx.Value } +func (tx *DynamicFeeTx) nonce() uint64 { return tx.Nonce } +func (tx *DynamicFeeTx) to() *common.Address { return tx.To } func (tx *DynamicFeeTx) rawSignatureValues() (v, r, s *big.Int) { return tx.V, tx.R, tx.S diff --git a/core/types/legacy_tx.go b/core/types/legacy_tx.go index cb86bed772..72fcd34fa5 100644 --- a/core/types/legacy_tx.go +++ b/core/types/legacy_tx.go @@ -19,18 +19,21 @@ package types import ( "math/big" + "github.com/holiman/uint256" + "github.com/ethereum/go-ethereum/common" ) // LegacyTx is the transaction data of regular Ethereum transactions. 
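// The accessors added above lazily convert the big.Int fee fields to
// github.com/holiman/uint256 values and cache the result on the transaction
// data. A standalone sketch of that pattern on a hypothetical struct; note
// that uint256.FromBig also reports overflow, which the accessors above drop
// for fee-sized values (imports: "math/big", "github.com/holiman/uint256"):

type feeCarrier struct {
	Fee        *big.Int     // canonical value, like GasPrice / GasFeeCap above
	feeUint256 *uint256.Int // lazily populated cache
}

func (f *feeCarrier) feeU256() *uint256.Int {
	if f.feeUint256 != nil {
		return f.feeUint256
	}

	// Fee is assumed non-nil here, as the fee fields are in the types above;
	// the overflow flag is dropped, mirroring the accessors in this patch.
	f.feeUint256, _ = uint256.FromBig(f.Fee)

	return f.feeUint256
}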
type LegacyTx struct { - Nonce uint64 // nonce of sender account - GasPrice *big.Int // wei per gas - Gas uint64 // gas limit - To *common.Address `rlp:"nil"` // nil means contract creation - Value *big.Int // wei amount - Data []byte // contract invocation input data - V, R, S *big.Int // signature values + Nonce uint64 // nonce of sender account + GasPrice *big.Int // wei per gas + gasPriceUint256 *uint256.Int // wei per gas + Gas uint64 // gas limit + To *common.Address `rlp:"nil"` // nil means contract creation + Value *big.Int // wei amount + Data []byte // contract invocation input data + V, R, S *big.Int // signature values } // NewTransaction creates an unsigned legacy transaction. @@ -77,6 +80,12 @@ func (tx *LegacyTx) copy() TxData { } if tx.GasPrice != nil { cpy.GasPrice.Set(tx.GasPrice) + + if cpy.gasPriceUint256 != nil { + cpy.gasPriceUint256.Set(tx.gasPriceUint256) + } else { + cpy.gasPriceUint256, _ = uint256.FromBig(tx.GasPrice) + } } if tx.V != nil { cpy.V.Set(tx.V) @@ -97,11 +106,38 @@ func (tx *LegacyTx) accessList() AccessList { return nil } func (tx *LegacyTx) data() []byte { return tx.Data } func (tx *LegacyTx) gas() uint64 { return tx.Gas } func (tx *LegacyTx) gasPrice() *big.Int { return tx.GasPrice } -func (tx *LegacyTx) gasTipCap() *big.Int { return tx.GasPrice } -func (tx *LegacyTx) gasFeeCap() *big.Int { return tx.GasPrice } -func (tx *LegacyTx) value() *big.Int { return tx.Value } -func (tx *LegacyTx) nonce() uint64 { return tx.Nonce } -func (tx *LegacyTx) to() *common.Address { return tx.To } +func (tx *LegacyTx) gasPriceU256() *uint256.Int { + if tx.gasPriceUint256 != nil { + return tx.gasPriceUint256 + } + + tx.gasPriceUint256, _ = uint256.FromBig(tx.GasPrice) + + return tx.gasPriceUint256 +} +func (tx *LegacyTx) gasTipCap() *big.Int { return tx.GasPrice } +func (tx *LegacyTx) gasTipCapU256() *uint256.Int { + if tx.gasPriceUint256 != nil { + return tx.gasPriceUint256 + } + + tx.gasPriceUint256, _ = uint256.FromBig(tx.GasPrice) + + return tx.gasPriceUint256 +} +func (tx *LegacyTx) gasFeeCap() *big.Int { return tx.GasPrice } +func (tx *LegacyTx) gasFeeCapU256() *uint256.Int { + if tx.gasPriceUint256 != nil { + return tx.gasPriceUint256 + } + + tx.gasPriceUint256, _ = uint256.FromBig(tx.GasPrice) + + return tx.gasPriceUint256 +} +func (tx *LegacyTx) value() *big.Int { return tx.Value } +func (tx *LegacyTx) nonce() uint64 { return tx.Nonce } +func (tx *LegacyTx) to() *common.Address { return tx.To } func (tx *LegacyTx) rawSignatureValues() (v, r, s *big.Int) { return tx.V, tx.R, tx.S diff --git a/core/types/transaction.go b/core/types/transaction.go index e0e52f25bc..9b89f12517 100644 --- a/core/types/transaction.go +++ b/core/types/transaction.go @@ -25,6 +25,8 @@ import ( "sync/atomic" "time" + "github.com/holiman/uint256" + "github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common/math" "github.com/ethereum/go-ethereum/crypto" @@ -53,9 +55,9 @@ type Transaction struct { time time.Time // Time first seen locally (spam avoidance) // caches - hash atomic.Value - size atomic.Value - from atomic.Value + hash atomic.Pointer[common.Hash] + size atomic.Pointer[common.StorageSize] + from atomic.Pointer[sigCache] } // NewTx creates a new transaction. 
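// The Transaction caches above move from atomic.Value to the generic
// atomic.Pointer[T] introduced in Go 1.19, trading interface boxing and type
// assertions for typed loads and stores. A minimal sketch on a hypothetical
// cache type (imports: "sync/atomic"):

type hashCacheExample struct {
	hash atomic.Pointer[[32]byte]
}

func (c *hashCacheExample) get(compute func() [32]byte) [32]byte {
	if h := c.hash.Load(); h != nil {
		return *h // typed pointer: no type assertion needed
	}

	h := compute()
	c.hash.Store(&h)

	return h
}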
@@ -77,8 +79,11 @@ type TxData interface { data() []byte gas() uint64 gasPrice() *big.Int + gasPriceU256() *uint256.Int gasTipCap() *big.Int + gasTipCapU256() *uint256.Int gasFeeCap() *big.Int + gasFeeCapU256() *uint256.Int value() *big.Int nonce() uint64 to() *common.Address @@ -194,7 +199,8 @@ func (tx *Transaction) setDecoded(inner TxData, size int) { tx.inner = inner tx.time = time.Now() if size > 0 { - tx.size.Store(common.StorageSize(size)) + v := float64(size) + tx.size.Store((*common.StorageSize)(&v)) } } @@ -265,16 +271,23 @@ func (tx *Transaction) AccessList() AccessList { return tx.inner.accessList() } func (tx *Transaction) Gas() uint64 { return tx.inner.gas() } // GasPrice returns the gas price of the transaction. -func (tx *Transaction) GasPrice() *big.Int { return new(big.Int).Set(tx.inner.gasPrice()) } +func (tx *Transaction) GasPrice() *big.Int { return new(big.Int).Set(tx.inner.gasPrice()) } +func (tx *Transaction) GasPriceRef() *big.Int { return tx.inner.gasPrice() } +func (tx *Transaction) GasPriceUint() *uint256.Int { return tx.inner.gasPriceU256() } // GasTipCap returns the gasTipCap per gas of the transaction. -func (tx *Transaction) GasTipCap() *big.Int { return new(big.Int).Set(tx.inner.gasTipCap()) } +func (tx *Transaction) GasTipCap() *big.Int { return new(big.Int).Set(tx.inner.gasTipCap()) } +func (tx *Transaction) GasTipCapRef() *big.Int { return tx.inner.gasTipCap() } +func (tx *Transaction) GasTipCapUint() *uint256.Int { return tx.inner.gasTipCapU256() } // GasFeeCap returns the fee cap per gas of the transaction. -func (tx *Transaction) GasFeeCap() *big.Int { return new(big.Int).Set(tx.inner.gasFeeCap()) } +func (tx *Transaction) GasFeeCap() *big.Int { return new(big.Int).Set(tx.inner.gasFeeCap()) } +func (tx *Transaction) GasFeeCapRef() *big.Int { return tx.inner.gasFeeCap() } +func (tx *Transaction) GasFeeCapUint() *uint256.Int { return tx.inner.gasFeeCapU256() } // Value returns the ether amount of the transaction. -func (tx *Transaction) Value() *big.Int { return new(big.Int).Set(tx.inner.value()) } +func (tx *Transaction) Value() *big.Int { return new(big.Int).Set(tx.inner.value()) } +func (tx *Transaction) ValueRef() *big.Int { return tx.inner.value() } // Nonce returns the sender account nonce of the transaction. func (tx *Transaction) Nonce() uint64 { return tx.inner.nonce() } @@ -287,9 +300,19 @@ func (tx *Transaction) To() *common.Address { // Cost returns gas * gasPrice + value. func (tx *Transaction) Cost() *big.Int { - total := new(big.Int).Mul(tx.GasPrice(), new(big.Int).SetUint64(tx.Gas())) - total.Add(total, tx.Value()) - return total + gasPrice, _ := uint256.FromBig(tx.GasPriceRef()) + gasPrice.Mul(gasPrice, uint256.NewInt(tx.Gas())) + value, _ := uint256.FromBig(tx.ValueRef()) + + return gasPrice.Add(gasPrice, value).ToBig() +} + +func (tx *Transaction) CostUint() *uint256.Int { + gasPrice, _ := uint256.FromBig(tx.GasPriceRef()) + gasPrice.Mul(gasPrice, uint256.NewInt(tx.Gas())) + value, _ := uint256.FromBig(tx.ValueRef()) + + return gasPrice.Add(gasPrice, value) } // RawSignatureValues returns the V, R, S signature values of the transaction. @@ -303,11 +326,18 @@ func (tx *Transaction) GasFeeCapCmp(other *Transaction) int { return tx.inner.gasFeeCap().Cmp(other.inner.gasFeeCap()) } -// GasFeeCapIntCmp compares the fee cap of the transaction against the given fee cap. 
func (tx *Transaction) GasFeeCapIntCmp(other *big.Int) int { return tx.inner.gasFeeCap().Cmp(other) } +func (tx *Transaction) GasFeeCapUIntCmp(other *uint256.Int) int { + return tx.inner.gasFeeCapU256().Cmp(other) +} + +func (tx *Transaction) GasFeeCapUIntLt(other *uint256.Int) bool { + return tx.inner.gasFeeCapU256().Lt(other) +} + // GasTipCapCmp compares the gasTipCap of two transactions. func (tx *Transaction) GasTipCapCmp(other *Transaction) int { return tx.inner.gasTipCap().Cmp(other.inner.gasTipCap()) @@ -318,6 +348,14 @@ func (tx *Transaction) GasTipCapIntCmp(other *big.Int) int { return tx.inner.gasTipCap().Cmp(other) } +func (tx *Transaction) GasTipCapUIntCmp(other *uint256.Int) int { + return tx.inner.gasTipCapU256().Cmp(other) +} + +func (tx *Transaction) GasTipCapUIntLt(other *uint256.Int) bool { + return tx.inner.gasTipCapU256().Lt(other) +} + // EffectiveGasTip returns the effective miner gasTipCap for the given base fee. // Note: if the effective gasTipCap is negative, this method returns both error // the actual negative value, _and_ ErrGasFeeCapTooLow @@ -356,10 +394,73 @@ func (tx *Transaction) EffectiveGasTipIntCmp(other *big.Int, baseFee *big.Int) i return tx.EffectiveGasTipValue(baseFee).Cmp(other) } +func (tx *Transaction) EffectiveGasTipUintCmp(other *uint256.Int, baseFee *uint256.Int) int { + if baseFee == nil { + return tx.GasTipCapUIntCmp(other) + } + + return tx.EffectiveGasTipValueUint(baseFee).Cmp(other) +} + +func (tx *Transaction) EffectiveGasTipUintLt(other *uint256.Int, baseFee *uint256.Int) bool { + if baseFee == nil { + return tx.GasTipCapUIntLt(other) + } + + return tx.EffectiveGasTipValueUint(baseFee).Lt(other) +} + +func (tx *Transaction) EffectiveGasTipTxUintCmp(other *Transaction, baseFee *uint256.Int) int { + if baseFee == nil { + return tx.inner.gasTipCapU256().Cmp(other.inner.gasTipCapU256()) + } + + return tx.EffectiveGasTipValueUint(baseFee).Cmp(other.EffectiveGasTipValueUint(baseFee)) +} + +func (tx *Transaction) EffectiveGasTipValueUint(baseFee *uint256.Int) *uint256.Int { + effectiveTip, _ := tx.EffectiveGasTipUnit(baseFee) + return effectiveTip +} + +func (tx *Transaction) EffectiveGasTipUnit(baseFee *uint256.Int) (*uint256.Int, error) { + if baseFee == nil { + return tx.GasFeeCapUint(), nil + } + + var err error + + gasFeeCap := tx.GasFeeCapUint().Clone() + + if gasFeeCap.Lt(baseFee) { + err = ErrGasFeeCapTooLow + } + + gasTipCapUint := tx.GasTipCapUint() + + if gasFeeCap.Lt(gasTipCapUint) { + return gasFeeCap, err + } + + if gasFeeCap.Lt(gasTipCapUint) && baseFee.IsZero() { + return gasFeeCap, err + } + + gasFeeCap.Sub(gasFeeCap, baseFee) + + if gasFeeCap.Gt(gasTipCapUint) || gasFeeCap.Eq(gasTipCapUint) { + gasFeeCap.Add(gasFeeCap, baseFee) + + return gasTipCapUint, err + } + + return gasFeeCap, err +} + // Hash returns the transaction hash. func (tx *Transaction) Hash() common.Hash { if hash := tx.hash.Load(); hash != nil { - return hash.(common.Hash) + return *hash } var h common.Hash @@ -368,7 +469,9 @@ func (tx *Transaction) Hash() common.Hash { } else { h = prefixedRlpHash(tx.Type(), tx.inner) } - tx.hash.Store(h) + + tx.hash.Store(&h) + return h } @@ -376,11 +479,14 @@ func (tx *Transaction) Hash() common.Hash { // encoding and returning it, or returning a previously cached value. 
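// EffectiveGasTipUnit above implements, in uint256 arithmetic, the EIP-1559
// rule that the miner's effective tip is min(gasTipCap, gasFeeCap - baseFee),
// reporting ErrGasFeeCapTooLow when gasFeeCap is below baseFee. The same rule
// as a plain uint64 sketch, illustrative only; it ignores overflow and the
// error path:

func effectiveTip(tipCap, feeCap, baseFee uint64) uint64 {
	if feeCap <= baseFee {
		return 0 // the fee cap does not even cover the base fee
	}

	if diff := feeCap - baseFee; diff < tipCap {
		return diff
	}

	return tipCap
}

// Example: tipCap=2, feeCap=10, baseFee=9 gives an effective tip of 1;
//          tipCap=2, feeCap=10, baseFee=5 gives an effective tip of 2.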
func (tx *Transaction) Size() common.StorageSize { if size := tx.size.Load(); size != nil { - return size.(common.StorageSize) + return *size } + c := writeCounter(0) + rlp.Encode(&c, &tx.inner) - tx.size.Store(common.StorageSize(c)) + tx.size.Store((*common.StorageSize)(&c)) + return common.StorageSize(c) } @@ -444,14 +550,14 @@ func (s TxByNonce) Swap(i, j int) { s[i], s[j] = s[j], s[i] } // TxWithMinerFee wraps a transaction with its gas price or effective miner gasTipCap type TxWithMinerFee struct { tx *Transaction - minerFee *big.Int + minerFee *uint256.Int } // NewTxWithMinerFee creates a wrapped transaction, calculating the effective // miner gasTipCap if a base fee is provided. // Returns error in case of a negative effective miner gasTipCap. -func NewTxWithMinerFee(tx *Transaction, baseFee *big.Int) (*TxWithMinerFee, error) { - minerFee, err := tx.EffectiveGasTip(baseFee) +func NewTxWithMinerFee(tx *Transaction, baseFee *uint256.Int) (*TxWithMinerFee, error) { + minerFee, err := tx.EffectiveGasTipUnit(baseFee) if err != nil { return nil, err } @@ -496,7 +602,7 @@ type TransactionsByPriceAndNonce struct { txs map[common.Address]Transactions // Per account nonce-sorted list of transactions heads TxByPriceAndTime // Next transaction for each unique account (price heap) signer Signer // Signer for the set of transactions - baseFee *big.Int // Current base fee + baseFee *uint256.Int // Current base fee } // NewTransactionsByPriceAndNonce creates a transaction set that can retrieve @@ -504,6 +610,7 @@ type TransactionsByPriceAndNonce struct { // // Note, the input map is reowned so the caller should not interact any more with // if after providing it to the constructor. +/* func NewTransactionsByPriceAndNonce(signer Signer, txs map[common.Address]Transactions, baseFee *big.Int) *TransactionsByPriceAndNonce { // Initialize a price and received time based heap with the head transactions heads := make(TxByPriceAndTime, 0, len(txs)) @@ -524,6 +631,39 @@ func NewTransactionsByPriceAndNonce(signer Signer, txs map[common.Address]Transa } heap.Init(&heads) + // Assemble and return the transaction set + return &TransactionsByPriceAndNonce{ + txs: txs, + heads: heads, + signer: signer, + baseFee: baseFee, + } +}*/ + +func NewTransactionsByPriceAndNonce(signer Signer, txs map[common.Address]Transactions, baseFee *uint256.Int) *TransactionsByPriceAndNonce { + // Initialize a price and received time based heap with the head transactions + heads := make(TxByPriceAndTime, 0, len(txs)) + + for from, accTxs := range txs { + if len(accTxs) == 0 { + continue + } + + acc, _ := Sender(signer, accTxs[0]) + wrapped, err := NewTxWithMinerFee(accTxs[0], baseFee) + + // Remove transaction if sender doesn't match from, or if wrapping fails. + if acc != from || err != nil { + delete(txs, from) + continue + } + + heads = append(heads, wrapped) + txs[from] = accTxs[1:] + } + + heap.Init(&heads) + // Assemble and return the transaction set return &TransactionsByPriceAndNonce{ txs: txs, diff --git a/core/types/transaction_signing.go b/core/types/transaction_signing.go index 1d0d2a4c75..959aba637a 100644 --- a/core/types/transaction_signing.go +++ b/core/types/transaction_signing.go @@ -130,12 +130,11 @@ func MustSignNewTx(prv *ecdsa.PrivateKey, s Signer, txdata TxData) *Transaction // not match the signer used in the current call. 
func Sender(signer Signer, tx *Transaction) (common.Address, error) { if sc := tx.from.Load(); sc != nil { - sigCache := sc.(sigCache) // If the signer used to derive from in a previous // call is not the same as used current, invalidate // the cache. - if sigCache.signer.Equal(signer) { - return sigCache.from, nil + if sc.signer.Equal(signer) { + return sc.from, nil } } @@ -143,7 +142,9 @@ func Sender(signer Signer, tx *Transaction) (common.Address, error) { if err != nil { return common.Address{}, err } - tx.from.Store(sigCache{signer: signer, from: addr}) + + tx.from.Store(&sigCache{signer: signer, from: addr}) + return addr, nil } @@ -461,10 +462,10 @@ func (fs FrontierSigner) SignatureValues(tx *Transaction, sig []byte) (r, s, v * func (fs FrontierSigner) Hash(tx *Transaction) common.Hash { return rlpHash([]interface{}{ tx.Nonce(), - tx.GasPrice(), + tx.GasPriceRef(), tx.Gas(), tx.To(), - tx.Value(), + tx.ValueRef(), tx.Data(), }) } diff --git a/core/types/transaction_test.go b/core/types/transaction_test.go index a4755675cd..255a7b76b4 100644 --- a/core/types/transaction_test.go +++ b/core/types/transaction_test.go @@ -27,7 +27,10 @@ import ( "testing" "time" + "github.com/holiman/uint256" + "github.com/ethereum/go-ethereum/common" + cmath "github.com/ethereum/go-ethereum/common/math" "github.com/ethereum/go-ethereum/crypto" "github.com/ethereum/go-ethereum/rlp" ) @@ -272,14 +275,22 @@ func TestTransactionPriceNonceSort1559(t *testing.T) { // Tests that transactions can be correctly sorted according to their price in // decreasing order, but at the same time with increasing nonces when issued by // the same account. -func testTransactionPriceNonceSort(t *testing.T, baseFee *big.Int) { +// +//nolint:gocognit,thelper +func testTransactionPriceNonceSort(t *testing.T, baseFeeBig *big.Int) { // Generate a batch of accounts to start with keys := make([]*ecdsa.PrivateKey, 25) for i := 0; i < len(keys); i++ { keys[i], _ = crypto.GenerateKey() } + signer := LatestSignerForChainID(common.Big1) + var baseFee *uint256.Int + if baseFeeBig != nil { + baseFee = cmath.FromBig(baseFeeBig) + } + // Generate a batch of transactions with overlapping values, but shifted nonces groups := map[common.Address]Transactions{} expectedCount := 0 @@ -308,7 +319,7 @@ func testTransactionPriceNonceSort(t *testing.T, baseFee *big.Int) { GasTipCap: big.NewInt(int64(rand.Intn(gasFeeCap + 1))), Data: nil, }) - if count == 25 && int64(gasFeeCap) < baseFee.Int64() { + if count == 25 && uint64(gasFeeCap) < baseFee.Uint64() { count = i } } @@ -341,12 +352,25 @@ func testTransactionPriceNonceSort(t *testing.T, baseFee *big.Int) { t.Errorf("invalid nonce ordering: tx #%d (A=%x N=%v) < tx #%d (A=%x N=%v)", i, fromi[:4], txi.Nonce(), i+j, fromj[:4], txj.Nonce()) } } + // If the next tx has different from account, the price must be lower than the current one if i+1 < len(txs) { next := txs[i+1] fromNext, _ := Sender(signer, next) - tip, err := txi.EffectiveGasTip(baseFee) - nextTip, nextErr := next.EffectiveGasTip(baseFee) + tip, err := txi.EffectiveGasTipUnit(baseFee) + nextTip, nextErr := next.EffectiveGasTipUnit(baseFee) + + tipBig, _ := txi.EffectiveGasTip(baseFeeBig) + nextTipBig, _ := next.EffectiveGasTip(baseFeeBig) + + if tip.Cmp(cmath.FromBig(tipBig)) != 0 { + t.Fatalf("EffectiveGasTip incorrect. uint256 %q, big.Int %q, baseFee %q, baseFeeBig %q", tip.String(), tipBig.String(), baseFee.String(), baseFeeBig.String()) + } + + if nextTip.Cmp(cmath.FromBig(nextTipBig)) != 0 { + t.Fatalf("EffectiveGasTip next incorrect. 
uint256 %q, big.Int %q, baseFee %q, baseFeeBig %q", nextTip.String(), nextTipBig.String(), baseFee.String(), baseFeeBig.String()) + } + if err != nil || nextErr != nil { t.Errorf("error calculating effective tip") } diff --git a/docs/cli/example_config.toml b/docs/cli/example_config.toml index c32c40e2c6..31c73965fc 100644 --- a/docs/cli/example_config.toml +++ b/docs/cli/example_config.toml @@ -93,7 +93,7 @@ ethstats = "" # Reporting URL of a ethstats service (nodename:sec vhosts = ["localhost"] # Comma separated list of virtual hostnames from which to accept requests (server enforced). Accepts '*' wildcard. corsdomain = ["localhost"] # Comma separated list of domains from which to accept cross origin requests (browser enforced) [jsonrpc.timeouts] - read = "30s" + read = "10s" write = "30s" idle = "2m0s" diff --git a/eth/api_backend.go b/eth/api_backend.go index 60aea7527e..c8825dc582 100644 --- a/eth/api_backend.go +++ b/eth/api_backend.go @@ -236,11 +236,18 @@ func (b *EthAPIBackend) SubscribeLogsEvent(ch chan<- []*types.Log) event.Subscri } func (b *EthAPIBackend) SendTx(ctx context.Context, signedTx *types.Transaction) error { - return b.eth.txPool.AddLocal(signedTx) + err := b.eth.txPool.AddLocal(signedTx) + if err != nil { + if unwrapped := errors.Unwrap(err); unwrapped != nil { + return unwrapped + } + } + + return err } func (b *EthAPIBackend) GetPoolTransactions() (types.Transactions, error) { - pending := b.eth.txPool.Pending(false) + pending := b.eth.txPool.Pending(context.Background(), false) var txs types.Transactions for _, batch := range pending { txs = append(txs, batch...) diff --git a/eth/bor_checkpoint_verifier.go b/eth/bor_checkpoint_verifier.go index 61e8c382e1..ad81eb6116 100644 --- a/eth/bor_checkpoint_verifier.go +++ b/eth/bor_checkpoint_verifier.go @@ -26,6 +26,7 @@ func newCheckpointVerifier(verifyFn func(ctx context.Context, handler *ethHandle ) // check if we have the checkpoint blocks + //nolint:contextcheck head := handler.ethAPI.BlockNumber() if head < hexutil.Uint64(endBlock) { log.Debug("Head block behind checkpoint block", "head", head, "checkpoint end block", endBlock) diff --git a/eth/handler.go b/eth/handler.go index 8e6d89f9ef..48bdf8eb15 100644 --- a/eth/handler.go +++ b/eth/handler.go @@ -17,6 +17,7 @@ package eth import ( + "context" "errors" "math" "math/big" @@ -69,7 +70,7 @@ type txPool interface { // Pending should return pending transactions. // The slice should be modifiable by the caller. - Pending(enforceTips bool) map[common.Address]types.Transactions + Pending(ctx context.Context, enforceTips bool) map[common.Address]types.Transactions // SubscribeNewTxsEvent should return an event subscription of // NewTxsEvent and send events to the given channel. 
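[Editor's note, not part of the patch] The eth/api_backend.go, eth/sync.go and eth/handler.go hunks above thread a context.Context through txPool.Pending. The short Go sketch below only illustrates how a caller of the updated interface looks; pendingSource and collectPending are hypothetical names introduced here for the example, while the flatten-the-map pattern itself mirrors what the patch does in eth/sync.go and miner/worker.go.

package example

import (
	"context"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
)

// pendingSource mirrors the context-aware Pending signature introduced in
// eth/handler.go above; it is declared locally so the sketch is self-contained.
type pendingSource interface {
	Pending(ctx context.Context, enforceTips bool) map[common.Address]types.Transactions
}

// collectPending flattens the per-account pending map into a single slice,
// passing the caller's context so the pool query can be bounded or cancelled.
func collectPending(ctx context.Context, pool pendingSource) types.Transactions {
	var txs types.Transactions
	for _, batch := range pool.Pending(ctx, false) {
		txs = append(txs, batch...)
	}
	return txs
}
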
diff --git a/eth/handler_test.go b/eth/handler_test.go index c6d7811d10..7a14619159 100644 --- a/eth/handler_test.go +++ b/eth/handler_test.go @@ -17,6 +17,7 @@ package eth import ( + "context" "math/big" "sort" "sync" @@ -92,7 +93,7 @@ func (p *testTxPool) AddRemotes(txs []*types.Transaction) []error { } // Pending returns all the transactions known to the pool -func (p *testTxPool) Pending(enforceTips bool) map[common.Address]types.Transactions { +func (p *testTxPool) Pending(ctx context.Context, enforceTips bool) map[common.Address]types.Transactions { p.lock.RLock() defer p.lock.RUnlock() diff --git a/eth/sync.go b/eth/sync.go index aa79b6181c..377acff95c 100644 --- a/eth/sync.go +++ b/eth/sync.go @@ -17,6 +17,7 @@ package eth import ( + "context" "errors" "math/big" "sync/atomic" @@ -44,20 +45,24 @@ func (h *handler) syncTransactions(p *eth.Peer) { // // TODO(karalabe): Figure out if we could get away with random order somehow var txs types.Transactions - pending := h.txpool.Pending(false) + + pending := h.txpool.Pending(context.Background(), false) for _, batch := range pending { txs = append(txs, batch...) } + if len(txs) == 0 { return } // The eth/65 protocol introduces proper transaction announcements, so instead // of dripping transactions across multiple peers, just send the entire list as // an announcement and let the remote side decide what they need (likely nothing). + hashes := make([]common.Hash, len(txs)) for i, tx := range txs { hashes[i] = tx.Hash() } + p.AsyncSendPooledTransactionHashes(hashes) } diff --git a/go.mod b/go.mod index 6090fcb59b..69fa5990bd 100644 --- a/go.mod +++ b/go.mod @@ -12,13 +12,13 @@ require ( github.com/aws/aws-sdk-go-v2/config v1.1.1 github.com/aws/aws-sdk-go-v2/credentials v1.1.1 github.com/aws/aws-sdk-go-v2/service/route53 v1.1.1 - github.com/btcsuite/btcd/btcec/v2 v2.1.2 + github.com/btcsuite/btcd/btcec/v2 v2.1.3 github.com/cespare/cp v0.1.0 github.com/cloudflare/cloudflare-go v0.14.0 github.com/consensys/gnark-crypto v0.4.1-0.20210426202927-39ac3d4b3f1f github.com/davecgh/go-spew v1.1.1 github.com/deckarep/golang-set v1.8.0 - github.com/docker/docker v1.4.2-0.20180625184442-8e610b2b55bf + github.com/docker/docker v1.6.1 github.com/dop251/goja v0.0.0-20211011172007-d99e4b8cbf48 github.com/edsrzf/mmap-go v1.0.0 github.com/fatih/color v1.7.0 diff --git a/go.sum b/go.sum index e28a4deb1f..181030d9d1 100644 --- a/go.sum +++ b/go.sum @@ -33,7 +33,6 @@ github.com/JekaMas/go-grpc-net-conn v0.0.0-20220708155319-6aff21f2d13d h1:RO27lg github.com/JekaMas/go-grpc-net-conn v0.0.0-20220708155319-6aff21f2d13d/go.mod h1:romz7UPgSYhfJkKOalzEEyV6sWtt/eAEm0nX2aOrod0= github.com/JekaMas/workerpool v1.1.5 h1:xmrx2Zyft95CEGiEqzDxiawptCIRZQ0zZDhTGDFOCaw= github.com/JekaMas/workerpool v1.1.5/go.mod h1:IoDWPpwMcA27qbuugZKeBslDrgX09lVmksuh9sjzbhc= -github.com/Masterminds/goutils v1.1.0 h1:zukEsf/1JZwCMgHiK3GZftabmxiCw4apj3a28RPBiVg= github.com/Masterminds/goutils v1.1.0/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU= github.com/Masterminds/goutils v1.1.1 h1:5nUrii3FMTL5diU80unEVvNevw1nH4+ZV4DSLVJLSYI= github.com/Masterminds/goutils v1.1.1/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU= @@ -88,6 +87,8 @@ github.com/bmizerany/pat v0.0.0-20170815010413-6226ea591a40/go.mod h1:8rLXio+Wji github.com/boltdb/bolt v1.3.1/go.mod h1:clJnj/oiGkjum5o1McbSZDSLxVThjynRyGBgiAx27Ps= github.com/btcsuite/btcd/btcec/v2 v2.1.2 h1:YoYoC9J0jwfukodSBMzZYUVQ8PTiYg4BnOWiJVzTmLs= github.com/btcsuite/btcd/btcec/v2 v2.1.2/go.mod h1:ctjw4H1kknNJmRN4iP1R7bTQ+v3GJkZBd6mui8ZsAZE= 
+github.com/btcsuite/btcd/btcec/v2 v2.1.3 h1:xM/n3yIhHAhHy04z4i43C8p4ehixJZMsnrVJkgl+MTE= +github.com/btcsuite/btcd/btcec/v2 v2.1.3/go.mod h1:ctjw4H1kknNJmRN4iP1R7bTQ+v3GJkZBd6mui8ZsAZE= github.com/btcsuite/btcd/chaincfg/chainhash v1.0.0 h1:MSskdM4/xJYcFzy0altH/C/xHopifpWzHUi1JeVI34Q= github.com/c-bata/go-prompt v0.2.2/go.mod h1:VzqtzE2ksDBcdln8G7mk2RX9QyGjH+OVqOCSiVIqS34= github.com/cenkalti/backoff/v4 v4.1.1 h1:G2HAfAmvm/GcKan2oOQpBXOd2tT2G57ZnZGWa1PxPBQ= @@ -140,6 +141,8 @@ github.com/dnaeon/go-vcr v1.2.0 h1:zHCHvJYTMh1N7xnV7zf1m1GPBF9Ad0Jk/whtQ1663qI= github.com/dnaeon/go-vcr v1.2.0/go.mod h1:R4UdLID7HZT3taECzJs4YgbbH6PIGXB6W/sc5OLb6RQ= github.com/docker/docker v1.4.2-0.20180625184442-8e610b2b55bf h1:sh8rkQZavChcmakYiSlqu2425CHyFXLZZnvm7PDpU8M= github.com/docker/docker v1.4.2-0.20180625184442-8e610b2b55bf/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk= +github.com/docker/docker v1.6.1 h1:4xYASHy5cScPkLD7PO0uTmnVc860m9NarPN1X8zeMe8= +github.com/docker/docker v1.6.1/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk= github.com/dop251/goja v0.0.0-20211011172007-d99e4b8cbf48 h1:iZOop7pqsg+56twTopWgwCGxdB5SI2yDO8Ti7eTRliQ= github.com/dop251/goja v0.0.0-20211011172007-d99e4b8cbf48/go.mod h1:R9ET47fwRVRPZnOGvHxxhuZcbrMCuiqOz3Rlrh4KSnk= github.com/dop251/goja_nodejs v0.0.0-20210225215109-d91c329300e7/go.mod h1:hn7BA7c8pLvoGndExHudxTDKZ84Pyvv+90pbBjbTz0Y= @@ -573,8 +576,6 @@ golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzB golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= -golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 h1:6zppjxzCulZykYSLyVDYbneBfbaBIQPYMevg0bEwv2s= -golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= golang.org/x/mod v0.8.0 h1:LUYupSeNrTNCGzR/hVBk2NHZO4hXcVaW1k4Qx7rjPx8= golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= @@ -606,8 +607,6 @@ golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM= golang.org/x/net v0.0.0-20210610132358-84b48f89b13b/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= -golang.org/x/net v0.0.0-20220728030405-41545e8bf201 h1:bvOltf3SADAfG05iRml8lAB3qjoEX5RCyN4K6G5v3N0= -golang.org/x/net v0.0.0-20220728030405-41545e8bf201/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk= golang.org/x/net v0.8.0 h1:Zrh2ngAOFYneWTAIAPethzeaQLuHwhuBkuV6ZiRnUaQ= golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= @@ -623,8 +622,6 @@ golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJ golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync 
v0.0.0-20220722155255-886fb9371eb4 h1:uVc8UZUe6tr40fFVnUP5Oj+veunVezqYl9z7DYw9xzw= -golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.1.0 h1:wsuoTGHzEhffawBOhz5CYhcrV4IdKZbEyZjBMuTp12o= golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= @@ -668,16 +665,10 @@ golang.org/x/sys v0.0.0-20210420205809-ac73e9fd8988/go.mod h1:h1NjWce9XRLGQEsW7w golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210423185535-09eb48e85fd7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10 h1:WIoqL4EROvwiPdUtaip4VcDdpZ4kha7wBWZrbVKCIZg= -golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20221013171732-95e765b1cc43 h1:OK7RB6t2WQX54srQQYSXMW8dF5C6/8+oA/s5QBmmto4= -golang.org/x/sys v0.0.0-20221013171732-95e765b1cc43/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.6.0 h1:MVltZSvRTcU2ljQOhs94SXPftV6DCNnZViHeQps87pQ= golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= -golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 h1:JGgROgKl9N8DuW20oFS5gxc+lE67/N3FcwmBPMe7ArY= -golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/term v0.6.0 h1:clScbb1cHjoCkyRbWwBEUZ5H/tIFu5TAXIqaZD0Gcjw= golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= @@ -726,8 +717,6 @@ golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4f golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0= golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= -golang.org/x/tools v0.1.12 h1:VveCTK38A2rkS8ZqFY25HIDFscX5X9OoEhJd3quQmXU= -golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= golang.org/x/tools v0.6.0 h1:BOw41kyTf3PuCW1pVQf8+Cyg8pMlkYB1oo9iJ6D/lKM= golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= diff --git a/internal/cli/server/config.go b/internal/cli/server/config.go index ca7a235ace..b733f36988 100644 --- a/internal/cli/server/config.go +++ b/internal/cli/server/config.go @@ -540,7 +540,7 @@ func DefaultConfig() *Config { VHost: []string{"localhost"}, }, HttpTimeout: &HttpTimeouts{ - ReadTimeout: 30 * time.Second, + ReadTimeout: 10 * time.Second, WriteTimeout: 30 * time.Second, IdleTimeout: 120 * time.Second, }, diff --git a/internal/cli/server/pprof/pprof.go b/internal/cli/server/pprof/pprof.go index 44034f3bb8..69056bd0fb 100644 --- a/internal/cli/server/pprof/pprof.go +++ b/internal/cli/server/pprof/pprof.go @@ -61,6 +61,28 @@ func CPUProfile(ctx 
context.Context, sec int) ([]byte, map[string]string, error) }, nil } +// CPUProfile generates a CPU Profile for a given duration +func CPUProfileWithChannel(done chan bool) ([]byte, map[string]string, error) { + var buf bytes.Buffer + if err := pprof.StartCPUProfile(&buf); err != nil { + return nil, nil, err + } + + select { + case <-done: + case <-time.After(30 * time.Second): + } + + pprof.StopCPUProfile() + + return buf.Bytes(), + map[string]string{ + "X-Content-Type-Options": "nosniff", + "Content-Type": "application/octet-stream", + "Content-Disposition": `attachment; filename="profile"`, + }, nil +} + // Trace runs a trace profile for a given duration func Trace(ctx context.Context, sec int) ([]byte, map[string]string, error) { if sec <= 0 { diff --git a/internal/ethapi/api.go b/internal/ethapi/api.go index 49b1610987..4755033a05 100644 --- a/internal/ethapi/api.go +++ b/internal/ethapi/api.go @@ -21,6 +21,7 @@ import ( "errors" "fmt" "math/big" + "runtime" "strings" "time" @@ -2221,6 +2222,21 @@ func (api *PrivateDebugAPI) PurgeCheckpointWhitelist() { api.b.PurgeCheckpointWhitelist() } +// GetTraceStack returns the current trace stack +func (api *PrivateDebugAPI) GetTraceStack() string { + buf := make([]byte, 1024) + + for { + n := runtime.Stack(buf, true) + + if n < len(buf) { + return string(buf) + } + + buf = make([]byte, 2*len(buf)) + } +} + // PublicNetAPI offers network related RPC methods type PublicNetAPI struct { net *p2p.Server diff --git a/internal/web3ext/web3ext.go b/internal/web3ext/web3ext.go index c823f096d6..38ce69e33e 100644 --- a/internal/web3ext/web3ext.go +++ b/internal/web3ext/web3ext.go @@ -512,6 +512,11 @@ web3._extend({ call: 'debug_purgeCheckpointWhitelist', params: 0, }), + new web3._extend.Method({ + name: 'getTraceStack', + call: 'debug_getTraceStack', + params: 0, + }), ], properties: [] }); diff --git a/les/handler_test.go b/les/handler_test.go index 3ceabdf8ec..af3324b042 100644 --- a/les/handler_test.go +++ b/les/handler_test.go @@ -617,7 +617,7 @@ func testTransactionStatus(t *testing.T, protocol int) { sendRequest(rawPeer.app, GetTxStatusMsg, reqID, []common.Hash{tx.Hash()}) } if err := expectResponse(rawPeer.app, TxStatusMsg, reqID, testBufLimit, []light.TxStatus{expStatus}); err != nil { - t.Errorf("transaction status mismatch") + t.Error("transaction status mismatch", err) } } signer := types.HomesteadSigner{} diff --git a/les/server_requests.go b/les/server_requests.go index 3595a6ab38..b31c11c9d0 100644 --- a/les/server_requests.go +++ b/les/server_requests.go @@ -507,25 +507,39 @@ func handleSendTx(msg Decoder) (serveRequestFn, uint64, uint64, error) { if err := msg.Decode(&r); err != nil { return nil, 0, 0, err } + amount := uint64(len(r.Txs)) + return func(backend serverBackend, p *clientPeer, waitOrStop func() bool) *reply { stats := make([]light.TxStatus, len(r.Txs)) + + var ( + err error + addFn func(transaction *types.Transaction) error + ) + for i, tx := range r.Txs { if i != 0 && !waitOrStop() { return nil } + hash := tx.Hash() stats[i] = txStatus(backend, hash) + if stats[i].Status == core.TxStatusUnknown { - addFn := backend.TxPool().AddRemotes + addFn = backend.TxPool().AddRemote + // Add txs synchronously for testing purpose if backend.AddTxsSync() { - addFn = backend.TxPool().AddRemotesSync + addFn = backend.TxPool().AddRemoteSync } - if errs := addFn([]*types.Transaction{tx}); errs[0] != nil { - stats[i].Error = errs[0].Error() + + if err = addFn(tx); err != nil { + stats[i].Error = err.Error() + continue } + stats[i] = 
txStatus(backend, hash) } } diff --git a/miner/worker.go b/miner/worker.go index 08acc56d55..6db906460e 100644 --- a/miner/worker.go +++ b/miner/worker.go @@ -17,10 +17,15 @@ package miner import ( + "bytes" "context" "errors" "fmt" "math/big" + "os" + "runtime" + "runtime/pprof" + ptrace "runtime/trace" "sync" "sync/atomic" "time" @@ -31,6 +36,7 @@ import ( "go.opentelemetry.io/otel/trace" "github.com/ethereum/go-ethereum/common" + cmath "github.com/ethereum/go-ethereum/common/math" "github.com/ethereum/go-ethereum/common/tracing" "github.com/ethereum/go-ethereum/consensus" "github.com/ethereum/go-ethereum/consensus/misc" @@ -39,6 +45,7 @@ import ( "github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/event" "github.com/ethereum/go-ethereum/log" + "github.com/ethereum/go-ethereum/metrics" "github.com/ethereum/go-ethereum/params" "github.com/ethereum/go-ethereum/trie" ) @@ -83,6 +90,12 @@ const ( staleThreshold = 7 ) +// metrics gauge to track total and empty blocks sealed by a miner +var ( + sealedBlocksCounter = metrics.NewRegisteredCounter("worker/sealedBlocks", nil) + sealedEmptyBlocksCounter = metrics.NewRegisteredCounter("worker/sealedEmptyBlocks", nil) +) + // environment is the worker's current environment and holds all // information of the sealing block generation. type environment struct { @@ -257,6 +270,8 @@ type worker struct { skipSealHook func(*task) bool // Method to decide whether skipping the sealing. fullTaskHook func() // Method to call before pushing the full sealing task. resubmitHook func(time.Duration, time.Duration) // Method to call upon updating resubmitting interval. + + profileCount *int32 // Global count for profiling } //nolint:staticcheck @@ -285,6 +300,7 @@ func newWorker(config *Config, chainConfig *params.ChainConfig, engine consensus resubmitIntervalCh: make(chan time.Duration), resubmitAdjustCh: make(chan *intervalAdjust, resubmitAdjustChanSize), } + worker.profileCount = new(int32) // Subscribe NewTxsEvent for tx pool worker.txsSub = eth.TxPool().SubscribeNewTxsEvent(worker.txsCh) // Subscribe events for blockchain @@ -560,9 +576,11 @@ func (w *worker) mainLoop(ctx context.Context) { for { select { case req := <-w.newWorkCh: + //nolint:contextcheck w.commitWork(req.ctx, req.interrupt, req.noempty, req.timestamp) case req := <-w.getWorkCh: + //nolint:contextcheck block, err := w.generateWork(req.ctx, req.params) if err != nil { req.err = err @@ -622,13 +640,17 @@ func (w *worker) mainLoop(ctx context.Context) { if gp := w.current.gasPool; gp != nil && gp.Gas() < params.TxGas { continue } + txs := make(map[common.Address]types.Transactions) + for _, tx := range ev.Txs { acc, _ := types.Sender(w.current.signer, tx) txs[acc] = append(txs[acc], tx) } - txset := types.NewTransactionsByPriceAndNonce(w.current.signer, txs, w.current.header.BaseFee) + + txset := types.NewTransactionsByPriceAndNonce(w.current.signer, txs, cmath.FromBig(w.current.header.BaseFee)) tcount := w.current.tcount + w.commitTransactions(w.current, txset, nil) // Only update the snapshot if any new transactions were added @@ -758,7 +780,7 @@ func (w *worker) resultLoop() { err error ) - tracing.Exec(task.ctx, "resultLoop", func(ctx context.Context, span trace.Span) { + tracing.Exec(task.ctx, "", "resultLoop", func(ctx context.Context, span trace.Span) { for i, taskReceipt := range task.receipts { receipt := new(types.Receipt) receipts[i] = receipt @@ -782,7 +804,7 @@ func (w *worker) resultLoop() { } // Commit block and state to database. 
- tracing.Exec(ctx, "resultLoop.WriteBlockAndSetHead", func(ctx context.Context, span trace.Span) { + tracing.Exec(ctx, "", "resultLoop.WriteBlockAndSetHead", func(ctx context.Context, span trace.Span) { _, err = w.chain.WriteBlockAndSetHead(ctx, block, receipts, logs, task.state, true) }) @@ -808,6 +830,12 @@ func (w *worker) resultLoop() { // Broadcast the block and announce chain insertion event w.mux.Post(core.NewMinedBlockEvent{Block: block}) + sealedBlocksCounter.Inc(1) + + if block.Transactions().Len() == 0 { + sealedEmptyBlocksCounter.Inc(1) + } + // Insert the block into the set of pending ones to resultLoop for confirmations w.unconfirmed.Insert(block.NumberU64(), block.Hash()) case <-w.exitCh: @@ -965,7 +993,10 @@ func (w *worker) commitTransactions(env *environment, txs *types.TransactionsByP // Start executing the transaction env.state.Prepare(tx.Hash(), env.tcount) + start := time.Now() + logs, err := w.commitTransaction(env, tx) + switch { case errors.Is(err, core.ErrGasLimitReached): // Pop the current out-of-gas transaction without shifting in the next from the account @@ -987,6 +1018,7 @@ func (w *worker) commitTransactions(env *environment, txs *types.TransactionsByP coalescedLogs = append(coalescedLogs, logs...) env.tcount++ txs.Shift() + log.Info("Committed new tx", "tx hash", tx.Hash(), "from", from, "to", tx.To(), "nonce", tx.Nonce(), "gas", tx.Gas(), "gasPrice", tx.GasPrice(), "value", tx.Value(), "time spent", time.Since(start)) case errors.Is(err, core.ErrTxTypeNotSupported): // Pop the unsupported transaction without shifting in the next from the account @@ -1077,7 +1109,7 @@ func (w *worker) prepareWork(genParams *generateParams) (*environment, error) { } // Set baseFee and GasLimit if we are on an EIP-1559 chain if w.chainConfig.IsLondon(header.Number) { - header.BaseFee = misc.CalcBaseFee(w.chainConfig, parent.Header()) + header.BaseFee = misc.CalcBaseFeeUint(w.chainConfig, parent.Header()).ToBig() if !w.chainConfig.IsLondon(parent.Number()) { parentGasLimit := parent.GasLimit() * params.ElasticityMultiplier header.GasLimit = core.CalcGasLimit(parentGasLimit, w.config.GasCeil) @@ -1117,9 +1149,75 @@ func (w *worker) prepareWork(genParams *generateParams) (*environment, error) { return env, nil } +func startProfiler(profile string, filepath string, number uint64) (func() error, error) { + var ( + buf bytes.Buffer + err error + ) + + closeFn := func() {} + + switch profile { + case "cpu": + err = pprof.StartCPUProfile(&buf) + + if err == nil { + closeFn = func() { + pprof.StopCPUProfile() + } + } + case "trace": + err = ptrace.Start(&buf) + + if err == nil { + closeFn = func() { + ptrace.Stop() + } + } + case "heap": + runtime.GC() + + err = pprof.WriteHeapProfile(&buf) + default: + log.Info("Incorrect profile name") + } + + if err != nil { + return func() error { + closeFn() + return nil + }, err + } + + closeFnNew := func() error { + var err error + + closeFn() + + if buf.Len() == 0 { + return nil + } + + f, err := os.Create(filepath + "/" + profile + "-" + fmt.Sprint(number) + ".prof") + if err != nil { + return err + } + + defer f.Close() + + _, err = f.Write(buf.Bytes()) + + return err + } + + return closeFnNew, nil +} + // fillTransactions retrieves the pending transactions from the txpool and fills them // into the given sealing block. The transaction selection and ordering strategy can // be customized with the plugin in the future. 
+// +//nolint:gocognit func (w *worker) fillTransactions(ctx context.Context, interrupt *int32, env *environment) { ctx, span := tracing.StartSpan(ctx, "fillTransactions") defer tracing.EndSpan(span) @@ -1134,10 +1232,76 @@ func (w *worker) fillTransactions(ctx context.Context, interrupt *int32, env *en remoteTxs map[common.Address]types.Transactions ) - tracing.Exec(ctx, "worker.SplittingTransactions", func(ctx context.Context, span trace.Span) { - pending := w.eth.TxPool().Pending(true) + // TODO: move to config or RPC + const profiling = false + + if profiling { + doneCh := make(chan struct{}) + + defer func() { + close(doneCh) + }() + + go func(number uint64) { + closeFn := func() error { + return nil + } + + for { + select { + case <-time.After(150 * time.Millisecond): + // Check if we've not crossed limit + if attempt := atomic.AddInt32(w.profileCount, 1); attempt >= 10 { + log.Info("Completed profiling", "attempt", attempt) + + return + } + + log.Info("Starting profiling in fill transactions", "number", number) + + dir, err := os.MkdirTemp("", fmt.Sprintf("bor-traces-%s-", time.Now().UTC().Format("2006-01-02-150405Z"))) + if err != nil { + log.Error("Error in profiling", "path", dir, "number", number, "err", err) + return + } + + // grab the cpu profile + closeFnInternal, err := startProfiler("cpu", dir, number) + if err != nil { + log.Error("Error in profiling", "path", dir, "number", number, "err", err) + return + } + + closeFn = func() error { + err := closeFnInternal() + + log.Info("Completed profiling", "path", dir, "number", number, "error", err) + + return nil + } + + case <-doneCh: + err := closeFn() + + if err != nil { + log.Info("closing fillTransactions", "number", number, "error", err) + } + + return + } + } + }(env.header.Number.Uint64()) + } + + tracing.Exec(ctx, "", "worker.SplittingTransactions", func(ctx context.Context, span trace.Span) { + + prePendingTime := time.Now() + + pending := w.eth.TxPool().Pending(ctx, true) remoteTxs = pending + postPendingTime := time.Now() + for _, account := range w.eth.TxPool().Locals() { if txs := remoteTxs[account]; len(txs) > 0 { delete(remoteTxs, account) @@ -1145,6 +1309,8 @@ func (w *worker) fillTransactions(ctx context.Context, interrupt *int32, env *en } } + postLocalsTime := time.Now() + localTxsCount = len(localTxs) remoteTxsCount = len(remoteTxs) @@ -1152,6 +1318,8 @@ func (w *worker) fillTransactions(ctx context.Context, interrupt *int32, env *en span, attribute.Int("len of local txs", localTxsCount), attribute.Int("len of remote txs", remoteTxsCount), + attribute.String("time taken by Pending()", fmt.Sprintf("%v", postPendingTime.Sub(prePendingTime))), + attribute.String("time taken by Locals()", fmt.Sprintf("%v", postLocalsTime.Sub(postPendingTime))), ) }) @@ -1164,8 +1332,8 @@ func (w *worker) fillTransactions(ctx context.Context, interrupt *int32, env *en if localTxsCount > 0 { var txs *types.TransactionsByPriceAndNonce - tracing.Exec(ctx, "worker.LocalTransactionsByPriceAndNonce", func(ctx context.Context, span trace.Span) { - txs = types.NewTransactionsByPriceAndNonce(env.signer, localTxs, env.header.BaseFee) + tracing.Exec(ctx, "", "worker.LocalTransactionsByPriceAndNonce", func(ctx context.Context, span trace.Span) { + txs = types.NewTransactionsByPriceAndNonce(env.signer, localTxs, cmath.FromBig(env.header.BaseFee)) tracing.SetAttributes( span, @@ -1173,7 +1341,7 @@ func (w *worker) fillTransactions(ctx context.Context, interrupt *int32, env *en ) }) - tracing.Exec(ctx, "worker.LocalCommitTransactions", func(ctx 
context.Context, span trace.Span) { + tracing.Exec(ctx, "", "worker.LocalCommitTransactions", func(ctx context.Context, span trace.Span) { committed = w.commitTransactions(env, txs, interrupt) }) @@ -1187,8 +1355,8 @@ func (w *worker) fillTransactions(ctx context.Context, interrupt *int32, env *en if remoteTxsCount > 0 { var txs *types.TransactionsByPriceAndNonce - tracing.Exec(ctx, "worker.RemoteTransactionsByPriceAndNonce", func(ctx context.Context, span trace.Span) { - txs = types.NewTransactionsByPriceAndNonce(env.signer, remoteTxs, env.header.BaseFee) + tracing.Exec(ctx, "", "worker.RemoteTransactionsByPriceAndNonce", func(ctx context.Context, span trace.Span) { + txs = types.NewTransactionsByPriceAndNonce(env.signer, remoteTxs, cmath.FromBig(env.header.BaseFee)) tracing.SetAttributes( span, @@ -1196,7 +1364,7 @@ func (w *worker) fillTransactions(ctx context.Context, interrupt *int32, env *en ) }) - tracing.Exec(ctx, "worker.RemoteCommitTransactions", func(ctx context.Context, span trace.Span) { + tracing.Exec(ctx, "", "worker.RemoteCommitTransactions", func(ctx context.Context, span trace.Span) { committed = w.commitTransactions(env, txs, interrupt) }) @@ -1237,7 +1405,7 @@ func (w *worker) commitWork(ctx context.Context, interrupt *int32, noempty bool, err error ) - tracing.Exec(ctx, "worker.prepareWork", func(ctx context.Context, span trace.Span) { + tracing.Exec(ctx, "", "worker.prepareWork", func(ctx context.Context, span trace.Span) { // Set the coinbase if the worker is running or it's required var coinbase common.Address if w.isRunning() { diff --git a/packaging/templates/mainnet-v1/archive/config.toml b/packaging/templates/mainnet-v1/archive/config.toml index 5491c784ef..355d07051d 100644 --- a/packaging/templates/mainnet-v1/archive/config.toml +++ b/packaging/templates/mainnet-v1/archive/config.toml @@ -86,7 +86,7 @@ gcmode = "archive" # vhosts = ["*"] # corsdomain = ["*"] # [jsonrpc.timeouts] - # read = "30s" + # read = "10s" # write = "30s" # idle = "2m0s" diff --git a/packaging/templates/mainnet-v1/sentry/sentry/bor/config.toml b/packaging/templates/mainnet-v1/sentry/sentry/bor/config.toml index 90df84dc07..0299f59bdc 100644 --- a/packaging/templates/mainnet-v1/sentry/sentry/bor/config.toml +++ b/packaging/templates/mainnet-v1/sentry/sentry/bor/config.toml @@ -86,7 +86,7 @@ syncmode = "full" # vhosts = ["*"] # corsdomain = ["*"] # [jsonrpc.timeouts] - # read = "30s" + # read = "10s" # write = "30s" # idle = "2m0s" diff --git a/packaging/templates/mainnet-v1/sentry/validator/bor/config.toml b/packaging/templates/mainnet-v1/sentry/validator/bor/config.toml index 9e2d80fd2a..36b14cd263 100644 --- a/packaging/templates/mainnet-v1/sentry/validator/bor/config.toml +++ b/packaging/templates/mainnet-v1/sentry/validator/bor/config.toml @@ -88,7 +88,7 @@ syncmode = "full" # vhosts = ["*"] # corsdomain = ["*"] # [jsonrpc.timeouts] - # read = "30s" + # read = "10s" # write = "30s" # idle = "2m0s" diff --git a/packaging/templates/mainnet-v1/without-sentry/bor/config.toml b/packaging/templates/mainnet-v1/without-sentry/bor/config.toml index 1e5fd67762..cfba7fb181 100644 --- a/packaging/templates/mainnet-v1/without-sentry/bor/config.toml +++ b/packaging/templates/mainnet-v1/without-sentry/bor/config.toml @@ -88,7 +88,7 @@ syncmode = "full" # vhosts = ["*"] # corsdomain = ["*"] # [jsonrpc.timeouts] - # read = "30s" + # read = "10s" # write = "30s" # idle = "2m0s" diff --git a/packaging/templates/testnet-v4/archive/config.toml b/packaging/templates/testnet-v4/archive/config.toml index 
fb9ffd0a17..71d360faf6 100644 --- a/packaging/templates/testnet-v4/archive/config.toml +++ b/packaging/templates/testnet-v4/archive/config.toml @@ -86,7 +86,7 @@ gcmode = "archive" # vhosts = ["*"] # corsdomain = ["*"] # [jsonrpc.timeouts] - # read = "30s" + # read = "10s" # write = "30s" # idle = "2m0s" diff --git a/packaging/templates/testnet-v4/sentry/sentry/bor/config.toml b/packaging/templates/testnet-v4/sentry/sentry/bor/config.toml index 9884c0eccc..124b34f09c 100644 --- a/packaging/templates/testnet-v4/sentry/sentry/bor/config.toml +++ b/packaging/templates/testnet-v4/sentry/sentry/bor/config.toml @@ -86,7 +86,7 @@ syncmode = "full" # vhosts = ["*"] # corsdomain = ["*"] # [jsonrpc.timeouts] - # read = "30s" + # read = "10s" # write = "30s" # idle = "2m0s" diff --git a/packaging/templates/testnet-v4/sentry/validator/bor/config.toml b/packaging/templates/testnet-v4/sentry/validator/bor/config.toml index 49c47fedd4..bfebe422ca 100644 --- a/packaging/templates/testnet-v4/sentry/validator/bor/config.toml +++ b/packaging/templates/testnet-v4/sentry/validator/bor/config.toml @@ -88,7 +88,7 @@ syncmode = "full" # vhosts = ["*"] # corsdomain = ["*"] # [jsonrpc.timeouts] - # read = "30s" + # read = "10s" # write = "30s" # idle = "2m0s" diff --git a/packaging/templates/testnet-v4/without-sentry/bor/config.toml b/packaging/templates/testnet-v4/without-sentry/bor/config.toml index 2fb83a6ae2..2f7710d0d8 100644 --- a/packaging/templates/testnet-v4/without-sentry/bor/config.toml +++ b/packaging/templates/testnet-v4/without-sentry/bor/config.toml @@ -88,7 +88,7 @@ syncmode = "full" # vhosts = ["*"] # corsdomain = ["*"] # [jsonrpc.timeouts] - # read = "30s" + # read = "10s" # write = "30s" # idle = "2m0s" diff --git a/rpc/http.go b/rpc/http.go index 18404c060a..09594d0280 100644 --- a/rpc/http.go +++ b/rpc/http.go @@ -104,7 +104,7 @@ type HTTPTimeouts struct { // DefaultHTTPTimeouts represents the default timeout values used if further // configuration is not provided. 
var DefaultHTTPTimeouts = HTTPTimeouts{ - ReadTimeout: 30 * time.Second, + ReadTimeout: 10 * time.Second, WriteTimeout: 30 * time.Second, IdleTimeout: 120 * time.Second, } diff --git a/tests/init_test.go b/tests/init_test.go index 1c6841e030..5e32f20abf 100644 --- a/tests/init_test.go +++ b/tests/init_test.go @@ -141,9 +141,6 @@ func (tm *testMatcher) findSkip(name string) (reason string, skipload bool) { isWin32 := runtime.GOARCH == "386" && runtime.GOOS == "windows" for _, re := range tm.slowpat { if re.MatchString(name) { - if testing.Short() { - return "skipped in -short mode", false - } if isWin32 { return "skipped on 32bit windows", false } From 92cd006df9598087ea13f73cc9c6cc482c131557 Mon Sep 17 00:00:00 2001 From: Raneet Debnath <35629432+Raneet10@users.noreply.github.com> Date: Wed, 15 Mar 2023 18:28:25 +0530 Subject: [PATCH 19/29] packaging,params: bump to v0.3.6 (#782) --- packaging/templates/package_scripts/control | 2 +- packaging/templates/package_scripts/control.arm64 | 2 +- packaging/templates/package_scripts/control.profile.amd64 | 2 +- packaging/templates/package_scripts/control.profile.arm64 | 2 +- packaging/templates/package_scripts/control.validator | 2 +- packaging/templates/package_scripts/control.validator.arm64 | 2 +- params/version.go | 2 +- 7 files changed, 7 insertions(+), 7 deletions(-) diff --git a/packaging/templates/package_scripts/control b/packaging/templates/package_scripts/control index 4c50800fcf..858f40a0a6 100644 --- a/packaging/templates/package_scripts/control +++ b/packaging/templates/package_scripts/control @@ -1,5 +1,5 @@ Source: bor -Version: 0.3.5-stable +Version: 0.3.6 Section: develop Priority: standard Maintainer: Polygon diff --git a/packaging/templates/package_scripts/control.arm64 b/packaging/templates/package_scripts/control.arm64 index 11645c582d..3b90affc2d 100644 --- a/packaging/templates/package_scripts/control.arm64 +++ b/packaging/templates/package_scripts/control.arm64 @@ -1,5 +1,5 @@ Source: bor -Version: 0.3.5-stable +Version: 0.3.6 Section: develop Priority: standard Maintainer: Polygon diff --git a/packaging/templates/package_scripts/control.profile.amd64 b/packaging/templates/package_scripts/control.profile.amd64 index 2bfa82a23a..41ae856d2b 100644 --- a/packaging/templates/package_scripts/control.profile.amd64 +++ b/packaging/templates/package_scripts/control.profile.amd64 @@ -1,5 +1,5 @@ Source: bor-profile -Version: 0.3.5-stable +Version: 0.3.6 Section: develop Priority: standard Maintainer: Polygon diff --git a/packaging/templates/package_scripts/control.profile.arm64 b/packaging/templates/package_scripts/control.profile.arm64 index 504c6d3ea2..8603c454b4 100644 --- a/packaging/templates/package_scripts/control.profile.arm64 +++ b/packaging/templates/package_scripts/control.profile.arm64 @@ -1,5 +1,5 @@ Source: bor-profile -Version: 0.3.5-stable +Version: 0.3.6 Section: develop Priority: standard Maintainer: Polygon diff --git a/packaging/templates/package_scripts/control.validator b/packaging/templates/package_scripts/control.validator index 3d11d78542..bda34ca58b 100644 --- a/packaging/templates/package_scripts/control.validator +++ b/packaging/templates/package_scripts/control.validator @@ -1,5 +1,5 @@ Source: bor-profile -Version: 0.3.5-stable +Version: 0.3.6 Section: develop Priority: standard Maintainer: Polygon diff --git a/packaging/templates/package_scripts/control.validator.arm64 b/packaging/templates/package_scripts/control.validator.arm64 index a6e8753812..9ce97855d0 100644 --- 
a/packaging/templates/package_scripts/control.validator.arm64 +++ b/packaging/templates/package_scripts/control.validator.arm64 @@ -1,5 +1,5 @@ Source: bor-profile -Version: 0.3.5-stable +Version: 0.3.6 Section: develop Priority: standard Maintainer: Polygon diff --git a/params/version.go b/params/version.go index 2054cd18c9..148810632f 100644 --- a/params/version.go +++ b/params/version.go @@ -23,7 +23,7 @@ import ( const ( VersionMajor = 0 // Major version component of the current release VersionMinor = 3 // Minor version component of the current release - VersionPatch = 5 // Patch version component of the current release + VersionPatch = 6 // Patch version component of the current release VersionMeta = "stable" // Version metadata to append to the version string ) From 72a222091cecf05998e5c78323cc009240df1a4d Mon Sep 17 00:00:00 2001 From: Jerry Date: Thu, 23 Mar 2023 00:29:36 -0700 Subject: [PATCH 20/29] v0.3.6 fix (#787) * Fix get validator set in header verifier * chg : commit tx logs from info to debug (#673) * chg : commit tx logs from info to debug * fix : minor changes * chg : miner : commitTransactions-stats moved from info to debug * lint : fix linters * refactor logging * miner : chg : UnauthorizedSignerError to debug * lint : fix lint * fix : log.Logger interface compatibility --------- Co-authored-by: Evgeny Danienko <6655321@bk.ru> * Remove unnecessary sorting of valset from header in verification * dev: chg: version bump --------- Co-authored-by: SHIVAM SHARMA Co-authored-by: Evgeny Danienko <6655321@bk.ru> Co-authored-by: marcello33 --- consensus/bor/bor.go | 20 ++------ consensus/bor/heimdall/span/spanner.go | 13 ++++-- consensus/bor/span.go | 4 +- consensus/bor/span_mock.go | 46 +++++++++++++------ core/tests/blockchain_repair_test.go | 2 +- internal/testlog/testlog.go | 32 +++++++++++++ log/logger.go | 41 +++++++++++++++++ log/root.go | 32 +++++++++++++ miner/fake_miner.go | 2 +- miner/worker.go | 35 ++++++++++++-- miner/worker_test.go | 4 +- packaging/templates/package_scripts/control | 2 +- .../templates/package_scripts/control.arm64 | 2 +- .../package_scripts/control.profile.amd64 | 2 +- .../package_scripts/control.profile.arm64 | 2 +- .../package_scripts/control.validator | 2 +- .../package_scripts/control.validator.arm64 | 2 +- params/version.go | 2 +- tests/bor/helper.go | 3 +- 19 files changed, 197 insertions(+), 51 deletions(-) diff --git a/consensus/bor/bor.go b/consensus/bor/bor.go index a640526f22..e01b26b688 100644 --- a/consensus/bor/bor.go +++ b/consensus/bor/bor.go @@ -464,18 +464,10 @@ func (c *Bor) verifyCascadingFields(chain consensus.ChainHeaderReader, header *t // Verify the validator list match the local contract if IsSprintStart(number+1, c.config.CalculateSprint(number)) { - parentBlockNumber := number - 1 - - var newValidators []*valset.Validator - - var err error - - for newValidators, err = c.spanner.GetCurrentValidators(context.Background(), chain.GetHeaderByNumber(parentBlockNumber).Hash(), number+1); err != nil && parentBlockNumber > 0; { - parentBlockNumber-- - } + newValidators, err := c.spanner.GetCurrentValidatorsByBlockNrOrHash(context.Background(), rpc.BlockNumberOrHashWithNumber(rpc.LatestBlockNumber), number+1) if err != nil { - return errUnknownValidators + return err } sort.Sort(valset.ValidatorsByAddress(newValidators)) @@ -486,8 +478,6 @@ func (c *Bor) verifyCascadingFields(chain consensus.ChainHeaderReader, header *t return err } - sort.Sort(valset.ValidatorsByAddress(headerVals)) - if len(newValidators) != len(headerVals) { 
return errInvalidSpanValidators } @@ -563,7 +553,7 @@ func (c *Bor) snapshot(chain consensus.ChainHeaderReader, number uint64, hash co hash := checkpoint.Hash() // get validators and current span - validators, err := c.spanner.GetCurrentValidators(context.Background(), hash, number+1) + validators, err := c.spanner.GetCurrentValidatorsByHash(context.Background(), hash, number+1) if err != nil { return nil, err } @@ -733,7 +723,7 @@ func (c *Bor) Prepare(chain consensus.ChainHeaderReader, header *types.Header) e // get validator set if number if IsSprintStart(number+1, c.config.CalculateSprint(number)) { - newValidators, err := c.spanner.GetCurrentValidators(context.Background(), header.ParentHash, number+1) + newValidators, err := c.spanner.GetCurrentValidatorsByHash(context.Background(), header.ParentHash, number+1) if err != nil { return errUnknownValidators } @@ -1263,7 +1253,7 @@ func (c *Bor) SetHeimdallClient(h IHeimdallClient) { } func (c *Bor) GetCurrentValidators(ctx context.Context, headerHash common.Hash, blockNumber uint64) ([]*valset.Validator, error) { - return c.spanner.GetCurrentValidators(ctx, headerHash, blockNumber) + return c.spanner.GetCurrentValidatorsByHash(ctx, headerHash, blockNumber) } // diff --git a/consensus/bor/heimdall/span/spanner.go b/consensus/bor/heimdall/span/spanner.go index 5001b0f738..9307a0337e 100644 --- a/consensus/bor/heimdall/span/spanner.go +++ b/consensus/bor/heimdall/span/spanner.go @@ -89,7 +89,7 @@ func (c *ChainSpanner) GetCurrentSpan(ctx context.Context, headerHash common.Has } // GetCurrentValidators get current validators -func (c *ChainSpanner) GetCurrentValidators(ctx context.Context, headerHash common.Hash, blockNumber uint64) ([]*valset.Validator, error) { +func (c *ChainSpanner) GetCurrentValidatorsByBlockNrOrHash(ctx context.Context, blockNrOrHash rpc.BlockNumberOrHash, blockNumber uint64) ([]*valset.Validator, error) { ctx, cancel := context.WithCancel(ctx) defer cancel() @@ -107,14 +107,11 @@ func (c *ChainSpanner) GetCurrentValidators(ctx context.Context, headerHash comm toAddress := c.validatorContractAddress gas := (hexutil.Uint64)(uint64(math.MaxUint64 / 2)) - // block - blockNr := rpc.BlockNumberOrHashWithHash(headerHash, false) - result, err := c.ethAPI.Call(ctx, ethapi.TransactionArgs{ Gas: &gas, To: &toAddress, Data: &msgData, - }, blockNr, nil) + }, blockNrOrHash, nil) if err != nil { return nil, err } @@ -144,6 +141,12 @@ func (c *ChainSpanner) GetCurrentValidators(ctx context.Context, headerHash comm return valz, nil } +func (c *ChainSpanner) GetCurrentValidatorsByHash(ctx context.Context, headerHash common.Hash, blockNumber uint64) ([]*valset.Validator, error) { + blockNr := rpc.BlockNumberOrHashWithHash(headerHash, false) + + return c.GetCurrentValidatorsByBlockNrOrHash(ctx, blockNr, blockNumber) +} + const method = "commitSpan" func (c *ChainSpanner) CommitSpan(ctx context.Context, heimdallSpan HeimdallSpan, state *state.StateDB, header *types.Header, chainContext core.ChainContext) error { diff --git a/consensus/bor/span.go b/consensus/bor/span.go index 86a58fa42e..179f92c79c 100644 --- a/consensus/bor/span.go +++ b/consensus/bor/span.go @@ -9,11 +9,13 @@ import ( "github.com/ethereum/go-ethereum/core" "github.com/ethereum/go-ethereum/core/state" "github.com/ethereum/go-ethereum/core/types" + "github.com/ethereum/go-ethereum/rpc" ) //go:generate mockgen -destination=./span_mock.go -package=bor . 
Spanner type Spanner interface { GetCurrentSpan(ctx context.Context, headerHash common.Hash) (*span.Span, error) - GetCurrentValidators(ctx context.Context, headerHash common.Hash, blockNumber uint64) ([]*valset.Validator, error) + GetCurrentValidatorsByHash(ctx context.Context, headerHash common.Hash, blockNumber uint64) ([]*valset.Validator, error) + GetCurrentValidatorsByBlockNrOrHash(ctx context.Context, blockNrOrHash rpc.BlockNumberOrHash, blockNumber uint64) ([]*valset.Validator, error) CommitSpan(ctx context.Context, heimdallSpan span.HeimdallSpan, state *state.StateDB, header *types.Header, chainContext core.ChainContext) error } diff --git a/consensus/bor/span_mock.go b/consensus/bor/span_mock.go index 6d5f62e25d..910e81716c 100644 --- a/consensus/bor/span_mock.go +++ b/consensus/bor/span_mock.go @@ -1,5 +1,5 @@ // Code generated by MockGen. DO NOT EDIT. -// Source: github.com/ethereum/go-ethereum/consensus/bor (interfaces: Spanner) +// Source: consensus/bor/span.go // Package bor is a generated GoMock package. package bor @@ -14,6 +14,7 @@ import ( core "github.com/ethereum/go-ethereum/core" state "github.com/ethereum/go-ethereum/core/state" types "github.com/ethereum/go-ethereum/core/types" + rpc "github.com/ethereum/go-ethereum/rpc" gomock "github.com/golang/mock/gomock" ) @@ -41,45 +42,60 @@ func (m *MockSpanner) EXPECT() *MockSpannerMockRecorder { } // CommitSpan mocks base method. -func (m *MockSpanner) CommitSpan(arg0 context.Context, arg1 span.HeimdallSpan, arg2 *state.StateDB, arg3 *types.Header, arg4 core.ChainContext) error { +func (m *MockSpanner) CommitSpan(ctx context.Context, heimdallSpan span.HeimdallSpan, state *state.StateDB, header *types.Header, chainContext core.ChainContext) error { m.ctrl.T.Helper() - ret := m.ctrl.Call(m, "CommitSpan", arg0, arg1, arg2, arg3, arg4) + ret := m.ctrl.Call(m, "CommitSpan", ctx, heimdallSpan, state, header, chainContext) ret0, _ := ret[0].(error) return ret0 } // CommitSpan indicates an expected call of CommitSpan. -func (mr *MockSpannerMockRecorder) CommitSpan(arg0, arg1, arg2, arg3, arg4 interface{}) *gomock.Call { +func (mr *MockSpannerMockRecorder) CommitSpan(ctx, heimdallSpan, state, header, chainContext interface{}) *gomock.Call { mr.mock.ctrl.T.Helper() - return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CommitSpan", reflect.TypeOf((*MockSpanner)(nil).CommitSpan), arg0, arg1, arg2, arg3, arg4) + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CommitSpan", reflect.TypeOf((*MockSpanner)(nil).CommitSpan), ctx, heimdallSpan, state, header, chainContext) } // GetCurrentSpan mocks base method. -func (m *MockSpanner) GetCurrentSpan(arg0 context.Context, arg1 common.Hash) (*span.Span, error) { +func (m *MockSpanner) GetCurrentSpan(ctx context.Context, headerHash common.Hash) (*span.Span, error) { m.ctrl.T.Helper() - ret := m.ctrl.Call(m, "GetCurrentSpan", arg0, arg1) + ret := m.ctrl.Call(m, "GetCurrentSpan", ctx, headerHash) ret0, _ := ret[0].(*span.Span) ret1, _ := ret[1].(error) return ret0, ret1 } // GetCurrentSpan indicates an expected call of GetCurrentSpan. 
-func (mr *MockSpannerMockRecorder) GetCurrentSpan(arg0, arg1 interface{}) *gomock.Call { +func (mr *MockSpannerMockRecorder) GetCurrentSpan(ctx, headerHash interface{}) *gomock.Call { mr.mock.ctrl.T.Helper() - return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetCurrentSpan", reflect.TypeOf((*MockSpanner)(nil).GetCurrentSpan), arg0, arg1) + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetCurrentSpan", reflect.TypeOf((*MockSpanner)(nil).GetCurrentSpan), ctx, headerHash) } -// GetCurrentValidators mocks base method. -func (m *MockSpanner) GetCurrentValidators(arg0 context.Context, arg1 common.Hash, arg2 uint64) ([]*valset.Validator, error) { +// GetCurrentValidatorsByBlockNrOrHash mocks base method. +func (m *MockSpanner) GetCurrentValidatorsByBlockNrOrHash(ctx context.Context, blockNrOrHash rpc.BlockNumberOrHash, blockNumber uint64) ([]*valset.Validator, error) { m.ctrl.T.Helper() - ret := m.ctrl.Call(m, "GetCurrentValidators", arg0, arg1, arg2) + ret := m.ctrl.Call(m, "GetCurrentValidatorsByBlockNrOrHash", ctx, blockNrOrHash, blockNumber) ret0, _ := ret[0].([]*valset.Validator) ret1, _ := ret[1].(error) return ret0, ret1 } -// GetCurrentValidators indicates an expected call of GetCurrentValidators. -func (mr *MockSpannerMockRecorder) GetCurrentValidators(arg0, arg1, arg2 interface{}) *gomock.Call { +// GetCurrentValidatorsByBlockNrOrHash indicates an expected call of GetCurrentValidatorsByBlockNrOrHash. +func (mr *MockSpannerMockRecorder) GetCurrentValidatorsByBlockNrOrHash(ctx, blockNrOrHash, blockNumber interface{}) *gomock.Call { mr.mock.ctrl.T.Helper() - return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetCurrentValidators", reflect.TypeOf((*MockSpanner)(nil).GetCurrentValidators), arg0, arg1, arg2) + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetCurrentValidatorsByBlockNrOrHash", reflect.TypeOf((*MockSpanner)(nil).GetCurrentValidatorsByBlockNrOrHash), ctx, blockNrOrHash, blockNumber) +} + +// GetCurrentValidatorsByHash mocks base method. +func (m *MockSpanner) GetCurrentValidatorsByHash(ctx context.Context, headerHash common.Hash, blockNumber uint64) ([]*valset.Validator, error) { + m.ctrl.T.Helper() + ret := m.ctrl.Call(m, "GetCurrentValidatorsByHash", ctx, headerHash, blockNumber) + ret0, _ := ret[0].([]*valset.Validator) + ret1, _ := ret[1].(error) + return ret0, ret1 +} + +// GetCurrentValidatorsByHash indicates an expected call of GetCurrentValidatorsByHash. 
+func (mr *MockSpannerMockRecorder) GetCurrentValidatorsByHash(ctx, headerHash, blockNumber interface{}) *gomock.Call { + mr.mock.ctrl.T.Helper() + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetCurrentValidatorsByHash", reflect.TypeOf((*MockSpanner)(nil).GetCurrentValidatorsByHash), ctx, headerHash, blockNumber) } diff --git a/core/tests/blockchain_repair_test.go b/core/tests/blockchain_repair_test.go index 9b166b7165..d18418727b 100644 --- a/core/tests/blockchain_repair_test.go +++ b/core/tests/blockchain_repair_test.go @@ -1796,7 +1796,7 @@ func testRepair(t *testing.T, tt *rewindTest, snapshots bool) { ethAPIMock.EXPECT().Call(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).AnyTimes() spanner := bor.NewMockSpanner(ctrl) - spanner.EXPECT().GetCurrentValidators(gomock.Any(), gomock.Any(), gomock.Any()).Return([]*valset.Validator{ + spanner.EXPECT().GetCurrentValidatorsByHash(gomock.Any(), gomock.Any(), gomock.Any()).Return([]*valset.Validator{ { ID: 0, Address: miner.TestBankAddress, diff --git a/internal/testlog/testlog.go b/internal/testlog/testlog.go index a5836b8446..93d6f27086 100644 --- a/internal/testlog/testlog.go +++ b/internal/testlog/testlog.go @@ -148,3 +148,35 @@ func (l *logger) flush() { } l.h.buf = nil } + +func (l *logger) OnTrace(fn func(l log.Logging)) { + if l.GetHandler().Level() >= log.LvlTrace { + fn(l.Trace) + } +} + +func (l *logger) OnDebug(fn func(l log.Logging)) { + if l.GetHandler().Level() >= log.LvlDebug { + fn(l.Debug) + } +} +func (l *logger) OnInfo(fn func(l log.Logging)) { + if l.GetHandler().Level() >= log.LvlInfo { + fn(l.Info) + } +} +func (l *logger) OnWarn(fn func(l log.Logging)) { + if l.GetHandler().Level() >= log.LvlWarn { + fn(l.Warn) + } +} +func (l *logger) OnError(fn func(l log.Logging)) { + if l.GetHandler().Level() >= log.LvlError { + fn(l.Error) + } +} +func (l *logger) OnCrit(fn func(l log.Logging)) { + if l.GetHandler().Level() >= log.LvlCrit { + fn(l.Crit) + } +} diff --git a/log/logger.go b/log/logger.go index 2b96681a82..c2678259bf 100644 --- a/log/logger.go +++ b/log/logger.go @@ -106,6 +106,8 @@ type RecordKeyNames struct { Ctx string } +type Logging func(msg string, ctx ...interface{}) + // A Logger writes key/value pairs to a Handler type Logger interface { // New returns a new Logger that has this logger's context plus the given context @@ -124,6 +126,13 @@ type Logger interface { Warn(msg string, ctx ...interface{}) Error(msg string, ctx ...interface{}) Crit(msg string, ctx ...interface{}) + + OnTrace(func(l Logging)) + OnDebug(func(l Logging)) + OnInfo(func(l Logging)) + OnWarn(func(l Logging)) + OnError(func(l Logging)) + OnCrit(func(l Logging)) } type logger struct { @@ -198,6 +207,38 @@ func (l *logger) SetHandler(h Handler) { l.h.Swap(h) } +func (l *logger) OnTrace(fn func(l Logging)) { + if l.GetHandler().Level() >= LvlTrace { + fn(l.Trace) + } +} + +func (l *logger) OnDebug(fn func(l Logging)) { + if l.GetHandler().Level() >= LvlDebug { + fn(l.Debug) + } +} +func (l *logger) OnInfo(fn func(l Logging)) { + if l.GetHandler().Level() >= LvlInfo { + fn(l.Info) + } +} +func (l *logger) OnWarn(fn func(l Logging)) { + if l.GetHandler().Level() >= LvlWarn { + fn(l.Warn) + } +} +func (l *logger) OnError(fn func(l Logging)) { + if l.GetHandler().Level() >= LvlError { + fn(l.Error) + } +} +func (l *logger) OnCrit(fn func(l Logging)) { + if l.GetHandler().Level() >= LvlCrit { + fn(l.Crit) + } +} + func normalize(ctx []interface{}) []interface{} { // if the caller passed a Ctx object, then expand it if len(ctx) == 
1 { diff --git a/log/root.go b/log/root.go index 9fb4c5ae0b..04b80f4a02 100644 --- a/log/root.go +++ b/log/root.go @@ -60,6 +60,38 @@ func Crit(msg string, ctx ...interface{}) { os.Exit(1) } +func OnTrace(fn func(l Logging)) { + if root.GetHandler().Level() >= LvlTrace { + fn(root.Trace) + } +} + +func OnDebug(fn func(l Logging)) { + if root.GetHandler().Level() >= LvlDebug { + fn(root.Debug) + } +} +func OnInfo(fn func(l Logging)) { + if root.GetHandler().Level() >= LvlInfo { + fn(root.Info) + } +} +func OnWarn(fn func(l Logging)) { + if root.GetHandler().Level() >= LvlWarn { + fn(root.Warn) + } +} +func OnError(fn func(l Logging)) { + if root.GetHandler().Level() >= LvlError { + fn(root.Error) + } +} +func OnCrit(fn func(l Logging)) { + if root.GetHandler().Level() >= LvlCrit { + fn(root.Crit) + } +} + // Output is a convenient alias for write, allowing for the modification of // the calldepth (number of stack frames to skip). // calldepth influences the reported line number of the log message. diff --git a/miner/fake_miner.go b/miner/fake_miner.go index 3ca2f5be77..39cc999a0a 100644 --- a/miner/fake_miner.go +++ b/miner/fake_miner.go @@ -47,7 +47,7 @@ func NewBorDefaultMiner(t *testing.T) *DefaultBorMiner { ethAPI.EXPECT().Call(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).AnyTimes() spanner := bor.NewMockSpanner(ctrl) - spanner.EXPECT().GetCurrentValidators(gomock.Any(), gomock.Any(), gomock.Any()).Return([]*valset.Validator{ + spanner.EXPECT().GetCurrentValidatorsByHash(gomock.Any(), gomock.Any(), gomock.Any()).Return([]*valset.Validator{ { ID: 0, Address: common.Address{0x1}, diff --git a/miner/worker.go b/miner/worker.go index 6db906460e..9d04838ccb 100644 --- a/miner/worker.go +++ b/miner/worker.go @@ -39,6 +39,7 @@ import ( cmath "github.com/ethereum/go-ethereum/common/math" "github.com/ethereum/go-ethereum/common/tracing" "github.com/ethereum/go-ethereum/consensus" + "github.com/ethereum/go-ethereum/consensus/bor" "github.com/ethereum/go-ethereum/consensus/misc" "github.com/ethereum/go-ethereum/core" "github.com/ethereum/go-ethereum/core/state" @@ -946,6 +947,22 @@ func (w *worker) commitTransactions(env *environment, txs *types.TransactionsByP } var coalescedLogs []*types.Log + initialGasLimit := env.gasPool.Gas() + initialTxs := txs.GetTxs() + + var breakCause string + + defer func() { + log.OnDebug(func(lg log.Logging) { + lg("commitTransactions-stats", + "initialTxsCount", initialTxs, + "initialGasLimit", initialGasLimit, + "resultTxsCount", txs.GetTxs(), + "resultGapPool", env.gasPool.Gas(), + "exitCause", breakCause) + }) + }() + for { // In the following three cases, we will interrupt the execution of the transaction. // (1) new head block event arrival, the interrupt signal is 1 @@ -993,7 +1010,11 @@ func (w *worker) commitTransactions(env *environment, txs *types.TransactionsByP // Start executing the transaction env.state.Prepare(tx.Hash(), env.tcount) - start := time.Now() + var start time.Time + + log.OnDebug(func(log.Logging) { + start = time.Now() + }) logs, err := w.commitTransaction(env, tx) @@ -1018,7 +1039,10 @@ func (w *worker) commitTransactions(env *environment, txs *types.TransactionsByP coalescedLogs = append(coalescedLogs, logs...) 
env.tcount++ txs.Shift() - log.Info("Committed new tx", "tx hash", tx.Hash(), "from", from, "to", tx.To(), "nonce", tx.Nonce(), "gas", tx.Gas(), "gasPrice", tx.GasPrice(), "value", tx.Value(), "time spent", time.Since(start)) + + log.OnDebug(func(lg log.Logging) { + lg("Committed new tx", "tx hash", tx.Hash(), "from", from, "to", tx.To(), "nonce", tx.Nonce(), "gas", tx.Gas(), "gasPrice", tx.GasPrice(), "value", tx.Value(), "time spent", time.Since(start)) + }) case errors.Is(err, core.ErrTxTypeNotSupported): // Pop the unsupported transaction without shifting in the next from the account @@ -1117,7 +1141,12 @@ func (w *worker) prepareWork(genParams *generateParams) (*environment, error) { } // Run the consensus preparation with the default or customized consensus engine. if err := w.engine.Prepare(w.chain, header); err != nil { - log.Error("Failed to prepare header for sealing", "err", err) + switch err.(type) { + case *bor.UnauthorizedSignerError: + log.Debug("Failed to prepare header for sealing", "err", err) + default: + log.Error("Failed to prepare header for sealing", "err", err) + } return nil, err } // Could potentially happen if starting to mine in an odd state. diff --git a/miner/worker_test.go b/miner/worker_test.go index 011895c854..ffd44bebfe 100644 --- a/miner/worker_test.go +++ b/miner/worker_test.go @@ -75,7 +75,7 @@ func testGenerateBlockAndImport(t *testing.T, isClique bool, isBor bool) { ethAPIMock.EXPECT().Call(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).AnyTimes() spanner := bor.NewMockSpanner(ctrl) - spanner.EXPECT().GetCurrentValidators(gomock.Any(), gomock.Any(), gomock.Any()).Return([]*valset.Validator{ + spanner.EXPECT().GetCurrentValidatorsByHash(gomock.Any(), gomock.Any(), gomock.Any()).Return([]*valset.Validator{ { ID: 0, Address: TestBankAddress, @@ -622,7 +622,7 @@ func BenchmarkBorMining(b *testing.B) { ethAPIMock.EXPECT().Call(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).AnyTimes() spanner := bor.NewMockSpanner(ctrl) - spanner.EXPECT().GetCurrentValidators(gomock.Any(), gomock.Any(), gomock.Any()).Return([]*valset.Validator{ + spanner.EXPECT().GetCurrentValidatorsByHash(gomock.Any(), gomock.Any(), gomock.Any()).Return([]*valset.Validator{ { ID: 0, Address: TestBankAddress, diff --git a/packaging/templates/package_scripts/control b/packaging/templates/package_scripts/control index 858f40a0a6..a1159d2473 100644 --- a/packaging/templates/package_scripts/control +++ b/packaging/templates/package_scripts/control @@ -1,5 +1,5 @@ Source: bor -Version: 0.3.6 +Version: 0.3.7 Section: develop Priority: standard Maintainer: Polygon diff --git a/packaging/templates/package_scripts/control.arm64 b/packaging/templates/package_scripts/control.arm64 index 3b90affc2d..7f8d11d9b7 100644 --- a/packaging/templates/package_scripts/control.arm64 +++ b/packaging/templates/package_scripts/control.arm64 @@ -1,5 +1,5 @@ Source: bor -Version: 0.3.6 +Version: 0.3.7 Section: develop Priority: standard Maintainer: Polygon diff --git a/packaging/templates/package_scripts/control.profile.amd64 b/packaging/templates/package_scripts/control.profile.amd64 index 41ae856d2b..228a7f44a2 100644 --- a/packaging/templates/package_scripts/control.profile.amd64 +++ b/packaging/templates/package_scripts/control.profile.amd64 @@ -1,5 +1,5 @@ Source: bor-profile -Version: 0.3.6 +Version: 0.3.7 Section: develop Priority: standard Maintainer: Polygon diff --git a/packaging/templates/package_scripts/control.profile.arm64 b/packaging/templates/package_scripts/control.profile.arm64 
index 8603c454b4..90f5e93624 100644 --- a/packaging/templates/package_scripts/control.profile.arm64 +++ b/packaging/templates/package_scripts/control.profile.arm64 @@ -1,5 +1,5 @@ Source: bor-profile -Version: 0.3.6 +Version: 0.3.7 Section: develop Priority: standard Maintainer: Polygon diff --git a/packaging/templates/package_scripts/control.validator b/packaging/templates/package_scripts/control.validator index bda34ca58b..0f2d320b2d 100644 --- a/packaging/templates/package_scripts/control.validator +++ b/packaging/templates/package_scripts/control.validator @@ -1,5 +1,5 @@ Source: bor-profile -Version: 0.3.6 +Version: 0.3.7 Section: develop Priority: standard Maintainer: Polygon diff --git a/packaging/templates/package_scripts/control.validator.arm64 b/packaging/templates/package_scripts/control.validator.arm64 index 9ce97855d0..20a8302faf 100644 --- a/packaging/templates/package_scripts/control.validator.arm64 +++ b/packaging/templates/package_scripts/control.validator.arm64 @@ -1,5 +1,5 @@ Source: bor-profile -Version: 0.3.6 +Version: 0.3.7 Section: develop Priority: standard Maintainer: Polygon diff --git a/params/version.go b/params/version.go index 148810632f..aa1f21b992 100644 --- a/params/version.go +++ b/params/version.go @@ -23,7 +23,7 @@ import ( const ( VersionMajor = 0 // Major version component of the current release VersionMinor = 3 // Minor version component of the current release - VersionPatch = 6 // Patch version component of the current release + VersionPatch = 7 // Patch version component of the current release VersionMeta = "stable" // Version metadata to append to the version string ) diff --git a/tests/bor/helper.go b/tests/bor/helper.go index e1b83858b3..c4b45f970d 100644 --- a/tests/bor/helper.go +++ b/tests/bor/helper.go @@ -356,7 +356,8 @@ func getMockedSpanner(t *testing.T, validators []*valset.Validator) *bor.MockSpa t.Helper() spanner := bor.NewMockSpanner(gomock.NewController(t)) - spanner.EXPECT().GetCurrentValidators(gomock.Any(), gomock.Any(), gomock.Any()).Return(validators, nil).AnyTimes() + spanner.EXPECT().GetCurrentValidatorsByHash(gomock.Any(), gomock.Any(), gomock.Any()).Return(validators, nil).AnyTimes() + spanner.EXPECT().GetCurrentValidatorsByBlockNrOrHash(gomock.Any(), gomock.Any(), gomock.Any()).Return(validators, nil).AnyTimes() spanner.EXPECT().GetCurrentSpan(gomock.Any(), gomock.Any()).Return(&span.Span{0, 0, 0}, nil).AnyTimes() spanner.EXPECT().CommitSpan(gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any(), gomock.Any()).Return(nil).AnyTimes() return spanner From 73f3c75aea9d61c4d1ae131f56cbd9898b72e6ea Mon Sep 17 00:00:00 2001 From: Manav Darji Date: Wed, 5 Apr 2023 00:47:11 +0530 Subject: [PATCH 21/29] core: improve locks in txpool (#807) * added a write lock to the txs.filter method and a read lock to the txs.reheap method - both of which are called by Filter during reorg adjustments to txpool * txpool reorg locks * more locks * locks * linters * params, packaging: update version for v0.3.8-beta release * core: add logs in reheap --------- Co-authored-by: Alex Co-authored-by: Evgeny Danienko <6655321@bk.ru> --- core/tx_list.go | 37 ++++++++++++++++--- packaging/templates/package_scripts/control | 2 +- .../templates/package_scripts/control.arm64 | 2 +- .../package_scripts/control.profile.amd64 | 2 +- .../package_scripts/control.profile.arm64 | 2 +- .../package_scripts/control.validator | 2 +- .../package_scripts/control.validator.arm64 | 2 +- params/version.go | 8 ++-- 8 files changed, 41 insertions(+), 16 deletions(-) diff --git 
a/core/tx_list.go b/core/tx_list.go index f7c3d08295..851f732905 100644 --- a/core/tx_list.go +++ b/core/tx_list.go @@ -30,6 +30,7 @@ import ( "github.com/ethereum/go-ethereum/common" cmath "github.com/ethereum/go-ethereum/common/math" "github.com/ethereum/go-ethereum/core/types" + "github.com/ethereum/go-ethereum/log" ) // nonceHeap is a heap.Interface implementation over 64bit unsigned integers for @@ -150,19 +151,38 @@ func (m *txSortedMap) Filter(filter func(*types.Transaction) bool) types.Transac removed := m.filter(filter) // If transactions were removed, the heap and cache are ruined if len(removed) > 0 { - m.reheap() + m.reheap(false) } return removed } -func (m *txSortedMap) reheap() { - *m.index = make([]uint64, 0, len(m.items)) +func (m *txSortedMap) reheap(withRlock bool) { + index := make(nonceHeap, 0, len(m.items)) + + if withRlock { + m.m.RLock() + log.Info("[DEBUG] Acquired lock over txpool map while performing reheap") + } for nonce := range m.items { - *m.index = append(*m.index, nonce) + index = append(index, nonce) } - heap.Init(m.index) + if withRlock { + m.m.RUnlock() + } + + heap.Init(&index) + + if withRlock { + m.m.Lock() + } + + m.index = &index + + if withRlock { + m.m.Unlock() + } m.cacheMu.Lock() m.cache = nil @@ -521,12 +541,17 @@ func (l *txList) Filter(costLimit *uint256.Int, gasLimit uint64) (types.Transact lowest = nonce } } + + l.txs.m.Lock() invalids = l.txs.filter(func(tx *types.Transaction) bool { return tx.Nonce() > lowest }) + l.txs.m.Unlock() } // Reset total cost l.subTotalCost(removed) l.subTotalCost(invalids) - l.txs.reheap() + + l.txs.reheap(true) + return removed, invalids } diff --git a/packaging/templates/package_scripts/control b/packaging/templates/package_scripts/control index a1159d2473..130226241b 100644 --- a/packaging/templates/package_scripts/control +++ b/packaging/templates/package_scripts/control @@ -1,5 +1,5 @@ Source: bor -Version: 0.3.7 +Version: 0.3.8-beta Section: develop Priority: standard Maintainer: Polygon diff --git a/packaging/templates/package_scripts/control.arm64 b/packaging/templates/package_scripts/control.arm64 index 7f8d11d9b7..e8073532af 100644 --- a/packaging/templates/package_scripts/control.arm64 +++ b/packaging/templates/package_scripts/control.arm64 @@ -1,5 +1,5 @@ Source: bor -Version: 0.3.7 +Version: 0.3.8-beta Section: develop Priority: standard Maintainer: Polygon diff --git a/packaging/templates/package_scripts/control.profile.amd64 b/packaging/templates/package_scripts/control.profile.amd64 index 228a7f44a2..a5b46bff79 100644 --- a/packaging/templates/package_scripts/control.profile.amd64 +++ b/packaging/templates/package_scripts/control.profile.amd64 @@ -1,5 +1,5 @@ Source: bor-profile -Version: 0.3.7 +Version: 0.3.8-beta Section: develop Priority: standard Maintainer: Polygon diff --git a/packaging/templates/package_scripts/control.profile.arm64 b/packaging/templates/package_scripts/control.profile.arm64 index 90f5e93624..b0d94da338 100644 --- a/packaging/templates/package_scripts/control.profile.arm64 +++ b/packaging/templates/package_scripts/control.profile.arm64 @@ -1,5 +1,5 @@ Source: bor-profile -Version: 0.3.7 +Version: 0.3.8-beta Section: develop Priority: standard Maintainer: Polygon diff --git a/packaging/templates/package_scripts/control.validator b/packaging/templates/package_scripts/control.validator index 0f2d320b2d..887713056a 100644 --- a/packaging/templates/package_scripts/control.validator +++ b/packaging/templates/package_scripts/control.validator @@ -1,5 +1,5 @@ Source: bor-profile 
-Version: 0.3.7 +Version: 0.3.8-beta Section: develop Priority: standard Maintainer: Polygon diff --git a/packaging/templates/package_scripts/control.validator.arm64 b/packaging/templates/package_scripts/control.validator.arm64 index 20a8302faf..f9fa7635a9 100644 --- a/packaging/templates/package_scripts/control.validator.arm64 +++ b/packaging/templates/package_scripts/control.validator.arm64 @@ -1,5 +1,5 @@ Source: bor-profile -Version: 0.3.7 +Version: 0.3.8-beta Section: develop Priority: standard Maintainer: Polygon diff --git a/params/version.go b/params/version.go index aa1f21b992..affdc6b5eb 100644 --- a/params/version.go +++ b/params/version.go @@ -21,10 +21,10 @@ import ( ) const ( - VersionMajor = 0 // Major version component of the current release - VersionMinor = 3 // Minor version component of the current release - VersionPatch = 7 // Patch version component of the current release - VersionMeta = "stable" // Version metadata to append to the version string + VersionMajor = 0 // Major version component of the current release + VersionMinor = 3 // Minor version component of the current release + VersionPatch = 8 // Patch version component of the current release + VersionMeta = "beta" // Version metadata to append to the version string ) // Version holds the textual version string. From ac774c26958ccd331f75a05031cb3ccd1677efa6 Mon Sep 17 00:00:00 2001 From: Manav Darji Date: Wed, 5 Apr 2023 16:45:49 +0530 Subject: [PATCH 22/29] Merge qa to master (#808) * Added checks to RPC requests and introduced new flags to customise the parameters (#657) * added a check to reject rpc requests with batch size > the one set using a newly added flag (rpcbatchlimit) * added a check to reject rpc requests whose result size > the one set using a newly added flag (rpcreturndatalimit) * updated the config files and docs * chg : trieTimeout from 60 to 10 mins (#692) * chg : trieTimeout from 60 to 10 mins * chg : cache.timout to 10m from 1h in configs * internal/cli/server : fix : added triesInMemory in config (#691) * changed version from 0.3.0 to 0.3.4-beta (#693) * fix nil state-sync issue, increase grpc limit (#695) * Increase grpc message size limit in pprof * consensus/bor/bor.go : stateSyncs init fixed [Fix #686] * eth/filters: handle nil state-sync before notify * eth/filters: update check Co-authored-by: Jerry Co-authored-by: Daniil * core, tests/bor: add more tests for state-sync validation (#710) * core: add get state sync function for tests * tests/bor: add validation for state sync events post consensus * Arpit/temp bor sync (#701) * Increase grpc message size limit in pprof * ReadBorReceipts improvements * use internal function * fix tests * fetch geth upstread for ReadBorReceiptRLP * Only query bor receipt when the query index is equal to # tx in block body This change reduces the frequency of calling ReadBorReceipt and ReadBorTransaction, which are CPU and db intensive. * Revert "fetch geth upstread for ReadBorReceiptRLP" This reverts commit 2e838a6b1313d26674f3a8df4b044e35dcbf35a0. * Restore ReadBorReceiptRLP * fix bor receipts * remove unused * fix lints --------- Co-authored-by: Jerry Co-authored-by: Manav Darji Co-authored-by: Evgeny Danienko <6655321@bk.ru> * Revert "chg : trieTimeout from 60 to 10 mins (#692)" (#720) This reverts commit 241843c7e7bb18e64d2e157fd6fbbd665f6ce9d9. 
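Editor's note on the rpcbatchlimit / rpcreturndatalimit items above: they cap, respectively, how many calls a single JSON-RPC batch may contain and how large a returned result may be before the request is rejected. Below is a minimal sketch of the batch-size half of that idea; the names (`checkBatchLimit`, the 100-request bound) are illustrative assumptions, not the flags' defaults or the bor implementation.

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// rpcBatchLimit mirrors the idea behind a --rpc.batchlimit style flag:
// the maximum number of requests accepted in one JSON-RPC batch.
// The value here is illustrative only.
const rpcBatchLimit = 100

var errBatchTooLarge = errors.New("batch too large")

// checkBatchLimit decodes a raw JSON-RPC payload and rejects it when it is
// a batch containing more entries than the configured limit.
func checkBatchLimit(payload []byte) error {
	var batch []json.RawMessage
	if err := json.Unmarshal(payload, &batch); err != nil {
		// Not a JSON array, so this is a single request; nothing to enforce.
		return nil
	}
	if len(batch) > rpcBatchLimit {
		return fmt.Errorf("%w: %d requests, limit %d", errBatchTooLarge, len(batch), rpcBatchLimit)
	}
	return nil
}

func main() {
	fmt.Println(checkBatchLimit([]byte(`[{"id":1},{"id":2}]`))) // <nil>
}
```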
* Arpit/add execution pool 2 (#719) * initial * linters * linters * remove timeout * update pool * change pool size function * check nil * check nil * fix tests * Use execution pool from server in all handlers * simplify things * test fix * add support for cli, config * add to cli and config * merge base branch * debug statements * fix bug * atomic pointer timeout * add apis * update workerpool * fix issues * change params * fix issues * fix ipc issue * remove execution pool from IPC * revert * fix tests * mutex * refactor flag and value names * ordering fix * refactor flag and value names * update default ep size to 40 * fix bor start issues * revert file changes * debug statements * fix bug * update workerpool * atomic pointer timeout * add apis * Merge branch 'add-execution-pool' of github.com:maticnetwork/bor into arpit/add-execution-pool * fix issues * change params * fix issues * fix ipc issue * remove execution pool from IPC * revert * merge base branch * Merge branch 'add-execution-pool' of github.com:maticnetwork/bor into arpit/add-execution-pool * mutex * fix tests * Merge branch 'arpit/add-execution-pool' of github.com:maticnetwork/bor into arpit/add-execution-pool * Change default size of execution pool to 40 * refactor flag and value names * fix merge conflicts * ordering fix * refactor flag and value names * update default ep size to 40 * fix bor start issues * revert file changes * fix linters * fix go.mod * change sec to ms * change default value for ep timeout * fix node api calls * comment setter for ep timeout --------- Co-authored-by: Evgeny Danienko <6655321@bk.ru> Co-authored-by: Jerry Co-authored-by: Manav Darji * version change (#721) * Event based pprof (#732) * feature * Save pprof to /tmp --------- Co-authored-by: Jerry * Cherry-pick changes from develop (#738) * Check if block is nil to prevent panic (#736) * miner: use env for tracing instead of block object (#728) --------- Co-authored-by: Dmitry <46797839+dkeysil@users.noreply.github.com> * add max code init size check in txpool (#739) * Revert "Event based pprof" and update version (#742) * Revert "Event based pprof (#732)" This reverts commit 22fa4033e8fabb51c44e8d2a8c6bb695a6e9285e. 
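Editor's note on the execution pool work listed above: the change bounds how many RPC handlers may run concurrently and keeps the per-call timeout behind an atomic pointer so it can be swapped at runtime (the "atomic pointer timeout" and "update default ep size to 40" items). A minimal sketch of that shape follows, assuming a plain semaphore-style pool; the names (`ExecutionPool`, `Run`, `SetTimeout`) are hypothetical and not the bor API.

```go
package main

import (
	"context"
	"fmt"
	"sync/atomic"
	"time"
)

// ExecutionPool bounds concurrent handler execution and carries a timeout
// that can be replaced atomically while requests are in flight.
type ExecutionPool struct {
	slots   chan struct{}
	timeout atomic.Value // stores a time.Duration
}

func NewExecutionPool(size int, timeout time.Duration) *ExecutionPool {
	p := &ExecutionPool{slots: make(chan struct{}, size)}
	p.timeout.Store(timeout)
	return p
}

// SetTimeout swaps the per-call timeout without any locking.
func (p *ExecutionPool) SetTimeout(d time.Duration) { p.timeout.Store(d) }

// Run executes fn on a free slot, giving up if no slot frees in time or the
// call outlives the configured timeout.
func (p *ExecutionPool) Run(ctx context.Context, fn func(context.Context) error) error {
	d := p.timeout.Load().(time.Duration)
	ctx, cancel := context.WithTimeout(ctx, d)
	defer cancel()

	select {
	case p.slots <- struct{}{}:
		defer func() { <-p.slots }()
		return fn(ctx)
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	pool := NewExecutionPool(40, 5*time.Second) // 40 echoes the default pool size mentioned above
	err := pool.Run(context.Background(), func(ctx context.Context) error {
		fmt.Println("handler ran")
		return nil
	})
	fmt.Println(err)
}
```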
* params: update version to 0.3.4-beta3 * packaging/templates: update bor version * internal/ethapi :: Fix : newRPCTransactionFromBlockIndex * fix: remove assignment for bor receipt --------- Co-authored-by: SHIVAM SHARMA Co-authored-by: Pratik Patil Co-authored-by: Jerry Co-authored-by: Daniil Co-authored-by: Arpit Temani Co-authored-by: Evgeny Danienko <6655321@bk.ru> Co-authored-by: Dmitry <46797839+dkeysil@users.noreply.github.com> --- go.sum | 4 ---- internal/ethapi/api.go | 25 ++++++++++++++++++------- 2 files changed, 18 insertions(+), 11 deletions(-) diff --git a/go.sum b/go.sum index 181030d9d1..61c9fd1ca5 100644 --- a/go.sum +++ b/go.sum @@ -85,8 +85,6 @@ github.com/bgentry/speakeasy v0.1.0 h1:ByYyxL9InA1OWqxJqqp2A5pYHUrCiAL6K3J+LKSsQ github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs= github.com/bmizerany/pat v0.0.0-20170815010413-6226ea591a40/go.mod h1:8rLXio+WjiTceGBHIoTvn60HIbs7Hm7bcHjyrSqYB9c= github.com/boltdb/bolt v1.3.1/go.mod h1:clJnj/oiGkjum5o1McbSZDSLxVThjynRyGBgiAx27Ps= -github.com/btcsuite/btcd/btcec/v2 v2.1.2 h1:YoYoC9J0jwfukodSBMzZYUVQ8PTiYg4BnOWiJVzTmLs= -github.com/btcsuite/btcd/btcec/v2 v2.1.2/go.mod h1:ctjw4H1kknNJmRN4iP1R7bTQ+v3GJkZBd6mui8ZsAZE= github.com/btcsuite/btcd/btcec/v2 v2.1.3 h1:xM/n3yIhHAhHy04z4i43C8p4ehixJZMsnrVJkgl+MTE= github.com/btcsuite/btcd/btcec/v2 v2.1.3/go.mod h1:ctjw4H1kknNJmRN4iP1R7bTQ+v3GJkZBd6mui8ZsAZE= github.com/btcsuite/btcd/chaincfg/chainhash v1.0.0 h1:MSskdM4/xJYcFzy0altH/C/xHopifpWzHUi1JeVI34Q= @@ -139,8 +137,6 @@ github.com/dlclark/regexp2 v1.4.1-0.20201116162257-a2a8dda75c91/go.mod h1:2pZnwu github.com/dnaeon/go-vcr v1.1.0/go.mod h1:M7tiix8f0r6mKKJ3Yq/kqU1OYf3MnfmBWVbPx/yU9ko= github.com/dnaeon/go-vcr v1.2.0 h1:zHCHvJYTMh1N7xnV7zf1m1GPBF9Ad0Jk/whtQ1663qI= github.com/dnaeon/go-vcr v1.2.0/go.mod h1:R4UdLID7HZT3taECzJs4YgbbH6PIGXB6W/sc5OLb6RQ= -github.com/docker/docker v1.4.2-0.20180625184442-8e610b2b55bf h1:sh8rkQZavChcmakYiSlqu2425CHyFXLZZnvm7PDpU8M= -github.com/docker/docker v1.4.2-0.20180625184442-8e610b2b55bf/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk= github.com/docker/docker v1.6.1 h1:4xYASHy5cScPkLD7PO0uTmnVc860m9NarPN1X8zeMe8= github.com/docker/docker v1.6.1/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk= github.com/dop251/goja v0.0.0-20211011172007-d99e4b8cbf48 h1:iZOop7pqsg+56twTopWgwCGxdB5SI2yDO8Ti7eTRliQ= diff --git a/internal/ethapi/api.go b/internal/ethapi/api.go index 4755033a05..f130813234 100644 --- a/internal/ethapi/api.go +++ b/internal/ethapi/api.go @@ -1462,14 +1462,17 @@ func newRPCTransactionFromBlockIndex(b *types.Block, index uint64, config *param return nil } - // If the index out of the range of transactions defined in block body, it means that the transaction is a bor state sync transaction, and we need to fetch it from the database + var borReceipt *types.Receipt + + // Read bor receipts if a state-sync transaction is requested if index == uint64(len(txs)) { - borReceipt := rawdb.ReadBorReceipt(db, b.Hash(), b.NumberU64(), config) + borReceipt = rawdb.ReadBorReceipt(db, b.Hash(), b.NumberU64(), config) if borReceipt != nil { - tx, _, _, _ := rawdb.ReadBorTransaction(db, borReceipt.TxHash) - - if tx != nil { - txs = append(txs, tx) + if borReceipt.TxHash != (common.Hash{}) { + borTx, _, _, _ := rawdb.ReadBorTransactionWithBlockHash(db, borReceipt.TxHash, b.Hash()) + if borTx != nil { + txs = append(txs, borTx) + } } } } @@ -1478,7 +1481,15 @@ func newRPCTransactionFromBlockIndex(b *types.Block, index uint64, config *param if index >= 
uint64(len(txs)) { return nil } - return newRPCTransaction(txs[index], b.Hash(), b.NumberU64(), index, b.BaseFee(), config) + + rpcTx := newRPCTransaction(txs[index], b.Hash(), b.NumberU64(), index, b.BaseFee(), config) + + // If the transaction is a bor transaction, we need to set the hash to the derived bor tx hash. BorTx is always the last index. + if borReceipt != nil && index == uint64(len(txs)-1) { + rpcTx.Hash = borReceipt.TxHash + } + + return rpcTx } // newRPCRawTransactionFromBlockIndex returns the bytes of a transaction given a block and a transaction index. From 6de2a4922738786aa7177d30eab511098f8671b7 Mon Sep 17 00:00:00 2001 From: Daniel Jones Date: Wed, 5 Apr 2023 11:37:50 -0500 Subject: [PATCH 23/29] Setting up bor to use hosted 18.04 runner as ubuntu provided 18.04 runner is end of life --- .github/workflows/packager.yml | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/.github/workflows/packager.yml b/.github/workflows/packager.yml index 7485aca976..f2aef42485 100644 --- a/.github/workflows/packager.yml +++ b/.github/workflows/packager.yml @@ -12,7 +12,9 @@ on: jobs: build: - runs-on: ubuntu-18.04 + runs-on: + group: ubuntu-runners + labels: 18.04RunnerT2Large steps: - name: Checkout uses: actions/checkout@v2 From 3f71e3b6760d9782e9258f0b6a19eb9943fe842a Mon Sep 17 00:00:00 2001 From: marcello33 Date: Wed, 10 May 2023 22:52:14 +0200 Subject: [PATCH 24/29] mardizzone/merge-qa-to-master (#856) * Added checks to RPC requests and introduced new flags to customise the parameters (#657) * added a check to reject rpc requests with batch size > the one set using a newly added flag (rpcbatchlimit) * added a check to reject rpc requests whose result size > the one set using a newly added flag (rpcreturndatalimit) * updated the config files and docs * chg : trieTimeout from 60 to 10 mins (#692) * chg : trieTimeout from 60 to 10 mins * chg : cache.timout to 10m from 1h in configs * internal/cli/server : fix : added triesInMemory in config (#691) * changed version from 0.3.0 to 0.3.4-beta (#693) * fix nil state-sync issue, increase grpc limit (#695) * Increase grpc message size limit in pprof * consensus/bor/bor.go : stateSyncs init fixed [Fix #686] * eth/filters: handle nil state-sync before notify * eth/filters: update check Co-authored-by: Jerry Co-authored-by: Daniil * core, tests/bor: add more tests for state-sync validation (#710) * core: add get state sync function for tests * tests/bor: add validation for state sync events post consensus * Arpit/temp bor sync (#701) * Increase grpc message size limit in pprof * ReadBorReceipts improvements * use internal function * fix tests * fetch geth upstread for ReadBorReceiptRLP * Only query bor receipt when the query index is equal to # tx in block body This change reduces the frequency of calling ReadBorReceipt and ReadBorTransaction, which are CPU and db intensive. * Revert "fetch geth upstread for ReadBorReceiptRLP" This reverts commit 2e838a6b1313d26674f3a8df4b044e35dcbf35a0. * Restore ReadBorReceiptRLP * fix bor receipts * remove unused * fix lints --------- Co-authored-by: Jerry Co-authored-by: Manav Darji Co-authored-by: Evgeny Danienko <6655321@bk.ru> * Revert "chg : trieTimeout from 60 to 10 mins (#692)" (#720) This reverts commit 241843c7e7bb18e64d2e157fd6fbbd665f6ce9d9. 
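Editor's note on the newRPCTransactionFromBlockIndex hunk a little above: a bor state-sync transaction is not stored in the block body, so by convention it is addressed with an index equal to the number of ordinary transactions in the block, and its RPC hash is taken from the derived bor receipt rather than from the appended transaction object. A tiny sketch of that indexing convention, with a hypothetical helper name:

```go
package main

import "fmt"

// isStateSyncIndex reports whether a requested transaction index addresses
// the block's bor state-sync transaction rather than one stored in the
// block body: by convention it sits one past the last body transaction.
func isStateSyncIndex(index uint64, bodyTxCount int) bool {
	return index == uint64(bodyTxCount)
}

func main() {
	// A block with 12 ordinary transactions: index 12 is the state-sync slot.
	fmt.Println(isStateSyncIndex(12, 12)) // true
	fmt.Println(isStateSyncIndex(5, 12))  // false
}
```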
* Arpit/add execution pool 2 (#719) * initial * linters * linters * remove timeout * update pool * change pool size function * check nil * check nil * fix tests * Use execution pool from server in all handlers * simplify things * test fix * add support for cli, config * add to cli and config * merge base branch * debug statements * fix bug * atomic pointer timeout * add apis * update workerpool * fix issues * change params * fix issues * fix ipc issue * remove execution pool from IPC * revert * fix tests * mutex * refactor flag and value names * ordering fix * refactor flag and value names * update default ep size to 40 * fix bor start issues * revert file changes * debug statements * fix bug * update workerpool * atomic pointer timeout * add apis * Merge branch 'add-execution-pool' of github.com:maticnetwork/bor into arpit/add-execution-pool * fix issues * change params * fix issues * fix ipc issue * remove execution pool from IPC * revert * merge base branch * Merge branch 'add-execution-pool' of github.com:maticnetwork/bor into arpit/add-execution-pool * mutex * fix tests * Merge branch 'arpit/add-execution-pool' of github.com:maticnetwork/bor into arpit/add-execution-pool * Change default size of execution pool to 40 * refactor flag and value names * fix merge conflicts * ordering fix * refactor flag and value names * update default ep size to 40 * fix bor start issues * revert file changes * fix linters * fix go.mod * change sec to ms * change default value for ep timeout * fix node api calls * comment setter for ep timeout --------- Co-authored-by: Evgeny Danienko <6655321@bk.ru> Co-authored-by: Jerry Co-authored-by: Manav Darji * version change (#721) * Event based pprof (#732) * feature * Save pprof to /tmp --------- Co-authored-by: Jerry * Cherry-pick changes from develop (#738) * Check if block is nil to prevent panic (#736) * miner: use env for tracing instead of block object (#728) --------- Co-authored-by: Dmitry <46797839+dkeysil@users.noreply.github.com> * add max code init size check in txpool (#739) * Revert "Event based pprof" and update version (#742) * Revert "Event based pprof (#732)" This reverts commit 22fa4033e8fabb51c44e8d2a8c6bb695a6e9285e. 
* params: update version to 0.3.4-beta3 * packaging/templates: update bor version * internal/ethapi :: Fix : newRPCTransactionFromBlockIndex * Merge master to qa (#813) * Merge qa to master (#750) * Added checks to RPC requests and introduced new flags to customise the parameters (#657) * added a check to reject rpc requests with batch size > the one set using a newly added flag (rpcbatchlimit) * added a check to reject rpc requests whose result size > the one set using a newly added flag (rpcreturndatalimit) * updated the config files and docs * chg : trieTimeout from 60 to 10 mins (#692) * chg : trieTimeout from 60 to 10 mins * chg : cache.timout to 10m from 1h in configs * internal/cli/server : fix : added triesInMemory in config (#691) * changed version from 0.3.0 to 0.3.4-beta (#693) * fix nil state-sync issue, increase grpc limit (#695) * Increase grpc message size limit in pprof * consensus/bor/bor.go : stateSyncs init fixed [Fix #686] * eth/filters: handle nil state-sync before notify * eth/filters: update check Co-authored-by: Jerry Co-authored-by: Daniil * core, tests/bor: add more tests for state-sync validation (#710) * core: add get state sync function for tests * tests/bor: add validation for state sync events post consensus * Arpit/temp bor sync (#701) * Increase grpc message size limit in pprof * ReadBorReceipts improvements * use internal function * fix tests * fetch geth upstread for ReadBorReceiptRLP * Only query bor receipt when the query index is equal to # tx in block body This change reduces the frequency of calling ReadBorReceipt and ReadBorTransaction, which are CPU and db intensive. * Revert "fetch geth upstread for ReadBorReceiptRLP" This reverts commit 2e838a6b1313d26674f3a8df4b044e35dcbf35a0. * Restore ReadBorReceiptRLP * fix bor receipts * remove unused * fix lints --------- Co-authored-by: Jerry Co-authored-by: Manav Darji Co-authored-by: Evgeny Danienko <6655321@bk.ru> * Revert "chg : trieTimeout from 60 to 10 mins (#692)" (#720) This reverts commit 241843c7e7bb18e64d2e157fd6fbbd665f6ce9d9. 
* Arpit/add execution pool 2 (#719) * initial * linters * linters * remove timeout * update pool * change pool size function * check nil * check nil * fix tests * Use execution pool from server in all handlers * simplify things * test fix * add support for cli, config * add to cli and config * merge base branch * debug statements * fix bug * atomic pointer timeout * add apis * update workerpool * fix issues * change params * fix issues * fix ipc issue * remove execution pool from IPC * revert * fix tests * mutex * refactor flag and value names * ordering fix * refactor flag and value names * update default ep size to 40 * fix bor start issues * revert file changes * debug statements * fix bug * update workerpool * atomic pointer timeout * add apis * Merge branch 'add-execution-pool' of github.com:maticnetwork/bor into arpit/add-execution-pool * fix issues * change params * fix issues * fix ipc issue * remove execution pool from IPC * revert * merge base branch * Merge branch 'add-execution-pool' of github.com:maticnetwork/bor into arpit/add-execution-pool * mutex * fix tests * Merge branch 'arpit/add-execution-pool' of github.com:maticnetwork/bor into arpit/add-execution-pool * Change default size of execution pool to 40 * refactor flag and value names * fix merge conflicts * ordering fix * refactor flag and value names * update default ep size to 40 * fix bor start issues * revert file changes * fix linters * fix go.mod * change sec to ms * change default value for ep timeout * fix node api calls * comment setter for ep timeout --------- Co-authored-by: Evgeny Danienko <6655321@bk.ru> Co-authored-by: Jerry Co-authored-by: Manav Darji * version change (#721) * Event based pprof (#732) * feature * Save pprof to /tmp --------- Co-authored-by: Jerry * Cherry-pick changes from develop (#738) * Check if block is nil to prevent panic (#736) * miner: use env for tracing instead of block object (#728) --------- Co-authored-by: Dmitry <46797839+dkeysil@users.noreply.github.com> * add max code init size check in txpool (#739) * Revert "Event based pprof" and update version (#742) * Revert "Event based pprof (#732)" This reverts commit 22fa4033e8fabb51c44e8d2a8c6bb695a6e9285e. * params: update version to 0.3.4-beta3 * packaging/templates: update bor version * params, packaging/templates: update bor version --------- Co-authored-by: SHIVAM SHARMA Co-authored-by: Pratik Patil Co-authored-by: Jerry Co-authored-by: Daniil Co-authored-by: Arpit Temani Co-authored-by: Evgeny Danienko <6655321@bk.ru> Co-authored-by: Dmitry <46797839+dkeysil@users.noreply.github.com> * core, miner: add sub-spans for tracing (#753) * core, miner: add sub-spans for tracing * fix linters * core: add logs for debugging * core: add more logs to print tdd while reorg * fix linters * core: minor fix * core: remove debug logs * core: use different span for write block and set head * core: use internal context for sending traces (#755) * core: add : impossible reorg block dump (#754) * add : impossible reorg block dump * chg : 3 seperate files for impossoble reorg dump * add : use exportBlocks method and RLP blocks before writing * chg : small changes * bump : go version from 1.19 to 1.20.1 (#761) * Revert "bump : go version from 1.19 to 1.20.1 (#761)" This reverts commit 4561012af9a31d20c2715ce26e4b39ca93420b8b. 
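Editor's note on the "impossible reorg block dump" item above: before aborting, the node writes the competing chain segments out as RLP so the conflicting blocks can be inspected offline (the message mentions an exportBlocks method and three separate files). A minimal sketch of such an export under those assumptions; the file path and exact behaviour are illustrative, not the actual implementation.

```go
package main

import (
	"os"

	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/rlp"
)

// exportBlocks RLP-encodes each block into a single file so an impossible
// reorg can be reproduced and inspected later.
func exportBlocks(path string, blocks []*types.Block) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	for _, block := range blocks {
		if err := rlp.Encode(f, block); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// The real dump writes three such files (old chain, new chain, ancestor);
	// here an empty segment is exported just to show the call shape.
	_ = exportBlocks("/tmp/impossible-reorg-oldchain.rlp", nil)
}
```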
* core/vm: use optimized bigint (#26021) * Add holiman/big * Fix linter * Bump version to v0.3.5 * fix lints from develop (few lints decided to appear from code that was untouched, weird) * upgrade crypto lib version (#770) * bump dep : github.com/Masterminds/goutils to v1.1.1 (#769) * mardizzone/pos-1313: bump crypto dependency (#772) * dev: chg: bumd net dependency * dev: chg: bump crypto dependency * dev: chg: bump crypto dependency * bump dep : golang.org/x/net to v0.8.0 (#771) * Verify validator set against local contract on receiving an end-of-sprint block (#768) * Verify validator set against local contract on receiving an end-of-sprint block * Fix tests * Respect error returned by ParseValidators * Keep going back until a parent block presents * core/txpool: implement DoS defenses from geth (#778) * Hotfixes and deps bump (#776) * dev: chg: bump deps * internal/cli/server, rpc: lower down http readtimeout to 10s * dev: chg: get p2p adapter * dev: chg: lower down jsonrpc readtimeout to 10s * cherry-pick txpool optimisation changes * add check for empty lists in txpool (#704) * add check * linters * core, miner: add empty instrumentation name for tracing --------- Co-authored-by: Raneet Debnath Co-authored-by: SHIVAM SHARMA Co-authored-by: Evgeny Danilenko <6655321@bk.ru> Co-authored-by: Manav Darji * packaging,params: bump to v0.3.6 (#782) * v0.3.6 fix (#787) * Fix get validator set in header verifier * chg : commit tx logs from info to debug (#673) * chg : commit tx logs from info to debug * fix : minor changes * chg : miner : commitTransactions-stats moved from info to debug * lint : fix linters * refactor logging * miner : chg : UnauthorizedSignerError to debug * lint : fix lint * fix : log.Logger interface compatibility --------- Co-authored-by: Evgeny Danienko <6655321@bk.ru> * Remove unnecessary sorting of valset from header in verification * dev: chg: version bump --------- Co-authored-by: SHIVAM SHARMA Co-authored-by: Evgeny Danienko <6655321@bk.ru> Co-authored-by: marcello33 * core: improve locks in txpool (#807) * added a write lock to the txs.filter method and a read lock to the txs.reheap method - both of which are called by Filter during reorg adjustments to txpool * txpool reorg locks * more locks * locks * linters * params, packaging: update version for v0.3.8-beta release * core: add logs in reheap --------- Co-authored-by: Alex Co-authored-by: Evgeny Danienko <6655321@bk.ru> * Merge qa to master (#808) * Added checks to RPC requests and introduced new flags to customise the parameters (#657) * added a check to reject rpc requests with batch size > the one set using a newly added flag (rpcbatchlimit) * added a check to reject rpc requests whose result size > the one set using a newly added flag (rpcreturndatalimit) * updated the config files and docs * chg : trieTimeout from 60 to 10 mins (#692) * chg : trieTimeout from 60 to 10 mins * chg : cache.timout to 10m from 1h in configs * internal/cli/server : fix : added triesInMemory in config (#691) * changed version from 0.3.0 to 0.3.4-beta (#693) * fix nil state-sync issue, increase grpc limit (#695) * Increase grpc message size limit in pprof * consensus/bor/bor.go : stateSyncs init fixed [Fix #686] * eth/filters: handle nil state-sync before notify * eth/filters: update check Co-authored-by: Jerry Co-authored-by: Daniil * core, tests/bor: add more tests for state-sync validation (#710) * core: add get state sync function for tests * tests/bor: add validation for state sync events post consensus * Arpit/temp bor sync 
(#701) * Increase grpc message size limit in pprof * ReadBorReceipts improvements * use internal function * fix tests * fetch geth upstread for ReadBorReceiptRLP * Only query bor receipt when the query index is equal to # tx in block body This change reduces the frequency of calling ReadBorReceipt and ReadBorTransaction, which are CPU and db intensive. * Revert "fetch geth upstread for ReadBorReceiptRLP" This reverts commit 2e838a6b1313d26674f3a8df4b044e35dcbf35a0. * Restore ReadBorReceiptRLP * fix bor receipts * remove unused * fix lints --------- Co-authored-by: Jerry Co-authored-by: Manav Darji Co-authored-by: Evgeny Danienko <6655321@bk.ru> * Revert "chg : trieTimeout from 60 to 10 mins (#692)" (#720) This reverts commit 241843c7e7bb18e64d2e157fd6fbbd665f6ce9d9. * Arpit/add execution pool 2 (#719) * initial * linters * linters * remove timeout * update pool * change pool size function * check nil * check nil * fix tests * Use execution pool from server in all handlers * simplify things * test fix * add support for cli, config * add to cli and config * merge base branch * debug statements * fix bug * atomic pointer timeout * add apis * update workerpool * fix issues * change params * fix issues * fix ipc issue * remove execution pool from IPC * revert * fix tests * mutex * refactor flag and value names * ordering fix * refactor flag and value names * update default ep size to 40 * fix bor start issues * revert file changes * debug statements * fix bug * update workerpool * atomic pointer timeout * add apis * Merge branch 'add-execution-pool' of github.com:maticnetwork/bor into arpit/add-execution-pool * fix issues * change params * fix issues * fix ipc issue * remove execution pool from IPC * revert * merge base branch * Merge branch 'add-execution-pool' of github.com:maticnetwork/bor into arpit/add-execution-pool * mutex * fix tests * Merge branch 'arpit/add-execution-pool' of github.com:maticnetwork/bor into arpit/add-execution-pool * Change default size of execution pool to 40 * refactor flag and value names * fix merge conflicts * ordering fix * refactor flag and value names * update default ep size to 40 * fix bor start issues * revert file changes * fix linters * fix go.mod * change sec to ms * change default value for ep timeout * fix node api calls * comment setter for ep timeout --------- Co-authored-by: Evgeny Danienko <6655321@bk.ru> Co-authored-by: Jerry Co-authored-by: Manav Darji * version change (#721) * Event based pprof (#732) * feature * Save pprof to /tmp --------- Co-authored-by: Jerry * Cherry-pick changes from develop (#738) * Check if block is nil to prevent panic (#736) * miner: use env for tracing instead of block object (#728) --------- Co-authored-by: Dmitry <46797839+dkeysil@users.noreply.github.com> * add max code init size check in txpool (#739) * Revert "Event based pprof" and update version (#742) * Revert "Event based pprof (#732)" This reverts commit 22fa4033e8fabb51c44e8d2a8c6bb695a6e9285e. 
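Editor's note on the "Increase grpc message size limit in pprof" item near the start of this list: the point is to raise the gRPC server's message caps so large pprof payloads are not rejected at the transport layer. A minimal sketch of setting those caps with the standard google.golang.org/grpc server options; the 256 MiB figure is illustrative, not the value the patch uses.

```go
package main

import (
	"fmt"

	"google.golang.org/grpc"
)

func main() {
	// Raise both receive and send limits well above the 4 MiB receive
	// default so a large profile can travel in a single message.
	const maxMsgSize = 256 * 1024 * 1024 // illustrative

	server := grpc.NewServer(
		grpc.MaxRecvMsgSize(maxMsgSize),
		grpc.MaxSendMsgSize(maxMsgSize),
	)
	fmt.Printf("grpc server configured: %T\n", server)
}
```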
* params: update version to 0.3.4-beta3 * packaging/templates: update bor version * internal/ethapi :: Fix : newRPCTransactionFromBlockIndex * fix: remove assignment for bor receipt --------- Co-authored-by: SHIVAM SHARMA Co-authored-by: Pratik Patil Co-authored-by: Jerry Co-authored-by: Daniil Co-authored-by: Arpit Temani Co-authored-by: Evgeny Danienko <6655321@bk.ru> Co-authored-by: Dmitry <46797839+dkeysil@users.noreply.github.com> * Setting up bor to use hosted 18.04 runner as ubuntu provided 18.04 runner is end of life --------- Co-authored-by: SHIVAM SHARMA Co-authored-by: Pratik Patil Co-authored-by: Jerry Co-authored-by: Daniil Co-authored-by: Arpit Temani Co-authored-by: Evgeny Danienko <6655321@bk.ru> Co-authored-by: Dmitry <46797839+dkeysil@users.noreply.github.com> Co-authored-by: Martin Holst Swende Co-authored-by: marcello33 Co-authored-by: Raneet Debnath Co-authored-by: Raneet Debnath <35629432+Raneet10@users.noreply.github.com> Co-authored-by: Alex Co-authored-by: Daniel Jones * merge v0.3.9-alpha to qa (#824) * commit logs * CI: test launch devnet without hardcoded sleep time (pos-534) * CI: try using checked out bor path * CI: fix missing ; * CI: fix assignment operator * CI: echo peers and block no. * CI: cleanup * minor chg: add new line * dev: add: pos-944: snyk and govuln integration (#578) * dev: add: pos-944 security ci and readme * dev: add: pos-944 remove linters as this is included already in build ci * dev: chg: pos-947 dependencies upgrade to solve snyk security issues * dev: chg: update security-ci * dev: chg: remove linter to allow replacements for security issues * dev: add: pos-944 verify path when updating metrics from config * dev: add: pos-944 fix linter * dev: add: pos-944 add .snyk policy file / fix snyk code vulnerabilities * dev: fix: pos-944 import common package / gitignore snyk dccache file * dev: fix: pos-944 verify canonical path for crashers * dev: fix: pos-944 linter * dev: add: pos-976 add govuln check * dev: add: pos-976 test upload with permissions * dev: add: pos-976 remove duplicated upload * dev: add: pos-976 report upload * dev: add: pos-976 remove upload * dev: fix: pos-944 fix govuln action * dev: fix: pos-944 move govulncheck to security-ci * dev: fix: pos-944 bump golvun action and golang versions * dev: fix: pos-944 remove persmissions and fix conflicts * dev: chg: restore err msg * dev: chg: remove duplicated function * dev: chg: sort import * dev: chg: fix linter * dev: add: use common VerifyCrasher function to avoid duplications / replace deprecated ioutils.ReadFile * dev: fix: typo * fix linters * upgrade grpc version * add ignore rule for net/http2 * Shivam/txpool tracing (#604) * lock, unlock to rlock, runlock * add : tracing Pending() and Locals() * Log time spent in committing a tx during mining * Remove data from logging * Move log into case where a tx completes without error * profile fillTransactions * fix conflict * bug fixes * add logs * txpool: add tracing in Pending() * rearrange tracing * add attributes * fix * fix * log error in profiling * update file mode and file path for profiling * full profiling * fix log * fix log * less wait * fix * fix * logs * worker: use block number for prof files * initial * txList add * fix gas calculation * fix * green tests * linters * prettify * allocate less * no locks between pending and reorg * no locks * no locks on locals * more tests * linters * less allocs * comment * optimize errors * linters * fix * fix * Linters * linters * linters * simplify errors * atomics for 
transactions * fix * optimize workers * fix copy * linters * txpool tracing * linters * fix tracing * duration in mcs * locks * metrics * fix * cache hit/miss * less locks on evict * remove once * remove once * switch off pprof * fix data race * fix data race * add : sealed total/empty blocks metric gauge * add : RPC debug_getTraceStack * fix : RPC debug_getTraceStack * fix : RPC debug_getTraceStack for all go-routines * linters * add data race test on txpool * fix concurrency * noleak * increase batch size * prettify * tests * baseFee mutex * panic fix * linters * fix gas fee data race * linters * more transactions * debug * debug * fix ticker * fix test * add cacheMu * more tests * fix test panic * linters * add statistics * add statistics * txitems data race * fix tx list Has * fix : lint Co-authored-by: Arpit Temani Co-authored-by: Jerry Co-authored-by: Manav Darji Co-authored-by: Evgeny Danienko <6655321@bk.ru> * Reduce txArriveTimeout to 100ms * init : remove exit on keystore err * add : multiple keystore tolerance * lint : fix linters * chg : use standard logging * chg : logging strings * Added flags to run heimdall as a child process (#597) * Added flags to run heimdall as a child process * Fix: Lint * Fix btcd package dependency for CI * Update btcd package version * Try removing ambigious importts * dev: fix: lint fix for parallel tests * remove delete for ambigious import * Remove unwanted space * go mod tidy * try replace * try replace * use vendor * rename vendor * tidy * vendor btcec * clean up * remove submodule * remove submodule * remove submodule * remove submodule * remove vendor & added replacer in test * go mod tidy * added replacer * Update replacer * Update replacer * Merge branch 'develop' into run-heimdall-flags * Skip TestGolangBindings * Typo fix * Remove unwanted changes Co-authored-by: marcello33 Co-authored-by: Evgeny Danienko <6655321@bk.ru> * dev: chg: update PR template to include nodes audience check (#641) * dev: chg: update PR template to include nodes audience check * dev: chg: better description * dev: chg: add entry to changes too * sonarqube integration (#658) * dev: add: sonarqube integration into security-ci * dev: add: exclude java files from sonarqube analysis * Merge branch 'qa' and 'master' into develop (#663) * Adding in Mumbai/Mainnet precursor deb packaging for tests to use during upgrade(iterations to come) * Added changes per discussion in PR, more changes may be necessary * Adding prerelease true * Disabling goreleaser * Removing README swap file * change bor_dir and add bor user for v0.3.0 release * rollback bor user and use root * metrics: handle equal to separated config flag (#596) * metrics: handle based config path * internal/cli/server: add more context to logs * use space separated flag and value in bor.service * fixed static-nodes related buf (os independent) (#598) * fixed static-nodes related buf (os independent) * taking static-nodes as input if default not present * Update default flags (#600) * internal/cli/server: use geth's default for txpool.pricelimit and add comments * builder/files: update config.toml for mainnet * packaging/templates: update defaults for mainnet and mumbai * internal/cli/server: skip overriding cache * packaging/templates: update cache value for mainnet * packaging/templates: update gcmode for archive mumbai node * metrics: handle nil telemetry config (#601) * resolve merge conflicts * update go version in release.yml * update goversion in makefile * update Docker login for goreleaser-cross v1.19 * 
Cleanup for the packager to use git tag in the package profile naming. Added conditional check for directory structure, this is in prep for v0.3.1, as this will create a failure on upgrade path in package due to file exist * added a toml configuration file with comments describing each flag (#607) * added a toml configuration file with comments describing each flag * internal/cli/server: update flag description * docs/cli: update example config and description of flags * docs: update new-cli docs Co-authored-by: Manav Darji * Adding of 0.3.0 package changes, control file updates, postinst changes, and packager update * added ancient datadir flag and toml field, need to decide on default value and update the conversion script * updated toml files with ancient field * Add support for new flags in new config.toml, which were present in old config.toml (#612) * added HTTPTimeouts, and TrieTimeout flag in new tol, from old toml * added RAW fields for these time.Duration flags * updated the conversion script to support these extra 4 flags * removed hcl and json config tests as we are only supporting toml config files * updated toml files with cache.timeout field * updated toml files with jsonrpc.timeouts field * tests/bor: expect a call for latest checkpoint * tests/bor: expect a call for latest checkpoint * packaging/templates: update cache values for archive nodes Co-authored-by: Manav Darji * remove unwanted code * Fix docker publish authentication issue In gorelease-cross 1.19+, dockerhub authentication will require docker logion action followed by mounting docker config file. See https://github.com/goreleaser/goreleaser-cross#github-actions. * Revert "update Docker login for goreleaser-cross v1.19" This reverts commit 4d19cf5342a439d98cca21b03c63a0bc075769cf. * Bump version to stable * Revert "Merge pull request #435 from maticnetwork/POS-553" This reverts commit 657d262defc9c94e9513b3d45230492d8b20eac7, reversing changes made to 88dbfa1c13c15464d3c1a3085a9f12d0ffb9b218. * revert change for release for go1.19 * Add default values to CLI helper and docs This commit adds default values to CLI helper and docs. When the default value of a string flag, slice string flag, or map string flag is empty, its helper message won't show any default value. 
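Editor's note on the CLI-defaults item just above: when a string, slice-string, or map-string flag has an empty default, its help line should not print a default suffix at all. A minimal sketch of that rule, with hypothetical names:

```go
package main

import "fmt"

// helpLine renders a flag's usage string, appending the default value only
// when there is something meaningful to show.
func helpLine(name, usage, def string) string {
	if def == "" {
		return fmt.Sprintf("--%s: %s", name, usage)
	}
	return fmt.Sprintf("--%s: %s (default: %s)", name, usage, def)
}

func main() {
	fmt.Println(helpLine("datadir", "data directory", "~/.bor"))
	fmt.Println(helpLine("unlock", "accounts to unlock", "")) // no default shown
}
```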
* Add a summary of new CLI in docs * Updating packager as binutils changed version so that apt-get installs current versions * Add state pruning to new CLI * Minor wording fix in prune state description * Bumping control file versions * Mainnet Delhi fork * Set version to stable * change delhi hardfork block number * handle future chain import and skip peer drop (#650) * handle future chain import and skip peer drop * add block import metric * params: bump version to v0.3.3-stable * Bump bor version in control files for v0.3.3 mainnet release Co-authored-by: Daniel Jones Co-authored-by: Will Button Co-authored-by: Will Button Co-authored-by: Manav Darji Co-authored-by: Daniel Jones <105369507+djpolygon@users.noreply.github.com> Co-authored-by: Pratik Patil Co-authored-by: Arpit Temani Co-authored-by: Jerry * CI: use matic-cli master branch * trigger ci * internal/cli/server : fix : added triesInMemory in config (#677) * update requirements in README (#681) * consensus/bor : add : devFakeAuthor flag * core,eth,internal/cli,internal/ethapi: add --rpc.allow-unprotected-txs flag to allow txs to get replayed (for shadow node) * internal/cli: add `skiptrace` flag for profiling (#715) * internal/cli: add skiptrace flag * docs: update docs for skiptrace flag * Added flag in Status to wait for backend, and fixed panic issue. (#708) * checking if backend is available during status call, and added a flag to wait if backend is not available * added test for status command (does not cover the whole code) * Revert "Reduce txArriveTimeout to 100ms" (#707) This reverts commit 243d231fe45bc02f33678bb4f69e941167d7f466. * consensus/bor : add : devFakeAuthor flag (#697) * add check for empty lists in txpool (#704) * add check * linters * dev: chg: POS-215 move sonarqube to own ci (#733) * Added verbosity flag, supports log-level as well, but will remove that in future. 
(#722) * changed log-level flag back to verbosity, and updated the conversion script * supporting both verbosity and log-level, with a message to deprecat log-level * converted verbosity to a int value * Check if block is nil to prevent panic (#736) * miner: use env for tracing instead of block object (#728) * Add : mutex pprof profile (#731) * add : mutex pprof profile * rm : remove trace from default pprof profiles * chg : commit tx logs from info to debug (#673) * chg : commit tx logs from info to debug * fix : minor changes * chg : miner : commitTransactions-stats moved from info to debug * lint : fix linters * refactor logging * miner : chg : UnauthorizedSignerError to debug * lint : fix lint * fix : log.Logger interface compatibility --------- Co-authored-by: Evgeny Danienko <6655321@bk.ru> * Add : commit details to bor version (#730) * add : commit details to bor version * fix : MAKEFILE * undo : rm test-txpool-race * rm : params/build_date * rm : params/gitbranch and params/gitdate * core,docs/cli,internal/cli/server: make docs * builder,docs/cli,packaging: update toml files * mardizzone/hotfix-snyk: remove vcs build when running snyk (#745) * Feat : SetMaxPeers (#726) * init : add admin.setMaxPeers and getMaxPeers * lint : fix lint * add : some comments * Update wiki link (#762) * Heimdall App implementation (#646) * added support for miner.recommit flag (#743) * interrupting commit experiment (#556) * initial * delete * linters * big benchmark * benchmark big ints * delay * fix generate * remove debug * miner : chg : remove noempty check * fix lints * consensus/bor: handle unauthorized signer in consensus.Prepare (#651) * fix : break loop fix * lint : fix * lint : more lint fix * fix : skip TestEmptyWorkEthash and TestEmptyWorkClique * add : metrics commitInterruptCounter --------- Co-authored-by: Shivam Sharma Co-authored-by: Arpit Temani Co-authored-by: Manav Darji * internal/cli: added missing flags (#744) * minor comment update * added support for rpc.evmtimeout flag * added support for vmdebug (EnablePreimageRecording) flag * added support for jsonrpc.auth.(jwtsecret, addr, port, vhosts) flags * added support for miner.recommit flag * added support for gpo.maxheaderhistory and gpo.maxblockhistory flag * Revert "added support for miner.recommit flag" This reverts commit fcb722e4bf5bdda23d4c91da40ab95c9023a80c3. * added pprof related flags (expect --pprof.cpuprofile - Write CPU profile to the given file) * added support for --dev.gaslimit flag * added support for --fdlimit flag * added support for --netrestrict flag * added support for --nodekey and --nodekeyhex flag * added support for --vmodule, --log.json, --log.backtrace, and --log.debug flags * fixed related lint errors * fix lints from develop (few lints decided to appear from code that was untouched, weird) * more weird lints from develop * small precautionary fix * small bug ;) fix in NetRestrict * weird lints * change vmdebug = true to vmdebug = false * internal/ethapi: set rpc gas cap as default gas limit when creating access list * add : testcase for CommitInterruptExperiment (#792) * add : TestCommitInterruptExperimentBor * lint : fix lint * chg : add waitgroup * dev: chg: continue on error when uploading snyk results to GH (#795) * consensus/bor: revert handle unauthorized signer in consensus.Prepare * Revert "consensus/bor: handle unauthorized signer in consensus.Prepare (#651)" This reverts commit 9ce8c7de75a1b5021bb6f2b8086e813c8244b22c. 
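Editor's note on the "UnauthorizedSignerError to debug" item, which ties back to the miner/worker.go hunk earlier in this series: when Prepare fails only because the local node is not the expected signer, the condition is routine and is logged at debug, while every other preparation failure stays at error. A compact sketch of that level selection; the error type here is a local stand-in for bor's UnauthorizedSignerError, not the real struct.

```go
package main

import (
	"errors"
	"fmt"
)

// unauthorizedSignerError stands in for bor's UnauthorizedSignerError: the
// local key is simply not the in-turn signer for this block, which is a
// routine condition for non-validators.
type unauthorizedSignerError struct{ signer string }

func (e *unauthorizedSignerError) Error() string {
	return fmt.Sprintf("unauthorized signer %s", e.signer)
}

// logPrepareFailure picks the log level based on the error type, mirroring
// the switch added around engine.Prepare in the miner.
func logPrepareFailure(err error) {
	var unauth *unauthorizedSignerError
	if errors.As(err, &unauth) {
		fmt.Println("DEBUG Failed to prepare header for sealing:", err)
		return
	}
	fmt.Println("ERROR Failed to prepare header for sealing:", err)
}

func main() {
	logPrepareFailure(&unauthorizedSignerError{signer: "0xabc"})
	logPrepareFailure(errors.New("unknown ancestor"))
}
```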
* eth/downloader/whitelist: skip future chain validation (#796) * eth/downloader/whitelist: skip future chain validation * eth/downloader/whitelist: fix tests for future chain import * Setting up bor to use hosted 18.04 runner as ubuntu provided 18.04 runner is end of life * Merge qa to develop (#814) * Adding in Mumbai/Mainnet precursor deb packaging for tests to use during upgrade(iterations to come) * Added changes per discussion in PR, more changes may be necessary * Adding prerelease true * Disabling goreleaser * Removing README swap file * change bor_dir and add bor user for v0.3.0 release * rollback bor user and use root * metrics: handle equal to separated config flag (#596) * metrics: handle based config path * internal/cli/server: add more context to logs * use space separated flag and value in bor.service * fixed static-nodes related buf (os independent) (#598) * fixed static-nodes related buf (os independent) * taking static-nodes as input if default not present * Update default flags (#600) * internal/cli/server: use geth's default for txpool.pricelimit and add comments * builder/files: update config.toml for mainnet * packaging/templates: update defaults for mainnet and mumbai * internal/cli/server: skip overriding cache * packaging/templates: update cache value for mainnet * packaging/templates: update gcmode for archive mumbai node * metrics: handle nil telemetry config (#601) * resolve merge conflicts * update go version in release.yml * update goversion in makefile * update Docker login for goreleaser-cross v1.19 * Cleanup for the packager to use git tag in the package profile naming. Added conditional check for directory structure, this is in prep for v0.3.1, as this will create a failure on upgrade path in package due to file exist * added a toml configuration file with comments describing each flag (#607) * added a toml configuration file with comments describing each flag * internal/cli/server: update flag description * docs/cli: update example config and description of flags * docs: update new-cli docs Co-authored-by: Manav Darji * Adding of 0.3.0 package changes, control file updates, postinst changes, and packager update * added ancient datadir flag and toml field, need to decide on default value and update the conversion script * updated toml files with ancient field * Add support for new flags in new config.toml, which were present in old config.toml (#612) * added HTTPTimeouts, and TrieTimeout flag in new tol, from old toml * added RAW fields for these time.Duration flags * updated the conversion script to support these extra 4 flags * removed hcl and json config tests as we are only supporting toml config files * updated toml files with cache.timeout field * updated toml files with jsonrpc.timeouts field * tests/bor: expect a call for latest checkpoint * tests/bor: expect a call for latest checkpoint * packaging/templates: update cache values for archive nodes Co-authored-by: Manav Darji * remove unwanted code * Fix docker publish authentication issue In gorelease-cross 1.19+, dockerhub authentication will require docker logion action followed by mounting docker config file. See https://github.com/goreleaser/goreleaser-cross#github-actions. * Revert "update Docker login for goreleaser-cross v1.19" This reverts commit 4d19cf5342a439d98cca21b03c63a0bc075769cf. 
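Editor's note on the "metrics: handle equal to separated config flag" item above: the fix is about reading the config path whether it was passed as "--config path" or "--config=path". A minimal sketch of accepting both spellings when scanning an argument list; the flag name and helper are examples only.

```go
package main

import (
	"fmt"
	"strings"
)

// configPath finds the value of --config in an argument list, accepting
// both the space-separated and the equals-separated form.
func configPath(args []string) (string, bool) {
	for i, arg := range args {
		if arg == "--config" && i+1 < len(args) {
			return args[i+1], true
		}
		if strings.HasPrefix(arg, "--config=") {
			return strings.TrimPrefix(arg, "--config="), true
		}
	}
	return "", false
}

func main() {
	fmt.Println(configPath([]string{"server", "--config", "/etc/bor/config.toml"}))
	fmt.Println(configPath([]string{"server", "--config=/etc/bor/config.toml"}))
}
```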
* Bump version to stable * Revert "Merge pull request #435 from maticnetwork/POS-553" This reverts commit 657d262defc9c94e9513b3d45230492d8b20eac7, reversing changes made to 88dbfa1c13c15464d3c1a3085a9f12d0ffb9b218. * revert change for release for go1.19 * Add default values to CLI helper and docs This commit adds default values to CLI helper and docs. When the default value of a string flag, slice string flag, or map string flag is empty, its helper message won't show any default value. * Add a summary of new CLI in docs * Updating packager as binutils changed version so that apt-get installs current versions * Add state pruning to new CLI * Minor wording fix in prune state description * Bumping control file versions * Mainnet Delhi fork * Set version to stable * change delhi hardfork block number * handle future chain import and skip peer drop (#650) * handle future chain import and skip peer drop * add block import metric * params: bump version to v0.3.3-stable * Bump bor version in control files for v0.3.3 mainnet release * Added checks to RPC requests and introduced new flags to customise the parameters (#657) * added a check to reject rpc requests with batch size > the one set using a newly added flag (rpcbatchlimit) * added a check to reject rpc requests whose result size > the one set using a newly added flag (rpcreturndatalimit) * updated the config files and docs * chg : trieTimeout from 60 to 10 mins (#692) * chg : trieTimeout from 60 to 10 mins * chg : cache.timout to 10m from 1h in configs * internal/cli/server : fix : added triesInMemory in config (#691) * changed version from 0.3.0 to 0.3.4-beta (#693) * fix nil state-sync issue, increase grpc limit (#695) * Increase grpc message size limit in pprof * consensus/bor/bor.go : stateSyncs init fixed [Fix #686] * eth/filters: handle nil state-sync before notify * eth/filters: update check Co-authored-by: Jerry Co-authored-by: Daniil * core, tests/bor: add more tests for state-sync validation (#710) * core: add get state sync function for tests * tests/bor: add validation for state sync events post consensus * Arpit/temp bor sync (#701) * Increase grpc message size limit in pprof * ReadBorReceipts improvements * use internal function * fix tests * fetch geth upstread for ReadBorReceiptRLP * Only query bor receipt when the query index is equal to # tx in block body This change reduces the frequency of calling ReadBorReceipt and ReadBorTransaction, which are CPU and db intensive. * Revert "fetch geth upstread for ReadBorReceiptRLP" This reverts commit 2e838a6b1313d26674f3a8df4b044e35dcbf35a0. * Restore ReadBorReceiptRLP * fix bor receipts * remove unused * fix lints --------- Co-authored-by: Jerry Co-authored-by: Manav Darji Co-authored-by: Evgeny Danienko <6655321@bk.ru> * Revert "chg : trieTimeout from 60 to 10 mins (#692)" (#720) This reverts commit 241843c7e7bb18e64d2e157fd6fbbd665f6ce9d9. 
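The rpcbatchlimit / rpcreturndatalimit change listed above (#657) rejects oversized JSON-RPC batches and responses. The sketch below only illustrates the kind of guard such flags enable; the constant names and values are placeholders, not bor's flag defaults:

    package main

    import (
        "errors"
        "fmt"
    )

    // Illustrative limits; in the change above they come from the rpcbatchlimit
    // and rpcreturndatalimit flags rather than constants.
    const (
        batchRequestLimit  = 100              // max calls allowed in one batch
        batchResponseLimit = 10 * 1024 * 1024 // max cumulative response size in bytes
    )

    var (
        errBatchTooLarge    = errors.New("batch too large")
        errResponseTooLarge = errors.New("batch response too large")
    )

    // checkBatchLimits rejects a batch that has too many calls, or one whose
    // accumulated response size has already crossed the configured cap.
    func checkBatchLimits(batchLen, responseBytes int) error {
        if batchRequestLimit > 0 && batchLen > batchRequestLimit {
            return errBatchTooLarge
        }
        if batchResponseLimit > 0 && responseBytes > batchResponseLimit {
            return errResponseTooLarge
        }
        return nil
    }

    func main() {
        fmt.Println(checkBatchLimits(500, 0)) // batch too large
    }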
* Arpit/add execution pool 2 (#719) * initial * linters * linters * remove timeout * update pool * change pool size function * check nil * check nil * fix tests * Use execution pool from server in all handlers * simplify things * test fix * add support for cli, config * add to cli and config * merge base branch * debug statements * fix bug * atomic pointer timeout * add apis * update workerpool * fix issues * change params * fix issues * fix ipc issue * remove execution pool from IPC * revert * fix tests * mutex * refactor flag and value names * ordering fix * refactor flag and value names * update default ep size to 40 * fix bor start issues * revert file changes * debug statements * fix bug * update workerpool * atomic pointer timeout * add apis * Merge branch 'add-execution-pool' of github.com:maticnetwork/bor into arpit/add-execution-pool * fix issues * change params * fix issues * fix ipc issue * remove execution pool from IPC * revert * merge base branch * Merge branch 'add-execution-pool' of github.com:maticnetwork/bor into arpit/add-execution-pool * mutex * fix tests * Merge branch 'arpit/add-execution-pool' of github.com:maticnetwork/bor into arpit/add-execution-pool * Change default size of execution pool to 40 * refactor flag and value names * fix merge conflicts * ordering fix * refactor flag and value names * update default ep size to 40 * fix bor start issues * revert file changes * fix linters * fix go.mod * change sec to ms * change default value for ep timeout * fix node api calls * comment setter for ep timeout --------- Co-authored-by: Evgeny Danienko <6655321@bk.ru> Co-authored-by: Jerry Co-authored-by: Manav Darji * version change (#721) * Event based pprof (#732) * feature * Save pprof to /tmp --------- Co-authored-by: Jerry * Cherry-pick changes from develop (#738) * Check if block is nil to prevent panic (#736) * miner: use env for tracing instead of block object (#728) --------- Co-authored-by: Dmitry <46797839+dkeysil@users.noreply.github.com> * add max code init size check in txpool (#739) * Revert "Event based pprof" and update version (#742) * Revert "Event based pprof (#732)" This reverts commit 22fa4033e8fabb51c44e8d2a8c6bb695a6e9285e. 
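The execution pool work above (#719) bounds how many RPC requests run concurrently and lets the timeout be changed at runtime (later items switch the unit from seconds to milliseconds and settle on a default pool size of 40). A minimal sketch of such a pool, using assumed names rather than the actual bor types:

    package main

    import (
        "context"
        "fmt"
        "sync/atomic"
        "time"
    )

    // execPool bounds concurrent request execution and carries a timeout that
    // can be swapped atomically while the node is running.
    type execPool struct {
        slots   chan struct{}
        timeout atomic.Pointer[time.Duration]
    }

    func newExecPool(size int, timeout time.Duration) *execPool {
        p := &execPool{slots: make(chan struct{}, size)}
        p.timeout.Store(&timeout)
        return p
    }

    // SetTimeout swaps the timeout without locking callers out of the pool.
    func (p *execPool) SetTimeout(d time.Duration) { p.timeout.Store(&d) }

    // Run waits for a free slot, then executes fn under the current timeout.
    func (p *execPool) Run(ctx context.Context, fn func(context.Context) error) error {
        p.slots <- struct{}{}
        defer func() { <-p.slots }()

        ctx, cancel := context.WithTimeout(ctx, *p.timeout.Load())
        defer cancel()

        return fn(ctx)
    }

    func main() {
        pool := newExecPool(40, 5*time.Second) // pool size of 40, per the changelog above
        err := pool.Run(context.Background(), func(ctx context.Context) error {
            fmt.Println("handling request")
            return nil
        })
        fmt.Println("err:", err)
    }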
* params: update version to 0.3.4-beta3 * packaging/templates: update bor version * internal/ethapi :: Fix : newRPCTransactionFromBlockIndex * Merge master to qa (#813) * Merge qa to master (#750) * Added checks to RPC requests and introduced new flags to customise the parameters (#657) * added a check to reject rpc requests with batch size > the one set using a newly added flag (rpcbatchlimit) * added a check to reject rpc requests whose result size > the one set using a newly added flag (rpcreturndatalimit) * updated the config files and docs * chg : trieTimeout from 60 to 10 mins (#692) * chg : trieTimeout from 60 to 10 mins * chg : cache.timout to 10m from 1h in configs * internal/cli/server : fix : added triesInMemory in config (#691) * changed version from 0.3.0 to 0.3.4-beta (#693) * fix nil state-sync issue, increase grpc limit (#695) * Increase grpc message size limit in pprof * consensus/bor/bor.go : stateSyncs init fixed [Fix #686] * eth/filters: handle nil state-sync before notify * eth/filters: update check Co-authored-by: Jerry Co-authored-by: Daniil * core, tests/bor: add more tests for state-sync validation (#710) * core: add get state sync function for tests * tests/bor: add validation for state sync events post consensus * Arpit/temp bor sync (#701) * Increase grpc message size limit in pprof * ReadBorReceipts improvements * use internal function * fix tests * fetch geth upstread for ReadBorReceiptRLP * Only query bor receipt when the query index is equal to # tx in block body This change reduces the frequency of calling ReadBorReceipt and ReadBorTransaction, which are CPU and db intensive. * Revert "fetch geth upstread for ReadBorReceiptRLP" This reverts commit 2e838a6b1313d26674f3a8df4b044e35dcbf35a0. * Restore ReadBorReceiptRLP * fix bor receipts * remove unused * fix lints --------- Co-authored-by: Jerry Co-authored-by: Manav Darji Co-authored-by: Evgeny Danienko <6655321@bk.ru> * Revert "chg : trieTimeout from 60 to 10 mins (#692)" (#720) This reverts commit 241843c7e7bb18e64d2e157fd6fbbd665f6ce9d9. 
* Arpit/add execution pool 2 (#719) * initial * linters * linters * remove timeout * update pool * change pool size function * check nil * check nil * fix tests * Use execution pool from server in all handlers * simplify things * test fix * add support for cli, config * add to cli and config * merge base branch * debug statements * fix bug * atomic pointer timeout * add apis * update workerpool * fix issues * change params * fix issues * fix ipc issue * remove execution pool from IPC * revert * fix tests * mutex * refactor flag and value names * ordering fix * refactor flag and value names * update default ep size to 40 * fix bor start issues * revert file changes * debug statements * fix bug * update workerpool * atomic pointer timeout * add apis * Merge branch 'add-execution-pool' of github.com:maticnetwork/bor into arpit/add-execution-pool * fix issues * change params * fix issues * fix ipc issue * remove execution pool from IPC * revert * merge base branch * Merge branch 'add-execution-pool' of github.com:maticnetwork/bor into arpit/add-execution-pool * mutex * fix tests * Merge branch 'arpit/add-execution-pool' of github.com:maticnetwork/bor into arpit/add-execution-pool * Change default size of execution pool to 40 * refactor flag and value names * fix merge conflicts * ordering fix * refactor flag and value names * update default ep size to 40 * fix bor start issues * revert file changes * fix linters * fix go.mod * change sec to ms * change default value for ep timeout * fix node api calls * comment setter for ep timeout --------- Co-authored-by: Evgeny Danienko <6655321@bk.ru> Co-authored-by: Jerry Co-authored-by: Manav Darji * version change (#721) * Event based pprof (#732) * feature * Save pprof to /tmp --------- Co-authored-by: Jerry * Cherry-pick changes from develop (#738) * Check if block is nil to prevent panic (#736) * miner: use env for tracing instead of block object (#728) --------- Co-authored-by: Dmitry <46797839+dkeysil@users.noreply.github.com> * add max code init size check in txpool (#739) * Revert "Event based pprof" and update version (#742) * Revert "Event based pprof (#732)" This reverts commit 22fa4033e8fabb51c44e8d2a8c6bb695a6e9285e. * params: update version to 0.3.4-beta3 * packaging/templates: update bor version * params, packaging/templates: update bor version --------- Co-authored-by: SHIVAM SHARMA Co-authored-by: Pratik Patil Co-authored-by: Jerry Co-authored-by: Daniil Co-authored-by: Arpit Temani Co-authored-by: Evgeny Danienko <6655321@bk.ru> Co-authored-by: Dmitry <46797839+dkeysil@users.noreply.github.com> * core, miner: add sub-spans for tracing (#753) * core, miner: add sub-spans for tracing * fix linters * core: add logs for debugging * core: add more logs to print tdd while reorg * fix linters * core: minor fix * core: remove debug logs * core: use different span for write block and set head * core: use internal context for sending traces (#755) * core: add : impossible reorg block dump (#754) * add : impossible reorg block dump * chg : 3 seperate files for impossoble reorg dump * add : use exportBlocks method and RLP blocks before writing * chg : small changes * bump : go version from 1.19 to 1.20.1 (#761) * Revert "bump : go version from 1.19 to 1.20.1 (#761)" This reverts commit 4561012af9a31d20c2715ce26e4b39ca93420b8b. 
* core/vm: use optimized bigint (#26021) * Add holiman/big * Fix linter * Bump version to v0.3.5 * fix lints from develop (few lints decided to appear from code that was untouched, weird) * upgrade crypto lib version (#770) * bump dep : github.com/Masterminds/goutils to v1.1.1 (#769) * mardizzone/pos-1313: bump crypto dependency (#772) * dev: chg: bumd net dependency * dev: chg: bump crypto dependency * dev: chg: bump crypto dependency * bump dep : golang.org/x/net to v0.8.0 (#771) * Verify validator set against local contract on receiving an end-of-sprint block (#768) * Verify validator set against local contract on receiving an end-of-sprint block * Fix tests * Respect error returned by ParseValidators * Keep going back until a parent block presents * core/txpool: implement DoS defenses from geth (#778) * Hotfixes and deps bump (#776) * dev: chg: bump deps * internal/cli/server, rpc: lower down http readtimeout to 10s * dev: chg: get p2p adapter * dev: chg: lower down jsonrpc readtimeout to 10s * cherry-pick txpool optimisation changes * add check for empty lists in txpool (#704) * add check * linters * core, miner: add empty instrumentation name for tracing --------- Co-authored-by: Raneet Debnath Co-authored-by: SHIVAM SHARMA Co-authored-by: Evgeny Danilenko <6655321@bk.ru> Co-authored-by: Manav Darji * packaging,params: bump to v0.3.6 (#782) * v0.3.6 fix (#787) * Fix get validator set in header verifier * chg : commit tx logs from info to debug (#673) * chg : commit tx logs from info to debug * fix : minor changes * chg : miner : commitTransactions-stats moved from info to debug * lint : fix linters * refactor logging * miner : chg : UnauthorizedSignerError to debug * lint : fix lint * fix : log.Logger interface compatibility --------- Co-authored-by: Evgeny Danienko <6655321@bk.ru> * Remove unnecessary sorting of valset from header in verification * dev: chg: version bump --------- Co-authored-by: SHIVAM SHARMA Co-authored-by: Evgeny Danienko <6655321@bk.ru> Co-authored-by: marcello33 * core: improve locks in txpool (#807) * added a write lock to the txs.filter method and a read lock to the txs.reheap method - both of which are called by Filter during reorg adjustments to txpool * txpool reorg locks * more locks * locks * linters * params, packaging: update version for v0.3.8-beta release * core: add logs in reheap --------- Co-authored-by: Alex Co-authored-by: Evgeny Danienko <6655321@bk.ru> * Merge qa to master (#808) * Added checks to RPC requests and introduced new flags to customise the parameters (#657) * added a check to reject rpc requests with batch size > the one set using a newly added flag (rpcbatchlimit) * added a check to reject rpc requests whose result size > the one set using a newly added flag (rpcreturndatalimit) * updated the config files and docs * chg : trieTimeout from 60 to 10 mins (#692) * chg : trieTimeout from 60 to 10 mins * chg : cache.timout to 10m from 1h in configs * internal/cli/server : fix : added triesInMemory in config (#691) * changed version from 0.3.0 to 0.3.4-beta (#693) * fix nil state-sync issue, increase grpc limit (#695) * Increase grpc message size limit in pprof * consensus/bor/bor.go : stateSyncs init fixed [Fix #686] * eth/filters: handle nil state-sync before notify * eth/filters: update check Co-authored-by: Jerry Co-authored-by: Daniil * core, tests/bor: add more tests for state-sync validation (#710) * core: add get state sync function for tests * tests/bor: add validation for state sync events post consensus * Arpit/temp bor sync 
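The "core: improve locks in txpool (#807)" entry above adds a write lock around the mutation in txs.filter and a read lock around txs.reheap. The sketch below only illustrates the general sync.RWMutex pattern (mutators take the write lock, read-only passes share the read lock) with toy types; it does not reproduce the actual txpool structures or exact lock placement:

    package main

    import (
        "fmt"
        "sync"
    )

    // sortedTxs is a toy stand-in for a nonce-sorted transaction list.
    type sortedTxs struct {
        mu    sync.RWMutex
        items []uint64 // e.g. nonces
    }

    // filter mutates the slice, so it must hold the write lock.
    func (s *sortedTxs) filter(keep func(uint64) bool) {
        s.mu.Lock()
        defer s.mu.Unlock()

        kept := s.items[:0]
        for _, tx := range s.items {
            if keep(tx) {
                kept = append(kept, tx)
            }
        }
        s.items = kept
    }

    // flatten only reads the slice, so the read lock is enough and concurrent
    // readers do not serialize behind each other.
    func (s *sortedTxs) flatten() []uint64 {
        s.mu.RLock()
        defer s.mu.RUnlock()

        return append([]uint64(nil), s.items...)
    }

    func main() {
        s := &sortedTxs{items: []uint64{3, 1, 2}}
        s.filter(func(n uint64) bool { return n != 2 })
        fmt.Println(s.flatten()) // [3 1]
    }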
(#701) * Increase grpc message size limit in pprof * ReadBorReceipts improvements * use internal function * fix tests * fetch geth upstread for ReadBorReceiptRLP * Only query bor receipt when the query index is equal to # tx in block body This change reduces the frequency of calling ReadBorReceipt and ReadBorTransaction, which are CPU and db intensive. * Revert "fetch geth upstread for ReadBorReceiptRLP" This reverts commit 2e838a6b1313d26674f3a8df4b044e35dcbf35a0. * Restore ReadBorReceiptRLP * fix bor receipts * remove unused * fix lints --------- Co-authored-by: Jerry Co-authored-by: Manav Darji Co-authored-by: Evgeny Danienko <6655321@bk.ru> * Revert "chg : trieTimeout from 60 to 10 mins (#692)" (#720) This reverts commit 241843c7e7bb18e64d2e157fd6fbbd665f6ce9d9. * Arpit/add execution pool 2 (#719) * initial * linters * linters * remove timeout * update pool * change pool size function * check nil * check nil * fix tests * Use execution pool from server in all handlers * simplify things * test fix * add support for cli, config * add to cli and config * merge base branch * debug statements * fix bug * atomic pointer timeout * add apis * update workerpool * fix issues * change params * fix issues * fix ipc issue * remove execution pool from IPC * revert * fix tests * mutex * refactor flag and value names * ordering fix * refactor flag and value names * update default ep size to 40 * fix bor start issues * revert file changes * debug statements * fix bug * update workerpool * atomic pointer timeout * add apis * Merge branch 'add-execution-pool' of github.com:maticnetwork/bor into arpit/add-execution-pool * fix issues * change params * fix issues * fix ipc issue * remove execution pool from IPC * revert * merge base branch * Merge branch 'add-execution-pool' of github.com:maticnetwork/bor into arpit/add-execution-pool * mutex * fix tests * Merge branch 'arpit/add-execution-pool' of github.com:maticnetwork/bor into arpit/add-execution-pool * Change default size of execution pool to 40 * refactor flag and value names * fix merge conflicts * ordering fix * refactor flag and value names * update default ep size to 40 * fix bor start issues * revert file changes * fix linters * fix go.mod * change sec to ms * change default value for ep timeout * fix node api calls * comment setter for ep timeout --------- Co-authored-by: Evgeny Danienko <6655321@bk.ru> Co-authored-by: Jerry Co-authored-by: Manav Darji * version change (#721) * Event based pprof (#732) * feature * Save pprof to /tmp --------- Co-authored-by: Jerry * Cherry-pick changes from develop (#738) * Check if block is nil to prevent panic (#736) * miner: use env for tracing instead of block object (#728) --------- Co-authored-by: Dmitry <46797839+dkeysil@users.noreply.github.com> * add max code init size check in txpool (#739) * Revert "Event based pprof" and update version (#742) * Revert "Event based pprof (#732)" This reverts commit 22fa4033e8fabb51c44e8d2a8c6bb695a6e9285e. 
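The "Only query bor receipt when the query index is equal to # tx in block body" optimization above avoids the expensive ReadBorReceipt / ReadBorTransaction reads for ordinary lookups, since the state-sync ("bor") receipt can only ever sit one past the block's regular transactions. A simplified sketch of that index check; the types and function name are placeholders, not the real core/rawdb API:

    package main

    import "fmt"

    // Receipt is a minimal stand-in type for illustration.
    type Receipt struct{ TxHash string }

    // receiptByIndex returns a normal transaction receipt whenever the index
    // falls inside the block body, and only touches the bor receipt path when
    // the index is exactly the number of regular transactions.
    func receiptByIndex(txReceipts []*Receipt, borReceipt *Receipt, index int) *Receipt {
        if index < len(txReceipts) {
            return txReceipts[index]
        }
        if index == len(txReceipts) && borReceipt != nil {
            return borReceipt
        }
        return nil
    }

    func main() {
        receipts := []*Receipt{{TxHash: "0x01"}, {TxHash: "0x02"}}
        bor := &Receipt{TxHash: "0xbor"}
        fmt.Println(receiptByIndex(receipts, bor, 1).TxHash) // 0x02
        fmt.Println(receiptByIndex(receipts, bor, 2).TxHash) // 0xbor
    }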
* params: update version to 0.3.4-beta3 * packaging/templates: update bor version * internal/ethapi :: Fix : newRPCTransactionFromBlockIndex * fix: remove assignment for bor receipt --------- Co-authored-by: SHIVAM SHARMA Co-authored-by: Pratik Patil Co-authored-by: Jerry Co-authored-by: Daniil Co-authored-by: Arpit Temani Co-authored-by: Evgeny Danienko <6655321@bk.ru> Co-authored-by: Dmitry <46797839+dkeysil@users.noreply.github.com> * Setting up bor to use hosted 18.04 runner as ubuntu provided 18.04 runner is end of life --------- Co-authored-by: SHIVAM SHARMA Co-authored-by: Pratik Patil Co-authored-by: Jerry Co-authored-by: Daniil Co-authored-by: Arpit Temani Co-authored-by: Evgeny Danienko <6655321@bk.ru> Co-authored-by: Dmitry <46797839+dkeysil@users.noreply.github.com> Co-authored-by: Martin Holst Swende Co-authored-by: marcello33 Co-authored-by: Raneet Debnath Co-authored-by: Raneet Debnath <35629432+Raneet10@users.noreply.github.com> Co-authored-by: Alex Co-authored-by: Daniel Jones * core: remove duplicate tests * miner: use get validators by hash in tests --------- Co-authored-by: Daniel Jones Co-authored-by: Will Button Co-authored-by: Will Button Co-authored-by: Daniel Jones <105369507+djpolygon@users.noreply.github.com> Co-authored-by: Pratik Patil Co-authored-by: Arpit Temani Co-authored-by: Jerry Co-authored-by: SHIVAM SHARMA Co-authored-by: Daniil Co-authored-by: Evgeny Danienko <6655321@bk.ru> Co-authored-by: Dmitry <46797839+dkeysil@users.noreply.github.com> Co-authored-by: Martin Holst Swende Co-authored-by: marcello33 Co-authored-by: Raneet Debnath Co-authored-by: Raneet Debnath <35629432+Raneet10@users.noreply.github.com> Co-authored-by: Alex --------- Co-authored-by: Evgeny Danienko <6655321@bk.ru> Co-authored-by: Raneet Debnath Co-authored-by: Arpit Temani Co-authored-by: SHIVAM SHARMA Co-authored-by: Jerry Co-authored-by: Manav Darji Co-authored-by: builder90210 Co-authored-by: Krishna Upadhyaya Co-authored-by: Daniel Jones Co-authored-by: Will Button Co-authored-by: Will Button Co-authored-by: Daniel Jones <105369507+djpolygon@users.noreply.github.com> Co-authored-by: Pratik Patil Co-authored-by: Raneet Debnath <35629432+Raneet10@users.noreply.github.com> Co-authored-by: Didi Co-authored-by: ephess Co-authored-by: Dmitry <46797839+dkeysil@users.noreply.github.com> Co-authored-by: Daniil Co-authored-by: Martin Holst Swende Co-authored-by: Alex * dev: chg: version bump for v0.3.9-beta (#825) * eth/fetcher: if peers never respond, drop them (#837) (#844) * dev: chg: bump version to beta-2 for v0.3.9 (#845) * rm : disable interruptCommit on txsCh select case (#848) * dev: chg: version bump (#849) * fix : go-routine leak in commitInterrupt channel (#851) * fix : go-routine leak in commitInterrupt channel * fix : fix lint * rm : remove t.Parallel() from TestGenerateBlockAndImport tests * fix : minor optimisations in test * fix : BenchmarkBorMining * dev: chg: beta version bump (#853) * dev: chg: bump version to stable * dev: chg: contracts branch * dev: fix: linter * dev: chg: update templum/govulncheck-action to fix execution --------- Co-authored-by: SHIVAM SHARMA Co-authored-by: Pratik Patil Co-authored-by: Manav Darji Co-authored-by: Jerry Co-authored-by: Daniil Co-authored-by: Arpit Temani Co-authored-by: Evgeny Danienko <6655321@bk.ru> Co-authored-by: Dmitry <46797839+dkeysil@users.noreply.github.com> Co-authored-by: Martin Holst Swende Co-authored-by: Raneet Debnath Co-authored-by: Raneet Debnath <35629432+Raneet10@users.noreply.github.com> Co-authored-by: Alex 
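The "fix : go-routine leak in commitInterrupt channel (#851)" entry above addresses a notifier goroutine that could block forever on a channel no one reads anymore. As a generic, hedged illustration of the usual remedy (not the actual miner code), a non-blocking send keeps the goroutine from leaking:

    package main

    import "fmt"

    // notifyInterrupt delivers an interrupt signal if a receiver or buffer slot
    // is available and drops it otherwise, so the caller can never be parked forever.
    func notifyInterrupt(interrupt chan struct{}) bool {
        select {
        case interrupt <- struct{}{}:
            return true
        default:
            return false // would block: skip instead of leaking a goroutine
        }
    }

    func main() {
        ch := make(chan struct{}, 1)
        fmt.Println(notifyInterrupt(ch)) // true: buffered slot available
        fmt.Println(notifyInterrupt(ch)) // false: would block, so it is dropped
    }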
Co-authored-by: Daniel Jones Co-authored-by: builder90210 Co-authored-by: Krishna Upadhyaya Co-authored-by: Will Button Co-authored-by: Will Button Co-authored-by: Daniel Jones <105369507+djpolygon@users.noreply.github.com> Co-authored-by: Didi Co-authored-by: ephess --- .github/ISSUE_TEMPLATE/bug.md | 2 +- .github/matic-cli-config.yml | 17 +- .github/pull_request_template.md | 7 +- .github/workflows/ci.yml | 14 +- .github/workflows/security-ci.yml | 67 ++ .github/workflows/security-sonarqube-ci.yml | 32 + .gitignore | 2 + .golangci.yml | 10 +- .snyk | 41 ++ Makefile | 8 +- README.md | 4 +- SECURITY.md | 181 +---- accounts/abi/bind/bind_test.go | 1 + build/ci.go | 38 +- builder/files/config.toml | 35 +- cmd/faucet/faucet.go | 11 +- cmd/geth/chaincmd.go | 3 + cmd/geth/main.go | 18 + cmd/utils/bor_flags.go | 24 + cmd/utils/flags.go | 3 + common/path.go | 29 + consensus/bor/bor.go | 18 +- consensus/bor/heimdallapp/checkpoint.go | 52 ++ consensus/bor/heimdallapp/client.go | 27 + consensus/bor/heimdallapp/span.go | 86 +++ consensus/bor/heimdallapp/state_sync.go | 64 ++ consensus/bor/valset/validator_set.go | 27 +- core/tests/blockchain_repair_test.go | 2 +- core/tx_pool.go | 16 +- core/tx_pool_test.go | 91 +++ core/types/transaction_signing.go | 36 + core/vm/contracts.go | 1 + docs/cli/bootnode.md | 6 +- docs/cli/debug_pprof.md | 6 +- docs/cli/example_config.toml | 75 +- docs/cli/server.md | 66 +- eth/backend.go | 4 +- eth/downloader/whitelist/service.go | 18 +- eth/downloader/whitelist/service_test.go | 11 +- eth/ethconfig/config.go | 24 +- eth/fetcher/block_fetcher.go | 39 +- eth/filters/bor_api.go | 1 + ethclient/ethclient_test.go | 141 ++-- go.mod | 118 ++- go.sum | 687 +++++++++++++++++- integration-tests/bor_health.sh | 15 + internal/cli/bootnode.go | 23 +- internal/cli/debug_pprof.go | 27 +- internal/cli/debug_test.go | 3 +- internal/cli/dumpconfig.go | 2 + internal/cli/server/chains/developer.go | 4 +- internal/cli/server/command.go | 43 ++ internal/cli/server/config.go | 342 ++++++++- internal/cli/server/flags.go | 204 +++++- internal/cli/server/pprof/pprof.go | 22 + internal/cli/server/proto/server.pb.go | 418 ++++++----- internal/cli/server/proto/server.proto | 6 +- internal/cli/server/proto/server_grpc.pb.go | 16 +- internal/cli/server/server.go | 79 +- internal/cli/server/service.go | 31 +- internal/cli/snapshot.go | 2 +- internal/cli/status.go | 22 +- internal/cli/status_test.go | 42 ++ internal/cli/version.go | 2 +- internal/ethapi/api.go | 25 +- internal/web3ext/web3ext.go | 10 +- metrics/metrics.go | 11 +- miner/fake_miner.go | 4 +- miner/test_backend.go | 619 +++++++++++++++- miner/worker.go | 73 +- miner/worker_test.go | 147 ++-- node/api.go | 24 + p2p/server.go | 31 + .../templates/mainnet-v1/archive/config.toml | 32 +- .../mainnet-v1/sentry/sentry/bor/config.toml | 40 +- .../sentry/validator/bor/config.toml | 40 +- .../mainnet-v1/without-sentry/bor/config.toml | 40 +- packaging/templates/package_scripts/control | 2 +- .../templates/package_scripts/control.arm64 | 2 +- .../package_scripts/control.profile.amd64 | 2 +- .../package_scripts/control.profile.arm64 | 2 +- .../package_scripts/control.validator | 2 +- .../package_scripts/control.validator.arm64 | 2 +- .../templates/testnet-v4/archive/config.toml | 32 +- .../testnet-v4/sentry/sentry/bor/config.toml | 32 +- .../sentry/validator/bor/config.toml | 39 +- .../testnet-v4/without-sentry/bor/config.toml | 32 +- params/version.go | 19 +- rlp/rlpgen/main.go | 13 +- rpc/handler.go | 1 + scripts/getconfig.go | 208 +++--- 
 sonar-project.properties | 2 +
 tests/fuzzers/difficulty/debug/main.go | 11 +-
 tests/fuzzers/les/debug/main.go | 11 +-
 tests/fuzzers/rangeproof/debug/main.go | 11 +-
 tests/fuzzers/snap/debug/main.go | 11 +-
 tests/fuzzers/stacktrie/debug/main.go | 11 +-
 tests/fuzzers/vflux/debug/main.go | 11 +-
 98 files changed, 4144 insertions(+), 874 deletions(-)
 create mode 100644 .github/workflows/security-ci.yml
 create mode 100644 .github/workflows/security-sonarqube-ci.yml
 create mode 100644 .snyk
 create mode 100644 consensus/bor/heimdallapp/checkpoint.go
 create mode 100644 consensus/bor/heimdallapp/client.go
 create mode 100644 consensus/bor/heimdallapp/span.go
 create mode 100644 consensus/bor/heimdallapp/state_sync.go
 create mode 100644 integration-tests/bor_health.sh
 create mode 100644 internal/cli/status_test.go
 create mode 100644 sonar-project.properties

diff --git a/.github/ISSUE_TEMPLATE/bug.md b/.github/ISSUE_TEMPLATE/bug.md
index 7d34216478..8c23f0bd9f 100644
--- a/.github/ISSUE_TEMPLATE/bug.md
+++ b/.github/ISSUE_TEMPLATE/bug.md
@@ -6,7 +6,7 @@ labels: 'type:bug'
 assignees: ''
 ---
-Our support team has aggregated some common issues and their solutions from past which are faced while running or interacting with a bor client. In order to prevent redundant efforts, we would encourage you to have a look at the [FAQ's section](https://docs.polygon.technology/docs/faq/technical-faqs) of our documentation mentioning the same, before filing an issue here. In case of additional support, you can also join our [discord](https://discord.com/invite/zdwkdvMNY2) server
+Our support team has aggregated some common issues and their solutions from past which are faced while running or interacting with a bor client. In order to prevent redundant efforts, we would encourage you to have a look at the [FAQ's section](https://wiki.polygon.technology/docs/faq/technical-faqs/) of our documentation mentioning the same, before filing an issue here. In case of additional support, you can also join our [discord](https://discord.com/invite/zdwkdvMNY2) server