Run multiple nimbus-eth1 mainnet instances #193

tersec opened this issue Aug 12, 2024 · 36 comments

@tersec

tersec commented Aug 12, 2024

Initially, these don't need to have validators attached to them; they would function as a fourth backing EL in addition to Nethermind, Erigon, and Geth.

To facilitate syncing, the chain data can be provided by a combination of era file syncing and/or a pre-prepared database synced close to the current mainnet head.

@arnetheduck
Member

arnetheduck commented Aug 20, 2024

10 instances each for mainnet / holesky / sepolia - the database takes 2-3 weeks to create, so we'll pre-seed the nodes with a pre-prepared database copy

each instance needs about 300 GB of disk for the state - we should also think about setting it up in such a way that they have access to era1/era stores for historical block data (a single copy shared between the nodes; one way to do this is sketched below)
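One way to realize the shared era copy, sketched with the era directory options that appear later in this thread (paths and the binary location here are placeholders): every nimbus-eth1 instance on a host keeps its own --data-dir for state but points the era options at the same read-only directories, e.g.

build/nimbus --data-dir=/docker/nimbus-eth1-<branch>/data --era1-dir=/docker/era1 --era-dir=/data/era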

@jakubgs
Member

jakubgs commented Aug 20, 2024

From a conversation with Jacek we can start with a setup like this and then grow from there:

  • mainnet - Extend storage on all 7 hosts and add one nimbus-eth1 node on each host, attached to all BNs on it.
  • sepolia - Extend storage on the host and add four nimbus-eth1 nodes, for each of the BNs on it.
  • holesky - Replace Erigon EL nodes with nimbus-eth1 nodes on all 10 erigon-01 hosts.

The priority is on deploying nimbus-eth1 nodes on the mainnet network first.

@yakimant
Member

yakimant commented Aug 28, 2024

nimbus.mainnet has enough space after the re-sync; I will put it on the /docker volume together with geth.

❯ ansible -i ansible/inventory/test nimbus-mainnet-metal -a 'df -h /data /docker' -f1
linux-01.ih-eu-mda1.nimbus.mainnet | CHANGED | rc=0 >>
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        2.9T  1.5T  1.4T  52% /data
/dev/sdc        3.5T  1.4T  1.9T  43% /docker
linux-02.ih-eu-mda1.nimbus.mainnet | CHANGED | rc=0 >>
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        2.9T  1.5T  1.3T  55% /data
/dev/sdc        3.5T  1.4T  1.9T  43% /docker
linux-03.ih-eu-mda1.nimbus.mainnet | CHANGED | rc=0 >>
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        2.9T  950G  1.8T  35% /data
/dev/sdc        3.5T  1.4T  1.9T  43% /docker
linux-04.ih-eu-mda1.nimbus.mainnet | CHANGED | rc=0 >>
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        2.9T  1.1T  1.8T  38% /data
/dev/sdc        3.5T  1.4T  1.9T  43% /docker
linux-05.ih-eu-mda1.nimbus.mainnet | CHANGED | rc=0 >>
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        2.9T  943G  1.8T  34% /data
/dev/sdc        3.5T  1.4T  1.9T  43% /docker
linux-06.ih-eu-mda1.nimbus.mainnet | CHANGED | rc=0 >>
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        2.9T  946G  1.8T  34% /data
/dev/sdc        3.5T  1.4T  1.9T  43% /docker
linux-07.ih-eu-mda1.nimbus.mainnet | CHANGED | rc=0 >>
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        2.9T  1.1T  1.7T  41% /data
/dev/sdc        3.5T  1.4T  2.0T  41% /docker

@yakimant
Member

nimbus-eth1 is running on linux-01.ih-eu-mda1.nimbus.mainnet attached to its beacon nodes.

Here is its config template:
https://github.com/status-im/infra-role-nimbus-eth1/blob/master/templates/nimbus-eth1.service.j2

Looks like it needs some additional configuration regarding syncing (a prepared database or era files).

Found other config options here:
https://github.com/status-im/nimbus-eth1/blob/master/nimbus/config.nim
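The built binary also lists its available options itself, which is a quick way to cross-check the template against the version actually deployed (binary path taken from the service setup used later in this thread):

/docker/nimbus-eth1-mainnet-master/repo/build/nimbus --help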

@yakimant
Member

yakimant commented Aug 28, 2024

We have those era files at the host:

❯ ls -1 /data/era/
mainnet-00000-4b363db9.era
...
mainnet-01198-7fa25a94.era

Shall I point nimbus-eth1 to it with --era-dir /data/era?
Or do I need to put them in data/shared_mainnet_0/era?

@yakimant
Member

yakimant commented Aug 28, 2024

FYI

Beacon node EL stats:
[Screenshot: beacon node EL stats, 2024-08-28 17:27]

Errors in nimbus-eth1 logs (/var/log/service/nimbus-eth1-mainnet-master/service.log):

DBG 2024-08-28 15:25:36.018+00:00 Discovery send failed                      topics="eth p2p discovery" msg="(97) Address family not supported by protocol"
...
ERR 2024-08-28 15:26:39.042+00:00 Unexpected exception in rlpxAccept         topics="eth p2p rlpx" exc=EthP2PError err="Eth handshake for different network"
...
WRN 2024-08-28 15:27:32.303+00:00 Error while handling RLPx message          topics="eth p2p rlpx" peer=Node[37.24.131.128:30306] msg=newBlockHashes err="block announcements disallowed"
...
ERR 2024-08-28 15:28:23.082+00:00 Unexpected exception in rlpxAccept         topics="eth p2p rlpx" exc=EthP2PError err="Eth handshake for different network"
...
WRN 2024-08-28 15:28:29.446+00:00 Error while handling RLPx message          topics="eth p2p rlpx" peer=Node[136.244.57.56:30345] msg=newBlock err="block broadcasts disallowed"

Metrics (curl -sSf http://0:9401/metrics | grep -v '#' | sort):

discv4_routing_table_nodes 8307.0
discv4_routing_table_nodes_created 1724854339.0
nec_import_block_number 0.0
nec_import_block_number_created 1724854339.0
nec_imported_blocks_created 1724854339.0
nec_imported_blocks_total 0.0
nec_imported_gas_created 1724854339.0
nec_imported_gas_total 0.0
nec_imported_transactions_created 1724854339.0
nec_imported_transactions_total 0.0
nim_gc_heap_instance_occupied_bytes{type_name="KeyValuePairSeq[desc_identifiers.RootedVertexID, desc_identifiers.HashKey]"} 2097184.0
nim_gc_heap_instance_occupied_bytes{type_name="KeyValuePairSeq[desc_identifiers.RootedVertexID, desc_structural.VertexRef]"} 1048608.0
nim_gc_heap_instance_occupied_bytes{type_name="KeyValuePairSeq[desc_identifiers.VertexID, KeyedQueueItem[desc_identifiers.VertexID, desc_identifiers.HashKey]]"} 1179680.0
nim_gc_heap_instance_occupied_bytes{type_name="KeyValuePairSeq[eth_types.EthAddress, chain_config.GenesisAccount]"} 1714336.0
nim_gc_heap_instance_occupied_bytes{type_name="KeyValuePairSeq[eth_types.Hash256, desc_structural.VertexRef]"} 1048608.0
nim_gc_heap_instance_occupied_bytes{type_name="Node"} 6927976.0
nim_gc_heap_instance_occupied_bytes{type_name="OrderedKeyValuePairSeq[kademlia.TimeKey, system.int64]"} 1310752.0
nim_gc_heap_instance_occupied_bytes{type_name="seq[byte]"} 10073653.0
nim_gc_heap_instance_occupied_bytes{type_name="seq[OutstandingRequest]"} 946176.0
nim_gc_heap_instance_occupied_bytes{type_name="VertexRef"} 3150992.0
nim_gc_heap_instance_occupied_summed_bytes 34260237.0
nim_gc_mem_bytes_created{thread_id="3337631"} 1724854350.0
nim_gc_mem_bytes{thread_id="3337631"} 81338368.0
nim_gc_mem_occupied_bytes_created{thread_id="3337631"} 1724854350.0
nim_gc_mem_occupied_bytes{thread_id="3337631"} 38001264.0
process_cpu_seconds_total 97.06
process_max_fds 1024.0
process_open_fds 56.0
process_resident_memory_bytes 137035776.0
process_start_time_seconds 1724854339.4
process_virtual_memory_bytes 1152454656.0
rlpx_accept_failure_created{reason=""} 1724854345.0
rlpx_accept_failure_created{reason="AlreadyConnected"} 1724854469.0
rlpx_accept_failure_created{reason="EthP2PError"} 1724854345.0
rlpx_accept_failure_created{reason="MessageTimeout"} 1724854975.0
rlpx_accept_failure_created{reason="P2PInternalError"} 1724855703.0
rlpx_accept_failure_created{reason="UselessPeerError"} 1724854459.0
rlpx_accept_failure_total{reason=""} 298.0
rlpx_accept_failure_total{reason="AlreadyConnected"} 119.0
rlpx_accept_failure_total{reason="EthP2PError"} 131.0
rlpx_accept_failure_total{reason="MessageTimeout"} 4.0
rlpx_accept_failure_total{reason="P2PInternalError"} 1.0
rlpx_accept_failure_total{reason="UselessPeerError"} 43.0
rlpx_accept_success_created 1724854339.0
rlpx_accept_success_total 117.0
rlpx_connected_peers 17.0
rlpx_connected_peers_created 1724854339.0
rlpx_connect_failure_created{reason=""} 1724854418.0
rlpx_connect_failure_created{reason="P2PHandshakeError"} 1724854418.0
rlpx_connect_failure_created{reason="ProtocolError"} 1724854418.0
rlpx_connect_failure_created{reason="RlpxHandshakeTransportError"} 1724854418.0
rlpx_connect_failure_created{reason="TransportConnectError"} 1724854418.0
rlpx_connect_failure_created{reason="UselessRlpxPeerError"} 1724854418.0
rlpx_connect_failure_total{reason=""} 37480.0
rlpx_connect_failure_total{reason="P2PHandshakeError"} 2021.0
rlpx_connect_failure_total{reason="ProtocolError"} 1465.0
rlpx_connect_failure_total{reason="RlpxHandshakeTransportError"} 33292.0
rlpx_connect_failure_total{reason="TransportConnectError"} 546.0
rlpx_connect_failure_total{reason="UselessRlpxPeerError"} 156.0
rlpx_connect_success_created 1724854339.0
rlpx_connect_success_total 180.0

@arnetheduck
Member

Errors in nimbus-eth1 logs:

cc @mjfh can you take a look at this?

See https://github.com/status-im/nimbus-eth2/blob/unstable/docs/logging.md for our logging levels. In particular, remote nodes doing strange things should never result in any logs above debug level: from the point of view of Nimbus, it is "normal" for remote nodes to misbehave, and we should have logic in place that deals with the misbehavior rather than raising the issue to the user via logs. I.e., these are expected conditions (nodes that do strange things exist), so they are not errors, warnings, or even info.

@yakimant
Member

Exporting era1 files can be done like this:

sudo geth --datadir=/docker/geth-mainnet/node/data  --mainnet export-history /docker/era1 0 15537393

where 15537393 is the last block before the merge.
See also:

@yakimant
Member

yakimant commented Sep 9, 2024

Shortcut for era1 files suggested by Jacek:
https://era1.ethportal.net

Downloaded to:
linux-01.ih-eu-mda1.nimbus.mainnet:/docker/era1

The checksums match the checksum file they provide.
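A minimal way to run that verification, assuming the provided checksum list was saved next to the files as checksums.txt (the file name is an assumption):

cd /docker/era1 && sha256sum -c checksums.txt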

@yakimant
Member

yakimant commented Sep 12, 2024

Import from era files should be done like this, I guess:

/docker/nimbus-eth1-mainnet-master/repo/build/nimbus import --era1-dir=/docker/era1 --era-dir=/data/era

At the current speed it should take ~1h to import.
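One simple way to see how far an import got is to (re)start the node afterwards and ask it for its block number over the local RPC, the same way it is done further down in this thread:

./rpc.sh eth_blockNumber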

@yakimant
Member

RPC API doesn't show much:

❯ ./rpc.sh eth_syncing
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "startingBlock": "0x0",
    "currentBlock": "0x0",
    "highestBlock": "0x0"
  }
}
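For context, rpc.sh is presumably just a thin JSON-RPC wrapper; the equivalent raw call against the node's HTTP endpoint (port 8546 per the service flags shown further down) would be:

curl -s -X POST -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"eth_syncing","params":[]}' \
  http://127.0.0.1:8546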

@tersec
Author

tersec commented Sep 12, 2024

Syntactically, this is a valid, if minimalistic, response: https://ethereum.org/en/developers/docs/apis/json-rpc/#eth_syncing

https://github.com/status-im/nimbus-eth1/blob/178d77ab310a79f3fa3a350d3546b607145a6aab/nimbus/core/chain/forked_chain.nim#L356-365 sets highestBlock:

proc setHead(c: ForkedChainRef,
             headHash: Hash256,
             number: BlockNumber) =
  # TODO: db.setHead should not read from db anymore
  # all canonical chain marking
  # should be done from here.
  discard c.db.setHead(headHash)

  # update global syncHighest
  c.com.syncHighest = number

but https://github.com/status-im/nimbus-eth1/blob/master/nimbus/nimbus_import.nim never calls setHead(...).

startingBlock is arguably correct.

`currentBlock` is internally `syncCurrent`, and is updated from `nimbus import` by its `persistBlocks(...)` call, but these `syncCurrent`/`syncHighest`/`syncStart` variables basically only reflect the syncing happening at that time, nothing per se that the `nimbus import` command did. When Nimbus is run after the `nimbus import`, those are simply never changed from their defaults, because no syncing is happening.

However, it reports syncing because

  server.rpc("eth_syncing") do() -> SyncingStatus:
    ## Returns SyncObject or false when not syncing.
    # TODO: make sure we are not syncing
    # when we reach the recent block
    let numPeers = node.peerPool.connectedNodes.len
    if numPeers > 0:
      var sync = SyncObject(
        startingBlock: w3Qty com.syncStart,
        currentBlock : w3Qty com.syncCurrent,
        highestBlock : w3Qty com.syncHighest
      )
      result = SyncingStatus(syncing: true, syncObject: sync)
    else:
      result = SyncingStatus(syncing: false)

which isn't really correct. Having peers does not imply syncing.

So the issue is basically that it's not syncing, but falsely showing that it is syncing.

That it's not syncing when connected to an EL is itself a bug, but that's currently an expected, known issue being addressed. I'm not sure I've seen the falsely-showing-syncing behavior reported before.

@tersec
Author

tersec commented Sep 12, 2024

status-im/nimbus-eth1#2618

@tersec
Author

tersec commented Sep 12, 2024

@yakimant
Member

yakimant commented Sep 13, 2024

Progress

  • geth full sync: 9.9M of 20.7M (48%), 10d 19h
  • eth1 era/era1 import: 6.4M of 20.7M (31%), 22h

@yakimant
Member

yakimant commented Sep 16, 2024

Stopped the era/era1 import (it would have taken ~10 days) and used the state files from Jacek instead.

After starting the node, these appear in the logs:

DBG 2024-09-16 12:43:17.912+00:00 No finalized block stored in database, reverting to base
INF 2024-09-16 12:43:17.912+00:00 Database initialized                       base="(E7955DAE5D39FACC2C01EB33C3609F791E9938B70D0B32FD50873DD902DD3E00, 20475504)" finalized="(E7955DAE5D39FACC2C01EB33C3609F791E9938B70D0B32FD50873DD902DD3E00, 20475504)" head="(E7955DAE5D39FACC2C01EB33C3609F791E9938B70D0B32FD50873DD902DD3E00, 20475504)"
INF 2024-09-16 12:43:17.913+00:00 RLPx listener up                           topics="eth p2p" self=enode://f1ed95f5c159f1a20d358d60632b742e8ce2e432c69dc370bed303736d68577c2517be15c868d5a218883cf4186de9ef4953c7f5acd09dcacfa97b6aeeb113da@194.33.40.70:30304
WRN 2024-09-16 12:43:17.913+00:00 Engine API disabled, the node will not respond to consensus client updates (enable with `--engine-api`)
...
DBG 2024-09-16 12:52:10.781+00:00 Ignoring peer already in k.neighboursCallbacks topics="eth p2p discovery" peer=Node[54.185.121.78:30308]
NTC 2024-09-16 12:52:28.367+00:00 Wrong msg mac from                         topics="eth p2p discovery" a=108.61.185.123:7093:7093
DBG 2024-09-16 12:56:58.659+00:00 Bonding failed, already waiting for pong   topics="eth p2p discovery" n=Node[131.153.232.203:30320]

@yakimant
Member

yakimant commented Sep 16, 2024

BTW, this is how we run nimbus-eth1 currently:

❯ /docker/nimbus-eth1-mainnet-master/repo/build/nimbus \
    --network=mainnet \
    --data-dir='/docker/nimbus-eth1-mainnet-master/data/shared_mainnet_0' \
    --nat=extip:194.33.40.70 \
    --log-level=DEBUG \
    --listen-address=0.0.0.0 \
    --tcp-port=30304 \
    --udp-port=30304 \
    --max-peers=160 \
    --discovery=V4 \
    --jwt-secret=/docker/nimbus-eth1-mainnet-master/data/jwt.hex \
    --rpc=true \
    --ws=false \
    --graphql=false \
    --http-address=127.0.0.1 \
    --http-port=8546 \
    --rpc-api=eth,debug \
    --engine-api=false \
    --engine-api-ws=false \
    --metrics=true \
    --metrics-address=0.0.0.0 \
    --metrics-port=9401 \
    --era1-dir=/docker/era1 \
    --era-dir=/data/era

@yakimant
Member

yakimant commented Sep 16, 2024

❯ ./rpc.sh eth_blockNumber
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": "0x1386e70"
}

❯ ./rpc.sh eth_syncing
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": false
}
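For reference, 0x1386e70 is 20475504 in decimal, i.e. the same head block as in the Database initialized log line above.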

@tersec
Author

tersec commented Sep 16, 2024

    --engine-api=false \
    --engine-api-ws=false \

Turn these on (set them to true).
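In terms of the service flags quoted above, that amounts to (sketch; any engine API address/port overrides are left out):

    --engine-api=true \
    --engine-api-ws=true \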

@yakimant
Member

Done!

BTW, more interesting messages:

WRN 2024-09-17 09:05:17.889+00:00 getUncles()                                topics="core_db" ommersHash=3EF3400295E8F4DFE6AED323009018E85DADBB1A4D960FC5CEE0709D484F6E97 error="KvtNotFound(Kvt, ctx=get, error=GetNotFound)"

@tersec
Author

tersec commented Sep 17, 2024

Done!

BTW, more interesting messages:

WRN 2024-09-17 09:05:17.889+00:00 getUncles()                                topics="core_db" ommersHash=3EF3400295E8F4DFE6AED323009018E85DADBB1A4D960FC5CEE0709D484F6E97 error="KvtNotFound(Kvt, ctx=get, error=GetNotFound)"

Do you know which block was involved here (if you have a block hash or number for example)? The issue is that only PoW blocks have uncles. PoS (post-merge) blocks don't. But, by that token, PoS blocks are supposed to have an ommersHash indicating this.

So the question is: is it a PoW block which might have real uncles, with a bug in finding them, or is the bug in detecting that a PoS block should not have uncles?

@yakimant
Member

Hm, I'm not sure how to help, but you can check the logs yourself:
linux-01.ih-eu-mda1.nimbus.mainnet:/var/log/service/nimbus-eth1-mainnet-master/

There are quite a lot of such warnings: ~22k yesterday, ~9k today already.

@yakimant
Member

Changed the port for the BNs so they use the engine port; it should be working now.

Also made a PR for review: #199
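For reference, on the nimbus-eth2 side this boils down to pointing the BN's EL connection at the engine API endpoint together with the shared JWT secret; a minimal sketch, where the URL and port are assumptions rather than the actual fleet values:

nimbus_beacon_node \
    --el=http://127.0.0.1:8551 \
    --jwt-secret=/docker/nimbus-eth1-mainnet-master/data/jwt.hex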

@tersec
Author

tersec commented Sep 19, 2024

Hm, I'm not sure how to help, but you can check the logs yourself: linux-01.ih-eu-mda1.nimbus.mainnet:/var/log/service/nimbus-eth1-mainnet-master/

There are quite a lot of such warnings: ~22k yesterday, ~9k today already.

Non-issue/overzealous logging: status-im/nimbus-eth1#2639

@jakubgs
Member

jakubgs commented Sep 19, 2024

After discussing the layout with Dustin and Jacek we came to the conclusion that setups where multiple BNs share one EL are unsupported and should be avoided. Instead we will run two ELs per host, and half of the BNs will run without one. The stable and unstable branches should run with an EL, and the others should run without any (--no-el).

In addition to that we want EL diversity, so we'll start with a split similar to nimbus.holesky fleet:

  • linux-01.ih-eu-mda1.nimbus.mainnet -> neth1-01.ih-eu-mda1.nimbus.mainnet
  • linux-02.ih-eu-mda1.nimbus.mainnet -> neth1-02.ih-eu-mda1.nimbus.mainnet
  • linux-03.ih-eu-mda1.nimbus.mainnet -> neth1-03.ih-eu-mda1.nimbus.mainnet
  • linux-04.ih-eu-mda1.nimbus.mainnet -> erigon-01.ih-eu-mda1.nimbus.mainnet
  • linux-05.ih-eu-mda1.nimbus.mainnet -> erigon-02.ih-eu-mda1.nimbus.mainnet
  • linux-06.ih-eu-mda1.nimbus.mainnet -> geth-01.ih-eu-mda1.nimbus.mainnet
  • linux-07.ih-eu-mda1.nimbus.mainnet -> geth-02.ih-eu-mda1.nimbus.mainnet

This should probably be implemented using a flag in the layout file, the same way we enable validator clients with vc = true.

Is this correct @tersec ?

@yakimant
Member

yakimant commented Oct 9, 2024

@tersec need your review of new mainnet layout:
https://github.com/status-im/infra-nimbus/pull/206/files#diff-3f8ce3d2e84c759a7dd04d47c2e9fe41ba16bbbcf0098de7bf0bd7673945043e

It's changed considerably:

  • stable, testing, 2x unstable -> stable, testing, unstable, libp2p (same as holesky)
  • linux-07 had 4x libp2p before
  • removed extra_flags: {'debug-enable-yamux': true}, not sure we need it
  • shall I keep public_api on geth nodes?
  • there are max_peers: 10000 and open_libp2p_ports: false, do we need them?

@tersec
Author

tersec commented Oct 11, 2024

@tersec need your review of new mainnet layout: https://github.com/status-im/infra-nimbus/pull/206/files#diff-3f8ce3d2e84c759a7dd04d47c2e9fe41ba16bbbcf0098de7bf0bd7673945043e

It's changed considerably:

* stable, testing, 2x unstable -> stable, testing, unstable, libp2p (same as holesky)

* linux-07 had 4x libp2p before

* removed `extra_flags: {'debug-enable-yamux': true}`, not sure we need it

* shall I keep `public_api` on `geth` nodes?

* there are `max_peers: 10000` and `open_libp2p_ports: false`, do we need it?
  • We don't need --debug-enable-yamux, no (see Deprecate mplex, ethereum/consensus-specs#3866 (comment)), and we don't have QUIC support yet, though that would have been worth testing if we did.

  • stable, testing, 2x unstable -> stable, testing, unstable, libp2p (same as holesky) is a useful change.

  • 4x libp2p was probably overkill; maybe there was a reason, but sure.

  • I can't speak to the exact motivation for the very high peer count, but it's a good thing to test that it doesn't break on one node.

  • I'm not sure what open_libp2p_ports does. It's not directly a Nimbus flag.

  • We need some public API, ideally on a non-optimistic node (i.e. one with a functioning EL, so Geth/Nethermind/Erigon; nimbus-eth1 not yet), e.g., to support https://eth-clients.github.io/checkpoint-sync-endpoints/ (see the sketch below).
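A quick way to sanity-check a candidate public-API BN before exposing it is the standard beacon API syncing endpoint (host and port are placeholders; 5052 is nimbus-eth2's default REST port):

curl -s http://127.0.0.1:5052/eth/v1/node/syncing

A suitable node should report "is_optimistic": false and "el_offline": false in the response.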

@jakubgs
Member

jakubgs commented Oct 15, 2024

I can speak to the open_libp2p_ports setting. It was something most probably requested by Zahary or Jacek. The intention was to not open LibP2P ports on the firewall to compare how well discovery handles it and how quickly nodes gain peers.

I've decided to drop it as nobody has talked about this in ages and it made the setup more complex.

@yakimant
Member

  1. Removed open_libp2p_ports: false (it was there to test closed libp2p ports).
  2. Moved the unstable public_api to a beacon node with geth. What do we do with the testing public_api? We don't have a testing BN with an EL.
  3. Moved the testing max_peers: 10000 to the stable BN with geth.

@jakubgs
Member

jakubgs commented Oct 15, 2024

That's a good question: does the BN used for the public API endpoint need to have an EL? If so, we might have to make an exception.

@tersec
Author

tersec commented Oct 16, 2024

That's a good question: does the BN used for the public API endpoint need to have an EL? If so, we might have to make an exception.

It should, yes

@arnetheduck
Member

how well discovery handles it and how quickly nodes gain peers.

This is still an important test that we should keep, since many of our users don't have a public IP and we want to catch regressions.

@yakimant
Member

Ok, I will keep this test.

Previously it was for the unstable and testing nodes (different hosts) with a geth EL attached.
Any preference for a node now?
You can get an overview of fleet here:
https://github.com/status-im/infra-nimbus/blob/104a36a2f20ff1b5b73ead7069da7e89cca90d4c/ansible/vars/layout/mainnet.yml

@jakubgs
Member

jakubgs commented Oct 23, 2024

Ok, I will keep this test.

I think we can achieve it not by changing the already existing rules but by adding extra rules that deny access to specific ports.
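A minimal sketch of such an extra deny rule with plain iptables, using 9000 (the nimbus-eth2 default libp2p TCP port) purely as an example value rather than the fleet's actual port layout:

iptables -A INPUT -p tcp --dport 9000 -j DROP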

@yakimant
Member

yakimant commented Oct 23, 2024

Merged the PR:
#206

The layout is mostly changed.

What's left:
