Merge branch 'main' into explore-a-running-aggregate
akremstudy committed Dec 11, 2024
2 parents ec36079 + 1f84b0c commit 85fef60
Showing 8 changed files with 234 additions and 8 deletions.
13 changes: 13 additions & 0 deletions README.md
@@ -43,6 +43,19 @@ Here you will find a detailed breakdown for how to join a chain as a node and ho

Run the chain locally in a docker container, powered by [local-ic](https://github.com/strangelove-ventures/interchaintest/tree/main/local-interchain)

Install heighliner:
```sh
make get-heighliner
```
Create image:
```sh
make local-image
```
Install local interchain:
```sh
make get-localic
```
Start the local-devnet:
```sh
make local-devnet
```
16 changes: 8 additions & 8 deletions adr/adr001 - chain size limitations.md
@@ -11,6 +11,7 @@
- 2024-04-02: formatting
- 2024-04-01: clarity
- 2024-08-03: progress update
- 2024-12-06: update

## Context

@@ -22,11 +23,8 @@ This ADR is meant to go over the limits relating to decisions that affect the si

How fast does the chain grow in size? The initial tests show overall disk storage increasing about 3GB a day. The difference between the size of the chain and total storage is about 6-8GB. For example, it has been observed that the total storage being used showed as 18GB while the size of the chain was 10GB. The difference seems to be largely attributed to the log files. As we move forward on setting up the test nodes, validators, and reporters, the log files will be pruned periodically. To ensure we are able to view the logs if something goes wrong, there will be two logs: one historical and one current. The historical log will be deleted daily, after which the current log is renamed to become the historical file and a new current log is created.
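The rotation scheme described above can be sketched as a small daily script (a sketch only: the log directory and file names here are illustrative assumptions, not taken from the repo):

```sh
#!/bin/bash
# Daily log rotation sketch (illustrative paths, not from the repo):
# delete yesterday's historical log, rename current -> historical,
# then start a fresh current log.
LOG_DIR="${LOG_DIR:-$HOME/.layer/logs}"
CURRENT="$LOG_DIR/current.log"
HISTORICAL="$LOG_DIR/historical.log"

mkdir -p "$LOG_DIR"
rm -f "$HISTORICAL"              # drop the old historical log
if [ -f "$CURRENT" ]; then
  mv "$CURRENT" "$HISTORICAL"    # current becomes historical
fi
: > "$CURRENT"                   # new, empty current log
```

In practice this would be run once a day (e.g. from cron) so at most two days of logs are kept on disk.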

What measures / designs are in place for pruning state? The chain will also be pruned periodically. Because disputes have a window of 21 days, that is the minimum history the chain needs to keep. The team will keep archive nodes (and anyone is welcome to keep one as well) with state sync. State sync is necessary to allow other nodes to join the network without having to fully sync back to genesis. This will make it efficient for nodes, validators, and reporters to join the network as well as keep the storage requirements lower to become part of the network.
How big the chain is as a whole is up to the individual node operator. The default is for the chain to be pruned periodically. Because disputes have a window of 21 days, that is the minimum history the chain needs to keep. The team will keep archive nodes (and anyone is welcome to keep one as well) with state sync. State sync is necessary to allow other nodes to join the network without having to fully sync back to genesis. This will make it efficient for nodes, validators, and reporters to join the network as well as keep the storage requirements lower to become part of the network. Ideally you do come in and validate the chain from genesis, and Tellor will maintain a commitment for people to be able to do so.
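As a back-of-envelope check of what the 21-day dispute window implies for pruning depth (assuming a ~1s block time, which is just the `timeout_commit` value used in the local config scripts; real block times will differ):

```sh
#!/bin/bash
# Minimum number of blocks a pruning node must retain to cover the
# 21-day dispute window. The 1s block time is an assumption taken
# from the local timeout_commit setting; mainnet timing may differ.
DISPUTE_DAYS=21
BLOCK_TIME_SECONDS=1
MIN_BLOCKS=$(( DISPUTE_DAYS * 24 * 60 * 60 / BLOCK_TIME_SECONDS ))
echo "minimum blocks to retain: $MIN_BLOCKS"   # 1814400 at 1s blocks
```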

Should we consider a data availability layer? Should we assume no one needs data or verification past the pruning timeframe?
Using a data availability layer is still an open question.


### Blocksize limits

@@ -50,12 +48,14 @@ We have currently opted for implementing signing off on bridge data via vote ext
- What is the size limit for VoteExtension? Currently estimated at about 4MB.
- How many signatures can we add to vote extensions (queryId's aggregated x validators needed to hit 2/3 consensus)?

If we don't use vote extensions:
- How many signatures can we fit into a block (i.e. store them in the next block)?
- Should lanes be implemented? What is a good balance between data reports, transfers, and bridge signatures?
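For the vote-extension sizing questions above, a rough capacity estimate can be sketched as follows (all per-entry sizes are illustrative assumptions: a 64-byte signature, a 20-byte validator address, and a 32-byte queryId; real encodings add overhead, so treat the result as an upper bound):

```sh
#!/bin/bash
# Back-of-envelope: how many (validator, queryId, signature) entries fit
# in a ~4MB vote extension. Per-entry sizes are assumptions.
BUDGET_BYTES=$(( 4 * 1024 * 1024 ))   # ~4MB vote extension estimate
SIG_BYTES=64                          # secp256k1 signature (r || s)
ADDR_BYTES=20                         # validator address
QUERYID_BYTES=32                      # queryId (32-byte hash)
ENTRY_BYTES=$(( SIG_BYTES + ADDR_BYTES + QUERYID_BYTES ))
MAX_ENTRIES=$(( BUDGET_BYTES / ENTRY_BYTES ))
echo "max signature entries: $MAX_ENTRIES"
```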

## Alternate approaches to state growth

### Use a DA layer or long term storage
A DA layer was decided against because it hurts finality of the chain, in the sense that most DA solutions are relatively slower than our target block time. If you have to wait 12 seconds on Ethereum for a block to be finalized (best case scenario), this can limit chains looking to use the data. Additionally, a data storage network for long term use may be beneficial, but as an oracle network, old data is not necessarily critical: most oracle data should be consumed relatively quickly and can be requested again if needed.

Ultimately this route (a DA or long-term data layer) may become an option as technology such as zk data proofs for bridging, preconfirmations for speed, and the base security/decentralization of the chain improves.


### Cap number of reporters (like validators)

You could cap the number of reporters at the total level. The problem is that we would then be forcing anyone who wants to report to stake a large amount or tip a validator who potentially doesn't care about their small data point (the LINK problem of no one supporting your illiquid coin). If you want to use it purely optimistically, you shouldn't need to worry about having too much stake.
3 changes: 3 additions & 0 deletions adr/adr1003 - time based rewards eligibility.md
@@ -11,6 +11,7 @@
- 2024-04-02: clarity
- 2024-04-05: clarity/spelling
- 2024-08-03: clarity
- 2024-12-06: bridge deposits

## Context

@@ -22,6 +23,8 @@ b) provides a heartbeat for the system in the absence of tips (reporters are the

The issue in just distributing inflationary rewards to all reported data is that there becomes an incentive to report more (unneeded) data in order to increase the amount of rewards given to your reporter. For instance, if you have 10 reporters (equal weight) and they all report for BTC/USD, then they would split the inflationary rewards (if they have unequal weight, it would be distributed based upon reporting weight). The problem is what happens when one of those parties reports for a query that only they support. For calculation purposes, let's say they don't just do it for one, but report for 9 new queries that only they support. If the inflation is split based on total reported queries, they had 9 reports (all ones only they support) and all other reporters (equal weight) also had 9 (just for BTC/USD). In this scenario, if you split the time-based reward by weight given, the attacker would get 50% of the rewards. In order to prevent this, we only give inflationary rewards to cycle list queries (queries that have been voted on by governance that everyone should support at a base level).
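One way to read the 50% scenario above is per reporting round: the attacker submits 9 reports for queries only they support while each of the 9 honest reporters submits 1 BTC/USD report. A quick sketch (the per-round framing is an interpretation, not spelled out in the original):

```sh
#!/bin/bash
# Worked example: if inflationary rewards were split by raw report count,
# an attacker reporting 9 self-serving queries per round would capture
# half the rewards against 9 honest reporters submitting 1 report each.
HONEST_REPORTERS=9
HONEST_REPORTS_EACH=1
ATTACKER_REPORTS=9
TOTAL_REPORTS=$(( HONEST_REPORTERS * HONEST_REPORTS_EACH + ATTACKER_REPORTS ))
ATTACKER_SHARE_PCT=$(( 100 * ATTACKER_REPORTS / TOTAL_REPORTS ))
echo "attacker share: ${ATTACKER_SHARE_PCT}%"   # 50%
```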

Note that bridge deposits are also part of the cycle list (reporters report deposits on the Ethereum bridge contract).

![ADR1003: rewards](./graphics/adr1003.png)

## Alternative Approaches
@@ -21,6 +21,10 @@ In order to parse validator signatures on EVM chains, Tellor validators need to

The original idea was to have validators submit a transaction in each block after finalized oracle data. The issue with this approach is that the proposer for any given block has control over which transactions get included. This means that they could censor signatures from certain validators, and it would be impossible to tell whether a validator was censored or simply failed to submit the transaction, which would require off-chain monitoring. Additionally, size issues still exist with transactions: even though block size is generally much bigger than 4MB, storing each signature as a transaction is much larger in aggregate (when considering chain state size growth). The transaction method would also force validators to pay gas on signature transactions and compete for space in each block with non-bridge signature transactions (e.g. data submissions). This problem could be addressed by implementing lanes from Skip Protocol. However, at this moment, we have decided on using vote extensions and will be testing the size limitations and experimenting with data compression techniques.

### use bls signature aggregations

This option could definitely be researched more: use BLS signature aggregation to reduce the gas costs of Solidity verification. There is higher complexity on the Tendermint side, and the ability to reuse Blobstream logic was key for the decision in the token bridge. For users, this may be a great option if they operate on chains where gas costs are a limitation.

### use zk method, no signatures

A future option (that celestia is also taking) is to completely abandon external signatures and opt for zk methods. This is the long term plan, however current zk methods are so novel that relying on them would be more akin to experimentation than actual robust usage. Additionally, proving times for most of these methods is still prohibitively slow for many oracle use cases and may also add a centralization vector if advanced hardware is required.
7 changes: 7 additions & 0 deletions daemons/reporter/client/reporter_monitors.go
@@ -21,7 +21,14 @@ func (c *Client) MonitorCyclelistQuery(ctx context.Context, wg *sync.WaitGroup)
		if err != nil {
			// log error
			c.logger.Error("getting current query", "error", err)
			continue
		}

		if querymeta == nil {
			c.logger.Error("QueryMeta is nil")
			continue
		}

		mutex.RLock()
		committed := commitedIds[querymeta.Id]
		mutex.RUnlock()
61 changes: 61 additions & 0 deletions layer_scripts/configure_layer_linux.sh
@@ -0,0 +1,61 @@
#!/bin/bash

# clear the terminal
clear

# Stop execution if any command fails
set -e

# set variables in your .bashrc before starting this script!
source ~/.bashrc

export LAYER_NODE_URL=tellorlayer.com
export TELLORNODE_ID=5ca2c0eccb54e907ba474ce3b6827077ae40ba53
export KEYRING_BACKEND="test"
export PEERS="[email protected]:26656,[email protected]:26656,[email protected]:26656"

echo "Change denom to loya in config files..."
sed -i -E 's/[0-9]+stake/1loya/g' ~/.layer/config/app.toml

echo "Set Chain Id to layer in client config file..."
sed -i 's/^chain-id = .*$/chain-id = "layer"/g' ~/.layer/config/client.toml

# Modify timeout_commit in config.toml for node
echo "Modifying timeout_commit in config.toml for node..."
sed -i 's/timeout_commit = "5s"/timeout_commit = "1s"/' ~/.layer/config/config.toml

# Open up node to outside traffic
echo "Open up node to outside traffic"
sed -i 's/^laddr = "tcp:\/\/127.0.0.1:26656"/laddr = "tcp:\/\/0.0.0.0:26656"/g' ~/.layer/config/config.toml

sed -i 's/^address = "tcp:\/\/localhost:1317"/address = "tcp:\/\/0.0.0.0:1317"/g' ~/.layer/config/app.toml

# Modify cors to accept *
echo "Modify cors to accept *"
sed -i 's/^cors_allowed_origins = \[\]/cors_allowed_origins = \["\*"\]/g' ~/.layer/config/config.toml

# enable unsafe cors
echo "Enable unsafe cors"
sed -i 's/^cors_allowed_origins = \[\]/cors_allowed_origins = \["\*"\]/g' ~/.layer/config/app.toml
sed -i 's/^enable-unsafe-cors = false/enable-unsafe-cors = true/g' ~/.layer/config/app.toml
sed -i 's/^enabled-unsafe-cors = false/enabled-unsafe-cors = true/g' ~/.layer/config/app.toml

# Modify keyring-backend in client.toml for node
echo "Modifying keyring-backend in client.toml for node..."
sed -i 's/^keyring-backend = "os"/keyring-backend = "'$KEYRING_BACKEND'"/g' ~/.layer/config/client.toml

rm -f ~/.layer/config/genesis.json
# get genesis file from running node's rpc
echo "Getting genesis from running node..."
curl -s "$LAYER_NODE_URL:26657/genesis" | jq '.result.genesis' > ~/.layer/config/genesis.json

# set initial seeds / peers
echo "Running Tellor node id: $TELLORNODE_ID"
sed -i 's/seeds = ""/seeds = "'$PEERS'"/g' ~/.layer/config/config.toml
sed -i 's/persistent_peers = ""/persistent_peers = "'$PEERS'"/g' ~/.layer/config/config.toml


echo "layer has been configured in its home folder!"
61 changes: 61 additions & 0 deletions layer_scripts/configure_layer_mac.sh
@@ -0,0 +1,61 @@
#!/bin/bash

# clear the terminal
clear

# Stop execution if any command fails
set -e

# set variables in your .zshrc before starting this script!
source ~/.zshrc

export LAYER_NODE_URL=tellorlayer.com
export TELLORNODE_ID=5ca2c0eccb54e907ba474ce3b6827077ae40ba53
export KEYRING_BACKEND="test"
export PEERS="[email protected]:26656,[email protected]:26656,[email protected]:26656"

echo "Change denom to loya in config files..."
sed -i '' -E 's/[0-9]+stake/1loya/g' ~/.layer/config/app.toml

echo "Set Chain Id to layer in client config file..."
sed -i '' 's/^chain-id = .*$/chain-id = "layer"/g' ~/.layer/config/client.toml

# Modify timeout_commit in config.toml for node
echo "Modifying timeout_commit in config.toml for node..."
sed -i '' 's/timeout_commit = "5s"/timeout_commit = "1s"/' ~/.layer/config/config.toml

# Open up node to outside traffic
echo "Open up node to outside traffic"
sed -i '' 's/^laddr = "tcp:\/\/127.0.0.1:26656"/laddr = "tcp:\/\/0.0.0.0:26656"/g' ~/.layer/config/config.toml

sed -i '' 's/^address = "tcp:\/\/localhost:1317"/address = "tcp:\/\/0.0.0.0:1317"/g' ~/.layer/config/app.toml

# Modify cors to accept *
echo "Modify cors to accept *"
sed -i '' 's/^cors_allowed_origins = \[\]/cors_allowed_origins = \["\*"\]/g' ~/.layer/config/config.toml

# enable unsafe cors
echo "Enable unsafe cors"
sed -i '' 's/^cors_allowed_origins = \[\]/cors_allowed_origins = \["\*"\]/g' ~/.layer/config/app.toml
sed -i '' 's/^enable-unsafe-cors = false/enable-unsafe-cors = true/g' ~/.layer/config/app.toml
sed -i '' 's/^enabled-unsafe-cors = false/enabled-unsafe-cors = true/g' ~/.layer/config/app.toml

# Modify keyring-backend in client.toml for node
echo "Modifying keyring-backend in client.toml for node..."
sed -i '' 's/^keyring-backend = "os"/keyring-backend = "'$KEYRING_BACKEND'"/g' ~/.layer/config/client.toml

rm -f ~/.layer/config/genesis.json
# get genesis file from running node's rpc
echo "Getting genesis from running node..."
curl -s "$LAYER_NODE_URL:26657/genesis" | jq '.result.genesis' > ~/.layer/config/genesis.json

# set initial seeds / peers
echo "Running Tellor node id: $TELLORNODE_ID"
sed -i '' 's/seeds = ""/seeds = "'$PEERS'"/g' ~/.layer/config/config.toml
sed -i '' 's/persistent_peers = ""/persistent_peers = "'$PEERS'"/g' ~/.layer/config/config.toml


echo "layer has been configured in its home folder!"
77 changes: 77 additions & 0 deletions local_devnet/upgrade_test.sh
@@ -0,0 +1,77 @@
#!/bin/bash

# THIS SCRIPT CAN BE USED TO TEST A CHAIN UPGRADE WHILE RUNNING THE LOCAL DEVNET
# BE SURE TO SET "expedited": false FOR THE PROPOSAL.
# THE LOCAL DEVNET HAS A 15s VOTING PERIOD.
# RUN THIS SCRIPT AS SOON AS THE DOCKER CONTAINERS START UP.
# Sleep times may need to be adjusted depending on the power of your computer. (faster computers go faster)

# Stop execution if any command fails
set -e

# use docker ps to get the container id hashes
echo "setting variables"
export ETH_RPC_URL="https://sepolia.infura.io/v3/53ba4f713bf940fb87a58280912231ab"
export TOKEN_BRIDGE_CONTRACT="0xFC1C57F1E466605e3Dd40840bC3e7DdAa400528c"
export upgrade_binary_path="/Users/sloetter/projects/layer/local_devnet/v2.0.0-audit/layerd"
export terminal="/Users/sloetter/projects/layer/local_devnet/v2.0.0-audit/layerd"

# Get all container IDs into an array
container_ids=($(docker ps -q))

# automatically sets a CONTAINER_ variable for all containers
# (for running docker exec)
for i in "${!container_ids[@]}"; do
varname="CONTAINER_$((i + 1))"
export "$varname=${container_ids[i]}"
echo "Exported $varname=${container_ids[i]}"
done

# optionally view the logs in the terminal:
# mac os:
# osascript -e "tell application \"Terminal\" to do script \"docker logs -f $CONTAINER_1\""
# osascript -e "tell application \"Terminal\" to do script \"docker logs -f $CONTAINER_3\""
# desktop linux with gnome:
# gnome-terminal -- bash -c "docker logs -f $CONTAINER_1; exec bash"
# gnome-terminal -- bash -c "docker logs -f $CONTAINER_NAME; exec bash"

# copy proposal to node1 container and submit proposal
echo "copying the proposal to CONTAINER_1"
docker cp ./proposal.json $CONTAINER_1:/bin/

echo "proposing the upgrade"
docker exec $CONTAINER_1 layerd tx gov submit-proposal /bin/proposal.json --from validator --chain-id layer-1 --home /var/cosmos-chain/layer-1 --keyring-backend test --fees 510loya --yes

# (optionally) check if proposal is live
# docker exec $CONTAINER_1 layerd query gov proposals
# wait a bit
echo "voting in the next block..."
sleep 5

# vote on proposal
echo "voting on the upgrade proposal"
docker exec $CONTAINER_1 layerd tx gov vote 1 yes --from validator --chain-id layer-1 --home /var/cosmos-chain/layer-1 --keyring-backend test --fees 500loya --yes
docker exec $CONTAINER_2 layerd tx gov vote 1 yes --from validator --chain-id layer-1 --home /var/cosmos-chain/layer-1 --keyring-backend test --fees 500loya --yes
docker exec $CONTAINER_3 layerd tx gov vote 1 yes --from validator --chain-id layer-1 --home /var/cosmos-chain/layer-1 --keyring-backend test --fees 500loya --yes
docker exec $CONTAINER_4 layerd tx gov vote 1 yes --from validator --chain-id layer-1 --home /var/cosmos-chain/layer-1 --keyring-backend test --fees 500loya --yes

echo "making reporters in the next block..."
sleep 5

# create 2 reporters to sanity check that reporting works before and after the upgrade
echo "creating two reporters to test that reporting works before / after upgrade"
docker exec $CONTAINER_3 layerd tx reporter create-reporter "2000000" "10000000" --from validator --home /var/cosmos-chain/layer-1 --keyring-dir /var/cosmos-chain/layer-1 --keyring-backend test --chain-id layer-1 --fees 500loya --yes
docker exec $CONTAINER_4 layerd tx reporter create-reporter "200000" "1000000" --from validator --home /var/cosmos-chain/layer-1 --keyring-dir /var/cosmos-chain/layer-1 --keyring-backend test --chain-id layer-1 --fees 500loya --yes

# wait for chain to stop
sleep 30

#copy new binary into each container
docker cp $upgrade_binary_path $CONTAINER_1:/bin/
docker cp $upgrade_binary_path $CONTAINER_2:/bin/
docker cp $upgrade_binary_path $CONTAINER_3:/bin/
docker cp $upgrade_binary_path $CONTAINER_4:/bin/

# all done!
echo "Done!"
echo "Restart the docker containers via Docker Desktop to verify that the upgrade was successful."
