Overview
I'd like to describe a pattern we've been discussing for getting realtime L1 data into an L2, and I'm curious to hear any feedback. The motivating example is getting realtime Chainlink price feeds into an L2. For now, we will assume that this is the only relevant L1 state to retrieve.
By realtime I mean:
the last publication occurred at some L1 block (let's say 1000)
the next publication will be posted at some future L1 block (let's say 2000)
any L2 transaction that is preconfirmed in the meantime will use the state of the most recent L1 block (let's say 1500)
Proposal
L1 block hash feed
there is an L1 publication feed that proves the sequence of all L1 block hashes.
at publication time, the feed will query the latest blockhash and include it as an attribute
The content (in the blob) will be the full list of all L1 block headers since the last one published in the feed
this can be validated by ensuring the corresponding hash chain ends with the expected block hash (sketched below).
any publication that does not pass validation is a no-op.
Anyone following along with this feed can reconstruct the list of all L1 block hashes
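Here is a minimal sketch of that validation rule. The `Header` type, the use of sha256 (real Ethereum headers are RLP-encoded and hashed with keccak-256), and the function names are illustrative assumptions, not the actual implementation:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Header:
    """Toy stand-in for an L1 block header (real headers are RLP-encoded)."""
    parent_hash: bytes
    rest: bytes  # placeholder for the remaining header fields

    def block_hash(self) -> bytes:
        # Real Ethereum hashes the RLP encoding with keccak-256; sha256 keeps
        # this sketch dependency-free.
        return hashlib.sha256(self.parent_hash + self.rest).digest()

def validate_blockhash_publication(last_feed_hash: bytes,
                                   claimed_latest_hash: bytes,
                                   headers: list[Header]) -> bool:
    """Accept the publication only if the headers form an unbroken hash chain
    from the last block hash already in the feed to the hash claimed in the
    publication attribute. A False result means the publication is a no-op."""
    parent = last_feed_hash
    for header in headers:
        if header.parent_hash != parent:
            return False
        parent = header.block_hash()
    return parent == claimed_latest_hash
```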
Chainlink state root feed
there is an L1 publication feed that proves the sequence of L1 Chainlink storage roots
it builds on the L1 block hash feed. You cannot post an L1 Chainlink storage root unless the corresponding L1 block hash already appears in the L1 block hash feed (otherwise the publication is a no-op).
The content (in the blob) will be the full list of all L1 Chainlink storage roots since the last one published in the feed, along with the Merkle proofs to demonstrate they are consistent with the L1 block hashes (see the sketch after this list)
a potential optimisation is to only include Chainlink storage roots that are needed by some rollup. The publisher would need to ensure the relevant proof is published to this feed before publishing the corresponding rollup publication.
Anyone following along with this feed can reconstruct the list of all (relevant) L1 Chainlink storage roots
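A rough sketch of how a publication to this feed could be validated, assuming the block hash feed has already been reconstructed. The `verify_account_proof` callable (which would check a Merkle-Patricia proof of the Chainlink contract's storage root against the state root in the corresponding block header) is taken as a parameter because its implementation is beside the point here; all names are illustrative:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class StorageRootClaim:
    """One claimed Chainlink storage root, tied to a specific L1 block."""
    l1_block_hash: bytes
    chainlink_storage_root: bytes
    proof: bytes  # Merkle proof linking the storage root to that block

def validate_storage_root_publication(
    claims: list[StorageRootClaim],
    known_block_hashes: set[bytes],                       # from the block hash feed
    verify_account_proof: Callable[[bytes, bytes, bytes], bool],
) -> bool:
    """A claim is only accepted if (a) its L1 block hash already appears in the
    block hash feed and (b) its Merkle proof ties the Chainlink storage root to
    that block. Any failure makes the whole publication a no-op."""
    for claim in claims:
        if claim.l1_block_hash not in known_block_hashes:
            return False
        if not verify_account_proof(claim.l1_block_hash,
                                    claim.chainlink_storage_root,
                                    claim.proof):
            return False
    return True
```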
Chainlink opcode
the rollup exposes some mechanism (eg. an opcode or a precompile) to let transactions query a storage value in the L1 Chainlink feed.
when executing a transaction that uses the opcode, the preconfer queries the latest value from the L1 state and uses this as the return value. Note that this value is unrelated to either the publication state root or the anchor transaction. The idea is that if the latest actual L1 block was 1500, the preconfer will use the value from L1 block 1500.
for now, I'm focussing on the basic structure and am ignoring details about how to price the opcode, or whether the user should specify the required storage slots beforehand
the "latest value" in this case implies the value in the L1 block whose timestamp immediately precedes the L2 block timestamp
the publication will include a proof that the claimed value matches the L1 Chainlink storage root feed (sketched below).
any publications that do not pass this check are no-ops.
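To make the flow concrete, here is a sketch of that publication-time check. During preconfirmation the preconfer simply returns the value it reads from the latest L1 block; at publication time every such read has to be proven against the storage root feed. The record format, helper callables, and names are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class OracleRead:
    """One use of the hypothetical Chainlink opcode/precompile in an L2 block."""
    slot: bytes            # storage slot queried in the Chainlink contract
    value: bytes           # value the preconfer returned during execution
    l1_block_hash: bytes   # L1 block whose state the value was read from

def check_oracle_reads(
    reads: list[OracleRead],
    storage_root_for: Callable[[bytes], Optional[bytes]],     # from the storage root feed
    verify_storage_proof: Callable[[bytes, bytes, bytes, bytes], bool],
    proofs: dict[tuple[bytes, bytes], bytes],                  # (block hash, slot) -> proof
) -> bool:
    """Every opcode return value in the publication must be backed by a storage
    proof against a Chainlink storage root already in the feed. If any proof is
    missing or invalid, the publication is a no-op."""
    for read in reads:
        root = storage_root_for(read.l1_block_hash)
        if root is None:
            return False  # that block's storage root was never published to the feed
        proof = proofs.get((read.l1_block_hash, read.slot))
        if proof is None:
            return False
        if not verify_storage_proof(root, read.slot, read.value, proof):
            return False
    return True
```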
L2 timestamp validations
One complication is that the L1 proposer can choose the L2 block timestamp to be any value between 1000 and 2000. In practice, the preconfirmation will create a tight upper bound on the timestamp, so we just need a mechanism to ensure a reasonable lower bound (so the transactions will use the latest L1 Chainlink price and not some previous value). Here are some options.
Preconfirmation requests have a timestamp window
the preconfirmation mechanism itself constrains the preconfer to use the latest timestamp. Deviations are slashed (sketched below)
this seems cleanest to me because there are no other infrastructure modifications and it's also a pretty natural place for users to express their intent.
I don't remember if this is part of the current expected preconfirmation requirement.
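As a sketch of what that constraint could look like, assume the preconfirmation request carries an explicit timestamp window and deviations are provable as a slashable fault (the field names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class PreconfRequest:
    """Hypothetical preconfirmation request with a user-specified timestamp window."""
    tx_hash: bytes
    min_timestamp: int   # e.g. the timestamp of the latest L1 block the user saw
    max_timestamp: int

def is_slashable(request: PreconfRequest, included_block_timestamp: int) -> bool:
    """The preconfer committed to include the transaction in an L2 block whose
    timestamp falls inside the requested window; landing outside it is a fault."""
    return not (request.min_timestamp <= included_block_timestamp <= request.max_timestamp)
```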
L2 enforces timestamp requests
the user specifies a minimum timestamp in a way that the L2 state transition function can recognise
for example, there is a new transaction type with a new field (sketched below)
alternatively, the requested timestamp is encoded in the calldata
I don't like having to modify the L2 but I do like that this is still a universal solution
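A sketch of the new-transaction-type variant, where the state transition function itself refuses to execute a transaction in a block older than the user's requested floor (names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class TimestampedTx:
    """Hypothetical L2 transaction type carrying a minimum block timestamp."""
    min_timestamp: int
    payload: bytes

def is_valid_in_block(tx: TimestampedTx, block_timestamp: int) -> bool:
    """The STF treats the transaction as invalid (or skips it) if the block is
    older than the user's minimum, so it can never execute against a stale
    Chainlink value."""
    return block_timestamp >= tx.min_timestamp
```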
L2 contracts enforce timestamp
the user enforces the timestamp constraint as part of the actual L2 transaction
this could mean they call an entrypoint that checks the specified timestamp (and reverts otherwise)
it could also mean there is a 4337 or other account abstraction account that checks the timestamp as part of the validity conditions (sketched below)
this doesn't require any changes to infrastructure / wallets but it does limit the kinds of transactions that can take advantage of this (at least if they want a guarantee of a minimum timestamp).
depending on the details, it might also require the user to pay the gas costs of invalid transactions.
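For the account-abstraction variant, ERC-4337 already lets an account return a validity window (validAfter/validUntil) from its validation step, so a smart account could set validAfter to the user's minimum timestamp. A sketch of that idea, expressed in Python for consistency with the other sketches rather than as actual contract code:

```python
def validation_window(user_min_timestamp: int) -> tuple[int, int]:
    """Return an ERC-4337-style (validAfter, validUntil) pair: the user op is
    only valid in blocks at or after the user's minimum timestamp. In ERC-4337,
    validUntil = 0 means there is no expiry."""
    valid_after = user_min_timestamp
    valid_until = 0
    return valid_after, valid_until
```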
Objections
Reorgs
One objection that was raised is that reading L1 state in real time might undermine one of the important properties of based rollups - that they reorg with the base chain.
I'm not sure I actually track the objection. It seems to me like the chain still reorgs in the same way
eg. if a publication depends on a Chainlink checkpoint that depends on the wrong L1 block, then the whole thing is invalid anyway
preconfirmations that depend on an invalid L1 state root are void (so the preconfer is not slashed)
this should all be obvious to the proposer for all but the last few blocks at publication time, so they're not going to post a publication that has any massive reorg. They could, of course, reorg out invalid preconfirmations that happened in the middle of the period (eg. at block 1500) before posting the publication.
However, I'm genuinely not sure that I'm tracking the objection, so I'm definitely open to being corrected.
> I don't remember if this is part of the current expected preconfirmation requirement.
Anshu explained that the current design does not include any additional metadata or requirements along with the request. Users submit a regular L2 transaction to the mempool, so they have no mechanism to constrain the timestamp of the L2 block that they find themselves in.