Real time L1 data feed #57

Open
nikeshnazareth opened this issue Feb 28, 2025 · 1 comment
Labels
needs discussion This issue still needs discussion before implementing

Comments

@nikeshnazareth (Collaborator) commented Feb 28, 2025

Overview

I'd like to describe a pattern we've been discussing for getting realtime L1 data into an L2, and I'm curious to hear any feedback. The motivating example is getting realtime Chainlink price feeds into an L2. For now, we will assume that this is the only relevant L1 state to retrieve.

By realtime I mean:

  • the last publication occurred at some L1 block (let's say 1000)
  • the next publication will be posted at some future L1 block (let's say 2000)
  • any L2 transaction that is preconfirmed in the meantime will use the state of the most recent L1 block (let's say 1500)

Proposal

L1 block hash feed

  • there is an L1 publication feed that proves the sequence of all L1 block hashes.
  • Each publication will query the latest L1 block hash and include it as an attribute
  • The content (in the blob) will be the full list of all L1 block headers since the last one published in the feed
    • this can be validated by ensuring the corresponding hash chain ends with the expected block hash.
    • any publication that does not pass validation is a no-op.
  • Anyone following along with this feed can reconstruct the list of all L1 block hashes
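As a rough model of the validation rule in the bullets above, here is a Python sketch. The `Header` structure and hashing are simplified stand-ins of my own (a real implementation would hash RLP-encoded L1 headers with keccak256), but the shape of the check is the same: the headers must form an unbroken hash chain from the last published hash to the expected tip, and anything else is a no-op.

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Header:
    """Simplified L1 block header; real headers carry many more fields."""
    parent_hash: bytes
    number: int

def header_hash(h: Header) -> bytes:
    # Stand-in for keccak256 over the RLP-encoded header.
    return hashlib.sha256(h.parent_hash + h.number.to_bytes(8, "big")).digest()

def validate_publication(headers, last_known_hash, expected_tip_hash):
    """Check that `headers` extend `last_known_hash` and end at `expected_tip_hash`.

    Returns the list of new block hashes if valid. An invalid publication
    is a no-op, modelled here as returning None."""
    prev = last_known_hash
    hashes = []
    for h in headers:
        if h.parent_hash != prev:
            return None  # broken hash chain -> no-op
        prev = header_hash(h)
        hashes.append(prev)
    return hashes if prev == expected_tip_hash else None
```

Anyone replaying valid publications through `validate_publication` accumulates the same list of L1 block hashes, which is the property the feed relies on.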

Chainlink state root feed

  • there is an L1 publication feed that proves the sequence of L1 Chainlink storage roots
  • it builds on the L1 block hash feed. You cannot post an L1 Chainlink storage root unless the corresponding L1 block hash already appears in the L1 block hash feed (otherwise the publication is a no-op).
  • The content in the blob will be the full list of all L1 Chainlink storage roots since the last one published in the feed, along with the Merkle proofs to demonstrate they are consistent with the L1 block hashes
    • a potential optimisation is to only include Chainlink storage roots that are needed by some rollup. The publisher would need to ensure the relevant proof is published to this feed before publishing the corresponding rollup publication.
  • Anyone following along with this feed can reconstruct the list of all (relevant) L1 Chainlink storage roots

Chainlink opcode

  • the rollup exposes some mechanism (eg. an opcode or a precompile) to let transactions query a storage value in the L1 Chainlink feed.
  • when executing a transaction that uses the opcode, the preconfer queries the latest value from the L1 state and uses this as the return value. Note that this value is unrelated to either the publication state root or the anchor transaction. The idea is that if the latest actual L1 block was 1500, the preconfer will use the value from L1 block 1500.
    • for now, I'm focussing on the basic structure and am ignoring details about how to price the opcode, or whether the user should specify the required storage slots beforehand
  • the "latest value" in this case implies the value in the L1 block whose timestamp immediately precedes the L2 block timestamp
  • the publication will include a proof that the claimed value matches the L1 chainlink storage root feed.
    • any publications that do not pass this check are no-ops.
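The "latest value" rule above (the L1 block whose timestamp immediately precedes the L2 block timestamp) can be captured with a binary search over the known L1 block timestamps. A small Python sketch, with names of my own choosing:

```python
import bisect

def latest_l1_block(l1_timestamps, l2_timestamp):
    """Index of the latest L1 block whose timestamp strictly precedes
    `l2_timestamp`.

    `l1_timestamps` is the sorted list of L1 block timestamps known from
    the L1 block hash feed. Returns None if no L1 block precedes the
    L2 block."""
    i = bisect.bisect_left(l1_timestamps, l2_timestamp)
    return i - 1 if i > 0 else None
```

The preconfer would then read the Chainlink storage value from the block at that index and later prove it against the Chainlink storage root feed.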

L2 timestamp validations

One complication is that the L1 proposer can choose the L2 block timestamp to be any value between the timestamps of L1 blocks 1000 and 2000. In practice, the preconfirmation will create a tight upper bound on the timestamp, so we just need a mechanism to ensure a reasonable lower bound (so the transactions will use the latest L1 Chainlink price and not some previous value). Here are some options.

Preconfirmation requests have a timestamp window

  • the preconfirmation mechanism itself constrains the preconfer to use the latest timestamp. Deviations are slashed
  • this seems cleanest to me because there are no other infrastructure modifications and it's also a pretty natural place for users to express their intent.
  • I don't remember if this is part of the current expected preconfirmation requirement.

L2 enforces timestamp requests

  • the user specifies a minimum timestamp in a way that the L2 state transition function can recognise
    • for example, there is a new transaction type with a new field
    • alternatively, the requested timestamp is introduced into calldata
  • I don't like having to modify the L2 but I do like that this is still a universal solution

L2 contracts enforce timestamp

  • the user enforces the L2 timestamp constraint as part of the actual L2 transaction
  • this could mean they call an entrypoint that checks the specified timestamp (and reverts otherwise)
  • it could also mean there is a 4337 or other account abstraction account that checks the timestamp as part of the validity conditions
  • this doesn't require any changes to infrastructure / wallets but it does limit the kinds of transactions that can take advantage of this (at least if they want a guarantee of a minimum timestamp).
  • depending on the details, it might also require the user to pay the gas costs of invalid transactions.
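The entrypoint variant of this option is just a freshness guard in front of the user's call. In a real contract this would be a `require(block.timestamp >= minTimestamp)` in Solidity; the Python sketch below (all names are my own) models the same behaviour, including the "revert" that the user may end up paying gas for:

```python
class StaleBlockError(Exception):
    """Raised (i.e. the call reverts) when the L2 block is older than the
    user demanded."""

def guarded_call(min_timestamp, block_timestamp, inner):
    """Model of an L2 entrypoint contract that enforces a minimum timestamp.

    `inner` stands in for the user's actual call, which is only forwarded
    when the L2 block timestamp meets the user's lower bound."""
    if block_timestamp < min_timestamp:
        raise StaleBlockError(
            f"block timestamp {block_timestamp} < required {min_timestamp}"
        )
    return inner()
```

A 4337-style account would perform the same comparison during validation instead, so the transaction never enters a block at all rather than reverting inside one.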

Objections

Reorgs

  • One objection that was raised is that reading L1 state in real time might undermine one of the important properties of based rollups - that they reorg with the base chain.
  • I'm not sure I actually track the objection. It seems to me like the chain still reorgs in the same way:
    • eg. if a publication depends on a chainlink checkpoint that depends on the wrong L1 block, then the whole thing is invalid anyway
    • preconfirmations that depend on an invalid L1 state root are void (so the preconfer is not slashed)
    • this should all be obvious to the proposer for all but the last few blocks at publication time, so they're not going to post a publication that has any massive reorg. They could, of course, reorg out invalid preconfirmations that happened in the middle of the period (eg. at block 1500) before posting the publication.
  • However, I'm genuinely not sure that I'm tracking the objection, so I'm definitely open to being corrected.
@nikeshnazareth (Collaborator, Author) commented:

Regarding "I don't remember if this is part of the current expected preconfirmation requirement":

Anshu explained that the current design does not include any additional metadata or requirements along with the request. Users submit a regular L2 transaction to the mempool, so they have no mechanism to constrain the timestamp of the L2 block that they find themselves in.

@ggonzalez94 ggonzalez94 added the needs discussion This issue still needs discussion before implementing label Feb 28, 2025