
feat: adding cache functionality in Receiver for unsorted datapoints #3

Closed
wants to merge 7 commits

Conversation

@wei3erHase wei3erHase commented Aug 14, 2024

This PR aims to solve the issue of datapoints arriving out of order: an out-of-order datapoint was ignored, and all subsequent ones became invalid as well. Here's what was happening:

  • DataFeed bridges nonce 1,2,3,4
  • The bridge processes them out of order, and the txs are executed in the order 1, 3, 2, 4
  • Nonce 1 arrives ✅
  • Nonce 3 arrives ❌ (missing nonce 2)
  • Nonce 2 arrives ✅
  • Nonce 4 arrives ❌ (missing nonce 3)
  • Nonce N arrives ❌ (bricked until nonce 3 is re-broadcast)

The feature adds a mechanism in which a rejected nonce is stored on the contract, waiting for the correct nonce to arrive. In the example above:

  • Nonce 1 arrives ✅
  • Nonce 3 arrives ❌ > stores it on cache
  • Nonce 2 arrives ✅
  • Nonce 4 arrives ❌ > reads cache > writes nonce 3 ✅ > writes nonce 4 ✅
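The cached-nonce flow above can be sketched as a small in-memory model. This is an illustrative TypeScript sketch, not the contract's actual interface: the `Receiver` class, `write` method, and `log` array are all hypothetical names used to mirror the behaviour described in the bullets.

```typescript
// Hypothetical in-memory model of the receiver's nonce cache.
// All names here are illustrative, not the contract's API.
class Receiver {
  private lastNonce = 0;                     // highest nonce written so far
  private cache = new Map<number, string>(); // rejected observations, keyed by nonce
  public log: string[] = [];                 // mirrors emitted events, in order

  write(nonce: number, data: string): void {
    // First, flush any cached observations that fill the gap,
    // but only up to the nonce being sent (bounds work per tx).
    let next = this.lastNonce + 1;
    while (next < nonce && this.cache.has(next)) {
      this.log.push(`Added ${next}`);
      this.cache.delete(next);
      this.lastNonce = next;
      next++;
    }
    if (nonce === this.lastNonce + 1) {
      // In order (possibly after draining the cache): write it.
      this.log.push(`Added ${nonce}`);
      this.lastNonce = nonce;
    } else {
      // Still a gap: park the observation instead of discarding it.
      this.cache.set(nonce, data);
      this.log.push(`Cached ${nonce}`);
    }
  }
}

// Replay the example from the PR description:
const r = new Receiver();
r.write(1, "a"); // Added 1
r.write(3, "c"); // Cached 3
r.write(2, "b"); // Added 2 (cached nonce 3 stays until a later tx)
r.write(4, "d"); // Added 3, Added 4
```

Note that writing nonce 2 does not flush the cached nonce 3: each call drains the cache only up to the incoming nonce, which matches the behaviour discussed in the review thread below.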

The PR also deprecates the observation data in the ObservationsAdded event (not needed: the info is already emitted on L1, and we can use (pool, nonce) as the observation id) and adds an ObservationsCached event for the cases where the observation is invalid.

TODO:

  • fix tests

Comment on lines 106 to 111
await expect(tx1).to.emit(allowedDataReceiver, 'ObservationsAdded').withArgs(salt, 1, deployer.address);
await expect(tx2).to.emit(allowedDataReceiver, 'ObservationsCached').withArgs(salt, 3, deployer.address);
await expect(tx2).not.to.emit(allowedDataReceiver, 'ObservationsAdded');
await expect(tx3).to.emit(allowedDataReceiver, 'ObservationsAdded').withArgs(salt, 2, deployer.address);
await expect(tx4).to.emit(allowedDataReceiver, 'ObservationsAdded').withArgs(salt, 3, deployer.address);
await expect(tx4).to.emit(allowedDataReceiver, 'ObservationsAdded').withArgs(salt, 4, deployer.address);
Member

Why delay in writing obs3 when it could be done in tx3?

Member Author

to show the behaviour of observations arriving disordered

Member

Wouldn't writing obs3 in tx3 still show the behavior of disordered observations, since the caching happened in tx2?

Member Author

ahá! yes, that was initially just a result of the design, and then it became definitely intended. The alternative is doing while(true) and relying on the break to exit the loop.

should the cache be fully loaded with new observations (something that may actually happen!), any write tx would revert with out-of-gas

so, instead, this mechanism adds observations only up to the current nonce being sent. If there were 1M observations in the cache, we could process only 0-100, then 100-200, and so on, and the receiver wouldn't be bricked forever.
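That bounded-drain idea can be illustrated with a standalone sketch. The `drainUpTo` helper and the chunk sizes below are hypothetical, assuming the drain limit is simply the nonce being sent; the real contract's gas accounting would differ.

```typescript
// Illustrative: drain a backlog of cached nonces in bounded chunks,
// so one transaction never has to process the whole cache at once.
function drainUpTo(cache: Set<number>, lastNonce: number, sentNonce: number): number {
  let next = lastNonce + 1;
  // Only flush entries up to the nonce being sent; anything beyond
  // waits for a later tx, keeping per-tx work (and gas) bounded.
  while (next <= sentNonce && cache.has(next)) {
    cache.delete(next);
    next++;
  }
  return next - 1; // new lastNonce
}

// Simulate nonces 1..200 already sitting in the cache:
const backlog = new Set<number>(Array.from({ length: 200 }, (_, i) => i + 1));
let last = 0;
last = drainUpTo(backlog, last, 100); // flushes 1..100; 101..200 remain cached
last = drainUpTo(backlog, last, 200); // flushes 101..200; cache now empty
```

Each call does a bounded amount of work, so even a huge backlog is cleared incrementally instead of reverting a single tx with out-of-gas.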

great catch 🙌🏻

@wei3erHase
Member Author

Closed in favour of #4 & #5

@wei3erHase wei3erHase closed this Aug 19, 2024