feat: adding cache functionality in Receiver for unsorted datapoints #3
Conversation
test/e2e/oracle-sidechain.spec.ts
```typescript
await expect(tx1).to.emit(allowedDataReceiver, 'ObservationsAdded').withArgs(salt, 1, deployer.address);
await expect(tx2).to.emit(allowedDataReceiver, 'ObservationsCached').withArgs(salt, 3, deployer.address);
await expect(tx2).not.to.emit(allowedDataReceiver, 'ObservationsAdded');
await expect(tx3).to.emit(allowedDataReceiver, 'ObservationsAdded').withArgs(salt, 2, deployer.address);
await expect(tx4).to.emit(allowedDataReceiver, 'ObservationsAdded').withArgs(salt, 3, deployer.address);
await expect(tx4).to.emit(allowedDataReceiver, 'ObservationsAdded').withArgs(salt, 4, deployer.address);
```
Why the delay in writing `obs3` when it could be done in `tx3`?
To show the behaviour of observations arriving out of order.
Wouldn't writing `obs3` in `tx3` still show the behavior of disordered observations, since the caching already happened in `tx2`?
Aha! Yes, that was initially just a result of the design, and then it became definitely intended. The alternative is doing `while(true)` and relying on the `break` to exit the loop.
Should the cache be fully loaded with new observations (something that may actually happen!), any write tx would revert with out-of-gas.
So instead, this mechanism adds observations only up to the current nonce being sent. If there were 1M observations in the cache, we could process only 0-100, then 100-200, ..., and the receiver wouldn't be bricked forever.
Great catch 🙌🏻
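The bounded flush described above can be sketched in TypeScript (the actual implementation lives in the Solidity receiver; `ObservationCache`, `receive`, and `writeObservation` are hypothetical names for illustration, not the PR's API):

```typescript
// Hypothetical sketch of the bounded cache-flush: out-of-order nonces are
// cached, and each incoming tx flushes consecutive cached observations only
// up to its own nonce, so work per call stays bounded.

type Observation = { nonce: number; data: string };

class ObservationCache {
  private cache = new Map<number, string>();
  private nextNonce = 1;         // next nonce expected to be written
  public written: number[] = []; // nonces written, in arrival order

  // Called once per incoming tx carrying an observation.
  receive(obs: Observation): void {
    this.cache.set(obs.nonce, obs.data);
    // Flush consecutive cached observations, but only up to the nonce being
    // sent in this tx. Unlike a while(true)/break walk over the whole cache,
    // the loop is bounded by the incoming nonce, so a cache holding 1M
    // entries can never make a write tx revert with out-of-gas.
    while (this.nextNonce <= obs.nonce && this.cache.has(this.nextNonce)) {
      this.writeObservation(this.nextNonce);
      this.cache.delete(this.nextNonce);
      this.nextNonce++;
    }
  }

  private writeObservation(nonce: number): void {
    this.written.push(nonce); // stands in for emitting ObservationsAdded
  }
}
```

Replaying the test sequence (nonces 1, 3, 2, 4) reproduces the expected events: tx2 only caches, tx3 writes nonce 2 but leaves 3 cached (3 exceeds the bound), and tx4 raises the bound to 4 and flushes both 3 and 4.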
This PR aims to solve the issue of data arriving in unsorted order, which caused the datapoint to be ignored and invalidated all subsequent ones. Here's what was happening:
The feature includes a mechanism in which, when a nonce is rejected, the contract stores the observation, waiting for the correct nonce to arrive. In the example above:
The PR also deprecates the observation data from the `ObservationsAdded` event (not needed, since the info is already emitted in L1, and we can use `(pool, nonce)` as the observation id) and adds an `ObservationsCached` event for the cases where the observation is invalid.
TODO: