feat: support for streamed computations and their proofs #1209
Conversation
Force-pushed from 7f5f605 to a09f04d.
This is mostly in good shape, thanks! I left minor comments re: simplifying some of the trait manipulation.
src/proof/mod.rs (outdated):

```diff
+    type BaseRecursiveSNARK;
+
     /// Associated proof type, which must implement `RecursiveSNARKTrait`
-    type RecursiveSnark: RecursiveSNARKTrait<F, Self::Frame, PublicParams = Self::PublicParams>;
+    type RecursiveSnark: RecursiveSNARKTrait<
+        F,
+        Self::Frame,
+        PublicParams = Self::PublicParams,
+        BaseRecursiveSNARK = Self::BaseRecursiveSNARK,
+    >;
```
I think this can be made simpler by removing the `BaseRecursiveSNARK` associated type and constraining the argument of `prove` instead:

```rust
fn prove(
    &self,
    pp: &Self::PublicParams,
    steps: Vec<Self::Frame>,
    store: &'a Store<F>,
    init: Option<<Self::RecursiveSnark as RecursiveSNARKTrait<F, Self::Frame>>::BaseRecursiveSNARK>,
) -> Result<(Self::RecursiveSnark, Vec<F>, Vec<F>, usize), ProofError>
```
We need `BaseRecursiveSNARK` in order to define:

```rust
/// Associated proof type, which must implement `RecursiveSNARKTrait`
type RecursiveSNARK: RecursiveSNARKTrait<
    F,
    Self::Frame,
    PublicParams = Self::PublicParams,
    BaseRecursiveSNARK = Self::BaseRecursiveSNARK,
>;
```
You can just remove that constraint instead.
Thanks!
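[Editor's note] To make the resolution of the exchange above concrete for readers outside the thread: the pattern the reviewer suggests is reaching an inner trait's associated type through a fully qualified path instead of mirroring it as an associated type on the outer trait. Here is a minimal, self-contained sketch of that pattern with invented names (`RecursiveSnarkLike`, `ProverLike`, etc. are illustrative, not Lurk's actual API):

```rust
// The inner trait owns the associated type.
trait RecursiveSnarkLike {
    type Base;
    fn extend(self, base: Option<Self::Base>) -> Self;
}

// The outer trait does NOT mirror `Base`; `prove` names it through a
// fully qualified path, so no equality constraint is needed.
trait ProverLike {
    type Snark: RecursiveSnarkLike;

    fn prove(
        &self,
        init: Option<<Self::Snark as RecursiveSnarkLike>::Base>,
    ) -> Self::Snark;
}

struct Snark(u64);

impl RecursiveSnarkLike for Snark {
    type Base = u64;
    fn extend(self, base: Option<Self::Base>) -> Self {
        Snark(self.0 + base.unwrap_or(0))
    }
}

struct Prover;

impl ProverLike for Prover {
    type Snark = Snark;
    fn prove(&self, init: Option<u64>) -> Snark {
        // Resume from a previously accumulated base, if one is supplied.
        Snark(0).extend(init)
    }
}

fn main() {
    assert_eq!(Prover.prove(Some(7)).0, 7);
    assert_eq!(Prover.prove(None).0, 0);
}
```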
```diff
                 let a = bindings.get_ptr(a)?;
                 println!("{}", a.fmt_to_string_simple(&scope.store));
             }
+            Op::Recv(_) => todo!("not supported yet"),
```
Please open an issue.
```diff
@@ -433,6 +436,6 @@ impl<'a, F: CurveCycleEquipped, C: Coprocessor<F> + 'a> Prover<'a, F> for NovaPr
         let eval_config = self.folding_mode().eval_config(self.lang());
         let frames =
             C1LEM::<'a, F, C>::build_frames(expr, env, store, limit, &eval_config, ch_terminal)?;
-        self.prove_from_frames(pp, &frames, store)
+        self.prove_from_frames(pp, &frames, store, None)
```
While we're here, we may want to open an issue to hoist

```rust
fn lang(&self) -> &Arc<Lang<F, C>>;

fn prove_from_frames(
    &self,
    pp: &Self::PublicParams,
    frames: &[Self::Frame],
    store: &'a Store<F>,
    init: Option<<Self::RecursiveSnark as RecursiveSNARKTrait<F, Self::Frame>>::BaseRecursiveSNARK>,
) -> Result<(Self::RecursiveSnark, Vec<F>, Vec<F>, usize), ProofError>;
```

into the `Prover` trait; that would save us some Nova/SuperNova duplication.
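[Editor's note] As a rough illustration of the hoisting proposed above (hypothetical, heavily simplified names; the real trait carries lifetimes and many more bounds), declaring the shared signature once on the common trait lets callers drive either backend generically:

```rust
// One shared method declaration on the common trait instead of
// near-identical declarations on the Nova and SuperNova provers.
trait ProverLike {
    type Frame;
    type Proof;

    // Declared once here; each backend only supplies an implementation.
    fn prove_from_frames(&self, frames: &[Self::Frame]) -> Self::Proof;
}

struct NovaProver;
struct SuperNovaProver;

impl ProverLike for NovaProver {
    type Frame = u8;
    type Proof = usize;
    fn prove_from_frames(&self, frames: &[u8]) -> usize {
        frames.len() // stand-in for the actual folding work
    }
}

impl ProverLike for SuperNovaProver {
    type Frame = u8;
    type Proof = usize;
    fn prove_from_frames(&self, frames: &[u8]) -> usize {
        frames.len()
    }
}

// Callers can now be written once, generically over the backend.
fn drive<P: ProverLike>(p: &P, frames: &[P::Frame]) -> P::Proof {
    p.prove_from_frames(frames)
}

fn main() {
    assert_eq!(drive(&NovaProver, &[1, 2, 3]), 3);
    assert_eq!(drive(&SuperNovaProver, &[1, 2]), 2);
}
```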
Force-pushed from a09f04d to 0dac83e.
Introduce the `Op::Recv` LEM operator, which awaits data to be received by the channel terminal at LEM interpretation time. This operator is used to implement the logic for the `StreamIn` continuation, which receives the argument to be applied to the (chaining) callable object. The result will be available once the (new) `StreamOut` continuation is reached. When that happens, the computation can be resumed by supplying two pieces of data through the channel:

* A flag to tell whether the machine should stutter while in `StreamOut`
* The next argument to be consumed in the incoming `StreamIn` state

Note: the second message should not be sent if the machine is set to stutter with the first message.

There is a test to show that we're now able to construct proofs of computations that grow incrementally by starting from where we had stopped. We don't need to start the folding process from the beginning.
Prepare the next call directly from `StreamOut` without having to go through `StreamIn` again. Since `StreamIn` only occurs in the very first frame after this optimization, it was renamed to `StreamStart`. And `StreamOut`, which now serves as the literal pause point while supporting both input and output, was renamed to `StreamPause`.
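[Editor's note] As a rough model of the resume protocol described in these two commits, here is a hedged sketch using standard library channels and invented message types; the actual implementation goes through Lurk's channel terminal at LEM interpretation time, and the real pause point is the `StreamPause` continuation:

```rust
use std::sync::mpsc;
use std::thread;

// The two-message resume protocol: at a pause point, the driver first
// sends a stutter flag; only when it is `false` does an argument follow.
enum Msg {
    Stutter(bool), // first message: should the machine keep pausing?
    Arg(u64),      // second message: next argument (only if not stuttering)
}

fn main() {
    let (tx, rx) = mpsc::channel::<Msg>();

    // The "machine": waits at the pause point for resume messages.
    let machine = thread::spawn(move || {
        let mut acc = 0u64;
        loop {
            match rx.recv().unwrap() {
                Msg::Stutter(true) => continue, // stutter: no Arg follows
                Msg::Stutter(false) => match rx.recv().unwrap() {
                    Msg::Arg(a) => {
                        acc += a; // consume the next argument
                        if acc >= 10 {
                            break;
                        }
                    }
                    Msg::Stutter(_) => unreachable!("Stutter(false) must be followed by Arg"),
                },
                Msg::Arg(_) => unreachable!("Arg must be preceded by Stutter(false)"),
            }
        }
        acc
    });

    // The driver resumes the computation twice, stuttering once in between.
    tx.send(Msg::Stutter(false)).unwrap();
    tx.send(Msg::Arg(4)).unwrap();
    tx.send(Msg::Stutter(true)).unwrap(); // no second message after a stutter
    tx.send(Msg::Stutter(false)).unwrap();
    tx.send(Msg::Arg(6)).unwrap();

    assert_eq!(machine.join().unwrap(), 10);
}
```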
Force-pushed from 0dac83e to 26749d4.
Optimizing
From the second commit in this PR:
This optimization makes starting and resuming streams consume the same number of Lurk iterations.