
feat: support for streamed computations and their proofs #1209

Merged 2 commits into main on Mar 18, 2024
Conversation

@arthurpaulino (Contributor) commented on Mar 10, 2024

Introduce the Op::Recv LEM operator, which awaits data on the channel terminal at LEM interpretation time.

This operator is used to implement the logic for the StreamIn continuation, which receives the argument to be applied to the (chaining) callable object.

The result will be available once the (new) StreamOut continuation is reached. When that happens, the computation can be resumed by supplying two pieces of data through the channel:

  • A flag to tell if the machine should stutter while in StreamOut
  • The next argument to be consumed in the incoming StreamIn state

Note: the second message should not be sent if the machine is set to stutter with the first message.

There is a test showing that we're now able to construct proofs of computations that grow incrementally, resuming from where we had stopped: we don't need to restart the folding process from the beginning.

Optimizing

From the second commit in this PR:

optimize: skip new `StreamIn` occurrences

Prepare the next call directly from `StreamOut` without having to
go through `StreamIn` again.

Since `StreamIn` only occurs in the very first frame after this
optimization, it was renamed to `StreamStart`. And `StreamOut`,
which now serves as the literal pause point while supporting both
input and output, was renamed to `StreamPause`.

This optimization makes starting and resuming streams consume the same number of Lurk iterations.

@huitseeker (Contributor) left a comment:

This is mostly in good shape, thanks! I left minor comments re: simplifying some of the trait manipulation.

src/proof/mod.rs (outdated), comment on lines 167 to 174:

        type BaseRecursiveSNARK;

        /// Associated proof type, which must implement `RecursiveSNARKTrait`
    -   type RecursiveSnark: RecursiveSNARKTrait<F, Self::Frame, PublicParams = Self::PublicParams>;
    +   type RecursiveSnark: RecursiveSNARKTrait<
    +       F,
    +       Self::Frame,
    +       PublicParams = Self::PublicParams,
    +       BaseRecursiveSNARK = Self::BaseRecursiveSNARK,
    +   >;
@huitseeker (Contributor) commented:

I think this can be made simpler by removing the BaseRecursiveSNARK associated type, and constraining the argument of prove instead:

    fn prove(
        &self,
        pp: &Self::PublicParams,
        steps: Vec<Self::Frame>,
        store: &'a Store<F>,
        init: Option<<Self::RecursiveSnark as RecursiveSNARKTrait<F, Self::Frame>>::BaseRecursiveSNARK>,
    ) -> Result<(Self::RecursiveSnark, Vec<F>, Vec<F>, usize), ProofError> 

@arthurpaulino (Contributor, author) replied:

We need BaseRecursiveSNARK in order to define

    /// Associated proof type, which must implement `RecursiveSNARKTrait`
    type RecursiveSNARK: RecursiveSNARKTrait<
        F,
        Self::Frame,
        PublicParams = Self::PublicParams,
        BaseRecursiveSNARK = Self::BaseRecursiveSNARK,
    >;

@huitseeker (Contributor) replied:

You can just remove that constraint instead.

@arthurpaulino (Contributor, author) replied:

Thanks!

        let a = bindings.get_ptr(a)?;
        println!("{}", a.fmt_to_string_simple(&scope.store));
    }
    Op::Recv(_) => todo!("not supported yet"),
@huitseeker (Contributor) commented:
Please open an issue.

    @@ -433,6 +436,6 @@ impl<'a, F: CurveCycleEquipped, C: Coprocessor<F> + 'a> Prover<'a, F> for NovaPr
        let eval_config = self.folding_mode().eval_config(self.lang());
        let frames =
            C1LEM::<'a, F, C>::build_frames(expr, env, store, limit, &eval_config, ch_terminal)?;
    -   self.prove_from_frames(pp, &frames, store)
    +   self.prove_from_frames(pp, &frames, store, None)
@huitseeker (Contributor) commented:

While we're here, we may want to open an issue to hoist

    fn lang(&self) -> &Arc<Lang<F, C>>;
    fn prove_from_frames(
        &self,
        pp: &Self::PublicParams,
        frames: &[Self::Frame],
        store: &'a Store<F>,
        init: Option<<Self::RecursiveSnark as RecursiveSNARKTrait<F, Self::Frame>>::BaseRecursiveSNARK>,
    ) -> Result<(Self::RecursiveSnark, Vec<F>, Vec<F>, usize), ProofError>;

into the `Prover` trait; that would save us some Nova/SuperNova duplication.

@arthurpaulino added this pull request to the merge queue on Mar 18, 2024
Merged via the queue into main with commit 54e9292 on Mar 18, 2024
11 checks passed
@arthurpaulino deleted the ap/stream branch on March 18, 2024 at 16:49