Comparing changes

base repository: gavofyork/graypaper
base: v0.5.2
head repository: gavofyork/graypaper
compare: main
Showing with 627 additions and 680 deletions.
  1. +1 −196 README.md
  2. +1 −1 VERSION
  3. +16 −0 preamble.tex
  4. +5 −5 text/accounts.tex
  5. +27 −16 text/accumulation.tex
  6. +15 −12 text/definitions.tex
  7. +24 −25 text/discussion.tex
  8. +1 −1 text/erasure_coding.tex
  9. +1 −1 text/header.tex
  10. +2 −2 text/intro.tex
  11. +12 −12 text/merklization.tex
  12. +2 −0 text/notation.tex
  13. +3 −3 text/overview.tex
  14. +188 −112 text/pvm.tex
  15. +264 −235 text/pvm_invocations.tex
  16. +22 −24 text/reporting_assurance.tex
  17. +8 −4 text/serialization.tex
  18. +1 −1 text/utilities.tex
  19. +34 −30 text/work_packages_and_reports.tex
197 changes: 1 addition & 196 deletions README.md
@@ -1,202 +1,7 @@
# Graypaper: The JAM Specification
# Gray Paper: The JAM Specification

The description and formal specification of the Jam protocol, a potential successor to the Polkadot Relay chain.

Build with xelatex.

https://graypaper.com/

## Remaining near-term

### Finesse
- [ ] Make all subscript names capitalized
- [ ] Ensure all definitions are referenced
- [ ] Link and integrate to Bandersnatch RingVRF references (Davide/Syed) IN-PROGRESS
- [ ] Remove any "TODOs" in text
- [ ] Macrofy everything
- [ ] Limit number of extrinsics in a WP.

### Final PVM
- [ ] 64-bit PVM
- [ ] Gas pricing
- [ ] Merkle reads in terms of nodes traversed.
- [ ] Non-linear gas for export/import host calls
- [ ] No pages mappable in first 64 KB

### Final DA
- [ ] Migrate formalization & explanation:
- [ ] guaranteeing-specific stuff into relevant section
- [ ] assurance-specific stuff into relevant section
- [ ] auditing-specific stuff into relevant section

### Guaranteeing & Auditing
- [ ] Specify announcement signatures
- [ ] Specify how to build perspective on other validators with announcements

### Discussion and Conclusions/Further Work
- [x] Security assumptions: redirect to ELVES paper
- [ ] Creating a parachains service: further work (RFC for upgrade perhaps)
- [ ] Key differences
- [ ] limited size of Work Output vs unlimited candidate receipt
- [ ] Laissez-faire on Work Items vs requirement for valid transition
- [ ] Hermit relay (staking &c is on system chains)
- [ ] Supporting liveness
- [ ] Supporting *MP
- [ ] No need for UMP/DMP
- [ ] Compare with danksharding v1
- [ ] Deeper discussion: cost & latency comparison with RISC0-VM and the latest ZK stuff.
- [ ] Include full calculations for bandwidth requirements.

## Stuff before 1.0

### Final networking protocol
- [ ] Consider a simple network protocol needed for M1/M2 and a production protocol for M3+
- [ ] Block distribution via EC and proactive-chunk-redistribution
- [ ] Guarantor-guarantor handover
- [ ] Star-shaped Point-to-point extrinsic distribution
- [ ] Mixnet for ticket submission

## Bring together sub-protocols
- [ ] Better integration to Grandpa paper
- [ ] Better description of Beefy
- [ ] Better integration to Bandersnatch RingVRF.

## Ideas to consider

### Work Packages
At present all WorkItems can succeed or fail independently. Instead we should be able to specify co-dependency criteria, so that if one fails, both fail. This should be respected through to accumulation, whereby an accumulator can commit to accumulating the `WorkResult` iff there is a signal from the other accumulator that the result has been accumulated there.
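
A minimal sketch of how such co-dependency might be resolved before accumulation; the `WorkItem`/`ItemOutcome` types, the `depends_on` field and the index-based grouping are hypothetical illustrations, not part of the specification:

```rust
// Hypothetical sketch: co-dependent work items within a work package.
// An item only remains successful if every item it depends on also succeeded.

#[derive(Clone, Copy, PartialEq)]
enum ItemOutcome {
    Success,
    Failure,
}

struct WorkItem {
    outcome: ItemOutcome,
    // Indices (within the same package) of items this one is co-dependent with.
    depends_on: Vec<usize>,
}

/// Collapse independent outcomes into co-dependent outcomes.
fn resolve_codependency(items: &[WorkItem]) -> Vec<ItemOutcome> {
    items
        .iter()
        .map(|item| {
            let deps_ok = item
                .depends_on
                .iter()
                .all(|&i| items[i].outcome == ItemOutcome::Success);
            if deps_ok && item.outcome == ItemOutcome::Success {
                ItemOutcome::Success
            } else {
                ItemOutcome::Failure
            }
        })
        .collect()
}
```

The accumulation-side signal mentioned above would then only be emitted for items whose resolved outcome is `Success`.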

### Statistics/Bookkeeping
- [ ] Consider integrating the subjective extrinsic and state:
- [ ] If so, have three items to allow for a whole epoch of opinion submission
- [ ] In which case allow for guaranteeing val keys from last epoch to gain points

### General
- [ ] Think about time and relationship between lookup-anchor block and import/export period.
- [ ] Lookup anchor: maybe it should be 48 hours since lookup anchor can already be up to 24 hours after reporting and we want something available up to 24 hours after that?
- [ ] Refine arguments:
- [ ] Currently passing in the WP hash, some WP fields and all manifest preimages: Consider passing in the whole work-package and a work-item index.
- [ ] Consider removal of the arrow-above notation in favour of subscript and ellipsis (this only works for the right-arrow).
- Optional `on_report` entry point
- Make memo bounded, rather than fixed.

## Done
- Statistics/Bookkeeping
- [x] Integrate into intro and definitions.
- [x] All "where" and "let" lines are unnumbered/integrated
- DA2
- [x] Update chunks/segments to new size of 12 bytes / 4KB in the availability sections, especially the work-packages and work-reports section and appendix H.
- [x] `export` is in multiples of 4096 bytes.
- [x] Manifest specifies WI (maximum) export count.
- [x] `import` is provided as concatenated segments of 4096 bytes, as per manifest.
- [x] Constant-depth merkle root
- [x] (Partial) Merkle proof generation function
- [x] New erasure root (4 items per validator; 2 hashes + 2 roots).
- [x] Specification of import hash (to include concatenated import data and proof).
- [x] Proof spec.
- [x] Specification of segment root.
- [x] Additional two segment-roots in WR.
- [x] Specification of segment tree.
- [x] Specification of segment proofs.
- [x] Specification of final segments for DA and ER.
- [x] Re-erasure-code imports.
- [x] Fetching imports and verification.
- [x] Independent definition of PVM.
- [x] Need to translate the basic work result into an "L"; do it in the appendix to ease layout
- [x] service - easy
- [x] service code hash - easy
- [x] payload hash - easy
- [x] gas prioritization - just from WP?
- [x] Edit Previous Work.
- [x] Edit Discussion.
- [x] Document guide at beginning.
- [x] Move constants to appendix and define at first use.
- [x] Context strings for all signatures.
- [x] List of all context strings in definitions.
- [x] Remove header items from ST dependency graph where possible.
- [x] Update serialization
- [x] For $\beta$ component $b$ - implement MMR encode.
- [x] Additional field: $\rho_g$
- [x] Link and integrate to RISCV references (Jan) HAVE SPEC
- [x] Link and integrate to Beefy signing spec (Syed)
- [x] Link and integrate to Erasure-Coding references (work with Al)
- [x] Grandpa/best-block: Disregard blocks which we believe are equivocated unless finalized.
- [x] Other PVM work
- [x] Define `sbrk` properly:
- [x] Update host functions to agreed API.
- [x] Figure out what to do with the jump table.
- [x] Define inner PVM host-calls
- [x] Spec below
- [x] Figure out what the $c_i$/$c_b$ are
- [x] Avoid entry point
- [x] Ensure code and jump-table is amalgamated down to VM-spec
- [x] Move host calls to use register index
- [x] Update serialization for Judgement extrinsic and judgments state.
- [x] Define Beefy process
- [x] Accumulate: should return Beefy service hash
- [x] Define Keccak hash $\mathbb{H}_K$
- [x] Remove Beefy root from header
- [x] Put the Beefy root into recent blocks after computation
- [x] Recent blocks should store MMR of roots of tree of accumulated service hashes
- [x] Define an MMR
- [x] Add \textsc{bls} public key to keyset (48 octet).
- [x] Specify requirement of validators to sign.
- [x] Define audit process.
- [x] Erasure coding root must be correct
- [x] This means we cannot assume that the WP hash can be inverted.
- [x] Instead, we assume that we can collect 1/3 of the chunks and combine them to produce *some* data
- [x] Then we check:
- [x] if that hashes to the WP hash.
- [x] if the erasure-coded chunks merklise into a tree of the given root.
- [x] If so we continue.
- [x] NOTE: The above should be done in guarantor stage also.
- [x] Auditing: Always finish once announced.
- [x] Judgments: Should cancel work-report from Rho prior to accumulation.
- [x] Signed judgments should not include guarantor keys;
- [x] Judgement extrinsic should use from rho.
- [x] Check "Which History" section and ensure it mentions possibility for reversion via judgment.
- [x] No reversion beyond finalized
- [x] Of the unfinalized extension, no block containing work-reports which appear in the banned-set of any other (valid) block.
- [x] Prior work and refine/remove the zk argumentation (work with Al)
- [x] Disputes state transitioning and extrinsic (work with Al)
- [x] Finish Merklization description
- [x] Bibliography
- [x] Updated PVM
- [x] Remove extrinsic segment root. Rename "* segment-root" to just "segment-root".
- [x] Combine chunk-root for WP, concatenated extrinsics and concatenated imports.
- [x] Imports are host-call
- [x] Make work-report field r bold.
- [x] Segmented DA v2
- [x] Underlying EC doesn't change, need to make clear segments are just a double-EC
- [x] Make work-report field r bold.
- [x] Need to translate the basic work result into an "L"; do it in the appendix to ease layout
- [x] service - easy
- [x] service code hash - easy
- [x] payload hash - easy
- [x] gas prioritization - just from WP?
- [x] Consider introducing a host-call for reading manifest data rather than always passing it in.
- [x] Guarantees by validator indices

### ELVES
- [x] Don't immediately alter kappa mid-epoch as it affects off-chain judgments.
- [x] Instead, apply the blacklist to guarantor sig verification directly.
- [x] Include the core in the WR.
- [x] Deposit the WR's signatures in with the judgment.
- [x] Require at least 1 negative judgment to be included with a positive verdict, and place the signer in the offender keys.
- [x] Serialization of judgment stuff
- [x] Should always be possible to submit more guarantee signatures of a known-bad report to place them in the offender set.
- [x] Only from lambda and kappa?
- [x] Use posterior kappa for everything except judgments.

% A set of independent, sequential, asynchronously interacting 32-octet state machines each of whose transitions lasts around 2 seconds of WebAssembly computation of a predetermined and fixed program and whose transition arguments are 5 MB. While well-suited to the verification of Substrate blockchains, it is otherwise quite limiting.

### Final DA
- [x] Include an epochal on-chain lookup from Work Package hash to segments root.
- [x] Allow import spec to be given as either WP hash (in case of being recent) or segments root.
- [x] How can this work given that WP hash to segments-root needs to happen in-core?
- [x] Include a bond to any SR->WPh lookups in the WR.
- [x] Check these bonds only just prior to accumulation; if they fail, drop the report and reduce the guarantors' points.
- [x] Update on-chain SR->WPh immediately at the time of being reported (don't wait for accumulation).
- [x] Define Erasure Coding proof means
- [x] Define binary Merkle proof-generation function which compiles neighbours down to leaf.
- [x] Define binary Merkle proof-verification function: there exists a sequence of values which contains our value and Merklises to some root (see the sketch below).
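
A rough sketch of the binary Merkle proof shape the last two items describe: proof generation collects sibling ("neighbour") hashes from the leaf upward, and verification re-hashes the leaf with those neighbours and compares against the root. The function names, the 32-byte `Hash` type, the node encoding and the odd-node duplication used here are placeholders, not the graypaper's actual constructions:

```rust
// Hypothetical sketch of binary Merkle proof generation and verification.
// `H` stands in for whichever hash function the specification settles on.

type Hash = [u8; 32];

/// Collect the sibling hashes ("neighbours") on the path from leaf `index` to the root.
fn merkle_proof<H: Fn(&[u8]) -> Hash>(leaves: &[Hash], mut index: usize, h: &H) -> Vec<Hash> {
    let mut layer: Vec<Hash> = leaves.to_vec();
    let mut proof = Vec::new();
    while layer.len() > 1 {
        if layer.len() % 2 == 1 {
            // Placeholder odd-node handling: duplicate the last node.
            layer.push(*layer.last().unwrap());
        }
        let sibling = if index % 2 == 0 { index + 1 } else { index - 1 };
        proof.push(layer[sibling]);
        layer = layer
            .chunks(2)
            .map(|pair| h(&[pair[0], pair[1]].concat()))
            .collect();
        index /= 2;
    }
    proof
}

/// Re-hash the leaf together with the proof's neighbours and compare against the root.
fn merkle_verify<H: Fn(&[u8]) -> Hash>(
    root: Hash,
    leaf: Hash,
    mut index: usize,
    proof: &[Hash],
    h: &H,
) -> bool {
    let mut acc = leaf;
    for sibling in proof {
        acc = if index % 2 == 0 {
            h(&[acc, *sibling].concat())
        } else {
            h(&[*sibling, acc].concat())
        };
        index /= 2;
    }
    acc == root
}
```
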
2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
0.5.2
0.6.3
16 changes: 16 additions & 0 deletions preamble.tex
@@ -102,6 +102,7 @@
\newcommand*{\oog}{\infty}
\newcommand*{\error}{\nabla}
\newcommand*{\panic}{\lightning}
\newcommand*{\badexports}{\circledcirc}
\newcommand*{\host}{\hbar}
\newcommand*{\halt}{\blacksquare}
\newcommand*{\fault}{\text{\raisebox{6pt}{\rotatebox{180}{\textsf{F}}}}}
@@ -165,12 +166,27 @@

\newcommand*{\wp¬authorizer}{a}
\newcommand*{\wp¬authcode}{\mathbf{c}}
\newcommand*{\wp¬metadata}{\mathbf{m}}

\newcommand*{\wi¬importsegments}{\mathbf{i}}
\newcommand*{\wi¬extrinsics}{\mathbf{x}}
\newcommand*{\wi¬exportsegments}{e}
\newcommand*{\wi¬codehash}{c}

\newcommand*{\wr¬srlookup}{\mathbf{l}}
\newcommand*{\wr¬authoutput}{\mathbf{o}} % TODO: Consider renaming to \mathbf{u} to free up \mathbf{o}
\newcommand*{\wr¬results}{\mathbf{r}} % TODO: Consider renaming to avoid overloading \mathbf{r} reports vs results; maybe \mathbf{t} for results

% TODO: Standardize and fully check this:
% p P: work Packages
% w I: work Items
% w W: work Reports
% r L: work resulTs
% d: work output data (digest)
% y: work payload

\newcommand*{\wl¬data}{\mathbf{d}}
\newcommand*{\wl¬payloadhash}{y}

\newcommand*{\accumulated}{\xi}
\newcommand*{\ready}{\vartheta}
10 changes: 5 additions & 5 deletions text/accounts.tex
@@ -28,9 +28,9 @@ \section{Service Accounts}\label{sec:accounts}

\subsection{Code and Gas}

The code $c$ of a service account is represented by a hash which, if the service is to be functional, must be present within its preimage lookup (see section \ref{sec:lookups}). We thus define the actual code $\mathbf{c}$:
The code and associated metadata of a service account are identified by a hash which, if the service is to be functional, must be present within its preimage lookup (see section \ref{sec:lookups}). We thus define the actual code $\mathbf{c}$ and metadata $\mathbf{m}$:
\begin{align}
\forall \mathbf{a} \in \mathbb{A} : \mathbf{a}_\mathbf{c} \equiv \begin{cases}
\forall \mathbf{a} \in \mathbb{A} : \se(\var{\mathbf{a}_\mathbf{m}}, \mathbf{a}_\mathbf{c}) \equiv \begin{cases}
\mathbf{a}_\mathbf{p}[\mathbf{a}_c] &\when \mathbf{a}_c \in \mathbf{a}_\mathbf{p} \\
\none &\otherwise
\end{cases}
@@ -126,17 +126,17 @@ \subsubsection{Semantics}

\subsection{Account Footprint and Threshold Balance}

We define the dependent values $i$ and $l$ as the storage footprint of the service, specifically the number of items in storage and the total number of octets used in storage. They are defined purely in terms of the storage map of a service, and it must be assumed that whenever a service's storage is changed, these change also.
We define the dependent values $i$ and $o$ as the storage footprint of the service, specifically the number of items in storage and the total number of octets used in storage. They are defined purely in terms of the storage map of a service, and it must be assumed that whenever a service's storage is changed, these change also.

Furthermore, as we will see in the account serialization function in section \ref{sec:serialization}, these are expected to be found explicitly within the Merklized state data. Because of this we make explicit their set.

We may then define a second dependent term $t$, the minimum, or \emph{threshold}, balance needed for any given service account in terms of its storage footprint.
\begin{align}
\forall a \in \mathcal{V}(\delta):\ \begin{cases}
a_i \in \N_{2^{32}} &\equiv 2\cdot|\,a_\mathbf{l}\,| + |\,a_\mathbf{s}\,| \\
a_l \in \N_{2^{64}} &\equiv \sum\limits_{\,(h, z) \in \keys{a_\mathbf{l}}\,} 81 + z \\
a_o \in \N_{2^{64}} &\equiv \sum\limits_{\,(h, z) \in \keys{a_\mathbf{l}}\,} 81 + z \\
&\phantom{\equiv} + \sum\limits_{x \in \mathcal{V}(a_\mathbf{s})} 32 + |x| \\
a_t \in \N_B &\equiv \mathsf{B}_S + \mathsf{B}_I \cdot a_i + \mathsf{B}_L \cdot a_l
a_t \in \N_B &\equiv \mathsf{B}_S + \mathsf{B}_I \cdot a_i + \mathsf{B}_L \cdot a_o
\end{cases}
\end{align}
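
Read concretely, the footprint and threshold arithmetic above amounts to the following sketch; the map shapes standing in for the preimage-lookup ($\mathbf{l}$) and storage ($\mathbf{s}$) dictionaries and the values chosen for $\mathsf{B}_S$, $\mathsf{B}_I$ and $\mathsf{B}_L$ are placeholders, not the specification's constants:

```rust
use std::collections::BTreeMap;

// Placeholder constants standing in for B_S, B_I and B_L.
const B_S: u64 = 100;
const B_I: u64 = 10;
const B_L: u64 = 1;

struct Account {
    // (hash, preimage length) -> availability timeslots, standing in for a_l.
    lookup: BTreeMap<([u8; 32], u32), Vec<u32>>,
    // storage key -> value, standing in for a_s.
    storage: BTreeMap<Vec<u8>, Vec<u8>>,
}

impl Account {
    /// a_i: item count = 2 * |a_l| + |a_s|.
    fn items(&self) -> u64 {
        2 * self.lookup.len() as u64 + self.storage.len() as u64
    }

    /// a_o: octet count = sum over lookup keys of (81 + z)
    ///                  + sum over storage values of (32 + |x|).
    fn octets(&self) -> u64 {
        let lookup: u64 = self.lookup.keys().map(|&(_, z)| 81 + z as u64).sum();
        let storage: u64 = self.storage.values().map(|x| 32 + x.len() as u64).sum();
        lookup + storage
    }

    /// a_t: threshold balance = B_S + B_I * a_i + B_L * a_o.
    fn threshold(&self) -> u64 {
        B_S + B_I * self.items() + B_L * self.octets()
    }
}
```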
