DAS protobuffs #710
Conversation
Signed-off-by: Cody Littley <[email protected]>
api/proto/disperser/disperser.proto
Outdated
// A request to retrieve a specific chunk.
message GetChunkRequest {
    // The hash of the blob's ReducedBatchHeader defined on-chain.
    bytes batch_header_hash = 1;
As above, should we support both lookup schemes?
Would it make sense to support requesting multiple chunks at once?
Exactly how this is implemented depends a lot on whether we will eventually stop supporting lookup by blob header hash. Will make this change once I know the answer to this question.
Supporting multiple requests at once makes a lot of sense to me. I'll also add that in when I make the change to support lookup by both mechanisms.
Hmm... the more I think about it, the more I'd like to delay adding things that are not absolutely necessary in this iteration. Could we circle back and add methods that return multiple chunks at a future time?
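For reference, a batched variant could later be added alongside the single-chunk RPC without breaking it. A minimal sketch, assuming the same batch-header-hash lookup scheme; the message and field names here are illustrative and not part of this PR:

```protobuf
syntax = "proto3";
package disperser;

// Hypothetical batched request, sketched for discussion only.
message GetChunksRequest {
  // The hash of the blob's ReducedBatchHeader defined on-chain.
  bytes batch_header_hash = 1;
  // Indices of the chunks being requested.
  repeated uint32 chunk_indices = 2;
}
```

Because the batched message is separate, deferring it (as decided above) costs nothing: adding it later is a purely additive proto change.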
Signed-off-by: Cody Littley <[email protected]>
api/proto/node/node.proto
Outdated
////////////////////////////////////////////////////////////////////////////////////////////////

// GetChunks retrieves all of the chunks for a blob held by the node. Eventually this will replace RetrieveChunks.
rpc GetChunks(common.BlobKey) returns (GetChunksReply) {}
There's some ambiguity here, since the quorum ID isn't specified in the BlobKey. Maybe we should have a common.BlobKeyAndQuorumID which wraps the BlobKey. RetrieveChunks can take BlobKeyAndQuorumID; ChunkKey can contain the BlobKeyAndQuorumID and the ChunkIndex.
^^Just one idea
I ended up just calling the argument GetChunksRequest, which contains a blob key and a quorum ID. Do you think we will end up reusing this type? If so, I can move it to common and give it a more general name.
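The shape described above can be sketched as follows; field numbers and the exact field names are assumptions, not verbatim from the PR:

```protobuf
syntax = "proto3";
package node;

import "common/common.proto";

// Sketch of the request discussed above: a blob key plus the quorum to read from.
// This resolves the ambiguity of a bare BlobKey, which does not carry a quorum ID.
message GetChunksRequest {
  common.BlobKey blob_key = 1;
  uint32 quorum_id = 2;
}
```

Keeping it a node-local message (rather than a general common.BlobKeyAndQuorumID) is the narrower choice; promoting it to common later is an additive, backward-compatible move.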
A couple comments.
Signed-off-by: Cody Littley <[email protected]>
Force-pushed from 8657d3e to cc291e4
Signed-off-by: Cody Littley <[email protected]>
api/proto/node/node.proto
Outdated
// Eventually this will replace GetBlobHeader.
rpc GetHeader(common.BlobKey) returns (GetBlobHeaderReply) {}
// StreamHeaders requests a stream of all new headers.
rpc StreamHeaders(stream StreamHeadersRequest) returns (stream GetBlobHeaderReply) {}
We should discuss whether we also want to have a StreamBlobKeys endpoint.
Repeating result of discussion here. It was decided that having a StreamBlobKeys endpoint would probably be useful, but that it was not a necessary component for the initial MVP. We plan on circling back on this question after the MVP launches.
Remove it for now if we plan to revisit/decide later
@mooselumph's prior comment was about adding an additional RPC. The StreamHeaders RPC is one that we are confident we will want to keep.
Looks good to me. @jianoaix, can you please take a look?
Signed-off-by: Cody Littley <[email protected]>
api/proto/common/common.proto
Outdated
@@ -3,8 +3,47 @@ package common;
option go_package = "github.com/Layr-Labs/eigenda/api/grpc/common";

message G1Commitment {
    // The X coordinate of the KZG commitment. This is the raw byte representation of the field element.
This existing indent was compatible with other protos. Let's make them consistent.
Whitespace changed back to tabs.
I did a search of this directory tree and found a few places that used spaces instead of tabs in proto files. I also updated those places to use tabs.
api/proto/node/node.proto
Outdated
rpc GetChunks(GetChunksRequest) returns (GetChunksReply) {}
// GetChunk retrieves a specific chunk for a blob custodied at the Node.
rpc GetChunk(common.ChunkKey) returns (common.ChunkData) {}
// GetHeader is similar to RetrieveChunks, this just returns the header of the blob.
You may not want to refer to RetrieveChunks in this new set of APIs.
I removed the reference to RetrieveChunks, but left the note that it will eventually replace GetBlobHeader.
api/proto/node/node.proto
Outdated
// Messages for WIP RPCs. These are subject to change, do not consider these to be stable API. //
/////////////////////////////////////////////////////////////////////////////////////////////////

// Request that all chunks from a particular quorum for a blob be sent.
"...all custodied chunks..." and drop "be sent"
// Request all custodied chunks from a particular quorum
api/proto/node/node.proto
Outdated
// Reply to GetChunksRequest.
message GetChunksReply {
    // The indices of the chunks sent. Each index corresponds to the chunk at the same position in the chunks field.
"indices of the chunks custodied"
change made
api/proto/node/node.proto
Outdated
// StreamHeaders requests a stream of all new headers.
rpc StreamHeaders(stream StreamHeadersRequest) returns (stream GetBlobHeaderReply) {}
// Retrieve node info metadata. Eventually this will replace NodeInfo.
rpc GetNodeInfo(GetNodeInfoRequest) returns (GetNodeInfoReply) {}
Why do we need a new node info API?
The main goal is to have better uniformity in the names of the RPCs. This does the same thing as the RPC NodeInfo, it just has a different name. Should we remove this new RPC for now?
api/proto/node/node.proto
Outdated
// Eventually this will replace GetBlobHeader.
rpc GetHeader(common.BlobKey) returns (GetBlobHeaderReply) {}
// StreamHeaders requests a stream of all new headers.
rpc StreamHeaders(stream StreamHeadersRequest) returns (stream GetBlobHeaderReply) {}
Remove it for now if we plan to revisit/decide later
api/proto/common/common.proto
Outdated
// Uniquely identifies a blob based on its certificate hash and blob index.
message BatchBlobKey {
    bytes batch_header_hash = 1;
It'd be more readable if we make the wording consistent with comment: either it's batch_header_hash, or certificate_hash
Comment fixed. This was leftover cruft from a previous iteration where we were calling it something different.
api/proto/common/common.proto
Outdated
// Uniquely identifies a blob based on its blob header hash.
message HeaderBlobKey {
    bytes header_hash = 1;
If there is an expected length for this hash, document it in a comment.
@mooselumph what is the expected length of the header hash and the batch header hash?
api/proto/common/common.proto
Outdated
// A unique identifier for a blob.
message BlobKey {
    oneof key {
Do we have to support both? It makes the golang code more verbose and less easy to work with (have to check which field is set each time)
This is probably a good discussion topic with @mooselumph, the plan for how to address a blob is something he's been working on.
api/proto/common/common.proto
Outdated
// A unique identifier for a blob.
message BlobKey {
    oneof key {
        HeaderBlobKey header_blob_key = 1;
It'd be nice to have a bit more intuitive naming for these two formats of keys
How about BatchHeaderHashBlobKey and HeaderHashBlobKey? Or is there something else you think would fit better?
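Putting the suggested names together, the key message could look like the sketch below. The sub-message definitions and field numbers are assumptions for illustration; only the two proposed type names come from the thread:

```protobuf
syntax = "proto3";
package common;

// Hypothetical renamed key types, per the naming suggestion in the thread.
message HeaderHashBlobKey {
  // Lookup by the hash of the blob header.
  bytes header_hash = 1;
}

message BatchHeaderHashBlobKey {
  // Lookup by the hash of the batch header plus the blob's index within the batch.
  bytes batch_header_hash = 1;
  uint32 blob_index = 2;
}

message BlobKey {
  oneof key {
    HeaderHashBlobKey header_hash_blob_key = 1;
    BatchHeaderHashBlobKey batch_header_hash_blob_key = 2;
  }
}
```

Note the earlier verbosity concern: generated Go code for a oneof requires a type switch on `GetKey()` at every call site, which is the cost of supporting both lookup schemes in one message.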
node/grpc/server.go
Outdated
return nil, status.Errorf(codes.Unimplemented, "method GetHeader not implemented")
}

func (s *Server) StreamHeaders(pb.Retrieval_StreamHeadersServer) error {
I assume this means there could be support for long standing GRPC connections ?
That was the idea. I've never set up a system with long lived GRPC connections, so I don't know if there are any pitfalls we need to worry about. Is this something you think we should reconsider from a design perspective?
Signed-off-by: Cody Littley <[email protected]>
api/proto/common/common.proto
Outdated
@@ -8,3 +8,12 @@ message G1Commitment {
    // The Y coordinate of the KZG commitment. This is the raw byte representation of the field element.
    bytes y = 2;
}

/////////////////////////////////////////////////////////////////////////////////////////////////
// Messages for WIP RPCs. These are subject to change, do not consider these to be stable API. //
We could usually call such work "Experimental". E.g.
// # Experimental
//
// The following are EXPERIMENTAL and subject to change, do not consider these to be stable API.
Changed to the following comment:
/////////////////////////////////////////////////////////////////////////////////////
// Experimental: the following definitions are experimental and subject to change. //
/////////////////////////////////////////////////////////////////////////////////////
// A chunk of a blob.
message ChunkData {
    bytes data = 1;
Users need a way to decode the bytes. We may settle on Gnark, so maybe it's fine to just have bytes and implicitly assume it's Gnark.
My thinking here for just having bytes is that once we settle on a single encoding, we can document the format. If we end up deciding to support multiple encodings, we can trivially add another field to specify the encoding.
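If multiple encodings ever do need to be supported, the "trivially add another field" path could look like the sketch below. The enum and field are hypothetical and explicitly not part of this PR, which keeps bare bytes:

```protobuf
syntax = "proto3";
package common;

// Hypothetical future extension; the PR itself defines only the bytes field.
enum ChunkEncoding {
  // Zero value: encoding unspecified, interpreted as the documented default.
  CHUNK_ENCODING_UNSPECIFIED = 0;
  CHUNK_ENCODING_GNARK = 1;
}

message ChunkData {
  bytes data = 1;
  // Adding this later is backward compatible: old messages decode with the
  // zero value, which maps to the default encoding.
  ChunkEncoding encoding = 2;
}
```

Because proto3 treats unknown fields and zero values gracefully, existing readers would keep working when such a field is introduced.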
api/proto/disperser/disperser.proto
Outdated
// Request a specific chunk
message GetChunkRequest {
    // The hash of the blob's header.
    bytes header_hash = 1;
blob_header_hash for clarity?
change made
common.ChunkData chunk = 1;
}

// A request from a DA node to an agent light node to stream the availability status of all chunks
Why is this API specific to agent light node? Should it applicable to all light nodes?
All light nodes will expose this API. Non-agents will return an error if it is called. Agents will return an error if the proper authentication token is not provided, and that error will look identical to the one returned when a non-agent is called. It's important for the public interface of an agent light node to be indistinguishable from the public interface of a non-agent light node.
Good to know this, it may be worth documenting the difference of behavior between agent vs. non-agent LNs.
api/proto/node/node.proto
Outdated
// GetChunk retrieves a specific chunk for a blob custodied at the Node.
rpc GetChunk(GetChunkRequest) returns (GetChunkReply) {}
// StreamHeaders requests a stream of all new headers.
rpc StreamHeaders(stream StreamHeadersRequest) returns (stream StreamHeadersReply) {}
StreamBlobHeaders?
change made
// A reply to a StreamAvailabilityStatus request.
message StreamAvailabilityStatusReply {
    bytes header_hash = 1;
The comment of the API says "availability status of all chunks", but this is a blob header hash, how should users interpret it? (document it here)
Updated doc as follows:
// The hash of a blob header corresponding to a chunk the agent received and verified. From the light node's
// perspective, the blob is available if the chunk is available.
bytes header_hash = 1;
This comment is helpful. Note my point was actually related to the naming of the API method itself, i.e. the return value is "blobs (header hash) that are available at the light node", not "availability statuses of all blobs that the light node received".
Updated RPC name and comment as follows:
// StreamBlobAvailability streams the availability status of blobs from the light node's perspective.
// A light node considers a blob to be available if all chunks it wants to sample are available.
// This API is for use by a DA node for monitoring the availability of chunks through its
// constellation of agent light nodes.
rpc StreamBlobAvailability(StreamChunkAvailabilityRequest) returns (stream StreamChunkAvailabilityReply) {}
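For completeness, the companion request and reply messages could be sketched as below. The field names follow the discussion in this thread (the reply carries a blob header hash), but the request's contents, in particular the authentication field, are an assumption rather than verbatim from the PR:

```protobuf
syntax = "proto3";
package lightnode;

// Hypothetical request message; the authentication mechanism is an
// assumption based on the agent/non-agent discussion above.
message StreamChunkAvailabilityRequest {
  // Token proving the caller is the DA node this agent reports to.
  bytes authentication_token = 1;
}

message StreamChunkAvailabilityReply {
  // The hash of a blob header corresponding to a chunk the agent received
  // and verified. From the light node's perspective, the blob is available
  // if the chunk is available.
  bytes header_hash = 1;
}
```

Since the reply reports only blobs that became available (rather than a full status enum), each streamed message stays a single hash, matching the renaming rationale discussed earlier.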
Signed-off-by: Cody Littley <[email protected]>
api/proto/lightnode/lightnode.proto
Outdated
// StreamAvailabilityStatus streams the availability status of all chunks assigned to the light node.
// For use by a DA node for monitoring the availability of chunks through its constellation of agent light nodes.
rpc StreamAvailabilityStatus(StreamAvailabilityStatusRequest) returns (stream StreamAvailabilityStatusReply) {}
Should it be StreamAvailableChunks or StreamAvailableBlobs? The returned message is not a status (like available, not available, live, expired etc), it's just the blobs that are available at the light node.
This is a good callout. I think there's probably still room to rename these if needed. Cody, maybe you can make a judgement call and go with it for now.
I've changed this to StreamChunkAvailability. Even after this PR merges, it's not too late to change naming. It only becomes tricky once we have community members running light nodes of their own.
Looks good to me!
Signed-off-by: Cody Littley <[email protected]>
Mostly LG.
api/proto/relay/relay.proto
Outdated
/////////////////////////////////////////////////////////////////////////////////////

service Relay {
    // Note: the relay API is still being fleshed out. This definition is not yet complete.
nit: align the indent style with other .proto files
done
Signed-off-by: Cody Littley <[email protected]>
Why are these changes needed?
Introduces protobuf definitions for DAS.