MIB Extension Proposal #142
@timburke and I discussed this offline and have agreed there's a simpler way to achieve these goals. Our current thinking is this:
Per discussion with @amcgee today, we decided the following:
uint8 bulk_receive(void *buffer, uint8 length, void (*handler)(void *, uint8))
A master would call a slave endpoint, which would internally prepare a buffer to receive the largest chunk of data it can handle and call bulk_receive with the buffer pointer and size. The MIB executive would internally store the buffer location and length. Subsequent calls to a standard bulk_transfer endpoint would load 20 bytes at a time into the buffer, and once it was full, the handler would be called. The filling would happen without application code intervention on the slave side. If the slave wanted more data, it could call bulk_receive again from the handler code. Since this happens with the clock stretched, the master doesn't need to know anything about how the slave is processing the chunks and can just send chunks until the slave doesn't want any more or it runs out.
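A minimal sketch of how that could look on the slave side; only the bulk_receive signature comes from the note above, and the buffer size and other names are illustrative:

```c
#include <stdint.h>

/* Proposed MIB executive call, signature taken from the note above. */
uint8_t bulk_receive(void *buffer, uint8_t length, void (*handler)(void *, uint8_t));

#define REPORT_CHUNK_SIZE 64            /* largest piece this slave can buffer */

static uint8_t  report_buffer[REPORT_CHUNK_SIZE];
static uint16_t bytes_remaining;        /* bytes the slave still expects */

/* Invoked by the MIB executive once report_buffer has been filled.
 * `filled` is how many bytes actually arrived. */
static void report_chunk_ready(void *buffer, uint8_t filled)
{
    (void)buffer;
    /* ...consume the chunk here (write to flash, parse, etc.)... */
    bytes_remaining -= filled;

    if (bytes_remaining > 0) {
        /* Re-arm reception from inside the handler; the executive keeps
         * filling the buffer 20 bytes at a time with no further
         * application code on the slave side. */
        uint8_t next = (bytes_remaining < REPORT_CHUNK_SIZE)
                           ? (uint8_t)bytes_remaining
                           : REPORT_CHUNK_SIZE;
        bulk_receive(report_buffer, next, report_chunk_ready);
    }
}

/* Endpoint the master calls to kick off the transfer. */
void start_report_upload(uint16_t total_length)
{
    bytes_remaining = total_length;
    uint8_t first = (total_length < REPORT_CHUNK_SIZE) ? (uint8_t)total_length
                                                       : REPORT_CHUNK_SIZE;
    bulk_receive(report_buffer, first, report_chunk_ready);
}
```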
The other point we didn't discuss was whether or not to discontinue support.
I'm going to mock up this change this weekend and open a PR.
I would like to propose the following extension to the MIB protocol. This extension is intended to support two additional features: arbitrarily large message payloads and asynchronous RPCs.
There are a few important considerations in this protocol extension, namely message size limitations and callback state:
Message Size Limitations
Since some chips have very limited RAM, supporting arbitrarily large message sizes can be challenging. To accommodate this limitation, I propose that during the initial RPC handshake the two parties should agree on a maximum "chunk size" to be used when sending the buffer. The caller then sends the message in chunks of this size, waiting for the callee to process each chunk before proceeding.
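As a sketch of the caller side, assuming hypothetical mib_negotiate_chunk_size and mib_send_chunk helpers (neither exists today):

```c
#include <stdint.h>
#include <stdbool.h>

#define CALLER_MAX_CHUNK 20   /* what this module is willing to stage at once */

/* Hypothetical helpers, not part of the current MIB executive. */
uint8_t mib_negotiate_chunk_size(uint8_t address, uint8_t feature,
                                 uint8_t command, uint8_t message_size);
bool    mib_send_chunk(uint8_t address, const uint8_t *data, uint8_t length);

/* Send `length` bytes to (address, feature, command) in agreed-upon chunks. */
bool send_long_message(uint8_t address, uint8_t feature, uint8_t command,
                       const uint8_t *data, uint8_t length)
{
    uint8_t callee_max = mib_negotiate_chunk_size(address, feature, command, length);
    if (callee_max == 0)
        return false;                   /* callee rejected the transfer */

    uint8_t chunk = (callee_max < CALLER_MAX_CHUNK) ? callee_max : CALLER_MAX_CHUNK;

    while (length > 0) {
        uint8_t n = (length < chunk) ? length : chunk;
        /* The callee clock-stretches while it digests each chunk, so this
         * loop naturally waits for it to be ready for more data. */
        if (!mib_send_chunk(address, data, n))
            return false;
        data   += n;
        length -= n;
    }
    return true;
}
```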
This could theoretically be done asynchronously as well so as not to lock the bus during message transmission, but this significantly increases the protocol overhead and I think it is more reasonable to expect developers to design the module APIs in a way that minimizes data chunk processing time.
On another note, supporting arbitrary buffer sizes would be problematic with the existing RPC Queue feature on the controller. I propose having a "pointer"-style queue, where each queued RPC's data buffer is saved to flash (if it's long enough), with the first 16 or so bytes actually stored in the in-memory queue structure. The controller can load each subsequent chunk of the stored buffer into memory while the slave is processing the previous one.
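A rough idea of what such a pointer-style queue entry could look like; all field names and the 16-byte inline threshold are illustrative:

```c
#include <stdint.h>

#define RPC_INLINE_BYTES 16   /* head of the buffer kept in RAM ("16 or so") */

/* One entry in the proposed pointer-style RPC queue on the controller.
 * While the slave is processing chunk i, the controller streams chunk i+1
 * from flash_offset into a small staging buffer, so RAM usage stays bounded
 * regardless of data_length. */
typedef struct {
    uint8_t  dest_address;
    uint8_t  feature;
    uint8_t  command;
    uint8_t  data_length;                   /* full buffer length (up to 255) */
    uint8_t  inline_data[RPC_INLINE_BYTES]; /* first bytes, always in memory  */
    uint32_t flash_offset;                  /* remainder, if data_length is
                                               larger than RPC_INLINE_BYTES   */
} queued_rpc_t;
```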
In the existing MIB protocol, the param_spec byte can specify up to 3 integer parameters and a buffer parameter of up to 31 bytes. This is insufficient for our purposes, and I don't think the parameter typing is particularly useful (we already "hack" it in multiple places to pass more than 3 integers). I think the callback definition API can grab "typed" arguments even if the parameters are untyped at the MIB protocol layer. Very basic type checking can be done by simply confirming that the RPC's parameter buffer size matches what the callback expects (i.e. 4 bytes for 2 integer parameters). Removing this protocol-layer type enforcement means we have a full 8 bits to specify message length, bumping our maximum to 255 bytes, which seems reasonable.
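The "type check" that remains is really just a length check at dispatch time, something like the sketch below (the endpoint table layout is illustrative):

```c
#include <stdint.h>
#include <stdbool.h>

#define MIB_MAX_PARAM_BYTES 255          /* full 8-bit length field */

typedef struct {
    uint8_t length;
    uint8_t data[MIB_MAX_PARAM_BYTES];
} mib_params_t;

typedef void (*mib_handler_t)(const mib_params_t *params);

typedef struct {
    uint8_t       feature;
    uint8_t       command;
    uint8_t       expected_length;       /* e.g. 4 for two 16-bit integers */
    mib_handler_t handler;
} mib_endpoint_t;

/* All the protocol-level "type checking" that remains: confirm the caller
 * sent exactly as many parameter bytes as the handler expects. */
static bool dispatch(const mib_endpoint_t *ep, const mib_params_t *params)
{
    if (params->length != ep->expected_length)
        return false;                    /* report a parameter error instead */

    ep->handler(params);
    return true;
}
```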
Callback state
Supporting asynchronous RPCs requires state about previous MIB calls to be stored by the individual modules. The callee needs to know what callback address to hit when the handler has finished doing its thing. This should be straightforward on the 16-bit chips because they already have state in the RPC queue (it will just need to support removal of elements that aren't at the "top" of the queue), and I think 8-bit chips should only need to support one callback at a time. The easiest way to implement this should be to reserve a particular feature and command (say 0xFF:0xFF, or something less banal) as the universal "callback address", and have it expect the first byte of the callback data to be the "identifier" of the RPC as specified by the original call.
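On an 8-bit callee, the stored state and the eventual callback could be as small as this sketch; mib_call and the 32-byte payload cap are assumptions, and 0xFF:0xFF is just the placeholder suggested above:

```c
#include <stdint.h>
#include <stdbool.h>

#define CALLBACK_FEATURE 0xFF   /* the placeholder "universal callback" pair */
#define CALLBACK_COMMAND 0xFF

/* Everything an 8-bit callee has to remember about one asynchronous RPC. */
typedef struct {
    bool    in_use;
    uint8_t origin_address;     /* i2c address of the original caller */
    uint8_t rpc_identifier;     /* ID the caller assigned to this call */
} pending_rpc_t;

static pending_rpc_t pending;

/* Hypothetical primitive for issuing an outgoing RPC from the callee. */
bool mib_call(uint8_t address, uint8_t feature, uint8_t command,
              const uint8_t *data, uint8_t length);

/* Called by the application once the deferred handler finally finishes. */
void finish_async_rpc(const uint8_t *result, uint8_t result_len)
{
    uint8_t payload[32];
    uint8_t n = 0;

    payload[n++] = pending.rpc_identifier;   /* first byte identifies the RPC */
    while (result_len-- && n < sizeof(payload))
        payload[n++] = *result++;

    mib_call(pending.origin_address, CALLBACK_FEATURE, CALLBACK_COMMAND,
             payload, n);

    pending.in_use = false;
}
```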
The Heart of the Matter
The "arbitrary message length" protocol extension
1. The caller writes <start write>+(address, feature, command, message_size)+checksum+<repeated start read> onto the bus.
2. The callee responds with (max_chunk_size)+checksum if it can accept the message, otherwise (0,error_code)+checksum.
3. The chunk size used for the transfer is min(caller_max_chunk_size, callee_max_chunk_size). A chunk with size n looks like this: <r. start write>+(data[n])+checksum+<r. start read>.
4. After the last chunk, the caller continues with <r. start read> (below) instead of another <r. start write> (above).
After the RPC handler has executed, the return logic could also use this same method to support an arbitrary return value buffer size.
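Spelled out against a generic i2c driver, the first three steps look roughly like this; every function below is a stand-in rather than an existing MIB executive call, and the one-byte ack after each chunk is an assumption:

```c
#include <stdint.h>

/* Generic i2c primitives used for illustration only. */
void    i2c_start_write(uint8_t address);
void    i2c_restart_write(uint8_t address);
void    i2c_restart_read(uint8_t address);
void    i2c_write(const uint8_t *data, uint8_t len);
void    i2c_read(uint8_t *data, uint8_t len);
uint8_t mib_checksum(const uint8_t *data, uint8_t len);

/* Steps 1-2: <start write>+(address, feature, command, message_size)+checksum
 * +<repeated start read>, answered by (max_chunk_size)+checksum or
 * (0,error_code)+checksum.  Returns the callee's max chunk size (0 = error). */
uint8_t mib_long_call_header(uint8_t address, uint8_t feature, uint8_t command,
                             uint8_t message_size)
{
    uint8_t header[4] = { address, feature, command, message_size };
    uint8_t check     = mib_checksum(header, sizeof(header));
    uint8_t reply[2];

    i2c_start_write(address);
    i2c_write(header, sizeof(header));
    i2c_write(&check, 1);

    i2c_restart_read(address);
    i2c_read(reply, sizeof(reply));      /* (max_chunk_size or 0, checksum) */
    return reply[0];
}

/* Step 3: one chunk of n bytes.  The trailing repeated-start read is where
 * the callee clock-stretches until it has processed the data; what it
 * returns there (a single ack byte is assumed here) is up for discussion. */
void mib_send_chunk_raw(uint8_t address, const uint8_t *data, uint8_t n)
{
    uint8_t check = mib_checksum(data, n);
    uint8_t ack;

    i2c_restart_write(address);
    i2c_write(data, n);
    i2c_write(&check, 1);

    i2c_restart_read(address);
    i2c_read(&ack, 1);
}
```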
The "asynchronous RPC" protocol extension
Once the RPC call has been made, it would traditionally be the callee's turn to execute the callback and specify a return status+value. This can still be supported by the extension (advantageous for handlers that run quickly and would suffer from the additional overhead of orchestrating the callback).
To make the call asynchronous instead, the caller replaces the final <repeated start read> with <r. start write>(origin_address, rpc_identifier)+checksum, where origin_address is the caller's i2c address and rpc_identifier is the ID the caller has assigned to this "pending" call. Once the handler has finished, the callee completes the exchange by making its own RPC to (origin_address, callback_feature, callback_command), where callback_feature and callback_command are well-known, specifying the rpc_identifier as the first byte of the data buffer.
Handler Definition Syntax
To facilitate processing long messages in small-sized chunks, the following C API could be used (written assuming a 16-bit compiler; it would have to be optimized for 8-bit):
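As a rough illustration of what that could look like; every name and number below is made up for the example rather than a settled proposal:

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative only: one possible shape for a chunk-oriented handler API. */

typedef struct {
    uint16_t total_length;   /* full message size announced in the header  */
    uint16_t offset;         /* where this chunk starts within the message */
    uint8_t  length;         /* bytes in this chunk (<= negotiated size)   */
    const uint8_t *data;
} mib_chunk_t;

typedef struct {
    uint8_t feature;
    uint8_t command;
    uint8_t max_chunk_size;  /* advertised during the handshake */

    /* Called once per chunk; return false to abort the transfer early. */
    bool (*on_chunk)(const mib_chunk_t *chunk);

    /* Called after the last chunk; fills in the return value buffer and
     * returns its length. */
    uint8_t (*on_complete)(uint8_t *return_buffer, uint8_t max_return_len);
} mib_chunked_handler_t;

/* Example endpoint: stream a firmware image to flash 20 bytes at a time. */
static bool fw_on_chunk(const mib_chunk_t *chunk)
{
    /* flash_write(FW_BASE + chunk->offset, chunk->data, chunk->length); */
    (void)chunk;
    return true;
}

static uint8_t fw_on_complete(uint8_t *return_buffer, uint8_t max_return_len)
{
    (void)max_return_len;
    return_buffer[0] = 0;    /* status: success */
    return 1;                /* length of return value */
}

static const mib_chunked_handler_t firmware_upload_handler = {
    .feature        = 0x20,  /* made-up feature/command numbers */
    .command        = 0x01,
    .max_chunk_size = 20,
    .on_chunk       = fw_on_chunk,
    .on_complete    = fw_on_complete,
};
```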
And that's it. Let me know if anything is unclear (I'm sure something is).