OpenAI's spec introduced some new capabilities that led to some larger changes in the `async-openai` crate. I'm more than happy to contribute this work to `tiktoken-rs`, but I wanted to open the dialogue first about one of the larger structural changes: the biggest thing (imo) is that the structure of `ChatCompletionRequestMessage` was changed to support a different data type per role.
This makes getting message content less trivial, but it seemed like a necessity since OpenAI now supports user messages that include images when invoking certain models, so this will require a data structure change. I'm not very familiar with this space outside of general application, so I'm looking for input on how these new user messages should be handled for token counting.
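To make the structural change concrete, here is a minimal sketch of what a per-role message type and text extraction for token counting might look like. These enums are simplified stand-ins, not the actual `async-openai` definitions, and image parts are simply skipped since counting their tokens would need model-specific rules:

```rust
/// Stand-in for the new user-content type: plain text, or an array of
/// parts that may mix text and image URLs. (Simplified, hypothetical.)
enum UserContent {
    Text(String),
    Parts(Vec<ContentPart>),
}

enum ContentPart {
    Text(String),
    ImageUrl(String),
}

/// Stand-in for the role-specific message enum described above.
enum ChatCompletionRequestMessage {
    System { content: String },
    User { content: UserContent },
    Assistant { content: Option<String> },
}

/// Collect the text pieces a tokenizer could count; image parts are
/// ignored in this sketch.
fn countable_text(msg: &ChatCompletionRequestMessage) -> Vec<&str> {
    match msg {
        ChatCompletionRequestMessage::System { content } => vec![content.as_str()],
        ChatCompletionRequestMessage::Assistant { content } => {
            content.as_deref().into_iter().collect()
        }
        ChatCompletionRequestMessage::User { content } => match content {
            UserContent::Text(t) => vec![t.as_str()],
            UserContent::Parts(parts) => parts
                .iter()
                .filter_map(|p| match p {
                    ContentPart::Text(t) => Some(t.as_str()),
                    ContentPart::ImageUrl(_) => None,
                })
                .collect(),
        },
    }
}

fn main() {
    let msg = ChatCompletionRequestMessage::User {
        content: UserContent::Parts(vec![
            ContentPart::Text("describe this image".into()),
            ContentPart::ImageUrl("https://example.com/cat.png".into()),
        ]),
    };
    assert_eq!(countable_text(&msg), vec!["describe this image"]);
}
```

The point is that a single `content: String` accessor no longer exists; any token-counting helper has to match on the role-specific variants first.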