Has anyone seen GPT-4o-Mini failing to properly fill out an Instructor model? #873
Unanswered
faroukcharkas asked this question in Q&A
Replies: 1 comment 1 reply
-
Short suggestion is to use `Literal["A", "B"]` instead of `Enum`. Enum performance is so bad I almost want to warn users...
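To illustrate the suggestion above, here is a minimal sketch of the two approaches side by side. The model and field names (`Label`, `labels`) are hypothetical stand-ins for the real schema:

```python
from enum import Enum
from typing import List, Literal

from pydantic import BaseModel


class Label(Enum):
    # Hypothetical options standing in for the real ~20
    A = "A"
    B = "B"


class WithEnum(BaseModel):
    # Enum-typed field, as in the original setup
    labels: List[Label]


class WithLiteral(BaseModel):
    # Literal-typed alternative suggested in the reply
    labels: List[Literal["A", "B"]]
```

Both validate the same inputs; the difference is only in how the allowed values are presented to the model in the generated JSON schema.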
-
My Setup

Using `TOOLS` as my `mode`, 0.2 for my `temperature`, and 4096 for my `max_tokens`. Using the asynchronous OpenAI client. My prompt token size is around 10,000 (a lot, I know), and I'm using a model that has the following fields:

- `List` of `Enum`s (20 `Enum` options, average `List` length of 5)

Observations

- `gpt-3.5-turbo` and `gpt-4o` work perfectly