Running an evaluation may ignore the supplied model and use the default model (gpt-3.5-turbo) instead. For example, with the following command:

```shell
ontogpt -vvv eval -o soup.yaml --num-tests 5 --no-chunking -m gpt-4 EvalCTD
```
the client appears to still be using the default:

```
INFO:ontogpt.clients.openai_client:Complete: engine=gpt-3.5-turbo, prompt[769]=Split the following piece of text into fields in the following format:
```
But it is also using a cached response, so either way the specified model isn't used.
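As a side note on why a cached response can mask the model setting: a minimal, purely hypothetical sketch (not the actual ontogpt cache code) is a cache keyed only on the prompt, so switching `-m` returns the previous model's stored answer instead of triggering a fresh call.

```python
# Hypothetical illustration: if cached completions are keyed only on the
# prompt text, changing the model silently replays the old response.
cache = {}

def complete(prompt: str, model: str) -> str:
    key = prompt  # bug-prone: the model name is not part of the cache key
    if key not in cache:
        cache[key] = f"response from {model}"  # stand-in for a real API call
    return cache[key]

print(complete("split this text", "gpt-3.5-turbo"))  # response from gpt-3.5-turbo
print(complete("split this text", "gpt-4"))          # stale: still gpt-3.5-turbo's response
```

Including the model name in the cache key (e.g. `key = (model, prompt)`) avoids this class of staleness.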
Hrm, even with a fresh cache it isn't passing the value for the model param.
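The symptom above is consistent with the supplied value never being threaded through to the client. A minimal sketch, assuming hypothetical names (`OpenAIClient`, `run_eval` are illustrative, not the real ontogpt API), of how a CLI-supplied model can be accepted but silently dropped:

```python
DEFAULT_MODEL = "gpt-3.5-turbo"

class OpenAIClient:
    """Stand-in for a completion client with a default model."""
    def __init__(self, model: str = DEFAULT_MODEL):
        self.model = model

def run_eval_buggy(model: str) -> str:
    # Bug pattern: the parameter is received but never forwarded,
    # so the client falls back to its default.
    client = OpenAIClient()
    return client.model

def run_eval_fixed(model: str) -> str:
    # Fix pattern: thread the parameter through to the client.
    client = OpenAIClient(model=model)
    return client.model

print(run_eval_buggy("gpt-4"))  # gpt-3.5-turbo
print(run_eval_fixed("gpt-4"))  # gpt-4
```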
Fix #250 (#251), commit `2e35f5c`
Fixes an issue in which the model param wasn't getting set properly for evaluations.