Evaluation engine may not respect choice of model #250

Closed
caufieldjh opened this issue Oct 24, 2023 · 1 comment · Fixed by #251
Comments

@caufieldjh (Member):

Running an evaluation may ignore the supplied model and use the default model (gpt-3.5-turbo) instead. For example, with the following command:

```
ontogpt -vvv eval -o soup.yaml --num-tests 5 --no-chunking -m gpt-4 EvalCTD
```

The client still appears to be using the default:

```
INFO:ontogpt.clients.openai_client:Complete: engine=gpt-3.5-turbo, prompt[769]=Split the following piece of text into fields in the following format:
```

But it is also returning a cached response, so either way the specified model isn't being used.
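
For illustration, a minimal sketch of the kind of bug this describes (hypothetical names throughout, not OntoGPT's actual code): an eval entry point that accepts a model argument but never forwards it when constructing the client, so the client silently falls back to its default:

```python
DEFAULT_MODEL = "gpt-3.5-turbo"

class OpenAIClient:
    """Hypothetical stand-in for the real client; not OntoGPT's actual API."""
    def __init__(self, model: str = DEFAULT_MODEL):
        self.model = model

def run_eval(template: str, model: str, num_tests: int) -> None:
    # Bug pattern: `model` is accepted here but never forwarded, so the
    # client is constructed with its default engine.
    client = OpenAIClient()  # should be OpenAIClient(model=model)
    print(f"Complete: engine={client.model}")

run_eval("EvalCTD", model="gpt-4", num_tests=5)  # prints engine=gpt-3.5-turbo
```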

@caufieldjh (Member, Author):

Hrm, even with a fresh cache it isn't passing the value for the model param.
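
On the caching point, a hedged sketch (hypothetical, not OntoGPT's actual cache) of why a cached response can mask the model choice: if the cache is keyed on the prompt alone rather than on the (engine, prompt) pair, a hit from an earlier gpt-3.5-turbo run is returned regardless of which model is requested:

```python
_cache: dict[str, str] = {}

def call_api(engine: str, prompt: str) -> str:
    # Stub standing in for the real completion request.
    return f"response from {engine}"

def complete(prompt: str, engine: str = "gpt-3.5-turbo") -> str:
    # If the cache key omits the engine, an answer cached from an earlier
    # model shadows the requested one; keying on (engine, prompt) avoids that.
    key = prompt  # should include the engine, e.g. (engine, prompt)
    if key not in _cache:
        _cache[key] = call_api(engine, prompt)
    return _cache[key]

print(complete("Split the following text"))                 # fills the cache
print(complete("Split the following text", engine="gpt-4"))  # returns the gpt-3.5-turbo answer
```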

caufieldjh linked a pull request on Oct 25, 2023 that will close this issue.
caufieldjh added a commit that referenced this issue on Oct 25, 2023:
Fixes issue in which model param wasn't getting set properly for evaluations.
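
For completeness, a hedged sketch of the shape such a fix usually takes (hypothetical, not the actual diff in #251), reusing the hypothetical OpenAIClient from the sketch above: thread the caller's model choice through to the client instead of relying on the default:

```python
def run_eval_fixed(template: str, model: str, num_tests: int) -> None:
    # Forward the requested model when constructing the client.
    client = OpenAIClient(model=model)
    print(f"Complete: engine={client.model}")

run_eval_fixed("EvalCTD", model="gpt-4", num_tests=5)  # prints engine=gpt-4
```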