Add completeness judge #1410
base: main
Conversation
Signed-off-by: Yoav Katz <[email protected]>
},
"generic_inference_engine": {
    "model_name": "generic",
    "inference_model": (GenericInferenceEngine()),
There is now a simpler way to support multiple inference engines in the judge using one consistent API, and that's with CrossProviderInferenceEngine.
https://www.unitxt.ai/en/latest/docs/inference.html#creating-a-cross-api-engine
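The appeal of a cross-provider engine is that one set of OpenAI-style parameters drives any backend. The sketch below illustrates that idea only; the class, the model/provider names, and the mapping table are all hypothetical stand-ins, not the actual unitxt `CrossProviderInferenceEngine` API (see the linked docs for the real interface).

```python
# Illustrative sketch of the "one consistent API" idea behind a
# cross-provider engine. All names here are hypothetical, not the
# actual unitxt API.

class CrossProviderEngine:
    """Dispatch one set of OpenAI-style parameters to any provider."""

    # Hypothetical mapping from a generic model name to per-provider ids.
    MODEL_IDS = {
        "llama-3-8b-instruct": {
            "watsonx": "meta-llama/llama-3-8b-instruct",
            "openai": "meta-llama/llama-3-8b-instruct",
        },
    }

    def __init__(self, model, provider, max_tokens=256, temperature=0.0):
        # Standard OpenAI parameter names, regardless of provider.
        self.model_id = self.MODEL_IDS[model][provider]
        self.provider = provider
        self.max_tokens = max_tokens
        self.temperature = temperature

    def infer(self, prompts):
        # A real engine would call the provider's SDK here; this stub
        # just echoes the resolved configuration for illustration.
        return [f"[{self.provider}:{self.model_id}] {p}" for p in prompts]


engine = CrossProviderEngine("llama-3-8b-instruct", provider="watsonx")
print(engine.infer(["Is the response complete?"])[0])
```

Switching providers then means changing only the `provider` argument; the judge code and its sampling parameters stay untouched.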
It uses the standard OpenAI parameter names.
OK, I updated the engine to CrossProviderInferenceEngine. Please check now.
Please see these formatting errors: prepare/templates/response_assessment/judges/completeness/v5.py:6:6056: RUF001 String contains ambiguous |
Add an LLM-based judge to check for completeness: whether the response is complete with respect to the information in the document.
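The core of such a judge is a prompt template plus a verdict parser. The sketch below shows one possible shape; the template wording, function names, and Yes/No scoring scheme are illustrative assumptions, not the actual template added in this PR.

```python
# Hypothetical sketch of a completeness-judge prompt and its verdict
# parsing. The template text and scoring are illustrative only.

COMPLETENESS_TEMPLATE = (
    "You are given a document and a response.\n"
    "Document:\n{document}\n\n"
    "Response:\n{response}\n\n"
    "Does the response cover all the relevant information in the "
    "document? Answer 'Yes' or 'No'.\n"
)


def build_judge_prompt(document, response):
    # Fill the template with the instance being judged.
    return COMPLETENESS_TEMPLATE.format(document=document, response=response)


def parse_verdict(model_output):
    # Map the judge model's free-text answer to a 0/1 completeness score.
    return 1.0 if model_output.strip().lower().startswith("yes") else 0.0


prompt = build_judge_prompt(
    "Paris is the capital of France.", "Paris is the capital."
)
print(parse_verdict("Yes, it covers the key fact."))  # prints 1.0
```

In practice the rendered prompt would be sent through the inference engine discussed above, and the parsed score would feed the metric's aggregation.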