[Feature Request] Support any OpenAI compatible endpoints by adding two flags #1008
Comments
Thanks, this is a good idea. Will take a look.
This is already possible with a config file. For the moment, can you try one of these examples and see if there are edge cases that might need to be investigated?

`openai-compat-endpoint.yaml`:

```yaml
plugins:
  generators:
    nim:
      uri: http://0.0.0.0:8000/v1
      context_len: 32768
      api_key: <enter here or in env var NIM_API_KEY>
```

This can be passed via:

```
python -m garak -m nim -n my_deployed_model_name --config openai-compat-endpoint.yaml
```

Or as JSON, `openai-compat-endpoint.json`:

```json
{
  "generators": {
    "nim": {
      "uri": "http://0.0.0.0:8000/v1",
      "context_len": 32768
    }
  }
}
```

This can be passed via:

```
python -m garak -m nim -n my_deployed_model_name --generator_option_file openai-compat-endpoint.json
```
Might be worth farming this out to a putative …
Thanks, I just tried this and it works. (It would be nice if this were explicitly documented, though.) I guess my proposal is not needed in this case? I will close the issue now.
Agree, we should have a clearer route. If you don't mind, I'll reopen this to track resolving that.
It may make sense to enhance …
Summary
Support any OpenAI-compatible endpoint, such as tabbyAPI, vLLM, Ollama, etc.
I am running Qwen2.5-Coder 32B with tabbyAPI, which is an OpenAI-compatible API server.
Here is what I did to make it work with garak (the openai generator):

1. `export OPENAI_BASE_URL="http://localhost:5000/v1"` so that the OpenAI client uses my server.
2. Pick `gpt-4-32k` as the model name (because `gpt-4-32k` is one of the supported models and is hardcoded to have 32k context, which is the same context length as Qwen2.5-Coder).
3. Run `garak --model_type openai --model_name gpt-4-32k`.
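To sanity-check this override before running garak, something like the following can be used. This is a rough sketch, assuming the openai Python client v1.x (which reads `OPENAI_BASE_URL` and `OPENAI_API_KEY` from the environment) and that the local tabbyAPI server accepts `gpt-4-32k` as a model name:

```python
import openai

# The v1 openai client picks up OPENAI_BASE_URL and OPENAI_API_KEY from
# the environment, so this request goes to the local tabbyAPI server.
client = openai.OpenAI()

resp = client.chat.completions.create(
    model="gpt-4-32k",  # alias the local server is assumed to accept
    messages=[{"role": "user", "content": "Say hello."}],
)
print(resp.choices[0].message.content)
```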
It would be nice if garak supported arbitrary OpenAI-compatible endpoints out of the box.
Basic example
I suggest adding the following logic:

- Add a `--custom_base_url` and a `--context_len` flag; the user must use either both or neither of them.
- When `--custom_base_url` is used, initialize the OpenAI client with it (see the sketch below).
- Use `--context_len` as the model's context length.
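A minimal sketch of what that client initialization might look like, assuming the openai Python client v1.x; note that `--custom_base_url` and `--context_len` are the flag names proposed here, not existing garak options:

```python
import os
import openai

def make_client(custom_base_url: str | None = None) -> openai.OpenAI:
    """Build an OpenAI client, pointed at a custom endpoint if one is given."""
    if custom_base_url:
        return openai.OpenAI(
            base_url=custom_base_url,
            api_key=os.environ.get("OPENAI_API_KEY", "dummy-key"),
        )
    # No custom endpoint: default to api.openai.com.
    return openai.OpenAI()
```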
For example:

```
OPENAI_API_KEY="sk-123XXXXXXXXXXXX" garak --model_type openai --model_name Qwen_Qwen2.5-Coder-32B-Instruct-exl2 --custom_base_url http://localhost:5000/v1 --context_len 32768
```
Motivation
There are quite a lot of OpenAI-compatible API servers out there; supporting them would cover many more use cases.
Also, I think it is more straightforward to set up (compared to the REST generator, which has a lot of manual config values).