
[Feature Request] Support any OpenAI compatible endpoints by adding two flags #1008

Open
regunakyle opened this issue Nov 18, 2024 · 6 comments · May be fixed by #1021
Labels: architecture (Architectural upgrades), generators (Interfaces with LLMs)

Comments

@regunakyle

Summary

Support any OpenAI-compatible endpoint, such as tabbyAPI, vLLM, ollama, etc.

I am running Qwen2.5-coder 32B with tabbyAPI, which is an OpenAI-compatible API server.

Here is what I did to make it work with garak (openai generator):

  1. export OPENAI_BASE_URL="http://localhost:5000/v1" so that the OpenAI client uses my server (see the sketch after this list)
  2. Set the model name for my Qwen2.5 deployment to gpt-4-32k (gpt-4-32k is one of the supported models and is hardcoded to a 32k context, which matches Qwen2.5-coder's context length)
  3. Run garak with garak --model_type openai --model_name gpt-4-32k
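
For reference, a minimal sketch of why step 1 works: the official openai Python client falls back to the OPENAI_BASE_URL environment variable when no base_url argument is given, so every request goes to the local server instead of api.openai.com. Model name, key, and port are the example values from this issue.

import os

import openai

# The openai client reads OPENAI_BASE_URL when base_url is not
# passed explicitly, so this redirects all requests.
os.environ["OPENAI_BASE_URL"] = "http://localhost:5000/v1"

client = openai.OpenAI(api_key="sk-123XXXXXXXXXXXX")  # any key the local server accepts

# Sent to the local tabbyAPI server, even though the model name is an
# OpenAI one (the 32k-context workaround from step 2).
response = client.chat.completions.create(
    model="gpt-4-32k",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)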

It would be nice if garak supported arbitrary OpenAI-compatible models out of the box.

Basic example

I suggest adding the following logic:

  1. Add --custom_base_url and --context_len flags; the user must use either both or neither.
  2. If --custom_base_url is set, initialize the OpenAI client with it. Something like this:

# ...
self.client = openai.OpenAI(api_key=self.api_key, base_url=custom_base_url)
# ...

  3. Set the context length to the value of --context_len.
  4. The user runs garak with:

OPENAI_API_KEY=<API key> garak --model_type openai --model_name <model name> --custom_base_url <custom_base_url> --context_len <context_len>

For example:

OPENAI_API_KEY="sk-123XXXXXXXXXXXX" garak --model_type openai --model_name Qwen_Qwen2.5-Coder-32B-Instruct-exl2 --custom_base_url http://localhost:5000/v1 --context_len 32768

Motivation

There are quite a lot of OpenAI-compatible API servers out there; supporting them would cover many more use cases.
Also, I think this would be more straightforward to set up (compared to the REST generator, which has a lot of manual config values).

@regunakyle added the architecture (Architectural upgrades) label Nov 18, 2024
@leondz added the generators (Interfaces with LLMs) label Nov 18, 2024
@leondz
Collaborator

leondz commented Nov 18, 2024

Thanks, this is a good idea. Will take a look.

@jmartin-tech
Collaborator

jmartin-tech commented Nov 18, 2024

This is already possible with nim generators: since NIMs are published as OpenAI-compatible service containers, you can pass garak a config that provides a uri targeting any OpenAI-client-compatible endpoint. Promoting OpenAICompatible to a generic generator, however, may be a more straightforward and accessible pattern.

For the moment, can you try one of these examples and see if there are edge cases that might need to be investigated?

openai-compat-endpoint.yaml

plugins:
  generators:
    nim:
      uri: http://0.0.0.0:8000/v1
      context_len: 32768
      api_key: <enter here or in env var NIM_API_KEY>

This can be passed via --config:

python -m garak -m nim -n my_deployed_model_name --config openai-compat-endpoint.yaml

Or as JSON, openai-compat-endpoint.json:

{
  "generators": {
    "nim": {
      "uri": "http://0.0.0.0:8000/v1",
      "context_len": 32768
    }
  }
}

This can be passed via --generator_option_file:

python -m garak -m nim -n my_deployed_model_name --generator_option_file openai-compat-endpoint.json
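
Before wiring this into a garak run, a direct smoke test of the endpoint with the openai client can surface edge cases early. A sketch, using the uri and model name from the examples above (the key is whatever the server accepts):

import openai

# Same endpoint as the nim config above.
client = openai.OpenAI(
    base_url="http://0.0.0.0:8000/v1",
    api_key="placeholder",  # many local servers accept any non-empty key
)

# List the models the server exposes, then try a one-shot completion.
for model in client.models.list():
    print(model.id)

response = client.chat.completions.create(
    model="my_deployed_model_name",
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=8,
)
print(response.choices[0].message.content)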

@leondz
Collaborator

leondz commented Nov 19, 2024

Might be worth farming this out to a putative openai.Compatible that requires an endpoint uri
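
Something like this, perhaps (purely a sketch; the constructor and its parameters are hypothetical, not garak's current API):

import openai

class Compatible:
    # Hypothetical openai.Compatible generator: same client logic as the
    # existing OpenAI generator, but no model allow-list and a required uri.

    def __init__(self, name, uri=None, context_len=None, api_key=None):
        if uri is None:
            raise ValueError("openai.Compatible requires an endpoint uri")
        self.name = name
        self.context_len = context_len  # caller-supplied; nothing to look up
        self.client = openai.OpenAI(api_key=api_key, base_url=uri)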

@regunakyle
Author

regunakyle commented Nov 19, 2024

> This is already possible with nim generators: since NIMs are published as OpenAI-compatible service containers, you can pass garak a config that provides a uri targeting any OpenAI-client-compatible endpoint. […]
>
> For the moment, can you try one of these examples and see if there are edge cases that might need to be investigated?

Thanks, I just tried this and it works. (It would be nice if this were explicitly documented, though.)

I guess my proposal is not needed in this case? I will close the issue now.

@leondz
Collaborator

leondz commented Nov 20, 2024

> It would be nice if this were explicitly documented, though

Agree, we should have a clearer route. If you don't mind, I'll reopen this to track resolving that.

@leondz reopened this Nov 20, 2024
@jmartin-tech
Collaborator

It may make sense to enhance OpenAICompatible into a fully functional standalone generator, with exposed params that expect a uri to be provided.
