Integration with Ollama? #221
Ollama recently added an OpenAI-compatible API that lets you interface with any of its models using the OpenAI API schema. With ollama running in the background, I tested the endpoint with:

```shell
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama2",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hello!"
      }
    ]
  }'
```
{"id":"chatcmpl-920","object":"chat.completion","created":1708966675,"model":"llama2","system_fingerprint":"fp_ollama","choices":[{"index":0,"message":{"role":"assistant","content":"Hello there! It's nice to meet you. How may I assist you today? Is there something you need help with or a question you have? Feel free to ask me anything! 😊"},"finish_reason":"stop"}],"usage":{"prompt_tokens":27,"completion_tokens":45,"total_tokens":72}} However, when attempting to update the endpoint in settings in # Aliases and endpoints for OpenAI compatible REST API.
apis:
openai:
# base-url: https://api.openai.com/v1
base-url: http://localhost:11434/v1
api-key: "ignored"
api-key-env: OPENAI_API_KEY
models:
# ... I get the following response from > mods
ERROR Missing model 'gpt-3.5' for API ''.
error, status code: 404, message: model 'gpt-3.5' not found, try pulling it first I appear to be doing something wrong witht the |
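As a sanity check, `ollama list` prints the models the local server has pulled; the NAME column is what the API expects in the `model` field:

```shell
# Show locally available Ollama models; any model name requested
# via the OpenAI-compatible API must appear in this list.
ollama list
```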
Replies: 1 comment
Ignore me, I've answered my own question. It seems that you only need to add an entry for the correct model name (in this case, `llama2`). The following is a workable config:
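A sketch of that config, assuming the field names from the stock mods `settings.yml` (the `max-input-chars` value is a placeholder; tune it to the model's context window):

```yaml
# Register the local Ollama server as an OpenAI-compatible API
# and list llama2 under the name Ollama actually serves it as.
default-model: llama2
apis:
  ollama:
    base-url: http://localhost:11434/v1
    models:
      llama2:
        aliases: ["llama2"]
        max-input-chars: 4096
```

With that in place, `mods --model llama2 "Hello!"` (or just `mods "Hello!"`, thanks to `default-model`) goes to the local server instead of OpenAI.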
I'm going to have a bit more of a play around with this tonight.