-
Hi, I am trying to install it, but I want to be able to run it with ollama as the backend for my LLM on my machine. How can we do that?
-
I found that there are too many steps for running with local LLMs, and it is still not working. This is what I did:

1. Install gptme
2. Install litellm
3. Follow the steps in the documentation
4. Try to launch gptme in the terminal

And I keep getting errors:

```
File "/Users/rubenfernandez/.pyenv/versions/3.12.7/lib/python3.12/site-packages/openai/_base_client.py", line 1014, in _request
```
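For reference, if the litellm proxy route is what the docs described at the time, a minimal sketch would look roughly like the following. The `litellm[proxy]` extra, the default proxy port 4000, and the model name passed to gptme are assumptions here, not something confirmed in this thread:

```sh
# Hypothetical litellm-proxy setup (a sketch, not the fix that ended up working).
# Assumes `ollama serve` is already running and the model has been pulled.
pip install 'litellm[proxy]'           # proxy extras provide the litellm server CLI
litellm --model ollama/llama3.2:1b     # OpenAI-compatible proxy, typically on port 4000

# In another terminal, point gptme at the proxy instead of ollama directly;
# check litellm's startup log for the actual port and served model name.
OPENAI_API_BASE="http://127.0.0.1:4000" gptme 'hello' -m local/llama3.2:1b
```

As the reply further down shows, though, the extra proxy hop turns out to be unnecessary once gptme is pointed at ollama's own port.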
-
The exact same issue here. Looks like the support for local/ollama is not implemented.
-
Looks like some default ports got changed somewhere since I last tried this. Here are the exact steps I just ran to get it working:

```sh
MODEL=llama3.2:1b
ollama pull $MODEL
ollama serve
OPENAI_API_BASE="http://127.0.0.1:11434" gptme 'hello' -m local/$MODEL
```

I updated the docs to reflect this change.
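Note that `ollama serve` blocks its terminal, so in practice it runs in a separate shell (or is already running as a background service). A quick way to confirm the server is up and the model is present before launching gptme is ollama's model-listing endpoint; this check is an addition here, not part of the original steps:

```sh
# List locally available models; a JSON response confirms ollama is
# reachable on its default port 11434 and that llama3.2:1b was pulled.
curl http://127.0.0.1:11434/api/tags
```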