
How to use with local models like llama3.1 70b running on ollama #178

Closed · Answered by ErikBjare
Gimel12 asked this question in Q&A

Looks like some default ports got changed somewhere since I last tried this.

Here are the exact steps I just ran to get it working:

MODEL=llama3.2:1b
ollama pull $MODEL    # download the model
ollama serve          # start the server (runs in the foreground)
OPENAI_API_BASE="http://127.0.0.1:11434" gptme 'hello' -m local/$MODEL

I've updated the docs to reflect this change.
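
Since ollama serve runs in the foreground, in practice you'll want it in a separate terminal or backgrounded before launching gptme. A minimal sketch of that flow (assuming ollama's /api/tags endpoint for listing pulled models, and the same default port as above):

# start the server in the background (or leave `ollama serve` running in another terminal)
ollama serve &

# check that the server is reachable and the model was pulled
curl -s http://127.0.0.1:11434/api/tags

# export the base URL once so it applies to every gptme invocation in this shell
export OPENAI_API_BASE="http://127.0.0.1:11434"
gptme 'hello' -m local/$MODEL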
