Local OpenAI-compatible inference is already supported, including Ollama and LM Studio. Would it also make sense to support Ollama's native API? If so, would this be a new CompletionModel, or are these classified more as Tools?
Yes, it makes a lot of sense, as noted in #147! It would be a new provider that directly implements Ollama's API. Feel free to take this on as part of your existing PR, or create a new issue and PR to track it!
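A minimal sketch of what such a provider's request path could look like, assuming Ollama's native `/api/chat` endpoint with `reqwest` and `serde`; the `OllamaClient` type and `chat` method here are illustrative placeholders, not the crate's actual `CompletionModel` trait:

```rust
use serde::{Deserialize, Serialize};

// Request/response shapes for Ollama's native /api/chat endpoint.
#[derive(Serialize)]
struct ChatMessage {
    role: String,
    content: String,
}

#[derive(Serialize)]
struct ChatRequest {
    model: String,
    messages: Vec<ChatMessage>,
    stream: bool,
}

#[derive(Deserialize)]
struct ResponseMessage {
    content: String,
}

#[derive(Deserialize)]
struct ChatResponse {
    message: ResponseMessage,
}

// Hypothetical provider type; the real integration would implement
// the crate's CompletionModel trait rather than this ad-hoc method.
struct OllamaClient {
    base_url: String, // typically "http://localhost:11434"
    http: reqwest::Client,
}

impl OllamaClient {
    async fn chat(&self, model: &str, prompt: &str) -> Result<String, reqwest::Error> {
        let body = ChatRequest {
            model: model.to_string(),
            messages: vec![ChatMessage {
                role: "user".to_string(),
                content: prompt.to_string(),
            }],
            stream: false,
        };
        // Non-streaming call; the response carries the assistant message directly.
        let resp: ChatResponse = self
            .http
            .post(format!("{}/api/chat", self.base_url))
            .json(&body)
            .send()
            .await?
            .json()
            .await?;
        Ok(resp.message.content)
    }
}
```

Wiring this into the existing provider abstraction (streaming, tool calls, error mapping) would follow whatever conventions the other providers in the crate already use.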
As requested in the question "Is it possible to load Models from local drive?" (#125).
Feature Request
Motivation
We want to support local execution of LLMs, starting with Ollama.
Proposal
Alternatives