
Feature Request: Support for free and local LLMs (ollama) #95

Open · kieran-mace opened this issue May 2, 2024 · 11 comments
Labels
enhancement New feature or request

Comments

@kieran-mace

kieran-mace commented May 2, 2024

Ollama is a fantastic tool that enables users to run freely available LLMs locally and chat with them via the command line. The list of available LLMs is updated regularly (llama3 became available this week).

My feature request is to enable the chattr app to interact with these local, open-source model instances.
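For context, a default local Ollama install also exposes an HTTP API on port 11434, which is what an R client would ultimately talk to. A minimal sketch with httr2 (the model name is an assumption; use whatever ollama list shows on your machine):

library(httr2)

# Ask the locally running Ollama server for a single, non-streamed completion.
resp <- request("http://localhost:11434/api/generate") |>
  req_body_json(list(
    model  = "llama3",        # assumed model; it must already be pulled locally
    prompt = "Say hi in one sentence.",
    stream = FALSE
  )) |>
  req_perform()

resp_body_json(resp)$response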

@kieran-mace kieran-mace changed the title Feature Request: Support for local LLMs (ollama) Feature Request: Support for free and local LLMs (ollama) May 2, 2024
@edgararuiz
Collaborator

Hi! That's actually in the works; I have a branch that already has a working prototype. I just need to document it. Feel free to try it and provide feedback 😄

# install the ollama branch of chattr
pak::pak("mlverse/chattr@ollama")
library(chattr)
# point chattr at a local Ollama instance and send a test prompt
chattr_use("ollama")
chattr("hi")

@edgararuiz edgararuiz added the enhancement New feature or request label May 2, 2024
@ManuelSpinola

Hi!

I followed your instructions but I cannot run Ollama.

pak::pak("mlverse/chattr@ollama")

ℹ No downloads are needed
✔ 1 pkg + 44 deps: kept 45 [1.1s]

chattr_use("ollama")

── chattr
• Provider: Ollama
• Path/URL: http://localhost:11434/
• Model: llama2
• Label: Ollama

chattr("hi")
Error in chattr():
! ! The 'llama2' model is not found.
Would you like to download it?
Backtrace:

  1. chattr::chattr("hi")
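One likely cause (an assumption, not something confirmed here) is that the 'llama2' model has not been pulled into the local Ollama installation yet. Pulling it once, e.g. from R, and re-running should tell:

system2("ollama", c("pull", "llama2"))   # download the model into the local Ollama store
system2("ollama", "list")                # confirm 'llama2' now appears
chattr::chattr("hi")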

@ga-it

ga-it commented Jun 18, 2024

Rather than work specifically with ollama, could you allow defining the endpoint for the OpenAI connection?

In addition to the endpoint, this should then also allow selecting the specific model to connect to and the LiteLLM API key.

I believe the OpenAI package does most of this. Inheriting it as the backend could let chattr focus on the front-end RStudio/Shiny integration.

This would allow connection to LiteLLM and, from there, a proxied connection to ollama or a variety of other LLMs.
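For illustration, this is roughly the request that any OpenAI-compatible server (LiteLLM, vLLM, or Ollama's own /v1 route) accepts, so "defining the endpoint" mostly means making the base URL, model, and API key configurable. The URL, model name, and environment variable below are placeholders, not existing chattr options:

library(httr2)

base_url <- "http://localhost:4000/v1"   # e.g. a LiteLLM proxy; placeholder
resp <- request(base_url) |>
  req_url_path_append("chat/completions") |>
  req_headers(Authorization = paste("Bearer", Sys.getenv("LITELLM_API_KEY"))) |>
  req_body_json(list(
    model = "llama3",                     # whichever model the proxy exposes
    messages = list(list(role = "user", content = "hi"))
  )) |>
  req_perform()

resp_body_json(resp)$choices[[1]]$message$content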

@ManuelSpinola

ManuelSpinola commented Jun 18, 2024 via email

@ManuelSpinola

ManuelSpinola commented Jun 18, 2024 via email

@ga-it

ga-it commented Jun 18, 2024

Apologies - to clarify, I was suggesting an alternative approach to @mlverse for the enhancement.

@ManuelSpinola

ManuelSpinola commented Jun 18, 2024 via email

@alexvorobiev

@edgararuiz Is there a roadmap to merge the branch? Thanks!

@jansoe

jansoe commented Dec 18, 2024

Rather than work specifically with ollama, could you allow defining the endpoint for the OpenAI connection?

I would really appreciate this. Our university runs an OpenAI-API-compatible server (vLLM). It would be great to use it just by setting the model endpoint URL.

@wangqc0

wangqc0 commented Feb 19, 2025

Hi! That's actually in the works; I have a branch that already has a working prototype. I just need to document it. Feel free to try it and provide feedback 😄

pak::pak("mlverse/chattr@ollama")
library(chattr)
chattr_use("ollama")
chattr("hi")

Thank you very much for sharing this! I have deployed a local model on my laptop using Ollama. I revised your "ollama.yml" file (changed the model to the one I want) and built the package on my laptop. However, when I try to use it in R, it shows:

! The '<THE MODEL I USE>' model is not found.
Would you like to download it?

1: Yes
2: No

Every time I choose "1", it pulls (modifying my local model file without actually changing anything), and then I can use it once. From the speed, I can tell that the system does not download anything. If I want to use it again (either with chattr_app() via the Viewer pane or chattr("<MY QUESTION>")), the "not found" message reappears and I have to repeat the steps above. Could you fix this issue?
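One thing worth double-checking here (only a guess at the cause, not a confirmed fix): the model name written into ollama.yml has to match what ollama itself reports exactly, including the tag, e.g. "llama3:latest" rather than "llama3". Something like the following avoids editing the yaml at all, assuming the ollama branch honors the model argument the way chattr_defaults() does on CRAN:

system2("ollama", "list")                          # shows the exact model:tag strings
chattr::chattr_use("ollama")
chattr::chattr_defaults(model = "llama3:latest")   # hypothetical name; use yours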

@liar666

liar666 commented Feb 27, 2025

Thanks for the nice work.

  • Same problem as above: I have to confirm the model download each time :(
  • It would be nice to be able to stop the generation, e.g. by having everything run in a separate thread
    (I tried a 1.5b local version of DeepSeek R1 that started "thinking" at length, which blocked my work for a loooooong time :( )
  • It would be nice to be able to change the model from the 'setup' GUI (based on ollama list output) rather than finding and editing a yaml file - a rough sketch of that is below.
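For that last point, a sketch of how the model picker could be populated (the parsing and the menu call are illustrative, not existing chattr code):

models <- system2("ollama", "list", stdout = TRUE)   # capture `ollama list` output
models <- sub("\\s.*$", "", models[-1])              # drop the header row, keep the NAME column
utils::menu(models, title = "Which local Ollama model should chattr use?")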
