
question: Is it possible to load Models from local drive? #125

Closed
Mumpitz opened this issue Nov 26, 2024 · 4 comments
Labels
question Further information is requested

Comments

Mumpitz commented Nov 26, 2024

I am looking into building an app that should run without an internet connection in isolated environments, so I need to load models locally.

I could not tell from the documentation whether this is possible through the providers. If not, do you plan on adding it?

@ThierryBleau

You can't use local models yet, but it's definitely a planned feature!

We'll probably start by integrating Burn and Candle.

@0xMochan added the question label Nov 28, 2024
@cvauclair
Contributor

Hey @Mumpitz, if you already have a local LLM implemented in Rust, you can wrap that model in a struct that implements the CompletionModel trait. You can then create agents using that struct!

E.g.:

struct MyLocalModel {
    // Your local model
}

impl rig::completion::CompletionModel for MyLocalModel {
    // TODO
}

// Create an instance of your local model
let local_model = MyLocalModel::new(...);

// Create an agent that uses your custom local model
let agent = AgentBuilder::new(local_model)
    .preamble(...)
    .tools(...)
    .build();

let response = agent.prompt(...).await?;
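
To flesh out the TODO above a bit, here is a rough sketch of what such an impl could look like. Treat it as a hand-wavy example rather than official guidance: the exact shape of the CompletionModel trait (the Response associated type, the completion signature, ModelChoice, and the CompletionRequest fields) can differ between rig versions, so check the trait definition for the version you are on. MyLocalModel's run_inference helper is a hypothetical stand-in for whatever local inference backend you actually use.

use rig::completion::{
    CompletionError, CompletionModel, CompletionRequest, CompletionResponse, ModelChoice,
};

// Hypothetical wrapper around a local inference backend (e.g. weights loaded from disk).
#[derive(Clone)]
struct MyLocalModel {
    // A handle to your local model would live here.
}

impl MyLocalModel {
    // Hypothetical helper: run the local model synchronously on a prompt string.
    fn run_inference(&self, prompt: &str) -> String {
        format!("local model output for: {prompt}")
    }
}

impl CompletionModel for MyLocalModel {
    // Raw response type produced by the local backend.
    type Response = String;

    // Signature assumed from recent rig versions; verify against the trait definition.
    async fn completion(
        &self,
        request: CompletionRequest,
    ) -> Result<CompletionResponse<Self::Response>, CompletionError> {
        // Run the local model and wrap its output in rig's completion response type.
        let text = self.run_inference(&request.prompt);
        Ok(CompletionResponse {
            choice: ModelChoice::Message(text.clone()),
            raw_response: text,
        })
    }
}

With something like that in place, the AgentBuilder snippet above should work unchanged, since the builder only needs a type that implements CompletionModel.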

However, as @ThierryBleau said, Rig does not have an out-of-the-box solution for this at the moment. Hope this helps!

awdemos commented Dec 4, 2024

Bumping this thread in the hope we can see some traction on a Llama provider.

@mateobelanger
Member

This will be addressed in a future version; it is one of our short-term priorities.
Closing this to continue the discussion in #143.
