question: Is it possible to load Models from local drive? #125
Comments
You can't use local models yet, but it's definitely a planned feature! We'll probably start by integrating Burn and Candle.
Hey @Mumpitz, if you already have a local LLM implemented in Rust, you can wrap that model in a struct which implements the `rig::completion::CompletionModel` trait. E.g.:

```rust
struct MyLocalModel {
    // Your local model
}

impl rig::completion::CompletionModel for MyLocalModel {
    // TODO
}

// Create an instance of your local model
let local_model = MyLocalModel::new(...);

// Create an agent that uses your custom local model
let agent = AgentBuilder::new(local_model)
    .preamble(...)
    .tools(...)
    .build();

let response = agent.prompt(...).await?;
```

However, as @ThierryBleau said, Rig does not have an out-of-the-box solution that supports this at the moment. Hope this helps!
Bumping this thread in the hope we can see some traction on a Llama provider.
Will be addressed in a future version. This is one of our short-term priorities.
I am looking into building an app that should run even without an internet connection, in isolated environments.
Therefore I need to load models locally.
I could not tell exactly whether this will be possible through the existing providers.
If not, do you plan on adding this?