✨ If you would like to help spread the word about Rig, please consider starring the repo!
> [!WARNING]
> Here be dragons! Rig is alpha software and will contain breaking changes as it evolves. We'll annotate them and highlight migration paths as we encounter them.
Rig is a Rust library for building scalable, modular, and ergonomic LLM-powered applications.
More information about this crate can be found in the crate documentation.
Help us improve Rig by contributing to our Feedback form.
- Full support for LLM completion and embedding workflows
- Simple but powerful common abstractions over LLM providers (e.g. OpenAI, Cohere) and vector stores (e.g. MongoDB, in-memory)
- Integrate LLMs in your app with minimal boilerplate
```bash
cargo add rig-core
```
```rust
use rig::{completion::Prompt, providers::openai};

#[tokio::main]
async fn main() {
    // Create OpenAI client and model
    // This requires the `OPENAI_API_KEY` environment variable to be set.
    let openai_client = openai::Client::from_env();

    let gpt4 = openai_client.agent("gpt-4").build();

    // Prompt the model and print its response
    let response = gpt4
        .prompt("Who are you?")
        .await
        .expect("Failed to prompt GPT-4");

    println!("GPT-4: {response}");
}
```
Note: using `#[tokio::main]` requires that you enable tokio's `macros` and `rt-multi-thread` features, or just `full` to enable all features (`cargo add tokio --features macros,rt-multi-thread`).
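Equivalently, the resulting `Cargo.toml` dependencies look roughly like this (the version numbers below are illustrative, not pinned — check crates.io for the latest releases):

```toml
[dependencies]
# Core Rig library
rig-core = "0.1"
# Async runtime; `macros` and `rt-multi-thread` are needed for #[tokio::main]
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
```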
You can find more examples in each crate's `examples` (i.e. `src/examples`) directory. More detailed use-case walkthroughs are regularly published on our Dev.to Blog.
Rig integrates with a range of model providers and vector stores.
Vector stores are available as separate companion crates:
- MongoDB vector store: `rig-mongodb`
- LanceDB vector store: `rig-lancedb`
- Neo4j vector store: `rig-neo4j`
- Qdrant vector store: `rig-qdrant`
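Each companion crate is added alongside `rig-core`. For example, to use the Qdrant store (crate name taken from the list above):

```bash
cargo add rig-core rig-qdrant
```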