minor changes to docs
anish-palakurthi committed Jun 13, 2024
1 parent 973f229 commit 4b73f67
Showing 1 changed file with 18 additions and 15 deletions.
33 changes: 18 additions & 15 deletions docs/docs/syntax/client/client.mdx
@@ -34,8 +34,8 @@ BAML ships with the following providers (you can also write your own!):
- `openai`
- `azure-openai`
- `anthropic`
- `ollama`
- `google-ai`
- `ollama`
- Composite client providers
- `fallback`
- `round-robin`
@@ -111,6 +111,21 @@ client<llm> MyClient {
}
}
```
### Google

Provider names:
- `google-ai`

Accepts any options as defined by the [Gemini SDK](https://ai.google.dev/gemini-api/docs/get-started/tutorial?lang=rest#configuration).

```rust
client<llm> MyGoogleClient {
  provider google-ai
  options {
    model "gemini-1.5-pro-001"
  }
}
```

### Ollama

@@ -135,6 +150,7 @@ client<llm> MyOllamaClient {

1. For Ollama, in your terminal run `ollama serve`
2. In another window, run `ollama run llama2` (or your model), and you should be good to go.
3. If your Ollama server is not on the default port 11434, specify the `base_url` manually, as shown below.

```rust
client<llm> MyClient {
@@ -143,26 +159,13 @@ client<llm> MyClient {
model llama2
options {
temperature 0
base_url "http://localhost:<ollama_port>" // Default is 11434
}
}
}
```
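
Before pointing a client at a non-default port, it can help to confirm the Ollama server is actually reachable. The sketch below is not part of the BAML docs; it assumes the default `http://localhost:11434` endpoint and uses Ollama's `/api/tags` route to list locally pulled models.

```python
# Minimal sketch: check that an Ollama server is reachable and llama2 is pulled.
# Assumes the default endpoint http://localhost:11434; change OLLAMA_URL if you
# started Ollama on another port.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"

# GET /api/tags returns the models available on the local Ollama server.
with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags") as resp:
    models = [m["name"] for m in json.load(resp)["models"]]

print("Ollama is up; local models:", models)
if not any(name.startswith("llama2") for name in models):
    print("llama2 not found -- run `ollama pull llama2` first.")
```

If the request fails, double-check the port you pass as `base_url` above.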

### Google

Provider names:
- `google-ai`

Accepts any options as defined by the [Gemini SDK](https://ai.google.dev/gemini-api/docs/get-started/tutorial?lang=rest).

```rust
client<llm> MyGoogleClient {
  provider google-ai
  options {
    model "gemini-1.5-pro-001"
  }
}
```
This is not the Vertex AI Gemini API, but the Google Generative AI Gemini API, which supports the same models but at a different endpoint.
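
In practice this is the same Google Generative AI API that Google's `google-generativeai` Python SDK targets (the `generativelanguage.googleapis.com` endpoint, authenticated with an AI Studio API key), rather than Vertex AI's project- and region-scoped endpoint. A minimal sketch outside of BAML, assuming the `google-generativeai` package is installed and a `GOOGLE_API_KEY` environment variable is set:

```python
# Minimal sketch of calling the Google Generative AI (AI Studio) Gemini API directly,
# outside of BAML. Assumes `pip install google-generativeai` and a GOOGLE_API_KEY
# environment variable; this hits generativelanguage.googleapis.com, not Vertex AI.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Same model name as the BAML client above.
model = genai.GenerativeModel("gemini-1.5-pro-001")
response = model.generate_content("Say hello in one short sentence.")
print(response.text)
```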


