Merge branch 'canary' into sam/json-schema-docs

sxlijin authored Jul 10, 2024
2 parents fcd2c43 + c0fb454 commit afb3b97

Showing 82 changed files with 1,786 additions and 972 deletions.
15 changes: 8 additions & 7 deletions .github/workflows/release.yml
@@ -1,6 +1,11 @@
name: BAML Release

on:
+  workflow_dispatch: {}
+  # need to run this periodically on the default branch to populate the build cache
+  schedule:
+    # daily at 2am PST
+    - cron: 0 10 * * *
  push:
    tags:
      - "test-release/*.*"
@@ -39,7 +44,7 @@ jobs:
          run_install: false
      # Set up Node.js
      - name: Setup Node.js
-        uses: actions/setup-node@v3
+        uses: actions/setup-node@v4
        with:
          cache: "pnpm"
          node-version: 20
@@ -137,7 +142,6 @@ jobs:
        with:
          python-version: "3.8"
          architecture: ${{ matrix._.host == 'windows-latest' && 'x64' || null }}
-      # Install node set up
      - uses: pnpm/action-setup@v3
        with:
          version: 9.0.6
@@ -163,7 +167,7 @@ jobs:
      - uses: Swatinem/rust-cache@v2
        with:
          workspaces: engine
-          shared-key: ${{ env.GITHUB_JOB }}-engine-${{ matrix._.target }}
+          shared-key: engine-${{ github.job }}-${{ matrix._.target }}
          cache-on-failure: true

      - name: Build Rust
@@ -227,9 +231,6 @@ jobs:
      NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
    steps:
      - uses: actions/checkout@v4
-      - uses: actions/setup-node@v4
-        with:
-          node-version: 20

      - name: setup pnpm
        uses: pnpm/action-setup@v3
@@ -310,7 +311,7 @@ jobs:
#           run_install: false
#       # Set up Node.js
#       - name: Setup Node.js
-#         uses: actions/setup-node@v3
+#         uses: actions/setup-node@v4
#       with:
#         cache: "pnpm"
#         node-version: 20
17 changes: 14 additions & 3 deletions CHANGELOG.md
@@ -2,12 +2,23 @@

All notable changes to this project will be documented in this file. See [conventional commits](https://www.conventionalcommits.org/) for commit guidelines.

+## [0.49.0](https://github.com/boundaryml/baml/compare/0.46.0..0.49.0) - 2024-07-08
+
+### Bug Fixes
+- Fixed Azure / Ollama clients. Removing stream_options from azure and ollama clients (#760) - ([30bf88f](https://github.com/boundaryml/baml/commit/30bf88f65c8583ab02db6a7b7db40c1e9f3b05b6)) - hellovai
+
+### Features
+- Add support for arm64-linux (#751) - ([adb8ee3](https://github.com/boundaryml/baml/commit/adb8ee3097fd386370f75b3ba179d18b952e9678)) - Samuel Lijin
+
## [0.48.0](https://github.com/boundaryml/baml/compare/0.47.0..0.48.0) - 2024-07-04

### Bug Fixes
+- Fix env variables dialog on VSCode (#750)
- Playground selects correct function after loading (#757) - ([09963a0](https://github.com/boundaryml/baml/commit/09963a02e581da9eb8f7bafd3ba812058c97f672)) - aaronvg


-### UNMATCHED
-- improve error handling when submitting logs to api (#754) - ([49c768f](https://github.com/boundaryml/baml/commit/49c768fbe8eb8023cba28b8dc68c2553d8b2318a)) - aaronvg
-- readd ts files - ([1635bf0](https://github.com/boundaryml/baml/commit/1635bf06f87a18b1f933b6c112cd38044239d5c5)) - Aaron Villalpando
+### Miscellaneous Chores
+- Better error messages on logging failures to Boundary Studio (#754) - ([49c768f](https://github.com/boundaryml/baml/commit/49c768fbe8eb8023cba28b8dc68c2553d8b2318a)) - aaronvg

## [0.47.0](https://github.com/boundaryml/baml/compare/0.46.0..0.47.0) - 2024-07-03

96 changes: 96 additions & 0 deletions docs/docs/calling-baml/client-registry.mdx
@@ -0,0 +1,96 @@
---
title: "Client Registry"
---

If you need to modify the model or parameters of an LLM client at runtime, you can pass a `ClientRegistry` to any BAML function when calling it.

<CodeGroup>

```python Python
from baml_py import ClientRegistry
from baml_client import b  # the generated BAML client

async def run():
    cr = ClientRegistry()
    # Creates a new client
    cr.add_llm_client(name='MyAmazingClient', provider='openai', options={
        "model": "gpt-4o",
        "temperature": 0.7,
        "api_key": "sk-..."
    })
    # Sets MyAmazingClient as the primary client
    cr.set_primary('MyAmazingClient')

    # ExtractResume will now use MyAmazingClient as the calling client
    res = await b.ExtractResume("...", { "client_registry": cr })
```

```typescript TypeScript
import { ClientRegistry } from '@boundaryml/baml'
import { b } from './baml_client'  // the generated BAML client

async function run() {
  const cr = new ClientRegistry()
  // Creates a new client
  cr.addLlmClient({ name: 'MyAmazingClient', provider: 'openai', options: {
    model: "gpt-4o",
    temperature: 0.7,
    api_key: "sk-..."
  }})
  // Sets MyAmazingClient as the primary client
  cr.setPrimary('MyAmazingClient')

  // ExtractResume will now use MyAmazingClient as the calling client
  const res = await b.ExtractResume("...", { clientRegistry: cr })
}
```

```ruby Ruby
Not available yet
```

</CodeGroup>

## ClientRegistry Interface
import ClientConstructorParams from '/snippets/client-params.mdx'


<Tip>
Note: `ClientRegistry` is imported from `baml_py` in Python and `@boundaryml/baml` in TypeScript, not `baml_client`.

As we mature `ClientRegistry`, we will add a more type-safe and ergonomic interface directly in `baml_client`. See [GitHub issue #766](https://github.com/BoundaryML/baml/issues/766).
</Tip>

Methods use `snake_case` in Python and `camelCase` in TypeScript.

### add_llm_client / addLlmClient
A function to add an LLM client to the registry.

<ParamField
path="name"
type="string"
required
>
The name of the client.

<Warning>
If you use the exact same name as a client defined in your .baml files, this entry overrides that client whenever the `ClientRegistry` is used.
</Warning>
</ParamField>

<ClientConstructorParams />

<ParamField path="retry_policy" type="string">
The name of a retry policy that is already defined in a .baml file. See [Retry Policies](/docs/snippets/clients/retry).
</ParamField>

### set_primary / setPrimary
Sets the client for the function to use (i.e., it replaces the `client` property specified in the function definition).

<ParamField
path="name"
type="string"
required
>
The name of the client to use.

This can be a new client that was added with `add_llm_client` or an existing client that is already in a .baml file.
</ParamField>
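
Putting the two methods together — a minimal sketch, assuming hypothetical names: `GPT4` for a client already defined in a .baml file, `MyRetryPolicy` for a retry policy defined there, and `BackupClient` for a new runtime-only client:

```python
from baml_py import ClientRegistry
from baml_client import b  # the generated BAML client

async def run_with_overrides():
    cr = ClientRegistry()

    # Register a new client that reuses a retry policy from your .baml files.
    # `MyRetryPolicy` is a hypothetical name -- substitute one you have defined.
    cr.add_llm_client(
        name='BackupClient',
        provider='openai',
        options={"model": "gpt-4o-mini", "api_key": "sk-..."},
        retry_policy="MyRetryPolicy",
    )

    # Or point the function at a client already defined in .baml files --
    # `GPT4` here is also a hypothetical client name.
    cr.set_primary('GPT4')

    # The most recent set_primary call wins: ExtractResume now uses GPT4.
    return await b.ExtractResume("...", { "client_registry": cr })
```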
2 changes: 1 addition & 1 deletion docs/docs/get-started/what-is-baml.mdx
@@ -52,7 +52,7 @@ Share your creations and ask questions in our [Discord](https://discord.gg/BTNBe
## Starter projects

- [BAML + NextJS 14 + Streaming](https://github.com/BoundaryML/baml-examples/tree/main/nextjs-starter)
-- [BAML + FastAPI + Streaming](https://github.com/BoundaryML/baml-examples/tree/main/fastapi-starter)
+- [BAML + FastAPI + Streaming](https://github.com/BoundaryML/baml-examples/tree/main/python-fastapi-starter)

## First steps
We recommend checking the examples in [PromptFiddle.com](https://promptfiddle.com). Once you're ready to start, [install the toolchain](/docs/get-started/quickstart/python) and read the [guides](/docs/calling-baml/calling-functions).
23 changes: 2 additions & 21 deletions docs/docs/snippets/clients/overview.mdx
@@ -25,30 +25,11 @@ function MakeHaiku(topic: string) -> string {

## Fields

-<ParamField path="provider" required>
-This configures which provider to use. The provider is responsible for handling the actual API calls to the LLM service. The provider is a required field.
-
-The configuration modifies the URL request BAML runtime makes.
-
-| Provider Name | Docs | Notes |
-| -------------- | -------------------------------- | ---------------------------------------------------------- |
-| `openai` | [OpenAI](providers/openai) | Anything that follows openai's API exactly |
-| `ollama` | [Ollama](providers/ollama) | Alias for an openai client but with default ollama options |
-| `azure-openai` | [Azure OpenAI](providers/azure) | |
-| `anthropic` | [Anthropic](providers/anthropic) | |
-| `google-ai` | [Google AI](providers/gemini) | |
-| `fallback` | [Fallback](fallback) | Used to chain models conditional on failures |
-| `round-robin` | [Round Robin](round-robin) | Used to load balance |
-
-</ParamField>
-
-<ParamField path="retry_policy">
-  The name of the retry policy. See [Retry
-  Policy](/docs/snippets/clients/retry).
-</ParamField>
-
-<ParamField path="options">
-  These vary per provider. Please see provider specific documentation for more
-  information. Generally they are pass through options to the POST request made
-  to the LLM.
-</ParamField>
+import ClientConstructorParams from '/snippets/client-constructor.mdx'
+
+<ClientConstructorParams />
38 changes: 37 additions & 1 deletion docs/docs/snippets/clients/providers/other.mdx
@@ -1,5 +1,5 @@
---
-title: Others (e.g. openrouter)
+title: Others (e.g. groq, openrouter)
---

Since many model providers are settling on following the OpenAI Chat API spec, the recommended way to use them is to use the `openai` provider.
@@ -26,3 +26,39 @@ client<llm> MyClient {
}
}
```

### Groq

https://groq.com - Fast AI Inference

You can use Groq's OpenAI-compatible interface with BAML.

See https://console.groq.com/docs/openai for more information.

```rust BAML
client<llm> MyClient {
provider openai
options {
base_url "https://api.groq.com/openai/v1"
api_key env.GROQ_API_KEY
model "llama3-70b-8192"
}
}
```

### Together AI

https://www.together.ai/ - The fastest cloud platform for building and running generative AI.

See https://docs.together.ai/docs/openai-api-compatibility for more information.

```rust BAML
client<llm> MyClient {
provider openai
options {
base_url "https://api.together.ai/v1"
api_key env.TOGETHER_API_KEY
model "meta-llama/Llama-3-70b-chat-hf"
}
}
```
13 changes: 9 additions & 4 deletions docs/mint.json
@@ -137,16 +137,21 @@
      ]
    },
    {
-      "group": "Calling BAML Functions",
+      "group": "Advanced BAML Snippets",
+      "pages": [
+        "docs/calling-baml/dynamic-types",
+        "docs/calling-baml/client-registry"
+      ]
+    },
+    {
+      "group": "BAML with Python/TS/Ruby",
      "pages": [
        "docs/calling-baml/generate-baml-client",
        "docs/calling-baml/set-env-vars",
        "docs/calling-baml/calling-functions",
        "docs/calling-baml/streaming",
        "docs/calling-baml/concurrent-calls",
-        "docs/calling-baml/multi-modal",
-        "docs/calling-baml/dynamic-types",
-        "docs/calling-baml/dynamic-clients"
+        "docs/calling-baml/multi-modal"
      ]
    },
    {
23 changes: 23 additions & 0 deletions docs/snippets/client-constructor.mdx
@@ -0,0 +1,23 @@
<ParamField path="provider" type="string" required>
This configures which provider to use. The provider is responsible for handling the actual API calls to the LLM service. The provider is a required field.

The configuration modifies the URL request BAML runtime makes.

| Provider Name | Docs | Notes |
| -------------- | -------------------------------- | ---------------------------------------------------------- |
| `openai` | [OpenAI](/docs/snippets/clients/providers/openai) | Anything that follows openai's API exactly |
| `ollama` | [Ollama](/docs/snippets/clients/providers/ollama) | Alias for an openai client but with default ollama options |
| `azure-openai` | [Azure OpenAI](/docs/snippets/clients/providers/azure) | |
| `anthropic` | [Anthropic](/docs/snippets/clients/providers/anthropic) | |
| `google-ai` | [Google AI](/docs/snippets/clients/providers/gemini) | |
| `fallback` | [Fallback](/docs/snippets/clients/fallback) | Used to chain models conditional on failures |
| `round-robin` | [Round Robin](/docs/snippets/clients/round-robin) | Used to load balance |

</ParamField>

<ParamField path="options" type="dict[str, Any]" required>
These vary per provider. Please see the provider-specific documentation for more
information. Generally they are pass-through options for the POST request made
to the LLM.
</ParamField>
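
These constructor fields line up with the parameters that `ClientRegistry.add_llm_client` accepts at runtime (see the Client Registry page) — a minimal Python sketch under that assumption, where the client name and option values are illustrative:

```python
from baml_py import ClientRegistry

# `provider` selects the API shape (one of the providers in the table above);
# `options` is passed through to the provider's POST request.
cr = ClientRegistry()
cr.add_llm_client(
    name='DocsExampleClient',   # hypothetical client name
    provider='openai',
    options={
        "model": "gpt-4o",
        "api_key": "sk-...",    # typically read from an env var instead
    },
)
cr.set_primary('DocsExampleClient')
```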

