diff --git a/docs/docs/guides/boundary_studio/tracing-tagging.mdx b/docs/docs/guides/boundary_studio/tracing-tagging.mdx
index b47bb392f..1d8c25b35 100644
--- a/docs/docs/guides/boundary_studio/tracing-tagging.mdx
+++ b/docs/docs/guides/boundary_studio/tracing-tagging.mdx
@@ -2,10 +2,6 @@
title: "Tracing and tagging functions"
---
-
- TypeScript function tracing in Boundary Studio is also available if you setup the environment keys, but more advanced features like the @trace decorator are currently available only on Python. Contact us if you'd like to enable this in TypeScript.
-
-
BAML allows you to trace any function with the **@trace** decorator.
This will make the function's input and output show up in the Boundary dashboard. This works for any Python function you define yourself. BAML LLM functions (or any other function declared in a .baml file) are already traced by default. Logs are only sent to the dashboard if you set up your environment variables correctly.
@@ -20,10 +16,8 @@ Make sure you also Ctrl+S a .baml file to generate the `baml_client`
In the example below, we trace each of the two functions `pre_process_text` and `full_analysis`:
```python
-import pytest
-from baml_client.testing import baml_test
from baml_client import baml
-from baml_client.baml_types import Book, AuthorInfo
+from baml_client.types import Book, AuthorInfo
from baml_client.tracing import trace
# You can also add a custom name with trace(name="my_custom_name")
@@ -42,19 +36,18 @@ async def full_analysis(book: Book):
return book_analysis
-@baml_test
-class TestBookAnalysis:
- async def test_book1(self):
- content = """Before I could reply that he [Gatsby] was my neighbor...
- """
- processed_content = await pre_process_text(content)
- return await full_analysis(
- Book(
- title="The Great Gatsby",
- author=AuthorInfo(firstName="F. Scott", lastName="Fitzgerald"),
- content=processed_content,
- ),
- )
+@trace
+async def test_book1():
+ content = """Before I could reply that he [Gatsby] was my neighbor...
+ """
+ processed_content = await pre_process_text(content)
+ return await full_analysis(
+ Book(
+ title="The Great Gatsby",
+ author=AuthorInfo(firstName="F. Scott", lastName="Fitzgerald"),
+ content=processed_content,
+ ),
+ )
```
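+
+To attach metadata to a trace (the "tagging" in this page's title), you can set tags from inside any traced function. Below is a minimal sketch, assuming `set_tags` is exported from `baml_client.tracing` alongside `trace` -- check your generated client for the exact export:
+
+```python
+from baml_client.tracing import trace, set_tags
+
+@trace
+async def pre_process_text(text: str) -> str:
+    # Tags show up on this function's trace in Boundary Studio.
+    set_tags(user_id="user-123")
+    return text.replace("\n", " ")
+```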
diff --git a/docs/docs/guides/hello_world/baml-project-structure.mdx b/docs/docs/guides/hello_world/baml-project-structure.mdx
index 4598d6157..84643385d 100644
--- a/docs/docs/guides/hello_world/baml-project-structure.mdx
+++ b/docs/docs/guides/hello_world/baml-project-structure.mdx
@@ -8,12 +8,8 @@ your application, depending on the generators configured in your `main.baml`:
```rust main.baml
generator MyGenerator {
- language "python"
- // This is where the generated baml-client will be written to
- project_root "../"
- test_command "poetry run python -m pytest"
- install_command "poetry add baml@latest"
- package_version_command "poetry show baml"
+ output_type typescript
+ output_dir "../"
}
```
@@ -22,20 +18,13 @@ Here is the typical project structure:
```bash
.
├── baml_client/ # Generated code
-├── baml_src/ # Prompts live here
-│ ├── __tests__/ # Tests loaded by playground
-│ │ └── YourAIFunction/
-│ │ └── test_1.json
-│ ├── main.baml
-│ ├── any_directory/
-│ │ └── baz.baml
+├── baml_src/ # Prompts and baml tests live here
│ └── foo.baml
# The rest of your project (not generated nor used by BAML)
├── app/
│ ├── __init__.py
│ └── main.py
-├── pyproject.toml
-└── poetry.lock
+└── pyproject.toml
```
@@ -45,7 +34,7 @@ function declarations, prompts, retry policies, etc. It also contains
transpile your BAML code.
2. `baml_client/` is where the BAML compiler will generate code for you,
-based on the types and functions you define in your BAML code.
+based on the types and functions you define in your BAML code. Here's how you'd access the generated functions from baml_client:
```python Python
@@ -64,9 +53,6 @@ const use_llm_for_task = async () => {
```
-3. `baml_src/__tests__/**/*.json` is where your test inputs live. The VSCode
-extension allows you to load, delete, create, and run any of these these inputs.
-You can also use these tests ou like. See [here](/docs/syntax/function-testing) for more information.
**You should never edit any files inside the baml_client directory** as the whole
@@ -77,8 +63,7 @@ You can also use these tests ou like. See [here](/docs/syntax/function-testing)
If you ever run into any issues with the generated code (like merge
conflicts), you can always delete the `baml_client` directory and it will get
- regenerated automatically once you fix any other conflicts in your `.baml`
- files.
+ regenerated automatically on save.
### imports
diff --git a/docs/docs/guides/hello_world/testing-ai-functions.mdx b/docs/docs/guides/hello_world/testing-ai-functions.mdx
index ceb283d30..7fac24b87 100644
--- a/docs/docs/guides/hello_world/testing-ai-functions.mdx
+++ b/docs/docs/guides/hello_world/testing-ai-functions.mdx
@@ -3,68 +3,20 @@ title: "Testing AI functions"
---
-## Overview
-
One important way to ensure your AI functions are working as expected is to write unit tests. This is especially important when you're working with AI functions that are used in production, or when you're working with a team.
-You have two options for adding / running tests:
-
-- Using the Playground
-- Using the BAML CLI
-
-## Using the Playground
-
-The playground allows a type-safe interface for creating tests along with running them.
-Under the hood, the playground runs `baml test` for you and writes the test files to the `__tests__` folder (see below).
-
-
-
-At the end of this video notice that the test runs are all saved into **Boundary Studio**, our analytics and observability platform, and can be accessed from within VSCode. Check out the [installation](/docs/home/installation) steps to set it up!
-
-## Run tests using the BAML CLI
-
-To understand how to run tests using the CLI, you'll need to understand how BAML projects are structured.
-
-```bash
-.
-├── baml_src/ # Where you write BAML files
-│ ├── __tests__/ # Where you write tests
-│ │ ├── YourAIFunction/ # A folder for each AI function
-│ │ │ └── test_name_cricket.json
-│ │ └── YourAIFunction2/ # Another folder for another AI function
-│ │ └── test_name_jellyfish.json
-```
-
-To run tests, you'll need to run `baml test` from the root of your project. This will run all tests in the `__tests__` folder.
+To test functions:
+1. Install the VSCode extension
+2. Create a test in any .baml file:
+```rust
+test MyTest {
+ functions [ExtractResume]
+ args {
+ resume_text "hello"
+ }
+}
-```bash
-# This will LIST all tests found in the __tests__ folder
-$ baml test
-# This will RUN all tests found in the __tests__ folder
-$ baml test run
```
+3. Run the test in the VSCode extension!
-You can also run tests for a specific AI function by passing the `-i` flag.
-
-```bash
-# This will list all tests found in the __tests__/YourAIFunction folder
-$ baml test -i "YourAIFunction:"
-
-# This will run all tests found in the __tests__/YourAIFunction folder
-$ baml test run -i "YourAIFunction:" -i "YourAIFunction2:"
-```
-
-For more filters on the `baml test` command, run `baml test --help`.
-
-## Evaluating test results
-
-`baml test` and the Playground UI (which uses baml test under the hood) don't have a way to evaluate results other than by manual inspection at the moment.
-
-If you want to add **asserts** or **expectations** on the output, you can declare your tests programmatically using our `pytest` plugin to do so.
-See [this tutorial](/docs/syntax/testing/evaluators)
\ No newline at end of file
+We have more capabilities like assertions coming soon!
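+
+In the meantime, if you need assertions today, you can call the generated function directly from pytest -- a minimal sketch, assuming an `ExtractResume` BAML function and `pytest-asyncio` installed:
+
+```python
+import pytest
+from baml_client import baml as b
+
+@pytest.mark.asyncio
+async def test_extract_resume():
+    # Calls the LLM through the generated client, same as the playground test above.
+    resume = await b.ExtractResume("John Doe, Python, UC Berkeley, B.S. CS 2020")
+    assert resume.name == "John Doe"
+```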
\ No newline at end of file
diff --git a/docs/docs/guides/hello_world/writing-ai-functions.mdx b/docs/docs/guides/hello_world/writing-ai-functions.mdx
index 4848e3a96..e3584ab5f 100644
--- a/docs/docs/guides/hello_world/writing-ai-functions.mdx
+++ b/docs/docs/guides/hello_world/writing-ai-functions.mdx
@@ -5,10 +5,10 @@ title: "BAML AI functions in 2 minutes"
### Pre-requisites
-Follow the [installation](/v3/home/installation) instructions and run **baml init** in a new project.
+Follow the [installation](/v3/home/installation) instructions.
-The starting project structure will look something like this:
-
+{/* The starting project structure will look something like this: */}
+{/* */}
## Overview
@@ -25,11 +25,18 @@ and agents.
The best way to learn BAML is to run an example in our web playground -- [PromptFiddle.com](https://promptfiddle.com).
-But at a high-level, BAML is simple to use -- prompts are built using [Jinja syntax](https://jinja.palletsprojects.com/en/3.1.x/) to make working with strings easier. We extended jinja to add type-support, and static analysis of your template variables, and a realtime side-by-side playground for VSCode.
+But at a high-level, BAML is simple to use -- prompts are built using [Jinja syntax](https://jinja.palletsprojects.com/en/3.1.x/) to make working with strings easier. We extended Jinja with type support and static analysis of your template variables, and the BAML VSCode extension gives you a real-time preview of your prompts, no matter how much logic they use.
-We'll write out an example from PromptFiddle here:
+Here's an example from PromptFiddle:
```rust baml_src/main.baml
+client GPT4Turbo {
+ provider openai
+ options {
+ model gpt-4-turbo
+ api_key env.OPENAI_API_KEY
+ }
+}
// Declare the Resume type we want the AI function to return
class Resume {
name string
@@ -71,14 +78,14 @@ All your types become Pydantic models in Python, or type definitions in Typescri
## 2. Usage in Python or TypeScript
-Our VSCode extension automatically generates a **baml_client** in your language of choice.
+Our VSCode extension automatically generates a **baml_client** in your language of choice. (Click the tabs below for Python or TypeScript.)
```python Python
from baml_client import baml as b
# BAML types get converted to Pydantic models
-from baml_client.baml_types import Resume
+from baml_client.types import Resume
import asyncio
async def main():
diff --git a/docs/docs/guides/streaming/streaming.mdx b/docs/docs/guides/streaming/streaming.mdx
index 7b5ab6fd1..c96e4d8d2 100644
--- a/docs/docs/guides/streaming/streaming.mdx
+++ b/docs/docs/guides/streaming/streaming.mdx
@@ -2,10 +2,6 @@
title: "Streaming structured data"
---
-
- TypeScript support for streaming is still in closed alpha - please contact us if you would like to use it!
-
-
### Streaming partial objects
The following returns an object that slowly gets filled in as the response comes in. This is useful if you want to start processing the response before it's fully complete.
You can stream anything from a `string` output type to a complex object.
@@ -18,46 +14,33 @@ Example:
{"prop1": "hello how are you", "prop2": "I'm good, how are you?", "prop3": "I'm doing great, thanks for asking!"}
```
-```python
-async def main():
- async with baml.MyFunction.stream(MyInput(...)) as stream:
- async for output in stream.parsed_stream:
-
- if output.is_parseable:
- assert output.parsed.my_property is not None
- print("my property is present", output.parsed.my_property)
- print(f"streaming: {output.parsed.model_dump_json()}")
-
- # You can also get the current delta. This will always be present.
- print(f"streaming: {output.delta}")
-
- final_output = await stream.get_final_response()
- if final_output.has_value:
- print(f"final response: {final_output.value}")
- else:
- # A deserialization error likely occurred.
- print(f"final resopnse didnt have a value")
+### Python
+```python FastAPI
+from fastapi import FastAPI
+from fastapi.responses import StreamingResponse
+from baml_client import b
+
+app = FastAPI()
+
+@app.get("/extract_resume")
+async def extract_resume(resume_text: str):
+    async def stream_resume(resume_text: str):
+        # b.stream exposes a streaming version of every BAML function.
+        stream = b.stream.ExtractResume(resume_text)
+        async for chunk in stream:
+            # Each chunk is a partially-filled result object.
+            yield str(chunk.model_dump_json()) + "\n"
+
+    return StreamingResponse(stream_resume(resume_text), media_type="text/plain")
```
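+
+Outside of a web framework, you can consume the same stream directly. A minimal sketch, assuming an `ExtractResume` BAML function and that the Python stream exposes `get_final_response()` (mirroring the TypeScript `getFinalResponse()` below):
+
+```python
+import asyncio
+from baml_client import b
+
+async def main():
+    stream = b.stream.ExtractResume("Jason Doe, Python, Rust, UC Berkeley, B.S. CS 2020")
+    async for partial in stream:
+        # Each partial is the object parsed so far; unfilled fields are still None.
+        print(partial.model_dump_json())
+    # The final response is the fully-parsed, validated object.
+    final = await stream.get_final_response()
+    print(final)
+
+asyncio.run(main())
+```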
-You can also get the deltas from the `output` using `output.delta`
-### Stream a specific impl
-The following returns a stream of a specific impl. This is useful if you want to process the response as it comes in, but don't want to deal with the object being partially filled in.
-```python
-async def main():
- async with baml.MyFunction.get_impl("v1").stream(...) as stream:
- async for chunk in stream.parsed_stream:
- print(f"streaming: {chunk.delta}")
+### TypeScript
+```typescript
+import { b } from '../baml_client'; // or whatever path baml_client is in
- final_output = await stream.get_final_response()
- if final_output.has_value:
- print(f"final response: {final_output.value}")
-```
+export async function streamText() {
+ const stream = b.stream.MyFunction(MyInput(...));
+ for await (const output of stream) {
+    console.log(`streaming: ${output}`); // output is a partial version of my function's return type
+ }
+ const finalOutput = await stream.getFinalResponse();
+ console.log(`final response: ${finalOutput}`);
+}
+```
-### Caveats
-Not supported with:
-1. Fallback clients
-2. Retry policies (it may work but there may be unknown behaviors)
-4. Union types
-5. TypeScript (still in progress)
diff --git a/docs/docs/guides/testing/advanced_testing_guide.mdx b/docs/docs/guides/testing/advanced_testing_guide.mdx
deleted file mode 100644
index 481191a9a..000000000
--- a/docs/docs/guides/testing/advanced_testing_guide.mdx
+++ /dev/null
@@ -1,180 +0,0 @@
-
-
- TypeScript support for testing is still in closed alpha - please contact us if you would like to use it!
-
-
-### Common pytest issues
-
- Make sure your test file, the Test class AND/or the test function is prefixed with `Test` or `test` respectively. Otherwise, pytest will not pick up your tests. E.g. `test_foo.py`, `TestFoo`, `test_foo`
-
-
-
- Make sure you are running these commands from your python virtual environment
- (or **`poetry shell`** if you use poetry).
-
-
-
-**No module named `baml_lib`**.
-Try running `poetry run python -m pytest -m baml_test` instead if you are using poetry. Double check you are in a poetry shell and that there is a `baml_client` and `baml` dependency in your project
-
-### Helpful Pytest commands
-
-
-```bash
-# From your project root
-# Lists all tests with the baml_test marker
-pytest -m baml_test --collect-only
-```
-
-```bash
-# From your project root
-# Runs all tests
-# For every function, for every impl
-pytest -m baml_test
-```
-
-To run tests for a subdirectory
-
-```bash
-# From your project root
-# Note the underscore at the end of the folder name
-pytest -m baml_test ./your-tests-folder/
-```
-
-To run tests that match a specific name
-E.g. if your test is called "test_thing_123", the following command will run this test:
-
-```bash
-# From your project root
-pytest -m baml_test -k thing
-```
-
-You can read more about the `-k` arg of pytest here ([PyTest Docs](https://docs.pytest.org/en/latest/example/markers.html#using-k-expr-to-select-tests-based-on-their-name))
-
-`-k` will match any tests with that given name.
-
-To run a specific test case in a test group
-
-```bash
-# From your project root
-pytest -m baml_test -k 'test_group_name and test_case_name'
-```
-
-### Testing multiple impls using fixtures (advanced)
-We automatically export a pytest fixture for each of your defined functions that will automatically convert a test function into N test functions, where N is the number of impls you have defined for that function.
-
-Instead of writing:
-```python
-@baml_test
-async def test_impl1_foo():
- assert b.Foo.get_impl("v1").run(...)
-
-@baml_test
-async def test_impl2_foo():
- assert b.Foo.get_impl("v2").run(...)
-```
-
-You can import the fixture in:
-
-```python Test Function
-# Import your baml-generated functions
-from baml_client import baml as b
-# Import any custom types defined in .baml files
-from baml_client.baml_types import Sentiment, IClassifySentiment
-
-# This automatically generates a test case for each impl
-# of ClassifySentiment.
-@b.ClassifySentiment.test
-async def test_happy_user(ClassifySentimentImpl: IClassifySentiment):
- # Note that the parameter name "ClassifySentimentImpl"
- # must match the name of the function you're testing
- response = await ClassifySentimentImpl("I am ecstatic")
- assert response == Sentiment.POSITIVE
-```
-
-### Grouping tests
-You can also group tests in test classes. We won't use the impl-specific fixture in this case for simplicity.
-The dashboard will show you grouped tests as well if you use this method, which can be handy for organizing your tests.
-
-
-```python
-@baml_test
-class TestClassifySentiment:
- async def test_happy_user(self):
- response = await b.ClassifySentiment("I am ecstatic")
- assert response == Sentiment.POSITIVE
-
- async def test_sad_user(self):
- response = await b.ClassifySentiment("I am sad")
- assert response == Sentiment.NEGATIVE
-```
-
-
-The class name must start with "Test" or it won't get picked up by Pytest.
-
-
-## Parameterization
-You can also parameterize your tests. This is useful if you want to test a function with a variety of inputs.
-
-The parameters to the parametrize annotation indicate the name of the arguments sent into the test.
-
-```python
-...
-import pytest
-
-@baml_test
-class TestClassifySentiment:
- @pytest.mark.parametrize(
- "input, expected",
- [
- ("I am ecstatic", Sentiment.POSITIVE),
- ("I am sad", Sentiment.NEGATIVE),
- ],
- )
- # Note the name of the args matches what we defined in the parametrize annotation. The first arg is "self" since this is inside a class.
- async def test_sentiment(self, input, expected):
- response = await b.ClassifySentiment(input)
- assert response == expected
-```
-Or alternatively, you can group things by sentiment. You dont need to use a class.
-
-```python
-import pytest
-from baml_client.testing import baml_test
-from baml_client import baml as b
-from baml_client.baml_types import Sentiment
-
-@baml_test
-@pytest.mark.asyncio
-@pytest.mark.parametrize(
- "input",
- [
- "I am ecstatic",
- "I am super happy!",
- "I am thrilled",
- "I am overjoyed",
- ],
-)
-async def test_happy_sentiments(input):
- response = await b.ClassifySentiment(input)
- assert response == Sentiment.POSITIVE
-
-@baml_test
-@pytest.mark.asyncio
-@pytest.mark.parametrize(
- "input",
- [
- "I am sad",
- "I am angry",
- "I am upset",
- "I am frustrated",
- ],
-)
-async def test_sad_sentiments(input):
- response = await b.ClassifySentiment(input)
- assert response == Sentiment.NEGATIVE
-```
-
-
-
-You can read more about it here ([PyTest Docs](https://docs.pytest.org/en/latest/parametrize.html))
diff --git a/docs/docs/guides/testing/test_with_assertions.mdx b/docs/docs/guides/testing/test_with_assertions.mdx
index 0f6dbaf9a..be3e76f59 100644
--- a/docs/docs/guides/testing/test_with_assertions.mdx
+++ b/docs/docs/guides/testing/test_with_assertions.mdx
@@ -2,25 +2,21 @@
title: "Evaluate results with assertions or using LLM Evals"
---
-
- TypeScript support for testing is still in closed alpha - please contact us if you would like to use it!
-
-To add assertions to your tests, or add more complex testing scenarios, you can use pytest to test your functions, since Playground BAML tests don't currently support assertions.
-To view each pytest run in **Boundary Studio**, (to enable comment/label on test results, scoring, and keeping track over time) -- use the **baml_test** decorator from the `baml_client` import.
+## Python guide
+To add assertions to your tests, or add more complex testing scenarios, you can use pytest to test your functions, since Playground BAML tests don't currently support assertions.
### Example
-See [the full example, including the .baml files for GetOrderInfo](https://github.com/BoundaryML/baml-examples/tree/main/extraction-guide)
-
```python test_file.py
from baml_client import baml as b
-from baml_client.baml_types import Email
+from baml_client.types import Email
-from baml_client.testing import baml_test
+import pytest
# Run `poetry run pytest` in this directory.
# Setup Boundary Studio to see test details!
-@baml_test
+@pytest.mark.asyncio
async def test_get_order_info():
order_info = await b.GetOrderInfo(Email(
subject="Order #1234",
@@ -34,9 +30,7 @@ async def test_get_order_info():
-Run `pytest -m baml_test -k 'order_info'` to run this test. To show have pytest show print statements add the `-s` flag.
-
-The -m flag indicates it should only run the tests decorated with "baml_test"
+Run `pytest -k 'order_info'` to run this test. To have pytest show print statements, add the `-s` flag.
Make sure you are running these commands from your python virtual environment
@@ -47,33 +41,6 @@ For more advanced testing scenarios, helpful commands, and gotchas, check out th
-### View test results in Boundary Studio
-Add @baml_test decorator from `baml_client.testing` to your test functions to view test results in Boundary Studio.
-
-
-### Run a specific impl
-When you call a function that has multiple impls, the client will automatically call your defined `default_impl`.
-
-You can always test a specific impl by calling it explicitly in the test function using `b.GetOrderInfo.get_impl("version1").run(...)`
-
-```python
-from baml_client import baml as b
-from baml_client.baml_types import Email
-from baml_client.testing import baml_test
-
-@baml_test
-async def test_get_order_info():
- order_info = await b.GetOrderInfo.get_impl("version1").run(Email(
- subject="Order #1234",
- body="Your order has been shipped. It will arrive on 1st Jan 2022. Product: iPhone 13. Cost: $999.99"
- ))
-
- assert order_info.cost == 999.99
-```
-
-BAML includes some helper pytest fixtures that will automatically generate tests for each impl you define. See the [advanced pytest guide](./advanced_testing_guide)
-
-
### Using an LLM eval
You can also declare a new BAML function that you can use in your tests to validate results.
@@ -111,7 +78,7 @@ impl v1 {
```python
from baml_client import baml as b
-from baml_client.baml_types import Email, ProfessionalismRating
+from baml_client.types import Email, ProfessionalismRating
from baml_client.testing import baml_test
@baml_test
diff --git a/docs/docs/guides/testing/unit_test.mdx b/docs/docs/guides/testing/unit_test.mdx
index 066f30782..808b45880 100644
--- a/docs/docs/guides/testing/unit_test.mdx
+++ b/docs/docs/guides/testing/unit_test.mdx
@@ -5,97 +5,7 @@ title: "Test an AI function"
There are two types of tests you may want to run on your AI functions:
-- Unit Tests: Tests a single AI function
+- Unit Tests: Tests a single AI function (using the playground)
- Integration Tests: Tests a pipeline of AI functions and potentially business logic
-We support both types of tests using BAML. See the next tutorials for more advanced testing capabilities.
-
-## Test using the playground
-
-Use the playground to run tests against individual function impls. The playground allows a type-safe interface for creating tests along with running them.
-Under the hood, the playground runs `baml test` for you and writes the test files to the `__tests__` folder (see below).
-
-
-
-
-Note we currently don't support assertions in these tests -- they must be manually evaluated by a human. Read the next tutorials to learn how to write more advanced tests with different kinds of assertions, including LLM-powered evaluation.
-
-## Create a test from an existing production request
-Use Boundary studio to import an existing request into a test you can run on the VSCode playground. It's a 1-click import process.
-
-
-
-## Creating tests manually
-
-Unit tests created by the playground are stored in the `__tests__` folder.
-
-The project structure should look like this:
-
-```bash
-.
-├── baml_client/
-└── baml_src/
- ├── __tests__/
- │ ├── YourAIFunction/
- │ │ ├── test_name_monkey.json
- │ │ └── test_name_cricket.json
- │ └── YourAIFunction2/
- │ └── test_name_jellyfish.json
- ├── main.baml
- └── foo.baml
-```
-
-You can manually create tests by creating a folder for each function you want to test. Inside each folder, create a json file for each test case you want to run. The json file should be named `test_name.json` where `test_name` is the name of the test case.
-
-To see the structure of the JSON file, you can create a test using the playground and then copy the JSON file into your project.
-
-
- The BAML compiler reads the `__tests__` folder and generates a pytest file for
- you so you don't have to manually write test boilerplate code.
-
-
-
-## Run Playground tests from the terminal
-
-You can also use the BAML CLI to run Playground tests. This is useful if you want to run tests in a CI/CD pipeline.
-
-The command to run is `baml test`. You can run it from the root of your project.
-
-```bash
-# List all tests
-$ baml test
-================= 3/3 tests selected (0 deselected) =================
-ClassifyDocumentTopic (impls: simpleclassifydocumenttopic) (1 tests)
- combined_aquamarine ○
-GetNextQuestion (impls: v1) (1 tests)
- beneficial_moccasin ○
-================= 3/3 tests selected (0 deselected) =================
-
-# Run all tests
-$ baml test run
-
-# Run tests for a specific function
-$ baml test -i "MyFunction:" run
-
-# Run tests for a specific function impl
-$ baml test -i "MyFunction:v1" run
-
-# Run a specific test case.
-$ baml test -i "::smoky_monkey" run
-
-# Run all tests except for a specific test case.
-$ baml test -x "::smoky_monkey" run
-
-# Help
-$ baml test --help
-```
-
-
- Execute `baml test` (without "run" part) to see what tests will be run.
-
+For integration tests, see the [Integration Testing Guide](/docs/guides/testing/test_with_assertions).
\ No newline at end of file
diff --git a/docs/docs/home/faq.mdx b/docs/docs/home/faq.mdx
index 701bb4508..a890a3943 100644
--- a/docs/docs/home/faq.mdx
+++ b/docs/docs/home/faq.mdx
@@ -14,25 +14,23 @@ Nope. We do not proxy LLM calls for you. BAML just generates a bunch of python o
-BAML isn't a full-fledged language -- it's more of a configuration file or templating language. You can load it into your code as if it were YAML.
-
-We started this because we wanted [Jinja](https://jinja.palletsprojects.com/en/3.1.x/), but with types + function declarations, so we decided to make it happen. Earlier we tried making a YAML-based sdk, and even a Python SDK, but they were not powerful enough.
-
+BAML isn't a full-fledged language -- it's more of a configuration file / templating language. You can load it into your code as if it were YAML. Think of it as an extension of [Jinja](https://jinja.palletsprojects.com/en/3.1.x/) or Handlebars.
+Earlier we tried making a YAML-based SDK, and even a Python SDK, but they were not powerful enough.
- We are working on more tools like [PromptFiddle.com](https://promptfiddle.com) to make it easier to edit prompts for non-engineers, but we want to make sure all your prompts can be backed by a file in your codebase.
+ We are working on more tools like [PromptFiddle.com](https://promptfiddle.com) to make it easier to edit prompts for non-engineers, but we want to make sure all your prompts can be backed by a file in your codebase and versioned by Git.
- BAML can be generated into python and Typescript. The only feature not present in TypeScript is streaming. We are working on it! We are also working on Ruby.
+ TypeScript, Python, and Ruby
Contact us for more
The VSCode extension and BAML are free to use (Open Source as well!). We only charge for usage of
- BoundaryML Studio.
+ Boundary Studio, our observability platform. Contact us for pricing; we have hobbyist and startup tiers available.
diff --git a/docs/docs/home/installation.mdx b/docs/docs/home/installation.mdx
index 202692967..819e00d3a 100644
--- a/docs/docs/home/installation.mdx
+++ b/docs/docs/home/installation.mdx
@@ -4,7 +4,7 @@ title: Installation
- [https://marketplace.visualstudio.com/items?itemName=boundary.BAML](https://marketplace.visualstudio.com/items?itemName=boundary.BAML)
+ [https://marketplace.visualstudio.com/items?itemName=boundary.Baml-extension](https://marketplace.visualstudio.com/items?itemName=boundary.Baml-extension)
If you are using python, [enable typechecking in VSCode's](https://code.visualstudio.com/docs/python/settings-reference#_python-language-server-settings) `settings.json`:
```
@@ -46,17 +46,8 @@ title: Installation
-## Ensure BAML CLI can generate your Python / TS client
-
-
-
- Save a `.baml` file using VSCode, and you should see a successful generation message pop up!
-
-
- ```bash
- # Run from any dir/subdir in your project
- # Requires you have a baml_src directory with a main.baml file
- > baml build
- ```
-
-
+## Ensure the BAML extension can generate your Python / TS client
+
+Save a `.baml` file using VSCode, and you should see a successful generation message pop up!
+
+You can also run `baml-cli generate --from path-to-baml-src` to generate the client code manually.
\ No newline at end of file
diff --git a/docs/docs/home/roadmap.mdx b/docs/docs/home/roadmap.mdx
index 9a347e060..ea05334a8 100644
--- a/docs/docs/home/roadmap.mdx
+++ b/docs/docs/home/roadmap.mdx
@@ -4,11 +4,12 @@ title: "Roadmap"
### Language Support
-Because we have our own language and our compiler generates native Python/TS code from BAML files, we are able to treat both languages as first class citizens in the ecosystem.
+Features are available at parity in all languages unless otherwise noted.
| Language Support | Status | Notes |
| ---------------- | ------ | ----------------------------------- |
| Python | ✅ | |
-| TypeScript | 🚧 | Pending Retry and Streaming Support |
+| TypeScript | ✅ | |
+| Ruby | 🚧 | Alpha release, contact us to use it |
Contact us on Discord if you have a language you'd like to see supported.
diff --git a/docs/docs/learn-baml/hello_world/baml-project-structure.mdx b/docs/docs/learn-baml/hello_world/baml-project-structure.mdx
deleted file mode 100644
index bf7448297..000000000
--- a/docs/docs/learn-baml/hello_world/baml-project-structure.mdx
+++ /dev/null
@@ -1,86 +0,0 @@
----
-title: "BAML Project Structure"
----
-
-At a high level, you will define your AI prompts and interfaces in BAML files.
-The BAML compiler will then generate Python or Typescript code for you to use in
-your application, depending on the generators configured in your `main.baml`:
-
-```rust main.baml
-generator MyGenerator{
- language "python"
- // This is where the generated baml-client will be written to
- project_root "../"
- test_command "poetry run python -m pytest"
- install_command "poetry add baml@latest"
- package_version_command "poetry show baml"
-}
-```
-
-Here is the typical project structure:
-
-```bash
-.
-├── baml_client/ # Generated code
-├── baml_src/ # Prompts live here
-│ ├── __tests__/ # Tests loaded by playground
-│ │ └── YourAIFunction/
-│ │ └── test_1.json
-│ ├── main.baml
-│ ├── any_directory/
-│ │ └── baz.baml
-│ └── foo.baml
-# The rest of your project (not generated nor used by BAML)
-├── app/
-│ ├── __init__.py
-│ └── main.py
-├── pyproject.toml
-└── poetry.lock
-
-```
-
-1. `baml_src/` is where you write your BAML files with the AI
-function declarations, prompts, retry policies, etc. It also contains
-[generator](/v3/syntax/generator) blocks which configure how and where to
-transpile your BAML code.
-
-2. `baml_client/` is where the BAML compiler will generate code for you,
-based on the types and functions you define in your BAML code.
-
-
-```python Python
-from baml_client import baml as b
-
-async def use_llm_for_task():
- await b.CallMyLLM()
-```
-
-```typescript TypeScript
-import b from '@/baml_client'
-
-const use_llm_for_task = async () => {
- await b.CallMyLLM();
-};
-```
-
-
-3. `baml_src/__tests__/**/*.json` is where your test inputs live. The VSCode
-extension allows you to load, delete, create, and run any of these these inputs.
-You can also use these tests ou like. See [here](/v3/syntax/function-testing) for more information.
-
-
- **You should never edit any files inside baml_client directory** as the whole
- directory gets regenerated on every `baml build` (auto runs on save if using
- the VSCode extension).
-
-
-
- If you ever run into any issues with the generated code (like merge
- conflicts), you can always delete the `baml_client` directory and it will get
- regenerated automatically once you fix any other conflicts in your `.baml`
- files.
-
-
-### imports
-
-BAML by default has global imports. Every entity declared in any `.baml` file is available to all other `.baml` files under the same `baml_src` directory. You **can** have multiple `baml_src` directories, but no promises on how the VSCode extension will behave (yet).
diff --git a/docs/docs/learn-baml/hello_world/testing-ai-functions.mdx b/docs/docs/learn-baml/hello_world/testing-ai-functions.mdx
deleted file mode 100644
index 8b3f07f3d..000000000
--- a/docs/docs/learn-baml/hello_world/testing-ai-functions.mdx
+++ /dev/null
@@ -1,70 +0,0 @@
----
-title: "Testing AI functions"
----
-
-
-## Overview
-
-One important way to ensure your AI functions are working as expected is to write unit tests. This is especially important when you're working with AI functions that are used in production, or when you're working with a team.
-
-You have two options for adding / running tests:
-
-- Using the Playground
-- Using the BAML CLI
-
-## Using the Playground
-
-The playground allows a type-safe interface for creating tests along with running them.
-Under the hood, the playground runs `baml test` for you and writes the test files to the `__tests__` folder (see below).
-
-
-
-At the end of this video notice that the test runs are all saved into **Boundary Studio**, our analytics and observability platform, and can be accessed from within VSCode. Check out the [installation](/v3/home/installation) steps to set it up!
-
-## Run tests using the BAML CLI
-
-To understand how to run tests using the CLI, you'll need to understand how BAML projects are structured.
-
-```bash
-.
-├── baml_src/ # Where you write BAML files
-│ ├── __tests__/ # Where you write tests
-│ │ ├── YourAIFunction/ # A folder for each AI function
-│ │ │ └── test_name_cricket.json
-│ │ └── YourAIFunction2/ # Another folder for another AI function
-│ │ └── test_name_jellyfish.json
-```
-
-To run tests, you'll need to run `baml test` from the root of your project. This will run all tests in the `__tests__` folder.
-
-```bash
-# This will LIST all tests found in the __tests__ folder
-$ baml test
-# This will RUN all tests found in the __tests__ folder
-$ baml test run
-```
-
-You can also run tests for a specific AI function by passing the `-i` flag.
-
-```bash
-# This will list all tests found in the __tests__/YourAIFunction folder
-$ baml test -i "YourAIFunction:"
-
-# This will run all tests found in the __tests__/YourAIFunction folder
-$ baml test run -i "YourAIFunction:" -i "YourAIFunction2:"
-```
-
-For more filters on the `baml test` command, run `baml test --help`.
-
-## Evaluating test results
-
-`baml test` and the Playground UI (which uses baml test under the hood) don't have a way to evaluate results other than by manual inspection at the moment.
-
-If you want to add **asserts** or **expectations** on the output, you can declare your tests programmatically using our `pytest` plugin to do so.
-See [this tutorial](/v3/syntax/testing/evaluators)
\ No newline at end of file
diff --git a/docs/docs/learn-baml/hello_world/writing-ai-functions.mdx b/docs/docs/learn-baml/hello_world/writing-ai-functions.mdx
deleted file mode 100644
index 4c3da7880..000000000
--- a/docs/docs/learn-baml/hello_world/writing-ai-functions.mdx
+++ /dev/null
@@ -1,139 +0,0 @@
----
-title: "BAML AI functions in 2 minutes"
----
-
-
-### Pre-requisites
-
-Follow the [installation](/v3/home/installation) instructions and run **baml init** in a new project.
-
-The starting project structure will look something like this:
-
-
-## Overview
-
-Before you call an LLM, ask yourself what kind of input or output youre
-expecting. If you want the LLM to generate text, then you probably want a
-string, but if you're trying to get it to collect user details, you may want it
-to return a complex type like `UserDetails`.
-
-Thinking this way can help you decompose large complex prompts into smaller,
-more measurable functions, and will also help you build more complex workflows
-and agents.
-
-# Extracting a resume from text
-
-The best way to learn BAML is to run an example in our web playground -- [PromptFiddle.com](https://promptfiddle.com).
-
-But at a high-level, BAML is simple to use -- prompts are built using [Jinja syntax](https://jinja.palletsprojects.com/en/3.1.x/) to make working with strings easier. We extended jinja to add type-support, and static analysis of your template variables, and a realtime side-by-side playground for VSCode.
-
-We'll write out an example from PromptFiddle here:
-
-```rust baml_src/main.baml
-// Declare the Resume type we want the AI function to return
-class Resume {
- name string
- education Education[] @description("Extract in the same order listed")
- skills string[] @description("Only include programming languages")
-}
-
-class Education {
- school string
- degree string
- year int
-}
-
-// Declare the function signature, with the prompt that will be used to make the AI function work
-function ExtractResume(resume_text: string) -> Resume {
- // An LLM client we define elsewhere, with some parameters and our API key
- client GPT4Turbo
-
- // The prompt uses Jinja syntax
- prompt #"
- Parse the following resume and return a structured representation of the data in the schema below.
-
- Resume:
- ---
- {{ resume_text }}
- ---
-
- {# special macro to print the output instructions. #}
- {{ ctx.output_format }}
-
- JSON:
- "#
-}
-```
-That's it! If you use the VSCode extension, everytime you save this .baml file, it will convert this configuration file into a usable Python or TypeScript function in milliseconds, with full types.
-
-All your types become Pydantic models in Python, or type definitions in Typescript (soon we'll support generating Zod types).
-
-
-## 2. Usage in Python or TypeScript
-
-Our VSCode extension automatically generates a **baml_client** in your language of choice.
-
-
-
-```python Python
-from baml_client import baml as b
-# BAML types get converted to Pydantic models
-from baml_client.baml_types import Resume
-import asyncio
-
-async def main():
- resume_text = """Jason Doe
-Python, Rust
-University of California, Berkeley, B.S.
-in Computer Science, 2020
-Also an expert in Tableau, SQL, and C++
-"""
-
- # this function comes from the autogenerated "baml_client".
- # It calls the LLM you specified and handles the parsing.
- resume = await b.ExtractResume(resume_text)
-
- # Fully type-checked and validated!
- assert isinstance(resume, Resume)
-
-
-if __name__ == "__main__":
- asyncio.run(main())
-```
-
-```typescript TypeScript
-import b from 'baml_client'
-
-async function main() {
- const resume_text = `Jason Doe
-Python, Rust
-University of California, Berkeley, B.S.
-in Computer Science, 2020
-Also an expert in Tableau, SQL, and C++
-`
-
- // this function comes from the autogenerated "baml_client".
- // It calls the LLM you specified and handles the parsing.
- const resume = await b.ExtractResume(resume_text)
-
- // Fully type-checked and validated!
- assert resume.name === "Jason Doe"
-}
-
-if (require.main === module) {
- main();
-}
-```
-
-
-
-
- The BAML client exports async versions of your functions, so you can parallelize things easily if you need to. To run async functions sequentially you can easily just wrap them in the `asyncio.run(....)`.
-
- Let us know if you want synchronous versions of your functions instead!
-
-
-## Further reading
-- Continue on to the Testing + Extraction tutorials!
-- See other types of [function signatures](/v3/syntax/function) possible in BAML.
-- Learn more about [prompt variables](/v3/syntax/prompt_engineering/variables).
diff --git a/docs/docs/learn-baml/multi-step/example1.mdx b/docs/docs/learn-baml/multi-step/example1.mdx
deleted file mode 100644
index eabb5959d..000000000
--- a/docs/docs/learn-baml/multi-step/example1.mdx
+++ /dev/null
@@ -1,6 +0,0 @@
----
-title: "Write a Chatbot to collect user info"
-description: "Write a chatbot that can collect user information"
----
-
-We'll be adding more details here soon! Reach out to us on Discord for any prompt engineering support.
\ No newline at end of file
diff --git a/docs/docs/syntax/client/client.mdx b/docs/docs/syntax/client/client.mdx
index c69ab8325..7869178f5 100644
--- a/docs/docs/syntax/client/client.mdx
+++ b/docs/docs/syntax/client/client.mdx
@@ -31,16 +31,13 @@ client Name {
BAML ships with the following providers (you can also write your own!):
- LLM client providers
- - `baml-anthropic`
- - `baml-azure-chat`
- - `baml-azure-completion`
- - `baml-openai-chat`
- - `baml-openai-completion`
- - `baml-ollama-chat`
- - `baml-ollama-completion` (only for python)
+ - `openai`
+ - `azure-openai`
+ - `anthropic`
+ - `ollama`
- Composite client providers
- - `baml-fallback`
- - `baml-round-robin`
+ - `fallback`
+ - `round-robin`
There are two primary types of LLM clients: chat and completion. BAML abstracts
away the differences between these two types of LLMs by putting that logic in
@@ -55,10 +52,7 @@ completion prompt.
Provider names:
-- `baml-openai-chat`
-- `baml-openai-completion`
-- `baml-azure-chat`
-- `baml-azure-completion`
+- `azure-openai`
You must pick the right provider for the type of model you are using. For
example, if you are using a GPT-3 model, you must use a `chat` provider, but if
@@ -74,7 +68,7 @@ See [Azure Docs](https://learn.microsoft.com/en-us/azure/ai-services/openai/quic
// A client that uses the OpenAI chat API.
client MyGPT35Client {
// Since we're using a GPT-3 model, we must use a chat provider.
- provider baml-openai-chat
+ provider openai
options {
model gpt-3.5-turbo
// Set the api_key parameter to the OPENAI_API_KEY environment variable
@@ -86,7 +80,7 @@ client MyGPT35Client {
client MyAzureClient {
// I configured the deployment to use a GPT-3 model,
// so I must use a chat provider.
- provider baml-azure-chat
+  provider azure-openai
options {
api_key env.AZURE_OPENAI_KEY
// This may change in the future
@@ -103,7 +97,7 @@ client MyAzureClient {
Provider names:
-- `baml-anthropic`
+- `anthropic`
Accepts any options as defined by [Anthropic SDK](https://github.com/anthropics/anthropic-sdk-python/blob/fc90c357176b67cfe3a8152bbbf07df0f12ce27c/src/anthropic/types/completion_create_params.py#L20)
@@ -124,14 +118,18 @@ client MyClient {
Provider names:
-- `baml-ollama-chat`
-- `baml-ollama-completion` (only for python)
+- `ollama`
Accepts any options as defined by [Ollama SDK](https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-chat-completion).
+#### Requirements
+Make sure you allow all origins (CORS) if you are trying to run Ollama with the BAML VSCode playground:
+1. In your terminal, run `OLLAMA_ORIGINS="*" ollama serve`
+2. Run `ollama run llama2` (or your model), and you should be good to go.
+
```rust
client MyClient {
- provider baml-ollama-chat
+ provider ollama
options {
model mistral
options {
@@ -154,7 +152,7 @@ multiple clients. Here's an example:
```rust
client MyClient {
- provider baml-round-robin
+ provider round-robin
options {
strategy [
MyGptClient
@@ -185,18 +183,18 @@ client MyClient {
}
```
-### Creating your own (Advanced)
-
-Creating a provider is requires a bit more deeper understanding. You can see our source code for any of our providers here:
-
-- [OpenAI + Azure Chat](https://github.com/BoundaryML/baml/blob/canary/clients/python/baml_core/registrations/providers/openai_chat_provider.py)
-- [OpenAI + Azure Completion](https://github.com/BoundaryML/baml/blob/canary/clients/python/baml_core/registrations/providers/openai_completion_provider.py)
-- [Anthropic](https://github.com/BoundaryML/baml/blob/canary/clients/python/baml_core/registrations/providers/anthropic_provider.py)
-- [Fallback](https://github.com/BoundaryML/baml/blob/canary/clients/python/baml_core/registrations/providers/fallback_provider.py)
-
-You can add your own provider by creating a new file anywhere in your application logic, then setting the provider name in .baml to name you registered it with in your implementation.
+## Other providers
+You can use the `openai` provider if the provider you're trying to use exposes an OpenAI-compatible API (the same ChatML request/response format).
-
- We are actively working on how to make custom providers better / easier. If
- you have ideas, please reach out to us on discord!
-
+Some providers ask you to add a `base_url`, which you can do like this:
+
+```rust
+client MyClient {
+ provider openai
+ options {
+ model some-custom-model
+    api_key env.OPENAI_API_KEY
+ base_url "https://some-custom-url-here"
+ }
+}
+```
\ No newline at end of file
diff --git a/docs/docs/syntax/function-testing.mdx b/docs/docs/syntax/function-testing.mdx
index 34f3ccea2..9799409bf 100644
--- a/docs/docs/syntax/function-testing.mdx
+++ b/docs/docs/syntax/function-testing.mdx
@@ -154,7 +154,7 @@ from baml_client.testing import baml_test
# Import your baml-generated LLM functions
from baml_client import baml
# Import any custom types defined in .baml files
-from baml_client.baml_types import Sentiment
+from baml_client.types import Sentiment
@baml_test
@pytest.mark.parametrize(
@@ -178,7 +178,7 @@ The parametrize decorator also allows you to specify a custom name for each test
```python
from baml_client import baml as b
-from baml_client.baml_types import Sentiment, IClassifySentiment
+from baml_client.types import Sentiment, IClassifySentiment
test_cases = [
{"input": "I am ecstatic", "expected_output": Sentiment.POSITIVE, "id": "ecstatic-test"},
@@ -207,7 +207,7 @@ from baml_client.testing import baml_test
# Import your baml-generated LLM functions
from baml_client import baml
# Import any custom types defined in .baml files
-from baml_client.baml_types import Sentiment
+from baml_client.types import Sentiment
@baml_test
@pytest.mark.asyncio
@@ -245,7 +245,7 @@ Alternatively you can just write a test function for each input type.
```python
from baml_client.testing import baml_test
from baml_client import baml
-from baml_client.baml_types import Sentiment
+from baml_client.types import Sentiment
@baml_test
@pytest.mark.asyncio
diff --git a/docs/docs/syntax/generator.mdx b/docs/docs/syntax/generator.mdx
index a0dba2432..4b6175b6c 100644
--- a/docs/docs/syntax/generator.mdx
+++ b/docs/docs/syntax/generator.mdx
@@ -10,62 +10,11 @@ Here is how you can add a generator block:
```rust
generator MyGenerator {
- language "python"
- // This is where the generated baml-client will be written to
- project_root "../"
- test_command "poetry run python -m pytest"
- install_command "poetry add baml@latest"
- package_version_command "poetry show baml"
+ output_type typescript // or python/pydantic, ruby
}
```
| Property | Description | Options | Default |
| ------------------- | ------------------------------------------------ | --------------------------------- | ---------------------------------------------- |
-| language | The language of the generated client | python | |
-| project_root | The directory where we'll output the generated baml_client | | ../ |
-| test_command | What `baml test` uses to run your playground tests | | |
-| install_command | The command for setting up the environment with all dependencies. `baml update` calls this | string | |
-| package_version_command | The command to get the version of the baml package | string |
-
-## Example generators
-
-### Python with poetry
-
-```rust
-generator MyGenerator{
- language "python"
- // This is where the generated baml-client will be written to
- project_root "../"
- test_command "poetry run python -m pytest"
- install_command "poetry add baml@latest"
- package_version_command "poetry show baml"
-}
-```
-
-### Python with venv
-
-```rust
-generator MyGenerator {
- language "python"
- project_root "../"
- test_command "source ./.venv/bin/activate && python -m pytest"
- install_command "source ./.venv/bin/activate && pip install -r requirements.txt"
- package_version_command "source ./.venv/bin/activate && pip show baml"
-}
-```
-
-### Using secret ops platforms
-
-If you're using software like [Infisical](https://infisical.com/) or [Doppler](https://www.doppler.com/), do the following:
-
-```rust
-generator MyGenerator {
- language "python"
- project_root "../"
- // Add the prefix to the test command
- test_command "infisical run -- poetry run python -m pytest"
- install_command "poetry add baml@latest"
- package_version_command "poetry show baml"
-}
-```
-
+| output_type | The language of the generated client | python/pydantic, ruby, typescript | |
+| output_dir | The directory where we'll output the generated baml_client | | ../ |
diff --git a/docs/docs/syntax/overview.mdx b/docs/docs/syntax/overview.mdx
index 8fa416d7e..c51d6d9e6 100644
--- a/docs/docs/syntax/overview.mdx
+++ b/docs/docs/syntax/overview.mdx
@@ -8,23 +8,12 @@ A BAML project has the following structure:
.
├── baml_client/ # Generated code
├── baml_src/ # Prompts live here
-│ ├── __tests__/ # Tests loaded by playground
-│ │ ├── YourAIFunction/
-│ │ │ ├── test_name_monkey.json
-│ │ │ └── test_name_cricket.json
-│ │ └── YourAIFunction2/
-│ │ └── test_name_jellyfish.json
-│ ├── main.baml
-│ ├── any_directory/
-│ │ ├── bar.baml
-│ │ └── baz.baml
│ └── foo.baml
# The rest of your project (not generated nor used by BAML)
├── app/
│ ├── __init__.py
│ └── main.py
-├── pyproject.toml
-└── poetry.lock
+└── pyproject.toml
```
diff --git a/docs/docs/syntax/prompt_engineering/overview.mdx b/docs/docs/syntax/prompt_engineering/overview.mdx
index 6f442a19a..9f044879f 100644
--- a/docs/docs/syntax/prompt_engineering/overview.mdx
+++ b/docs/docs/syntax/prompt_engineering/overview.mdx
@@ -1,32 +1,89 @@
---
-title: Seeing the full prompt
+title: Prompt Syntax
---
-## VSCode playground
+Prompts are written using the [Jinja templating language](https://jinja.palletsprojects.com/en/3.0.x/templates/).
-BAML exposes some capabilities to help lint and standardize prompts. To see the full prompt in realtime, you can use the [VSCode Extension](/docs/home/installation)!
-Once the extension loads, you'll have helpers to open the prompt in the playground.
+There are **3 Jinja macros** (or functions) that we have included in the language for you. We recommend viewing what they do using the VSCode preview (or [promptfiddle.com](https://promptfiddle.com)), so you can see the full string transformation in real time.
-
+1. **`{{ _.role("user") }}`**: This divides up the string into different message roles.
+2. **`{{ ctx.output_format }}`**: This prints out the output format instructions for the prompt.
+You can add your own prefix instructions like this: `{{ ctx.output_format(prefix="Please please please format your output like this:")}}`. We have more parameters you can customize. Docs coming soon.
+3. **`{{ ctx.client }}`**: This prints out the client (model) the function is using.
-
+"ctx" is contextual information about the prompt (like the output format or client). "_." is a special namespace for other BAML functions.
-## Why do we have utilities?
-In LLMs it is often useful to rename or provide descriptions about the task you are attempting to do. For example, you may want to rename a property from duration to duration_in_minutes so the LLM can understand what value it should output better. However, changing the string you pass to an LLM should NOT require you to change buisness logic code. This is where utilities come in.
-For example we may find we get better accuracy if we change from
+Here is what a prompt with Jinja looks like using these macros:
-```
-{
-"duration": int
+```rust
+enum Category {
+ Refund
+ CancelOrder
+ TechnicalSupport
+ AccountIssue
+ Question
+}
+
+class Message {
+ role string
+ message string
+}
+
+
+function ClassifyConversation(messages: Message[]) -> Category[] {
+ client GPT4Turbo
+ prompt #"
+ Classify this conversation:
+ {% for m in messages %}
+ {{ _.role(m.role) }}
+ {{ m.message }}
+ {% endfor %}
+
+ Use the following categories:
+    {{ ctx.output_format }}
+ "#
}
```
-to
+### Template strings
+You can create your own typed templates using the `template_string` keyword, and call them from a prompt:
+
+```rust
+// Extract the logic out of the prompt:
+template_string PrintMessages(messages: Message[]) -> string {
+ {% for m in messages %}
+ {{ _.role(m.role) }}
+ {{ m.message }}
+ {% endfor %}
+}
+
+function ClassifyConversation(messages: Message[]) -> Category[] {
+ client GPT4Turbo
+ prompt #"
+ Classify this conversation:
+ {{ PrintMessages(messages) }}
-```json
-{
- "duration_in_minutes": int
+ Use the following categories:
+    {{ ctx.output_format }}
+ "#
}
```
+
+### Conditionals
+You can use these special variables to write conditionals, like if you want to change your prompt depending on the model:
+
+ ```rust
+ {% if ctx.client.name == "GPT4Turbo" %}
+ // Do something
+ {% endif %}
+ ```
+
+You can use conditionals on your input objects as well:
+
+ ```rust
+ {% if messages[0].role == "user" %}
+ // Do something
+ {% endif %}
+ ```
diff --git a/docs/docs/syntax/strings.mdx b/docs/docs/syntax/strings.mdx
index a2df9879b..657fa726d 100644
--- a/docs/docs/syntax/strings.mdx
+++ b/docs/docs/syntax/strings.mdx
@@ -55,10 +55,7 @@ Block strings are automatically dedented and stripped of the first and last newl
### Code Strings
-Sometimes, you may want to inject some code snippets in python or some other language properties, like [@get](/v3/syntax/class#get). You can get syntax highlighting for these by using **code strings**. You do this by denoting the language before the block string (no spaces).
-
-Like all block strings, it must be surrounded by `#"` and `"#`, and they are trimmed and dedented.
-
+If you need to embed a code snippet for documentation purposes in a BAML file, you can use a code string:
```llvm
python#"
print("Hello World")
@@ -66,3 +63,4 @@ python#"
return 1
"#
```
+These are not functional code blocks; they are just used for documentation purposes.
diff --git a/docs/docs/syntax/prompt_engineering/type-deserializer.mdx b/docs/docs/syntax/type-deserializer.mdx
similarity index 100%
rename from docs/docs/syntax/prompt_engineering/type-deserializer.mdx
rename to docs/docs/syntax/type-deserializer.mdx
diff --git a/docs/docs/syntax/type.mdx b/docs/docs/syntax/type.mdx
index 0ad2065c2..a94623cb6 100644
--- a/docs/docs/syntax/type.mdx
+++ b/docs/docs/syntax/type.mdx
@@ -65,13 +65,21 @@ temperature of 32 degrees Fahrenheit or cost of $100.00.
the unit be part of the variable name. For example, `temperature_fahrenheit`
and `cost_usd` (see [@alias](/docs/syntax/class#alias)).
-### ❌ Images
+### ✅ Images
-- Not yet supported. Reach out to us on discord if you need this.
+You can use an image like this:
-### ❌ Tensors
+```rust
+function DescribeImage(myImg: image) -> string {
+ client GPT4Turbo
+ prompt #"
+    {{ _.role("user") }}
+ Describe the image in four words:
+ {{ myImg }}
+ "#
+}
+```
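+
+On the Python side you'd then construct an image value and pass it in. A minimal sketch, assuming the image helper is exposed as `baml_py.Image` -- check your generated client for the exact import:
+
+```python
+import asyncio
+from baml_py import Image
+from baml_client import b
+
+async def main():
+    # Build an image from a URL and pass it to the BAML function.
+    img = Image.from_url("https://upload.wikimedia.org/wikipedia/en/4/4d/Shrek_%28character%29.png")
+    print(await b.DescribeImage(img))
+
+asyncio.run(main())
+```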
-- Not yet supported. Reach out to us on discord if you need this.
## Composite/Structured Types
diff --git a/docs/mint.json b/docs/mint.json
index 9ac5a1658..0191cb60e 100644
--- a/docs/mint.json
+++ b/docs/mint.json
@@ -91,6 +91,12 @@
"group": "Types",
"pages": ["docs/syntax/type", "docs/syntax/enum", "docs/syntax/class"]
},
+ {
+ "group": "prompt",
+ "pages": [
+ "docs/syntax/prompt_engineering/overview"
+ ]
+ },
{
"group": "client",
@@ -100,13 +106,7 @@
"docs/syntax/client/redundancy"
]
},
- {
- "group": "prompt",
- "pages": [
- "docs/syntax/prompt_engineering/overview",
- "docs/syntax/prompt_engineering/type-deserializer"
- ]
- },
+
{
"group": "How-to Guides",
"pages": [
@@ -120,8 +120,7 @@
"group": "Testing",
"pages": [
"docs/guides/testing/unit_test",
- "docs/guides/testing/test_with_assertions",
- "docs/guides/testing/advanced_testing_guide"
+ "docs/guides/testing/test_with_assertions"
],
"icon": "flask-vial"
},
diff --git a/docs/v3/guides/classification/level1.mdx b/docs/v3/guides/classification/level1.mdx
index 6f13c354d..dbb294045 100644
--- a/docs/v3/guides/classification/level1.mdx
+++ b/docs/v3/guides/classification/level1.mdx
@@ -155,7 +155,7 @@ Our VSCode extension automatically generates a python **baml_client** to access
```python Python
from baml_client import baml as b
# The Category type is generated from your BAML code.
-from baml_client.baml_types import Category
+from baml_client.types import Category
import asyncio
diff --git a/docs/v3/guides/classification/level2.mdx b/docs/v3/guides/classification/level2.mdx
index 10844f7c9..ddf453f25 100644
--- a/docs/v3/guides/classification/level2.mdx
+++ b/docs/v3/guides/classification/level2.mdx
@@ -113,7 +113,7 @@ Here is the same python code as before for reference:
```python Python
from baml_client import baml as b
# The Category type is generated from your BAML code.
-from baml_client.baml_types import Category
+from baml_client.types import Category
import asyncio
diff --git a/docs/v3/guides/entity_extraction/level1.mdx b/docs/v3/guides/entity_extraction/level1.mdx
index f16ac42be..a58402239 100644
--- a/docs/v3/guides/entity_extraction/level1.mdx
+++ b/docs/v3/guides/entity_extraction/level1.mdx
@@ -135,7 +135,7 @@ Our VSCode extension automatically generates a python **baml_client** to access
from baml_client import baml as b
# Import your generated Email model.
# We generate this pydantic model for you.
-from baml_client.baml_types import Email
+from baml_client.types import Email
import asyncio
async def main():
@@ -158,7 +158,7 @@ if __name__ == "__main__":
```typescript TypeScript
import b from '@/baml_client'
-import { Email } from '@/baml_client/baml_types'
+import { Email } from '@/baml_client/types'
const main = async () => {
const order_info = await b.GetOrderInfo({
diff --git a/docs/v3/guides/entity_extraction/level2.mdx b/docs/v3/guides/entity_extraction/level2.mdx
index 0c6eba958..a0279ccff 100644
--- a/docs/v3/guides/entity_extraction/level2.mdx
+++ b/docs/v3/guides/entity_extraction/level2.mdx
@@ -137,7 +137,7 @@ Now that `cost` might be null, we need to handle that:
```python Python
from baml_client import baml as b
-from baml_client.baml_types import Email
+from baml_client.types import Email
import asyncio
async def main():
@@ -161,7 +161,7 @@ if __name__ == "__main__":
```typescript TypeScript
import b from '@/baml_client'
-import { Email } from '@/baml_client/baml_types'
+import { Email } from '@/baml_client/types'
const main = async () => {
const order_info = await b.GetOrderInfo({
diff --git a/docs/v3/syntax/function-testing.mdx b/docs/v3/syntax/function-testing.mdx
index 6774beddc..6eddbea45 100644
--- a/docs/v3/syntax/function-testing.mdx
+++ b/docs/v3/syntax/function-testing.mdx
@@ -122,7 +122,7 @@ pytest -m baml_test -k 'test_group_name and test_case_name'
# Import your baml-generated functions
from baml_client import baml as b
# Import any custom types defined in .baml files
-from baml_client.baml_types import Sentiment, IClassifySentiment
+from baml_client.types import Sentiment, IClassifySentiment
# This automatically generates a test case for each impl
# of ClassifySentiment.
@@ -138,7 +138,7 @@ async def test_happy_user(ClassifySentimentImpl: IClassifySentiment):
# Import your baml-generated functions
from baml_client import baml as b
# Import any custom types defined in .baml files
-from baml_client.baml_types import Sentiment, IClassifySentiment
+from baml_client.types import Sentiment, IClassifySentiment
# This automatically generates a test case for each impl
# of ClassifySentiment.
@@ -195,7 +195,7 @@ from baml_client.testing import baml_test
# Import your baml-generated LLM functions
from baml_client import baml
# Import any custom types defined in .baml files
-from baml_client.baml_types import Sentiment
+from baml_client.types import Sentiment
@baml_test
@pytest.mark.parametrize(
@@ -219,7 +219,7 @@ The parametrize decorator also allows you to specify a custom name for each test
```python
from baml_client import baml as b
-from baml_client.baml_types import Sentiment, IClassifySentiment
+from baml_client.types import Sentiment, IClassifySentiment
test_cases = [
{"input": "I am ecstatic", "expected_output": Sentiment.POSITIVE, "id": "ecstatic-test"},
@@ -248,7 +248,7 @@ from baml_client.testing import baml_test
# Import your baml-generated LLM functions
from baml_client import baml
# Import any custom types defined in .baml files
-from baml_client.baml_types import Sentiment
+from baml_client.types import Sentiment
@baml_test
@pytest.mark.asyncio
@@ -286,7 +286,7 @@ Alternatively you can just write a test function for each input type.
```python
from baml_client.testing import baml_test
from baml_client import baml
-from baml_client.baml_types import Sentiment
+from baml_client.types import Sentiment
@baml_test
@pytest.mark.asyncio
diff --git a/docs/v3/syntax/function.mdx b/docs/v3/syntax/function.mdx
index c2f560fc1..569476659 100644
--- a/docs/v3/syntax/function.mdx
+++ b/docs/v3/syntax/function.mdx
@@ -78,7 +78,7 @@ function SummarizeCustomerDetails {
```python Python Usage
from baml_client import baml as b
# Import the CustomerInfo pydantic model we generated
-from baml_client.baml_types import CustomerInfo
+from baml_client.types import CustomerInfo
response = await b.SummarizeCustomerDetails(
CustomerInfo(name="John", age=30, address="123 Main St"))
diff --git a/docs/v3/syntax/impl.mdx b/docs/v3/syntax/impl.mdx
index 0120815ae..5cd4da471 100644
--- a/docs/v3/syntax/impl.mdx
+++ b/docs/v3/syntax/impl.mdx
@@ -101,7 +101,7 @@ client GPT35Client {
```python Python
from baml_client import baml as b
-from baml_client.baml_types import Sentiment
+from baml_client.types import Sentiment
async def pipeline(arg: str) -> str:
sentiment = await b.GetSentiment(arg)
diff --git a/engine/baml-cli/src/init_command/sample/baml/baml_src/example_2.baml b/engine/baml-cli/src/init_command/sample/baml/baml_src/example_2.baml
index 350d862f0..79931ed39 100644
--- a/engine/baml-cli/src/init_command/sample/baml/baml_src/example_2.baml
+++ b/engine/baml-cli/src/init_command/sample/baml/baml_src/example_2.baml
@@ -20,7 +20,7 @@ function ClassifyConversation(messages: Message[]) -> Category[] {
{% for m in messages %}
- {{ _.chat(m.role) }}
+ {{ _.role(m.role) }}
{{ m.message }}
{% endfor %}
diff --git a/integ-tests/baml_src/clients.baml b/integ-tests/baml_src/clients.baml
index beb605af4..7872f35b7 100644
--- a/integ-tests/baml_src/clients.baml
+++ b/integ-tests/baml_src/clients.baml
@@ -69,7 +69,7 @@ client GPT35Azure {
client Claude {
- provider baml-anthropic-chat
+ provider anthropic
options {
model claude-3-haiku-20240307
api_key env.ANTHROPIC_API_KEY
@@ -88,23 +88,6 @@ client Resilient_SimpleSyntax {
]
}
}
-
-client Resilient_ComplexSyntax {
- provider baml-fallback
- options {
- strategy [
- {
- client GPT4Turbo
- }
- {
- client GPT35
- }
- {
- client Claude
- }
- ]
- }
-}
client Lottery_SimpleSyntax {
provider baml-round-robin
@@ -116,18 +99,3 @@ client Lottery_SimpleSyntax {
]
}
}
-
-client Lottery_ComplexSyntax {
- provider baml-round-robin
- options {
- start 0
- strategy [
- {
- client GPT35
- }
- {
- client Claude
- }
- ]
- }
-}
\ No newline at end of file
diff --git a/integ-tests/baml_src/fiddle-examples/chat-roles.baml b/integ-tests/baml_src/fiddle-examples/chat-roles.baml
index 93a2f0056..f5de39901 100644
--- a/integ-tests/baml_src/fiddle-examples/chat-roles.baml
+++ b/integ-tests/baml_src/fiddle-examples/chat-roles.baml
@@ -11,16 +11,16 @@ function ClassifyMessage2(input: string) -> Category {
client GPT4
prompt #"
- {{ _.chat("system") }}
- // You can use _.chat("system") to indicate that this text should be a system message
+ {{ _.role("system") }}
+ // You can use _.role("system") to indicate that this text should be a system message
Classify the following INPUT into ONE
of the following categories:
{{ ctx.output_format }}
- {{ _.chat("user") }}
- // And _.chat("user") to indicate that this text should be a user message
+ {{ _.role("user") }}
+ // And _.role("user") to indicate that this text should be a user message
INPUT: {{ input }}
diff --git a/integ-tests/baml_src/fiddle-examples/images/image.baml b/integ-tests/baml_src/fiddle-examples/images/image.baml
index c68e44508..59a0cfc12 100644
--- a/integ-tests/baml_src/fiddle-examples/images/image.baml
+++ b/integ-tests/baml_src/fiddle-examples/images/image.baml
@@ -1,7 +1,7 @@
function DescribeImage(img: image) -> string {
client GPT4Turbo
prompt #"
- {{ _.chat(role="user") }}
+ {{ _.role("user") }}
Describe the image below in 5 words:
@@ -24,7 +24,7 @@ class ClassWithImage {
function DescribeImage2(classWithImage: ClassWithImage, img2: image) -> string {
client GPT4Turbo
prompt #"
- {{ _.chat(role="user") }}
+ {{ _.role("user") }}
You should return 2 answers that answer the following commands.
1. Describe this in 5 words:
@@ -52,7 +52,7 @@ function DescribeImage3(classWithImage: ClassWithImage, img2: image) -> string {
function DescribeImage4(classWithImage: ClassWithImage, img2: image) -> string {
client GPT4Turbo
prompt #"
- {{ _.chat(role="system")}}
+ {{ _.role("system")}}
Describe this in 5 words:
{{ classWithImage.myImage }}
diff --git a/integ-tests/baml_src/test-files/functions/v2/basic.baml b/integ-tests/baml_src/test-files/functions/v2/basic.baml
index 94cd1b82e..c026ff655 100644
--- a/integ-tests/baml_src/test-files/functions/v2/basic.baml
+++ b/integ-tests/baml_src/test-files/functions/v2/basic.baml
@@ -3,7 +3,7 @@
function ExtractResume2(resume: string) -> Resume {
client Resilient_ComplexSyntax
prompt #"
- {{ _.chat('system') }}
+ {{ _.role('system') }}
Extract the following information from the resume:
diff --git a/integ-tests/baml_src/test-files/testing_pipeline/resume.baml b/integ-tests/baml_src/test-files/testing_pipeline/resume.baml
index f8d6f1d97..e73e5ec61 100644
--- a/integ-tests/baml_src/test-files/testing_pipeline/resume.baml
+++ b/integ-tests/baml_src/test-files/testing_pipeline/resume.baml
@@ -16,10 +16,10 @@ class Education {
}
template_string AddRole(foo: string) #"
- {{ _.chat('system')}}
+ {{ _.role('system')}}
You are a {{ foo }}. be nice
- {{ _.chat('user') }}
+ {{ _.role('user') }}
"#
function ExtractResume(resume: string, img: image) -> Resume {
diff --git a/integ-tests/python/baml_client/inlinedbaml.py b/integ-tests/python/baml_client/inlinedbaml.py
index 653841bdf..a04355394 100644
--- a/integ-tests/python/baml_client/inlinedbaml.py
+++ b/integ-tests/python/baml_client/inlinedbaml.py
@@ -17,12 +17,12 @@
file_map = {
- "clients.baml": "retry_policy Bar {\n max_retries 3\n strategy {\n type exponential_backoff\n }\n}\n\nretry_policy Foo {\n max_retries 3\n strategy {\n type constant_delay\n delay_ms 100\n }\n}\n\nclient GPT4 {\n provider baml-openai-chat\n options {\n model gpt-4\n api_key env.OPENAI_API_KEY\n }\n} \n\n\nclient GPT4o {\n provider baml-openai-chat\n options {\n model gpt-4o\n api_key env.OPENAI_API_KEY\n }\n} \n\n\nclient GPT4Turbo {\n retry_policy Bar\n provider baml-openai-chat\n options {\n model gpt-4-turbo\n api_key env.OPENAI_API_KEY\n }\n} \n\nclient GPT35 {\n provider baml-openai-chat\n options {\n model \"gpt-3.5-turbo\"\n api_key env.OPENAI_API_KEY\n }\n}\n\nclient Ollama {\n provider ollama\n options {\n model llama2\n api_key \"\"\n }\n}\n\nclient GPT35Azure {\n provider azure-openai\n options {\n resource_name \"west-us-azure-baml\"\n deployment_id \"gpt-35-turbo-default\"\n // base_url \"https://west-us-azure-baml.openai.azure.com/openai/deployments/gpt-35-turbo-default\"\n api_version \"2024-02-01\"\n api_key env.AZURE_OPENAI_API_KEY\n }\n}\n\n\nclient Claude {\n provider baml-anthropic-chat\n options {\n model claude-3-haiku-20240307\n api_key env.ANTHROPIC_API_KEY\n max_tokens 1000\n }\n}\n\nclient Resilient_SimpleSyntax {\n retry_policy Foo\n provider baml-fallback\n options {\n strategy [\n GPT4Turbo\n GPT35\n Lottery_SimpleSyntax\n ]\n }\n} \n\nclient Resilient_ComplexSyntax {\n provider baml-fallback\n options {\n strategy [\n {\n client GPT4Turbo\n }\n {\n client GPT35\n }\n {\n client Claude\n }\n ]\n }\n}\n \nclient Lottery_SimpleSyntax {\n provider baml-round-robin\n options {\n start 0\n strategy [\n GPT35\n Claude\n ]\n }\n}\n\nclient Lottery_ComplexSyntax {\n provider baml-round-robin\n options {\n start 0\n strategy [\n {\n client GPT35\n }\n {\n client Claude\n }\n ] \n }\n}",
+ "clients.baml": "retry_policy Bar {\n max_retries 3\n strategy {\n type exponential_backoff\n }\n}\n\nretry_policy Foo {\n max_retries 3\n strategy {\n type constant_delay\n delay_ms 100\n }\n}\n\nclient GPT4 {\n provider baml-openai-chat\n options {\n model gpt-4\n api_key env.OPENAI_API_KEY\n }\n} \n\n\nclient GPT4o {\n provider baml-openai-chat\n options {\n model gpt-4o\n api_key env.OPENAI_API_KEY\n }\n} \n\n\nclient GPT4Turbo {\n retry_policy Bar\n provider baml-openai-chat\n options {\n model gpt-4-turbo\n api_key env.OPENAI_API_KEY\n }\n} \n\nclient GPT35 {\n provider baml-openai-chat\n options {\n model \"gpt-3.5-turbo\"\n api_key env.OPENAI_API_KEY\n }\n}\n\nclient Ollama {\n provider ollama\n options {\n model llama2\n api_key \"\"\n }\n}\n\nclient GPT35Azure {\n provider azure-openai\n options {\n resource_name \"west-us-azure-baml\"\n deployment_id \"gpt-35-turbo-default\"\n // base_url \"https://west-us-azure-baml.openai.azure.com/openai/deployments/gpt-35-turbo-default\"\n api_version \"2024-02-01\"\n api_key env.AZURE_OPENAI_API_KEY\n }\n}\n\n\nclient Claude {\n provider anthropic\n options {\n model claude-3-haiku-20240307\n api_key env.ANTHROPIC_API_KEY\n max_tokens 1000\n }\n}\n\nclient Resilient_SimpleSyntax {\n retry_policy Foo\n provider baml-fallback\n options {\n strategy [\n GPT4Turbo\n GPT35\n Lottery_SimpleSyntax\n ]\n }\n} \n\nclient Resilient_ComplexSyntax {\n provider baml-fallback\n options {\n strategy [\n {\n client GPT4Turbo\n }\n {\n client GPT35\n }\n {\n client Claude\n }\n ]\n }\n}\n \nclient Lottery_SimpleSyntax {\n provider baml-round-robin\n options {\n start 0\n strategy [\n GPT35\n Claude\n ]\n }\n}\n\nclient Lottery_ComplexSyntax {\n provider baml-round-robin\n options {\n start 0\n strategy [\n {\n client GPT35\n }\n {\n client Claude\n }\n ] \n }\n}",
"fiddle-examples/chain-of-thought.baml": "class Email {\n subject string\n body string\n from_address string\n}\n\nenum OrderStatus {\n ORDERED\n SHIPPED\n DELIVERED\n CANCELLED\n}\n\nclass OrderInfo {\n order_status OrderStatus\n tracking_number string?\n estimated_arrival_date string?\n}\n\nfunction GetOrderInfo(email: Email) -> OrderInfo {\n client GPT4\n prompt #\"\n Given the email below:\n\n ```\n from: {{email.from_address}}\n Email Subject: {{email.subject}}\n Email Body: {{email.body}}\n ```\n\n Extract this info from the email in JSON format:\n {{ ctx.output_format }}\n\n Before you output the JSON, please explain your\n reasoning step-by-step. Here is an example on how to do this:\n 'If we think step by step we can see that ...\n therefore the output JSON is:\n {\n ... the json schema ...\n }'\n \"#\n}",
- "fiddle-examples/chat-roles.baml": "// This will be available as an enum in your Python and Typescript code.\nenum Category2 {\n Refund\n CancelOrder\n TechnicalSupport\n AccountIssue\n Question\n}\n\nfunction ClassifyMessage2(input: string) -> Category {\n client GPT4\n\n prompt #\"\n {{ _.chat(\"system\") }}\n // You can use _.chat(\"system\") to indicate that this text should be a system message\n\n Classify the following INPUT into ONE\n of the following categories:\n\n {{ ctx.output_format }}\n\n {{ _.chat(\"user\") }}\n // And _.chat(\"user\") to indicate that this text should be a user message\n\n INPUT: {{ input }}\n\n Response:\n \"#\n}",
+ "fiddle-examples/chat-roles.baml": "// This will be available as an enum in your Python and Typescript code.\nenum Category2 {\n Refund\n CancelOrder\n TechnicalSupport\n AccountIssue\n Question\n}\n\nfunction ClassifyMessage2(input: string) -> Category {\n client GPT4\n\n prompt #\"\n {{ _.role(\"system\") }}\n // You can use _.role(\"system\") to indicate that this text should be a system message\n\n Classify the following INPUT into ONE\n of the following categories:\n\n {{ ctx.output_format }}\n\n {{ _.role(\"user\") }}\n // And _.role(\"user\") to indicate that this text should be a user message\n\n INPUT: {{ input }}\n\n Response:\n \"#\n}",
"fiddle-examples/classify-message.baml": "// This will be available as an enum in your Python and Typescript code.\nenum Category {\n Refund\n CancelOrder\n TechnicalSupport\n AccountIssue\n Question\n}\n\nfunction ClassifyMessage(input: string) -> Category {\n client GPT4\n\n prompt #\"\n Classify the following INPUT into ONE\n of the following categories:\n\n INPUT: {{ input }}\n\n {{ ctx.output_format }}\n\n Response:\n \"#\n}",
"fiddle-examples/extract-names.baml": "function ExtractNames(input: string) -> string[] {\n client GPT4\n prompt #\"\n Extract the names from this INPUT:\n \n INPUT:\n ---\n {{ input }}\n ---\n\n {{ ctx.output_format }}\n\n Response:\n \"#\n}\n",
- "fiddle-examples/images/image.baml": "function DescribeImage(img: image) -> string {\n client GPT4Turbo\n prompt #\"\n {{ _.chat(role=\"user\") }}\n\n\n Describe the image below in 5 words:\n {{ img }}\n \"#\n\n}\n\nclass FakeImage {\n url string\n}\n\nclass ClassWithImage {\n myImage image\n param2 string\n fake_image FakeImage\n}\n\n// chat role user present\nfunction DescribeImage2(classWithImage: ClassWithImage, img2: image) -> string {\n client GPT4Turbo\n prompt #\"\n {{ _.chat(role=\"user\") }}\n You should return 2 answers that answer the following commands.\n\n 1. Describe this in 5 words:\n {{ classWithImage.myImage }}\n\n 2. Also tell me what's happening here in one sentence:\n {{ img2 }}\n \"#\n}\n\n// no chat role\nfunction DescribeImage3(classWithImage: ClassWithImage, img2: image) -> string {\n client GPT4Turbo\n prompt #\"\n Describe this in 5 words:\n {{ classWithImage.myImage }}\n\n Tell me also what's happening here in one sentence and relate it to the word {{ classWithImage.param2 }}:\n {{ img2 }}\n \"#\n}\n\n\n// system prompt and chat prompt\nfunction DescribeImage4(classWithImage: ClassWithImage, img2: image) -> string {\n client GPT4Turbo\n prompt #\"\n {{ _.chat(role=\"system\")}}\n\n Describe this in 5 words:\n {{ classWithImage.myImage }}\n\n Tell me also what's happening here in one sentence and relate it to the word {{ classWithImage.param2 }}:\n {{ img2 }}\n \"#\n}",
+ "fiddle-examples/images/image.baml": "function DescribeImage(img: image) -> string {\n client GPT4Turbo\n prompt #\"\n {{ _.role(\"user\") }}\n\n\n Describe the image below in 5 words:\n {{ img }}\n \"#\n\n}\n\nclass FakeImage {\n url string\n}\n\nclass ClassWithImage {\n myImage image\n param2 string\n fake_image FakeImage\n}\n\n// chat role user present\nfunction DescribeImage2(classWithImage: ClassWithImage, img2: image) -> string {\n client GPT4Turbo\n prompt #\"\n {{ _.role(\"user\") }}\n You should return 2 answers that answer the following commands.\n\n 1. Describe this in 5 words:\n {{ classWithImage.myImage }}\n\n 2. Also tell me what's happening here in one sentence:\n {{ img2 }}\n \"#\n}\n\n// no chat role\nfunction DescribeImage3(classWithImage: ClassWithImage, img2: image) -> string {\n client GPT4Turbo\n prompt #\"\n Describe this in 5 words:\n {{ classWithImage.myImage }}\n\n Tell me also what's happening here in one sentence and relate it to the word {{ classWithImage.param2 }}:\n {{ img2 }}\n \"#\n}\n\n\n// system prompt and chat prompt\nfunction DescribeImage4(classWithImage: ClassWithImage, img2: image) -> string {\n client GPT4Turbo\n prompt #\"\n {{ _.role(\"system\")}}\n\n Describe this in 5 words:\n {{ classWithImage.myImage }}\n\n Tell me also what's happening here in one sentence and relate it to the word {{ classWithImage.param2 }}:\n {{ img2 }}\n \"#\n}",
"fiddle-examples/symbol-tuning.baml": "enum Category3 {\n Refund @alias(\"k1\")\n @description(\"Customer wants to refund a product\")\n\n CancelOrder @alias(\"k2\")\n @description(\"Customer wants to cancel an order\")\n\n TechnicalSupport @alias(\"k3\")\n @description(\"Customer needs help with a technical issue unrelated to account creation or login\")\n\n AccountIssue @alias(\"k4\")\n @description(\"Specifically relates to account-login or account-creation\")\n\n Question @alias(\"k5\")\n @description(\"Customer has a question\")\n}\n\nfunction ClassifyMessage3(input: string) -> Category {\n client GPT4\n\n prompt #\"\n Classify the following INPUT into ONE\n of the following categories:\n\n INPUT: {{ input }}\n\n {{ ctx.output_format }}\n\n Response:\n \"#\n}",
"main.baml": "generator lang_python {\n output_type python/pydantic\n output_dir \"../python\"\n}\n\ngenerator lang_typescript {\n output_type typescript\n output_dir \"../typescript\"\n}\n",
"test-files/aliases/classes.baml": "class TestClassAlias {\n key string @alias(\"key-dash\") @description(#\"\n This is a description for key\n af asdf\n \"#)\n key2 string @alias(\"key21\")\n key3 string @alias(\"key with space\")\n key4 string //unaliased\n key5 string @alias(\"key.with.punctuation/123\")\n}\n\nfunction FnTestClassAlias(input: string) -> TestClassAlias {\n client GPT35\n prompt #\"\n {{ctx.output_format}}\n \"#\n}\n\ntest FnTestClassAlias {\n functions [FnTestClassAlias]\n args {\n input \"example input\"\n }\n}\n",
@@ -54,12 +54,12 @@
"test-files/functions/output/unions.baml": "class UnionTest_ReturnType {\n prop1 string | bool\n prop2 (float | bool)[]\n prop3 (float[] | bool[])\n}\n\nfunction UnionTest_Function(input: string | bool) -> UnionTest_ReturnType {\n client GPT35\n prompt #\"\n Return a JSON blob with this schema: \n {{ctx.output_format}}\n\n JSON:\n \"#\n}\n\ntest UnionTest_Function {\n functions [UnionTest_Function]\n args {\n input \"example input\"\n }\n}\n",
"test-files/functions/prompts/no-chat-messages.baml": "\n\nfunction PromptTestClaude(input: string) -> string {\n client Claude\n prompt #\"\n Tell me a haiku about {{ input }}\n \"#\n}\n\nfunction PromptTestOpenAI(input: string) -> string {\n client GPT35\n prompt #\"\n Tell me a haiku about {{ input }}\n \"#\n}",
"test-files/functions/prompts/with-chat-messages.baml": "\nfunction PromptTestOpenAIChat(input: string) -> string {\n client GPT35\n prompt #\"\n {{ _.role(\"system\") }}\n You are an assistant that always responds in a very excited way with emojis and also outputs this word 4 times after giving a response: {{ input }}\n \n {{ _.role(\"user\") }}\n Tell me a haiku about {{ input }}\n \"#\n}\n\nfunction PromptTestOpenAIChatNoSystem(input: string) -> string {\n client GPT35\n prompt #\"\n You are an assistant that always responds in a very excited way with emojis and also outputs this word 4 times after giving a response: {{ input }}\n \n {{ _.role(\"user\") }}\n Tell me a haiku about {{ input }}\n \"#\n}\n\nfunction PromptTestClaudeChat(input: string) -> string {\n client Claude\n prompt #\"\n {{ _.role(\"system\") }}\n You are an assistant that always responds in a very excited way with emojis and also outputs this word 4 times after giving a response: {{ input }}\n \n {{ _.role(\"user\") }}\n Tell me a haiku about {{ input }}\n \"#\n}\n\nfunction PromptTestClaudeChatNoSystem(input: string) -> string {\n client Claude\n prompt #\"\n You are an assistant that always responds in a very excited way with emojis and also outputs this word 4 times after giving a response: {{ input }}\n \n {{ _.role(\"user\") }}\n Tell me a haiku about {{ input }}\n \"#\n}\n\ntest PromptTestOpenAIChat {\n functions [PromptTestClaude, PromptTestOpenAI, PromptTestOpenAIChat, PromptTestOpenAIChatNoSystem, PromptTestClaudeChat, PromptTestClaudeChatNoSystem]\n args {\n input \"cats\"\n }\n}\n\ntest TestClaude {\n functions [PromptTestClaudeChatNoSystem]\n args {\n input \"lion\"\n }\n}",
- "test-files/functions/v2/basic.baml": "\n\nfunction ExtractResume2(resume: string) -> Resume {\n client Resilient_ComplexSyntax\n prompt #\"\n {{ _.chat('system') }}\n\n Extract the following information from the resume:\n\n Resume:\n <<<<\n {{ resume }}\n <<<<\n\n Output JSON schema:\n {{ ctx.output_format }}\n\n JSON:\n \"#\n}\n\n\nclass WithReasoning {\n value string\n reasoning string @description(#\"\n Why the value is a good fit.\n \"#)\n}\n\n\nclass SearchParams {\n dateRange int? @description(#\"\n In ISO duration format, e.g. P1Y2M10D.\n \"#)\n location string[]\n jobTitle WithReasoning? @description(#\"\n An exact job title, not a general category.\n \"#)\n company WithReasoning? @description(#\"\n The exact name of the company, not a product or service.\n \"#)\n description WithReasoning[] @description(#\"\n Any specific projects or features the user is looking for.\n \"#)\n tags (Tag | string)[]\n}\n\nenum Tag {\n Security\n AI\n Blockchain\n}\n\nfunction GetQuery(query: string) -> SearchParams {\n client GPT4\n prompt #\"\n Extract the following information from the query:\n\n Query:\n <<<<\n {{ query }}\n <<<<\n\n OUTPUT_JSON_SCHEMA:\n {{ ctx.output_format }}\n\n Before OUTPUT_JSON_SCHEMA, list 5 intentions the user may have.\n --- EXAMPLES ---\n 1. \n 2. \n 3. \n 4. \n 5. \n\n {\n ... // OUTPUT_JSON_SCHEMA\n }\n \"#\n}\n\nclass RaysData {\n dataType DataType\n value Resume | Event\n}\n\nenum DataType {\n Resume\n Event\n}\n\nclass Event {\n title string\n date string\n location string\n description string\n}\n\nfunction GetDataType(text: string) -> RaysData {\n client GPT4\n prompt #\"\n Extract the relevant info.\n\n Text:\n <<<<\n {{ text }}\n <<<<\n\n Output JSON schema:\n {{ ctx.output_format }}\n\n JSON:\n \"#\n}\n",
+ "test-files/functions/v2/basic.baml": "\n\nfunction ExtractResume2(resume: string) -> Resume {\n client Resilient_ComplexSyntax\n prompt #\"\n {{ _.role('system') }}\n\n Extract the following information from the resume:\n\n Resume:\n <<<<\n {{ resume }}\n <<<<\n\n Output JSON schema:\n {{ ctx.output_format }}\n\n JSON:\n \"#\n}\n\n\nclass WithReasoning {\n value string\n reasoning string @description(#\"\n Why the value is a good fit.\n \"#)\n}\n\n\nclass SearchParams {\n dateRange int? @description(#\"\n In ISO duration format, e.g. P1Y2M10D.\n \"#)\n location string[]\n jobTitle WithReasoning? @description(#\"\n An exact job title, not a general category.\n \"#)\n company WithReasoning? @description(#\"\n The exact name of the company, not a product or service.\n \"#)\n description WithReasoning[] @description(#\"\n Any specific projects or features the user is looking for.\n \"#)\n tags (Tag | string)[]\n}\n\nenum Tag {\n Security\n AI\n Blockchain\n}\n\nfunction GetQuery(query: string) -> SearchParams {\n client GPT4\n prompt #\"\n Extract the following information from the query:\n\n Query:\n <<<<\n {{ query }}\n <<<<\n\n OUTPUT_JSON_SCHEMA:\n {{ ctx.output_format }}\n\n Before OUTPUT_JSON_SCHEMA, list 5 intentions the user may have.\n --- EXAMPLES ---\n 1. \n 2. \n 3. \n 4. \n 5. \n\n {\n ... // OUTPUT_JSON_SCHEMA\n }\n \"#\n}\n\nclass RaysData {\n dataType DataType\n value Resume | Event\n}\n\nenum DataType {\n Resume\n Event\n}\n\nclass Event {\n title string\n date string\n location string\n description string\n}\n\nfunction GetDataType(text: string) -> RaysData {\n client GPT4\n prompt #\"\n Extract the relevant info.\n\n Text:\n <<<<\n {{ text }}\n <<<<\n\n Output JSON schema:\n {{ ctx.output_format }}\n\n JSON:\n \"#\n}\n",
"test-files/providers/providers.baml": "\n\nfunction TestOllama(input: string) -> string {\n client Ollama\n prompt #\"\n Write a nice haiku about {{ input }}\n \"#\n}\n\ntest TestProvider {\n functions [TestOllama]\n args {\n input \"the moon\"\n }\n}\n",
"test-files/strategies/fallback.baml": "\nclient FaultyClient {\n provider openai\n options {\n model unknown-model\n api_key env.OPENAI_API_KEY\n }\n}\n\n\nclient FallbackClient {\n provider fallback\n options {\n // first 2 clients are expected to fail.\n strategy [\n FaultyClient,\n RetryClientConstant,\n GPT35\n ]\n }\n}\n\nfunction TestFallbackClient() -> string {\n client FallbackClient\n // TODO make it return the client name instead\n prompt #\"\n Say a haiku about mexico.\n \"#\n}",
"test-files/strategies/retry.baml": "\nretry_policy Exponential {\n max_retries 3\n strategy {\n type exponential_backoff\n }\n}\n\nretry_policy Constant {\n max_retries 3\n strategy {\n type constant_delay\n delay_ms 100\n }\n}\n\nclient RetryClientConstant {\n provider openai\n retry_policy Constant\n options {\n model \"gpt-3.5-turbo\"\n api_key \"blah\"\n }\n}\n\nclient RetryClientExponential {\n provider openai\n retry_policy Exponential\n options {\n model \"gpt-3.5-turbo\"\n api_key \"blahh\"\n }\n}\n\nfunction TestRetryConstant() -> string {\n client RetryClientConstant\n prompt #\"\n Say a haiku\n \"#\n}\n\nfunction TestRetryExponential() -> string {\n client RetryClientExponential\n prompt #\"\n Say a haiku\n \"#\n}\n",
"test-files/strategies/roundrobin.baml": "",
- "test-files/testing_pipeline/resume.baml": "class Resume {\n name string\n email string\n phone string\n experience Education[]\n education string[]\n skills string[]\n}\n\nclass Education {\n institution string\n location string\n degree string\n major string[]\n graduation_date string?\n}\n\ntemplate_string AddRole(foo: string) #\"\n {{ _.chat('system')}}\n You are a {{ foo }}. be nice\n\n {{ _.chat('user') }}\n\"#\n\nfunction ExtractResume(resume: string, img: image) -> Resume {\n client GPT4\n prompt #\"\n {{ AddRole(\"Software Engineer\") }}\n\n Extract data:\n \n\n <<<<\n {{ resume }}\n <<<<\n\n {% if img %}\n {{img}}\n {% endif %}\n\n {{ ctx.output_format }}\n \"#\n}\n\ntest sam_resume {\n functions [ExtractResume]\n input {\n img {\n url \"https://avatars.githubusercontent.com/u/1016595?v=4\"\n }\n resume #\"\n Sam Lijin\n he/him | jobs@sxlijin.com | sxlijin.github.io | sxlijin | sxlijin\n\n Experience\n Trunk\n | July 2021 - current\n Trunk Check | Senior Software Engineer | Services TL, Mar 2023 - current | IC, July 2021 - Feb 2023\n Proposed, designed, and led a team of 3 to build a web experience for Check (both a web-only onboarding flow and SaaS offerings)\n Proposed and built vulnerability scanning into Check, enabling it to compete with security products such as Snyk\n Helped grow Check from <1K users to 90K+ users by focusing on product-led growth\n Google | Sept 2017 - June 2021\n User Identity SRE | Senior Software Engineer | IC, Mar 2021 - June 2021\n Designed an incremental key rotation system to limit the global outage risk to Google SSO\n Discovered and severed an undocumented Gmail serving dependency on Identity-internal systems\n Cloud Firestore | Senior Software Engineer | EngProd TL, Aug 2019 - Feb 2021 | IC, Sept 2017 - July 2019\n Metadata TTL system: backlog of XX trillion records, sustained 1M ops/sec, peaking at 3M ops/sec\n\n Designed and implemented a logging system with novel observability and privacy requirements\n Designed and implemented Jepsen-style testing to validate correctness guarantees\n Datastore Migration: zero downtime, xM RPS and xxPB of data over xM customers and 36 datacenters\n\n Designed composite index migration, queue processing migration, progressive rollout, fast rollback, and disk stockout mitigations; implemented transaction log replay, state transitions, and dark launch process\n Designed and implemented end-to-end correctness and performance testing\n Velocity improvements for 60-eng org\n\n Proposed and implemented automated rollbacks: got us out of a 3-month release freeze and prevented 5 outages over the next 6 months\n Proposed and implemented new development and release environments spanning 30+ microservices\n Incident response for API proxy rollback affecting every Google Cloud service\n\n Google App Engine Memcache | Software Engineer | EngProd TL, Apr 2019 - July 2019\n Proposed and led execution of test coverage improvement strategy for a new control plane: reduced rollbacks and ensured strong consistency of a distributed cache serving xxM QPS\n Designed and implemented automated performance regression testing for two critical serving paths\n Used to validate Google-wide rollout of AMD CPUs, by proving a 50p latency delta of <10µs\n Implemented on shared Borg (i.e. 
vulnerable to noisy neighbors) with <12% variance\n Miscellaneous | Sept 2017 - June 2021\n Redesigned the Noogler training on Google-internal storage technologies & trained 2500+ Nooglers\n Landed multiple google3-wide refactorings, each spanning xxK files (e.g. SWIG to CLIF)\n Education\n Vanderbilt University (Nashville, TN) | May 2017 | B.S. in Computer Science, Mathematics, and Political Science\n\n Stuyvesant HS (New York, NY) | 2013\n\n Skills\n C++, Java, Typescript, Javascript, Python, Bash; light experience with Rust, Golang, Scheme\n gRPC, Bazel, React, Linux\n Hobbies: climbing, skiing, photography\n \"#\n }\n}\n\ntest vaibhav_resume {\n functions [ExtractResume]\n input {\n resume #\"\n Vaibhav Gupta\n linkedin/vaigup\n (972) 400-5279\n vaibhavtheory@gmail.com\n EXPERIENCE\n Google,\n Software Engineer\n Dec 2018-Present\n Seattle, WA\n •\n Augmented Reality,\n Depth Team\n •\n Technical Lead for on-device optimizations\n •\n Optimized and designed front\n facing depth algorithm\n on Pixel 4\n •\n Focus: C++ and SIMD on custom silicon\n \n \n EDUCATION\n University of Texas at Austin\n Aug 2012-May 2015\n Bachelors of Engineering, Integrated Circuits\n Bachelors of Computer Science\n \"#\n }\n}",
+ "test-files/testing_pipeline/resume.baml": "class Resume {\n name string\n email string\n phone string\n experience Education[]\n education string[]\n skills string[]\n}\n\nclass Education {\n institution string\n location string\n degree string\n major string[]\n graduation_date string?\n}\n\ntemplate_string AddRole(foo: string) #\"\n {{ _.role('system')}}\n You are a {{ foo }}. be nice\n\n {{ _.role('user') }}\n\"#\n\nfunction ExtractResume(resume: string, img: image) -> Resume {\n client GPT4\n prompt #\"\n {{ AddRole(\"Software Engineer\") }}\n\n Extract data:\n \n\n <<<<\n {{ resume }}\n <<<<\n\n {% if img %}\n {{img}}\n {% endif %}\n\n {{ ctx.output_format }}\n \"#\n}\n\ntest sam_resume {\n functions [ExtractResume]\n input {\n img {\n url \"https://avatars.githubusercontent.com/u/1016595?v=4\"\n }\n resume #\"\n Sam Lijin\n he/him | jobs@sxlijin.com | sxlijin.github.io | sxlijin | sxlijin\n\n Experience\n Trunk\n | July 2021 - current\n Trunk Check | Senior Software Engineer | Services TL, Mar 2023 - current | IC, July 2021 - Feb 2023\n Proposed, designed, and led a team of 3 to build a web experience for Check (both a web-only onboarding flow and SaaS offerings)\n Proposed and built vulnerability scanning into Check, enabling it to compete with security products such as Snyk\n Helped grow Check from <1K users to 90K+ users by focusing on product-led growth\n Google | Sept 2017 - June 2021\n User Identity SRE | Senior Software Engineer | IC, Mar 2021 - June 2021\n Designed an incremental key rotation system to limit the global outage risk to Google SSO\n Discovered and severed an undocumented Gmail serving dependency on Identity-internal systems\n Cloud Firestore | Senior Software Engineer | EngProd TL, Aug 2019 - Feb 2021 | IC, Sept 2017 - July 2019\n Metadata TTL system: backlog of XX trillion records, sustained 1M ops/sec, peaking at 3M ops/sec\n\n Designed and implemented a logging system with novel observability and privacy requirements\n Designed and implemented Jepsen-style testing to validate correctness guarantees\n Datastore Migration: zero downtime, xM RPS and xxPB of data over xM customers and 36 datacenters\n\n Designed composite index migration, queue processing migration, progressive rollout, fast rollback, and disk stockout mitigations; implemented transaction log replay, state transitions, and dark launch process\n Designed and implemented end-to-end correctness and performance testing\n Velocity improvements for 60-eng org\n\n Proposed and implemented automated rollbacks: got us out of a 3-month release freeze and prevented 5 outages over the next 6 months\n Proposed and implemented new development and release environments spanning 30+ microservices\n Incident response for API proxy rollback affecting every Google Cloud service\n\n Google App Engine Memcache | Software Engineer | EngProd TL, Apr 2019 - July 2019\n Proposed and led execution of test coverage improvement strategy for a new control plane: reduced rollbacks and ensured strong consistency of a distributed cache serving xxM QPS\n Designed and implemented automated performance regression testing for two critical serving paths\n Used to validate Google-wide rollout of AMD CPUs, by proving a 50p latency delta of <10µs\n Implemented on shared Borg (i.e. 
vulnerable to noisy neighbors) with <12% variance\n Miscellaneous | Sept 2017 - June 2021\n Redesigned the Noogler training on Google-internal storage technologies & trained 2500+ Nooglers\n Landed multiple google3-wide refactorings, each spanning xxK files (e.g. SWIG to CLIF)\n Education\n Vanderbilt University (Nashville, TN) | May 2017 | B.S. in Computer Science, Mathematics, and Political Science\n\n Stuyvesant HS (New York, NY) | 2013\n\n Skills\n C++, Java, Typescript, Javascript, Python, Bash; light experience with Rust, Golang, Scheme\n gRPC, Bazel, React, Linux\n Hobbies: climbing, skiing, photography\n \"#\n }\n}\n\ntest vaibhav_resume {\n functions [ExtractResume]\n input {\n resume #\"\n Vaibhav Gupta\n linkedin/vaigup\n (972) 400-5279\n vaibhavtheory@gmail.com\n EXPERIENCE\n Google,\n Software Engineer\n Dec 2018-Present\n Seattle, WA\n •\n Augmented Reality,\n Depth Team\n •\n Technical Lead for on-device optimizations\n •\n Optimized and designed front\n facing depth algorithm\n on Pixel 4\n •\n Focus: C++ and SIMD on custom silicon\n \n \n EDUCATION\n University of Texas at Austin\n Aug 2012-May 2015\n Bachelors of Engineering, Integrated Circuits\n Bachelors of Computer Science\n \"#\n }\n}",
}
def get_baml_files():
diff --git a/integ-tests/typescript/baml_client/inlinedbaml.ts b/integ-tests/typescript/baml_client/inlinedbaml.ts
index 7124d0ebe..baa83d63e 100644
--- a/integ-tests/typescript/baml_client/inlinedbaml.ts
+++ b/integ-tests/typescript/baml_client/inlinedbaml.ts
@@ -17,12 +17,12 @@ $ pnpm add @boundaryml/baml
/* eslint-disable */
const fileMap = {
- "clients.baml": "retry_policy Bar {\n max_retries 3\n strategy {\n type exponential_backoff\n }\n}\n\nretry_policy Foo {\n max_retries 3\n strategy {\n type constant_delay\n delay_ms 100\n }\n}\n\nclient GPT4 {\n provider baml-openai-chat\n options {\n model gpt-4\n api_key env.OPENAI_API_KEY\n }\n} \n\n\nclient GPT4o {\n provider baml-openai-chat\n options {\n model gpt-4o\n api_key env.OPENAI_API_KEY\n }\n} \n\n\nclient GPT4Turbo {\n retry_policy Bar\n provider baml-openai-chat\n options {\n model gpt-4-turbo\n api_key env.OPENAI_API_KEY\n }\n} \n\nclient GPT35 {\n provider baml-openai-chat\n options {\n model \"gpt-3.5-turbo\"\n api_key env.OPENAI_API_KEY\n }\n}\n\nclient Ollama {\n provider ollama\n options {\n model llama2\n api_key \"\"\n }\n}\n\nclient GPT35Azure {\n provider azure-openai\n options {\n resource_name \"west-us-azure-baml\"\n deployment_id \"gpt-35-turbo-default\"\n // base_url \"https://west-us-azure-baml.openai.azure.com/openai/deployments/gpt-35-turbo-default\"\n api_version \"2024-02-01\"\n api_key env.AZURE_OPENAI_API_KEY\n }\n}\n\n\nclient Claude {\n provider baml-anthropic-chat\n options {\n model claude-3-haiku-20240307\n api_key env.ANTHROPIC_API_KEY\n max_tokens 1000\n }\n}\n\nclient Resilient_SimpleSyntax {\n retry_policy Foo\n provider baml-fallback\n options {\n strategy [\n GPT4Turbo\n GPT35\n Lottery_SimpleSyntax\n ]\n }\n} \n\nclient Resilient_ComplexSyntax {\n provider baml-fallback\n options {\n strategy [\n {\n client GPT4Turbo\n }\n {\n client GPT35\n }\n {\n client Claude\n }\n ]\n }\n}\n \nclient Lottery_SimpleSyntax {\n provider baml-round-robin\n options {\n start 0\n strategy [\n GPT35\n Claude\n ]\n }\n}\n\nclient Lottery_ComplexSyntax {\n provider baml-round-robin\n options {\n start 0\n strategy [\n {\n client GPT35\n }\n {\n client Claude\n }\n ] \n }\n}",
+ "clients.baml": "retry_policy Bar {\n max_retries 3\n strategy {\n type exponential_backoff\n }\n}\n\nretry_policy Foo {\n max_retries 3\n strategy {\n type constant_delay\n delay_ms 100\n }\n}\n\nclient GPT4 {\n provider baml-openai-chat\n options {\n model gpt-4\n api_key env.OPENAI_API_KEY\n }\n} \n\n\nclient GPT4o {\n provider baml-openai-chat\n options {\n model gpt-4o\n api_key env.OPENAI_API_KEY\n }\n} \n\n\nclient GPT4Turbo {\n retry_policy Bar\n provider baml-openai-chat\n options {\n model gpt-4-turbo\n api_key env.OPENAI_API_KEY\n }\n} \n\nclient GPT35 {\n provider baml-openai-chat\n options {\n model \"gpt-3.5-turbo\"\n api_key env.OPENAI_API_KEY\n }\n}\n\nclient Ollama {\n provider ollama\n options {\n model llama2\n api_key \"\"\n }\n}\n\nclient GPT35Azure {\n provider azure-openai\n options {\n resource_name \"west-us-azure-baml\"\n deployment_id \"gpt-35-turbo-default\"\n // base_url \"https://west-us-azure-baml.openai.azure.com/openai/deployments/gpt-35-turbo-default\"\n api_version \"2024-02-01\"\n api_key env.AZURE_OPENAI_API_KEY\n }\n}\n\n\nclient Claude {\n provider anthropic\n options {\n model claude-3-haiku-20240307\n api_key env.ANTHROPIC_API_KEY\n max_tokens 1000\n }\n}\n\nclient Resilient_SimpleSyntax {\n retry_policy Foo\n provider baml-fallback\n options {\n strategy [\n GPT4Turbo\n GPT35\n Lottery_SimpleSyntax\n ]\n }\n} \n\nclient Resilient_ComplexSyntax {\n provider baml-fallback\n options {\n strategy [\n {\n client GPT4Turbo\n }\n {\n client GPT35\n }\n {\n client Claude\n }\n ]\n }\n}\n \nclient Lottery_SimpleSyntax {\n provider baml-round-robin\n options {\n start 0\n strategy [\n GPT35\n Claude\n ]\n }\n}\n\nclient Lottery_ComplexSyntax {\n provider baml-round-robin\n options {\n start 0\n strategy [\n {\n client GPT35\n }\n {\n client Claude\n }\n ] \n }\n}",
"fiddle-examples/chain-of-thought.baml": "class Email {\n subject string\n body string\n from_address string\n}\n\nenum OrderStatus {\n ORDERED\n SHIPPED\n DELIVERED\n CANCELLED\n}\n\nclass OrderInfo {\n order_status OrderStatus\n tracking_number string?\n estimated_arrival_date string?\n}\n\nfunction GetOrderInfo(email: Email) -> OrderInfo {\n client GPT4\n prompt #\"\n Given the email below:\n\n ```\n from: {{email.from_address}}\n Email Subject: {{email.subject}}\n Email Body: {{email.body}}\n ```\n\n Extract this info from the email in JSON format:\n {{ ctx.output_format }}\n\n Before you output the JSON, please explain your\n reasoning step-by-step. Here is an example on how to do this:\n 'If we think step by step we can see that ...\n therefore the output JSON is:\n {\n ... the json schema ...\n }'\n \"#\n}",
- "fiddle-examples/chat-roles.baml": "// This will be available as an enum in your Python and Typescript code.\nenum Category2 {\n Refund\n CancelOrder\n TechnicalSupport\n AccountIssue\n Question\n}\n\nfunction ClassifyMessage2(input: string) -> Category {\n client GPT4\n\n prompt #\"\n {{ _.chat(\"system\") }}\n // You can use _.chat(\"system\") to indicate that this text should be a system message\n\n Classify the following INPUT into ONE\n of the following categories:\n\n {{ ctx.output_format }}\n\n {{ _.chat(\"user\") }}\n // And _.chat(\"user\") to indicate that this text should be a user message\n\n INPUT: {{ input }}\n\n Response:\n \"#\n}",
+ "fiddle-examples/chat-roles.baml": "// This will be available as an enum in your Python and Typescript code.\nenum Category2 {\n Refund\n CancelOrder\n TechnicalSupport\n AccountIssue\n Question\n}\n\nfunction ClassifyMessage2(input: string) -> Category {\n client GPT4\n\n prompt #\"\n {{ _.role(\"system\") }}\n // You can use _.role(\"system\") to indicate that this text should be a system message\n\n Classify the following INPUT into ONE\n of the following categories:\n\n {{ ctx.output_format }}\n\n {{ _.role(\"user\") }}\n // And _.role(\"user\") to indicate that this text should be a user message\n\n INPUT: {{ input }}\n\n Response:\n \"#\n}",
"fiddle-examples/classify-message.baml": "// This will be available as an enum in your Python and Typescript code.\nenum Category {\n Refund\n CancelOrder\n TechnicalSupport\n AccountIssue\n Question\n}\n\nfunction ClassifyMessage(input: string) -> Category {\n client GPT4\n\n prompt #\"\n Classify the following INPUT into ONE\n of the following categories:\n\n INPUT: {{ input }}\n\n {{ ctx.output_format }}\n\n Response:\n \"#\n}",
"fiddle-examples/extract-names.baml": "function ExtractNames(input: string) -> string[] {\n client GPT4\n prompt #\"\n Extract the names from this INPUT:\n \n INPUT:\n ---\n {{ input }}\n ---\n\n {{ ctx.output_format }}\n\n Response:\n \"#\n}\n",
- "fiddle-examples/images/image.baml": "function DescribeImage(img: image) -> string {\n client GPT4Turbo\n prompt #\"\n {{ _.chat(role=\"user\") }}\n\n\n Describe the image below in 5 words:\n {{ img }}\n \"#\n\n}\n\nclass FakeImage {\n url string\n}\n\nclass ClassWithImage {\n myImage image\n param2 string\n fake_image FakeImage\n}\n\n// chat role user present\nfunction DescribeImage2(classWithImage: ClassWithImage, img2: image) -> string {\n client GPT4Turbo\n prompt #\"\n {{ _.chat(role=\"user\") }}\n You should return 2 answers that answer the following commands.\n\n 1. Describe this in 5 words:\n {{ classWithImage.myImage }}\n\n 2. Also tell me what's happening here in one sentence:\n {{ img2 }}\n \"#\n}\n\n// no chat role\nfunction DescribeImage3(classWithImage: ClassWithImage, img2: image) -> string {\n client GPT4Turbo\n prompt #\"\n Describe this in 5 words:\n {{ classWithImage.myImage }}\n\n Tell me also what's happening here in one sentence and relate it to the word {{ classWithImage.param2 }}:\n {{ img2 }}\n \"#\n}\n\n\n// system prompt and chat prompt\nfunction DescribeImage4(classWithImage: ClassWithImage, img2: image) -> string {\n client GPT4Turbo\n prompt #\"\n {{ _.chat(role=\"system\")}}\n\n Describe this in 5 words:\n {{ classWithImage.myImage }}\n\n Tell me also what's happening here in one sentence and relate it to the word {{ classWithImage.param2 }}:\n {{ img2 }}\n \"#\n}",
+ "fiddle-examples/images/image.baml": "function DescribeImage(img: image) -> string {\n client GPT4Turbo\n prompt #\"\n {{ _.role(\"user\") }}\n\n\n Describe the image below in 5 words:\n {{ img }}\n \"#\n\n}\n\nclass FakeImage {\n url string\n}\n\nclass ClassWithImage {\n myImage image\n param2 string\n fake_image FakeImage\n}\n\n// chat role user present\nfunction DescribeImage2(classWithImage: ClassWithImage, img2: image) -> string {\n client GPT4Turbo\n prompt #\"\n {{ _.role(\"user\") }}\n You should return 2 answers that answer the following commands.\n\n 1. Describe this in 5 words:\n {{ classWithImage.myImage }}\n\n 2. Also tell me what's happening here in one sentence:\n {{ img2 }}\n \"#\n}\n\n// no chat role\nfunction DescribeImage3(classWithImage: ClassWithImage, img2: image) -> string {\n client GPT4Turbo\n prompt #\"\n Describe this in 5 words:\n {{ classWithImage.myImage }}\n\n Tell me also what's happening here in one sentence and relate it to the word {{ classWithImage.param2 }}:\n {{ img2 }}\n \"#\n}\n\n\n// system prompt and chat prompt\nfunction DescribeImage4(classWithImage: ClassWithImage, img2: image) -> string {\n client GPT4Turbo\n prompt #\"\n {{ _.role(\"system\")}}\n\n Describe this in 5 words:\n {{ classWithImage.myImage }}\n\n Tell me also what's happening here in one sentence and relate it to the word {{ classWithImage.param2 }}:\n {{ img2 }}\n \"#\n}",
"fiddle-examples/symbol-tuning.baml": "enum Category3 {\n Refund @alias(\"k1\")\n @description(\"Customer wants to refund a product\")\n\n CancelOrder @alias(\"k2\")\n @description(\"Customer wants to cancel an order\")\n\n TechnicalSupport @alias(\"k3\")\n @description(\"Customer needs help with a technical issue unrelated to account creation or login\")\n\n AccountIssue @alias(\"k4\")\n @description(\"Specifically relates to account-login or account-creation\")\n\n Question @alias(\"k5\")\n @description(\"Customer has a question\")\n}\n\nfunction ClassifyMessage3(input: string) -> Category {\n client GPT4\n\n prompt #\"\n Classify the following INPUT into ONE\n of the following categories:\n\n INPUT: {{ input }}\n\n {{ ctx.output_format }}\n\n Response:\n \"#\n}",
"main.baml": "generator lang_python {\n output_type python/pydantic\n output_dir \"../python\"\n}\n\ngenerator lang_typescript {\n output_type typescript\n output_dir \"../typescript\"\n}\n",
"test-files/aliases/classes.baml": "class TestClassAlias {\n key string @alias(\"key-dash\") @description(#\"\n This is a description for key\n af asdf\n \"#)\n key2 string @alias(\"key21\")\n key3 string @alias(\"key with space\")\n key4 string //unaliased\n key5 string @alias(\"key.with.punctuation/123\")\n}\n\nfunction FnTestClassAlias(input: string) -> TestClassAlias {\n client GPT35\n prompt #\"\n {{ctx.output_format}}\n \"#\n}\n\ntest FnTestClassAlias {\n functions [FnTestClassAlias]\n args {\n input \"example input\"\n }\n}\n",
@@ -54,12 +54,12 @@ const fileMap = {
"test-files/functions/output/unions.baml": "class UnionTest_ReturnType {\n prop1 string | bool\n prop2 (float | bool)[]\n prop3 (float[] | bool[])\n}\n\nfunction UnionTest_Function(input: string | bool) -> UnionTest_ReturnType {\n client GPT35\n prompt #\"\n Return a JSON blob with this schema: \n {{ctx.output_format}}\n\n JSON:\n \"#\n}\n\ntest UnionTest_Function {\n functions [UnionTest_Function]\n args {\n input \"example input\"\n }\n}\n",
"test-files/functions/prompts/no-chat-messages.baml": "\n\nfunction PromptTestClaude(input: string) -> string {\n client Claude\n prompt #\"\n Tell me a haiku about {{ input }}\n \"#\n}\n\nfunction PromptTestOpenAI(input: string) -> string {\n client GPT35\n prompt #\"\n Tell me a haiku about {{ input }}\n \"#\n}",
"test-files/functions/prompts/with-chat-messages.baml": "\nfunction PromptTestOpenAIChat(input: string) -> string {\n client GPT35\n prompt #\"\n {{ _.role(\"system\") }}\n You are an assistant that always responds in a very excited way with emojis and also outputs this word 4 times after giving a response: {{ input }}\n \n {{ _.role(\"user\") }}\n Tell me a haiku about {{ input }}\n \"#\n}\n\nfunction PromptTestOpenAIChatNoSystem(input: string) -> string {\n client GPT35\n prompt #\"\n You are an assistant that always responds in a very excited way with emojis and also outputs this word 4 times after giving a response: {{ input }}\n \n {{ _.role(\"user\") }}\n Tell me a haiku about {{ input }}\n \"#\n}\n\nfunction PromptTestClaudeChat(input: string) -> string {\n client Claude\n prompt #\"\n {{ _.role(\"system\") }}\n You are an assistant that always responds in a very excited way with emojis and also outputs this word 4 times after giving a response: {{ input }}\n \n {{ _.role(\"user\") }}\n Tell me a haiku about {{ input }}\n \"#\n}\n\nfunction PromptTestClaudeChatNoSystem(input: string) -> string {\n client Claude\n prompt #\"\n You are an assistant that always responds in a very excited way with emojis and also outputs this word 4 times after giving a response: {{ input }}\n \n {{ _.role(\"user\") }}\n Tell me a haiku about {{ input }}\n \"#\n}\n\ntest PromptTestOpenAIChat {\n functions [PromptTestClaude, PromptTestOpenAI, PromptTestOpenAIChat, PromptTestOpenAIChatNoSystem, PromptTestClaudeChat, PromptTestClaudeChatNoSystem]\n args {\n input \"cats\"\n }\n}\n\ntest TestClaude {\n functions [PromptTestClaudeChatNoSystem]\n args {\n input \"lion\"\n }\n}",
- "test-files/functions/v2/basic.baml": "\n\nfunction ExtractResume2(resume: string) -> Resume {\n client Resilient_ComplexSyntax\n prompt #\"\n {{ _.chat('system') }}\n\n Extract the following information from the resume:\n\n Resume:\n <<<<\n {{ resume }}\n <<<<\n\n Output JSON schema:\n {{ ctx.output_format }}\n\n JSON:\n \"#\n}\n\n\nclass WithReasoning {\n value string\n reasoning string @description(#\"\n Why the value is a good fit.\n \"#)\n}\n\n\nclass SearchParams {\n dateRange int? @description(#\"\n In ISO duration format, e.g. P1Y2M10D.\n \"#)\n location string[]\n jobTitle WithReasoning? @description(#\"\n An exact job title, not a general category.\n \"#)\n company WithReasoning? @description(#\"\n The exact name of the company, not a product or service.\n \"#)\n description WithReasoning[] @description(#\"\n Any specific projects or features the user is looking for.\n \"#)\n tags (Tag | string)[]\n}\n\nenum Tag {\n Security\n AI\n Blockchain\n}\n\nfunction GetQuery(query: string) -> SearchParams {\n client GPT4\n prompt #\"\n Extract the following information from the query:\n\n Query:\n <<<<\n {{ query }}\n <<<<\n\n OUTPUT_JSON_SCHEMA:\n {{ ctx.output_format }}\n\n Before OUTPUT_JSON_SCHEMA, list 5 intentions the user may have.\n --- EXAMPLES ---\n 1. \n 2. \n 3. \n 4. \n 5. \n\n {\n ... // OUTPUT_JSON_SCHEMA\n }\n \"#\n}\n\nclass RaysData {\n dataType DataType\n value Resume | Event\n}\n\nenum DataType {\n Resume\n Event\n}\n\nclass Event {\n title string\n date string\n location string\n description string\n}\n\nfunction GetDataType(text: string) -> RaysData {\n client GPT4\n prompt #\"\n Extract the relevant info.\n\n Text:\n <<<<\n {{ text }}\n <<<<\n\n Output JSON schema:\n {{ ctx.output_format }}\n\n JSON:\n \"#\n}\n",
+ "test-files/functions/v2/basic.baml": "\n\nfunction ExtractResume2(resume: string) -> Resume {\n client Resilient_ComplexSyntax\n prompt #\"\n {{ _.role('system') }}\n\n Extract the following information from the resume:\n\n Resume:\n <<<<\n {{ resume }}\n <<<<\n\n Output JSON schema:\n {{ ctx.output_format }}\n\n JSON:\n \"#\n}\n\n\nclass WithReasoning {\n value string\n reasoning string @description(#\"\n Why the value is a good fit.\n \"#)\n}\n\n\nclass SearchParams {\n dateRange int? @description(#\"\n In ISO duration format, e.g. P1Y2M10D.\n \"#)\n location string[]\n jobTitle WithReasoning? @description(#\"\n An exact job title, not a general category.\n \"#)\n company WithReasoning? @description(#\"\n The exact name of the company, not a product or service.\n \"#)\n description WithReasoning[] @description(#\"\n Any specific projects or features the user is looking for.\n \"#)\n tags (Tag | string)[]\n}\n\nenum Tag {\n Security\n AI\n Blockchain\n}\n\nfunction GetQuery(query: string) -> SearchParams {\n client GPT4\n prompt #\"\n Extract the following information from the query:\n\n Query:\n <<<<\n {{ query }}\n <<<<\n\n OUTPUT_JSON_SCHEMA:\n {{ ctx.output_format }}\n\n Before OUTPUT_JSON_SCHEMA, list 5 intentions the user may have.\n --- EXAMPLES ---\n 1. \n 2. \n 3. \n 4. \n 5. \n\n {\n ... // OUTPUT_JSON_SCHEMA\n }\n \"#\n}\n\nclass RaysData {\n dataType DataType\n value Resume | Event\n}\n\nenum DataType {\n Resume\n Event\n}\n\nclass Event {\n title string\n date string\n location string\n description string\n}\n\nfunction GetDataType(text: string) -> RaysData {\n client GPT4\n prompt #\"\n Extract the relevant info.\n\n Text:\n <<<<\n {{ text }}\n <<<<\n\n Output JSON schema:\n {{ ctx.output_format }}\n\n JSON:\n \"#\n}\n",
"test-files/providers/providers.baml": "\n\nfunction TestOllama(input: string) -> string {\n client Ollama\n prompt #\"\n Write a nice haiku about {{ input }}\n \"#\n}\n\ntest TestProvider {\n functions [TestOllama]\n args {\n input \"the moon\"\n }\n}\n",
"test-files/strategies/fallback.baml": "\nclient FaultyClient {\n provider openai\n options {\n model unknown-model\n api_key env.OPENAI_API_KEY\n }\n}\n\n\nclient FallbackClient {\n provider fallback\n options {\n // first 2 clients are expected to fail.\n strategy [\n FaultyClient,\n RetryClientConstant,\n GPT35\n ]\n }\n}\n\nfunction TestFallbackClient() -> string {\n client FallbackClient\n // TODO make it return the client name instead\n prompt #\"\n Say a haiku about mexico.\n \"#\n}",
"test-files/strategies/retry.baml": "\nretry_policy Exponential {\n max_retries 3\n strategy {\n type exponential_backoff\n }\n}\n\nretry_policy Constant {\n max_retries 3\n strategy {\n type constant_delay\n delay_ms 100\n }\n}\n\nclient RetryClientConstant {\n provider openai\n retry_policy Constant\n options {\n model \"gpt-3.5-turbo\"\n api_key \"blah\"\n }\n}\n\nclient RetryClientExponential {\n provider openai\n retry_policy Exponential\n options {\n model \"gpt-3.5-turbo\"\n api_key \"blahh\"\n }\n}\n\nfunction TestRetryConstant() -> string {\n client RetryClientConstant\n prompt #\"\n Say a haiku\n \"#\n}\n\nfunction TestRetryExponential() -> string {\n client RetryClientExponential\n prompt #\"\n Say a haiku\n \"#\n}\n",
"test-files/strategies/roundrobin.baml": "",
- "test-files/testing_pipeline/resume.baml": "class Resume {\n name string\n email string\n phone string\n experience Education[]\n education string[]\n skills string[]\n}\n\nclass Education {\n institution string\n location string\n degree string\n major string[]\n graduation_date string?\n}\n\ntemplate_string AddRole(foo: string) #\"\n {{ _.chat('system')}}\n You are a {{ foo }}. be nice\n\n {{ _.chat('user') }}\n\"#\n\nfunction ExtractResume(resume: string, img: image) -> Resume {\n client GPT4\n prompt #\"\n {{ AddRole(\"Software Engineer\") }}\n\n Extract data:\n \n\n <<<<\n {{ resume }}\n <<<<\n\n {% if img %}\n {{img}}\n {% endif %}\n\n {{ ctx.output_format }}\n \"#\n}\n\ntest sam_resume {\n functions [ExtractResume]\n input {\n img {\n url \"https://avatars.githubusercontent.com/u/1016595?v=4\"\n }\n resume #\"\n Sam Lijin\n he/him | jobs@sxlijin.com | sxlijin.github.io | sxlijin | sxlijin\n\n Experience\n Trunk\n | July 2021 - current\n Trunk Check | Senior Software Engineer | Services TL, Mar 2023 - current | IC, July 2021 - Feb 2023\n Proposed, designed, and led a team of 3 to build a web experience for Check (both a web-only onboarding flow and SaaS offerings)\n Proposed and built vulnerability scanning into Check, enabling it to compete with security products such as Snyk\n Helped grow Check from <1K users to 90K+ users by focusing on product-led growth\n Google | Sept 2017 - June 2021\n User Identity SRE | Senior Software Engineer | IC, Mar 2021 - June 2021\n Designed an incremental key rotation system to limit the global outage risk to Google SSO\n Discovered and severed an undocumented Gmail serving dependency on Identity-internal systems\n Cloud Firestore | Senior Software Engineer | EngProd TL, Aug 2019 - Feb 2021 | IC, Sept 2017 - July 2019\n Metadata TTL system: backlog of XX trillion records, sustained 1M ops/sec, peaking at 3M ops/sec\n\n Designed and implemented a logging system with novel observability and privacy requirements\n Designed and implemented Jepsen-style testing to validate correctness guarantees\n Datastore Migration: zero downtime, xM RPS and xxPB of data over xM customers and 36 datacenters\n\n Designed composite index migration, queue processing migration, progressive rollout, fast rollback, and disk stockout mitigations; implemented transaction log replay, state transitions, and dark launch process\n Designed and implemented end-to-end correctness and performance testing\n Velocity improvements for 60-eng org\n\n Proposed and implemented automated rollbacks: got us out of a 3-month release freeze and prevented 5 outages over the next 6 months\n Proposed and implemented new development and release environments spanning 30+ microservices\n Incident response for API proxy rollback affecting every Google Cloud service\n\n Google App Engine Memcache | Software Engineer | EngProd TL, Apr 2019 - July 2019\n Proposed and led execution of test coverage improvement strategy for a new control plane: reduced rollbacks and ensured strong consistency of a distributed cache serving xxM QPS\n Designed and implemented automated performance regression testing for two critical serving paths\n Used to validate Google-wide rollout of AMD CPUs, by proving a 50p latency delta of <10µs\n Implemented on shared Borg (i.e. 
vulnerable to noisy neighbors) with <12% variance\n Miscellaneous | Sept 2017 - June 2021\n Redesigned the Noogler training on Google-internal storage technologies & trained 2500+ Nooglers\n Landed multiple google3-wide refactorings, each spanning xxK files (e.g. SWIG to CLIF)\n Education\n Vanderbilt University (Nashville, TN) | May 2017 | B.S. in Computer Science, Mathematics, and Political Science\n\n Stuyvesant HS (New York, NY) | 2013\n\n Skills\n C++, Java, Typescript, Javascript, Python, Bash; light experience with Rust, Golang, Scheme\n gRPC, Bazel, React, Linux\n Hobbies: climbing, skiing, photography\n \"#\n }\n}\n\ntest vaibhav_resume {\n functions [ExtractResume]\n input {\n resume #\"\n Vaibhav Gupta\n linkedin/vaigup\n (972) 400-5279\n vaibhavtheory@gmail.com\n EXPERIENCE\n Google,\n Software Engineer\n Dec 2018-Present\n Seattle, WA\n •\n Augmented Reality,\n Depth Team\n •\n Technical Lead for on-device optimizations\n •\n Optimized and designed front\n facing depth algorithm\n on Pixel 4\n •\n Focus: C++ and SIMD on custom silicon\n \n \n EDUCATION\n University of Texas at Austin\n Aug 2012-May 2015\n Bachelors of Engineering, Integrated Circuits\n Bachelors of Computer Science\n \"#\n }\n}",
+ "test-files/testing_pipeline/resume.baml": "class Resume {\n name string\n email string\n phone string\n experience Education[]\n education string[]\n skills string[]\n}\n\nclass Education {\n institution string\n location string\n degree string\n major string[]\n graduation_date string?\n}\n\ntemplate_string AddRole(foo: string) #\"\n {{ _.role('system')}}\n You are a {{ foo }}. be nice\n\n {{ _.role('user') }}\n\"#\n\nfunction ExtractResume(resume: string, img: image) -> Resume {\n client GPT4\n prompt #\"\n {{ AddRole(\"Software Engineer\") }}\n\n Extract data:\n \n\n <<<<\n {{ resume }}\n <<<<\n\n {% if img %}\n {{img}}\n {% endif %}\n\n {{ ctx.output_format }}\n \"#\n}\n\ntest sam_resume {\n functions [ExtractResume]\n input {\n img {\n url \"https://avatars.githubusercontent.com/u/1016595?v=4\"\n }\n resume #\"\n Sam Lijin\n he/him | jobs@sxlijin.com | sxlijin.github.io | sxlijin | sxlijin\n\n Experience\n Trunk\n | July 2021 - current\n Trunk Check | Senior Software Engineer | Services TL, Mar 2023 - current | IC, July 2021 - Feb 2023\n Proposed, designed, and led a team of 3 to build a web experience for Check (both a web-only onboarding flow and SaaS offerings)\n Proposed and built vulnerability scanning into Check, enabling it to compete with security products such as Snyk\n Helped grow Check from <1K users to 90K+ users by focusing on product-led growth\n Google | Sept 2017 - June 2021\n User Identity SRE | Senior Software Engineer | IC, Mar 2021 - June 2021\n Designed an incremental key rotation system to limit the global outage risk to Google SSO\n Discovered and severed an undocumented Gmail serving dependency on Identity-internal systems\n Cloud Firestore | Senior Software Engineer | EngProd TL, Aug 2019 - Feb 2021 | IC, Sept 2017 - July 2019\n Metadata TTL system: backlog of XX trillion records, sustained 1M ops/sec, peaking at 3M ops/sec\n\n Designed and implemented a logging system with novel observability and privacy requirements\n Designed and implemented Jepsen-style testing to validate correctness guarantees\n Datastore Migration: zero downtime, xM RPS and xxPB of data over xM customers and 36 datacenters\n\n Designed composite index migration, queue processing migration, progressive rollout, fast rollback, and disk stockout mitigations; implemented transaction log replay, state transitions, and dark launch process\n Designed and implemented end-to-end correctness and performance testing\n Velocity improvements for 60-eng org\n\n Proposed and implemented automated rollbacks: got us out of a 3-month release freeze and prevented 5 outages over the next 6 months\n Proposed and implemented new development and release environments spanning 30+ microservices\n Incident response for API proxy rollback affecting every Google Cloud service\n\n Google App Engine Memcache | Software Engineer | EngProd TL, Apr 2019 - July 2019\n Proposed and led execution of test coverage improvement strategy for a new control plane: reduced rollbacks and ensured strong consistency of a distributed cache serving xxM QPS\n Designed and implemented automated performance regression testing for two critical serving paths\n Used to validate Google-wide rollout of AMD CPUs, by proving a 50p latency delta of <10µs\n Implemented on shared Borg (i.e. 
vulnerable to noisy neighbors) with <12% variance\n Miscellaneous | Sept 2017 - June 2021\n Redesigned the Noogler training on Google-internal storage technologies & trained 2500+ Nooglers\n Landed multiple google3-wide refactorings, each spanning xxK files (e.g. SWIG to CLIF)\n Education\n Vanderbilt University (Nashville, TN) | May 2017 | B.S. in Computer Science, Mathematics, and Political Science\n\n Stuyvesant HS (New York, NY) | 2013\n\n Skills\n C++, Java, Typescript, Javascript, Python, Bash; light experience with Rust, Golang, Scheme\n gRPC, Bazel, React, Linux\n Hobbies: climbing, skiing, photography\n \"#\n }\n}\n\ntest vaibhav_resume {\n functions [ExtractResume]\n input {\n resume #\"\n Vaibhav Gupta\n linkedin/vaigup\n (972) 400-5279\n vaibhavtheory@gmail.com\n EXPERIENCE\n Google,\n Software Engineer\n Dec 2018-Present\n Seattle, WA\n •\n Augmented Reality,\n Depth Team\n •\n Technical Lead for on-device optimizations\n •\n Optimized and designed front\n facing depth algorithm\n on Pixel 4\n •\n Focus: C++ and SIMD on custom silicon\n \n \n EDUCATION\n University of Texas at Austin\n Aug 2012-May 2015\n Bachelors of Engineering, Integrated Circuits\n Bachelors of Computer Science\n \"#\n }\n}",
}
export const getBamlFiles = () => {
return fileMap;
diff --git a/typescript/codemirror-lang-baml/src/index.ts b/typescript/codemirror-lang-baml/src/index.ts
index ef8d68dc4..a3c08955a 100644
--- a/typescript/codemirror-lang-baml/src/index.ts
+++ b/typescript/codemirror-lang-baml/src/index.ts
@@ -109,9 +109,10 @@ const exampleCompletion = BAMLLanguage.data.of({
{ label: 'function' },
),
snippetCompletion(
- 'prompt #"\n {{ _.chat("user") }}\n INPUT:\n ---\n {{ your-variable }}\n ---\n Response:\n"#',
+ 'prompt #"\n {{ _.role("user") }}\n INPUT:\n ---\n {{ your-variable }}\n ---\n Response:\n"#',
{ label: 'prompt #"' },
),
+    snippetCompletion('_.role("${role}")', { label: '_.role' }),
snippetCompletion('#"${mystring}"#', { label: '#"' }),
snippetCompletion('client GPT4 {\n provider baml-openai-chat\n options {\n model gpt4 \n}}', {
label: 'client GPT4',
diff --git a/typescript/fiddle-frontend/public/_examples/intro/chat-roles/baml_src/main.baml b/typescript/fiddle-frontend/public/_examples/intro/chat-roles/baml_src/main.baml
index db9d52bbd..68888af4c 100644
--- a/typescript/fiddle-frontend/public/_examples/intro/chat-roles/baml_src/main.baml
+++ b/typescript/fiddle-frontend/public/_examples/intro/chat-roles/baml_src/main.baml
@@ -11,8 +11,8 @@ function ClassifyMessage(input: string) -> Category {
client GPT4Turbo
prompt #"
- {# _.chat("system") starts a system message #}
- {{ _.chat("system") }}
+ {# _.role("system") starts a system message #}
+ {{ _.role("system") }}
Classify the following INPUT into ONE
of the following categories:
@@ -20,7 +20,7 @@ function ClassifyMessage(input: string) -> Category {
{{ ctx.output_format }}
{# This starts a user message #}
- {{ _.chat("user") }}
+ {{ _.role("user") }}
INPUT: {{ input }}
diff --git a/typescript/fiddle-frontend/public/_examples/intro/images/baml_src/main.baml b/typescript/fiddle-frontend/public/_examples/intro/images/baml_src/main.baml
index 2ac777b6e..83201aef5 100644
--- a/typescript/fiddle-frontend/public/_examples/intro/images/baml_src/main.baml
+++ b/typescript/fiddle-frontend/public/_examples/intro/images/baml_src/main.baml
@@ -1,7 +1,7 @@
function DescribeImage(img: image) -> string {
client GPT4Turbo
prompt #"
- {{ _.chat(role="user") }}
+ {{ _.role("user") }}
Describe the image below in 5 words:
@@ -20,7 +20,7 @@ class ClassWithImage {
function DescribeImage2(classWithImage: ClassWithImage, img2: image) -> string {
client GPT4Turbo
prompt #"
- {{ _.chat(role="user") }}
+ {{ _.role("user") }}
You should return 2 answers that answer the following commands.
1. Describe this in 5 words:
@@ -48,7 +48,7 @@ function DescribeImage3(classWithImage: ClassWithImage, img2: image) -> string {
function DescribeImage4(classWithImage: ClassWithImage, img2: image) -> string {
client GPT4Turbo
prompt #"
- {{ _.chat(role="system")}}
+ {{ _.role("system")}}
Describe this in 5 words:
{{ classWithImage.myImage }}