diff --git a/README.md b/README.md
index c0d4e4129..76f366849 100644
--- a/README.md
+++ b/README.md
@@ -8,10 +8,9 @@
# BAML
-An LLM function is a prompt template with some defined input variables, and a specific output type like a class, enum, union, optional string, etc.
-
-**BAML is a configuration file format to write better and cleaner LLM functions.**
+**BAML is a domain-specific language to write and test LLM functions.**
+An LLM function is a prompt template with some defined input variables, and a specific output type like a class, enum, union, optional string, etc.
With BAML you can write and test a complex LLM function in 1/10 of the time it takes to set up a Python LLM testing environment.
## Try it out in the playground -- [PromptFiddle.com](https://promptfiddle.com)
diff --git a/docs/docs/calling-baml/dynamic-types.mdx b/docs/docs/calling-baml/dynamic-types.mdx
index e69de29bb..6a542e7af 100644
--- a/docs/docs/calling-baml/dynamic-types.mdx
+++ b/docs/docs/calling-baml/dynamic-types.mdx
@@ -0,0 +1,193 @@
+
+
+Sometimes you have **output schemas that change at runtime** -- for example, if the list of categories you need to classify comes from a database, or the schema is user-provided.
+
+
+**Dynamic types are types that can be modified at runtime**, which means you can change the output schema of a function on a per-call basis.
+
+Here are the steps to make this work:
+1. Add `@@dynamic` to the class or enum definition to mark it as dynamic
+
+```rust baml
+enum Category {
+ VALUE1 // normal static enum values that don't change
+ VALUE2
+ @@dynamic // this enum can have more values added at runtime
+}
+
+function DynamicCategorizer(input: string) -> Category {
+ client GPT4
+ prompt #"
+ Given a string, classify it into a category
+ {{ input }}
+
+ {{ ctx.output_format }}
+ "#
+}
+
+```
+
+2. Create a TypeBuilder and modify the existing type. All dynamic types you define in BAML will be available as properties of `TypeBuilder`. Think of the TypeBuilder as a registry of modified runtime types that the BAML function reads from when building the output schema in the prompt.
+
+
+
+```python python
+from baml_client.type_builder import TypeBuilder
+from baml_client import b
+
+async def run():
+ tb = TypeBuilder()
+ tb.Category.add_value('VALUE3')
+ tb.Category.add_value('VALUE4')
+ # Pass the typebuilder in the baml_options argument -- the last argument of the function.
+ res = await b.DynamicCategorizer("some input", { "tb": tb })
+ # Now res can be VALUE1, VALUE2, VALUE3, or VALUE4
+ print(res)
+
+```
+
+```typescript TypeScript
+import TypeBuilder from '../baml_client/type_builder'
+import {
+ b
+} from '../baml_client'
+
+async function run() {
+ const tb = new TypeBuilder()
+ tb.Category.addValue('VALUE3')
+ tb.Category.addValue('VALUE4')
+ const res = await b.DynamicCategorizer("some input", { tb: tb })
+ // Now res can be VALUE1, VALUE2, VALUE3, or VALUE4
+ console.log(res)
+}
+```
+
+
+```ruby Ruby
+Not available yet
+```
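+
+For the database-driven scenario mentioned above, you can add enum values in a loop at runtime. Below is a minimal Python sketch; `fetch_categories()` is a hypothetical stand-in for your own data access code:
+
+```python python
+from baml_client.type_builder import TypeBuilder
+from baml_client import b
+
+def fetch_categories():
+  # Hypothetical helper -- replace with your own database query
+  return ['VALUE3', 'VALUE4']
+
+async def categorize(input: str):
+  tb = TypeBuilder()
+  for name in fetch_categories():
+    tb.Category.add_value(name)
+  # Pass the typebuilder just like in the example above
+  return await b.DynamicCategorizer(input, { "tb": tb })
+```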
+
+
+### Dynamic BAML Classes
+Existing BAML classes marked with `@@dynamic` will be available as properties of `TypeBuilder`.
+
+```rust BAML
+class User {
+ name string
+ age int
+ @@dynamic
+}
+
+function DynamicUserCreator(user_info: string) -> User {
+ client GPT4
+ prompt #"
+ Extract the information from this chunk of text:
+ "{{ user_info }}"
+
+ {{ ctx.output_format }}
+ "#
+}
+```
+
+Modify the `User` schema at runtime:
+
+
+
+```python python
+from baml_client.type_builder import TypeBuilder
+from baml_client import b
+
+async def run():
+ tb = TypeBuilder()
+ tb.User.add_property('email', 'string')
+ tb.User.add_property('address', 'string')
+ res = await b.DynamicUserCreator("some user info", { "tb": tb })
+ # Now res can have email and address fields
+ print(res)
+
+```
+
+```typescript TypeScript
+import TypeBuilder from '../baml_client/type_builder'
+import {
+ b
+} from '../baml_client'
+
+async function run() {
+ const tb = new TypeBuilder()
+  tb.User.addProperty('email', tb.string())
+  tb.User.addProperty('address', tb.string())
+ const res = await b.DynamicUserCreator("some user info", { tb: tb })
+ // Now res can have email and address fields
+ console.log(res)
+}
+```
+
+
+### Creating new dynamic classes or enums not in BAML
+Here we create a new `Hobbies` enum, and a new class called `Address`.
+
+
+
+
+```python python
+from baml_client.type_builder import TypeBuilder
+from baml_client import b
+
+async def run():
+ tb = TypeBuilder()
+  hobbies_enum = tb.add_enum('Hobbies')
+  hobbies_enum.add_value('Soccer')
+  hobbies_enum.add_value('Reading')
+
+  address_class = tb.add_class('Address')
+  address_class.add_property('street', tb.string())
+
+  tb.User.add_property('hobby', hobbies_enum.type().optional())
+  tb.User.add_property('address', address_class.type().optional())
+ res = await b.DynamicUserCreator("some user info", { "tb": tb })
+ # Now res might have the hobby property, which can be Soccer or Reading
+ print(res)
+
+```
+
+```typescript TypeScript
+import TypeBuilder from '../baml_client/type_builder'
+import {
+ b
+} from '../baml_client'
+
+async function run() {
+ const tb = new TypeBuilder()
+ const hobbiesEnum = tb.addEnum('Hobbies')
+ hobbiesEnum.addValue('Soccer')
+ hobbiesEnum.addValue('Reading')
+
+ const addressClass = tb.addClass('Address')
+ addressClass.addProperty('street', tb.string())
+
+
+ tb.User.addProperty('hobby', hobbiesEnum.type().optional())
+  tb.User.addProperty('address', addressClass.type().optional())
+ const res = await b.DynamicUserCreator("some user info", { tb: tb })
+ // Now res might have the hobby property, which can be Soccer or Reading
+ console.log(res)
+}
+```
+
+
+### Adding descriptions to dynamic types
+
+
+
+```python python
+tb = TypeBuilder()
+tb.User.add_property("email", tb.string()).description("The user's email")
+```
+
+```typescript TypeScript
+const tb = new TypeBuilder()
+tb.User.addProperty("email", tb.string()).description("The user's email")
+```
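+
+The description is rendered into the prompt through `{{ ctx.output_format }}`, just like descriptions on static types. A minimal Python sketch of the end-to-end flow, reusing the `DynamicUserCreator` function from above:
+
+```python python
+from baml_client.type_builder import TypeBuilder
+from baml_client import b
+
+async def run():
+  tb = TypeBuilder()
+  # The description becomes part of the output schema sent to the LLM
+  tb.User.add_property("email", tb.string()).description("The user's email")
+  res = await b.DynamicUserCreator("some user info", { "tb": tb })
+  print(res)
+```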
+
+
\ No newline at end of file
diff --git a/docs/docs/calling-baml/multi-modal.mdx b/docs/docs/calling-baml/multi-modal.mdx
index c9a9f2e68..b847c85cf 100644
--- a/docs/docs/calling-baml/multi-modal.mdx
+++ b/docs/docs/calling-baml/multi-modal.mdx
@@ -2,7 +2,7 @@
## Multi-modal input
### Images
-Calling a BAML function with an `image` input argument type:
+Calling a BAML function with an `image` input argument type (see [image types](/docs/snippets/supported-types)):
```python Python
from baml_py import Image
@@ -48,7 +48,7 @@ we're working on it!
### Audio
-Calling functions that have `audio` types.
+Calling functions that have `audio` types. See [audio types](/docs/snippets/supported-types).
```python Python
diff --git a/docs/docs/get-started/what-is-baml.mdx b/docs/docs/get-started/what-is-baml.mdx
index 6af620129..0962c4991 100644
--- a/docs/docs/get-started/what-is-baml.mdx
+++ b/docs/docs/get-started/what-is-baml.mdx
@@ -7,7 +7,7 @@ title: What is BAML?
**BAML is a domain-specific language to write and test LLM functions.**
-In BAML prompts are treated like functions. An LLM function is a prompt template with some defined input variables, and a specific output type like a class, enum, union, optional string, etc.
+In BAML, prompts are treated like functions. An LLM function is a prompt template with some defined input variables, and a specific output type like a class, enum, union, optional string, etc.
With BAML you can write and test a complex LLM function in 1/10 of the time it takes to set up a Python LLM testing environment.
@@ -55,4 +55,4 @@ Share your creations and ask questions in our [Discord](https://discord.gg/BTNBe
- [BAML + FastAPI + Streaming](https://github.com/BoundaryML/baml-examples/tree/main/fastapi-starter)
## First steps
-We recommend checking the examples in [PromptFiddle.com](https://promptfiddle.com). Once you're ready to start, [install the toolchain](./installation) and read the [guides](../guides/overview).
+We recommend checking the examples in [PromptFiddle.com](https://promptfiddle.com). Once you're ready to start, [install the toolchain](/docs/get-started/quickstart/python) and read the [guides](/docs/calling-baml/calling-functions).
diff --git a/docs/docs_old/guides/boundary_studio/tracing-tagging.mdx b/docs/docs_old/guides/boundary_studio/tracing-tagging.mdx
deleted file mode 100644
index 96628c27c..000000000
--- a/docs/docs_old/guides/boundary_studio/tracing-tagging.mdx
+++ /dev/null
@@ -1,78 +0,0 @@
----
-title: "Tracing and tagging functions"
----
-
-BAML allows you to trace any function with the **@trace** decorator.
-This will make the function's input and output show up in the Boundary dashboard. This works for any Python function you define yourself. BAML LLM functions (or any other function declared in a .baml file) are already traced by default. Logs are only sent to the dashboard if you set up your environment variables correctly.
-
-
-### Prerequisites
-Make sure you setup the [Boundary dashboard](/quickstart#setting-up-the-boundary-dashboard) project before you start.
-
-Make sure you also Ctrl+S a .baml file to generate the `baml_client`.
-
-### Example
-
-In the example below, we trace each of the two functions `pre_process_text` and `full_analysis`:
-
-```python
-from baml_client import baml
-from baml_client.types import Book, AuthorInfo
-from baml_client.tracing import trace
-
-# You can also add a custom name with trace(name="my_custom_name")
-# By default, we use the function's name.
-@trace
-async def pre_process_text(text):
- return text.replace("\n", " ")
-
-
-@trace
-async def full_analysis(book: Book):
- sentiment = await baml.ClassifySentiment(
-        await pre_process_text(book.content)
- )
- book_analysis = await baml.AnalyzeBook(book)
- return book_analysis
-
-
-@trace
-async def test_book1():
- content = """Before I could reply that he [Gatsby] was my neighbor...
- """
- processed_content = await pre_process_text(content)
- return await full_analysis(
- Book(
- title="The Great Gatsby",
- author=AuthorInfo(firstName="F. Scott", lastName="Fitzgerald"),
- content=processed_content,
- ),
- )
-```
-
-
-This allows us to see each function invocation, as well as all its children in the dashboard:
-
-
-
-See [running tests](/running-tests) for more information on how to run this test.
-
-### Adding custom tags
-
-The dashboard view allows you to see custom tags for each of the function calls. This is useful for adding metadata to your traces and allow you to query your generated logs more easily.
-
-To add a custom tag, you can import **set_tags(..)** as below:
-
-```python
-from baml_client.tracing import set_tags, trace
-import typing
-
-@trace()
-async def pre_process_text(text):
- set_tags(userId="1234")
-
- # You can also create a dictionary and pass it in
- tags_dict: typing.Dict[str, str] = {"userId": "1234"}
- set_tags(**tags_dict) # "**" unpacks the dictionary
- return text.replace("\n", " ")
-```
diff --git a/docs/docs_old/guides/hello_world/baml-project-structure.mdx b/docs/docs_old/guides/hello_world/baml-project-structure.mdx
deleted file mode 100644
index 84643385d..000000000
--- a/docs/docs_old/guides/hello_world/baml-project-structure.mdx
+++ /dev/null
@@ -1,71 +0,0 @@
----
-title: "BAML Project Structure"
----
-
-At a high level, you will define your AI prompts and interfaces in BAML files.
-The BAML compiler will then generate Python or TypeScript code for you to use in
-your application, depending on the generators configured in your `main.baml`:
-
-```rust main.baml
-generator MyGenerator{
- output_type typescript
- output_dir "../"
-}
-```
-
-Here is the typical project structure:
-
-```bash
-.
-├── baml_client/ # Generated code
-├── baml_src/ # Prompts and baml tests live here
-│ └── foo.baml
-# The rest of your project (not generated nor used by BAML)
-├── app/
-│ ├── __init__.py
-│ └── main.py
-└── pyproject.toml
-
-```
-
-1. `baml_src/` is where you write your BAML files with the AI
-function declarations, prompts, retry policies, etc. It also contains
-[generator](/docs/syntax/generator) blocks which configure how and where to
-transpile your BAML code.
-
-2. `baml_client/` is where the BAML compiler will generate code for you,
-based on the types and functions you define in your BAML code. Here's how you'd access the generated functions from baml_client:
-
-
-```python Python
-from baml_client import baml as b
-
-async def use_llm_for_task():
- await b.CallMyLLM()
-```
-
-```typescript TypeScript
-import b from '@/baml_client'
-
-const use_llm_for_task = async () => {
- await b.CallMyLLM();
-};
-```
-
-
-
-
- **You should never edit any files inside the baml_client directory** as the whole
- directory gets regenerated on every `baml build` (auto runs on save if using
- the VSCode extension).
-
-
-
- If you ever run into any issues with the generated code (like merge
- conflicts), you can always delete the `baml_client` directory and it will get
- regenerated automatically on save.
-
-
-### imports
-
-BAML by default has global imports. Every entity declared in any `.baml` file is available to all other `.baml` files under the same `baml_src` directory. You **can** have multiple `baml_src` directories, but no promises on how the VSCode extension will behave (yet).
diff --git a/docs/docs_old/guides/hello_world/testing-ai-functions.mdx b/docs/docs_old/guides/hello_world/testing-ai-functions.mdx
deleted file mode 100644
index 7fac24b87..000000000
--- a/docs/docs_old/guides/hello_world/testing-ai-functions.mdx
+++ /dev/null
@@ -1,22 +0,0 @@
----
-title: "Testing AI functions"
----
-
-
-One important way to ensure your AI functions are working as expected is to write unit tests. This is especially important when you're working with AI functions that are used in production, or when you're working with a team.
-
-To test functions:
-1. Install the VSCode extension
-2. Create a test in any .baml file:
-```rust
-test MyTest {
- functions [ExtractResume]
- args {
- resume_text "hello"
- }
-}
-
-```
-3. Run the test in the VSCode extension!
-
-We have more capabilities like assertions coming soon!
\ No newline at end of file
diff --git a/docs/docs_old/guides/hello_world/writing-ai-functions.mdx b/docs/docs_old/guides/hello_world/writing-ai-functions.mdx
deleted file mode 100644
index e3584ab5f..000000000
--- a/docs/docs_old/guides/hello_world/writing-ai-functions.mdx
+++ /dev/null
@@ -1,145 +0,0 @@
----
-title: "BAML AI functions in 2 minutes"
----
-
-
-### Pre-requisites
-
-Follow the [installation](/v3/home/installation) instructions.
-
-{/* The starting project structure will look something like this: */}
-{/* */}
-
-## Overview
-
-Before you call an LLM, ask yourself what kind of input or output you're
-expecting. If you want the LLM to generate text, then you probably want a
-string, but if you're trying to get it to collect user details, you may want it
-to return a complex type like `UserDetails`.
-
-Thinking this way can help you decompose large complex prompts into smaller,
-more measurable functions, and will also help you build more complex workflows
-and agents.
-
-# Extracting a resume from text
-
-The best way to learn BAML is to run an example in our web playground -- [PromptFiddle.com](https://promptfiddle.com).
-
-At a high level, BAML is simple to use -- prompts are built using [Jinja syntax](https://jinja.palletsprojects.com/en/3.1.x/) to make working with strings easier. We extended Jinja with type support and static analysis of your template variables, and the BAML VSCode extension gives you a real-time preview of your prompts no matter how much logic they use.
-
-Here's an example from PromptFiddle:
-
-```rust baml_src/main.baml
-client GPT4Turbo {
- provider openai
- options {
- model gpt-4-turbo
- api_key env.OPENAI_API_KEY
- }
-}
-// Declare the Resume type we want the AI function to return
-class Resume {
- name string
- education Education[] @description("Extract in the same order listed")
- skills string[] @description("Only include programming languages")
-}
-
-class Education {
- school string
- degree string
- year int
-}
-
-// Declare the function signature, with the prompt that will be used to make the AI function work
-function ExtractResume(resume_text: string) -> Resume {
- // An LLM client we define elsewhere, with some parameters and our API key
- client GPT4Turbo
-
- // The prompt uses Jinja syntax
- prompt #"
- Parse the following resume and return a structured representation of the data in the schema below.
-
- Resume:
- ---
- {{ resume_text }}
- ---
-
- {# special macro to print the output instructions. #}
- {{ ctx.output_format }}
-
- JSON:
- "#
-}
-```
-That's it! If you use the VSCode extension, every time you save this .baml file, it will be converted into a usable Python or TypeScript function in milliseconds, with full types.
-
-All your types become Pydantic models in Python, or type definitions in TypeScript (soon we'll support generating Zod types).
-
-
-## 2. Usage in Python or TypeScript
-
-Our VSCode extension automatically generates a **baml_client** in the language of your choice. (Click the tabs for Python or TypeScript.)
-
-
-
-```python Python
-from baml_client import baml as b
-# BAML types get converted to Pydantic models
-from baml_client.types import Resume
-import asyncio
-
-async def main():
- resume_text = """Jason Doe
-Python, Rust
-University of California, Berkeley, B.S.
-in Computer Science, 2020
-Also an expert in Tableau, SQL, and C++
-"""
-
- # this function comes from the autogenerated "baml_client".
- # It calls the LLM you specified and handles the parsing.
- resume = await b.ExtractResume(resume_text)
-
- # Fully type-checked and validated!
- assert isinstance(resume, Resume)
-
-
-if __name__ == "__main__":
- asyncio.run(main())
-```
-
-```typescript TypeScript
-import b from 'baml_client'
-
-async function main() {
- const resume_text = `Jason Doe
-Python, Rust
-University of California, Berkeley, B.S.
-in Computer Science, 2020
-Also an expert in Tableau, SQL, and C++
-`
-
- // this function comes from the autogenerated "baml_client".
- // It calls the LLM you specified and handles the parsing.
- const resume = await b.ExtractResume(resume_text)
-
- // Fully type-checked and validated!
-  console.assert(resume.name === "Jason Doe")
-}
-
-if (require.main === module) {
- main();
-}
-```
-
-
-
-
- The BAML client exports async versions of your functions, so you can parallelize things easily if you need to. To run an async function synchronously, wrap it in `asyncio.run(...)`.
-
- Let us know if you want synchronous versions of your functions instead!
-
-
-## Further reading
-- Browse more PromptFiddle [examples](https://promptfiddle.com)
-- See other types of [function signatures](/docs/syntax/function) possible in BAML.
\ No newline at end of file
diff --git a/docs/docs_old/guides/improve_results/diagnose.mdx b/docs/docs_old/guides/improve_results/diagnose.mdx
deleted file mode 100644
index 97da6264e..000000000
--- a/docs/docs_old/guides/improve_results/diagnose.mdx
+++ /dev/null
@@ -1,16 +0,0 @@
----
-title: "Improve my prompt automatically"
----
-
-Use **Boundary Studio** to automatically improve your prompt by using the **Diagnose** feature! We use **GPT-4 powered analysis** to provide you with improvements you can make to your prompt. We aim to incorporate all the best learnings we've acquired from working with many different customers and models.
-
-We have more improvements here planned, like different suggestions depending on your model being used and task type.
-
-To access it:
-1. Click on the "comment" icon on one of the requests.
-2. Click on the "Diagnose" tab.
-
-
-
-
-This feature is limited for users on the free tier, and available as many times as needed for paid users.
diff --git a/docs/docs_old/guides/improve_results/fine_tune.mdx b/docs/docs_old/guides/improve_results/fine_tune.mdx
deleted file mode 100644
index d7c5d3ca5..000000000
--- a/docs/docs_old/guides/improve_results/fine_tune.mdx
+++ /dev/null
@@ -1,5 +0,0 @@
----
-title: "Fine-tune a model using my production data"
----
-
-Reach out to us on Discord if you want to improve performance, reduce costs or latencies using fine-tuned models! We are working on seamless integrations with fine-tuning platforms.
\ No newline at end of file
diff --git a/docs/docs_old/guides/overview.mdx b/docs/docs_old/guides/overview.mdx
deleted file mode 100644
index d3c800c1c..000000000
--- a/docs/docs_old/guides/overview.mdx
+++ /dev/null
@@ -1,53 +0,0 @@
----
-title: "Table of contents"
----
-
-These tutorials assume you've already done the [Learn BAML](/docs/guides/hello_world/level0) tutorials and have the hang of the basics.
-
-Ping us on [Discord](https://discord.gg/BTNBeXGuaS) if you have any questions!
-
\ No newline at end of file
diff --git a/docs/docs_old/guides/prompt_engineering/chat-prompts.mdx b/docs/docs_old/guides/prompt_engineering/chat-prompts.mdx
deleted file mode 100644
index 3e25606f5..000000000
--- a/docs/docs_old/guides/prompt_engineering/chat-prompts.mdx
+++ /dev/null
@@ -1,5 +0,0 @@
----
-title: "System vs user prompts"
----
-
-See [PromptFiddle demo](https://promptfiddle.com/chat-roles)
\ No newline at end of file
diff --git a/docs/docs_old/guides/prompt_engineering/conditional_rendering.mdx b/docs/docs_old/guides/prompt_engineering/conditional_rendering.mdx
deleted file mode 100644
index 8f70ece36..000000000
--- a/docs/docs_old/guides/prompt_engineering/conditional_rendering.mdx
+++ /dev/null
@@ -1,7 +0,0 @@
----
-title: "Conditionally generate the prompt based on the input variables"
----
-
-Prompts use Jinja syntax to render variables. You can use any Jinja syntax you like.
-
-Examples coming soon!
\ No newline at end of file
diff --git a/docs/docs_old/guides/prompt_engineering/serialize_complex_input.mdx b/docs/docs_old/guides/prompt_engineering/serialize_complex_input.mdx
deleted file mode 100644
index f82d0b4a5..000000000
--- a/docs/docs_old/guides/prompt_engineering/serialize_complex_input.mdx
+++ /dev/null
@@ -1,5 +0,0 @@
----
-title: "Customize input variables"
----
-
-Examples coming soon!
\ No newline at end of file
diff --git a/docs/docs_old/guides/prompt_engineering/serialize_list.mdx b/docs/docs_old/guides/prompt_engineering/serialize_list.mdx
deleted file mode 100644
index b5bda6733..000000000
--- a/docs/docs_old/guides/prompt_engineering/serialize_list.mdx
+++ /dev/null
@@ -1,5 +0,0 @@
----
-title: "Serialize a List of chat messages into a prompt"
----
-
-Example coming soon!
\ No newline at end of file
diff --git a/docs/docs_old/guides/prompt_engineering/strategies.mdx b/docs/docs_old/guides/prompt_engineering/strategies.mdx
deleted file mode 100644
index 820db3911..000000000
--- a/docs/docs_old/guides/prompt_engineering/strategies.mdx
+++ /dev/null
@@ -1 +0,0 @@
-# TODO: add symbol tuning here
\ No newline at end of file
diff --git a/docs/docs_old/guides/resilience/fallback.mdx b/docs/docs_old/guides/resilience/fallback.mdx
deleted file mode 100644
index 6fec86273..000000000
--- a/docs/docs_old/guides/resilience/fallback.mdx
+++ /dev/null
@@ -1,5 +0,0 @@
----
-title: "Fall-back to another model on failure"
----
-
-Checkout the [Fallback API reference](/docs/syntax/client/redundancy) to learn how to make a BAML client fall-back to a different LLM on failure.
\ No newline at end of file
diff --git a/docs/docs_old/guides/resilience/retries.mdx b/docs/docs_old/guides/resilience/retries.mdx
deleted file mode 100644
index 26b885ce5..000000000
--- a/docs/docs_old/guides/resilience/retries.mdx
+++ /dev/null
@@ -1,5 +0,0 @@
----
-title: "Add retries to my AI function (and different retry policies)."
----
-
-Checkout the [retry_policy reference](/docs/syntax/client/retry) to add retries to your AI function.
\ No newline at end of file
diff --git a/docs/docs_old/guides/streaming/streaming.mdx b/docs/docs_old/guides/streaming/streaming.mdx
deleted file mode 100644
index c96e4d8d2..000000000
--- a/docs/docs_old/guides/streaming/streaming.mdx
+++ /dev/null
@@ -1,46 +0,0 @@
----
-title: "Streaming structured data"
----
-
-### Streaming partial objects
-The following returns an object that slowly gets filled in as the response comes in. This is useful if you want to start processing the response before it's fully complete.
-You can stream anything from a `string` output type, to a complex object.
-
-Example:
-```
-{"prop1": "hello"}
-{"prop1": "hello how are you"}
-{"prop1": "hello how are you", "prop2": "I'm good, how are you?"}
-{"prop1": "hello how are you", "prop2": "I'm good, how are you?", "prop3": "I'm doing great, thanks for asking!"}
-```
-
-### Python
-```python FastAPI
-from fastapi import FastAPI
-from fastapi.responses import StreamingResponse
-
-from baml_client import b
-
-app = FastAPI()
-
-@app.get("/extract_resume")
-async def extract_resume(resume_text: str):
-    async def stream_resume(resume_text: str):
-        stream = b.stream.ExtractResume(resume_text)
-        async for chunk in stream:
-            # Each chunk is a partial object that fills in as the response streams
-            yield str(chunk.model_dump_json()) + "\n"
-
-    return StreamingResponse(stream_resume(resume_text), media_type="text/plain")
-```
-
-
-### TypeScript
-```typescript
-import { b } from '../baml_client'; // or whatever path baml_client is in
-
-export async function streamText() {
- const stream = b.stream.MyFunction(MyInput(...));
- for await (const output of stream) {
- console.log(`streaming: ${output}`); // this is the output type of my function
- }
-
- const finalOutput = await stream.getFinalResponse();
- console.log(`final response: ${finalOutput}`);
-}
-```
-
diff --git a/docs/docs_old/guides/testing/test_with_assertions.mdx b/docs/docs_old/guides/testing/test_with_assertions.mdx
deleted file mode 100644
index ed839cccb..000000000
--- a/docs/docs_old/guides/testing/test_with_assertions.mdx
+++ /dev/null
@@ -1,89 +0,0 @@
----
-title: "Evaluate results with assertions or using LLM Evals"
----
-
-
-
-# Python guide
-To add assertions to your tests, or to cover more complex testing scenarios, you can use pytest to test your functions, since Playground BAML tests don't currently support assertions.
-
-### Example
-```python test_file.py
-from baml_client import baml as b
-from baml_client.types import Email
-import pytest
-
-# Run `poetry run pytest -m baml_test` in this directory.
-# Setup Boundary Studio to see test details!
-@pytest.mark.asyncio
-async def test_get_order_info():
- order_info = await b.GetOrderInfo(Email(
- subject="Order #1234",
- body="Your order has been shipped. It will arrive on 1st Jan 2022. Product: iPhone 13. Cost: $999.99"
- ))
-
- assert order_info.cost == 999.99
-```
-
- Make sure your test file, the Test class AND/or the test function is prefixed with `Test` or `test` respectively. Otherwise, pytest will not pick up your tests. E.g. `test_foo.py`, `TestFoo`, `test_foo`
-
-
-
-Run `pytest -k 'order_info'` to run this test. To have pytest show print statements, add the `-s` flag.
-
-
- Make sure you are running these commands from your python virtual environment
- (or **`poetry shell`** if you use poetry)
-
-
-For more advanced testing scenarios, helpful commands, and gotchas, check out the [Advanced Guide](./advanced_testing_guide)
-
-
-
-### Using an LLM eval
-You can also declare a new BAML function that you can use in your tests to validate results.
-
-This is helpful for testing more ambiguous LLM free-form text generations. You can measure anything from sentiment to the tone of the text.
-
-For example, the following GPT-4-powered function can be used in your tests to assert that a given generated sentence is professional-sounding:
-
-```rust
-enum ProfessionalismRating {
- GREAT
- OK
- BAD
-}
-
-function ValidateProfessionalism(input: string) -> ProfessionalismRating {
- client GPT4
- prompt #"
- Is this text professional-sounding?
-
- Use the following scale:
- {{ ctx.output_format }}
-
- Sentence: {{ input }}
-
- ProfessionalismRating:
- "#
-}
-```
-
-```python
-from baml_client import baml as b
-from baml_client.types import Email, ProfessionalismRating
-import pytest
-
-@pytest.mark.asyncio
-async def test_message_professionalism():
- order_info = await b.GetOrderInfo(Email(
- subject="Order #1234",
- body="Your order has been shipped. It will arrive on 1st Jan 2022. Product: iPhone 13. Cost: $999.99"
- ))
-
- assert order_info.cost == 999.99
-
- professionalism_rating = await b.ValidateProfessionalism(order_info.body)
-    assert professionalism_rating == ProfessionalismRating.GREAT
-```
-
diff --git a/docs/docs_old/guides/testing/unit_test.mdx b/docs/docs_old/guides/testing/unit_test.mdx
deleted file mode 100644
index 808b45880..000000000
--- a/docs/docs_old/guides/testing/unit_test.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: "Test an AI function"
----
-
-
-There are two types of tests you may want to run on your AI functions:
-
-- Unit Tests: Tests a single AI function (using the playground)
-- Integration Tests: Tests a pipeline of AI functions and potentially business logic
-
-For integration tests, see the [Integration Testing Guide](/docs/guides/testing/test_with_assertions).
\ No newline at end of file
diff --git a/docs/docs_old/home/baml-in-2-min.mdx b/docs/docs_old/home/baml-in-2-min.mdx
deleted file mode 100644
index 78d05488a..000000000
--- a/docs/docs_old/home/baml-in-2-min.mdx
+++ /dev/null
@@ -1,4 +0,0 @@
----
-title: "BAML in 2 minutes"
-url: "docs/guides/hello_world/writing-ai-functions"
----
diff --git a/docs/docs_old/home/demo.mdx b/docs/docs_old/home/demo.mdx
deleted file mode 100644
index 91e6e5afa..000000000
--- a/docs/docs_old/home/demo.mdx
+++ /dev/null
@@ -1,19 +0,0 @@
----
-title: "Interactive Demo"
----
-
-## Interactive playground
-You can try BAML online over at [Prompt Fiddle](https://www.promptfiddle.com)
-
-
-## Examples built with BAML
-
-You can find the code here: https://github.com/BoundaryML/baml-examples/tree/main/nextjs-starter
-
-
-| Example | Link |
-| - | - |
-| Streaming Simple Objects | https://baml-examples.vercel.app/examples/stream-object |
-| RAG + Citations | https://baml-examples.vercel.app/examples/rag |
-| Generative UI / Streaming charts | https://baml-examples.vercel.app/examples/book-analyzer |
-| Getting a recipe | https://baml-examples.vercel.app/examples/get-recipe |
diff --git a/docs/docs_old/home/example_nextjs.mdx b/docs/docs_old/home/example_nextjs.mdx
deleted file mode 100644
index aa0cb5398..000000000
--- a/docs/docs_old/home/example_nextjs.mdx
+++ /dev/null
@@ -1,94 +0,0 @@
----
-title: "Typescript Installation"
----
-
-Here's a sample repository:
-https://github.com/BoundaryML/baml-examples/tree/main/nextjs-starter
-
-To set up BAML in TypeScript, do the following:
-
-
-
- https://marketplace.visualstudio.com/items?itemName=boundary.BAML
-
- - syntax highlighting
- - testing playground
- - prompt previews
-
-
-
- ```bash npm
- npm install @boundaryml/baml
- ```
-
- ```bash pnpm
- pnpm add @boundaryml/baml
- ```
-
- ```bash yarn
- yarn add @boundaryml/baml
- ```
-
-
-
- This will give you some starter BAML code in a `baml_src` directory.
-
- ```bash npm
- npx baml-cli init
- ```
-
- ```bash pnpm
- pnpx baml-cli init
- ```
-
- ```bash yarn
- yarn baml-cli init
- ```
-
-
-
-
-    This command will help you convert `.baml` files to `.ts` files. Every time you modify your `.baml` files,
-    you must re-run this command and regenerate the `baml_client` folder.
-
-
- If you download our [VSCode extension](https://marketplace.visualstudio.com/items?itemName=Boundary.baml-extension), it will automatically generate `baml_client` on save!
-
-
- ```json package.json
- {
- "scripts": {
- // Add a new command
- "baml-generate": "baml-cli generate",
- // Always call baml-generate on every build.
-        "build": "npm run baml-generate && tsc --build"
- }
- }
- ```
-
-
- If `baml_client` doesn't exist, make sure to run `npm run baml-generate`
-
- ```typescript index.ts
- import {b} from "baml_client"
- import type {Resume} from "baml_client/types"
-
-    async function Example(raw_resume: string): Promise<Resume> {
-      // BAML's internal parser guarantees ExtractResume
-      // always returns a Resume type
- const response = await b.ExtractResume(raw_resume);
- return response;
- }
-
-    async function ExampleStream(raw_resume: string): Promise<Resume> {
- const stream = b.stream.ExtractResume(raw_resume);
- for await (const msg of stream) {
- console.log(msg) // This will be a Partial type
- }
-
- // This is guaranteed to be a Resume type.
-      return await stream.getFinalResponse();
- }
- ```
-
-
\ No newline at end of file
diff --git a/docs/docs_old/home/example_python.mdx b/docs/docs_old/home/example_python.mdx
deleted file mode 100644
index 9005b1e76..000000000
--- a/docs/docs_old/home/example_python.mdx
+++ /dev/null
@@ -1,77 +0,0 @@
----
-title: "Python Installation"
----
-
-Here's a sample repository:
-https://github.com/BoundaryML/baml-examples/tree/main/python-fastapi-starter
-
-To set up BAML in Python, do the following:
-
-
-
- https://marketplace.visualstudio.com/items?itemName=boundary.BAML
-
- - syntax highlighting
- - testing playground
- - prompt previews
-
-
-    In your VSCode User Settings, we highly recommend adding this to get better autocomplete for Python in general, not just BAML.
-
- ```json
- {
- "python.analysis.typeCheckingMode": "basic"
- }
- ```
-
-
-
- ```bash
- pip install baml-py
- ```
-
-
- This will give you some starter BAML code in a `baml_src` directory.
-
- ```bash
- baml-cli init
- ```
-
-
-
-    This command will help you convert `.baml` files to `.py` files. Every time you modify your `.baml` files,
-    you must re-run this command and regenerate the `baml_client` folder.
-
-
- If you download our [VSCode extension](https://marketplace.visualstudio.com/items?itemName=Boundary.baml-extension), it will automatically generate `baml_client` on save!
-
-
- ```bash
- baml-cli generate
- ```
-
-
- If `baml_client` doesn't exist, make sure to run the previous step!
-
- ```python main.py
- from baml_client import b
- from baml_client.types import Resume
-
- async def example(raw_resume: str) -> Resume:
- # BAML's internal parser guarantees ExtractResume
-        # always returns a Resume type
- response = await b.ExtractResume(raw_resume)
- return response
-
- async def example_stream(raw_resume: str) -> Resume:
- stream = b.stream.ExtractResume(raw_resume)
- async for msg in stream:
- print(msg) # This will be a PartialResume type
-
- # This will be a Resume type
-        final = await stream.get_final_response()
-
- return final
- ```
-
-
\ No newline at end of file
diff --git a/docs/docs_old/home/example_ruby.mdx b/docs/docs_old/home/example_ruby.mdx
deleted file mode 100644
index 81cca424f..000000000
--- a/docs/docs_old/home/example_ruby.mdx
+++ /dev/null
@@ -1,73 +0,0 @@
----
-title: "Ruby Installation"
----
-
-Here's a sample repository: [`BoundaryML/baml-examples`](https://github.com/BoundaryML/baml-examples/tree/main/ruby-example)
-
-To set up BAML in Ruby, do the following:
-
-
-
- https://marketplace.visualstudio.com/items?itemName=boundary.BAML
-
- - syntax highlighting
- - testing playground
- - prompt previews
-
-
- ```bash
- bundle init
- bundle add baml sorbet-runtime sorbet-struct-comparable
- ```
-
-
- This will give you some starter BAML code in a `baml_src` directory.
-
- ```bash
- bundle exec baml-cli init
- ```
-
-
-
-    This command will help you convert `.baml` files to `.rb` files. Every time you modify your `.baml` files,
-    you must re-run this command and regenerate the `baml_client` folder.
-
-
- If you download our [VSCode extension](https://marketplace.visualstudio.com/items?itemName=Boundary.baml-extension), it will automatically generate `baml_client` on save!
-
-
- ```bash
- bundle exec baml-cli generate
- ```
-
-
- If `baml_client` doesn't exist, make sure to run the previous step!
-
- ```ruby main.rb
- require_relative "baml_client/client"
-
- def example(raw_resume)
- # r is an instance of Baml::Types::Resume, defined in baml_client/types
- r = Baml.Client.ExtractResume(resume: raw_resume)
-
- puts "ExtractResume response:"
- puts r.inspect
- end
-
- def example_stream(raw_resume)
- stream = Baml.Client.stream.ExtractResume(resume: raw_resume)
-
- stream.each do |msg|
- # msg is an instance of Baml::PartialTypes::Resume
- # defined in baml_client/partial_types
- puts msg.inspect
- end
-
- stream.get_final_response
- end
-
- example 'Grace Hopper created COBOL'
- example_stream 'Grace Hopper created COBOL'
- ```
-
-
diff --git a/docs/docs_old/home/faq.mdx b/docs/docs_old/home/faq.mdx
deleted file mode 100644
index a890a3943..000000000
--- a/docs/docs_old/home/faq.mdx
+++ /dev/null
@@ -1,36 +0,0 @@
----
-title: FAQs
----
-
-
-
-You don't! BAML files get converted into Python or Typescript using the BAML CLI. You can run the generated code locally or in the cloud.
-
-
-Contact us at contact@boundaryml.com for more details. We have a free tier available.
-
-
-Nope. We do not proxy LLM calls for you. BAML just generates a bunch of Python or TypeScript code you can run on your machine. If you opt in to our logging and analytics, we only send logs to our backend. Deploying your app is like deploying any other Python/TS application.
-
-
-
-BAML isn't a full-fledged language -- it's more of a configuration file / templating language. You can load it into your code as if it were YAML. Think of it as an extension of [Jinja](https://jinja.palletsprojects.com/en/3.1.x/) or Handlebars.
-
-Earlier we tried making a YAML-based SDK, and even a Python SDK, but they were not powerful enough.
-
-
-
- We are working on more tools like [PromptFiddle.com](https://promptfiddle.com) to make it easier to edit prompts for non-engineers, but we want to make sure all your prompts can be backed by a file in your codebase and versioned by Git.
-
-
-
- Typescript, Python, and Ruby
- Contact us for more
-
-
-
-
- The VSCode extension and BAML are free to use (Open Source as well!). We only charge for usage of
- Boundary Studio, our observability platform. Contact us for pricing. We do have a hobbyist tier and a startup tier available.
-
-
diff --git a/docs/docs_old/home/installation.mdx b/docs/docs_old/home/installation.mdx
deleted file mode 100644
index 819e00d3a..000000000
--- a/docs/docs_old/home/installation.mdx
+++ /dev/null
@@ -1,53 +0,0 @@
----
-title: Installation
----
-
-
-
- [https://marketplace.visualstudio.com/items?itemName=boundary.BAML](https://marketplace.visualstudio.com/items?itemName=boundary.Baml-extension)
-
- If you are using python, [enable typechecking in VSCode's](https://code.visualstudio.com/docs/python/settings-reference#_python-language-server-settings) `settings.json`:
- ```
-    "python.analysis.typeCheckingMode": "basic"
- ```
-
-
-
- ```bash Python
- pip install baml-py
- ```
-
- ```bash Typescript
- npm install @boundaryml/baml
- ```
-
-
-
-
- ```bash Python
- # Should be installed via pip install baml-py
- baml-cli init
- ```
-
- ```bash Typescript (npx)
- npx baml-cli init
- ```
-
- ```bash Typescript (pnpx)
- pnpx baml-cli init
- ```
-
-
-
- - [PromptFiddle](https://promptfiddle.com): Interactive examples to learn BAML. (recommended)
- - [BAML Tutorials](/docs/guides): Advanced guides on using BAML.
- - [BAML Syntax](/v3/syntax): Documentation for BAML syntax.
- - [BAML Starters for NextJS and FastAPI](https://github.com/BoundaryML/baml-examples/tree/main)
-
-
-
-## Ensure BAML extension can generate your Python / TS client
-
-Save a `.baml` file using VSCode, and you should see a successful generation message pop up!
-
-You can also run `baml-cli generate --from path-to-baml-src` to generate the client code manually.
\ No newline at end of file
diff --git a/docs/docs_old/home/overview.mdx b/docs/docs_old/home/overview.mdx
deleted file mode 100644
index d0e98c7c4..000000000
--- a/docs/docs_old/home/overview.mdx
+++ /dev/null
@@ -1,58 +0,0 @@
----
-title: What is BAML?
-"og:description": BAML is a domain-specific langauge to write better and cleaner LLM functions.
-"og:image": https://mintlify.s3-us-west-1.amazonaws.com/gloo/images/v3/AITeam.png
-"twitter:image": https://mintlify.s3-us-west-1.amazonaws.com/gloo/images/v3/AITeam.png
----
-
-An LLM function is a prompt template with some defined input variables, and a specific output type like a class, enum, union, optional string, etc.
-
-**BAML is a domain-specific language to write better and cleaner LLM functions.**
-
-
-With BAML you can write and test a complex LLM function in 1/10 of the time it takes to setup a python LLM testing environment.
-
-## Try it out in the playground -- [PromptFiddle.com](https://promptfiddle.com)
-
-Share your creations and ask questions in our [Discord](https://discord.gg/BTNBeXGuaS).
-
-
-## Demo video
-
-
-
-## Features
-
-### Language features
-- **Python / Typescript / Ruby support**: Plug-and-play BAML with other languages
-- **JSON correction**: BAML fixes bad JSON returned by LLMs (e.g. unquoted keys, newlines, comments, extra quotes, and more)
-- **Wide model support**: Ollama, OpenAI, Anthropic, Gemini. Tested on small models like Llama2
-- **Streaming**: Stream structured partial outputs
-- **Resilience and fallback features**: Add retries and redundancy to your LLM calls
-
-### Developer Experience
-- **Stability**: BAML works the same in EVERY language we support. No more "it works in python but not in Typescript"
-- **Type safety**: BAML is a typed language, so you get type errors before you run your LLM, and autocomplete in your IDE
-- **Realtime Prompt Previews**: See the full prompt always, even if it has loops and conditionals
-- **Testing support**: Test functions in the playground with 1 click
-
-### Production features
-- **Observability Platform**: Use Boundary Studio to visualize your functions and replay production requests with 1 click
-
-## Companies using BAML
-
-- [Zenfetch](https://zenfetch.com/) - ChatGPT for your bookmarks
-- [Vetrec](https://www.vetrec.io/) - AI-powered Clinical Notes for Veterinarians
-- [MagnaPlay](https://www.magnaplay.com/) - Production-quality machine translation for games
-- [Aer Compliance](https://www.aercompliance.com/) - AI-powered compliance tasks
-- [Haven](https://www.usehaven.ai/) - Automate Tenant communications with AI
-- [Muckrock](https://www.muckrock.com/) - FOIA request tracking and filing
-- and more! [Let us know](https://calendly.com/boundaryml/meeting-with-founders) if you want to be showcased or want to work with us 1-1 to solve your usecase.
-
-## Starter projects
-
-- [BAML + NextJS 14 + Streaming](https://github.com/BoundaryML/baml-examples/tree/main/nextjs-starter)
-- [BAML + FastAPI + Streaming](https://github.com/BoundaryML/baml-examples/tree/main/fastapi-starter)
-
-## First steps
-We recommend checking the examples in [PromptFiddle.com](https://promptfiddle.com). Once you're ready to start, [install the toolchain](./installation) and read the [guides](../guides/overview).
diff --git a/docs/docs_old/home/roadmap.mdx b/docs/docs_old/home/roadmap.mdx
deleted file mode 100644
index ea05334a8..000000000
--- a/docs/docs_old/home/roadmap.mdx
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: "Roadmap"
----
-
-### Language Support
-
-Features are available in all languages at equal parity unless otherwise noted.
-
-| Language Support | Status | Notes |
-| ---------------- | ------ | ----------------------------------- |
-| Python | ✅ | |
-| TypeScript | ✅ | |
-| Ruby | 🚧 | Alpha release, contact us to use it |
-
-Contact us on Discord if you have a language you'd like to see supported.
diff --git a/docs/docs_old/home/running-tests.mdx b/docs/docs_old/home/running-tests.mdx
deleted file mode 100644
index 2e3eb0fe1..000000000
--- a/docs/docs_old/home/running-tests.mdx
+++ /dev/null
@@ -1,26 +0,0 @@
----
-title: "Running Tests"
----
-
-## Using the playground
-
-Use the playground to run tests against individual function impls.
-
-
-
-## From BAML Studio
-
-Coming soon
-You can also create tests from production logs in BAML Studio. Any weird or atypical
-user inputs can be used to create a test case with just 1 click.
-
-## Programmatically
-
-Tests can also be defined using common testing frameworks like pytest.
diff --git a/docs/docs_old/syntax/class.mdx b/docs/docs_old/syntax/class.mdx
deleted file mode 100644
index f5b0fc74b..000000000
--- a/docs/docs_old/syntax/class.mdx
+++ /dev/null
@@ -1,67 +0,0 @@
----
-title: "class"
----
-
-Classes consist of a name, a list of properties, and their [types](/docs/syntax/type).
-In the context of LLMs, classes describe the type of the variables you can inject into prompts and extract out from the response. In Python, classes are represented by [pydantic](https://pydantic-docs.helpmanual.io/) models.
-
-
-```rust BAML
-class Foo {
- property1 string
- property2 int?
- property3 Bar[]
- property4 MyEnum
-}
-```
-
-```python Python Equivalent
-from typing import List, Optional
-
-from pydantic import BaseModel
-from path.to.bar import Bar
-from path.to.my_enum import MyEnum
-
-class Foo(BaseModel):
-    property1: str
-    property2: Optional[int] = None
- property3: List[Bar]
- property4: MyEnum
-```
-
-```typescript Typescript Equivalent
-import z from "zod";
-import { BarZod } from "./path/to/bar";
-import { MyEnumZod } from "./path/to/my_enum";
-
-const FooZod = z.object({
- property1: z.string(),
- property2: z.number().int().nullable().optional(),
- property3: z.array(BarZod),
- property4: MyEnumZod,
-});
-
-type Foo = z.infer<typeof FooZod>;
-```
-
-
-
-## Properties
-
-Classes may have any number of properties.
-Property names must follow:
-
-- Must start with a letter
-- Must contain only letters, numbers, and underscores
-- Must be unique within the class
-
-The type of a property can be any [supported type](/docs/syntax/type)
-
-### Default values
-
-- Not yet supported. For optional properties, the default value is `None` in Python.
-
-## Inheritance
-
-Not supported. Like Rust, we take the stance that [composition is better than inheritance](https://www.digitalocean.com/community/tutorials/composition-vs-inheritance).
-
-## aliases, descriptions
-Classes support aliases, descriptions, and other kinds of attributes. See the [prompt engineering docs](./prompt_engineering/class)
diff --git a/docs/docs_old/syntax/client/client.mdx b/docs/docs_old/syntax/client/client.mdx
deleted file mode 100644
index fd309a96c..000000000
--- a/docs/docs_old/syntax/client/client.mdx
+++ /dev/null
@@ -1,229 +0,0 @@
----
-title: client
----
-
-A **client** is the mechanism by which a function calls an LLM.
-
-## Syntax
-
-```rust
-client Name {
- provider ProviderName
- options {
- // ...
- }
-}
-```
-
-- `Name`: The name of the client (can be any [a-zA-Z], numbers or `_`). Must start with a letter.
-
-## Properties
-
-| Property | Type | Description | Required |
-| -------------- | -------------------- | -------------------------------------------------- | -------- |
-| `provider` | name of the provider | The provider to use. | Yes |
-| `options` | key-value pair | These are passed through directly to the provider. | No |
-| `retry_policy` | name of the policy | [Learn more](/docs/syntax/client/retry) | No |
-
-## Providers
-
-BAML ships with the following providers (you can also write your own!):
-
-- LLM client providers
- - `openai`
- - `azure-openai`
- - `anthropic`
- - `google-ai`
- - `ollama`
-- Composite client providers
- - `fallback`
- - `round-robin`
-
-There are two primary types of LLM clients: chat and completion. BAML abstracts
-away the differences between these two types of LLMs by putting that logic in
-the clients.
-
-You can call a chat client with a single completion prompt and it will
-automatically map it to a chat prompt. Similarly you can call a completion
-client with multiple chat prompts and it will automatically map it to a
-completion prompt.
-
-### OpenAI/Azure
-
-Provider names:
-
-- `openai-azure`
-
-You must pick the right provider for the type of model you are using. For
-example, if you are using a GPT-3 model, you must use a `chat` provider, but if
-you're using the instruct model, you must use a `completion` provider.
-
-You can see all models supported by OpenAI [here](https://platform.openai.com/docs/models).
-
-Accepts any options as defined by [OpenAI/Azure SDK](https://github.com/openai/openai-python/blob/9e6e1a284eeb2c20c05a03831e5566a4e9eaba50/src/openai/types/chat/completion_create_params.py#L28)
-
-See [Azure Docs](https://learn.microsoft.com/en-us/azure/ai-services/openai/quickstart?tabs=command-line,python&pivots=programming-language-python#create-a-new-python-application) to learn how to get your Azure API key.
-
-```rust
-// A client that uses the OpenAI chat API.
-client MyGPT35Client {
- // Since we're using a GPT-3 model, we must use a chat provider.
- provider openai
- options {
- model gpt-3.5-turbo
- // Set the api_key parameter to the OPENAI_API_KEY environment variable
- api_key env.OPENAI_API_KEY
- }
-}
-
-// A client that uses the OpenAI chat API.
-client MyAzureClient {
- // I configured the deployment to use a GPT-3 model,
- // so I must use a chat provider.
- provider openai-azure
- options {
- api_key env.AZURE_OPENAI_KEY
- // This may change in the future
- api_version "2023-05-15"
- api_type azure
- azure_endpoint env.AZURE_OPENAI_ENDPOINT
- model "gpt-35-turbo-default"
- }
-}
-```
-
-
-### Anthropic
-
-Provider names:
-
-- `anthropic`
-
-Accepts any options as defined by [Anthropic SDK](https://github.com/anthropics/anthropic-sdk-python/blob/fc90c357176b67cfe3a8152bbbf07df0f12ce27c/src/anthropic/types/completion_create_params.py#L20)
-
-```rust
-client MyClient {
-  provider anthropic
- options {
- model claude-2
- max_tokens_to_sample 300
- }
-}
-```
-### Google
-
-Provider names:
-- `google-ai`
-
-Accepts any options as defined by the [Gemini SDK](https://ai.google.dev/gemini-api/docs/get-started/tutorial?lang=rest#configuration).
-
-```rust
-client MyGoogleClient {
- provider google-ai
- options{
- model "gemini-1.5-pro-001"
- }
-}
-```
-
-This is not the Vertex AI Gemini API, but the Google Generative AI Gemini API, which supports the same models but at a different endpoint.
-
-### Ollama
-
-- BAML Python Client >= 0.18.0
-- BAML Typescript Client >= 0.0.6
-
-Provider names:
-
-- `ollama`
-
-Accepts any options as defined by [Ollama SDK](https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-chat-completion).
-
-```rust
-client MyOllamaClient {
- provider ollama
- options {
- model llama2
- }
-}
-```
-#### Requirements
-
-1. For Ollama, in your terminal run `ollama serve`
-2. In another window, run `ollama run llama2` (or your model), and you should be good to go.
-3. If your Ollama port is not 11434, you can specify the endpoint manually.
-
-```rust
-client MyClient {
- provider ollama
- options {
- model llama2
- options {
- temperature 0
-      base_url "http://localhost:11434" // Default port is 11434
- }
- }
-}
-```
-
-
-### Fallback
-
-The `baml-fallback` provider allows you to define a resilient client, by
-specifying strategies for re-running failed requests. See
-[Fallbacks/Redundancy](/docs/syntax/client/redundancy) for more information.
-
-### Round Robin
-
-The `round-robin` provider allows you to load-balance your requests across
-multiple clients. Here's an example:
-
-```rust
-client MyClient {
- provider round-robin
- options {
- strategy [
- MyGptClient
- MyAnthropicClient
- ]
- }
-}
-```
-
-This will alternate between `MyGptClient` and `MyAnthropicClient` for successive
-requests, starting from a randomly selected client (so that if you run multiple
-instances of your application, they won't all start with the same client).
-
-If you want to control which client is used for the first request, you can specify
-a `start` index, which tells BAML to start with the client at index `start`, like
-so:
-
-```rust
-client MyClient {
-  provider round-robin
- options {
- start 1
- strategy [
- MyGptClient
- MyAnthropicClient
- ]
- }
-}
-```
-
-## Other providers
-You can use the `openai` provider if the provider you're trying to use has the same ChatML response format (i.e. HuggingFace via their Inference Endpoint or your own local endpoint)
-
-Some providers ask you to add a `base_url`, which you can do like this:
-
-```rust
-client MyClient {
- provider openai
- options {
- model some-custom-model
- api_key env.OPEN
- base_url "https://some-custom-url-here"
- }
-}
-```
\ No newline at end of file
diff --git a/docs/docs_old/syntax/client/redundancy.mdx b/docs/docs_old/syntax/client/redundancy.mdx
deleted file mode 100644
index 3e11ce8f8..000000000
--- a/docs/docs_old/syntax/client/redundancy.mdx
+++ /dev/null
@@ -1,52 +0,0 @@
----
-title: Fallbacks/Redundancy
----
-
-Many LLM calls fail due to transient errors. Setting up a fallback allows you to switch to a different LLM when prior LLMs fail (e.g. outage, high latency, rate limits, etc.).
-
-To accomplish this, instead of new syntax, you can simply define a `client` using a `baml-fallback` provider.
-
-The `baml-fallback` provider takes a `strategy` option, which is a list of `client`s to try in order. If the first client fails, the second client is tried, and so on.
-
-```rust
-client MySafeClient {
- provider baml-fallback
- options {
- // First try GPT4 client, if it fails, try GPT35 client.
- strategy [
- GPT4,
- GPT35
- // If you had more clients, you could add them here.
- // Anthropic
- ]
- }
-}
-
-client GPT4 {
- provider baml-openai-chat
- options {
- // ...
- }
-}
-
-client GPT35 {
- provider baml-openai-chat
- options {
- // ...
- }
-}
-```
-
-Fallbacks are triggered on any error.
-
-Error codes are:
-| Name | Error Code |
-| ----------------- | -------------------- |
-| BAD_REQUEST | 400 |
-| UNAUTHORIZED | 401 |
-| FORBIDDEN | 403 |
-| NOT_FOUND | 404 |
-| RATE_LIMITED | 429 |
-| INTERNAL_ERROR | 500 |
-| SERVICE_UNAVAILABLE | 503 |
-| UNKNOWN | 1 |
diff --git a/docs/docs_old/syntax/client/retry.mdx b/docs/docs_old/syntax/client/retry.mdx
deleted file mode 100644
index 0584bc561..000000000
--- a/docs/docs_old/syntax/client/retry.mdx
+++ /dev/null
@@ -1,80 +0,0 @@
----
-title: retry_policy
----
-
-Many LLM calls fail due to transient errors. The retry policy allows you to configure how many times and how the client should retry a failed operation before giving up.
-
-## Syntax
-
-```rust
-retry_policy PolicyName {
- max_retries int
- strategy {
- type constant_delay
- delay_ms int? // defaults to 200
- } | {
- type exponential_backoff
- delay_ms int? // defaults to 200
- max_delay_ms int? // defaults to 10000
- multiplier float? // defaults to 1.5
- }
-}
-```
-
-### Properties
-
-| Name | Description | Required |
-| ------------- | ----------------------------------------------------------------------- | -------------------------------------- |
-| `max_retries` | The maximum number of times the client should retry a failed operation. | YES |
-| `strategy`    | The strategy to use for retrying failed operations.                     | NO, defaults to `constant_delay(200ms)` |
-
-You can read more about specific retry strategy param:
-
-- [constant_delay](https://tenacity.readthedocs.io/en/latest/api.html?highlight=wait_exponential#tenacity.wait.wait_fixed)
-- [exponential_backoff](https://tenacity.readthedocs.io/en/latest/api.html?highlight=wait_exponential#tenacity.wait.wait_exponential)
-
-## Conditions for retrying
-
-If the client encounters a transient error, it will retry the operation. The following errors are considered transient:
-| Name | Error Code | Retry |
-| ----------------- | -------------------- | --- |
-| BAD_REQUEST | 400 | NO |
-| UNAUTHORIZED | 401 | NO |
-| FORBIDDEN | 403 | NO |
-| NOT_FOUND | 404 | NO |
-| RATE_LIMITED | 429 | YES |
-| INTERNAL_ERROR | 500 | YES |
-| SERVICE_UNAVAILABLE | 503 | YES |
-| UNKNOWN | 1 | YES |
-
-The UNKNOWN error code is used when the client encounters an error that is not listed above. This is usually a temporary error, but it is not guaranteed.
-
-## Example
-
-
- Each client may have a different retry policy, or no retry policy at all. But
- you can also reuse the same retry policy across multiple clients.
-
-
-```rust
-// in a .baml file
-
-retry_policy MyRetryPolicy {
- max_retries 5
- strategy {
- type exponential_backoff
- }
-}
-
-// A client that uses the OpenAI chat API.
-client MyGPT35Client {
- provider baml-openai-chat
- // Set the retry policy to the MyRetryPolicy defined above.
- // Any impl that uses this client will retry failed operations.
- retry_policy MyRetryPolicy
- options {
- model gpt-3.5-turbo
- api_key env.OPENAI_API_KEY
- }
-}
-```
diff --git a/docs/docs_old/syntax/comments.mdx b/docs/docs_old/syntax/comments.mdx
deleted file mode 100644
index d29f2ebed..000000000
--- a/docs/docs_old/syntax/comments.mdx
+++ /dev/null
@@ -1,36 +0,0 @@
----
-title: comments
----
-
-## Single line / trailing comments
-
-Denoted by `//`.
-
-```rust
-// hello there!
-foo // this is a trailing comment
-```
-
-## Docstrings
-
-To add a docstring to any block, use `///`.
-
-```rust
-/// This is a docstring for a class
-class Foo {
- /// This is a docstring for a property
- property1 string
-}
-```
-
-## Multiline comments
-
-Multiline comments are denoted via `{//` and `//}`.
-
-```rust
-{// this is a multiline comment
- foo
- bar
-//}
-```
-
diff --git a/docs/docs_old/syntax/enum.mdx b/docs/docs_old/syntax/enum.mdx
deleted file mode 100644
index 7b0feb15a..000000000
--- a/docs/docs_old/syntax/enum.mdx
+++ /dev/null
@@ -1,39 +0,0 @@
----
-title: "enum"
----
-
-Enums are useful for classification tasks. BAML has helper functions that serialize an enum into your prompt as a neatly formatted list (more on that later); a short Python usage sketch follows at the end of this page.
-
-To define your own custom enum in BAML:
-
-
-```rust BAML
-enum MyEnum {
- Value1
- Value2
- Value3
-}
-```
-
-```python Python Equivalent
-from enum import StrEnum
-
-class MyEnum(StrEnum):
- Value1 = "Value1"
- Value2 = "Value2"
- Value3 = "Value3"
-```
-
-```typescript Typescript Equivalent
-enum MyEnum {
- Value1 = "Value1",
- Value2 = "Value2",
- Value3 = "Value3",
-}
-```
-
-
-
-- You may have as many values as you'd like.
-- Values may not be duplicated or empty.
-- Values may not contain spaces or special characters and must not start with a number.
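-
-For reference, here is a minimal sketch of how a generated enum is consumed from Python. `ClassifyIntoMyEnum` is a hypothetical BAML function declared with a `-> MyEnum` return type; any function returning the enum works the same way.
-
-```python
-import asyncio
-
-from baml_client import baml as b           # generated client
-from baml_client.types import MyEnum        # enums defined in .baml files
-
-async def main():
-    # Hypothetical function: the deserializer maps the LLM output
-    # onto the generated MyEnum values.
-    result = await b.ClassifyIntoMyEnum("some input")
-    if result == MyEnum.Value1:
-        print("got Value1")
-
-asyncio.run(main())
-```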
diff --git a/docs/docs_old/syntax/function-testing.mdx b/docs/docs_old/syntax/function-testing.mdx
deleted file mode 100644
index 9799409bf..000000000
--- a/docs/docs_old/syntax/function-testing.mdx
+++ /dev/null
@@ -1,279 +0,0 @@
----
-title: "Unit Testing"
----
-
-There are two types of tests you may want to run on your AI functions:
-
-- Unit Tests: Tests a single AI function
-- Integration Tests: Tests a pipeline of AI functions and potentially business logic
-
-We support both types of tests using BAML.
-
-## Using the playground
-
-Use the playground to run tests against individual functions
-
-
-
-## Baml CLI
-
-You can run tests defined in the `__tests__` folder (see below) from the command line using the BAML CLI.
-
-## From BAML Studio
-
-Coming soon: you will also be able to create tests from production logs in BAML Studio. Any weird or atypical
-user inputs can be used to create a test case with just 1 click.
-
-## JSON Files (`__tests__` folder)
-
-Unit tests created by the playground are stored in the `__tests__` folder.
-
-The project structure should look like this:
-
-```bash
-.
-├── baml_client/
-└── baml_src/
- ├── __tests__/
- │ ├── YourAIFunction/
- │ │ ├── test_name_monkey.json
- │ │ └── test_name_cricket.json
- │ └── YourAIFunction2/
- │ └── test_name_jellyfish.json
- ├── main.baml
- └── foo.baml
-```
-
-You can manually create tests by creating a folder for each function you want to test. Inside each folder, create a json file for each test case you want to run. The json file should be named `test_name.json` where `test_name` is the name of the test case.
-
-To see the structure of the JSON file, you can create a test using the playground and then copy the JSON file into your project.
-
-
- The BAML compiler reads the `__tests__` folder and generates a pytest file for
- you so you don't have to manually write test boilerplate code.
-
-
-## Programmatic Testing (using pytest)
-
-For Python, you can leverage **pytest** to run tests. All you need to do is add a **@baml_test** decorator to your test functions to get your test data visualized on the BAML dashboard.
-
-### Running tests
-
-
- Make sure you are running these commands from your python virtual environment
- (or **`poetry shell`** if you use poetry)
-
-
-```bash
-# From your project root
-# Lists all tests
-pytest -m baml_test --collect-only
-```
-
-```bash
-# From your project root
-# Runs all tests
-# For every function
-pytest -m baml_test
-```
-
-To run tests for a subdirectory
-
-```bash
-# From your project root
-# e.g. the generated __tests__ folder (note the double underscores)
-pytest -m baml_test ./baml_src/__tests__/
-```
-
-To run tests that have a specific name or group name
-
-```bash
-# From your project root
-pytest -m baml_test -k test_group_name
-```
-
-You can read more about the `-k` arg of pytest in the [PyTest Docs](https://docs.pytest.org/en/latest/example/markers.html#using-k-expr-to-select-tests-based-on-their-name).
-
-`-k` matches any test whose name contains the given expression.
-
-To run a specific test case in a test group
-
-```bash
-# From your project root
-pytest -m baml_test -k 'test_group_name and test_case_name'
-```
-
-### Unit Test an AI Function
-
-Section in progress.
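-
-Until this section is written, here is a minimal single-function test sketch, following the same pattern as the pipeline test below. It assumes a `ClassifySentiment` function returning `Sentiment` is declared in your `.baml` files:
-
-```python
-# Import your baml-generated LLM functions
-from baml_client import baml as b
-# Import testing library
-from baml_client.testing import baml_test
-# Import any custom types defined in .baml files
-from baml_client.types import Sentiment
-
-@baml_test
-async def test_classify_sentiment():
-    response = await b.ClassifySentiment("I am ecstatic")
-    assert response == Sentiment.POSITIVE
-```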
-
-### Integration Tests (test a pipeline calling multiple functions)
-
-
- TypeScript support for testing is still in closed alpha - please contact us if you would like to use it!
-
-
-
-
-```python Test Pipeline
-# Import your baml-generated LLM functions
-from baml_client import baml as b
-
-# Import testing library
-from baml_client.testing import baml_test
-
-# Import any custom types defined in .baml files
-from baml_client.types import Sentiment
-
-# Mark this as a baml test (recorded on dashboard and does some setup)
-@baml_test
-async def test_pipeline():
- message = "I am ecstatic"
- response = await b.ClassifySentiment(message)
- assert response == Sentiment.POSITIVE
- response = await b.GetHappyResponse(message)
-```
-
-
-
-
- Make sure your test file, the Test class and/or test function is prefixed with
- `test` or `Test` respectively. Otherwise, pytest will not pick up your tests.
-
-
-### Parameterized Tests
-
-Parameterized tests allow you to declare a list of inputs and expected outputs for a test case. BAML will run the test for each input and compare the output to the expected output.
-
-```python
-import pytest
-
-from baml_client.testing import baml_test
-# Import your baml-generated LLM functions
-from baml_client import baml
-# Import any custom types defined in .baml files
-from baml_client.types import Sentiment
-
-@baml_test
-@pytest.mark.parametrize(
- "input, expected_output",
- [
- ("I am ecstatic", Sentiment.POSITIVE),
- ("I am sad", Sentiment.NEGATIVE),
- ("I am angry", Sentiment.NEGATIVE),
- ],
-)
-async def test_sentiments(input, expected_output):
- response = await baml.ClassifySentiment(input)
- assert response == expected_output
-```
-
-This will generate 3 test cases on the dashboard, one for each input.
-
-### Using custom names for each test
-
-The parametrize decorator also allows you to specify a custom name for each test case. See below for how we name each test case using the `ids` parameter.
-
-```python
-import pytest
-
-from baml_client import baml as b
-from baml_client.types import Sentiment, IClassifySentiment
-
-test_cases = [
- {"input": "I am ecstatic", "expected_output": Sentiment.POSITIVE, "id": "ecstatic-test"},
- {"input": "I am sad", "expected_output": Sentiment.NEGATIVE, "id": "sad-test"},
- {"input": "I am angry", "expected_output": Sentiment.NEGATIVE, "id": "angry-test"},
-]
-
-@b.ClassifySentiment.test
-@pytest.mark.parametrize(
- "test_case",
- test_cases,
- ids=[case['id'] for case in test_cases]
-)
-# Note the argument name "test_case" is set by the first parameter in the parametrize() decorator
-async def test_sentiments(ClassifySentimentImpl: IClassifySentiment, test_case):
- response = await ClassifySentimentImpl(test_case["input"])
- assert response == test_case["expected_output"]
-```
-
-### Grouping Tests by Input Type
-
-Alternatively, you can group things together logically by defining one test case or test class per input type you're testing. In our case, we'll split all positive sentiments into their own group.
-
-```python
-import pytest
-
-from baml_client.testing import baml_test
-# Import your baml-generated LLM functions
-from baml_client import baml
-# Import any custom types defined in .baml files
-from baml_client.types import Sentiment
-
-@baml_test
-@pytest.mark.asyncio
-@pytest.mark.parametrize(
- # Note we only need to pass in one variable to the test, the "input".
- "input",
- [
- "I am ecstatic",
- "I am super happy!"
- ],
-)
-class TestHappySentiments:
-    async def test_sentiments(self, input):
-        response = await baml.ClassifySentiment(input)
-        assert response == Sentiment.POSITIVE
-
-@baml_test
-@pytest.mark.asyncio
-@pytest.mark.parametrize(
- # Note we only need to pass in one variable to the test, the "input".
- "input",
- [
- "I am sad",
- "I am angry"
- ],
-)
-class TestSadSentiments:
-    async def test_sentiments(self, input):
-        response = await baml.ClassifySentiment(input)
-        assert response == Sentiment.NEGATIVE
-```
-
-Alternatively you can just write a test function for each input type.
-
-```python
-import pytest
-
-from baml_client.testing import baml_test
-from baml_client import baml
-from baml_client.types import Sentiment
-
-@baml_test
-@pytest.mark.asyncio
-@pytest.mark.parametrize(
- "input",
- [
- "I am ecstatic",
- "I am super happy!",
- "I am thrilled",
- "I am overjoyed",
- ],
-)
-async def test_happy_sentiments(input):
- response = await baml.ClassifySentiment(input)
- assert response == Sentiment.POSITIVE
-
-@baml_test
-@pytest.mark.asyncio
-@pytest.mark.parametrize(
- "input",
- [
- "I am sad",
- "I am angry",
- "I am upset",
- "I am frustrated",
- ],
-)
-async def test_sad_sentiments(input):
- response = await baml.ClassifySentiment(input)
- assert response == Sentiment.NEGATIVE
-```
diff --git a/docs/docs_old/syntax/function.mdx b/docs/docs_old/syntax/function.mdx
deleted file mode 100644
index 338480230..000000000
--- a/docs/docs_old/syntax/function.mdx
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: "function"
----
-
-A **function** is the contract between the application and the AI model. It defines the desired **input** and **output**.
-
-
-
-
-With BAML, you can modify the implementation of a function and keep the application logic that uses the
-function unchanged.
-
-Check out [PromptFiddle](https://promptfiddle.com) to see various BAML function examples.
\ No newline at end of file
diff --git a/docs/docs_old/syntax/generator.mdx b/docs/docs_old/syntax/generator.mdx
deleted file mode 100644
index fa788dc7d..000000000
--- a/docs/docs_old/syntax/generator.mdx
+++ /dev/null
@@ -1,21 +0,0 @@
----
-title: generator
----
-
-A `generator` block can be added anywhere in your .baml files to generate the `baml_client` in Python or TypeScript.
-
-We recommend running **baml init** to have this set up for you with sane defaults.
-
-Here is how you can add a generator block:
-
-```rust
-generator MyGenerator {
- output_type typescript // or python/pydantic, ruby
- output_dir ../
-}
-```
-
-| Property | Description | Options | Default |
-| ------------------- | ------------------------------------------------ | --------------------------------- | ---------------------------------------------- |
-| output_type | The language of the generated client | python/pydantic, ruby, typescript | |
-| output_dir | The directory where we'll output the generated baml_client | | ../ |
diff --git a/docs/docs_old/syntax/overview.mdx b/docs/docs_old/syntax/overview.mdx
deleted file mode 100644
index a97ee35d5..000000000
--- a/docs/docs_old/syntax/overview.mdx
+++ /dev/null
@@ -1,62 +0,0 @@
----
-title: BAML Project Structure
----
-
-A BAML project has the following structure:
-
-```bash
-.
-├── baml_client/ # Generated code
-├── baml_src/ # Prompts live here
-│ └── foo.baml
-# The rest of your project (not generated nor used by BAML)
-├── app/
-│ ├── __init__.py
-│ └── main.py
-└── pyproject.toml
-
-```
-
-1. `baml_src/` is the directory where you write your BAML files with the AI
- function declarations, prompts, retry policies, etc. It also contains
- [generator](/syntax/generator) blocks which configure how and where to
- transpile your BAML code.
-
-2. `baml_client/` is the directory where BAML will generate code, and where you'll
- import the generated code from.
-
-
-
-```python Python
-from baml_client import baml as b
-
-await b.YourAIFunction()
-```
-
-```typescript TypeScript
-import b from "@/baml_client";
-
-await b.YourAIFunction();
-```
-
-
-
-
- **You should never edit any files inside baml_client directory** as the whole
- directory gets regenerated on every `baml build` (auto runs on save if using
- the VSCode extension).
-
-
-
- If you ever run into any issues with the generated code (like merge
- conflicts), you can always delete the `baml_client` directory and it will get
- regenerated automatically once you fix any other conflicts in your `.baml`
- files.
-
-
-### imports
-
-BAML by default has global imports. Every entity declared in any `.baml` file
-is available to all other `.baml` files under the same `baml_src` directory.
-You **can** have multiple `baml_src` directories, but no promises on how the
-VSCode extension will behave (yet).
diff --git a/docs/docs_old/syntax/prompt_engineering/overview.mdx b/docs/docs_old/syntax/prompt_engineering/overview.mdx
deleted file mode 100644
index 9f044879f..000000000
--- a/docs/docs_old/syntax/prompt_engineering/overview.mdx
+++ /dev/null
@@ -1,89 +0,0 @@
----
-title: Prompt Syntax
----
-
-Prompts are written using the [Jinja templating language](https://jinja.palletsprojects.com/en/3.0.x/templates/).
-
-There are **3 jinja macros** (or functions) that we have included in the language for you. We recommend viewing what they do using the VSCode preview (or in [promptfiddle.com](https://promptfiddle.com)), so you can see the full string transform in real time.
-
-1. **`{{ _.role("user") }}`**: This divides up the string into different message roles.
-2. **`{{ ctx.output_format }}`**: This prints out the output format instructions for the prompt.
-You can add your own prefix instructions like this: `{{ ctx.output_format(prefix="Please please please format your output like this:")}}`. We have more parameters you can customize. Docs coming soon.
-3. **`{{ ctx.client }}`**: This prints out the client model the function is using.
-
-"ctx" is contextual information about the prompt (like the output format or client). "_." is a special namespace for other BAML functions.
-
-
-
-Here is what a prompt with jinja looks like using these macros:
-
-```rust
-enum Category {
- Refund
- CancelOrder
- TechnicalSupport
- AccountIssue
- Question
-}
-
-class Message {
- role string
- message string
-}
-
-
-function ClassifyConversation(messages: Message[]) -> Category[] {
- client GPT4Turbo
- prompt #"
- Classify this conversation:
- {% for m in messages %}
- {{ _.role(m.role) }}
- {{ m.message }}
- {% endfor %}
-
- Use the following categories:
-    {{ ctx.output_format }}
- "#
-}
-```
-
-### Template strings
-You can create your own typed templates using the `template_string` keyword, and call them from a prompt:
-
-```rust
-// Extract the logic out of the prompt:
-template_string PrintMessages(messages: Message[]) -> string {
- {% for m in messages %}
- {{ _.role(m.role) }}
- {{ m.message }}
- {% endfor %}
-}
-
-function ClassifyConversation(messages: Message[]) -> Category[] {
- client GPT4Turbo
- prompt #"
- Classify this conversation:
- {{ PrintMessages(messages) }}
-
- Use the following categories:
-    {{ ctx.output_format }}
- "#
-}
-```
-
-### Conditionals
-You can use these special variables to write conditionals, for example to change your prompt depending on the model:
-
- ```rust
- {% if ctx.client.name == "GPT4Turbo" %}
- // Do something
- {% endif %}
- ```
-
-You can use conditionals on your input objects as well:
-
- ```rust
- {% if messages[0].role == "user" %}
- // Do something
- {% endif %}
- ```
diff --git a/docs/docs_old/syntax/strings.mdx b/docs/docs_old/syntax/strings.mdx
deleted file mode 100644
index 657fa726d..000000000
--- a/docs/docs_old/syntax/strings.mdx
+++ /dev/null
@@ -1,66 +0,0 @@
----
-title: strings
----
-
-BAML treats strings as first-class citizens, to support struggle-free prompt engineering.
-
-## Quoted Strings
-
-This is a valid **inline string**, which is surrounded by double quotes.
-
-```llvm
-"Hello World"
-```
-
-## Unquoted Strings
-
-BAML also supports simple **unquoted in-line** strings. The string below is valid! These are useful for simple strings such as configuration options.
-
-```
-Hello World
-```
-
-Unquoted strings **may not** contain any of the following reserved characters (note this may change in the future):
-
-- Quotes: "double" or 'single'
-- At-signs: @
-- Curly braces: {}
-- Hashtags: #
-- Parentheses: ()
-- Brackets: []
-- Commas: ,
-- Newlines
-
-When in doubt, use a quoted string or a block string; the VSCode extension will warn you if there is a parsing issue.
-
-## Block Strings
-
-If a string is on multiple lines, it must be surrounded by #" and "#. This is called a **block string**.
-
-```llvm
-#"
-Hello
-World
-"#
-```
-
-Block strings are automatically dedented and stripped of the first and last newline. This means that the following renders the same as above:
-
-```llvm
-#"
- Hello
- World
-"#
-```
-
-### Code Strings
-
-If you need to include code snippets for documentation purposes in a BAML file, you can write:
-```llvm
-python#"
- print("Hello World")
- def foo():
- return 1
-"#
-```
-These are not functional code blocks; they can just be used for documentation purposes.
diff --git a/docs/docs_old/syntax/type-deserializer.mdx b/docs/docs_old/syntax/type-deserializer.mdx
deleted file mode 100644
index 461b97620..000000000
--- a/docs/docs_old/syntax/type-deserializer.mdx
+++ /dev/null
@@ -1,110 +0,0 @@
----
-title: Parsing and Deserialization
----
-
-BAML uses a custom `Deserializer` to parse a string into the desired type. **You don't have to do anything to enable the deserializer; it comes built in.**
-
-Instead of doing the following, you can rely on BAML to do the parsing for you.
-
-```python
-# Example parsing code you might be writing today
-# without baml
-import json
-
-openai_response_text = await openai.completions.create(
- ...
-)
-response = SomePydanticModel(**json.loads(openai_response_text))
-
-```
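-
-With BAML, the same call collapses to a typed function invocation. `SomeAIFunction` is whatever function you declared in your `.baml` files (the same placeholder used in the error-handling example below):
-
-```python
-from baml_client import baml as b
-
-# The deserializer runs automatically on the raw LLM output,
-# so `response` is already an instance of the declared output type.
-response = await b.SomeAIFunction(query="I want to buy a car")
-```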
-
-## Examples
-
-
-
-| LLM Output | Desired Type | Baml Output | How |
-| ------------------------------------------------------------------------------------------------------------------------------------------------ | ------------ | --------------- | ------------------------------------------------------------------------------------------ |
-| `great` | Style | `Style.GREAT` | We handle case insensitivity |
-| `"great"` | Style | `Style.GREAT` | We handle extra quotes |
-| `great` | Style[] | `[Style.GREAT]` | Non-array types are automatically wrapped in an array |
-| `{ "feeling": "great" }` | Style | `Style.GREAT` | When looking for a singular value, we can parse dictionaries of 1 keys as singular objects |
-|
Some text that goes before... \```json {"feeling": "great"} \``` Some text that came after
| Style | `Style.GREAT` | We can find the inner json object and parse it even when surrounded by lots of text |
-
-
-Note that we can apply the same parsing logic to any type, not just enums. For example, in the
-case of numbers, we can remove commas and parse the number. This page outlines all
-the rules we use to parse each type.
-
-
- The deserializer makes 0 external calls and runs fully locally!
-
-
-## Error handling
-
-All parsing errors are handled by the `Deserializer` and will raise a `DeserializerException`.
-
-
-
-```python Python
-from baml_client import baml as b
-from baml_client import DeserializerException
-
-try:
- response = await b.SomeAIFunction(query="I want to buy a car")
-except DeserializerException as e:
-    # The parser was not able to read the response as the expected type
- print(e)
-```
-
-```typescript TypeScript
-import b, { DeserializerException } from "@/baml_client";
-
-const main = async () => {
- try {
- await b.ClassifyMessage("I want to cancel my order");
- } catch (e) {
- if (e instanceof DeserializerException) {
-      // The parser was not able to read the response as the expected type
- console.log(e);
- }
- throw e;
- }
-};
-
-if (require.main === module) {
- main();
-}
-```
-
-
-
-## Primitive Types
-
-TODO: Include a section on how each type is parsed and coerced from other types.
-
-## Composite/Structured Types
-
-### enum
-
-**See:** [Prompt engineering > Enum > @alias](/docs/syntax/prompt_engineering/enum#deserialization-with-alias)
-
-### class
-
-**See:** [Prompt engineering > Class](/docs/syntax/class)
-
-### Optional (?)
-
-If the type is optional, the parser will attempt to parse the value as the inner type, returning `null` if parsing fails.
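-
-As a plain-Python illustration of this rule (not the actual BAML internals):
-
-```python
-from typing import Callable, Optional, TypeVar
-
-T = TypeVar("T")
-
-def parse_optional(raw: str, parse_inner: Callable[[str], T]) -> Optional[T]:
-    try:
-        return parse_inner(raw)
-    except ValueError:
-        return None  # a failed parse becomes null for optional types
-
-print(parse_optional("42", int))    # 42
-print(parse_optional("oops", int))  # None
-```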
-
-### Union (|)
-
-Unions are parsed in left-to-right order. The first type that successfully parses the value is returned.
-If no type is able to parse the value, a `DeserializerException` is raised.
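-
-Again as a plain-Python illustration (not the actual BAML internals):
-
-```python
-from typing import Any, Callable, Sequence
-
-def parse_union(raw: str, parsers: Sequence[Callable[[str], Any]]) -> Any:
-    # Try each member type in declaration order; the first success wins.
-    for parse in parsers:
-        try:
-            return parse(raw)
-        except ValueError:
-            continue
-    raise ValueError("no union member could parse the value")
-
-print(parse_union("1", [int, str]))  # 1   (int | string)
-print(parse_union("1", [str, int]))  # '1' (string | int)
-```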
-
-### List/Array ([])
-
-Lists parse each element as the element type declared for the list (see the sketch after these rules).
-
-- It will always return a list, even if the list is empty.
-- If an element fails to parse, it is skipped and not included in the final list.
-- If the value is not a list, the parser will attempt to parse the value as the type and return a list with a single element.
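-
-These rules can be illustrated with a plain-Python sketch (again, not the actual BAML internals):
-
-```python
-from typing import Any, Callable, List
-
-def parse_list(value: Any, parse_element: Callable[[Any], Any]) -> List[Any]:
-    # A non-list value is wrapped in a single-element list first.
-    items = value if isinstance(value, list) else [value]
-    result = []
-    for item in items:
-        try:
-            result.append(parse_element(item))
-        except ValueError:
-            continue  # elements that fail to parse are skipped
-    return result  # always a list, possibly empty
-
-print(parse_list(["1", "x", "3"], int))  # [1, 3]
-print(parse_list("7", int))              # [7]
-```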
diff --git a/docs/docs_old/syntax/type.mdx b/docs/docs_old/syntax/type.mdx
deleted file mode 100644
index 9789f0606..000000000
--- a/docs/docs_old/syntax/type.mdx
+++ /dev/null
@@ -1,268 +0,0 @@
----
-title: Supported Types
----
-
-## Primitive Types
-
-### ✅ bool
-
-- **When to use it:** When you need to represent a simple true/false condition.
-- **Syntax:** `bool`
-
-### ✅ int
-
-- **When to use it:** When you need whole-number values
-- **Syntax:** `int`
-
-### ✅ float
-
-- **When to use it:** When dealing with numeric values that can have a fractional part (like measurements or monetary values).
-- **Syntax:** `float`
-
-### ✅ string
-
-- **Syntax:** `string`
-
-### ✅ char
-
-- **When to use it:** When you need to represent a single letter, digit, or other symbols.
-- **Syntax:** `char`
-
-### ✅ null
-
-- **Syntax:** `null`
-
-### ✅ Images
-
-You can use an image like this:
-
-```rust
-function DescribeImage(myImg: image) -> string {
- client GPT4Turbo
- prompt #"
- {{ _.role("user")}}
- Describe the image in four words:
- {{ myImg }}
- "#
-}
-```
-
-
-### ✅ Audio
-We support audio inputs for models that accept them, such as Gemini Pro and Flash.
-You can use audio like this:
-
-```rust
-function DescribeAudio(myAudio: audio) -> string {
- client GPT4Turbo
- prompt #"
- {{ _.role("user")}}
- Describe the tone of this song in four words:
- {{ myAudio }}
- "#
-}
-```
-
-
-### ⚠️ bytes
-
-- Not yet supported. Use a `string[]` or `int[]` instead.
-
-### ⚠️ any/json
-
-- Not supported.
-
- We don't want to encourage its use as it defeats the purpose of having a
-  type system. If you really need it, for now use `string` and call
- `json.parse` yourself. Also, message us on discord so we can understand your
- use case and consider supporting it.
-
-
-### Dates/Times
-
-#### ⚠️ datetime
-
-- Not yet supported. Use a `string` or `int` (milliseconds since epoch) instead.
-
-#### ⚠️ datetime interval
-
-- Not yet supported. Use a `string` or `int` (milliseconds since epoch) instead.
-
-### ⚠️ Unit Values (currency, temperature, etc)
-
-Many times you may want to represent a number with a unit. For example, a
-temperature of 32 degrees Fahrenheit or a cost of $100.00.
-
-- Not yet supported. We recommend using a number (`int` or `float`) and having
- the unit be part of the variable name. For example, `temperature_fahrenheit`
- and `cost_usd` (see [@alias](/docs/syntax/class#alias)).
-
-
-
-
-## Composite/Structured Types
-
-### ✅ enum
-
-**See also:** [Enum](/docs/syntax/enum)
-
-A user-defined type consisting of a set of named constants.
-- **When to use it:** Use it when you need a model to choose from a known set of values, like in classification problems
-- **Syntax:**
-
-```rust
-enum Name {
- Value1
- Value2
-}
-```
-
-- **Example:**
-
-```rust
-enum Color {
- Red
- Green
- Blue
-}
-```
-
-### ✅ class
-
-**See also:** [Class](/docs/syntax/class)
-
-- **What it is:** User-defined complex data structures.
-- **When to use it:** When you need an LLM to call another function (e.g. OpenAI's function calling), you can model the function's parameters as a class. You can also get models to return complex structured data by using a class.
-- **Syntax:**
-
-```rust
-class ClassName {
- ...
-}
-```
-
-- **Example:**
-
-```rust
-class Car {
- model string
- year int
-}
-```
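-
-For reference, the generated Python equivalent is roughly a Pydantic model like this (a sketch; the exact generated code may differ):
-
-```python
-from pydantic import BaseModel
-
-class Car(BaseModel):
-    model: str
-    year: int
-```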
-
-### ✅ Optional (?)
-
-- **What it is:** A type that represents a value that might or might not be present.
-- **When to use it:** When a variable might not have a value and you want to explicitly handle its absence.
-- **Syntax:** `?`
-- **Example:** `int?` or `(MyClass | int)?`
-
-### ✅ Union (|)
-
-- **What it is:** A type that can hold one of several specified types.
-- **When to use it:** When a variable can legitimately be of more than one type. This can be helpful with function calling, where you want to return different types of data depending on which function should be called.
-- **Syntax:** `|`
-- **Example:** `int | string` or `(int | string) | MyClass` or `string | MyClass | int[]`
-
- Order is important. `int | string` is not the same as `string | int`.
-
- For example, if you have a `"1"` string, it will be parsed as an `int` if
- you use `int | string`, but as a `string` if you use `string | int`.
-
-
-### ✅ List/Array ([])
-
-- **What it is:** A collection of elements of the same type.
-- **When to use it:** When you need to store a list of items of the same type.
-- **Syntax:** `[]`
-- **Example:** `string[]` or `(int | string)[]` or `int[][]`
-
-
-
-
-Array types can be nested to create multi-dimensional arrays.
-
-An array type cannot be optional.
-
-
-
-### ❌ Dictionary
-
-- Not yet supported. Use a `class` instead.
-
-### ❌ Set
-
-- Not yet supported. Use a `List` instead.
-
-### ❌ Tuple
-
-- Not yet supported. Use a `class` instead.
-
-## Examples and Equivalents
-
-Here are some examples and what their equivalents are in different languages.
-
-### Example 1
-
-
-```baml Baml
-int?|string[]|MyClass
-```
-
-```python Python Equivalent
-Union[Optional[int], List[str], MyClass]
-```
-
-```typescript TypeScript Equivalent
-(number | null) | string[] | MyClass
-```
-
-
-
-### Example 2
-
-
-```baml Baml
-string[]
-```
-
-```python Python Equivalent
-List[str]
-```
-
-```typescript TypeScript Equivalent
-string[]
-```
-
-
-
-### Example 3
-
-
-```baml Baml
-(int|float)[]
-```
-```python Python Equivalent
-List[Union[int, float]]
-```
-
-```typescript TypeScript Equivalent
-number[]
-```
-
-
-
-### Example 4
-
-
-```baml Baml
-(int? | string[] | MyClass)[]
-```
-
-```python Python Equivalent
-List[Union[Optional[int], List[str], MyClass]]
-```
-
-```typescript TypeScript Equivalent
-((number | null) | string[] | MyClass)[]
-```
-
-