Commit

New docs

hellovai committed Jun 18, 2024
1 parent cc1626e commit 05356d6

Showing 14 changed files with 326 additions and 82 deletions.
4 changes: 2 additions & 2 deletions docs/docs/guides/boundary_studio/tracing-tagging.mdx
@@ -29,10 +29,10 @@ async def pre_process_text(text):

@trace
async def full_analysis(book: Book):
sentiment = await baml.ClassifySentiment.get_impl("v1").run(
sentiment = await baml.ClassifySentiment(
pre_process_text(book.content)
)
book_analysis = await baml.AnalyzeBook.get_impl("v1").run(book)
book_analysis = await baml.AnalyzeBook(book)
return book_analysis


27 changes: 15 additions & 12 deletions docs/docs/guides/overview.mdx
@@ -11,40 +11,43 @@ Ping us on [Discord](https://discord.gg/BTNBeXGuaS) if you have any questions!
<div style={{marginBottom: "20px"}}>
<h2>Testing</h2>
<ul>
<li><a href="/v3/how-to/testing/unit_test">Test an AI function</a></li>
<li><a href="/v3/how-to/testing/test_with_assertions">Evaluate results with assertions or using LLM Evals</a></li>
<li><a href="/docs/guides/testing/unit_test">Test an AI function</a></li>
<li><a href="/docs/guides/testing/test_with_assertions">Evaluate results with assertions or using LLM Evals</a></li>
</ul>
</div>
<div style={{marginBottom: "20px"}}>
<h2>Observability</h2>
<ul>
<li><a href="/v3/how-to/boundary_studio/tracing-tagging">Tracing and tagging functions</a></li>
<li><a href="/docs/guides/boundary_studio/tracing-tagging">Tracing and tagging functions</a></li>
</ul>
</div>
<div style={{marginBottom: "20px"}}>
<h2>Improve LLM results</h2>
<h2>Streaming</h2>
<ul>
<li><a href="/v3/how-to/improve_results/diagnose">Improve my prompt automatically</a></li>
<li><a href="/v3/how-to/improve_results/fine_tune">Fine-tune a model using my production data</a></li>
<li><a href="/docs/guides/streaming/streaming">Streaming structured data</a></li>
</ul>
</div>
</div>
<div style={{flex: "1 0 50%", maxWidth: "50%", padding: "8px"}}>
<div style={{marginBottom: "20px"}}>
<h2>Prompt engineering</h2>
<ul>
<li><a href="/v3/how-to/prompt_engineering/serialize_list">Serialize a List of chat messages into a prompt</a></li>
<li><a href="/v3/how-to/prompt_engineering/serialize_complex_input">Customize input variables</a></li>
<li><a href="/v3/how-to/prompt_engineering/conditional_rendering">Conditionally generate the prompt based on the input variables</a></li>
<li><a href="/docs/guides/prompt_engineering/chat-prompts">System vs user prompts</a></li>
</ul>
</div>
<div style={{marginBottom: "20px"}}>
<h2>Resilience / Reliability</h2>
<ul>
<li><a href="/v3/how-to/resilience/retries">Add retries to my AI function (and different retry policies).</a></li>
<li><a href="/v3/how-to/resilience/fallback">Fall-back to another model on failure</a></li>
<li><a href="/docs/guides/resilience/retries">Add retries to my AI function (and different retry policies).</a></li>
<li><a href="/docs/guides/resilience/fallback">Fall-back to another model on failure</a></li>
</ul>
</div>
<div style={{marginBottom: "20px"}}>
<h2>Improve LLM results</h2>
<ul>
<li><a href="/docs/guides/improve_results/diagnose">Improve my prompt automatically</a></li>
<li><a href="/docs/guides/improve_results/fine_tune">Fine-tune a model using my production data</a></li>
</ul>
</div>

</div>
</div>
17 changes: 5 additions & 12 deletions docs/docs/guides/testing/test_with_assertions.mdx
@@ -11,7 +11,6 @@ To add assertions to your tests, or add more complex testing scenarios, you can
```python test_file.py
from baml_client import baml as b
from baml_client.types import Email
from baml_client.testing import baml_test
import pytest

# Run `poetry run pytest -m baml_test` in this directory.
@@ -55,21 +54,15 @@ enum ProfessionalismRating {
BAD
}

function ValidateProfessionalism {
// The string to validate
input string
output ProfessionalismRating
}

impl<llm, ValidateProfessionalism> v1 {
function ValidateProfessionalism(input: string) -> ProfessionalismRating {
client GPT4
prompt #"
Is this text professional-sounding?
Use the following scale:
{#print_enum(ProfessionalismRating)}
{{ ctx.output_format }}
Sentence: {#input}
Sentence: {{ input }}
ProfessionalismRating:
"#
@@ -79,9 +72,9 @@
```python
from baml_client import baml as b
from baml_client.types import Email, ProfessionalismRating
from baml_client.testing import baml_test
import pytest

@baml_test
@pytest.mark.asyncio
async def test_message_professionalism():
order_info = await b.GetOrderInfo(Email(
subject="Order #1234",
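The hunk above is truncated before the assertion itself. As a rough sketch (not part of this diff), an assertion-style check against the `ValidateProfessionalism` function shown earlier might look like the following; the `GREAT` enum value and the sample sentence are assumptions used for illustration:

```python
from baml_client import baml as b
from baml_client.types import ProfessionalismRating
import pytest

@pytest.mark.asyncio
async def test_professionalism_assertion():
    # Call the BAML function defined above and assert on the returned enum.
    rating = await b.ValidateProfessionalism(
        "Dear John, your order has shipped. Thank you for your patience."
    )
    # GREAT is assumed to be one of the ProfessionalismRating variants.
    assert rating == ProfessionalismRating.GREAT
```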
2 changes: 1 addition & 1 deletion docs/docs/home/baml-in-2-min.mdx
@@ -1,4 +1,4 @@
---
title: "BAML in 2 minutes"
url: "/docs/guides/hello_world/writing-ai-functions"
url: "docs/guides/hello_world/writing-ai-functions"
---
14 changes: 3 additions & 11 deletions docs/docs/home/comparisons/marvin.mdx
@@ -77,12 +77,7 @@ enum RequestType {
INQUIRY @alias("general inquiry")
}

function ClassifyRequest {
input string
output RequestType
}

impl<llm, ClassifyRequest> {
function ClassifyRequest(input: string) -> RequestType {
client GPT4 // choose even open source models
prompt #"
You are an expert classifier that always maintains as much semantic meaning
@@ -91,11 +86,10 @@
TEXT:
---
Reset my password
{{ input }}
---
LABELS:
{#print_enum(RequestType)}
{{ ctx.output_format }}
The best label for the text is:
"#
@@ -129,5 +123,3 @@ Marvin was a big source of inspiration for us -- their approach is simple and el
BAML does have some limitations we are continuously working on. Here are a few of them:
1. It is a new language. However, it is fully open source and getting started takes less than 10 minutes. We are on-call 24/7 to help with any issues (and even provide prompt engineering tips)
1. Developing requires VSCode. You _could_ use vim and we have workarounds but we don't recommend it.
1. Explicitly defining system / and user prompts. We have worked with many customers across healthcare and finance and have not seen any issues but we will support this soon.
1. BAML does not support images. Until this is available you can definitely use BAML alongside other frameworks.
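For a side-by-side with Marvin's call style, here is a minimal sketch of invoking the `ClassifyRequest` function shown in the hunk above from Python, assuming a generated `baml_client` as described in the installation docs added in this commit:

```python
import asyncio

from baml_client import b  # generated by `baml-cli generate`
from baml_client.types import RequestType

async def classify(text: str) -> RequestType:
    # ClassifyRequest is the BAML function defined in the snippet above.
    return await b.ClassifyRequest(text)

if __name__ == "__main__":
    label = asyncio.run(classify("Reset my password"))
    print(label)  # one of the RequestType values
```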
11 changes: 4 additions & 7 deletions docs/docs/home/comparisons/pydantic.mdx
@@ -342,10 +342,7 @@ Here we use a "GPT4" client, but you can use any model. See [client docs](/docs/
</Note>
{/*
```rust
function ExtractResume {
input (resume_text: string)
output Resume
}
class Education {
school string
@@ -359,18 +356,18 @@ class Resume {
education Education[]
}
impl<llm, ExtractResume> version1 {
function ExtractResume(resume_text: string) -> Resume {
client GPT4
prompt #"
Parse the following resume and return a structured representation of the data in the schema below.
Resume:
---
{#input.resume_text}
{{ input.resume_text }}
---
Output in this JSON format:
{#print_type(output)}
{{ ctx.output_format }}
Output JSON:
"#
19 changes: 19 additions & 0 deletions docs/docs/home/demo.mdx
@@ -0,0 +1,19 @@
---
title: "Interactive Demo"
---

## Interactive playground
You can try BAML online at [Prompt Fiddle](https://www.promptfiddle.com).


## Examples built with BAML

You can find the code here: https://github.com/BoundaryML/baml-examples/tree/main/nextjs-starter


| Example | Link |
| - | - |
| Streaming Simple Objects | https://baml-examples.vercel.app/examples/stream-object |
| RAG + Citations | https://baml-examples.vercel.app/examples/rag |
| Generative UI / Streaming charts | https://baml-examples.vercel.app/examples/book-analyzer |
| Getting a recipe | https://baml-examples.vercel.app/examples/get-recipe |
94 changes: 94 additions & 0 deletions docs/docs/home/example_nextjs.mdx
@@ -0,0 +1,94 @@
---
title: "Typescript Installation"
---

Here's a sample repository:
https://github.com/BoundaryML/baml-examples/tree/main/nextjs-starter

To set up BAML in TypeScript, do the following:

<Steps>
<Step title="Install BAML VSCode Extension">
https://marketplace.visualstudio.com/items?itemName=boundary.BAML

- syntax highlighting
- testing playground
- prompt previews
</Step>
<Step title="Install baml">
<CodeGroup>
```bash npm
npm install @boundaryml/baml
```

```bash pnpm
pnpm add @boundaryml/baml
```

```bash yarn
yarn add @boundaryml/baml
```
</CodeGroup>
</Step>
<Step title="Add some starter code">
This will give you some starter BAML code in a `baml_src` directory.
<CodeGroup>
```bash npm
npx baml-cli init
```

```bash pnpm
pnpx baml-cli init
```

```bash yarn
yarn baml-cli init
```
</CodeGroup>
</Step>
<Step title="Update your package.json">

This command converts `.baml` files into `.ts` files. Every time you modify your `.baml` files,
you must re-run this command to regenerate the `baml_client` folder.

<Tip>
If you download our [VSCode extension](https://marketplace.visualstudio.com/items?itemName=Boundary.baml-extension), it will automatically generate `baml_client` on save!
</Tip>

```json package.json
{
  "scripts": {
    // Add a new command
    "baml-generate": "baml-cli generate",
    // Always call baml-generate on every build.
    "build": "npm run baml-generate && tsc --build"
  }
}
```
</Step>
<Step title="Use a baml function in typescript!">
<Tip>If `baml_client` doesn't exist, make sure to run `npm run baml-generate`</Tip>

```typescript index.ts
import {b} from "baml_client"
import type {Resume} from "baml_client/types"

async function Example(raw_resume: string): Promise<Resume> {
  // BAML's internal parser guarantees ExtractResume
  // to always return a Resume type
  const response = await b.ExtractResume(raw_resume);
  return response;
}

async function ExampleStream(raw_resume: string): Promise<Resume> {
  const stream = b.stream.ExtractResume(raw_resume);
  for await (const msg of stream) {
    console.log(msg); // This will be a Partial<Resume> type
  }

  // This is guaranteed to be a Resume type.
  return await stream.get_final_response();
}
```
</Step>
</Steps>
77 changes: 77 additions & 0 deletions docs/docs/home/example_python.mdx
@@ -0,0 +1,77 @@
---
title: "Python Installation"
---

Here's a sample repository:
https://github.com/BoundaryML/baml-examples/tree/main/python-fastapi-starter

To set up BAML in Python, do the following:

<Steps>
<Step title="Install BAML VSCode Extension">
https://marketplace.visualstudio.com/items?itemName=boundary.BAML

- syntax highlighting
- testing playground
- prompt previews

<Tip>
In your VSCode User Settings, we highly recommend adding the following to get better autocomplete for Python in general, not just BAML.

```json
{
"python.analysis.typeCheckingMode": "basic"
}
```
</Tip>
</Step>
<Step title="Install baml">
```bash
pip install baml-py
```
</Step>
<Step title="Add some starter code">
This will give you some starter BAML code in a `baml_src` directory.

```bash
baml-cli init
```
</Step>
<Step title="Generate python code from .baml files">

This command converts `.baml` files into `.py` files. Every time you modify your `.baml` files,
you must re-run this command to regenerate the `baml_client` folder.

<Tip>
If you download our [VSCode extension](https://marketplace.visualstudio.com/items?itemName=Boundary.baml-extension), it will automatically generate `baml_client` on save!
</Tip>

```bash
baml-cli generate
```
</Step>
<Step title="Use a baml function in python!">
<Tip>If `baml_client` doesn't exist, make sure to run the previous step!</Tip>

```python main.py
from baml_client import b
from baml_client.types import Resume

async def example(raw_resume: str) -> Resume:
    # BAML's internal parser guarantees ExtractResume
    # to always return a Resume type
    response = await b.ExtractResume(raw_resume)
    return response

async def example_stream(raw_resume: str) -> Resume:
    stream = b.stream.ExtractResume(raw_resume)
    async for msg in stream:
        print(msg)  # This will be a PartialResume type

    # This will be a Resume type
    final = await stream.get_final_response()

    return final
```
</Step>
</Steps>
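Because `example` and `example_stream` above are `async`, they need an event loop when called from a plain script. A minimal sketch (the resume text and the `main.py` module name are assumptions carried over from the snippet above):

```python
import asyncio

from main import example  # the main.py sketched above

if __name__ == "__main__":
    raw_resume = "Jane Doe\nSoftware Engineer at Acme Corp, 2019-2024\nB.S. Computer Science"
    resume = asyncio.run(example(raw_resume))
    print(resume)
```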