Update documentation #107

Merged 2 commits on Mar 27, 2024
burr/core/application.py (4 additions & 3 deletions)

```diff
@@ -912,13 +912,13 @@ async def astream_result(
     @telemetry.capture_function_usage
     def visualize(
         self,
-        output_file_path: Optional[str],
+        output_file_path: Optional[str] = None,
         include_conditions: bool = False,
         include_state: bool = False,
         view: bool = False,
         engine: Literal["graphviz"] = "graphviz",
         **engine_kwargs: Any,
-    ):
+    ) -> Optional["graphviz.Digraph"]:  # noqa: F821
         """Visualizes the application graph using graphviz. This will render the graph.

         :param output_file_path: The path to save this to, None if you don't want to save. Do not pass an extension
@@ -976,7 +976,8 @@ def visualize(
                 label=condition.name if include_conditions and condition is not default else None,
                 style="dashed" if transition.condition is not default else "solid",
             )
-        digraph.render(output_file_path, view=view)
+        if output_file_path:
+            digraph.render(output_file_path, view=view)
         return digraph

     @staticmethod
```
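With this change, `visualize` can be called without any path and will simply return the graph object. A minimal usage sketch (assuming `app` is an already-built Burr `Application`):

```python
# Render the graph in memory only: output_file_path now defaults to None,
# so nothing is written to disk and the graphviz.Digraph is returned.
digraph = app.visualize(include_conditions=True)

# Passing a path still writes the rendered file as before; per the docstring,
# do not include an extension in the path.
app.visualize(output_file_path="statemachine", view=False)
```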
docs/examples/agents.rst (new file, 10 additions)
====================
Agents
====================

Burr allows you to create agents that can interact with each other via State.

Multi-Agent Example
--------------------

See `github repository example <https://github.com/DAGWorks-Inc/burr/tree/main/examples/multi-agent-collaboration>`_.
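For a feel of the pattern, here is a hedged sketch (not the linked example's actual code) of two hypothetical agents that coordinate purely by reading and writing shared state:

```python
from typing import Tuple

from burr.core import State
from burr.core.action import action


@action(reads=["messages"], writes=["messages"])
def researcher(state: State) -> Tuple[dict, State]:
    # Hypothetical agent: contributes findings to the shared message log.
    note = "researcher: here is what I found"
    return {"note": note}, state.append(messages=note)


@action(reads=["messages"], writes=["messages"])
def writer(state: State) -> Tuple[dict, State]:
    # Hypothetical agent: reads everything so far and drafts a reply.
    note = f"writer: drafting from {len(state['messages'])} prior messages"
    return {"note": note}, state.append(messages=note)
```

Each agent only sees what is in `State`, so wiring them into a Burr application gives you tracked, introspectable hand-offs between agents.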
docs/examples/chatbot.rst (20 additions & 1 deletion)

```diff
@@ -1,5 +1,24 @@
 ================
-GPT-like chatbot
+Chatbots
 ================
+
+Chat bots are a simple example where state influences the conversation. This is a
+perfect use case for using Burr.
+
+GPT-like chatbot
+----------------

 See `github repository example <https://github.com/DAGWorks-Inc/burr/tree/main/examples/gpt>`_.
+
+
+Conversational RAG chatbot
+--------------------------
+See `github example <https://github.com/DAGWorks-Inc/burr/tree/main/examples/conversational_rag>`_.
+
+Accompanying video walkthrough:
+
+.. raw:: html
+
+    <div>
+        <iframe width="800" height="455" src="https://www.youtube.com/embed/t54DCiOH270?si=QpPNs7m2t0L0V8Va" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
+    </div>
```
docs/examples/index.rst (1 addition)

```diff
@@ -12,5 +12,6 @@ Examples of more complex/powerful use-cases of Burr. Download/copy these to adap
 .. toctree::
     simple
     chatbot
+    agents
     ml_training
     simulation
```
examples/blog_post/README.md (new file, 15 additions)
Example accompanying the [introductory blog post](https://blog.dagworks.io/p/burr-develop-stateful-ai-applications).

## 🏃Quick start

```bash
pip install "burr[start]" jupyter
```

Run the notebook:

```bash
jupyter notebook
```

Then open `blog.ipynb` and run the cells.
examples/conversational_rag/README.md (new file, 40 additions)

# Conversational RAG with memory
This example demonstrates how to build a conversational RAG agent with "memory".

The "memory" here is stored in state, which Burr can then help you track,
manage, and introspect.
The setup of this example is as follows:

1. You have some initial "documents", i.e. knowledge.
2. We bootstrap a vector store with these documents.
3. We then have a pipeline that runs a RAG query against the vector store.
4. We hook everything together with Burr, which manages the state
of the conversation and asks for user inputs (sketched below).
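
To make the shape of that flow concrete, here is a hedged, self-contained sketch (the `retrieve` function and its naive keyword matching are stand-ins, not this example's actual vector store or LLM call):

```python
# Hypothetical mini-version of the flow above: "retrieve" stands in for the
# vector store, and the LLM call is elided; only the state handling is shown.
documents = ["harrison worked at kensho", "stefan likes tacos"]
chat_history: list[str] = []


def retrieve(question: str) -> list[str]:
    # Stand-in retrieval: naive keyword overlap instead of vector similarity.
    words = question.lower().replace("?", "").split()
    return [d for d in documents if any(w in d for w in words)]


question = "who worked at kensho?"
context = retrieve(question)  # step 3: query the "vector store"
chat_history.append(f"Human: {question}")  # step 4: state Burr would track
chat_history.append(f"AI: answer grounded in {context}")
```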

To run this example, install Burr and the necessary dependencies:

```bash
pip install "burr[start]" -r requirements.txt
```

Then run the server in the background:

```bash
burr
```

Make sure you have an `OPENAI_API_KEY` set in your environment.

Then run:
```bash
python application.py
```

You'll then have a text terminal where you can interact. Type `exit` to stop.

## Video Walkthrough via Notebook
Watch the video walkthrough in the notebook (1.5x+ speed recommended):
<div>
<iframe width="800" height="455" src="https://www.youtube.com/embed/t54DCiOH270?si=QpPNs7m2t0L0V8Va" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div>
examples/conversational_rag/application.py (31 additions & 18 deletions)

```diff
@@ -8,6 +8,7 @@
 from burr.core.action import action
 from burr.lifecycle import LifecycleAdapter, PostRunStepHook, PreRunStepHook

+# create the pipeline
 conversational_rag = dataflows.import_module("conversational_rag")
 conversational_rag_driver = (
     driver.Builder()
@@ -17,6 +18,11 @@
 )


+def bootstrap_vector_db(rag_driver: driver.Driver, input_texts: List[str]) -> object:
+    """Bootstrap the vector database with some input texts."""
+    return rag_driver.execute(["vector_store"], inputs={"input_texts": input_texts})["vector_store"]
+
+
 class PrintStepHook(PostRunStepHook, PreRunStepHook):
     """Custom hook to print the action/result after each step."""

@@ -27,24 +33,28 @@ def pre_run_step(self, action: Action, **future_kwargs):
         print("⏳Processing input from user...")

     def post_run_step(self, *, state: "State", action: Action, result: dict, **future_kwargs):
+        if action.name == "human_converse":
+            print("🎙💬", result["question"], "\n")
         if action.name == "ai_converse":
-            print("💬", result["conversational_rag_response"], "\n")
+            print("🤖💬", result["conversational_rag_response"], "\n")


 @action(
-    reads=["input_texts", "question", "chat_history"],
+    reads=["question", "chat_history"],
     writes=["chat_history"],
 )
-def ai_converse(state: State) -> Tuple[dict, State]:
-    """AI conversing step. This calls out to an API on the Hamilton hub (hub.dagworks.io)
-    to do basic RAG"""
+def ai_converse(state: State, vector_store: object) -> Tuple[dict, State]:
+    """AI conversing step. Uses Hamilton to execute the conversational pipeline."""
     result = conversational_rag_driver.execute(
         ["conversational_rag_response"],
         inputs={
-            "input_texts": state["input_texts"],
             "question": state["question"],
             "chat_history": state["chat_history"],
         },
+        # we use overrides here because we want to pass in the vector store
+        overrides={
+            "vector_store": vector_store,
+        },
     )
     new_history = f"AI: {result['conversational_rag_response']}"
     return result, state.append(chat_history=new_history)
@@ -55,7 +65,7 @@ def ai_converse(state: State) -> Tuple[dict, State]:
     writes=["question", "chat_history"],
 )
 def human_converse(state: State, user_question: str) -> Tuple[dict, State]:
-    """Human converse step -- this simply massages the state to be the right shape"""
+    """Human converse step -- make sure we get input, and store it as state."""
     state = state.update(question=user_question).append(chat_history=f"Human: {user_question}")
     return {"question": user_question}, state

@@ -65,26 +75,29 @@ def application(
     storage_dir: Optional[str] = "~/.burr",
     hooks: Optional[List[LifecycleAdapter]] = None,
 ) -> Application:
+    # our initial knowledge base
+    input_text = [
+        "harrison worked at kensho",
+        "stefan worked at Stitch Fix",
+        "stefan likes tacos",
+        "elijah worked at TwoSigma",
+        "elijah likes mango",
+        "stefan used to work at IBM",
+        "elijah likes to go biking",
+        "stefan likes to bake sourdough",
+    ]
+    vector_store = bootstrap_vector_db(conversational_rag_driver, input_text)
     app = (
         ApplicationBuilder()
         .with_state(
             **{
-                "input_texts": [
-                    "harrison worked at kensho",
-                    "stefan worked at Stitch Fix",
-                    "stefan likes tacos",
-                    "elijah worked at TwoSigma",
-                    "elijah likes mango",
-                    "stefan used to work at IBM",
-                    "elijah likes to go biking",
-                    "stefan likes to bake sourdough",
-                ],
                 "question": "",
                 "chat_history": [],
             }
         )
         .with_actions(
-            ai_converse=ai_converse,
+            # bind the vector store to the AI conversational step
+            ai_converse=ai_converse.bind(vector_store=vector_store),
             human_converse=human_converse,
             terminal=burr.core.Result("chat_history"),
         )
```
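To see how these pieces run together, here is a hedged driver-loop sketch (it assumes `Application.run` accepts `halt_after` and `inputs` and returns an `(action, result, state)` tuple, as in other Burr examples; it is not part of this diff):

```python
# Hypothetical driver loop: each turn runs human_converse (consuming the
# user_question input) and then ai_converse (which has the vector store
# bound), halting after the AI responds.
app = application()
while True:
    user_question = input("Ask a question (type 'exit' to quit): ")
    if user_question == "exit":
        break
    last_action, result, state = app.run(
        halt_after=["ai_converse"],
        inputs={"user_question": user_question},
    )
```

The `.bind(vector_store=vector_store)` call above is what lets `ai_converse` receive the bootstrapped vector store without threading it through `State` on every step.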