Adds README to multi-agent-collaboration example #110

Merged
merged 1 commit on Mar 27, 2024
66 changes: 66 additions & 0 deletions examples/multi-agent-collaboration/README.md
@@ -0,0 +1,66 @@
# Multi-Agent Collaboration

This example resembles the one in the following LangGraph [cookbook](https://github.com/langchain-ai/langgraph/blob/main/examples/multi_agent/multi-agent-collaboration.ipynb).

There are three implementations:

1. `hamilton_application.py` -- this uses [Hamilton](https://github.com/dagworks-inc/hamilton) inside the actions (see the sketch after this list).
2. `lecl_application.py` -- this uses LangChain's LCEL inside the actions.
3. `application.py` -- this tries to simplify the graph by having tool calling happen inside the actions.
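
To give a rough idea of what "Hamilton inside the actions" means, here is a minimal sketch of the pattern used in `application.py`. The input names and values below are illustrative, not copied from the file:

```python
from hamilton import driver

import func_agent  # the Hamilton module defining the LLM / tool-calling pipeline

# Build the Hamilton DAG once; each Burr action then executes it per step
# to decide which tool to call and with which arguments.
tool_dag = driver.Builder().with_modules(func_agent).build()

result = tool_dag.execute(
    ["executed_tool_calls", "parsed_tool_calls", "llm_function_message"],
    inputs={
        # Illustrative input names -- see application.py for the real ones.
        "user_query": "Fetch the UK's GDP over the past 5 years",
        "messages": [],
        "tools": [],
        "system_message": "You are a research agent.",
    },
)
```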

# `hamilton_application.py` vs `lecl_application.py`

- They should be functionally equivalent, except that LangChain uses deprecated OpenAI tool constructs under the hood, while Hamilton uses the non-deprecated function-calling constructs (see the request-shape sketch below).
- Compare the two implementations to see how the code differs; the Burr code, however, doesn't change.
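
For context, the non-deprecated request shape that `func_agent.py` targets is the OpenAI `tools` / `tool_choice` interface. The snippet below is a hand-written sketch, not code from this repo; the model name and tool schema are illustrative:

```python
import openai

client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; see func_agent.py for the model actually used
    messages=[{"role": "user", "content": "What's the weather in San Francisco?"}],
    tools=[  # the newer `tools` parameter, not the deprecated `functions` one
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather for a location.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {"type": "string"},
                        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                    },
                    "required": ["location"],
                },
            },
        }
    ],
    tool_choice="auto",
)
print(response.choices[0].message.tool_calls)
```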

## Show me the prompts
With Hamilton, the prompts can be found directly in `func_agent.py`.
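
For instance, the base system prompt is assembled by a plain function in that module. The excerpt below mirrors the `func_agent.py` diff further down; the tail of the prompt is truncated there, so it is elided here as well:

```python
def base_system_prompt(tool_names: list[str], system_message: str) -> str:
    """Creates the base system prompt for the pipeline."""
    return (
        "You are a helpful AI assistant, collaborating with other assistants."
        " Use the provided tools to progress towards answering the question."
        # ... the remainder of the prompt (presumably weaving in the tool names
        # and the per-agent system_message) lives in func_agent.py.
    )
```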

With LangChain, that's harder: you'll need to dive into its internals to see what ends up being sent.

# Tracing
You'll see that both `hamilton_application.py` and `lecl_application.py`
have some lightweight tracing set up. This is a simple way to plug into Burr's
tracer functionality, which lets you see more detail in the Burr UI.
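
As a rough illustration of what plugging into the tracer can look like, here is a minimal sketch based on Burr's `__tracer` injection. It is not lifted from these files, and the action name, span name, and state fields are made up:

```python
from burr.core import State, action


@action(reads=["query"], writes=["messages"])
def traced_step(state: State, __tracer) -> tuple[dict, State]:
    # Anything inside this span shows up nested under the action in the Burr UI.
    with __tracer("call_llm"):
        result = {"content": f"echo: {state['query']}"}  # stand-in for real work
    return result, state.append(messages=result)
```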

More functionality is on the roadmap!

# Running the example

Install the dependencies:

```bash
pip install "burr[start]" -r requirements.txt
```

Make sure you have the API keys in your environment:

```bash
export OPENAI_API_KEY=YOUR_KEY
export TAVILY_API_KEY=YOUR_KEY
```


To run the example, you can do:

```bash
python hamilton_application.py
```
Application run:
![hamilton image](hamilton-multi-agent-v2.png)

or
```bash
python lecl_application.py
```
Application run:
![lcel image](lcel-multi-agent.png)

or
```bash
python application.py
```
Application run:
![simpler hamilton image](hamilton-multi-agent.png)
38 changes: 34 additions & 4 deletions examples/multi-agent-collaboration/application.py
@@ -10,7 +10,9 @@
from burr.tracking import client as burr_tclient

# Initialize some things needed for tools.
# see func_agent.py for the code for this pipeline
tool_dag = driver.Builder().with_modules(func_agent).build()
# Will run code provided by the LLM.
repl = PythonREPL()


@@ -30,7 +32,14 @@ def python_repl(code: str) -> dict:

@action(reads=["query", "messages"], writes=["messages", "next_hop"])
def chart_generator(state: State) -> tuple[dict, State]:
"""
Generates a chart based on the provided query and updates the state.

:param state: The current state of the application.
:return: A tuple containing the result of the tool execution and the updated state.
"""
query = state["query"]
# see func_agent.py for the code for this pipeline
result = tool_dag.execute(
["executed_tool_calls", "parsed_tool_calls", "llm_function_message"],
inputs={
@@ -63,7 +72,14 @@ def chart_generator(state: State) -> tuple[dict, State]:

@action(reads=["query", "messages"], writes=["messages", "next_hop"])
def researcher(state: State) -> tuple[dict, State]:
"""
Performs research based on the provided query and updates the state.

:param state: The current state of the application.
:return: A tuple containing the result of the tool execution and the updated state.
"""
query = state["query"]
# see func_agent.py for the code for this pipeline
result = tool_dag.execute(
["executed_tool_calls", "parsed_tool_calls", "llm_function_message"],
inputs={
@@ -92,16 +108,20 @@ def researcher(state: State) -> tuple[dict, State]:

@action(reads=[], writes=[])
def terminal_step(state: State) -> tuple[dict, State]:
"""One can express a terminal action like this."""
return {}, state


class PrintStepHook(PostRunStepHook):
"""Example of a post run step hook that prints the state and action."""

def post_run_step(self, *, state: "State", action: "Action", **future_kwargs):
print("action=====\n", action)
print("state======\n", state)


def default_state_and_entry_point() -> tuple[dict, str]:
"""Default state and entry point for the application."""
return {
"messages": [],
"query": "Fetch the UK's GDP over the past 5 years,"
@@ -112,6 +132,11 @@ def default_state_and_entry_point() -> tuple[dict, str]:


def main(app_instance_id: str = None):
"""
Builds the application and runs it.

:param app_instance_id: The ID of the application instance to load the state from.
"""
project_name = "demo:hamilton-multi-agent"
if app_instance_id:
state, entry_point = burr_tclient.LocalTrackingClient.load_state(
@@ -138,12 +163,12 @@ def main(app_instance_id: str = None):
"researcher",
"terminal",
core.expr("next_hop == 'complete'"),
), # core.expr("'FINAL ANSWER' in messages[-1]['content']")),
),
(
"chart_generator",
"terminal",
core.expr("next_hop == 'complete'"),
), # core.expr("'FINAL ANSWER' in messages[-1]['content']")),
),
)
.with_entrypoint(entry_point)
.with_hooks(PrintStepHook())
@@ -157,8 +182,13 @@ def main(app_instance_id: str = None):


if __name__ == "__main__":
# Add an app_id to restart from last sequence in that state
# e.g. fine the ID in the UI and then put it in here "app_4d1618d2-79d1-4d89-8e3f-70c216c71e63"
"""
The entry point of the application.

If an app_id is provided, the application will restart from the last
sequence in that state.
E.g. find the ID in the UI and then put it here: "app_4d1618d2-79d1-4d89-8e3f-70c216c71e63"
"""
_app_id = None
main(_app_id)

35 changes: 30 additions & 5 deletions examples/multi-agent-collaboration/func_agent.py
@@ -1,3 +1,8 @@
"""
Hamilton module that defines the pipeline for
hitting an LLM model and asking it what to do.
"""

import inspect
import json
from typing import Callable
@@ -11,18 +16,29 @@ def llm_client() -> openai.OpenAI:


def tool_names(tool_function_specs: list[dict]) -> list[str]:
"""Get the names of the tools from the tool function specs."""
return [tool["function"]["name"] for tool in tool_function_specs]


def _langchain_tool_spec(tool: Callable) -> dict:
"""Converts a tool to a langchain tool spec."""
t = convert_to_openai_function(tool)
# print(t)
return t


def _tool_function_spec(tool: Callable) -> dict:
"""Converts a python function into a specification for function calling.

This is a little hacky, but it works.

It takes a function, introspects it, and returns a spec.

:param tool: the function (or LangChain tool) to build a spec for.
:return: a dictionary matching the OpenAI function-calling spec.
"""
# TODO: maybe just get people to wrap any external tool in a function
# to make it clear WTF is going on.
# to make it clear what is going on.
if hasattr(tool, "name") and hasattr(tool, "description") and hasattr(tool, "args_schema"):
return {"type": "function", "function": _langchain_tool_spec(tool)}
func_sig = inspect.signature(tool)
@@ -83,10 +99,12 @@ def _tool_function_spec(tool: Callable) -> dict:


def tool_function_specs(tools: list[Callable]) -> list[dict]:
"""Converts a list of tools into a list of tool function specs."""
return [_tool_function_spec(tool) for tool in tools]


def base_system_prompt(tool_names: list[str], system_message: str) -> str:
"""Creates the base system prompt for the pipeline."""
return (
"You are a helpful AI assistant, collaborating with other assistants."
" Use the provided tools to progress towards answering the question."
@@ -118,6 +136,13 @@ def get_current_weather(location: str, unit: str = "fahrenheit") -> str:


def message_history(base_system_prompt: str, user_query: str, messages: list[dict]) -> list[dict]:
"""Creates the message history for the LLM model.

:param base_system_prompt: the system prompt to start the history with.
:param user_query: the user's query.
:param messages: prior messages in the conversation.
:return: the message history to send to the LLM.
"""
base = [
{"role": "system", "content": base_system_prompt},
{"role": "user", "content": user_query},
@@ -152,21 +177,20 @@ def llm_function_response(
tool_choice="auto",
)
return response
# tool_calls = response_message.tool_calls
# return response.choices[0].message.content


def llm_function_message(
llm_function_response: openai.types.chat.chat_completion.ChatCompletion,
) -> dict:
"""Parses the LLM response message. Does extra parsing for tool invocations."""
response_message = llm_function_response.choices[0].message
if response_message.tool_calls:
return {
"role": response_message.role,
"content": None,
"tool_calls": [
{
"id": t.uid,
"id": t.id,
"type": "function",
"function": {"name": t.function.name, "arguments": t.function.arguments},
}
@@ -193,7 +217,7 @@ def parsed_tool_calls(
if tool_calls:
for tool_call in tool_calls:
func_call = {
"id": tool_call.uid,
"id": tool_call.id,
"function_name": tool_call.function.name,
"function_args": tool_call.function.arguments,
}
@@ -244,6 +268,7 @@ def executed_tool_calls(


if __name__ == "__main__":
# some code to test a few things.
jspec = _tool_function_spec(get_current_weather)
import pprint
