Merge pull request #2710 from langchain-ai/eugene/more_concept_work
more concepts changes
eyurtsev authored Dec 11, 2024
2 parents 51f85ff + 6153c77 commit d468655
Showing 2 changed files with 115 additions and 52 deletions.
167 changes: 115 additions & 52 deletions docs/docs/concepts/human_in_the_loop.md
A **human-in-the-loop** (or "on-the-loop") workflow integrates human input into automated processes, allowing for decisions, validation, or corrections at key stages.

Key use cases for **human-in-the-loop** workflows in LLM-based applications include:

1. **🛠️ Reviewing tool calls**: Humans can review, edit, or approve tool calls requested by the LLM before tool execution.
2. **✅ Validating LLM outputs**: Humans can review, edit, or approve content generated by the LLM.
3. **💡 Providing context**: Enable the LLM to explicitly request human input for clarification or additional details or to support multi-turn conversations.

## `interrupt`

The [`interrupt` function][langgraph.types.interrupt] in LangGraph enables human-in-the-loop workflows by pausing the graph at a specific node, presenting information to a human, and resuming the graph with their input. This is useful for tasks like approvals, edits, or collecting additional input. `interrupt` is used in conjunction with the [`Command`](../reference/types.md#langgraph.types.Command) object to resume the graph with a value provided by the human.

```python
from langgraph.types import interrupt, Command

def human_node(state: State):
    value = interrupt(
        # Any JSON serializable value to surface to the human.
        # For example, a question or a piece of text or a set of keys in the state
        some_data
    )
    ...
    # Update the state with the human's input or route the graph based on the input.
    ...

# Run the graph and hit the breakpoint
thread_config = {"configurable": {"thread_id": "some_id"}}
graph.invoke(some_input, config=thread_config)

# Resume the graph with the human's input
graph.invoke(Command(resume=value_from_human), config=thread_config)
```

Please read the [Breakpoints](breakpoints.md) guide for more information on using the `interrupt` function.

## Design Patterns

There are typically three different things that you might want to do when you interrupt a graph:

1. **Approval/Rejection**: Pause the graph before a critical step, such as an API call, to review and approve the action. If the action is rejected, you can prevent the graph from executing the step, and potentially take an alternative action. This pattern often involves **routing** the graph based on the human's input.
2. **Editing**: Pause the graph to review and edit the graph state. This is useful for correcting mistakes or updating the state with additional information. This pattern often involves **updating** the state with the human's input.
3. **Input**: Explicitly request human input at a particular step in the graph. This is useful for collecting additional information or context to inform the agent's decision-making process or for supporting **multi-turn conversations**.


### Approve or Reject

<figure markdown="1">
![image](img/human_in_the_loop/approve-or-reject.png){: style="max-height:400px"}
<figcaption>Depending on the human's approval or rejection, the graph can proceed with the action or take an alternative path.</figcaption>
</figure>

Pause the graph before a critical step, such as an API call, to review and approve the action. If the action is rejected, you can prevent the graph from executing the step, and potentially take an alternative action.

```python
from typing import Literal
from langgraph.types import interrupt, Command

def human_approval(state: State) -> Command[Literal["some_node", "another_node"]]:
    is_approved = interrupt(
        {
            "question": "Is this correct?",
            # Surface the output that should be
            # reviewed and approved by the human.
            "llm_output": state["llm_output"]
        }
    )

    if is_approved:
        # Proceed with the action
        return Command(goto="some_node")
    else:
        # Do something else
        return Command(goto="another_node")

# Add the node to the graph in an appropriate location
# and connect it to the relevant nodes.
graph_builder.add_node("human_approval", human_approval)
graph = graph_builder.compile(checkpointer=checkpointer)

...

# After running the graph and hitting the breakpoint, the graph will pause.
# Resume it with either an approval or rejection.
thread_config = {"configurable": {"thread_id": "some_id"}}
graph.invoke(Command(resume=True), config=thread_config)
```
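
Rejection works the same way: `interrupt` returns whatever value is passed to `Command(resume=...)`, so resuming with `False` sends the node above down its alternative branch (using the same hypothetical thread config):

```python
# Resume with a rejection instead: `is_approved` is False,
# so the node routes to "another_node".
graph.invoke(Command(resume=False), config=thread_config)
```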


### Review & Edit State

<figure markdown="1">
![image](img/human_in_the_loop/edit-graph-state-simple.png){: style="max-height:400px"}
<figcaption>A human can review and edit the state of the graph. This is useful for correcting mistakes or updating the state with additional information.
</figcaption>
</figure>

```python
from langgraph.types import interrupt, Command

def human_editing(state: State):
    ...
    result = interrupt(
        # Interrupt information to surface to the client.
        # Can be any JSON serializable value.
        {
            "task": "Review the output from the LLM and make any necessary edits.",
            "llm_generated_summary": state["llm_generated_summary"]
        }
    )

    # Update the state with the edited text
    return {
        "llm_generated_summary": result["edited_text"]
    }

# Add the node to the graph in an appropriate location
# and connect it to the relevant nodes.
graph_builder.add_node("human_editing", human_editing)
graph = graph_builder.compile(checkpointer=checkpointer)

...

# After running the graph and hitting the breakpoint, the graph will pause.
# Resume it with the edited text.
thread_config = {"configurable": {"thread_id": "some_id"}}
graph.invoke(
    Command(resume={"edited_text": "The edited text"}),
    config=thread_config
)
```
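
The snippet above assumes a `State` schema that contains an `llm_generated_summary` key. A minimal definition might look like the following (illustrative only; the guide itself leaves `State` undefined):

```python
from typing import TypedDict

class State(TypedDict):
    llm_generated_summary: str
```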

### Review Tool Call

<figure markdown="1">
![image](img/human_in_the_loop/tool-call-review.png){: style="max-height:400px"}
<figcaption>A human can review and edit the output from the LLM before proceeding. This is particularly
critical in applications where the tool calls requested by the LLM may be sensitive or require human oversight.
</figcaption>
</figure>

```python
from typing import Literal
from langgraph.types import interrupt, Command

def human_review_node(state) -> Command[Literal["call_llm", "run_tool"]]:
    # This is the value we'll be providing via Command(resume=<human_review>)
    human_review = interrupt(
        {
            "question": "Is this correct?",
            # Surface tool calls for review
            "tool_call": tool_call
        }
    )

    review_action, review_data = human_review

    # Approve the tool call and continue
    if review_action == "approve":
        return Command(goto="run_tool")

    # Modify the tool call manually and then continue
    elif review_action == "update":
        ...
        updated_msg = get_updated_msg(review_data)
        # Remember that to modify an existing message you will need
        # to pass the message with a matching ID.
        return Command(goto="run_tool", update={"messages": [updated_msg]})

    # Give natural language feedback, and then pass that back to the agent
    elif review_action == "feedback":
        ...
        feedback_msg = get_feedback_msg(review_data)
        return Command(goto="call_llm", update={"messages": [feedback_msg]})
```
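
Since `human_review` is unpacked into an action and a payload, one way for the client to resume this node is with a two-element value such as an `(action, data)` pair. This is a sketch under that assumption; the exact shape of the resume payload is up to your application.

```python
thread_config = {"configurable": {"thread_id": "some_id"}}

# Approve the tool call as-is and let the graph run the tool.
graph.invoke(Command(resume=("approve", None)), config=thread_config)

# Or send natural-language feedback back to the LLM instead.
graph.invoke(Command(resume=("feedback", "Please use the user's billing address.")), config=thread_config)
```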

### Multi-turn conversation

<figure markdown="1">
![image](img/human_in_the_loop/multi-turn-conversation.png){: style="max-height:400px"}
<figcaption>An agent and a human take turns: the graph pauses at a human node to collect input, then routes back to the agent until the conversation is complete.</figcaption>
</figure>

```python
from langgraph.types import interrupt, Command

def human_input(state: State):
    # Pause the graph and wait for the human's next message.
    human_message = interrupt("human_input")
    # Add the human's message to the conversation state.
    return {
        "messages": [
            {"role": "human", "content": human_message}
        ]
    }

def agent(state: State):
    # Agent logic
    ...

graph_builder.add_node("human_input", human_input)
graph_builder.add_edge("human_input", "agent")
graph = graph_builder.compile(checkpointer=checkpointer)

# After running the graph and hitting the breakpoint, the graph will pause.
# Resume it with the human's input.
thread_config = {"configurable": {"thread_id": "some_id"}}
graph.invoke(
    Command(resume="hello!"),
    config=thread_config
)
```
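
One possible driver loop for such a multi-turn conversation (a sketch that assumes the graph compiled above and a simple console client): keep resuming the graph with the next human message for as long as it is paused waiting for input.

```python
thread_config = {"configurable": {"thread_id": "some_id"}}

result = graph.invoke(some_input, config=thread_config)
# `next` is non-empty while the graph is paused at a node (e.g. "human_input").
while graph.get_state(thread_config).next:
    user_reply = input("Your reply: ")
    result = graph.invoke(Command(resume=user_reply), config=thread_config)
```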

## Best practices

* Use the [`interrupt`](breakpoints.md#the-interrupt-function) function to set breakpoints and collect user input.
* Use [`Command`](breakpoints.md#the-command-primitive) to resume execution and control the graph state.
* Consider putting all side effects (e.g., API calls) after the `interrupt` to prevent duplication (see the sketch after this list).
* Understand [how resuming from a breakpoint works](breakpoints.md#how-does-resuming-from-a-breakpoint-work) to avoid common gotchas.
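
Below is a minimal sketch of the side-effect guideline. It assumes a hypothetical `send_email` helper and a `State` with an `email` key (neither appears in this guide): the email is sent only after `interrupt` returns, so the side effect is not repeated when the node re-executes on resume.

```python
from langgraph.types import interrupt

def confirm_and_send(state: State):
    # Ask the human first; the node pauses here until the graph is resumed.
    approved = interrupt({
        "question": "Send this email?",
        "email": state["email"],  # assumed state key, for illustration only
    })
    # Side effect placed *after* the interrupt: it runs only once the human
    # has responded, so it is not duplicated when this node re-runs on resume.
    if approved:
        send_email(state["email"])  # hypothetical helper
    return state
```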

## Additional Resources 📚

- [**Conceptual Guide: Persistence**](persistence.md#replay): Read the persistence guide for more context on replaying.