diff --git a/docs/docs/concepts/human_in_the_loop.md b/docs/docs/concepts/human_in_the_loop.md
index 493d8f149..ac02d57c2 100644
--- a/docs/docs/concepts/human_in_the_loop.md
+++ b/docs/docs/concepts/human_in_the_loop.md
@@ -13,31 +13,61 @@ A **human-in-the-loop** (or "on-the-loop") workflow integrates human input into
Key use cases for **human-in-the-loop** workflows in LLM-based applications include:
-1. **🛠️ [Reviewing tool calls](#review-and-edit)**: Humans can review, edit, or approve tool calls requested by the LLM before tool execution.
+1. **🛠️ Reviewing tool calls**: Humans can review, edit, or approve tool calls requested by the LLM before tool execution.
2. **✅ Validating LLM outputs**: Humans can review, edit, or approve content generated by the LLM.
3. **💡 Providing context**: Enable the LLM to explicitly request human input for clarification or additional details or to support multi-turn conversations.
+## `interrupt`
+
+The [`interrupt` function][langgraph.types.interrupt] in LangGraph enables human-in-the-loop workflows by pausing the graph at a specific node, presenting information to a human, and resuming the graph with their input. It is useful for tasks like approvals, edits, or collecting additional input. `interrupt` is used in conjunction with the [`Command`](../reference/types.md#langgraph.types.Command) object, which resumes the graph with a value provided by the human.
+
+```python
+from langgraph.types import interrupt, Command
+
+def human_node(state: State):
+    value = interrupt(
+        # Any JSON serializable value to surface to the human.
+        # For example, a question or a piece of text or a set of keys in the state
+        some_data
+    )
+    ...
+    # Update the state with the human's input or route the graph based on the input.
+    ...
+
+# Run the graph and hit the breakpoint
+thread_config = {"configurable": {"thread_id": "some_id"}}
+graph.invoke(some_input, config=thread_config)
+
+# Resume the graph with the human's input
+graph.invoke(Command(resume=value_from_human), config=thread_config)
+```
+
+Please read the [Breakpoints](breakpoints.md) guide for more information on using the `interrupt` function.
+
## Design Patterns
-1. **Approval**: Pause the graph before a critical step, such as an API call, to review and approve the action. If the action is rejected, you can prevent the graph from executing the step, and potentially take an alternative action.
-2. **Editing**: Pause the graph to review and edit the agent's state. This is useful for correcting mistakes or updating the agent's state.
+There are typically three things you might want to do when you interrupt a graph:
+
+1. **Approval/Rejection**: Pause the graph before a critical step, such as an API call, to review and approve the action. If the action is rejected, you can prevent the graph from executing the step, and potentially take an alternative action. This pattern often involves **routing** the graph based on the human's input.
+2. **Editing**: Pause the graph to review and edit the graph state. This is useful for correcting mistakes or updating the state with additional information. This pattern often involves **updating** the state with the human's input.
3. **Input**: Explicitly request human input at a particular step in the graph. This is useful for collecting additional information or context to inform the agent's decision-making process or for supporting **multi-turn conversations**.
-### Approval
+### Approve or Reject
-Pause the graph before a critical step, such as an API call, to review and approve the action. If the action is rejected, you can prevent the graph from executing the step, and potentially take an alternative action.
+Pause the graph before a critical step, such as an API call, to review and approve the action. If the action is rejected, you can prevent the graph from executing the step, and potentially take an alternative action.
```python
-from langgraph.types import interrupt
-def human_approval(state: State):
- ...
+from typing import Literal
+from langgraph.types import interrupt, Command
+
+def human_approval(state: State) -> Command[Literal["some_node", "another_node"]]:
    is_approved = interrupt(
        {
            "question": "Is this correct?",
@@ -48,79 +78,106 @@ def human_approval(state: State):
    )
    if is_approved:
- # Proceed with the action
- ...
+ return Command(goto="some_node")
else:
- # Do something else
- ...
+ return Command(goto="another_node")
# Add the node to the graph in an appropriate location
# and connect it to the relevant nodes.
graph_builder.add_node("human_approval", human_approval)
graph = graph_builder.compile(checkpointer=checkpointer)
-...
-
# After running the graph and hitting the breakpoint, the graph will pause.
# Resume it with either an approval or rejection.
thread_config = {"configurable": {"thread_id": "some_id"}}
graph.invoke(Command(resume=True), config=thread_config)
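+# To reject instead, resume with a falsy value; the node then routes
+# to the alternative branch, e.g.:
+# graph.invoke(Command(resume=False), config=thread_config)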
```
-
-### Edit
+### Review & Edit State
+```python
+from langgraph.types import interrupt, Command
+
+def human_editing(state: State):
+    ...
+    result = interrupt(
+        # Interrupt information to surface to the client.
+        # Can be any JSON serializable value.
+        {
+            "task": "Review the output from the LLM and make any necessary edits.",
+            "llm_generated_summary": state["llm_generated_summary"]
+        }
+    )
-=== "Review tool calls"
+    # Update the state with the edited text
+    return {
+        "llm_generated_summary": result["edited_text"]
+    }
- TODO: Create an example for tool call review.
+# Add the node to the graph in an appropriate location
+# and connect it to the relevant nodes.
+graph_builder.add_node("human_editing", human_editing)
+graph = graph_builder.compile(checkpointer=checkpointer)
+...
-=== "Review text output from the LLM and make any necessary edits."
+# After running the graph and hitting the breakpoint, the graph will pause.
+# Resume it with the edited text.
+thread_config = {"configurable": {"thread_id": "some_id"}}
+graph.invoke(
+ Command(resume={"edited_text": "The edited text"}),
+ config=thread_config
+)
+```
- ```python
- from langgraph.types import interrupt
+### Review Tool Call
- def human_editing(state: State):
- ...
- result = interrupt(
- # Interrupt information to surface to the client.
- # Can be any JSON serializable value.
- {
- "task": "Review the output from the LLM and make any necessary edits.",
- "llm_output": state["llm_output"]
- }
- )
+
- # Update the state with the edited text
- return {
- "llm_output": result["edited_text"]
+```python
+from typing import Literal
+from langgraph.types import interrupt, Command
+
+def human_review_node(state) -> Command[Literal["call_llm", "run_tool"]]:
+    # The tool call to review, e.g. the last tool call on the most recent AI message
+    tool_call = state["messages"][-1].tool_calls[-1]
+
+    # This is the value we'll provide via Command(resume=<human_review>)
+    human_review = interrupt(
+        {
+            "question": "Is this correct?",
+            # Surface the tool call for review
+            "tool_call": tool_call
        }
+    )
- # Add the node to the graph in an appropriate location
- # and connect it to the relevant nodes.
- graph_builder.add_node("human_editing", human_editing)
- graph = graph_builder.compile(checkpointer=checkpointer)
+    review_action, review_data = human_review
- ...
+    # Approve the tool call and continue
+    if review_action == "approve":
+        return Command(goto="run_tool")
- # After running the graph and hitting the breakpoint, the graph will pause.
- # Resume it with the edited text.
- thread_config = {"configurable": {"thread_id": "some_id"}}
- graph.invoke(
- Command(resume={"edited_text": "The edited text"}),
- config=thread_config
- )
- ```
+    # Modify the tool call manually and then continue
+    elif review_action == "update":
+        ...
+        updated_msg = get_updated_msg(review_data)
+        # Remember that to modify an existing message you will need
+        # to pass a message with a matching ID.
+        return Command(goto="run_tool", update={"messages": [updated_msg]})
+
+    # Give natural language feedback, and then pass that back to the agent
+    elif review_action == "feedback":
+        ...
+        feedback_msg = get_feedback_msg(review_data)
+        return Command(goto="call_llm", update={"messages": [feedback_msg]})
+```
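+
+To resume after this interrupt, provide a value matching the `(review_action, review_data)` pair that `human_review_node` unpacks. A minimal sketch, assuming the graph compiled above and using hypothetical values:
+
+```python
+thread_config = {"configurable": {"thread_id": "some_id"}}
+
+# Approve the tool call as-is and route to `run_tool`
+graph.invoke(Command(resume=("approve", None)), config=thread_config)
+
+# Or send natural language feedback back to the LLM instead:
+# graph.invoke(Command(resume=("feedback", "Use metric units.")), config=thread_config)
+```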
-### Multi-turn conversation (Input)
+### Multi-turn conversation