Commit f8ba4c3 (parent 182dcea) · (wip) docs: revamp conceptual docs (#1663)

* docs: revamp conceptual docs
* multi-agent draft
* typo
* Update Why LangGraph page
* update multi-agent
* Update persistence page
* update
* Clarification of graph re-playing
* update persistence
* more updates
* more updates
* Add multiple schemas section to glossary
* more updates
* Clarify I/O schema
* dynamic breakpoints
* Update agentic concepts
* Fix comments
* Update figureds
* cr
* cr
* cr

Co-authored-by: Lance Martin <[email protected]>
Co-authored-by: Harrison Chase <[email protected]>

Showing 28 changed files with 761 additions and 452 deletions.
# Why LangGraph?

LLMs are extremely powerful, particularly when connected to other systems such as a retriever or APIs. This is why many LLM applications use a control flow of steps before and / or after LLM calls. As an example, [RAG](https://github.com/langchain-ai/rag-from-scratch) retrieves documents relevant to a question and passes those documents to an LLM in order to ground the response. A control flow of steps before and / or after an LLM call is often called a "chain." Chains are a popular paradigm for programming with LLMs and offer a high degree of reliability; the same set of steps runs with each chain invocation.

However, we often want LLM systems that can pick their own control flow! This is one definition of an [agent](https://blog.langchain.dev/what-is-an-agent/): an agent is a system that uses an LLM to decide the control flow of an application. Unlike a chain, an agent gives an LLM some degree of control over the sequence of steps in the application. Examples of using an LLM to decide the control flow of an application:

- Using an LLM to route between two potential paths
- Using an LLM to decide which of many tools to call
- Using an LLM to decide whether the generated answer is sufficient or more work is needed

There are many different types of [agent architectures](https://blog.langchain.dev/what-is-a-cognitive-architecture/) to consider, which give an LLM varying levels of control. On one extreme, a router allows an LLM to select a single step from a specified set of options; on the other extreme, a fully autonomous long-running agent may have complete freedom to select any sequence of steps it wants for a given problem.

![Agent Types](img/agent_types.png)

Several concepts are utilized in many agent architectures:

- [Tool calling](agentic_concepts.md#tool-calling): this is often how LLMs make decisions
- Action taking: oftentimes, the LLM's outputs are used as the input to an action
- [Memory](agentic_concepts.md#memory): reliable systems need to have knowledge of things that occurred
- [Planning](agentic_concepts.md#planning): planning steps (either explicit or implicit) help ensure that the LLM makes decisions with the highest fidelity

## Challenges

In practice, there is often a trade-off between control and reliability. As we give LLMs more control, the application often becomes less reliable. This can be due to factors such as LLM non-determinism and / or errors in selecting the tools (or steps) that the agent uses.

![Agent Challenge](img/challenge.png)

## Core Principles

The motivation of LangGraph is to help bend the curve, preserving higher reliability as we give the agent more control over the application. We'll outline a few specific pillars of LangGraph that make it well suited for building reliable agents.

![Langgraph](img/langgraph.png)

**Controllability**

LangGraph gives the developer a high degree of [control](../how-tos/index.md#controllability) by expressing the flow of the application as a set of nodes and edges. All nodes can access and modify a common state (memory). The control flow of the application can be set using edges that connect nodes, either deterministically or via conditional logic.
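
The nodes / edges / shared-state idea can be sketched in a few lines of plain Python. This is an illustrative toy only, not the LangGraph API (where you would use `StateGraph` instead); all names here are hypothetical:

```python
# Toy sketch: nodes read/write a shared state dict; edges (possibly
# conditional on state) decide which node runs next. NOT the LangGraph API.

def draft(state):
    # Every node reads and writes the same shared state dict.
    state["draft"] = f"Answer to: {state['question']}"
    return state

def review(state):
    state["approved"] = len(state["draft"]) > 10
    return state

def route_after_review(state):
    # A conditional edge: the next node depends on the current state.
    return "END" if state["approved"] else "draft"

nodes = {"draft": draft, "review": review}
edges = {"draft": lambda s: "review", "review": route_after_review}

def run(state, entry="draft"):
    node = entry
    while node != "END":
        state = nodes[node](state)
        node = edges[node](state)  # deterministic or conditional hop
    return state

result = run({"question": "What is LangGraph?"})
print(result["approved"])  # True
```

In LangGraph proper, the same shape is expressed by registering nodes and (conditional) edges on a graph object and compiling it, but the mental model of "functions over shared state, wired by edges" is the same.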

**Persistence**

LangGraph gives the developer many options for [persisting](../how-tos/index.md#persistence) graph state using short-term or long-term (e.g., via a database) memory.
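
To make the short-term vs. long-term distinction concrete, here is a toy sketch. The class names echo LangGraph's checkpointer naming, but the implementations below are simplified illustrations, not the real library code:

```python
# Toy sketch of short-term vs long-term persistence (illustrative only).
import json
import sqlite3

class MemorySaver:
    """Short-term: checkpoints live only as long as the process."""
    def __init__(self):
        self.checkpoints = {}

    def put(self, thread_id, state):
        self.checkpoints.setdefault(thread_id, []).append(dict(state))

    def get(self, thread_id):
        return self.checkpoints[thread_id][-1]

class SqliteSaver:
    """Long-term: checkpoints in a database survive process restarts."""
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS ckpt (thread TEXT, state TEXT)")

    def put(self, thread_id, state):
        self.db.execute("INSERT INTO ckpt VALUES (?, ?)", (thread_id, json.dumps(state)))

    def get(self, thread_id):
        row = self.db.execute(
            "SELECT state FROM ckpt WHERE thread = ? ORDER BY rowid DESC", (thread_id,)
        ).fetchone()
        return json.loads(row[0])

for saver in (MemorySaver(), SqliteSaver()):
    saver.put("thread-1", {"step": 1})
    saver.put("thread-1", {"step": 2})
    print(saver.get("thread-1"))  # {'step': 2}
```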

**Human-in-the-Loop**

The persistence layer enables several different [human-in-the-loop](../how-tos/index.md#human-in-the-loop) interaction patterns with agents; for example, it's possible to pause an agent, review its state, edit its state, and approve a follow-up step.

**Streaming**

LangGraph comes with first class support for [streaming](../how-tos/index.md#streaming), which can expose state to the user (or developer) over the course of agent execution. LangGraph supports streaming of both events ([like a tool call being taken](../how-tos/stream-updates.ipynb)) as well as [tokens that an LLM may emit](../how-tos/streaming-tokens.ipynb).
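
The two streaming modes (per-step events and per-token chunks) can be illustrated with a generator. This is a toy sketch with a fake LLM, not LangGraph's actual streaming API:

```python
# Toy sketch: a generator yields an event when a node starts/ends, plus
# token-level chunks from a (fake) LLM in between. Illustrative only.

def fake_llm(prompt):
    # Stand-in for token streaming from a real model.
    for token in ["LangGraph", " streams", " tokens."]:
        yield token

def stream_run(question):
    yield {"event": "node_start", "node": "agent"}
    answer = ""
    for token in fake_llm(question):
        answer += token
        yield {"event": "token", "value": token}  # token-level streaming
    yield {"event": "node_end", "node": "agent", "output": answer}  # event-level

events = list(stream_run("What does LangGraph stream?"))
print(events[-1]["output"])  # LangGraph streams tokens.
```

A UI consuming this stream can render tokens as they arrive and show a progress indicator on each `node_start` / `node_end` event.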

## Debugging

Once you've built a graph, you often want to test and debug it. [LangGraph Studio](https://github.com/langchain-ai/langgraph-studio?tab=readme-ov-file) is a specialized IDE for visualization and debugging of LangGraph applications.

![Langgraph Studio](img/lg_studio.png)

## Deployment

Once you have confidence in your LangGraph application, you'll often want an easy path to deployment. [LangGraph Cloud](../cloud/index.md) is an opinionated, simple way to deploy LangGraph objects from the LangChain team. Of course, you can also use services like [FastAPI](https://fastapi.tiangolo.com/) and call your graph from inside the FastAPI server as you see fit.
# Human-in-the-loop

Agentic systems often require some human-in-the-loop (or "on-the-loop") interaction patterns. This is because agentic systems are still not very reliable, so having a human involved is required for any sensitive tasks/actions. These are all easily enabled in LangGraph, largely due to built-in [persistence](./persistence.md), implemented via checkpointers.

The reason a checkpointer is necessary is that a lot of these interaction patterns involve running a graph up until a certain point, waiting for some sort of human feedback, and then continuing. When you want to "continue," you will need to access the state of the graph prior to the interrupt. LangGraph persistence enables this by checkpointing the state at every superstep.

There are a few common human-in-the-loop interaction patterns we see emerging.
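
The run-until-interrupt-then-resume mechanic can be sketched in plain Python. All names below are hypothetical; LangGraph implements this via its checkpointers rather than this toy loop:

```python
# Toy sketch: checkpoint the state after every superstep, stop before a
# designated node, and resume later from the latest checkpoint.

def plan(state):
    state["plan"] = "call search tool"
    return state

def act(state):
    state["result"] = "executed: " + state["plan"]
    return state

STEPS = ["plan", "act"]
NODES = {"plan": plan, "act": act}

def run(state, checkpoints, interrupt_before=None, start=0):
    for i in range(start, len(STEPS)):
        name = STEPS[i]
        if name == interrupt_before:
            return None  # paused; latest checkpoint holds the pre-step state
        state = NODES[name](state)
        checkpoints.append((i + 1, dict(state)))  # checkpoint every superstep
    return state

ckpts = [(0, {"question": "hi"})]
paused = run(dict(ckpts[-1][1]), ckpts, interrupt_before="act")
assert paused is None              # graph stopped before the sensitive step
step, saved = ckpts[-1]            # a human can inspect the saved state here...
final = run(dict(saved), ckpts, start=step)  # ...then resume where it stopped
print(final["result"])  # executed: call search tool
```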

## Approval

![](./img/human_in_the_loop/approval.png)

A basic pattern is to have the agent wait for approval before executing certain tools. This may be all tools, or just a subset of tools. This is generally recommended for more sensitive actions (like writing to a database). This can easily be done in LangGraph by setting a [breakpoint](./low_level.md#breakpoints) before specific nodes.

See [this guide](../how-tos/human_in_the_loop/breakpoints.ipynb) for how to do this in LangGraph.

## Wait for input

![](./img/human_in_the_loop/wait_for_input.png)

A similar pattern is to have the agent wait for human input. This can be done by:

1. Create a node specifically for human input
2. Add a breakpoint before the node
3. Get user input
4. Update the state with that user input, acting as that node
5. Resume execution
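
The five steps above can be sketched as a toy loop (hypothetical names, not the LangGraph API): pause at the human node, apply the user's input as that node's state update, and resume:

```python
# Toy sketch of the wait-for-input pattern. Comments map to the steps above.

def ask_clarification(state):
    raise RuntimeError("never runs: the human supplies this node's update")

def answer(state):
    state["answer"] = f"{state['question']} -> {state['clarification']}"
    return state

STEPS = [("human", ask_clarification), ("answer", answer)]  # 1. human node

def run(state, start=0, breakpoint_at="human", human_update=None):
    for i in range(start, len(STEPS)):
        name, node = STEPS[i]
        if name == breakpoint_at:             # 2. breakpoint before the node
            if human_update is None:
                return i, state               #    paused, awaiting input
            state.update(human_update)        # 4. apply input, acting as the node
            continue                          # 5. resume execution
        state = node(state)
    return None, state

paused_at, state = run({"question": "deploy?"})     # runs up to the human node
# 3. get user input out-of-band, then resume from where we stopped:
_, state = run(state, start=paused_at, human_update={"clarification": "to prod"})
print(state["answer"])  # deploy? -> to prod
```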

See [this guide](../how-tos/human_in_the_loop/wait-user-input.ipynb) for how to do this in LangGraph.

## Edit agent actions

![](./img/human_in_the_loop/edit_graph_state.png)

This is a more advanced interaction pattern, in which the human can actually edit some of the agent's previous decisions. This can be done either during the flow (after a [breakpoint](./low_level.md#breakpoints), part of the [approval](#approval) flow) or after the fact (as part of [time-travel](#time-travel)).

See [this guide](../how-tos/human_in_the_loop/edit-graph-state.ipynb) for how to do this in LangGraph.

## Time travel

This is a pretty advanced interaction pattern, in which the human can look back at the list of previous checkpoints, find one they like, optionally [edit it](#edit-agent-actions), and then resume execution from there.

See [this guide](../how-tos/human_in_the_loop/time-travel.ipynb) for how to do this in LangGraph.
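
As a toy sketch of time travel (illustrative only, not the LangGraph API): keep a history of checkpoints, rewind to one, optionally edit it, and re-run from there:

```python
# Toy sketch: checkpoint history, rewind, edit, resume.

def step(state):
    state["count"] = state.get("count", 0) + 1
    return state

history = [{"count": 0}]
s = history[0]
for _ in range(3):                 # run three supersteps...
    s = step(dict(s))
    history.append(s)              # ...checkpointing each one

chosen = dict(history[1])          # human picks an earlier checkpoint
chosen["note"] = "edited by human" # optionally edits it
resumed = step(chosen)             # resumes execution from there
print(resumed)  # {'count': 2, 'note': 'edited by human'}
```

Note that resuming from `history[1]` forks the run: the later checkpoints (`count` 2 and 3) from the original run are untouched, and the resumed execution proceeds from the edited state.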

## Review Tool Calls

This is a specific type of human-in-the-loop interaction, but it's worth calling out because it is so common. A lot of agent decisions are made via tool calling, so having a clear UX for reviewing tool calls is handy.

A tool call consists of:

- The name of the tool to call
- Arguments to pass to the tool

Note that these tool calls can obviously be used for actually calling functions, but they can also be used for other purposes, like routing the agent in a specific direction. You will want to review the tool call for both of these use cases.

When reviewing tool calls, there are a few actions you may want to take:

1. Approve the tool call (and let the agent continue on its way)
2. Manually change the tool call, either the tool name or the tool arguments (and let the agent continue on its way after that)
3. Leave feedback on the tool call. This differs from (2) in that you are not changing the tool call directly, but rather leaving natural language feedback suggesting the LLM call it differently (or call a different tool). You could do this by either adding a `ToolMessage` and having the feedback be the result of the tool call, or by adding a `ToolMessage` (that simulates an error) and then a `HumanMessage` (with the feedback).

See [this guide](../how-tos/human_in_the_loop/review-tool-calls.ipynb) for how to do this in LangGraph.
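
The three review actions can be sketched as one function. The dict "message" shapes below are loosely modeled on LangChain's `ToolMessage` / `HumanMessage` idea but are hypothetical, not a real API:

```python
# Toy sketch of the three review actions: approve, edit, or leave feedback.

def review(tool_call, action, edit=None, feedback=None):
    if action == "approve":
        return [tool_call]              # 1. run the call unchanged
    if action == "edit":
        return [{**tool_call, **edit}]  # 2. human rewrites name/args
    if action == "feedback":
        # 3. simulate a tool "error" result, then add the human's guidance
        #    as a follow-up message for the LLM to react to.
        return [
            {"type": "tool", "tool_call_id": tool_call["id"],
             "content": "Error: call rejected by reviewer"},
            {"type": "human", "content": feedback},
        ]
    raise ValueError(f"unknown action: {action}")

call = {"id": "1", "name": "run_sql", "args": {"query": "DROP TABLE users"}}
msgs = review(call, "feedback", feedback="Never drop tables; use SELECT instead.")
print(msgs[1]["content"])  # Never drop tables; use SELECT instead.
```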