
Commit

Add agents example notebook, closes #811
davidmezzetti committed Nov 18, 2024
1 parent eb573e0 commit 5f933e5
Showing 6 changed files with 536 additions and 9 deletions.
6 changes: 6 additions & 0 deletions README.md
@@ -128,6 +128,12 @@ Agents connect embeddings, pipelines, workflows and other agents together to aut

txtai agents are built on top of the Transformers Agent framework and support all LLMs available in txtai (Hugging Face, llama.cpp, OpenAI / Claude / AWS Bedrock via LiteLLM).
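
As a minimal sketch of how that looks in code (the model path below is a placeholder assumption, not part of this commit):

```python
from txtai import Agent

# Minimal agent sketch. "websearch" loads the built-in Transformers Agent web search tool.
# The llm value is a placeholder; any LLM txtai supports (Hugging Face model path,
# llama.cpp GGUF file or LiteLLM provider string) can be used instead.
agent = Agent(
    tools=["websearch"],
    llm="your-llm-path-or-provider-string"
)

# The agent plans which tool(s) to call to answer the request
agent("What is the current weather in Washington DC?")
```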

See the link below to learn more.

| Notebook | Description | |
|:----------|:-------------|------:|
| [What's new in txtai 8.0](https://github.com/neuml/txtai/blob/master/examples/67_Whats_new_in_txtai_8_0.ipynb) | Agents with txtai | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/neuml/txtai/blob/master/examples/67_Whats_new_in_txtai_8_0.ipynb) |

#### Retrieval augmented generation

Retrieval augmented generation (RAG) reduces the risk of LLM hallucinations by constraining the output with a knowledge base as context. RAG is commonly used to "chat with your data".
6 changes: 2 additions & 4 deletions docs/agent/configuration.md
@@ -45,7 +45,7 @@ agent = Agent(
llm: string|llm instance
```
LLM pipeline instance or a LLM path. See the [LLM pipeline](../../pipeline/text/llm) for more information.
LLM path or LLM pipeline instance. See the [LLM pipeline](../../pipeline/text/llm) for more information.
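
A hedged sketch of both forms (the model paths are placeholders for illustration):

```python
from txtai import Agent, LLM

# Option 1: pass an LLM path, the agent creates the LLM pipeline internally
agent = Agent(tools=["websearch"], llm="your-llm-path")

# Option 2: pass an existing LLM pipeline instance, useful for sharing one
# loaded model across multiple agents or pipelines
llm = LLM("your-llm-path")
agent = Agent(tools=["websearch"], llm=llm)
```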
## tools
@@ -57,7 +57,7 @@ List of tools to supply to the agent. Supports the following configurations.
### function
A function tool takes the following dictionary configuration.
A function tool takes the following dictionary fields.
| Field | Description |
|:------------|:-------------------------|
@@ -77,7 +77,6 @@ Embeddings indexes have built-in support. Provide the following dictionary confi
| description | embeddings index description |
| **kwargs | Parameters to pass to [embeddings.load](../../embeddings/methods/#txtai.embeddings.Embeddings.load) |
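
A hedged sketch of an embeddings tool entry (the `name` field, the `path` keyword forwarded to `embeddings.load` and the llm value are assumptions for illustration):

```python
from txtai import Agent

agent = Agent(
    tools=[{
        "name": "wikipedia",                               # assumed tool name field
        "description": "Searches a Wikipedia embeddings index",
        "path": "neuml/txtai-wikipedia"                     # kwarg passed to embeddings.load
    }],
    llm="your-llm-path"                                     # placeholder
)
```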


### transformers

A Transformers tool instance can be provided. Additionally, the following strings load tools directly from Transformers.
@@ -86,7 +85,6 @@ A Transformers tool instance can be provided. Additionally, the following string
|:------------|:----------------------------------------------------------|
| websearch | Runs a websearch using built-in Transformers Agent tool |


## method

```yaml
18 changes: 13 additions & 5 deletions docs/agent/index.md
@@ -9,7 +9,7 @@ Agents excel at complex tasks where multiple tools and/or methods are required.
people working on the same task. When the request is simple and/or there is a rule-based process, other methods such as RAG and Workflows
should be explored.

The following code snippet shows a basic agent.
The following code snippet defines a basic agent.

```python
from datetime import datetime
@@ -47,7 +47,7 @@ agent = Agent(
)
```

The agent above has access to a two embeddings database (Wikipedia and ArXiv) and the web. Given the user's input request, the Agent decides the best tool to solve the task. In this case, a web search is executed.
The agent above has access to two embeddings databases (Wikipedia and ArXiv) and the web. Given the user's input request, the agent decides the best tool to solve the task.

## Example

@@ -57,13 +57,13 @@ The first example will solve a problem with multiple data points. See below.
agent("Which city has the highest population, Boston or New York?")
```

This requires looking up the populations of each city before knowing how to answer the question.
This requires looking up the population of each city before knowing how to answer the question. Multiple search requests are run to generate a final answer.

## Agentic RAG

Standard Retrieval Augmented Generation (RAG) runs a single vector search query to obtain a context and builds a prompt with the context + input question. Agentic RAG is a more complex process that goes through multiple iterations. It can also utilize multiple databases to come to a final conclusion.
Standard retrieval augmented generation (RAG) runs a single vector search to obtain a context and builds a prompt with the context + input question. Agentic RAG is a more complex process that goes through multiple iterations. It can also utilize multiple databases to come to a final conclusion.
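
For contrast, a minimal single-pass RAG sketch (hedged: the index and model paths are placeholders and the exact return format of the RAG pipeline may vary by version):

```python
from txtai import Embeddings
from txtai.pipeline import RAG

# Load an existing embeddings index (placeholder path)
embeddings = Embeddings()
embeddings.load("path/to/index")

prompt = """
Answer the following question using only the context below.

question: {question}
context: {context}
"""

# One vector search builds the context, one LLM call generates the answer
rag = RAG(embeddings, "your-llm-path", template=prompt)
print(rag("Which city has the highest population, Boston or New York?"))
```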

The example below aggregates information from multiple examples and builds a report on a topic.
The example below aggregates information from multiple sources and builds a report on a topic.

```python
researcher = """
@@ -138,3 +138,11 @@ concepts about Signal Processing.
Write the output in Markdown.
""")
```

## More examples

See the link below to learn more.

| Notebook | Description | |
|:----------|:-------------|------:|
| [What's new in txtai 8.0](https://github.com/neuml/txtai/blob/master/examples/67_Whats_new_in_txtai_8_0.ipynb) | Agents with txtai | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/neuml/txtai/blob/master/examples/67_Whats_new_in_txtai_8_0.ipynb) |
1 change: 1 addition & 0 deletions docs/examples.md
@@ -126,6 +126,7 @@ New functionality added in major releases.
| [What's new in txtai 4.0](https://github.com/neuml/txtai/blob/master/examples/24_Whats_new_in_txtai_4_0.ipynb) | Content storage, SQL, object storage, reindex and compressed indexes | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/neuml/txtai/blob/master/examples/24_Whats_new_in_txtai_4_0.ipynb) |
| [What's new in txtai 6.0](https://github.com/neuml/txtai/blob/master/examples/46_Whats_new_in_txtai_6_0.ipynb) | Sparse, hybrid and subindexes for embeddings, LLM improvements | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/neuml/txtai/blob/master/examples/46_Whats_new_in_txtai_6_0.ipynb) |
| [What's new in txtai 7.0](https://github.com/neuml/txtai/blob/master/examples/59_Whats_new_in_txtai_7_0.ipynb) | Semantic graph 2.0, LoRA/QLoRA training and binary API support | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/neuml/txtai/blob/master/examples/59_Whats_new_in_txtai_7_0.ipynb) |
| [What's new in txtai 8.0](https://github.com/neuml/txtai/blob/master/examples/67_Whats_new_in_txtai_8_0.ipynb) | Agents with txtai | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/neuml/txtai/blob/master/examples/67_Whats_new_in_txtai_8_0.ipynb) |

## Applications

6 changes: 6 additions & 0 deletions docs/usecases.md
@@ -44,6 +44,12 @@ Agents connect embeddings, pipelines, workflows and other agents together to aut

txtai agents are built on top of the Transformers Agent framework and support all LLMs available in txtai (Hugging Face, llama.cpp, OpenAI / Claude / AWS Bedrock via LiteLLM).

See the link below to learn more.

| Notebook | Description | |
|:----------|:-------------|------:|
| [What's new in txtai 8.0](https://github.com/neuml/txtai/blob/master/examples/67_Whats_new_in_txtai_8_0.ipynb) | Agents with txtai | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/neuml/txtai/blob/master/examples/67_Whats_new_in_txtai_8_0.ipynb) |

### Retrieval augmented generation

Retrieval augmented generation (RAG) reduces the risk of LLM hallucinations by constraining the output with a knowledge base as context. RAG is commonly used to "chat with your data".