add llamaindex integration #1809
Merged · +6,043 −0

24 commits (all by LawyZheng):
- `043fb64` init llama index integration
- `954e31d` add llama index skyvern client
- `d279b5c` fix mypy
- `e1bedf7` use one interface to choose v1 or v2 task
- `7afbe18` update tool interface
- `e7af348` update README
- `74cff9d` update version
- `5f1cf02` update interface
- `3384b89` revert langchain readme change
- `ba8f157` update version
- `2f69ac5` update readme
- `1ea267b` revert langchain README update
- `2e774c7` update client interface
- `a4dfc47` update client interface
- `118e94f` update agent interface
- `7f622c4` fix bug
- `9cb7581` fix bug
- `7e550f5` update docstring and fix bug
- `4a745c4` update README
- `a1748db` update README
- `79a525b` update interface
- `dd05233` update README
- `273ccb3` update README
- `449fe00` update version
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*

- [Skyvern LlamaIndex](#skyvern-llamaindex)
  - [Installation](#installation)
  - [Basic Usage](#basic-usage)
    - [Run a task(sync) locally in your local environment](#run-a-tasksync-locally-in-your-local-environment)
    - [Run a task(async) locally in your local environment](#run-a-taskasync-locally-in-your-local-environment)
    - [Get a task locally in your local environment](#get-a-task-locally-in-your-local-environment)
    - [Run a task(sync) by calling skyvern APIs](#run-a-tasksync-by-calling-skyvern-apis)
    - [Run a task(async) by calling skyvern APIs](#run-a-taskasync-by-calling-skyvern-apis)
    - [Get a task by calling skyvern APIs](#get-a-task-by-calling-skyvern-apis)
  - [Advanced Usage](#advanced-usage)
    - [Dispatch a task(async) locally in your local environment and wait until the task is finished](#dispatch-a-taskasync-locally-in-your-local-environment-and-wait-until-the-task-is-finished)
    - [Dispatch a task(async) by calling skyvern APIs and wait until the task is finished](#dispatch-a-taskasync-by-calling-skyvern-apis-and-wait-until-the-task-is-finished)

<!-- END doctoc generated TOC please keep comment here to allow auto update -->
# Skyvern LlamaIndex

This is a LlamaIndex integration for Skyvern.

## Installation

```bash
pip install skyvern-llamaindex
```
## Basic Usage

### Run a task(sync) locally in your local environment
> A sync task won't return until it is finished.

:warning: :warning: If you want to run this code block, you need to run `skyvern init --openai-api-key <your_openai_api_key>` in your terminal to set up Skyvern first.

```python
from dotenv import load_dotenv
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI
from skyvern_llamaindex.agent import SkyvernTool

# load the OpenAI API key from .env
load_dotenv()

skyvern_tool = SkyvernTool()

agent = OpenAIAgent.from_tools(
    tools=[skyvern_tool.run_task()],
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
)

response = agent.chat("Run a task with Skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.'")
print(response)
```
### Run a task(async) locally in your local environment
> An async task returns immediately, and the task keeps running in the background.

:warning: :warning: If you run the task in the background, you need to keep the agent running until the task is finished; otherwise the task will be killed when the agent finishes the chat.

:warning: :warning: If you want to run this code block, you need to run `skyvern init --openai-api-key <your_openai_api_key>` in your terminal to set up Skyvern first.

```python
import asyncio
from dotenv import load_dotenv
from llama_index.agent.openai import OpenAIAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI
from skyvern_llamaindex.agent import SkyvernTool

# load the OpenAI API key from .env
load_dotenv()

async def sleep(seconds: int) -> str:
    await asyncio.sleep(seconds)
    return f"Slept for {seconds} seconds"

# define a sleep tool to keep the agent running until the task is finished
sleep_tool = FunctionTool.from_defaults(
    async_fn=sleep,
    description="Sleep for a given number of seconds",
    name="sleep",
)

skyvern_tool = SkyvernTool()

agent = OpenAIAgent.from_tools(
    tools=[skyvern_tool.dispatch_task(), sleep_tool],
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
)

response = agent.chat("Run a task with Skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.' Then, sleep for 10 minutes.")
print(response)
```
### Get a task locally in your local environment

:warning: :warning: If you want to run this code block, you need to run `skyvern init --openai-api-key <your_openai_api_key>` in your terminal to set up Skyvern first.

```python
from dotenv import load_dotenv
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI
from skyvern_llamaindex.agent import SkyvernTool

# load the OpenAI API key from .env
load_dotenv()

skyvern_tool = SkyvernTool()

agent = OpenAIAgent.from_tools(
    tools=[skyvern_tool.get_task()],
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
)

response = agent.chat("Get the task information with Skyvern. The task id is '<task_id>'.")
print(response)
```
### Run a task(sync) by calling skyvern APIs
> A sync task won't return until it is finished.

There is no need to run the `skyvern init` command before using this integration, since the task runs through the Skyvern APIs.

```python
from dotenv import load_dotenv
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI
from skyvern_llamaindex.client import SkyvernTool

# load the OpenAI API key from .env
load_dotenv()

skyvern_tool = SkyvernTool(api_key="<your_organization_api_key>")
# or you can load the api_key from SKYVERN_API_KEY in .env
# skyvern_tool = SkyvernTool()

agent = OpenAIAgent.from_tools(
    tools=[skyvern_tool.run_task()],
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
)

response = agent.chat("Run a task with Skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.'")
print(response)
```
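For reference, a minimal `.env` for the API-based examples might look like the sketch below. `SKYVERN_API_KEY` is the variable named in the examples above; `OPENAI_API_KEY` is the standard variable the OpenAI client reads, and both values here are placeholders:

```
# .env — picked up by load_dotenv()
OPENAI_API_KEY=<your_openai_api_key>
SKYVERN_API_KEY=<your_organization_api_key>
```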
### Run a task(async) by calling skyvern APIs
> An async task returns immediately, and the task keeps running in the background.

There is no need to run the `skyvern init` command before using this integration, since the task runs through the Skyvern APIs.

The task actually runs in the Skyvern cloud service, so you don't need to keep your agent running until the task is finished.

```python
from dotenv import load_dotenv
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI
from skyvern_llamaindex.client import SkyvernTool

# load the OpenAI API key from .env
load_dotenv()

skyvern_tool = SkyvernTool(api_key="<your_organization_api_key>")
# or you can load the api_key from SKYVERN_API_KEY in .env
# skyvern_tool = SkyvernTool()

agent = OpenAIAgent.from_tools(
    tools=[skyvern_tool.dispatch_task()],
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
)

response = agent.chat("Run a task with Skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.'")
print(response)
```
### Get a task by calling skyvern APIs

There is no need to run the `skyvern init` command before using this integration, since the task information is fetched through the Skyvern APIs.

```python
from dotenv import load_dotenv
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI
from skyvern_llamaindex.client import SkyvernTool

# load the OpenAI API key from .env
load_dotenv()

skyvern_tool = SkyvernTool(api_key="<your_organization_api_key>")
# or you can load the api_key from SKYVERN_API_KEY in .env
# skyvern_tool = SkyvernTool()

agent = OpenAIAgent.from_tools(
    tools=[skyvern_tool.get_task()],
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
)

response = agent.chat("Get the task information with Skyvern. The task id is '<task_id>'.")
print(response)
```
## Advanced Usage

This section shows some examples of combining the Skyvern tools with other llama-index tools in the same agent.

### Dispatch a task(async) locally in your local environment and wait until the task is finished
> A dispatched task returns immediately and keeps running in the background. You can use the `get_task` tool to poll the task information until the task is finished.

:warning: :warning: If you want to run this code block, you need to run `skyvern init --openai-api-key <your_openai_api_key>` in your terminal to set up Skyvern first.
```python
import asyncio
from dotenv import load_dotenv
from llama_index.agent.openai import OpenAIAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI
from skyvern_llamaindex.agent import SkyvernTool

# load the OpenAI API key from .env
load_dotenv()

async def sleep(seconds: int) -> str:
    await asyncio.sleep(seconds)
    return f"Slept for {seconds} seconds"

sleep_tool = FunctionTool.from_defaults(
    async_fn=sleep,
    description="Sleep for a given number of seconds",
    name="sleep",
)

skyvern_tool = SkyvernTool()

agent = OpenAIAgent.from_tools(
    tools=[skyvern_tool.dispatch_task(), skyvern_tool.get_task(), sleep_tool],
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
    max_function_calls=10,
)

response = agent.chat("Run a task with Skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.' Then, check the task information every 60 seconds until the task is completed.")
print(response)
```
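The dispatch-then-poll loop the agent performs with `get_task` and `sleep` can be sketched outside the agent as a plain asyncio loop. This is a minimal illustration of the pattern, not part of the skyvern-llamaindex API; `fake_get_task` is a hypothetical stand-in for a real call that fetches task status:

```python
import asyncio

async def poll_until_complete(fetch_status, interval_s: float = 60.0, max_polls: int = 10) -> str:
    """Poll fetch_status() until it reports 'completed' or the polling budget runs out."""
    for _ in range(max_polls):
        status = await fetch_status()
        if status == "completed":
            return status
        await asyncio.sleep(interval_s)
    raise TimeoutError("task did not finish within the polling budget")

# Demo with a stub that reports 'running' twice before completing.
calls = {"n": 0}

async def fake_get_task() -> str:
    calls["n"] += 1
    return "completed" if calls["n"] >= 3 else "running"

# interval_s=0 so the demo finishes instantly; a real poll would use ~60s
final_status = asyncio.run(poll_until_complete(fake_get_task, interval_s=0))
print(final_status)  # completed
```

In the agent examples, the `sleep` tool plays the role of `asyncio.sleep` here, and `max_function_calls=10` caps the loop the same way `max_polls` does.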
### Dispatch a task(async) by calling skyvern APIs and wait until the task is finished
> A dispatched task returns immediately and keeps running in the background. You can use the `get_task` tool to poll the task information until the task is finished.

There is no need to run the `skyvern init` command before using this integration, since the task runs through the Skyvern APIs.

```python
import asyncio
from dotenv import load_dotenv
from llama_index.agent.openai import OpenAIAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI
from skyvern_llamaindex.client import SkyvernTool

# load the OpenAI API key from .env
load_dotenv()

async def sleep(seconds: int) -> str:
    await asyncio.sleep(seconds)
    return f"Slept for {seconds} seconds"

sleep_tool = FunctionTool.from_defaults(
    async_fn=sleep,
    description="Sleep for a given number of seconds",
    name="sleep",
)

skyvern_tool = SkyvernTool(api_key="<your_organization_api_key>")
# or you can load the api_key from SKYVERN_API_KEY in .env
# skyvern_tool = SkyvernTool()

agent = OpenAIAgent.from_tools(
    tools=[skyvern_tool.dispatch_task(), skyvern_tool.get_task(), sleep_tool],
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
    max_function_calls=10,
)

response = agent.chat("Run a task with Skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.' Then, check the task information every 60 seconds until the task is completed.")
print(response)
```
Review comment:

> It appears that the model parameter is set to `gpt-4o` in several code examples. Please check whether this is a typographical error and whether it should be corrected to `gpt-4` or the appropriate model name.