diff --git a/README.md b/README.md index 5498dbc7..3ee255ad 100644 --- a/README.md +++ b/README.md @@ -26,119 +26,179 @@

-# What is Promptulate? -`Promptulate AI` focuses on building a developer platform for large language model applications, dedicated to providing developers and businesses with the ability to build, extend, and evaluate large language model applications. `Promptulate` is a large language model automation and application development framework under `Promptulate AI`, designed to help developers build industry-level large model applications at a lower cost. It includes most of the common components for application layer development in the LLM field, such as external tool components, model components, Agent intelligent agents, external data source integration modules, data storage modules, and lifecycle modules. With `Promptulate`, you can easily build your own LLM applications. +## Overview + +**Promptulate** is an AI Agent application development framework crafted by **Cogit Lab**. It offers developers an extremely concise and efficient way to build Agent applications through a Pythonic development paradigm. The core philosophy of Promptulate is to borrow and integrate the wisdom of the open-source community, incorporating the highlights of various development frameworks to lower the barrier to entry and unify the consensus among developers. With Promptulate, you can manipulate components such as LLM, Agent, Tool, and RAG with the most succinct code, as most tasks can be easily completed with just a few lines of code. 🚀 + +## 💡 Features + +- 🐍 Pythonic Code Style: Embraces the habits of Python developers, providing a Pythonic SDK calling approach in which a single `pne.chat` function encapsulates all essential functionality. +- 🧠 Model Compatibility: Supports nearly all types of large models on the market and allows for easy customization to meet specific needs. +- 🕵️‍♂️ Diverse Agents: Offers various types of Agents, such as WebAgent, ToolAgent, and CodeAgent, capable of planning, reasoning, and acting to handle complex problems. +- 🔗 Low-Cost Integration: Effortlessly integrates tools from different frameworks like LangChain, significantly reducing integration costs. +- 🔨 Functions as Tools: Converts any Python function directly into a tool usable by Agents, simplifying tool creation and usage (see the sketch below). +- 🪝 Lifecycle and Hooks: Provides a wealth of Hooks and comprehensive lifecycle management, allowing the insertion of custom code at various stages of Agents, Tools, and LLMs. +- 💻 Terminal Integration: Easily integrates application terminals, with built-in client support, offering rapid debugging capabilities for prompts. +- ⏱️ Prompt Caching: Offers a caching mechanism for LLM Prompts to reduce repetitive work and enhance development efficiency. + +> Below, `pne` is used as a nickname for Promptulate: `p` and `e` are the first and last letters of Promptulate, and `n` stands for the nine letters in between.
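+
+As a sketch of the Functions as Tools feature above: the `get_weather` helper below is a hypothetical function invented purely for illustration, and the example assumes that `pne.chat` accepts plain Python functions through the same `tools` parameter used for framework tools later in this README.
+
+```python
+import promptulate as pne
+
+
+def get_weather(city: str) -> str:
+    """Hypothetical helper: return a mock weather report for a city."""
+    return f"The weather in {city} is sunny."
+
+
+# Assuming pne.chat converts a plain Python function into an Agent tool,
+# per the "Functions as Tools" feature described above.
+resp: str = pne.chat(
+    model="gpt-4-1106-preview",
+    messages=[{"content": "How is the weather in Shanghai?", "role": "user"}],
+    tools=[get_weather],
+)
+print(resp)
+```
+
+Passing the function itself keeps tool definitions next to ordinary Python code instead of requiring a separate tool class.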
+ +## Supported Base Models + +Promptulate integrates the capabilities of [litellm](https://github.com/BerriAI/litellm), supporting nearly all types of large models on the market, including but not limited to the following models: + +| Provider | [Completion](https://docs.litellm.ai/docs/#basic-usage) | [Streaming](https://docs.litellm.ai/docs/completion/stream#streaming-responses) | [Async Completion](https://docs.litellm.ai/docs/completion/stream#async-completion) | [Async Streaming](https://docs.litellm.ai/docs/completion/stream#async-streaming) | [Async Embedding](https://docs.litellm.ai/docs/embedding/supported_embedding) | [Async Image Generation](https://docs.litellm.ai/docs/image_generation) | +| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | +| [openai](https://docs.litellm.ai/docs/providers/openai) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | +| [azure](https://docs.litellm.ai/docs/providers/azure) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | +| [aws - sagemaker](https://docs.litellm.ai/docs/providers/aws_sagemaker) | ✅ | ✅ | ✅ | ✅ | ✅ | +| [aws - bedrock](https://docs.litellm.ai/docs/providers/bedrock) | ✅ | ✅ | ✅ | ✅ |✅ | +| [google - vertex_ai [Gemini]](https://docs.litellm.ai/docs/providers/vertex) | ✅ | ✅ | ✅ | ✅ | +| [google - palm](https://docs.litellm.ai/docs/providers/palm) | ✅ | ✅ | ✅ | ✅ | +| [google AI Studio - gemini](https://docs.litellm.ai/docs/providers/gemini) | ✅ | | ✅ | | | +| [mistral ai api](https://docs.litellm.ai/docs/providers/mistral) | ✅ | ✅ | ✅ | ✅ | ✅ | +| [cloudflare AI Workers](https://docs.litellm.ai/docs/providers/cloudflare_workers) | ✅ | ✅ | ✅ | ✅ | +| [cohere](https://docs.litellm.ai/docs/providers/cohere) | ✅ | ✅ | ✅ | ✅ | ✅ | +| [anthropic](https://docs.litellm.ai/docs/providers/anthropic) | ✅ | ✅ | ✅ | ✅ | +| [huggingface](https://docs.litellm.ai/docs/providers/huggingface) | ✅ | ✅ | ✅ | ✅ | ✅ | +| [replicate](https://docs.litellm.ai/docs/providers/replicate) | ✅ | ✅ | ✅ | ✅ | +| [together_ai](https://docs.litellm.ai/docs/providers/togetherai) | ✅ | ✅ | ✅ | ✅ | +| [openrouter](https://docs.litellm.ai/docs/providers/openrouter) | ✅ | ✅ | ✅ | ✅ | +| [ai21](https://docs.litellm.ai/docs/providers/ai21) | ✅ | ✅ | ✅ | ✅ | +| [baseten](https://docs.litellm.ai/docs/providers/baseten) | ✅ | ✅ | ✅ | ✅ | +| [vllm](https://docs.litellm.ai/docs/providers/vllm) | ✅ | ✅ | ✅ | ✅ | +| [nlp_cloud](https://docs.litellm.ai/docs/providers/nlp_cloud) | ✅ | ✅ | ✅ | ✅ | +| [aleph alpha](https://docs.litellm.ai/docs/providers/aleph_alpha) | ✅ | ✅ | ✅ | ✅ | +| [petals](https://docs.litellm.ai/docs/providers/petals) | ✅ | ✅ | ✅ | ✅ | +| [ollama](https://docs.litellm.ai/docs/providers/ollama) | ✅ | ✅ | ✅ | ✅ | +| [deepinfra](https://docs.litellm.ai/docs/providers/deepinfra) | ✅ | ✅ | ✅ | ✅ | +| [perplexity-ai](https://docs.litellm.ai/docs/providers/perplexity) | ✅ | ✅ | ✅ | ✅ | +| [Groq AI](https://docs.litellm.ai/docs/providers/groq) | ✅ | ✅ | ✅ | ✅ | +| [anyscale](https://docs.litellm.ai/docs/providers/anyscale) | ✅ | ✅ | ✅ | ✅ | +| [voyage ai](https://docs.litellm.ai/docs/providers/voyage) | | | | | ✅ | +| [xinference [Xorbits Inference]](https://docs.litellm.ai/docs/providers/xinference) | | | | | ✅ | + +For more details, please visit the [litellm documentation](https://docs.litellm.ai/docs/providers). 
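+
+Streaming is available through the same entry point. The snippet below is a minimal, hedged sketch: it assumes `pne.chat` forwards a litellm-style `stream=True` flag (streaming support is listed in the table above, but the flag is not demonstrated elsewhere in this README), in which case the call returns an iterator of incremental chunks rather than a complete string.
+
+```python
+import promptulate as pne
+
+# Assuming a litellm-style `stream=True` flag is supported, the call
+# yields incremental chunks instead of one full response string.
+response = pne.chat(
+    model="gpt-4-1106-preview",
+    messages=[{"content": "Tell me a joke.", "role": "user"}],
+    stream=True,
+)
+for chunk in response:
+    print(chunk, end="")
+```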
+ +You can easily call any third-party model using the following method: -# Envisage -To create a powerful and flexible LLM application development platform for creating autonomous agents that can automate various tasks and applications, `Promptulate` implements an automated AI platform through six components: Core AI Engine, Agent System, APIs and Tools Provider, Multimodal Processing, Knowledge Base, and Task-specific Modules. The Core AI Engine is the core component of the framework, responsible for processing and understanding various inputs, generating outputs, and making decisions. The Agent System is a module that provides high-level guidance and control over AI agent behavior. The APIs and Tools Provider offers APIs and integration libraries for interacting with tools and services. Multimodal Processing is a set of modules for processing and understanding different data types, such as text, images, audio, and video, using deep learning models to extract meaningful information from different data modalities. The Knowledge Base is a large structured knowledge repository for storing and organizing world information, enabling AI agents to access and reason about a vast amount of knowledge. The Task-specific Modules are a set of modules specifically designed to perform specific tasks, such as sentiment analysis, machine translation, or object detection. By combining these components, the framework provides a comprehensive, flexible, and powerful platform for automating various complex tasks and applications. - - -# Features +```python +import promptulate as pne -- Large language model support: Support for various types of large language models through extensible interfaces. -- Dialogue terminal: Provides a simple dialogue terminal for direct interaction with large language models. -- Role presets: Provides preset roles for invoking GPT from different perspectives. -- Long conversation mode: Supports long conversation chat and persistence in multiple ways. -- External tools: Integrated external tool capabilities for powerful functions such as web search and executing Python code. -- KEY pool: Provides an API key pool to completely solve the key rate limiting problem. -- Intelligent agent: Integrates advanced agents such as ReAct and self-ask, empowering LLM with external tools. -- Autonomous agent mode: Supports calling official API interfaces, autonomous agents, or using agents provided by Promptulate. -- Chinese optimization: Specifically optimized for the Chinese context, more suitable for Chinese scenarios. -- Data export: Supports dialogue export in formats such as markdown. -- Hooks and lifecycles: Provides Agent, Tool, and LLM lifecycles and hook systems. -- Advanced abstraction: Supports plugin extensions, storage extensions, and large language model extensions.
+resp: str = pne.chat(model="ollama/llama2", messages=[{"content": "Hello, how are you?", "role": "user"}]) +``` -# Quick Start +## 📗 Related Documentation -- [Quick Start/Official Documentation](https://undertone0809.github.io/promptulate/#/) +- [Getting Started/Official Documentation](https://undertone0809.github.io/promptulate/#/) - [Current Development Plan](https://undertone0809.github.io/promptulate/#/other/plan) -- [Contribution/Developer's Guide](https://undertone0809.github.io/promptulate/#/other/contribution) -- [FAQ](https://undertone0809.github.io/promptulate/#/other/faq) +- [Contributing/Developer's Manual](https://undertone0809.github.io/promptulate/#/other/contribution) +- [Frequently Asked Questions](https://undertone0809.github.io/promptulate/#/other/faq) - [PyPI Repository](https://pypi.org/project/promptulate/) -To install the framework, open the terminal and run the following command: +## 🛠 Quick Start + +- Open the terminal and enter the following command to install the framework: ```shell script pip install -U promptulate ``` -> Your Python version should be 3.8 or higher. +> Note: Your Python version should be 3.8 or higher. -Get started with your "HelloWorld" using the simple program below: +Robust output formatting is a fundamental basis for LLM application development. We hope that LLMs can return stable data. With pne, you can easily perform formatted output. In the following example, we use Pydantic's BaseModel to encapsulate a data structure that needs to be returned. ```python -import os +from typing import List import promptulate as pne +from pydantic import BaseModel, Field -os.environ['OPENAI_API_KEY'] = "your-key" +class LLMResponse(BaseModel): + provinces: List[str] = Field(description="List of provinces' names") -agent = pne.WebAgent() -answer = agent.run("What is the temperature tomorrow in Shanghai") -print(answer) +resp: LLMResponse = pne.chat("Please tell me all provinces in China.", output_schema=LLMResponse) +print(resp) ``` -``` -The temperature tomorrow in Shanghai is expected to be 23°C. -``` +**Output:** -> Most of the time, we refer to template as pne, where p and e represent the words that start and end template, and n represents 9, which is the abbreviation of the nine words between p and e. +```text +provinces=['Anhui', 'Fujian', 'Gansu', 'Guangdong', 'Guizhou', 'Hainan', 'Hebei', 'Heilongjiang', 'Henan', 'Hubei', 'Hunan', 'Jiangsu', 'Jiangxi', 'Jilin', 'Liaoning', 'Qinghai', 'Shaanxi', 'Shandong', 'Shanxi', 'Sichuan', 'Yunnan', 'Zhejiang', 'Taiwan', 'Guangxi', 'Nei Mongol', 'Ningxia', 'Xinjiang', 'Xizang', 'Beijing', 'Chongqing', 'Shanghai', 'Tianjin', 'Hong Kong', 'Macao'] +``` -To integrate a variety of external tools, including web search, calculators, and more, into your LLM Agent application, you can use the promptulate library alongside langchain. The langchain library allows you to build a ToolAgent with a collection of tools, such as an image generator based on OpenAI's DALL-E model. +Additionally, influenced by the [Plan-and-Solve](https://arxiv.org/abs/2305.04091) paper, pne also allows developers to build Agents capable of dealing with complex problems through planning, reasoning, and action. The Agent's planning abilities can be activated using the `enable_plan` parameter. 
-Below is an example of how to use the promptulate and langchain libraries to create an image from a text description: +![plan-and-execute.png](./docs/images/plan-and-execute.png) -> You need to set the `OPENAI_API_KEY` environment variable to your OpenAI API key. Click [here](https://undertone0809.github.io/promptulate/#/modules/tools/langchain_tool_usage?id=langchain-tool-usage) to see the detail. +In this example, we use [Tavily](https://app.tavily.com/) as the search engine, which is a powerful tool for searching information on the web. To use Tavily, you need to obtain an API key from Tavily. ```python -import promptulate as pne -from langchain.agents import load_tools +import os -tools: list = load_tools(["dalle-image-generator"]) -agent = pne.ToolAgent(tools=tools) -output = agent.run("Create an image of a halloween night at a haunted museum") +os.environ["TAVILY_API_KEY"] = "your_tavily_api_key" +os.environ["OPENAI_API_KEY"] = "your_openai_api_key" ``` -output: +In this case, we are using the TavilySearchResults Tool wrapped by LangChain. -```text -Here is the generated image: [![Halloween Night at a Haunted Museum](https://oaidalleapiprodscus.blob.core.windows.net/private/org-OyRC1wqD0EP6oWMS2n4kZgVi/user-JWA0mHqDqYh3oPpQtXbWUPgu/img-SH09tWkWZLJVltxifLi6jFy7.png)] +```python +from langchain_community.tools.tavily_search import TavilySearchResults + +tools = [TavilySearchResults(max_results=5)] ``` -![Halloween Night at a Haunted Museum](./docs/images/dall-e-gen.png) +```python +import promptulate as pne -For more detailed information, please refer to the [Quick Start/Official Documentation](https://undertone0809.github.io/promptulate/#/). +pne.chat("what is the hometown of the 2024 Australia open winner?", model="gpt-4-1106-preview", enable_plan=True) +``` -# Architecture +**Output:** -Currently, `promptulate` is in the rapid development stage and there are still many aspects that need to be improved and discussed. Your participation and discussions are highly welcome. As a large language model automation and application development framework, `promptulate` mainly consists of the following components: +```text +[Agent] Assistant Agent start... +[User instruction] what is the hometown of the 2024 Australia open winner? +[Plan] {"goals": ["Find the hometown of the 2024 Australian Open winner"], "tasks": [{"task_id": 1, "description": "Identify the winner of the 2024 Australian Open."}, {"task_id": 2, "description": "Research the identified winner to find their place of birth or hometown."}, {"task_id": 3, "description": "Record the hometown of the 2024 Australian Open winner."}], "next_task_id": 1} +[Agent] Tool Agent start... +[User instruction] Identify the winner of the 2024 Australian Open. +[Thought] Since the current date is March 26, 2024, and the Australian Open typically takes place in January, the event has likely concluded for the year. To identify the winner, I should use the Tavily search tool to find the most recent information on the 2024 Australian Open winner. 
+[Action] tavily_search_results_json args: {'query': '2024 Australian Open winner'} +[Observation] [{'url': 'https://ausopen.com/articles/news/sinner-winner-italian-takes-first-major-ao-2024', 'content': 'The agile right-hander, who had claimed victory from a two-set deficit only once previously in his young career, is the second Italian man to achieve singles glory at a major, following Adriano Panatta in1976.With victories over Andrey Rublev, 10-time AO champion Novak Djokovic, and Medvedev, the Italian is the youngest player to defeat top 5 opponents in the final three matches of a major since Michael Stich did it at Wimbledon in 1991 – just weeks before Sinner was born.\n He saved the only break he faced with an ace down the tee, and helped by scoreboard pressure, broke Medvedev by slamming a huge forehand to force an error from his more experienced rival, sealing the fourth set to take the final to a decider.\n Sensing a shift in momentum as Medvedev served to close out the second at 5-3, Sinner set the RLA crowd alight with a pair of brilliant passing shots en route to creating a break point opportunity, which Medvedev snuffed out with trademark patience, drawing a forehand error from his opponent. “We are trying to get better every day, even during the tournament we try to get stronger, trying to understand every situation a little bit better, and I’m so glad to have you there supporting me, understanding me, which sometimes it’s not easy because I am a little bit young sometimes,” he said with a smile.\n Medvedev, who held to love in his first three service games of the second set, piled pressure on the Italian, forcing the right-hander to produce his best tennis to save four break points in a nearly 12-minute second game.\n'}, {'url': 'https://www.cbssports.com/tennis/news/australian-open-2024-jannik-sinner-claims-first-grand-slam-title-in-epic-comeback-win-over-daniil-medvedev/', 'content': '"\nOur Latest Tennis Stories\nSinner makes epic comeback to win Australian Open\nSinner, Sabalenka win Australian Open singles titles\n2024 Australian Open odds, Sinner vs. Medvedev picks\nSabalenka defeats Zheng to win 2024 Australian Open\n2024 Australian Open odds, Sabalenka vs. Zheng picks\n2024 Australian Open odds, Medvedev vs. Zverev picks\nAustralian Open odds: Djokovic vs. Sinner picks, bets\nAustralian Open odds: Gauff vs. Sabalenka picks, bets\nAustralian Open odds: Zheng vs. Yastremska picks, bets\nNick Kyrgios reveals he\'s contemplating retirement\n© 2004-2024 CBS Interactive. Jannik Sinner claims first Grand Slam title in epic comeback win over Daniil Medvedev\nSinner, 22, rallied back from a two-set deficit to become the third ever Italian Grand Slam men\'s singles champion\nAfter almost four hours, Jannik Sinner climbed back from a two-set deficit to win his first ever Grand Slam title with an epic 3-6, 3-6, 6-4, 6-4, 6-3 comeback victory against Daniil Medvedev. 
Sinner became the first Italian man to win the Australian Open since 1976, and just the eighth man to successfully come back from two sets down in a major final.\n He did not drop a single set until his meeting with Djokovic, and that win in itself was an accomplishment as Djokovic was riding a 33-match winning streak at the Australian Open and had never lost a semifinal in Melbourne.\n @janniksin • @wwos • @espn • @eurosport • @wowowtennis pic.twitter.com/DTCIqWoUoR\n"We are trying to get better everyday, and even during the tournament, trying to get stronger and understand the situation a little bit better," Sinner said.'}, {'url': 'https://www.bbc.com/sport/tennis/68120937', 'content': 'Live scores, results and order of play\nAlerts: Get tennis news sent to your phone\nRelated Topics\nTop Stories\nFA Cup: Blackburn Rovers v Wrexham - live text commentary\nRussian skater Valieva given four-year ban for doping\nLinks to Barcelona are \'totally untrue\' - Arteta\nElsewhere on the BBC\nThe truth behind the fake grooming scandal\nFeaturing unseen police footage and interviews with the officers at the heart of the case\nDid their father and uncle kill Nazi war criminals?\n A real-life murder mystery following three brothers in their quest for the truth\nWhat was it like to travel on the fastest plane?\nTake a behind-the-scenes look at the supersonic story of the Concorde\nToxic love, ruthless ambition and shocking betrayal\nTell Me Lies follows a passionate college relationship with unimaginable consequences...\n "\nMarathon man Medvedev runs out of steam\nMedvedev is the first player to lose two Grand Slam finals after winning the opening two sets\nSo many players with the experience of a Grand Slam final have talked about how different the occasion can be, particularly if it is the first time, and potentially overwhelming.\n Jannik Sinner beats Daniil Medvedev in Melbourne final\nJannik Sinner is the youngest player to win the Australian Open men\'s title since Novak Djokovic in 2008\nJannik Sinner landed the Grand Slam title he has long promised with an extraordinary fightback to beat Daniil Medvedev in the Australian Open final.\n "\nSinner starts 2024 in inspired form\nSinner won the first Australian Open men\'s final since 2005 which did not feature Roger Federer, Rafael Nadal or Novak Djokovic\nSinner was brought to the forefront of conversation when discussing Grand Slam champions in 2024 following a stunning end to last season.\n'}] +[Execute Result] {'thought': "The search results have provided consistent information about the winner of the 2024 Australian Open. Jannik Sinner is mentioned as the winner in multiple sources, which confirms the answer to the user's question.", 'action_name': 'finish', 'action_parameters': {'content': 'Jannik Sinner won the 2024 Australian Open.'}} +[Execute] Execute End. +[Revised Plan] {"goals": ["Find the hometown of the 2024 Australian Open winner"], "tasks": [{"task_id": 2, "description": "Research Jannik Sinner to find his place of birth or hometown."}, {"task_id": 3, "description": "Record the hometown of Jannik Sinner, the 2024 Australian Open winner."}], "next_task_id": 2} +[Agent] Tool Agent start... +[User instruction] Research Jannik Sinner to find his place of birth or hometown. +[Thought] To find Jannik Sinner's place of birth or hometown, I should use the search tool to find the most recent and accurate information. 
+[Action] tavily_search_results_json args: {'query': 'Jannik Sinner place of birth hometown'} +[Observation] [{'url': 'https://www.sportskeeda.com/tennis/jannik-sinner-nationality', 'content': "During the semifinal of the Cup, Sinner faced Djokovic for the third time in a row and became the first player to defeat him in a singles match. Jannik Sinner Nationality\nJannik Sinner is an Italian national and was born in Innichen, a town located in the mainly German-speaking area of South Tyrol in northern Italy. A. Jannik Sinner won his maiden Masters 1000 title at the 2023 Canadian Open defeating Alex de Minaur in the straight sets of the final.\n Apart from his glorious triumph at Melbourne Park in 2024, Jannik Sinner's best Grand Slam performance came at the 2023 Wimbledon, where he reached the semifinals. In 2020, Sinner became the youngest player since Novak Djokovic in 2006 to reach the quarter-finals of the French Open."}, {'url': 'https://en.wikipedia.org/wiki/Jannik_Sinner', 'content': "At the 2023 Australian Open, Sinner lost in the 4th round to eventual runner-up Stefanos Tsitsipas in 5 sets.[87]\nSinner then won his seventh title at the Open Sud de France in Montpellier, becoming the first player to win a tour-level title in the season without having dropped a single set and the first since countryman Lorenzo Musetti won the title in Naples in October 2022.[88]\nAt the ABN AMRO Open he defeated top seed and world No. 3 Stefanos Tsitsipas taking his revenge for the Australian Open loss, for his biggest win ever.[89] At the Cincinnati Masters, he lost in the third round to Félix Auger-Aliassime after being up a set, a break, and 2 match points.[76]\nSeeded 11th at the US Open, he reached the fourth round after defeating Brandon Nakashima in four sets.[77] Next, he defeated Ilya Ivashka in a five set match lasting close to four hours to reach the quarterfinals for the first time at this Major.[78] At five hours and 26 minutes, it was the longest match of Sinner's career up until this point and the fifth-longest in the tournament history[100] as well as the second longest of the season after Andy Murray against Thanasi Kokkinakis at the Australian Open.[101]\nHe reached back to back quarterfinals in Wimbledon after defeating Juan Manuel Cerundolo, Diego Schwartzman, Quentin Halys and Daniel Elahi Galan.[102] He then reached his first Major semifinal after defeating Roman Safiullin, before losing to Novak Djokovic in straight sets.[103] In the following round in the semifinals, he lost in straight sets to career rival and top seed Carlos Alcaraz who returned to world No. 1 following the tournament.[92] In Miami, he reached the quarterfinals of this tournament for a third straight year after defeating Grigor Dimitrov and Andrey Rublev, thus returning to the top 10 in the rankings at world No. 
In the final, he came from a two-set deficit to beat Daniil Medvedev to become the first Italian player, male or female, to win the Australian Open singles title, and the third man to win a Major (the second of which is in the Open Era), the first in 48 years.[8][122]"}, {'url': 'https://www.thesportreview.com/biography/jannik-sinner/', 'content': '• Date of birth: 16 August 2001\n• Age: 22 years old\n• Place of birth: San Candido, Italy\n• Nationality: Italian\n• Height: 188cm / 6ft 2ins\n• Weight: 76kg / 167lbs\n• Plays: Right-handed\n• Turned Pro: 2018\n• Career Prize Money: US$ 4,896,338\n• Instagram: @janniksin\nThe impressive 22-year-old turned professional back in 2018 and soon made an impact on the tour, breaking into the top 100 in the world rankings for the first time in 2019.\n Jannik Sinner (Photo: Dubai Duty Free Tennis Championships)\nSinner ended the season as number 78 in the world, becoming the youngest player since Rafael Nadal in 2003 to end the year in the top 80.\n The Italian then ended the 2019 season in style, qualifying for the 2019 Next Gen ATP Finals and going on to win the tournament with a win over Alex de Minaur in the final.\n Sinner then reached the main draw of a grand slam for the first time at the 2019 US Open, when he came through qualifying to reach the first round, where he lost to Stan Wawrinka.\n Asked to acknowledge some of the key figures in his development, Sinner replied: “I think first of all, my family who always helped me and gave me the confidence to actually change my life when I was 13-and-a-half, 14 years old.\n'}] +[Execute Result] {'thought': 'The search results have provided two different places of birth for Jannik Sinner: Innichen and San Candido. These are actually the same place, as San Candido is the Italian name and Innichen is the German name for the town. Since the user asked for the place of birth or hometown, I can now provide this information.', 'action_name': 'finish', 'action_parameters': {'content': 'Jannik Sinner was born in San Candido (Italian) / Innichen (German), Italy.'}} +[Execute] Execute End. +[Revised Plan] {"goals": ["Find the hometown of the 2024 Australian Open winner"], "tasks": [], "next_task_id": null} +[Agent Result] Jannik Sinner was born in San Candido (Italian) / Innichen (German), Italy. +[Agent] Agent End. +``` -- `Agent`: More advanced execution units responsible for task scheduling and distribution. -- `llm`: Large language model responsible for generating answers, supporting different types of large language models. -- `Memory`: Responsible for storing conversations, supporting different storage methods and extensions such as file storage and database storage. -- `Framework`: Framework layer that implements different prompt frameworks, including the basic `Conversation` model and models such as `self-ask` and `ReAct`. -- `Tool`: Provides external tool extensions for search engines, calculators, etc. -- `Hook&Lifecycle`: Hook system and lifecycle system that allows developers to customize lifecycle logic control. -- `Role presets`: Provides preset roles for customized conversations. -- `Provider`: Provides more data sources or autonomous operations for the system, such as connecting to databases. +For more detailed information, please check the [Getting Started/Official Documentation](https://undertone0809.github.io/promptulate/#/). 
- +## 📚 Design Principles -# Design Principles +The design principles of the pne framework include modularity, extensibility, interoperability, robustness, maintainability, security, efficiency, and usability. -The design principles of the `promptulate` framework include modularity, scalability, interoperability, robustness, maintainability, security, efficiency, and usability. +- Modularity refers to using modules as the basic unit, allowing for easy integration of new components, models, and tools. +- Extensibility refers to the framework's ability to handle large amounts of data, complex tasks, and high concurrency. +- Interoperability means the framework is compatible with various external systems, tools, and services and can achieve seamless integration and communication. +- Robustness indicates the framework has strong error handling, fault tolerance, and recovery mechanisms to ensure reliable operation under various conditions. +- Security implies the framework has implemented strict measures to protect against unauthorized access and malicious behavior. +- Efficiency is about optimizing the framework's performance, resource usage, and response times to ensure a smooth and responsive user experience. +- Usability means the framework uses user-friendly interfaces and clear documentation, making it easy to use and understand. -- Modularity refers to the ability to integrate new components, models, and tools conveniently, using modules as the basic unit. -- Scalability refers to the framework's capability to handle large amounts of data, complex tasks, and high concurrency. -- Interoperability means that the framework is compatible with various external systems, tools, and services, allowing seamless integration and communication. -- Robustness refers to the framework's ability to handle errors, faults, and recovery mechanisms to ensure reliable operation under different conditions. -- Security involves strict security measures to protect the framework, its data, and users from unauthorized access and malicious behavior. -- Efficiency focuses on optimizing the framework's performance, resource utilization, and response time to ensure a smooth and responsive user experience. -- Usability involves providing a user-friendly interface and clear documentation to make the framework easy to use and understand. +Following these principles and applying the latest artificial intelligence technologies, `pne` aims to provide a powerful and flexible framework for creating automated agents. -By following these principles and incorporating the latest advancements in artificial intelligence technology, `promptulate` aims to provide a powerful and flexible application development framework for creating automated agents. +## 💌 Contact -# Contributions +For more information, please contact: [zeeland4work@gmail.com](mailto:zeeland4work@gmail.com) -I am currently exploring more comprehensive abstraction patterns to improve compatibility with the framework and the extended use of external tools. If you have any suggestions, I welcome discussions and exchanges. +## ⭐ Contribution -If you would like to contribute to this project, please refer to the [current development plan](https://undertone0809.github.io/promptulate/#/other/plan) and [contribution/developer's guide](https://undertone0809.github.io/promptulate/#/other/contribution). I'm excited to see more people getting involved and optimizing it. +We appreciate your interest in contributing to our open-source initiative. 
We have provided a [Developer's Guide](https://undertone0809.github.io/promptulate/#/other/contribution) outlining the steps to contribute to Promptulate. Please refer to this guide to ensure smooth collaboration and successful contributions. Additionally, you can view the [Current Development Plan](https://undertone0809.github.io/promptulate/#/other/plan) to see the latest development progress. 🤝🚀 diff --git a/README_zh.md b/README_zh.md index d858e684..b0431b11 100644 --- a/README_zh.md +++ b/README_zh.md @@ -21,31 +21,69 @@

-`Promptulate AI` 专注于构建大语言模型应用与 AI Agent 的开发者平台,致力于为开发者和企业提供构建、扩展、评估大语言模型应用的能力。`Promptulate` 是 `Promptulate AI` 旗下的大语言模型自动化与应用开发框架,旨在帮助开发者通过更小的成本构建行业级的大模型应用,其包含了LLM领域应用层开发的大部分常用组件,如外部工具组件、模型组件、Agent 智能代理、外部数据源接入模块、数据存储模块、生命周期模块等。 通过 `Promptulate`,你可以用 pythonic 的方式轻松构建起属于自己的 LLM 应用程序。 - -更多地,为构建一个强大而灵活的 LLM 应用开发平台与 AI Agent 构建平台,以创建能够自动化各种任务和应用程序的自主代理,`Promptulate` 通过Core -AI Engine、Agent System、Tools Provider、Multimodal Processing、Knowledge Base 和 Task-specific Modules -6个组件实现自动化AI平台。 Core AI Engine 是该框架的核心组件,负责处理和理解各种输入,生成输出和作出决策。Agent -System 是提供高级指导和控制AI代理行为的模块;APIs and Tools Provider 提供工具和服务交互的API和集成库;Multimodal -Processing 是一组处理和理解不同数据类型(如文本、图像、音频和视频)的模块,使用深度学习模型从不同数据模式中提取有意义的信息;Knowledge -Base 是一个存储和组织世界信息的大型结构化知识库,使AI代理能够访问和推理大量的知识;Task-specific -Modules 是一组专门设计用于执行特定任务的模块,例如情感分析、机器翻译或目标检测等。通过这些组件的组合,框架提供了一个全面、灵活和强大的平台,能够实现各种复杂任务和应用程序的自动化。 - -## 特性 - -- 大语言模型支持:支持不同类型的大语言模型的扩展接口 -- 对话终端:提供简易对话终端,直接体验与大语言模型的对话 -- AgentGroup:提供WebAgent、ToolAgent、CodeAgent等不同的Agent,进行复杂能力处理 -- 长对话模式:支持长对话聊天,支持多种方式的对话持久化 -- 外部工具:集成外部工具能力,可以进行网络搜索、执行Python代码等强大的功能 -- KEY池:提供API key池,彻底解决key限速的问题 -- 智能代理:集成 ReAct,self-ask 等 Prompt 框架,结合外部工具赋能 LLM -- 中文优化:针对中文语境进行特别优化,更适合中文场景 -- 数据导出:支持 Markdown 等格式的对话导出 -- Hook与生命周期:提供 Agent,Tool,llm 的生命周期及 Hook 系统 -- 高级抽象:支持插件扩展、存储扩展、大语言模型扩展 - -## 快速开始 +## Overview + +**Promptulate** 是 **Cogit Lab** 打造的 AI Agent 应用开发框架,通过 Pythonic 的开发范式,旨在为开发者们提供一种极其简洁而高效的 Agent 应用构建体验。 🛠️ Promptulate 的核心理念在于借鉴并融合开源社区的智慧,集成各种开发框架的亮点,以此降低开发门槛并统一开发者的共识。通过 Promptulate,你可以用最简洁的代码来操纵 LLM, Agent, Tool, RAG 等组件,大多数任务仅需几行代码即可轻松完成。🚀 + +## 💡特性 + +- 🐍 Pythonic Code Style: 采用 Python 开发者的习惯,提供 Pythonic 的 SDK 调用方式,一切尽在掌握,仅需一个 pne.chat 函数便可封装所有必需功能。 +- 🧠 模型兼容性: 支持市面上几乎所有类型的大模型,并且可以轻松自定义模型以满足特定需求。 +- 🕵️‍♂️ 多样化 Agent: 提供 WebAgent、ToolAgent、CodeAgent 等多种类型的 Agent,具备计划、推理、行动等处理复杂问题的能力。 +- 🔗 低成本集成: 轻而易举地集成如 LangChain 等不同框架的工具,大幅降低集成成本。 +- 🔨 函数即工具: 将任意 Python 函数直接转化为 Agent 可用的工具,简化了工具的创建和使用过程。 +- 🪝 生命周期与钩子: 提供丰富的 Hook 和完善的生命周期管理,允许在 Agent、Tool、LLM 的各个阶段插入自定义代码。 +- 💻 终端集成: 轻松集成应用终端,自带客户端支持,提供 prompt 的快速调试能力。 +- ⏱️ Prompt 缓存: 提供 LLM Prompt 缓存机制,减少重复工作,提升开发效率。 + +> 下面用 pne 表示 promptulate,pne 是 Promptulate 的昵称,其中 p 和 e 分别代表 promptulate 的开头和结尾,n 代表 9,即 p 和 e 中间的九个字母的简写。 + +## 支持的基础模型 + +pne 集成了 [litellm](https://github.com/BerriAI/litellm) 的能力,支持几乎市面上所有类型的大模型,包括但不限于以下模型: + +| Provider | [Completion](https://docs.litellm.ai/docs/#basic-usage) | [Streaming](https://docs.litellm.ai/docs/completion/stream#streaming-responses) | [Async Completion](https://docs.litellm.ai/docs/completion/stream#async-completion) | [Async Streaming](https://docs.litellm.ai/docs/completion/stream#async-streaming) | [Async Embedding](https://docs.litellm.ai/docs/embedding/supported_embedding) | [Async Image Generation](https://docs.litellm.ai/docs/image_generation) | +| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | +| [openai](https://docs.litellm.ai/docs/providers/openai) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | +| [azure](https://docs.litellm.ai/docs/providers/azure) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | +| [aws - sagemaker](https://docs.litellm.ai/docs/providers/aws_sagemaker) | ✅ | ✅ | ✅ | ✅ | ✅ | +| [aws - bedrock](https://docs.litellm.ai/docs/providers/bedrock) | ✅ | ✅ | ✅ | ✅ |✅ | +| [google - vertex_ai [Gemini]](https://docs.litellm.ai/docs/providers/vertex) | ✅ | ✅ | ✅ | ✅ | +| [google - palm](https://docs.litellm.ai/docs/providers/palm) | ✅ | ✅ | ✅ | ✅ | +| [google AI Studio - gemini](https://docs.litellm.ai/docs/providers/gemini) | ✅ | | ✅ | | | +| [mistral ai 
api](https://docs.litellm.ai/docs/providers/mistral) | ✅ | ✅ | ✅ | ✅ | ✅ | +| [cloudflare AI Workers](https://docs.litellm.ai/docs/providers/cloudflare_workers) | ✅ | ✅ | ✅ | ✅ | +| [cohere](https://docs.litellm.ai/docs/providers/cohere) | ✅ | ✅ | ✅ | ✅ | ✅ | +| [anthropic](https://docs.litellm.ai/docs/providers/anthropic) | ✅ | ✅ | ✅ | ✅ | +| [huggingface](https://docs.litellm.ai/docs/providers/huggingface) | ✅ | ✅ | ✅ | ✅ | ✅ | +| [replicate](https://docs.litellm.ai/docs/providers/replicate) | ✅ | ✅ | ✅ | ✅ | +| [together_ai](https://docs.litellm.ai/docs/providers/togetherai) | ✅ | ✅ | ✅ | ✅ | +| [openrouter](https://docs.litellm.ai/docs/providers/openrouter) | ✅ | ✅ | ✅ | ✅ | +| [ai21](https://docs.litellm.ai/docs/providers/ai21) | ✅ | ✅ | ✅ | ✅ | +| [baseten](https://docs.litellm.ai/docs/providers/baseten) | ✅ | ✅ | ✅ | ✅ | +| [vllm](https://docs.litellm.ai/docs/providers/vllm) | ✅ | ✅ | ✅ | ✅ | +| [nlp_cloud](https://docs.litellm.ai/docs/providers/nlp_cloud) | ✅ | ✅ | ✅ | ✅ | +| [aleph alpha](https://docs.litellm.ai/docs/providers/aleph_alpha) | ✅ | ✅ | ✅ | ✅ | +| [petals](https://docs.litellm.ai/docs/providers/petals) | ✅ | ✅ | ✅ | ✅ | +| [ollama](https://docs.litellm.ai/docs/providers/ollama) | ✅ | ✅ | ✅ | ✅ | +| [deepinfra](https://docs.litellm.ai/docs/providers/deepinfra) | ✅ | ✅ | ✅ | ✅ | +| [perplexity-ai](https://docs.litellm.ai/docs/providers/perplexity) | ✅ | ✅ | ✅ | ✅ | +| [Groq AI](https://docs.litellm.ai/docs/providers/groq) | ✅ | ✅ | ✅ | ✅ | +| [anyscale](https://docs.litellm.ai/docs/providers/anyscale) | ✅ | ✅ | ✅ | ✅ | +| [voyage ai](https://docs.litellm.ai/docs/providers/voyage) | | | | | ✅ | +| [xinference [Xorbits Inference]](https://docs.litellm.ai/docs/providers/xinference) | | | | | ✅ | + +详情可以跳转 [litellm documentation](https://docs.litellm.ai/docs/providers) 查看。 + +你可以使用下面的方式,十分轻松地调用任何第三方模型。 + +```python +import promptulate as pne + +resp: str = pne.chat(model="ollama/llama2", messages=[{"content": "Hello, how are you?", "role": "user"}]) +``` + +## 📗 相关文档 - [快速上手/官方文档](https://undertone0809.github.io/promptulate/#/) - [当前开发计划](https://undertone0809.github.io/promptulate/#/other/plan) @@ -53,81 +91,115 @@ Modules 是一组专门设计用于执行特定任务的模块,例如情感分 - [常见问题](https://undertone0809.github.io/promptulate/#/other/faq) - [pypi仓库](https://pypi.org/project/promptulate/) +## 🛠 快速开始 + - 打开终端,输入以下命令安装框架: ```shell script pip install -U promptulate ``` -> Your Python version should be 3.8 or higher. +> 注意:Python 版本需要 3.8 或更高。 -- 通过下面这个简单的程序开始你的 “HelloWorld”。 +格式化输出是 LLM 应用开发鲁棒性的重要基础,我们希望 LLM 可以返回稳定的数据。使用 pne,你可以轻松地进行格式化输出。下面的示例中,我们使用 pydantic 的 BaseModel 封装一个需要返回的数据结构。 ```python -import os +from typing import List import promptulate as pne +from pydantic import BaseModel, Field -os.environ['OPENAI_API_KEY'] = "your-key" - -agent = pne.WebAgent() -answer = agent.run("What is the temperature tomorrow in Shanghai") -print(answer) -``` +class LLMResponse(BaseModel): + provinces: List[str] = Field(description="List of provinces' names") -``` -The temperature tomorrow in Shanghai is expected to be 23°C.
+resp: LLMResponse = pne.chat("Please tell me all provinces in China.?", output_schema=LLMResponse) +print(resp) ``` -> 大多数时候我们会将 promptulate 称之为 pne,其中 p 和 e 表示 promptulate 开头和结尾的单词,而 n 表示 9,即 p 和 e 中间的九个单词的简写。 +**Output:** -要将包括网络搜索、计算器等在内的各种外部工具集成到您的Python应用程序中,您可以使用promptulate库与langchain库一起使用。langchain库允许您构建一个带有工具集的ToolAgent,例如基于OpenAI的DALL-E模型的图像生成器。 +```text +provinces=['Anhui', 'Fujian', 'Gansu', 'Guangdong', 'Guizhou', 'Hainan', 'Hebei', 'Heilongjiang', 'Henan', 'Hubei', 'Hunan', 'Jiangsu', 'Jiangxi', 'Jilin', 'Liaoning', 'Qinghai', 'Shaanxi', 'Shandong', 'Shanxi', 'Sichuan', 'Yunnan', 'Zhejiang', 'Taiwan', 'Guangxi', 'Nei Mongol', 'Ningxia', 'Xinjiang', 'Xizang', 'Beijing', 'Chongqing', 'Shanghai', 'Tianjin', 'Hong Kong', 'Macao'] +``` -下面是一个如何使用promptulate和langchain库根据文本描述创建图像的例子: +在 pne,你可以轻松集成各种不同类型不同框架(如LangChain,llama-index)的 tools,如网络搜索、计算器等在外部工具,下面的示例中,我们使用 LangChain 的 duckduckgo 的搜索工具,来获取明天上海的天气。 ```python +import os import promptulate as pne from langchain.agents import load_tools -tools: list = load_tools(["dalle-image-generator"]) -agent = pne.ToolAgent(tools=tools) -output = agent.run("创建一个万圣节夜晚在一个闹鬼的博物馆的图像") +os.environ["OPENAI_API_KEY"] = "your-key" + +tools: list = load_tools(["ddg-search", "arxiv"]) +resp: str = pne.chat(model="gpt-4-1106-preview", messages = [{ "content": "What is the temperature tomorrow in Shanghai","role": "user"}], tools=tools) ``` -output: +在这个示例中,pne 内部集成了拥有推理和反思能力的 [ReAct](https://arxiv.org/abs/2210.03629) 研究,封装成 ToolAgent,拥有强大的推理能力和工具调用能力,可以选择合适的工具进行调用,从而获取更加准确的结果。 + +**Output:** ```text -Here is the generated image: [![Halloween Night at a Haunted Museum](https://oaidalleapiprodscus.blob.core.windows.net/private/org-OyRC1wqD0EP6oWMS2n4kZgVi/user-JWA0mHqDqYh3oPpQtXbWUPgu/img-SH09tWkWZLJVltxifLi6jFy7.png)] +The temperature tomorrow in Shanghai is expected to be 23°C. ``` -![Halloween Night at a Haunted Museum](./docs/images/dall-e-gen.png) +此外,受到 [Plan-and-Solve](https://arxiv.org/abs/2305.04091) 论文的影响,pne 还允许开发者构建具有计划、推理、行动等处理复杂问题的能力的 Agent,通过 enable_plan 参数,可以开启 Agent 的计划能力。 -更多详细资料,请查看[快速上手/官方文档](https://undertone0809.github.io/promptulate/#/) +![plan-and-execute.png](./docs/images/plan-and-execute.png) + +在这个例子中,我们使用 [Tavily](https://app.tavily.com/) 作为搜索引擎,它是一个强大的搜索引擎,可以从网络上搜索信息。要使用 Tavily,您需要从 Tavily 获得一个API密钥。 + +```python +import os + +os.environ["TAVILY_API_KEY"] = "your_tavily_api_key" +os.environ["OPENAI_API_KEY"] = "your_openai_api_key" +``` + +在这个例子中,我们使用 LangChain 封装好的 TavilySearchResults Tool。 + +```python +from langchain_community.tools.tavily_search import TavilySearchResults -## 基础架构 +tools = [TavilySearchResults(max_results=5)] +``` -当前`promptulate`正处于快速开发阶段,仍有许多内容需要完善与讨论,十分欢迎大家的讨论与参与,而其作为一个大语言模型自动化与应用开发框架,主要由以下几部分组成: +```python +import promptulate as pne -- 大语言模型支持:支持不同类型的大语言模型的扩展接口 -- AI Agent:提供WebAgent、ToolAgent、CodeAgent等不同的Agent以及自定Agent能力,进行复杂能力处理 -- 对话终端:提供简易对话终端,直接体验与大语言模型的对话 -- 角色预设:提供预设角色,以不同的角度调用LLM -- 长对话模式:支持长对话聊天,支持多种方式的对话持久化 -- 外部工具:集成外部工具能力,可以进行网络搜索、执行Python代码等强大的功能 -- KEY池:提供API key池,彻底解决key限速的问题 -- 智能代理:集成ReAct,self-ask等高级Agent,结合外部工具赋能LLM -- 中文优化:针对中文语境进行特别优化,更适合中文场景 -- 数据导出:支持 Markdown 等格式的对话导出 -- 高级抽象:支持插件扩展、存储扩展、大语言模型扩展 -- 格式化输出:原生支持大模型的格式化输出,大大提升复杂场景下的任务处理能力与鲁棒性 -- Hook与生命周期:提供Agent,Tool,llm的生命周期及Hook系统 -- 物联网能力:框架为物联网应用开发提供了多种工具,方便物联网开发者使用大模型能力。 +pne.chat("what is the hometown of the 2024 Australia open winner?", model="gpt-4-1106-preview", enable_plan=True) +``` +**Output:** - +```text +[Agent] Assistant Agent start... +[User instruction] what is the hometown of the 2024 Australia open winner? 
+[Plan] {"goals": ["Find the hometown of the 2024 Australian Open winner"], "tasks": [{"task_id": 1, "description": "Identify the winner of the 2024 Australian Open."}, {"task_id": 2, "description": "Research the identified winner to find their place of birth or hometown."}, {"task_id": 3, "description": "Record the hometown of the 2024 Australian Open winner."}], "next_task_id": 1} +[Agent] Tool Agent start... +[User instruction] Identify the winner of the 2024 Australian Open. +[Thought] Since the current date is March 26, 2024, and the Australian Open typically takes place in January, the event has likely concluded for the year. To identify the winner, I should use the Tavily search tool to find the most recent information on the 2024 Australian Open winner. +[Action] tavily_search_results_json args: {'query': '2024 Australian Open winner'} +[Observation] [{'url': 'https://ausopen.com/articles/news/sinner-winner-italian-takes-first-major-ao-2024', 'content': 'The agile right-hander, who had claimed victory from a two-set deficit only once previously in his young career, is the second Italian man to achieve singles glory at a major, following Adriano Panatta in1976.With victories over Andrey Rublev, 10-time AO champion Novak Djokovic, and Medvedev, the Italian is the youngest player to defeat top 5 opponents in the final three matches of a major since Michael Stich did it at Wimbledon in 1991 – just weeks before Sinner was born.\n He saved the only break he faced with an ace down the tee, and helped by scoreboard pressure, broke Medvedev by slamming a huge forehand to force an error from his more experienced rival, sealing the fourth set to take the final to a decider.\n Sensing a shift in momentum as Medvedev served to close out the second at 5-3, Sinner set the RLA crowd alight with a pair of brilliant passing shots en route to creating a break point opportunity, which Medvedev snuffed out with trademark patience, drawing a forehand error from his opponent. “We are trying to get better every day, even during the tournament we try to get stronger, trying to understand every situation a little bit better, and I’m so glad to have you there supporting me, understanding me, which sometimes it’s not easy because I am a little bit young sometimes,” he said with a smile.\n Medvedev, who held to love in his first three service games of the second set, piled pressure on the Italian, forcing the right-hander to produce his best tennis to save four break points in a nearly 12-minute second game.\n'}, {'url': 'https://www.cbssports.com/tennis/news/australian-open-2024-jannik-sinner-claims-first-grand-slam-title-in-epic-comeback-win-over-daniil-medvedev/', 'content': '"\nOur Latest Tennis Stories\nSinner makes epic comeback to win Australian Open\nSinner, Sabalenka win Australian Open singles titles\n2024 Australian Open odds, Sinner vs. Medvedev picks\nSabalenka defeats Zheng to win 2024 Australian Open\n2024 Australian Open odds, Sabalenka vs. Zheng picks\n2024 Australian Open odds, Medvedev vs. Zverev picks\nAustralian Open odds: Djokovic vs. Sinner picks, bets\nAustralian Open odds: Gauff vs. Sabalenka picks, bets\nAustralian Open odds: Zheng vs. Yastremska picks, bets\nNick Kyrgios reveals he\'s contemplating retirement\n© 2004-2024 CBS Interactive. 
Jannik Sinner claims first Grand Slam title in epic comeback win over Daniil Medvedev\nSinner, 22, rallied back from a two-set deficit to become the third ever Italian Grand Slam men\'s singles champion\nAfter almost four hours, Jannik Sinner climbed back from a two-set deficit to win his first ever Grand Slam title with an epic 3-6, 3-6, 6-4, 6-4, 6-3 comeback victory against Daniil Medvedev. Sinner became the first Italian man to win the Australian Open since 1976, and just the eighth man to successfully come back from two sets down in a major final.\n He did not drop a single set until his meeting with Djokovic, and that win in itself was an accomplishment as Djokovic was riding a 33-match winning streak at the Australian Open and had never lost a semifinal in Melbourne.\n @janniksin • @wwos • @espn • @eurosport • @wowowtennis pic.twitter.com/DTCIqWoUoR\n"We are trying to get better everyday, and even during the tournament, trying to get stronger and understand the situation a little bit better," Sinner said.'}, {'url': 'https://www.bbc.com/sport/tennis/68120937', 'content': 'Live scores, results and order of play\nAlerts: Get tennis news sent to your phone\nRelated Topics\nTop Stories\nFA Cup: Blackburn Rovers v Wrexham - live text commentary\nRussian skater Valieva given four-year ban for doping\nLinks to Barcelona are \'totally untrue\' - Arteta\nElsewhere on the BBC\nThe truth behind the fake grooming scandal\nFeaturing unseen police footage and interviews with the officers at the heart of the case\nDid their father and uncle kill Nazi war criminals?\n A real-life murder mystery following three brothers in their quest for the truth\nWhat was it like to travel on the fastest plane?\nTake a behind-the-scenes look at the supersonic story of the Concorde\nToxic love, ruthless ambition and shocking betrayal\nTell Me Lies follows a passionate college relationship with unimaginable consequences...\n "\nMarathon man Medvedev runs out of steam\nMedvedev is the first player to lose two Grand Slam finals after winning the opening two sets\nSo many players with the experience of a Grand Slam final have talked about how different the occasion can be, particularly if it is the first time, and potentially overwhelming.\n Jannik Sinner beats Daniil Medvedev in Melbourne final\nJannik Sinner is the youngest player to win the Australian Open men\'s title since Novak Djokovic in 2008\nJannik Sinner landed the Grand Slam title he has long promised with an extraordinary fightback to beat Daniil Medvedev in the Australian Open final.\n "\nSinner starts 2024 in inspired form\nSinner won the first Australian Open men\'s final since 2005 which did not feature Roger Federer, Rafael Nadal or Novak Djokovic\nSinner was brought to the forefront of conversation when discussing Grand Slam champions in 2024 following a stunning end to last season.\n'}] +[Execute Result] {'thought': "The search results have provided consistent information about the winner of the 2024 Australian Open. Jannik Sinner is mentioned as the winner in multiple sources, which confirms the answer to the user's question.", 'action_name': 'finish', 'action_parameters': {'content': 'Jannik Sinner won the 2024 Australian Open.'}} +[Execute] Execute End. 
+[Revised Plan] {"goals": ["Find the hometown of the 2024 Australian Open winner"], "tasks": [{"task_id": 2, "description": "Research Jannik Sinner to find his place of birth or hometown."}, {"task_id": 3, "description": "Record the hometown of Jannik Sinner, the 2024 Australian Open winner."}], "next_task_id": 2} +[Agent] Tool Agent start... +[User instruction] Research Jannik Sinner to find his place of birth or hometown. +[Thought] To find Jannik Sinner's place of birth or hometown, I should use the search tool to find the most recent and accurate information. +[Action] tavily_search_results_json args: {'query': 'Jannik Sinner place of birth hometown'} +[Observation] [{'url': 'https://www.sportskeeda.com/tennis/jannik-sinner-nationality', 'content': "During the semifinal of the Cup, Sinner faced Djokovic for the third time in a row and became the first player to defeat him in a singles match. Jannik Sinner Nationality\nJannik Sinner is an Italian national and was born in Innichen, a town located in the mainly German-speaking area of South Tyrol in northern Italy. A. Jannik Sinner won his maiden Masters 1000 title at the 2023 Canadian Open defeating Alex de Minaur in the straight sets of the final.\n Apart from his glorious triumph at Melbourne Park in 2024, Jannik Sinner's best Grand Slam performance came at the 2023 Wimbledon, where he reached the semifinals. In 2020, Sinner became the youngest player since Novak Djokovic in 2006 to reach the quarter-finals of the French Open."}, {'url': 'https://en.wikipedia.org/wiki/Jannik_Sinner', 'content': "At the 2023 Australian Open, Sinner lost in the 4th round to eventual runner-up Stefanos Tsitsipas in 5 sets.[87]\nSinner then won his seventh title at the Open Sud de France in Montpellier, becoming the first player to win a tour-level title in the season without having dropped a single set and the first since countryman Lorenzo Musetti won the title in Naples in October 2022.[88]\nAt the ABN AMRO Open he defeated top seed and world No. 3 Stefanos Tsitsipas taking his revenge for the Australian Open loss, for his biggest win ever.[89] At the Cincinnati Masters, he lost in the third round to Félix Auger-Aliassime after being up a set, a break, and 2 match points.[76]\nSeeded 11th at the US Open, he reached the fourth round after defeating Brandon Nakashima in four sets.[77] Next, he defeated Ilya Ivashka in a five set match lasting close to four hours to reach the quarterfinals for the first time at this Major.[78] At five hours and 26 minutes, it was the longest match of Sinner's career up until this point and the fifth-longest in the tournament history[100] as well as the second longest of the season after Andy Murray against Thanasi Kokkinakis at the Australian Open.[101]\nHe reached back to back quarterfinals in Wimbledon after defeating Juan Manuel Cerundolo, Diego Schwartzman, Quentin Halys and Daniel Elahi Galan.[102] He then reached his first Major semifinal after defeating Roman Safiullin, before losing to Novak Djokovic in straight sets.[103] In the following round in the semifinals, he lost in straight sets to career rival and top seed Carlos Alcaraz who returned to world No. 1 following the tournament.[92] In Miami, he reached the quarterfinals of this tournament for a third straight year after defeating Grigor Dimitrov and Andrey Rublev, thus returning to the top 10 in the rankings at world No. 
In the final, he came from a two-set deficit to beat Daniil Medvedev to become the first Italian player, male or female, to win the Australian Open singles title, and the third man to win a Major (the second of which is in the Open Era), the first in 48 years.[8][122]"}, {'url': 'https://www.thesportreview.com/biography/jannik-sinner/', 'content': '• Date of birth: 16 August 2001\n• Age: 22 years old\n• Place of birth: San Candido, Italy\n• Nationality: Italian\n• Height: 188cm / 6ft 2ins\n• Weight: 76kg / 167lbs\n• Plays: Right-handed\n• Turned Pro: 2018\n• Career Prize Money: US$ 4,896,338\n• Instagram: @janniksin\nThe impressive 22-year-old turned professional back in 2018 and soon made an impact on the tour, breaking into the top 100 in the world rankings for the first time in 2019.\n Jannik Sinner (Photo: Dubai Duty Free Tennis Championships)\nSinner ended the season as number 78 in the world, becoming the youngest player since Rafael Nadal in 2003 to end the year in the top 80.\n The Italian then ended the 2019 season in style, qualifying for the 2019 Next Gen ATP Finals and going on to win the tournament with a win over Alex de Minaur in the final.\n Sinner then reached the main draw of a grand slam for the first time at the 2019 US Open, when he came through qualifying to reach the first round, where he lost to Stan Wawrinka.\n Asked to acknowledge some of the key figures in his development, Sinner replied: “I think first of all, my family who always helped me and gave me the confidence to actually change my life when I was 13-and-a-half, 14 years old.\n'}] +[Execute Result] {'thought': 'The search results have provided two different places of birth for Jannik Sinner: Innichen and San Candido. These are actually the same place, as San Candido is the Italian name and Innichen is the German name for the town. Since the user asked for the place of birth or hometown, I can now provide this information.', 'action_name': 'finish', 'action_parameters': {'content': 'Jannik Sinner was born in San Candido (Italian) / Innichen (German), Italy.'}} +[Execute] Execute End. +[Revised Plan] {"goals": ["Find the hometown of the 2024 Australian Open winner"], "tasks": [], "next_task_id": null} +[Agent Result] Jannik Sinner was born in San Candido (Italian) / Innichen (German), Italy. +[Agent] Agent End. +``` + +更多详细资料,请查看[快速上手/官方文档](https://undertone0809.github.io/promptulate/#/) -## 设计原则 +## 📚 设计原则 -promptulate框架的设计原则包括:模块化、可扩展性、互操作性、鲁棒性、可维护性、安全性、效率和可用性。 +pne 框架的设计原则包括:模块化、可扩展性、互操作性、鲁棒性、可维护性、安全性、效率和可用性。 - 模块化是指以模块为基本单位,允许方便地集成新的组件、模型和工具。 - 可扩展性是指框架能够处理大量数据、复杂任务和高并发的能力。 @@ -137,19 +209,18 @@ promptulate框架的设计原则包括:模块化、可扩展性、互操作性 - 效率是指优化框架的性能、资源使用和响应时间,以确保流畅和敏锐的用户体验。 - 可用性是指该框架采用用户友好的界面和清晰的文档,使其易于使用和理解。 -以上原则的遵循,以及最新的人工智能技术的应用,`promptulate`旨在为创建自动化代理提供强大而灵活的大语言模型应用开发框架。 +以上原则的遵循,以及最新的人工智能技术的应用,`pne` 旨在为创建自动化代理提供强大而灵活的大语言模型应用开发框架。 -## 交流群 +## 💌 联系 -欢迎加入群聊一起交流讨论有关LLM相关的话题,链接过期了可以issue或email提醒一下作者。 +欢迎加入群聊一起交流讨论 LLM & AI Agent 相关的话题,群里会不定期进行技术分享,链接过期了可以 issue 或 email 提醒一下作者。
- +
+For more information, please contact: [zeeland4work@gmail.com](mailto:zeeland4work@gmail.com) -## 贡献 +## ⭐ 贡献 -本人正在尝试一些更加完善的抽象模式,以更好地兼容该框架,以及外部工具的扩展使用,如果你有更好的建议,欢迎一起讨论交流。 -如果你想为这个项目做贡献,请先查看[当前开发计划](https://undertone0809.github.io/promptulate/#/other/plan) -和[参与贡献/开发者手册](https://undertone0809.github.io/promptulate/#/other/contribution)。我很高兴看到更多的人参与并优化它。 +我们感谢你有兴趣为我们的开源计划做出贡献。我们提供了[开发者指南](https://undertone0809.github.io/promptulate/#/other/contribution),其中概述了为 Promptulate 做出贡献的步骤。请参阅本指南,以确保顺利合作和成功贡献。此外,你也可以查看[当前开发计划](https://undertone0809.github.io/promptulate/#/other/plan)了解最新的开发进展。🤝🚀 diff --git a/docs/README.md b/docs/README.md index 00f12f02..36e6ff03 100644 --- a/docs/README.md +++ b/docs/README.md @@ -21,111 +21,185 @@

-`Promptulate AI` 专注于构建大语言模型应用与 AI Agent 的开发者平台,致力于为开发者和企业提供构建、扩展、评估大语言模型应用的能力。`Promptulate` 是 `Promptulate AI` 旗下的大语言模型自动化与应用开发框架,旨在帮助开发者通过更小的成本构建行业级的大模型应用,其包含了LLM领域应用层开发的大部分常用组件,如外部工具组件、模型组件、Agent 智能代理、外部数据源接入模块、数据存储模块、生命周期模块等。 通过 `Promptulate`,你可以用 pythonic 的方式轻松构建起属于自己的 LLM 应用程序。 - -更多地,为构建一个强大而灵活的 LLM 应用开发平台与 AI Agent 构建平台,以创建能够自动化各种任务和应用程序的自主代理,`Promptulate` 通过Core -AI Engine、Agent System、Tools Provider、Multimodal Processing、Knowledge Base 和 Task-specific Modules -6个组件实现自动化AI平台。 Core AI Engine 是该框架的核心组件,负责处理和理解各种输入,生成输出和作出决策。Agent -System 是提供高级指导和控制AI代理行为的模块;APIs and Tools Provider 提供工具和服务交互的API和集成库;Multimodal -Processing 是一组处理和理解不同数据类型(如文本、图像、音频和视频)的模块,使用深度学习模型从不同数据模式中提取有意义的信息;Knowledge -Base 是一个存储和组织世界信息的大型结构化知识库,使AI代理能够访问和推理大量的知识;Task-specific -Modules 是一组专门设计用于执行特定任务的模块,例如情感分析、机器翻译或目标检测等。通过这些组件的组合,框架提供了一个全面、灵活和强大的平台,能够实现各种复杂任务和应用程序的自动化。 - -## 特性 - -- 大语言模型支持:支持不同类型的大语言模型的扩展接口 - -- 对话终端:提供简易对话终端,直接体验与大语言模型的对话 -- AgentGroup:提供WebAgent、ToolAgent、CodeAgent等不同的Agent,进行复杂能力处理 -- 长对话模式:支持长对话聊天,支持多种方式的对话持久化 -- 外部工具:集成外部工具能力,可以进行网络搜索、执行Python代码等强大的功能 -- KEY池:提供API key池,彻底解决key限速的问题 -- 智能代理:集成 ReAct,self-ask 等 Prompt 框架,结合外部工具赋能 LLM -- 中文优化:针对中文语境进行特别优化,更适合中文场景 -- 数据导出:支持 Markdown 等格式的对话导出 -- Hook与生命周期:提供 Agent,Tool,llm 的生命周期及 Hook 系统 -- 高级抽象:支持插件扩展、存储扩展、大语言模型扩展 - -## 快速开始 - -- [快速上手/官方文档](https://undertone0809.github.io/promptulate/#/) -- [当前开发计划](https://undertone0809.github.io/promptulate/#/other/plan) -- [参与贡献/开发者手册](https://undertone0809.github.io/promptulate/#/other/contribution) -- [常见问题](https://undertone0809.github.io/promptulate/#/other/faq) -- [pypi仓库](https://pypi.org/project/promptulate/) - -- 打开终端,输入以下命令安装框架: +## Overview + +**Promptulate** is an AI Agent application development framework crafted by **Cogit Lab**. It offers developers an extremely concise and efficient way to build Agent applications through a Pythonic development paradigm. The core philosophy of Promptulate is to borrow and integrate the wisdom of the open-source community, incorporating the highlights of various development frameworks to lower the barrier to entry and unify the consensus among developers. With Promptulate, you can manipulate components such as LLM, Agent, Tool, and RAG with the most succinct code, as most tasks can be easily completed with just a few lines of code. 🚀 + +## 💡 Features + +- 🐍 Pythonic Code Style: Embraces the habits of Python developers, providing a Pythonic SDK calling approach in which a single `pne.chat` function encapsulates all essential functionality. +- 🧠 Model Compatibility: Supports nearly all types of large models on the market and allows for easy customization to meet specific needs. +- 🕵️‍♂️ Diverse Agents: Offers various types of Agents, such as WebAgent, ToolAgent, and CodeAgent, capable of planning, reasoning, and acting to handle complex problems. +- 🔗 Low-Cost Integration: Effortlessly integrates tools from different frameworks like LangChain, significantly reducing integration costs. +- 🔨 Functions as Tools: Converts any Python function directly into a tool usable by Agents, simplifying tool creation and usage. +- 🪝 Lifecycle and Hooks: Provides a wealth of Hooks and comprehensive lifecycle management, allowing the insertion of custom code at various stages of Agents, Tools, and LLMs. +- 💻 Terminal Integration: Easily integrates application terminals, with built-in client support, offering rapid debugging capabilities for prompts.
+- ⏱️ Prompt Caching: Offers a caching mechanism for LLM Prompts to reduce repetitive work and enhance development efficiency. + +> Below, `pne` stands for Promptulate, which is the nickname for Promptulate. The `p` and `e` represent the beginning and end of Promptulate, respectively, and `n` stands for 9, which is a shorthand for the nine letters between `p` and `e`. + +## Supported Base Models + +Promptulate integrates the capabilities of [litellm](https://github.com/BerriAI/litellm), supporting nearly all types of large models on the market, including but not limited to the following models: + +| Provider | [Completion](https://docs.litellm.ai/docs/#basic-usage) | [Streaming](https://docs.litellm.ai/docs/completion/stream#streaming-responses) | [Async Completion](https://docs.litellm.ai/docs/completion/stream#async-completion) | [Async Streaming](https://docs.litellm.ai/docs/completion/stream#async-streaming) | [Async Embedding](https://docs.litellm.ai/docs/embedding/supported_embedding) | [Async Image Generation](https://docs.litellm.ai/docs/image_generation) | +| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | +| [openai](https://docs.litellm.ai/docs/providers/openai) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | +| [azure](https://docs.litellm.ai/docs/providers/azure) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | +| [aws - sagemaker](https://docs.litellm.ai/docs/providers/aws_sagemaker) | ✅ | ✅ | ✅ | ✅ | ✅ | +| [aws - bedrock](https://docs.litellm.ai/docs/providers/bedrock) | ✅ | ✅ | ✅ | ✅ |✅ | +| [google - vertex_ai [Gemini]](https://docs.litellm.ai/docs/providers/vertex) | ✅ | ✅ | ✅ | ✅ | +| [google - palm](https://docs.litellm.ai/docs/providers/palm) | ✅ | ✅ | ✅ | ✅ | +| [google AI Studio - gemini](https://docs.litellm.ai/docs/providers/gemini) | ✅ | | ✅ | | | +| [mistral ai api](https://docs.litellm.ai/docs/providers/mistral) | ✅ | ✅ | ✅ | ✅ | ✅ | +| [cloudflare AI Workers](https://docs.litellm.ai/docs/providers/cloudflare_workers) | ✅ | ✅ | ✅ | ✅ | +| [cohere](https://docs.litellm.ai/docs/providers/cohere) | ✅ | ✅ | ✅ | ✅ | ✅ | +| [anthropic](https://docs.litellm.ai/docs/providers/anthropic) | ✅ | ✅ | ✅ | ✅ | +| [huggingface](https://docs.litellm.ai/docs/providers/huggingface) | ✅ | ✅ | ✅ | ✅ | ✅ | +| [replicate](https://docs.litellm.ai/docs/providers/replicate) | ✅ | ✅ | ✅ | ✅ | +| [together_ai](https://docs.litellm.ai/docs/providers/togetherai) | ✅ | ✅ | ✅ | ✅ | +| [openrouter](https://docs.litellm.ai/docs/providers/openrouter) | ✅ | ✅ | ✅ | ✅ | +| [ai21](https://docs.litellm.ai/docs/providers/ai21) | ✅ | ✅ | ✅ | ✅ | +| [baseten](https://docs.litellm.ai/docs/providers/baseten) | ✅ | ✅ | ✅ | ✅ | +| [vllm](https://docs.litellm.ai/docs/providers/vllm) | ✅ | ✅ | ✅ | ✅ | +| [nlp_cloud](https://docs.litellm.ai/docs/providers/nlp_cloud) | ✅ | ✅ | ✅ | ✅ | +| [aleph alpha](https://docs.litellm.ai/docs/providers/aleph_alpha) | ✅ | ✅ | ✅ | ✅ | +| [petals](https://docs.litellm.ai/docs/providers/petals) | ✅ | ✅ | ✅ | ✅ | +| [ollama](https://docs.litellm.ai/docs/providers/ollama) | ✅ | ✅ | ✅ | ✅ | +| [deepinfra](https://docs.litellm.ai/docs/providers/deepinfra) | ✅ | ✅ | ✅ | ✅ | +| [perplexity-ai](https://docs.litellm.ai/docs/providers/perplexity) | ✅ | ✅ | ✅ | ✅ | +| [Groq AI](https://docs.litellm.ai/docs/providers/groq) | ✅ | ✅ | ✅ | ✅ | +| [anyscale](https://docs.litellm.ai/docs/providers/anyscale) | ✅ | ✅ | ✅ | ✅ | +| [voyage ai](https://docs.litellm.ai/docs/providers/voyage) | | | | | ✅ | +| [xinference [Xorbits 
Inference]](https://docs.litellm.ai/docs/providers/xinference) | | | | | ✅ | + +For more details, please visit the [litellm documentation](https://docs.litellm.ai/docs/providers). + +You can easily build any third-party model calls using the following method: + +```python +import promptulate as pne + +resp: str = pne.chat(model="ollama/llama2", messages=[{"content": "Hello, how are you?", "role": "user"}]) +``` + +## 📗 Related Documentation + +- [Getting Started/Official Documentation](https://undertone0809.github.io/promptulate/#/) +- [Current Development Plan](https://undertone0809.github.io/promptulate/#/other/plan) +- [Contributing/Developer's Manual](https://undertone0809.github.io/promptulate/#/other/contribution) +- [Frequently Asked Questions](https://undertone0809.github.io/promptulate/#/other/faq) +- [PyPI Repository](https://pypi.org/project/promptulate/) + +## 🛠 Quick Start + +- Open the terminal and enter the following command to install the framework: ```shell script pip install -U promptulate ``` -- 通过下面这个简单的程序开始你的 “HelloWorld”。 +> Note: Your Python version should be 3.8 or higher. + +Robust output formatting is a fundamental basis for LLM application development. We hope that LLMs can return stable data. With pne, you can easily perform formatted output. In the following example, we use Pydantic's BaseModel to encapsulate a data structure that needs to be returned. ```python -import os +from typing import List import promptulate as pne +from pydantic import BaseModel, Field -os.environ['OPENAI_API_KEY'] = "your-key" +class LLMResponse(BaseModel): + provinces: List[str] = Field(description="List of provinces' names") -agent = pne.WebAgent() -answer = agent.run("What is the temperature tomorrow in Shanghai") -print(answer) +resp: LLMResponse = pne.chat("Please tell me all provinces in China.", output_schema=LLMResponse) +print(resp) ``` +**Output:** + +```text +provinces=['Anhui', 'Fujian', 'Gansu', 'Guangdong', 'Guizhou', 'Hainan', 'Hebei', 'Heilongjiang', 'Henan', 'Hubei', 'Hunan', 'Jiangsu', 'Jiangxi', 'Jilin', 'Liaoning', 'Qinghai', 'Shaanxi', 'Shandong', 'Shanxi', 'Sichuan', 'Yunnan', 'Zhejiang', 'Taiwan', 'Guangxi', 'Nei Mongol', 'Ningxia', 'Xinjiang', 'Xizang', 'Beijing', 'Chongqing', 'Shanghai', 'Tianjin', 'Hong Kong', 'Macao'] ``` -The temperature tomorrow in Shanghai is expected to be 23°C. + +Additionally, influenced by the [Plan-and-Solve](https://arxiv.org/abs/2305.04091) paper, pne also allows developers to build Agents capable of dealing with complex problems through planning, reasoning, and action. The Agent's planning abilities can be activated using the `enable_plan` parameter. + +![plan-and-execute.png](images/plan-and-execute.png) + +In this example, we use [Tavily](https://app.tavily.com/) as the search engine, which is a powerful tool for searching information on the web. To use Tavily, you need to obtain an API key from Tavily. + +```python +import os + +os.environ["TAVILY_API_KEY"] = "your_tavily_api_key" +os.environ["OPENAI_API_KEY"] = "your_openai_api_key" ``` -> 大多数时候我们会将 promptulate 称之为 pne,其中 p 和 e 表示 promptulate 开头和结尾的单词,而 n 表示 9,即 p 和 e 中间的九个单词的简写。 +In this case, we are using the TavilySearchResults Tool wrapped by LangChain. 
-更多详细资料,请查看[快速上手/官方文档](https://undertone0809.github.io/promptulate/#/) +```python +from langchain_community.tools.tavily_search import TavilySearchResults -## 基础架构 +tools = [TavilySearchResults(max_results=5)] +``` -当前`promptulate`正处于快速开发阶段,仍有许多内容需要完善与讨论,十分欢迎大家的讨论与参与,而其作为一个大语言模型自动化与应用开发框架,主要由以下几部分组成: +```python +import promptulate as pne -- 大语言模型支持:支持不同类型的大语言模型的扩展接口 -- AI Agent:提供WebAgent、ToolAgent、CodeAgent等不同的Agent以及自定Agent能力,进行复杂能力处理 -- 对话终端:提供简易对话终端,直接体验与大语言模型的对话 -- 角色预设:提供预设角色,以不同的角度调用LLM -- 长对话模式:支持长对话聊天,支持多种方式的对话持久化 -- 外部工具:集成外部工具能力,可以进行网络搜索、执行Python代码等强大的功能 -- KEY池:提供API key池,彻底解决key限速的问题 -- 智能代理:集成ReAct,self-ask等高级Agent,结合外部工具赋能LLM -- 中文优化:针对中文语境进行特别优化,更适合中文场景 -- 数据导出:支持 Markdown 等格式的对话导出 -- 高级抽象:支持插件扩展、存储扩展、大语言模型扩展 -- 格式化输出:原生支持大模型的格式化输出,大大提升复杂场景下的任务处理能力与鲁棒性 -- Hook与生命周期:提供Agent,Tool,llm的生命周期及Hook系统 -- 物联网能力:框架为物联网应用开发提供了多种工具,方便物联网开发者使用大模型能力。 +pne.chat("what is the hometown of the 2024 Australia open winner?", model="gpt-4-1106-preview", enable_plan=True) +``` +**Output:** + +```text +[Agent] Assistant Agent start... +[User instruction] what is the hometown of the 2024 Australia open winner? +[Plan] {"goals": ["Find the hometown of the 2024 Australian Open winner"], "tasks": [{"task_id": 1, "description": "Identify the winner of the 2024 Australian Open."}, {"task_id": 2, "description": "Research the identified winner to find their place of birth or hometown."}, {"task_id": 3, "description": "Record the hometown of the 2024 Australian Open winner."}], "next_task_id": 1} +[Agent] Tool Agent start... +[User instruction] Identify the winner of the 2024 Australian Open. +[Thought] Since the current date is March 26, 2024, and the Australian Open typically takes place in January, the event has likely concluded for the year. To identify the winner, I should use the Tavily search tool to find the most recent information on the 2024 Australian Open winner. +[Action] tavily_search_results_json args: {'query': '2024 Australian Open winner'} +[Observation] [{'url': 'https://ausopen.com/articles/news/sinner-winner-italian-takes-first-major-ao-2024', 'content': 'The agile right-hander, who had claimed victory from a two-set deficit only once previously in his young career, is the second Italian man to achieve singles glory at a major, following Adriano Panatta in1976.With victories over Andrey Rublev, 10-time AO champion Novak Djokovic, and Medvedev, the Italian is the youngest player to defeat top 5 opponents in the final three matches of a major since Michael Stich did it at Wimbledon in 1991 – just weeks before Sinner was born.\n He saved the only break he faced with an ace down the tee, and helped by scoreboard pressure, broke Medvedev by slamming a huge forehand to force an error from his more experienced rival, sealing the fourth set to take the final to a decider.\n Sensing a shift in momentum as Medvedev served to close out the second at 5-3, Sinner set the RLA crowd alight with a pair of brilliant passing shots en route to creating a break point opportunity, which Medvedev snuffed out with trademark patience, drawing a forehand error from his opponent. 
“We are trying to get better every day, even during the tournament we try to get stronger, trying to understand every situation a little bit better, and I’m so glad to have you there supporting me, understanding me, which sometimes it’s not easy because I am a little bit young sometimes,” he said with a smile.\n Medvedev, who held to love in his first three service games of the second set, piled pressure on the Italian, forcing the right-hander to produce his best tennis to save four break points in a nearly 12-minute second game.\n'}, {'url': 'https://www.cbssports.com/tennis/news/australian-open-2024-jannik-sinner-claims-first-grand-slam-title-in-epic-comeback-win-over-daniil-medvedev/', 'content': '"\nOur Latest Tennis Stories\nSinner makes epic comeback to win Australian Open\nSinner, Sabalenka win Australian Open singles titles\n2024 Australian Open odds, Sinner vs. Medvedev picks\nSabalenka defeats Zheng to win 2024 Australian Open\n2024 Australian Open odds, Sabalenka vs. Zheng picks\n2024 Australian Open odds, Medvedev vs. Zverev picks\nAustralian Open odds: Djokovic vs. Sinner picks, bets\nAustralian Open odds: Gauff vs. Sabalenka picks, bets\nAustralian Open odds: Zheng vs. Yastremska picks, bets\nNick Kyrgios reveals he\'s contemplating retirement\n© 2004-2024 CBS Interactive. Jannik Sinner claims first Grand Slam title in epic comeback win over Daniil Medvedev\nSinner, 22, rallied back from a two-set deficit to become the third ever Italian Grand Slam men\'s singles champion\nAfter almost four hours, Jannik Sinner climbed back from a two-set deficit to win his first ever Grand Slam title with an epic 3-6, 3-6, 6-4, 6-4, 6-3 comeback victory against Daniil Medvedev. Sinner became the first Italian man to win the Australian Open since 1976, and just the eighth man to successfully come back from two sets down in a major final.\n He did not drop a single set until his meeting with Djokovic, and that win in itself was an accomplishment as Djokovic was riding a 33-match winning streak at the Australian Open and had never lost a semifinal in Melbourne.\n @janniksin • @wwos • @espn • @eurosport • @wowowtennis pic.twitter.com/DTCIqWoUoR\n"We are trying to get better everyday, and even during the tournament, trying to get stronger and understand the situation a little bit better," Sinner said.'}, {'url': 'https://www.bbc.com/sport/tennis/68120937', 'content': 'Live scores, results and order of play\nAlerts: Get tennis news sent to your phone\nRelated Topics\nTop Stories\nFA Cup: Blackburn Rovers v Wrexham - live text commentary\nRussian skater Valieva given four-year ban for doping\nLinks to Barcelona are \'totally untrue\' - Arteta\nElsewhere on the BBC\nThe truth behind the fake grooming scandal\nFeaturing unseen police footage and interviews with the officers at the heart of the case\nDid their father and uncle kill Nazi war criminals?\n A real-life murder mystery following three brothers in their quest for the truth\nWhat was it like to travel on the fastest plane?\nTake a behind-the-scenes look at the supersonic story of the Concorde\nToxic love, ruthless ambition and shocking betrayal\nTell Me Lies follows a passionate college relationship with unimaginable consequences...\n "\nMarathon man Medvedev runs out of steam\nMedvedev is the first player to lose two Grand Slam finals after winning the opening two sets\nSo many players with the experience of a Grand Slam final have talked about how different the occasion can be, particularly if it is the first time, and potentially 
overwhelming.\n Jannik Sinner beats Daniil Medvedev in Melbourne final\nJannik Sinner is the youngest player to win the Australian Open men\'s title since Novak Djokovic in 2008\nJannik Sinner landed the Grand Slam title he has long promised with an extraordinary fightback to beat Daniil Medvedev in the Australian Open final.\n "\nSinner starts 2024 in inspired form\nSinner won the first Australian Open men\'s final since 2005 which did not feature Roger Federer, Rafael Nadal or Novak Djokovic\nSinner was brought to the forefront of conversation when discussing Grand Slam champions in 2024 following a stunning end to last season.\n'}] +[Execute Result] {'thought': "The search results have provided consistent information about the winner of the 2024 Australian Open. Jannik Sinner is mentioned as the winner in multiple sources, which confirms the answer to the user's question.", 'action_name': 'finish', 'action_parameters': {'content': 'Jannik Sinner won the 2024 Australian Open.'}} +[Execute] Execute End. +[Revised Plan] {"goals": ["Find the hometown of the 2024 Australian Open winner"], "tasks": [{"task_id": 2, "description": "Research Jannik Sinner to find his place of birth or hometown."}, {"task_id": 3, "description": "Record the hometown of Jannik Sinner, the 2024 Australian Open winner."}], "next_task_id": 2} +[Agent] Tool Agent start... +[User instruction] Research Jannik Sinner to find his place of birth or hometown. +[Thought] To find Jannik Sinner's place of birth or hometown, I should use the search tool to find the most recent and accurate information. +[Action] tavily_search_results_json args: {'query': 'Jannik Sinner place of birth hometown'} +[Observation] [{'url': 'https://www.sportskeeda.com/tennis/jannik-sinner-nationality', 'content': "During the semifinal of the Cup, Sinner faced Djokovic for the third time in a row and became the first player to defeat him in a singles match. Jannik Sinner Nationality\nJannik Sinner is an Italian national and was born in Innichen, a town located in the mainly German-speaking area of South Tyrol in northern Italy. A. Jannik Sinner won his maiden Masters 1000 title at the 2023 Canadian Open defeating Alex de Minaur in the straight sets of the final.\n Apart from his glorious triumph at Melbourne Park in 2024, Jannik Sinner's best Grand Slam performance came at the 2023 Wimbledon, where he reached the semifinals. In 2020, Sinner became the youngest player since Novak Djokovic in 2006 to reach the quarter-finals of the French Open."}, {'url': 'https://en.wikipedia.org/wiki/Jannik_Sinner', 'content': "At the 2023 Australian Open, Sinner lost in the 4th round to eventual runner-up Stefanos Tsitsipas in 5 sets.[87]\nSinner then won his seventh title at the Open Sud de France in Montpellier, becoming the first player to win a tour-level title in the season without having dropped a single set and the first since countryman Lorenzo Musetti won the title in Naples in October 2022.[88]\nAt the ABN AMRO Open he defeated top seed and world No. 
3 Stefanos Tsitsipas taking his revenge for the Australian Open loss, for his biggest win ever.[89] At the Cincinnati Masters, he lost in the third round to Félix Auger-Aliassime after being up a set, a break, and 2 match points.[76]\nSeeded 11th at the US Open, he reached the fourth round after defeating Brandon Nakashima in four sets.[77] Next, he defeated Ilya Ivashka in a five set match lasting close to four hours to reach the quarterfinals for the first time at this Major.[78] At five hours and 26 minutes, it was the longest match of Sinner's career up until this point and the fifth-longest in the tournament history[100] as well as the second longest of the season after Andy Murray against Thanasi Kokkinakis at the Australian Open.[101]\nHe reached back to back quarterfinals in Wimbledon after defeating Juan Manuel Cerundolo, Diego Schwartzman, Quentin Halys and Daniel Elahi Galan.[102] He then reached his first Major semifinal after defeating Roman Safiullin, before losing to Novak Djokovic in straight sets.[103] In the following round in the semifinals, he lost in straight sets to career rival and top seed Carlos Alcaraz who returned to world No. 1 following the tournament.[92] In Miami, he reached the quarterfinals of this tournament for a third straight year after defeating Grigor Dimitrov and Andrey Rublev, thus returning to the top 10 in the rankings at world No. In the final, he came from a two-set deficit to beat Daniil Medvedev to become the first Italian player, male or female, to win the Australian Open singles title, and the third man to win a Major (the second of which is in the Open Era), the first in 48 years.[8][122]"}, {'url': 'https://www.thesportreview.com/biography/jannik-sinner/', 'content': '• Date of birth: 16 August 2001\n• Age: 22 years old\n• Place of birth: San Candido, Italy\n• Nationality: Italian\n• Height: 188cm / 6ft 2ins\n• Weight: 76kg / 167lbs\n• Plays: Right-handed\n• Turned Pro: 2018\n• Career Prize Money: US$ 4,896,338\n• Instagram: @janniksin\nThe impressive 22-year-old turned professional back in 2018 and soon made an impact on the tour, breaking into the top 100 in the world rankings for the first time in 2019.\n Jannik Sinner (Photo: Dubai Duty Free Tennis Championships)\nSinner ended the season as number 78 in the world, becoming the youngest player since Rafael Nadal in 2003 to end the year in the top 80.\n The Italian then ended the 2019 season in style, qualifying for the 2019 Next Gen ATP Finals and going on to win the tournament with a win over Alex de Minaur in the final.\n Sinner then reached the main draw of a grand slam for the first time at the 2019 US Open, when he came through qualifying to reach the first round, where he lost to Stan Wawrinka.\n Asked to acknowledge some of the key figures in his development, Sinner replied: “I think first of all, my family who always helped me and gave me the confidence to actually change my life when I was 13-and-a-half, 14 years old.\n'}] +[Execute Result] {'thought': 'The search results have provided two different places of birth for Jannik Sinner: Innichen and San Candido. These are actually the same place, as San Candido is the Italian name and Innichen is the German name for the town. Since the user asked for the place of birth or hometown, I can now provide this information.', 'action_name': 'finish', 'action_parameters': {'content': 'Jannik Sinner was born in San Candido (Italian) / Innichen (German), Italy.'}} +[Execute] Execute End. 
+[Revised Plan] {"goals": ["Find the hometown of the 2024 Australian Open winner"], "tasks": [], "next_task_id": null} +[Agent Result] Jannik Sinner was born in San Candido (Italian) / Innichen (German), Italy. +[Agent] Agent End. +``` - +For more detailed information, please check the [Getting Started/Official Documentation](https://undertone0809.github.io/promptulate/#/). -## 设计原则 +## 📚 Design Principles -promptulate框架的设计原则包括:模块化、可扩展性、互操作性、鲁棒性、可维护性、安全性、效率和可用性。 +The design principles of the pne framework include modularity, extensibility, interoperability, robustness, maintainability, security, efficiency, and usability. -- 模块化是指以模块为基本单位,允许方便地集成新的组件、模型和工具。 -- 可扩展性是指框架能够处理大量数据、复杂任务和高并发的能力。 -- 互操作性是指该框架与各种外部系统、工具和服务兼容,并且能够实现无缝集成和通信。 -- 鲁棒性是指框架具备强大的错误处理、容错和恢复机制,以确保在各种条件下可靠地运行。 -- 安全性是指框架采用了严格的安全措施,以保护框架、其数据和用户免受未经授权访问和恶意行为的侵害。 -- 效率是指优化框架的性能、资源使用和响应时间,以确保流畅和敏锐的用户体验。 -- 可用性是指该框架采用用户友好的界面和清晰的文档,使其易于使用和理解。 +- Modularity refers to using modules as the basic unit, allowing for easy integration of new components, models, and tools. +- Extensibility refers to the framework's ability to handle large amounts of data, complex tasks, and high concurrency. +- Interoperability means the framework is compatible with various external systems, tools, and services and can achieve seamless integration and communication. +- Robustness indicates the framework has strong error handling, fault tolerance, and recovery mechanisms to ensure reliable operation under various conditions. +- Security implies the framework has implemented strict measures to protect against unauthorized access and malicious behavior. +- Efficiency is about optimizing the framework's performance, resource usage, and response times to ensure a smooth and responsive user experience. +- Usability means the framework uses user-friendly interfaces and clear documentation, making it easy to use and understand. -以上原则的遵循,以及最新的人工智能技术的应用,`promptulate`旨在为创建自动化代理提供强大而灵活的大语言模型应用开发框架。 +Following these principles and applying the latest artificial intelligence technologies, `pne` aims to provide a powerful and flexible framework for creating automated agents. -## 交流群 +## 💌 Contact -欢迎加入群聊一起交流讨论有关LLM相关的话题,链接过期了可以issue或email提醒一下作者。 +Feel free to join the group chat to discuss topics related to LLM & AI Agents. There will be occasional technical shares in the group. If the link expires, please remind the author via issue or email.
- +
-## 贡献
+For more information, please contact: [zeeland4work@gmail.com](mailto:zeeland4work@gmail.com)
+
+## ⭐ Contribution
-本人正在尝试一些更加完善的抽象模式,以更好地兼容该框架,如果你有更好的建议,欢迎一起讨论交流。
-如果你想为这个项目做贡献,请先查看[当前开发计划](https://undertone0809.github.io/promptulate/#/other/plan)
-和[参与贡献/开发者手册](https://undertone0809.github.io/promptulate/#/other/contribution)。我很高兴看到更多的人参与并优化它。
+We appreciate your interest in contributing to our open-source initiative. We have provided a [Developer's Guide](https://undertone0809.github.io/promptulate/#/other/contribution) outlining the steps to contribute to Promptulate. Please refer to this guide to ensure smooth collaboration and successful contributions. Additionally, you can view the [Current Development Plan](https://undertone0809.github.io/promptulate/#/other/plan) to see the latest development progress 🤝🚀
diff --git a/docs/get_started/quick_start.md b/docs/get_started/quick_start.md
index bd9386b1..9b638434 100644
--- a/docs/get_started/quick_start.md
+++ b/docs/get_started/quick_start.md
@@ -1,6 +1,6 @@
 # 快速开始
 
-通过该部分教学,你可以快速对 promptulate 有一个整体的认知,了解一些常用模块的基本使用方式,在阅读完该部分之后,你可以继续阅读 [User Cases](modules/usercases/intro.md#user-cases) 来了解 promptulate 的一些最佳实践,在遇到问题的时候,可以查看每个模块的具体使用方式,也欢迎你在 [issue](https://github.com/Undertone0809/promptulate/issues) 中为 promptulate 提供更好的建议。
+通过该部分教学,你可以快速对 promptulate 有一个整体的认知,了解一些常用模块的基本使用方式,在阅读完该部分之后,你可以继续阅读 [User Cases](modules/usercases/intro.md#user-cases) 和 [example](https://github.com/Undertone0809/promptulate/tree/main/example) 来了解 promptulate 的一些最佳实践,在遇到问题的时候,可以查看每个模块的具体使用方式,也欢迎你在 [issue](https://github.com/Undertone0809/promptulate/issues) 中为 promptulate 提供更好的建议。
 
 ## 安装最新版
@@ -234,7 +234,6 @@ pip install poetry
 make install
 ```
-
 本项目使用配备代码语法检查工具,如果你想提交 pr,则需要在 commit 之前运行 `make polish-codestyle` 进行代码规范格式化,并且运行 `make lint` 通过语法与单元测试的检查。
 
 ## 更多
diff --git a/docs/modules/agents/assistant_agent_usage.md b/docs/modules/agents/assistant_agent_usage.md
index 91784d7c..43c30b7b 100644
--- a/docs/modules/agents/assistant_agent_usage.md
+++ b/docs/modules/agents/assistant_agent_usage.md
@@ -51,7 +51,7 @@ However, it is really easy to create your own tools - see documentation [here](h
 ```python
 from langchain_community.tools.tavily_search import TavilySearchResults
 
-tools = [TavilySearchResults(max_results=3)]
+tools = [TavilySearchResults(max_results=5)]
 ```
 
 ## Create the Assistant Agent
@@ -69,7 +69,7 @@ agent = AssistantAgent(tools=tools, llm=llm)
 agent.run("what is the hometown of the 2024 Australia open winner?")
 ```
 
-**output**
+**Output:**
 
 ```text
 [Agent] Assistant Agent start...
diff --git a/docs/use_cases/chat_usage.md b/docs/use_cases/chat_usage.md
index 0bcd2d7b..07232951 100644
--- a/docs/use_cases/chat_usage.md
+++ b/docs/use_cases/chat_usage.md
@@ -166,6 +166,27 @@ print(response)
 
 The output of the LLM has strong uncertainty. Pne provides the ability to get a formatted object from the LLM. The following example shows how to make the LLM strictly return an array listing all provinces in China.
+```python
+from typing import List
+from pydantic import BaseModel, Field
+import promptulate as pne
+
+class LLMResponse(BaseModel):
+    provinces: List[str] = Field(description="All provinces in China")
+
+
+resp: LLMResponse = pne.chat(
+    messages="Please tell me all provinces in China.",
+    output_schema=LLMResponse
+)
+
+print(resp)
+```
+
+    provinces=['Anhui', 'Fujian', 'Gansu', 'Guangdong', 'Guizhou', 'Hainan', 'Hebei', 'Heilongjiang', 'Henan', 'Hubei', 'Hunan', 'Jiangsu', 'Jiangxi', 'Jilin', 'Liaoning', 'Qinghai', 'Shaanxi', 'Shandong', 'Shanxi', 'Sichuan', 'Yunnan', 'Zhejiang', 'Guangxi', 'Inner Mongolia', 'Ningxia', 'Xinjiang', 'Tibet', 'Beijing', 'Chongqing', 'Shanghai', 'Tianjin', 'Hong Kong', 'Macau']
+
+
 ```python
 from typing import List
 import promptulate as pne
@@ -216,18 +237,21 @@ print(response.queried_date)
 
 ## Using tool
 
-You can use `pne.tools` to add some tools to chat. Now we have `pne.tools.duckduckgo.DuckDuckGoTool()`, it can help you to get the answer from DuckDuckGo.
-> ⚠ There are some tiny bugs if you use tools, we are fixing it. We are ready to release the first version of `pne.tools` in the next version.
+The Tool feature in `pne.chat()` allows the language model to use specialized tools to assist in providing answers. For instance, when the language model recognizes the need to obtain weather information, it can invoke a predefined function for this purpose.
+
+This is facilitated by a ToolAgent, which operates within the ReAct framework. The [ReAct](https://react-lm.github.io/) framework endows the ToolAgent with the ability to reason, think, and execute tools.
+
+To illustrate, if the language model needs to find out the weather forecast for Shanghai tomorrow, it can make use of the DuckDuckGoTool through the ToolAgent to retrieve this information.
 
 ```python
 import promptulate as pne
 
-tools = [pne.tools.duckduckgo.DuckDuckGoTool()]
+websearch = pne.tools.DuckDuckGoTool()
 response = pne.chat(
     messages="What's the temperature in Shanghai tomorrow?",
-    tools=tools
+    tools=[websearch]
 )
 print(response)
 ```
@@ -235,68 +259,126 @@ print(response)
 
     {"tool": {"tool_name": "web_search", "tool_params": {"query": "Weather Shanghai tomorrow"}}, "thought": "I will use the web_search tool to find the temperature in Shanghai tomorrow.", "final_answer": null}
 
+## Custom Tool
-```python
-from typing import Any, Optional, Union
-from promptulate.output_formatter import OutputFormatter
-from pydantic import BaseModel, Field
+
+Moreover, you can easily turn your own function into a tool. The following example shows how to create a custom tool and use it in `pne.chat()`. Here we also use the DuckDuckGo web search to wrap the function.
-class ToolParams(BaseModel):
-    tool_name: str = Field(description="Tool name")
-    tool_params: dict = Field(description="Tool parameters, if not, pass in an empty dictionary.")
-class LLMResponse(BaseModel):
-    tool: Optional[ToolParams] = Field(description="The tool to take", default=None)
-    thought: str = Field(description="Ideas generated based on the current situation.")
-    final_answer: Optional[str] = Field(description="When you think you can output the final answer, write down the output here", default=None)
-
-formatter = OutputFormatter(LLMResponse)
-instruction = formatter.get_formatted_instructions()
-print(instruction)
-```
+```python
+import promptulate as pne
-    ## Output format
-    The output should be formatted as a JSON instance that conforms to the JSON schema below.
+
+def websearch(query: str) -> str:
+    """Search the web for the query.
- As an example, for the schema {"properties": {"foo": {"description": "a list of strings", "type": "array", "items": {"type": "string"}}}, "required": ["foo"]} - the object {"foo": ["bar", "baz"]} is a well-formatted instance of the schema. The object {"properties": {"foo": ["bar", "baz"]}} is not well-formatted. + Args: + query(str): The query word. + + Returns: + str: The search result. + """ + return pne.tools.DuckDuckGoTool().run(query) - Here is the output schema: - ``` - {"properties": {"tool": {"description": "The tool to take", "allOf": [{"$ref": "#/definitions/ToolParams"}]}, "thought": {"description": "Ideas generated based on the current situation.", "type": "string"}, "final_answer": {"description": "When you think you can output the final answer, write down the output here", "type": "string"}}, "required": ["thought"], "definitions": {"ToolParams": {"title": "ToolParams", "type": "object", "properties": {"tool_name": {"title": "Tool Name", "description": "Tool name", "type": "string"}, "tool_params": {"title": "Tool Params", "description": "Tool parameters, if not, pass in an empty dictionary.", "type": "object"}}, "required": ["tool_name", "tool_params"]}}} - ``` +response = pne.chat( + messages="What's the temperature in Shanghai tomorrow?", + tools=[websearch] +) +print(response) +``` + + [Agent] Tool Agent start... + [User instruction] What's the temperature in Shanghai tomorrow? + [Thought] I should use the websearch tool to find the weather forecast of Shanghai tomorrow. + [Action] websearch args: {'query': 'Shanghai weather forecast tomorrow'} + [Observation] 25° / 14°. 1.7 mm. 7 m/s. Open hourly forecast. Updated 18:30. How often is the weather forecast updated? Forecast as PDF Forecast as SVG. Shanghai Weather Forecast. Providing a local hourly Shanghai weather forecast of rain, sun, wind, humidity and temperature. The Long-range 12 day forecast also includes detail for Shanghai weather today. Live weather reports from Shanghai weather stations and weather warnings that include risk of thunder, high UV index and forecast gales. Everything you need to know about today's weather in Shanghai, Shanghai, China. High/Low, Precipitation Chances, Sunrise/Sunset, and today's Temperature History. 上海 (Shanghai) ☀ Weather forecast for 10 days, information from meteorological stations, webcams, sunrise and sunset, wind and precipitation maps for this place ... 00:00 tomorrow 01:00 tomorrow 02:00 tomorrow 03:00 tomorrow 04:00 tomorrow 05:00 tomorrow 06:00 tomorrow 07:00 tomorrow 08:00 tomorrow 09:00 tomorrow Shanghai 7 day weather forecast including weather warnings, temperature, rain, wind, visibility, humidity and UV + [Agent Result] The weather forecast for Shanghai tomorrow is 25° / 14° with 1.7 mm of rain and 7 m/s wind speed. The weather information is updated at 18:30 daily. + [Agent] Agent End. + The weather forecast for Shanghai tomorrow is 25° / 14° with 1.7 mm of rain and 7 m/s wind speed. The weather information is updated at 18:30 daily. +## chat with Plan-Execute-Reflect Agent + +Additionally, you can enhance the capabilities of the ToolAgent by setting enable_plan=True, which activates its ability to handle more complex issues. In the pne framework, this action triggers the AssistantAgent, which can be thought of as a planning-capable ToolAgent. Upon receiving user instructions, the AssistantAgent proactively constructs a feasible plan, executes it, and then reflects on each action post-execution. 
If the outcome doesn't meet the expected results, the AssistantAgent will recalibrate and re-plan accordingly.
+
+In this example, we need to solve the problem of "What's the temperature in Shanghai tomorrow?". Here we can integrate LangChain tools to solve the problem.
+
+> pne supports all LangChain Tools; you can see [here](/modules/tools/langchain_tool_usage?id=langchain-tool-usage). Of course, it is really easy to create your own tools - see the documentation [here](https://undertone0809.github.io/promptulate/#/modules/tools/custom_tool_usage?id=custom-tool) on how to do that.
+
+First, we need to install the necessary packages.
+```bash
+pip install langchain_community
+```
+
+We use [Tavily](https://app.tavily.com/) as the search engine. It is a powerful engine that can search for information from the web. To use Tavily, you need to get an API key from Tavily.
 
 ```python
-from typing import Any, Optional, Union
-from promptulate.output_formatter import OutputFormatter
-from pydantic import BaseModel, Field
+import os
+
+os.environ["TAVILY_API_KEY"] = "your_tavily_api_key"
+os.environ["OPENAI_API_KEY"] = "your_openai_api_key"
+```
-class WebSearchParams(BaseModel):
-    query: str = Field(description="query word")
-class WebSearchTool(BaseModel):
-    name: str = Field(description="Tool name")
-    params: WebSearchParams = Field(description="Tool parameters, if not, pass in an empty dictionary.")
+```python
+from langchain_community.tools.tavily_search import TavilySearchResults
-formatter = OutputFormatter(WebSearchTool)
-instruction = formatter.get_formatted_instructions()
-print(instruction)
-# a = WebSearchTool.schema()
-# print(a)
+websearch = TavilySearchResults()
 ```
-    ## Output format
-    The output should be formatted as a JSON instance that conforms to the JSON schema below.
-
-    As an example, for the schema {"properties": {"foo": {"description": "a list of strings", "type": "array", "items": {"type": "string"}}}, "required": ["foo"]}
-    the object {"foo": ["bar", "baz"]} is a well-formatted instance of the schema. The object {"properties": {"foo": ["bar", "baz"]}} is not well-formatted.
-
-    Here is the output schema:
-    ```
-    {"properties": {"name": {"description": "Tool name", "type": "string"}, "params": {"description": "Tool parameters, if not, pass in an empty dictionary.", "allOf": [{"$ref": "#/definitions/WebSearchParams"}]}}, "required": ["name", "params"], "definitions": {"WebSearchParams": {"title": "WebSearchParams", "type": "object", "properties": {"query": {"title": "Query", "description": "query word", "type": "string"}}, "required": ["query"]}}}
-    ```
+
+```python
+import promptulate as pne
+
+response = pne.chat(
+    model="gpt-4-1106-preview",
+    messages="What's the temperature in Shanghai tomorrow?",
+    tools=[websearch],
+    enable_plan=True
+)
+print(response)
+```
+
+[Agent] Assistant Agent start...
+[User instruction] What's the temperature in Shanghai tomorrow?
+[Plan] {"goals": ["Find out the temperature in Shanghai tomorrow."], "tasks": [{"task_id": 1, "description": "Open a web browser on your device.", "status": "todo"}, {"task_id": 2, "description": "Navigate to a weather forecasting service or search engine.", "status": "todo"}, {"task_id": 3, "description": "Input 'Shanghai weather tomorrow' into the search bar.", "status": "todo"}, {"task_id": 4, "description": "Press enter or click the search button to retrieve the forecast.", "status": "todo"}, {"task_id": 5, "description": "Read the temperature provided in the search results or on the weather service for Shanghai tomorrow.", "status": "todo"}], "next_task_id": 1} +[Agent] Tool Agent start... +[User instruction] Open a web browser on your device. +[Execute Result] {'thought': "The user seems to be asking for an action that is outside the scope of my capabilities. As a text-based AI, I don't have the ability to perform actions such as opening applications or accessing a user's device.", 'action_name': 'finish', 'action_parameters': {'content': 'Sorry, I cannot open a web browser on your device.'}} +[Execute] Execute End. +[Revised Plan] {"goals": ["Find out the temperature in Shanghai tomorrow."], "tasks": [{"task_id": 1, "description": "Open a web browser on your device.", "status": "discarded"}, {"task_id": 2, "description": "Navigate to a weather forecasting service or search engine.", "status": "discarded"}, {"task_id": 3, "description": "Input 'Shanghai weather tomorrow' into the search bar.", "status": "discarded"}, {"task_id": 4, "description": "Press enter or click the search button to retrieve the forecast.", "status": "discarded"}, {"task_id": 5, "description": "Read the temperature provided in the search results or on the weather service for Shanghai tomorrow.", "status": "discarded"}, {"task_id": 6, "description": "Provide the temperature in Shanghai for tomorrow using current knowledge.", "status": "todo"}], "next_task_id": 6} +[Agent] Tool Agent start... +[User instruction] Provide the temperature in Shanghai for tomorrow using current knowledge. +[Thought] I need to use a tool to find the temperature in Shanghai for tomorrow. Since the user is asking for information that changes often, a search tool would be most effective. +[Action] tavily_search_results_json args: {'query': 'Shanghai temperature forecast March 30, 2024'} +[Observation] [{'url': 'https://en.climate-data.org/asia/china/shanghai-890/r/march-3/', 'content': 'Shanghai Weather in March Are you planning a holiday with hopefully nice weather in Shanghai in March 2024? Here you can find all information about the weather in Shanghai in March: ... 30.7 °C (87.3) °F. 27 °C (80.5) °F. 22.5 °C (72.5) °F. 17 °C (62.6) °F. 10.8 °C (51.4) °F.'}, {'url': 'https://www.meteoprog.com/weather/Szanghaj/month/march/', 'content': 'Shanghai (China) weather in March 2024 ☀️ Accurate weather forecast for Shanghai in March ⛅ Detailed forecast By month Current temperature "near me" Weather news ⊳ Widget of weather ⊳ Water temperature | METEOPROG. ... 30 March +17 °+11° 31 March +16° ...'}, {'url': 'https://www.accuweather.com/en/cn/shanghai/106577/march-weather/106577', 'content': 'Get the monthly weather forecast for Shanghai, Shanghai, China, including daily high/low, historical averages, to help you plan ahead.'}, {'url': 'https://world-weather.info/forecast/china/shanghai/march-2024/', 'content': 'Extended weather forecast in Shanghai. Hourly Week 10 days 14 days 30 days Year. 
Detailed ⚡ Shanghai Weather Forecast for March 2024 - day/night 🌡️ temperatures, precipitations - World-Weather.info.'}, {'url': 'https://www.weather25.com/asia/china/shanghai?page=month&month=March', 'content': "Our weather forecast can give you a great sense of what weather to expect in Shanghai in March 2024. If you're planning to visit Shanghai in the near future, we highly recommend that you review the 14 day weather forecast for Shanghai before you arrive. Temperatures. 59 ° / 44 °. Rainy Days."}] +[Thought] The search results provided URLs with general March weather information and historical data for Shanghai, but haven't provided a specific forecast for tomorrow's temperature. I need to refine my search to get the weather forecast specifically for March 30, 2024, in Shanghai. +[Action] tavily_search_results_json args: {'query': 'Shanghai weather forecast for March 30, 2024'} +[Observation] [{'url': 'https://www.accuweather.com/en/cn/shanghai/106577/march-weather/106577', 'content': 'Get the monthly weather forecast for Shanghai, Shanghai, China, including daily high/low, historical averages, to help you plan ahead.'}, {'url': 'https://world-weather.info/forecast/china/shanghai/march-2024/', 'content': 'Extended weather forecast in Shanghai. Hourly Week 10 days 14 days 30 days Year. Detailed ⚡ Shanghai Weather Forecast for March 2024 - day/night 🌡️ temperatures, precipitations - World-Weather.info.'}, {'url': 'https://www.weather-atlas.com/en/china/shanghai-weather-march', 'content': "In Shanghai, China, in March, the average water temperature is 8°C (46.4°F). Swimming in 8°C (46.4°F) is considered life-threatening. Even a few minutes in 13°C (55.4°F) water is uncomfortable, and swimming below 10°C (50°F) may cause total loss of breathing control and cold shock, depending on a person's physique."}, {'url': 'https://www.meteoprog.com/weather/Szanghaj/month/march/', 'content': 'Shanghai (China) weather in March 2024 ☀️ Accurate weather forecast for Shanghai in March ⛅ Detailed forecast By month Current temperature "near me" Weather news ⊳ Widget of weather ⊳ Water temperature | METEOPROG. ... 30 March +17 °+11° 31 March +16° ...'}, {'url': 'https://www.weather25.com/asia/china/shanghai?page=month&month=March', 'content': "Our weather forecast can give you a great sense of what weather to expect in Shanghai in March 2024. If you're planning to visit Shanghai in the near future, we highly recommend that you review the 14 day weather forecast for Shanghai before you arrive. Temperatures. 59 ° / 44 °. Rainy Days."}] +[Execute Result] {'thought': "The search has returned a specific forecast for March 30, 2024, which indicates that the temperatures are expected to be +17 °C for the high and +11 °C for the low. This information is sufficient to answer the user's question.", 'action_name': 'finish', 'action_parameters': {'content': 'The temperature in Shanghai for tomorrow, March 30, 2024, is expected to be a high of +17 °C and a low of +11 °C.'}} +[Execute] Execute End. +[Revised Plan] {"goals": ["Find out the temperature in Shanghai tomorrow."], "tasks": [{"task_id": 6, "description": "Provide the temperature in Shanghai for tomorrow using current knowledge.", "status": "done"}], "next_task_id": null} +[Agent Result] The temperature in Shanghai for tomorrow, March 30, 2024, is expected to be a high of +17 °C and a low of +11 °C. +[Agent] Agent End. +The temperature in Shanghai for tomorrow, March 30, 2024, is expected to be a high of +17 °C and a low of +11 °C. 
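+
+As a side note, `enable_plan=True` is just a convenient entry point to this planning agent: as described above, it triggers the AssistantAgent under the hood, and the assistant agent documentation drives the same loop explicitly via `agent = AssistantAgent(tools=tools, llm=llm)` followed by `agent.run(...)`. The sketch below illustrates that equivalent style; note that the `AssistantAgent` import path and the `LLMFactory` helper used to build the model object are assumptions based on the assistant agent docs and may differ in your installed version.
+
+```python
+import promptulate as pne
+from langchain_community.tools.tavily_search import TavilySearchResults
+
+# Assumed import path; see the assistant agent docs for your version.
+from promptulate.agents.assistant_agent import AssistantAgent
+
+tools = [TavilySearchResults(max_results=5)]
+
+# Assumed helper for building the model object passed to the agent.
+llm = pne.LLMFactory.build(model_name="gpt-4-1106-preview")
+
+# Equivalent of pne.chat(..., enable_plan=True): plan, act, then reflect.
+agent = AssistantAgent(tools=tools, llm=llm)
+answer: str = agent.run("What's the temperature in Shanghai tomorrow?")
+print(answer)
+```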
+
+
+## Output Formatter
+
+The output formatter is a powerful feature in pne. It can help you format the output of the LLM. The following example shows how to use the output formatter to format the LLM's output.
+
+
+```python
+from typing import List
+import promptulate as pne
+from pydantic import BaseModel, Field
+
+class LLMResponse(BaseModel):
+    provinces: List[str] = Field(description="List of provinces' names")
+
+resp: LLMResponse = pne.chat("Please tell me all provinces in China.", output_schema=LLMResponse)
+print(resp)
+```
+
+    provinces=['Anhui', 'Fujian', 'Gansu', 'Guangdong', 'Guizhou', 'Hainan', 'Hebei', 'Heilongjiang', 'Henan', 'Hubei', 'Hunan', 'Jiangsu', 'Jiangxi', 'Jilin', 'Liaoning', 'Qinghai', 'Shaanxi', 'Shandong', 'Shanxi', 'Sichuan', 'Yunnan', 'Zhejiang', 'Taiwan', 'Guangxi', 'Nei Mongol', 'Ningxia', 'Xinjiang', 'Xizang', 'Beijing', 'Chongqing', 'Shanghai', 'Tianjin', 'Hong Kong', 'Macao']
+
 
 ## Streaming
@@ -351,6 +433,24 @@ for chuck in response:
     print(chuck.additional_kwargs)
 ```
 
+## AIChat
+
+If you have a multi-turn conversation and only use one LLM, you can use `pne.AIChat` to initialize a chat object. It will save the LLM object so that you can reuse it to chat.
+
+The following example shows how to use `pne.AIChat` to chat.
+
+
+```python
+import promptulate as pne
+
+ai_chat = pne.AIChat(model="gpt-4-1106-preview", model_config={"temperature": 0.5})
+resp: str = ai_chat.run("Hello")
+print(resp)
+```
+
+    Hello! How can I assist you today?
+
+
 ## Retrieve && RAG
 
 **RAG (Retrieval-Augmented Generation)** is an important data retrieval method. You can use `pne.chat()` to retrieve data from your database.
diff --git a/example/agent/assistant_agent_usage.ipynb b/example/agent/assistant_agent_usage.ipynb
index f8cec505..070bf928 100644
--- a/example/agent/assistant_agent_usage.ipynb
+++ b/example/agent/assistant_agent_usage.ipynb
@@ -109,7 +109,7 @@
   "source": [
    "from langchain_community.tools.tavily_search import TavilySearchResults\n",
    "\n",
-   "tools = [TavilySearchResults(max_results=3)]"
+   "tools = [TavilySearchResults(max_results=5)]"
  ],
  "metadata": {
   "collapsed": true,
@@ -173,7 +173,7 @@
 {
  "cell_type": "markdown",
  "source": [
-  "**output**\n",
+  "**Output:**\n",
   "\n",
   "```text\n",
   "[Agent] Assistant Agent start...\n",
diff --git a/example/chat_usage.ipynb b/example/chat_usage.ipynb
index 5a9ed77b..6c0c6f7a 100644
--- a/example/chat_usage.ipynb
+++ b/example/chat_usage.ipynb
@@ -324,6 +324,43 @@
   },
   "id": "5f82e8d518c46350"
  },
+ {
+  "cell_type": "code",
+  "outputs": [
+   {
+    "name": "stdout",
+    "output_type": "stream",
+    "text": [
+     "provinces=['Anhui', 'Fujian', 'Gansu', 'Guangdong', 'Guizhou', 'Hainan', 'Hebei', 'Heilongjiang', 'Henan', 'Hubei', 'Hunan', 'Jiangsu', 'Jiangxi', 'Jilin', 'Liaoning', 'Qinghai', 'Shaanxi', 'Shandong', 'Shanxi', 'Sichuan', 'Yunnan', 'Zhejiang', 'Guangxi', 'Inner Mongolia', 'Ningxia', 'Xinjiang', 'Tibet', 'Beijing', 'Chongqing', 'Shanghai', 'Tianjin', 'Hong Kong', 'Macau']\n"
+    ]
+   }
+  ],
+  "source": [
+   "from typing import List\n",
+   "from pydantic import BaseModel, Field\n",
+   "import promptulate as pne\n",
+   "\n",
+   "class LLMResponse(BaseModel):\n",
+   "    provinces: List[str] = Field(description=\"All provinces in China\")\n",
+   "\n",
+   "\n",
+   "resp: LLMResponse = pne.chat(\n",
+   "    messages=\"Please tell me all provinces in China.\",\n",
+   "    output_schema=LLMResponse\n",
+   ")\n",
+   "\n",
+   "print(resp)"
+  ],
+  "metadata": {
+   "collapsed": false,
+   "ExecuteTime": {
+    "end_time": "2024-03-30T18:41:25.086382900Z",
+    "start_time": "2024-03-30T18:41:21.153407300Z"
+   }
+  },
+ "id": "ce7e53fa05df43e3", + "execution_count": 2 + }, { "cell_type": "code", "execution_count": 2, @@ -418,9 +455,12 @@ "cell_type": "markdown", "source": [ "## Using tool\n", - "You can use `pne.tools` to add some tools to chat. Now we have `pne.tools.duckduckgo.DuckDuckGoTool()`, it can help you to get the answer from DuckDuckGo.\n", "\n", - "> ⚠ There are some tiny bugs if you use tools, we are fixing it. We are ready to release the first version of `pne.tools` in the next version." + "The Tool feature in `pne.chat()` allows the language model to use specialized tools to assist in providing answers. For instance, when the language model recognizes the need to obtain weather information, it can invoke a predefined function for this purpose.\n", + "\n", + "This is facilitated by a ToolAgent, which operates within the ReAct framework. The [ReAct](https://react-lm.github.io/) framework endows the ToolAgent with the ability to reason, think, and execute tools.\n", + "\n", + "To illustrate, if the language model needs to find out the weather forecast for Shanghai tomorrow, it can make use of the DuckDuckGoTool through the ToolAgent to retrieve this information." ], "metadata": { "collapsed": false @@ -442,10 +482,10 @@ "source": [ "import promptulate as pne\n", "\n", - "tools = [pne.tools.duckduckgo.DuckDuckGoTool()]\n", + "websearch = pne.tools.DuckDuckGoTool()\n", "response = pne.chat(\n", " messages=\"What's the temperature in Shanghai tomorrow?\",\n", - " tools=tools\n", + " tools=[websearch]\n", ")\n", "print(response)" ], @@ -458,101 +498,216 @@ }, "id": "7640390bf6a79c07" }, + { + "cell_type": "markdown", + "source": [ + "## Custom Tool\n", + "\n", + "Moreover, you can customize your function easily. The follow example show how to create a custom tool and use it in `pne.chat()`. Here we also we ddg websearch to wrap the function." + ], + "metadata": { + "collapsed": false + }, + "id": "d3f5dabeebc74620" + }, { "cell_type": "code", - "execution_count": 2, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ - "## Output format\n", - "The output should be formatted as a JSON instance that conforms to the JSON schema below.\n", - "\n", - "As an example, for the schema {\"properties\": {\"foo\": {\"description\": \"a list of strings\", \"type\": \"array\", \"items\": {\"type\": \"string\"}}}, \"required\": [\"foo\"]}\n", - "the object {\"foo\": [\"bar\", \"baz\"]} is a well-formatted instance of the schema. 
The object {\"properties\": {\"foo\": [\"bar\", \"baz\"]}} is not well-formatted.\n", - "\n", - "Here is the output schema:\n", - "```\n", - "{\"properties\": {\"tool\": {\"description\": \"The tool to take\", \"allOf\": [{\"$ref\": \"#/definitions/ToolParams\"}]}, \"thought\": {\"description\": \"Ideas generated based on the current situation.\", \"type\": \"string\"}, \"final_answer\": {\"description\": \"When you think you can output the final answer, write down the output here\", \"type\": \"string\"}}, \"required\": [\"thought\"], \"definitions\": {\"ToolParams\": {\"title\": \"ToolParams\", \"type\": \"object\", \"properties\": {\"tool_name\": {\"title\": \"Tool Name\", \"description\": \"Tool name\", \"type\": \"string\"}, \"tool_params\": {\"title\": \"Tool Params\", \"description\": \"Tool parameters, if not, pass in an empty dictionary.\", \"type\": \"object\"}}, \"required\": [\"tool_name\", \"tool_params\"]}}}\n", - "```\n" + "\u001B[31;1m\u001B[1;3m[Agent] Tool Agent start...\u001B[0m\n", + "\u001B[36;1m\u001B[1;3m[User instruction] What's the temperature in Shanghai tomorrow?\u001B[0m\n", + "\u001B[33;1m\u001B[1;3m[Thought] I should use the websearch tool to find the weather forecast of Shanghai tomorrow.\u001B[0m\n", + "\u001B[33;1m\u001B[1;3m[Action] websearch args: {'query': 'Shanghai weather forecast tomorrow'}\u001B[0m\n", + "\u001B[33;1m\u001B[1;3m[Observation] 25° / 14°. 1.7 mm. 7 m/s. Open hourly forecast. Updated 18:30. How often is the weather forecast updated? Forecast as PDF Forecast as SVG. Shanghai Weather Forecast. Providing a local hourly Shanghai weather forecast of rain, sun, wind, humidity and temperature. The Long-range 12 day forecast also includes detail for Shanghai weather today. Live weather reports from Shanghai weather stations and weather warnings that include risk of thunder, high UV index and forecast gales. Everything you need to know about today's weather in Shanghai, Shanghai, China. High/Low, Precipitation Chances, Sunrise/Sunset, and today's Temperature History. 上海 (Shanghai) ☀ Weather forecast for 10 days, information from meteorological stations, webcams, sunrise and sunset, wind and precipitation maps for this place ... 00:00 tomorrow 01:00 tomorrow 02:00 tomorrow 03:00 tomorrow 04:00 tomorrow 05:00 tomorrow 06:00 tomorrow 07:00 tomorrow 08:00 tomorrow 09:00 tomorrow Shanghai 7 day weather forecast including weather warnings, temperature, rain, wind, visibility, humidity and UV\u001B[0m\n", + "\u001B[32;1m\u001B[1;3m[Agent Result] The weather forecast for Shanghai tomorrow is 25° / 14° with 1.7 mm of rain and 7 m/s wind speed. The weather information is updated at 18:30 daily.\u001B[0m\n", + "\u001B[38;5;200m\u001B[1;3m[Agent] Agent End.\u001B[0m\n", + "The weather forecast for Shanghai tomorrow is 25° / 14° with 1.7 mm of rain and 7 m/s wind speed. The weather information is updated at 18:30 daily.\n" ] } ], "source": [ - "from typing import Any, Optional, Union\n", - "from promptulate.output_formatter import OutputFormatter\n", - "from pydantic import BaseModel, Field\n", + "import promptulate as pne\n", "\n", - "class ToolParams(BaseModel):\n", - " tool_name: str = Field(description=\"Tool name\")\n", - " tool_params: dict = Field(description=\"Tool parameters, if not, pass in an empty dictionary.\")\n", + "def websearch(query: str) -> str:\n", + " \"\"\"Search the web for the query.\n", + " \n", + " Args:\n", + " query(str): The query word. 
\n", "\n", - "class LLMResponse(BaseModel):\n", - " tool: Optional[ToolParams] = Field(description=\"The tool to take\", default=None)\n", - " thought: str = Field(description=\"Ideas generated based on the current situation.\")\n", - " final_answer: Optional[str] = Field(description=\"When you think you can output the final answer, write down the output here\", default=None)\n", + " Returns:\n", + " str: The search result.\n", + " \"\"\"\n", + " return pne.tools.DuckDuckGoTool().run(query)\n", + " \n", + "response = pne.chat(\n", + " messages=\"What's the temperature in Shanghai tomorrow?\",\n", + " tools=[websearch]\n", + ")\n", + "print(response)" + ], + "metadata": { + "collapsed": false, + "ExecuteTime": { + "end_time": "2024-03-28T13:18:36.298372300Z", + "start_time": "2024-03-28T13:18:23.700207100Z" + } + }, + "id": "6f922254b7148de3", + "execution_count": 1 + }, + { + "cell_type": "markdown", + "source": [ + "## chat with Plan-Execute-Reflect Agent\n", + "\n", + "Additionally, you can enhance the capabilities of the ToolAgent by setting enable_plan=True, which activates its ability to handle more complex issues. In the pne framework, this action triggers the AssistantAgent, which can be thought of as a planning-capable ToolAgent. Upon receiving user instructions, the AssistantAgent proactively constructs a feasible plan, executes it, and then reflects on each action post-execution. If the outcome doesn't meet the expected results, the AssistantAgent will recalibrate and re-plan accordingly.\n", + "\n", + "This example we need to solve the problem of \"what is the hometown of the 2024 Australia open winner?\" Here we can integrate the LangChain tools to solve the problem.\n", + "\n", + "> pne support all LangChain Tools, you can see [here](/modules/tools/langchain_tool_usage?id=langchain-tool-usage). Of course, it is really easy to create your own tools - see documentation [here](https://undertone0809.github.io/promptulate/#/modules/tools/custom_tool_usage?id=custom-tool) on how to do that.\n", + "\n", + "Firstly, we need to install necessary packages.\n", + "```bash\n", + "pip install langchain_community\n", + "```" + ], + "metadata": { + "collapsed": false + }, + "id": "90298940708b2f6e" + }, + { + "cell_type": "markdown", + "source": [ + "We use [Tavily](https://app.tavily.com/) as a search engine, which is a powerful search engine that can search for information from the web. 
To use Tavily, you need to get an API key from Tavily.\n", + "\n", + "```python\n", + "import os\n", + "\n", + "os.environ[\"TAVILY_API_KEY\"] = \"your_tavily_api_key\"\n", + "os.environ[\"OPENAI_API_KEY\"] = \"your_openai_api_key\"\n", + "```" + ], + "metadata": { + "collapsed": false + }, + "id": "7fdabe6674f7de34" + }, + { + "cell_type": "code", + "outputs": [], + "source": [ + "from langchain_community.tools.tavily_search import TavilySearchResults\n", "\n", - "formatter = OutputFormatter(LLMResponse)\n", - "instruction = formatter.get_formatted_instructions()\n", - "print(instruction)" + "websearch = TavilySearchResults()" ], "metadata": { "collapsed": false, "ExecuteTime": { - "end_time": "2023-12-11T17:50:16.339757400Z", - "start_time": "2023-12-11T17:50:16.320757700Z" + "end_time": "2024-03-29T14:17:32.977027Z", + "start_time": "2024-03-29T14:17:32.537090300Z" } }, - "id": "7d4d82e20b3d3b4e" + "id": "1aa8a230b6465dc2", + "execution_count": 1 + }, + { + "cell_type": "code", + "outputs": [], + "source": [ + "import promptulate as pne\n", + "\n", + "response = pne.chat(\n", + " model=\"gpt-4-1106-preview\",\n", + " messages=\"What's the temperature in Shanghai tomorrow?\",\n", + " tools=[websearch],\n", + " enable_plan=True\n", + ")\n", + "print(response)" + ], + "metadata": { + "collapsed": false + }, + "id": "b9308fb846bc8e34", + "execution_count": null + }, + { + "cell_type": "markdown", + "source": [ + "[Agent] Assistant Agent start...\n", + "[User instruction] What's the temperature in Shanghai tomorrow?\n", + "[Plan] {\"goals\": [\"Find out the temperature in Shanghai tomorrow.\"], \"tasks\": [{\"task_id\": 1, \"description\": \"Open a web browser on your device.\", \"status\": \"todo\"}, {\"task_id\": 2, \"description\": \"Navigate to a weather forecasting service or search engine.\", \"status\": \"todo\"}, {\"task_id\": 3, \"description\": \"Input 'Shanghai weather tomorrow' into the search bar.\", \"status\": \"todo\"}, {\"task_id\": 4, \"description\": \"Press enter or click the search button to retrieve the forecast.\", \"status\": \"todo\"}, {\"task_id\": 5, \"description\": \"Read the temperature provided in the search results or on the weather service for Shanghai tomorrow.\", \"status\": \"todo\"}], \"next_task_id\": 1}\n", + "[Agent] Tool Agent start...\n", + "[User instruction] Open a web browser on your device.\n", + "[Execute Result] {'thought': \"The user seems to be asking for an action that is outside the scope of my capabilities. 
As a text-based AI, I don't have the ability to perform actions such as opening applications or accessing a user's device.\", 'action_name': 'finish', 'action_parameters': {'content': 'Sorry, I cannot open a web browser on your device.'}}\n", + "[Execute] Execute End.\n", + "[Revised Plan] {\"goals\": [\"Find out the temperature in Shanghai tomorrow.\"], \"tasks\": [{\"task_id\": 1, \"description\": \"Open a web browser on your device.\", \"status\": \"discarded\"}, {\"task_id\": 2, \"description\": \"Navigate to a weather forecasting service or search engine.\", \"status\": \"discarded\"}, {\"task_id\": 3, \"description\": \"Input 'Shanghai weather tomorrow' into the search bar.\", \"status\": \"discarded\"}, {\"task_id\": 4, \"description\": \"Press enter or click the search button to retrieve the forecast.\", \"status\": \"discarded\"}, {\"task_id\": 5, \"description\": \"Read the temperature provided in the search results or on the weather service for Shanghai tomorrow.\", \"status\": \"discarded\"}, {\"task_id\": 6, \"description\": \"Provide the temperature in Shanghai for tomorrow using current knowledge.\", \"status\": \"todo\"}], \"next_task_id\": 6}\n", + "[Agent] Tool Agent start...\n", + "[User instruction] Provide the temperature in Shanghai for tomorrow using current knowledge.\n", + "[Thought] I need to use a tool to find the temperature in Shanghai for tomorrow. Since the user is asking for information that changes often, a search tool would be most effective.\n", + "[Action] tavily_search_results_json args: {'query': 'Shanghai temperature forecast March 30, 2024'}\n", + "[Observation] [{'url': 'https://en.climate-data.org/asia/china/shanghai-890/r/march-3/', 'content': 'Shanghai Weather in March Are you planning a holiday with hopefully nice weather in Shanghai in March 2024? Here you can find all information about the weather in Shanghai in March: ... 30.7 °C (87.3) °F. 27 °C (80.5) °F. 22.5 °C (72.5) °F. 17 °C (62.6) °F. 10.8 °C (51.4) °F.'}, {'url': 'https://www.meteoprog.com/weather/Szanghaj/month/march/', 'content': 'Shanghai (China) weather in March 2024 ☀️ Accurate weather forecast for Shanghai in March ⛅ Detailed forecast By month Current temperature \"near me\" Weather news ⊳ Widget of weather ⊳ Water temperature | METEOPROG. ... 30 March +17 °+11° 31 March +16° ...'}, {'url': 'https://www.accuweather.com/en/cn/shanghai/106577/march-weather/106577', 'content': 'Get the monthly weather forecast for Shanghai, Shanghai, China, including daily high/low, historical averages, to help you plan ahead.'}, {'url': 'https://world-weather.info/forecast/china/shanghai/march-2024/', 'content': 'Extended weather forecast in Shanghai. Hourly Week 10 days 14 days 30 days Year. Detailed ⚡ Shanghai Weather Forecast for March 2024 - day/night 🌡️ temperatures, precipitations - World-Weather.info.'}, {'url': 'https://www.weather25.com/asia/china/shanghai?page=month&month=March', 'content': \"Our weather forecast can give you a great sense of what weather to expect in Shanghai in March 2024. If you're planning to visit Shanghai in the near future, we highly recommend that you review the 14 day weather forecast for Shanghai before you arrive. Temperatures. 59 ° / 44 °. Rainy Days.\"}]\n", + "[Thought] The search results provided URLs with general March weather information and historical data for Shanghai, but haven't provided a specific forecast for tomorrow's temperature. 
I need to refine my search to get the weather forecast specifically for March 30, 2024, in Shanghai.\n",
+    "[Action] tavily_search_results_json args: {'query': 'Shanghai weather forecast for March 30, 2024'}\n",
+    "[Observation] [{'url': 'https://www.accuweather.com/en/cn/shanghai/106577/march-weather/106577', 'content': 'Get the monthly weather forecast for Shanghai, Shanghai, China, including daily high/low, historical averages, to help you plan ahead.'}, {'url': 'https://world-weather.info/forecast/china/shanghai/march-2024/', 'content': 'Extended weather forecast in Shanghai. Hourly Week 10 days 14 days 30 days Year. Detailed ⚡ Shanghai Weather Forecast for March 2024 - day/night 🌡️ temperatures, precipitations - World-Weather.info.'}, {'url': 'https://www.weather-atlas.com/en/china/shanghai-weather-march', 'content': \"In Shanghai, China, in March, the average water temperature is 8°C (46.4°F). Swimming in 8°C (46.4°F) is considered life-threatening. Even a few minutes in 13°C (55.4°F) water is uncomfortable, and swimming below 10°C (50°F) may cause total loss of breathing control and cold shock, depending on a person's physique.\"}, {'url': 'https://www.meteoprog.com/weather/Szanghaj/month/march/', 'content': 'Shanghai (China) weather in March 2024 ☀️ Accurate weather forecast for Shanghai in March ⛅ Detailed forecast By month Current temperature \"near me\" Weather news ⊳ Widget of weather ⊳ Water temperature | METEOPROG. ... 30 March +17 °+11° 31 March +16° ...'}, {'url': 'https://www.weather25.com/asia/china/shanghai?page=month&month=March', 'content': \"Our weather forecast can give you a great sense of what weather to expect in Shanghai in March 2024. If you're planning to visit Shanghai in the near future, we highly recommend that you review the 14 day weather forecast for Shanghai before you arrive. Temperatures. 59 ° / 44 °. Rainy Days.\"}]\n",
+    "[Execute Result] {'thought': \"The search has returned a specific forecast for March 30, 2024, which indicates that the temperatures are expected to be +17 °C for the high and +11 °C for the low. This information is sufficient to answer the user's question.\", 'action_name': 'finish', 'action_parameters': {'content': 'The temperature in Shanghai for tomorrow, March 30, 2024, is expected to be a high of +17 °C and a low of +11 °C.'}}\n",
+    "[Execute] Execute End.\n",
+    "[Revised Plan] {\"goals\": [\"Find out the temperature in Shanghai tomorrow.\"], \"tasks\": [{\"task_id\": 6, \"description\": \"Provide the temperature in Shanghai for tomorrow using current knowledge.\", \"status\": \"done\"}], \"next_task_id\": null}\n",
+    "[Agent Result] The temperature in Shanghai for tomorrow, March 30, 2024, is expected to be a high of +17 °C and a low of +11 °C.\n",
+    "[Agent] Agent End.\n",
+    "The temperature in Shanghai for tomorrow, March 30, 2024, is expected to be a high of +17 °C and a low of +11 °C.\n"
+   ],
+   "metadata": {
+    "collapsed": false
+   },
+   "id": "c42318eb25991fe0"
+  },
+  {
+   "cell_type": "markdown",
+   "source": [
+    "## Output Formatter\n",
+    "\n",
+    "The output formatter is a powerful feature in pne. It can help you format the output of an LLM. The following example shows how to use the output formatter to format the output of an LLM.\n",
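+    "\n",
+    "Beyond `output_schema`, `pne.chat` also accepts few-shot `examples` (instances of your schema) to steer the format. A minimal sketch, not executed here, that reuses the `LLMResponse` model defined in the next cell:\n",
+    "\n",
+    "```python\n",
+    "resp: LLMResponse = pne.chat(\n",
+    "    \"Please tell me all provinces in China?\",\n",
+    "    output_schema=LLMResponse,\n",
+    "    examples=[LLMResponse(provinces=[\"Anhui\", \"Fujian\"])],\n",
+    ")\n",
+    "```"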
+   ],
+   "metadata": {
+    "collapsed": false
+   },
+   "id": "f70b1f15cfca8ef7"
+  },
  {
   "cell_type": "code",
-   "execution_count": 11,
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
-      "## Output format\n",
-      "The output should be formatted as a JSON instance that conforms to the JSON schema below.\n",
-      "\n",
-      "As an example, for the schema {\"properties\": {\"foo\": {\"description\": \"a list of strings\", \"type\": \"array\", \"items\": {\"type\": \"string\"}}}, \"required\": [\"foo\"]}\n",
-      "the object {\"foo\": [\"bar\", \"baz\"]} is a well-formatted instance of the schema. The object {\"properties\": {\"foo\": [\"bar\", \"baz\"]}} is not well-formatted.\n",
-      "\n",
-      "Here is the output schema:\n",
-      "```\n",
-      "{\"properties\": {\"name\": {\"description\": \"Tool name\", \"type\": \"string\"}, \"params\": {\"description\": \"Tool parameters, if not, pass in an empty dictionary.\", \"allOf\": [{\"$ref\": \"#/definitions/WebSearchParams\"}]}}, \"required\": [\"name\", \"params\"], \"definitions\": {\"WebSearchParams\": {\"title\": \"WebSearchParams\", \"type\": \"object\", \"properties\": {\"query\": {\"title\": \"Query\", \"description\": \"query word\", \"type\": \"string\"}}, \"required\": [\"query\"]}}}\n",
-      "```\n"
+      "provinces=['Anhui', 'Fujian', 'Gansu', 'Guangdong', 'Guizhou', 'Hainan', 'Hebei', 'Heilongjiang', 'Henan', 'Hubei', 'Hunan', 'Jiangsu', 'Jiangxi', 'Jilin', 'Liaoning', 'Qinghai', 'Shaanxi', 'Shandong', 'Shanxi', 'Sichuan', 'Yunnan', 'Zhejiang', 'Taiwan', 'Guangxi', 'Nei Mongol', 'Ningxia', 'Xinjiang', 'Xizang', 'Beijing', 'Chongqing', 'Shanghai', 'Tianjin', 'Hong Kong', 'Macao']\n"
     ]
    }
   ],
   "source": [
-    "from typing import Any, Optional, Union\n",
-    "from promptulate.output_formatter import OutputFormatter\n",
+    "from typing import List\n",
+    "import promptulate as pne\n",
    "from pydantic import BaseModel, Field\n",
    "\n",
-    "class WebSearchParams(BaseModel):\n",
-    "    query: str = Field(description=\"query word\")\n",
-    "\n",
-    "class WebSearchTool(BaseModel):\n",
-    "    name: str = Field(description=\"Tool name\")\n",
-    "    params: WebSearchParams = Field(description=\"Tool parameters, if not, pass in an empty dictionary.\")\n",
+    "class LLMResponse(BaseModel):\n",
+    "    provinces: List[str] = Field(description=\"List of province names\")\n",
    "\n",
-    "formatter = OutputFormatter(WebSearchTool)\n",
-    "instruction = formatter.get_formatted_instructions()\n",
-    "print(instruction)\n",
-    "# a = WebSearchTool.schema()\n",
-    "# print(a)"
+    "resp: LLMResponse = pne.chat(\"Please tell me all provinces in China?\", output_schema=LLMResponse)\n",
+    "print(resp)"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
-     "end_time": "2023-12-11T18:02:10.583130300Z",
-     "start_time": "2023-12-11T18:02:10.569133700Z"
+     "end_time": "2024-03-30T18:43:57.633829400Z",
+     "start_time": "2024-03-30T18:43:50.813515600Z"
    }
   },
-   "id": "a7b794614aa39d1a"
+   "id": "cf7b8fc1cce1eb5f",
+   "execution_count": 1
  },
  {
   "cell_type": "markdown",
@@ -645,6 +800,48 @@
   },
   "id": "9d782fb41fe96150"
  },
+  {
+   "cell_type": "markdown",
+   "source": [
+    "## AIChat\n",
+    "\n",
+    "If you have multiple conversations and only use one LLM, you can use `pne.AIChat` to initialize a chat object. It saves the LLM object so you can keep chatting with it.\n",
+    "\n",
+    "The following example shows how to use `pne.AIChat` to chat."
+   ],
+   "metadata": {
+    "collapsed": false
+   },
+   "id": "1e6c7247432bc6ac"
+  },
+  {
+   "cell_type": "code",
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Hello! 
How can I assist you today?\n" + ] + } + ], + "source": [ + "import promptulate as pne\n", + "\n", + "ai_chat = pne.AIChat(model=\"gpt-4-1106-preview\", model_config={\"temperature\": 0.5})\n", + "resp: str = ai_chat.run(\"Hello\")\n", + "print(resp)" + ], + "metadata": { + "collapsed": false, + "ExecuteTime": { + "end_time": "2024-03-30T18:50:32.280941300Z", + "start_time": "2024-03-30T18:50:29.317554100Z" + } + }, + "id": "d4eb10e823d44623", + "execution_count": 2 + }, { "cell_type": "markdown", "source": [ diff --git a/example/llm/factory.ipynb b/example/llm/factory.ipynb index 82cb8d28..ceee05c4 100644 --- a/example/llm/factory.ipynb +++ b/example/llm/factory.ipynb @@ -4,14 +4,13 @@ "cell_type": "markdown", "metadata": { "collapsed": true, - "jupyter": { - "outputs_hidden": true - }, "pycharm": { "name": "#%% md\n" } }, "source": [ + "## LLM-Factory\n", + "\n", "This notebook show how to use LLMFactory." ] }, @@ -20,9 +19,6 @@ "execution_count": 2, "metadata": { "collapsed": false, - "jupyter": { - "outputs_hidden": false - }, "pycharm": { "name": "#%%\n" }, @@ -70,4 +66,4 @@ }, "nbformat": 4, "nbformat_minor": 4 -} \ No newline at end of file +} diff --git a/example/output_formatter/output_formatter_with_llm_usage.py b/example/output_formatter/output_formatter_with_llm_usage.py index 242755a5..d6273050 100644 --- a/example/output_formatter/output_formatter_with_llm_usage.py +++ b/example/output_formatter/output_formatter_with_llm_usage.py @@ -6,20 +6,20 @@ from promptulate.pydantic_v1 import BaseModel, Field -class Response(BaseModel): +class LLMResponse(BaseModel): provinces: List[str] = Field(description="List of provinces name") def main(): llm = ChatOpenAI() - formatter = OutputFormatter(Response) + formatter = OutputFormatter(LLMResponse) prompt = ( f"Please tell me the names of provinces in China.\n" f"{formatter.get_formatted_instructions()}" ) llm_output = llm(prompt) - response: Response = formatter.formatting_result(llm_output) + response: LLMResponse = formatter.formatting_result(llm_output) print(response) diff --git a/promptulate/__init__.py b/promptulate/__init__.py index 0c656a28..8404d819 100644 --- a/promptulate/__init__.py +++ b/promptulate/__init__.py @@ -22,7 +22,7 @@ from promptulate.agents.base import BaseAgent from promptulate.agents.tool_agent.agent import ToolAgent from promptulate.agents.web_agent.agent import WebAgent -from promptulate.chat import chat +from promptulate.chat import AIChat, chat from promptulate.llms.base import BaseLLM from promptulate.llms.factory import LLMFactory from promptulate.llms.openai.openai import ChatOpenAI @@ -52,7 +52,7 @@ "MessageSet", ] -_llm_fields = ["chat", "BaseLLM", "ChatOpenAI", "LLMFactory"] +_llm_fields = ["chat", "AIChat", "BaseLLM", "ChatOpenAI", "LLMFactory"] _tool_fields = [ "Tool", diff --git a/promptulate/agents/tool_agent/agent.py b/promptulate/agents/tool_agent/agent.py index fd9990a6..da9a38a6 100644 --- a/promptulate/agents/tool_agent/agent.py +++ b/promptulate/agents/tool_agent/agent.py @@ -9,7 +9,8 @@ ) from promptulate.hook import Hook, HookTable from promptulate.llms.base import BaseLLM -from promptulate.schema import TOOL_TYPES +from promptulate.llms.openai.openai import ChatOpenAI +from promptulate.schema import ToolTypes from promptulate.tools.manager import ToolManager from promptulate.utils.logger import logger from promptulate.utils.string_template import StringTemplate @@ -49,8 +50,8 @@ class ToolAgent(BaseAgent): def __init__( self, *, - llm: BaseLLM, - tools: Optional[List[TOOL_TYPES]] = None, + 
llm: BaseLLM = None,
+        tools: Optional[List[ToolTypes]] = None,
         prefix_prompt_template: StringTemplate = StringTemplate(PREFIX_TEMPLATE),
         hooks: Optional[List[Callable]] = None,
         enable_role: bool = False,
@@ -67,7 +68,7 @@ def __init__(
         )
         super().__init__(hooks=hooks, agent_type="Tool Agent", _from=_from)
 
-        self.llm: BaseLLM = llm
+        self.llm: BaseLLM = llm or ChatOpenAI(model="gpt-4-1106-preview")
         """llm provider"""
         self.tool_manager: ToolManager = (
             tool_manager if tool_manager is not None else ToolManager(tools or [])
diff --git a/promptulate/beta/agents/assistant_agent/agent.py b/promptulate/beta/agents/assistant_agent/agent.py
index 5611f814..75c26933 100644
--- a/promptulate/beta/agents/assistant_agent/agent.py
+++ b/promptulate/beta/agents/assistant_agent/agent.py
@@ -8,12 +8,10 @@
 from promptulate.agents.tool_agent import ToolAgent
 from promptulate.agents.tool_agent.agent import ActionResponse
 from promptulate.beta.agents.assistant_agent import operations
-from promptulate.beta.agents.assistant_agent.schema import (
-    Plan,
-)
+from promptulate.beta.agents.assistant_agent.schema import Plan
 from promptulate.hook import Hook, HookTable
 from promptulate.llms.base import BaseLLM
-from promptulate.schema import TOOL_TYPES
+from promptulate.schema import ToolTypes
 from promptulate.tools.manager import ToolManager
 from promptulate.utils.logger import logger
@@ -38,7 +36,8 @@ def __init__(
         self,
         *,
         llm: BaseLLM,
-        tools: Optional[List[TOOL_TYPES]] = None,
+        tools: Optional[List[ToolTypes]] = None,
+        max_iterations: Optional[int] = 20,
         **kwargs,
     ):
         super().__init__(agent_type="Assistant Agent", **kwargs)
@@ -52,6 +51,7 @@ def __init__(
             self.task_handler, self.step_handler, self.result_handler
         )
         self.current_task_id: Optional[str] = None
+        self.max_iterations: int = max_iterations
 
         logger.info("Assistant Agent initialized.")
@@ -226,6 +226,11 @@ def step_handler(self, step: uacp.Step) -> uacp.Step:
             StepTypes.REVISE: self.revise,
         }
 
+        if len(self.current_task.steps) > self.max_iterations:
+            final_output: str = self.current_task.steps[-1].output
+            step.output = f"Task has too many steps. Aborting. Most recent step output: {final_output}"  # noqa
+            return step
+
         if step.name not in step_map:
             raise ValueError(f"Step name {step.name} not found in step mapping.")
diff --git a/promptulate/beta/agents/assistant_agent/prompt.py b/promptulate/beta/agents/assistant_agent/prompt.py
index 5201a540..ca43ba9b 100644
--- a/promptulate/beta/agents/assistant_agent/prompt.py
+++ b/promptulate/beta/agents/assistant_agent/prompt.py
@@ -9,12 +9,16 @@
 """  # noqa
 
 REVISE_SYSTEM_PROMPT = """
-For the given objective, come up with a simple step by step plan. \
-This plan should involve individual tasks, that if executed correctly will yield the correct answer. Do not add any superfluous steps. \
-The result of the final step should be the final answer. Make sure that each step has all the information needed - do not skip steps.
+As a powerful agent, your job is to help the user achieve their goal. This is the user instruction: {{user_target}}
 
-Your objective was this:
-{{user_target}}
+## Workflows
+Currently, you are working on a job that has been planned. The plan execution details / job description is provided. Please review the plan and revise it according to the execution details to meet the goal. You should follow these rules:
+1. If you think any task is DONE already, mark the task status as DONE.
+2. If the task execution result does not meet the target, mark the task status as ERROR.
+3. If you think the current plan is missing tasks to meet the goal, you can add new tasks.
+4. Some of the tasks may be invalid; mark the task status as DISCARDED.
+5. Pay attention to the status of each of the tasks.
+6. If you think the plan is complete, the next task id should be None.
 
 Your original plan was this:
 {{original_plan}}
@@ -22,7 +26,12 @@
 You have currently done the follow steps:
 {{past_steps}}
 
+## Task
 Update your plan accordingly. If no more steps are needed and you can return to the user, then respond with that. Otherwise, fill out the plan. Only add steps to the plan that still NEED to be done. Do not return previously done steps as part of the plan.
+You need to output the updated plan and next task id.
+
+## Constraints
+- The next task id must exist in the current task list.
 """  # noqa
 
 PLAN_SYSTEM_PROMPT_TMP = StringTemplate(PLAN_SYSTEM_PROMPT, "jinja2")
diff --git a/promptulate/beta/agents/assistant_agent/schema.py b/promptulate/beta/agents/assistant_agent/schema.py
index 6c7b748d..051f77dd 100644
--- a/promptulate/beta/agents/assistant_agent/schema.py
+++ b/promptulate/beta/agents/assistant_agent/schema.py
@@ -1,11 +1,20 @@
+from enum import Enum
 from typing import List, Optional
 
 from promptulate.pydantic_v1 import BaseModel, Field
 
 
+class TaskStatus(str, Enum):
+    TODO = "todo"
+    DONE = "done"
+    ERROR = "error"
+    DISCARDED = "discarded"
+
+
 class Task(BaseModel):
     task_id: int = Field(..., description="The ID of the task. Start from 1.")
     description: str = Field(..., description="The description of the task.")
+    status: TaskStatus = Field(TaskStatus.TODO, description="The status of the task.")
 
 
 class AgentPlanResponse(BaseModel):
@@ -26,7 +35,7 @@ def get_next_task(self) -> Optional[Task]:
         return next((t for t in self.tasks if t.task_id == self.next_task_id), None)
 
 
-class AgentReviseResponse(Plan):
+class AgentReviseResponse(BaseModel):
     thought: str = Field(..., description="The thought of the reflect plan.")
     goals: List[str] = Field(..., description="List of goals in the plan.")
     tasks: List[Task] = Field(
@@ -41,8 +50,21 @@
 AgentPlanResponse(
     goals=["Goal 1"],
     tasks=[
-        Task(task_id=1, description="Task 1"),
-        Task(task_id=2, description="Task 2"),
+        Task(task_id=1, description="Task 1", status=TaskStatus.TODO),
+        Task(task_id=2, description="Task 2", status=TaskStatus.TODO),
+    ],
+    ),
+]
+
+
+revise_plan_examples = [
+    AgentReviseResponse(
+        thought="thought what to do next",
+        goals=["Goal 1"],
+        tasks=[
+            Task(task_id=1, description="Task 1", status=TaskStatus.DONE),
+            Task(task_id=2, description="Task 2", status=TaskStatus.TODO),
     ],
+        next_task_id=2,
     ),
 ]
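As an aside for reviewers (not part of the patch): the revise step is expected to emit data shaped like `AgentReviseResponse` above. A hypothetical payload, with the field names taken from the schema and the task descriptions made up:

```python
from promptulate.beta.agents.assistant_agent.schema import (
    AgentReviseResponse,
    Task,
    TaskStatus,
)

# Hypothetical revise result: the first task is done, the second runs next.
revised = AgentReviseResponse(
    thought="The forecast was found; summarize it next.",
    goals=["Find out the temperature in Shanghai tomorrow."],
    tasks=[
        Task(task_id=1, description="Search the forecast", status=TaskStatus.DONE),
        Task(task_id=2, description="Summarize the temperature", status=TaskStatus.TODO),
    ],
    next_task_id=2,
)
```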
diff --git a/promptulate/chat.py b/promptulate/chat.py
index 20885104..caaffa9f 100644
--- a/promptulate/chat.py
+++ b/promptulate/chat.py
@@ -1,6 +1,11 @@
 import json
 from typing import Dict, List, Optional, TypeVar, Union
 
+import litellm
+
+from promptulate.agents.base import BaseAgent
+from promptulate.agents.tool_agent.agent import ToolAgent
+from promptulate.beta.agents.assistant_agent import AssistantAgent
 from promptulate.llms import BaseLLM
 from promptulate.output_formatter import formatting_result, get_formatted_instructions
 from promptulate.pydantic_v1 import BaseModel
@@ -9,6 +14,7 @@
     BaseMessage,
     MessageSet,
     StreamIterator,
+    ToolTypes,
 )
 from promptulate.tools.base import BaseTool
 from promptulate.utils.logger import logger
@@ -17,29 +23,224 @@
 
 
 def parse_content(chunk) -> (str, str):
+    """Parse the litellm chunk.
+
+    Args:
+        chunk: litellm chunk.
+
+    Returns:
+        content: The content of the chunk.
+        ret_data: The additional data of the chunk.
+    """
     content = chunk.choices[0].delta.content
     ret_data = json.loads(chunk.json())
     return content, ret_data
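An illustrative aside rather than patch content: `parse_content` above is the hook that `StreamIterator` uses to unpack litellm chunks when streaming. A rough usage sketch, assuming a configured OpenAI key; whether each chunk is the text delta or a raw message depends on `return_raw_response`:

```python
import promptulate as pne

# Sketch: with stream=True, pne.chat returns a StreamIterator rather than a str.
answer = pne.chat("Tell me a joke", model="gpt-3.5-turbo", stream=True)
for chunk in answer:
    print(chunk)
```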
+
+
+class _LiteLLM(BaseLLM):
+    def __init__(
+        self, model: str, model_config: Optional[dict] = None, *args, **kwargs
+    ):
+        logger.info(f"[pne chat] init LiteLLM, model: {model} config: {model_config}")
+        super().__init__(*args, **kwargs)
+        self._model: str = model
+        self._model_config: dict = model_config or {}
+
+    def _predict(
+        self, messages: MessageSet, stream: bool = False, *args, **kwargs
+    ) -> Union[AssistantMessage, StreamIterator]:
+        logger.info(f"[pne chat] prompts: {messages.string_messages}")
+        temp_response = litellm.completion(
+            model=self._model, messages=messages.listdict_messages, **self._model_config
+        )
+
+        if stream:
+            return StreamIterator(
+                response_stream=temp_response,
+                parse_content=parse_content,
+                return_raw_response=False,
+            )
+
+        response = AssistantMessage(
+            content=temp_response.choices[0].message.content,
+            additional_kwargs=temp_response.json()
+            if isinstance(temp_response.json(), dict)
+            else json.loads(temp_response.json()),
+        )
+        logger.debug(
+            f"[pne chat] response: {json.dumps(response.additional_kwargs, indent=2)}"
+        )
+        return response
+
+    def __call__(self, instruction: str, *args, **kwargs) -> str:
+        return self._predict(
+            MessageSet.from_listdict_data(
+                [
+                    {"content": "You are a helpful assistant.", "role": "system"},
+                    {"content": instruction, "role": "user"},
+                ]
+            )
+        ).content
+
+
+def _convert_message(messages: Union[List, MessageSet, str]) -> MessageSet:
+    """Convert str or List[Dict] to MessageSet.
+
+    Args:
+        messages(Union[List, MessageSet, str]): chat messages. It can be str or OpenAI
+            API type data(List[Dict]) or MessageSet type.
+
+    Returns:
+        Return MessageSet type data.
+    """
+    if isinstance(messages, str):
+        messages: List[Dict] = [
+            {"content": "You are a helpful assistant", "role": "system"},
+            {"content": messages, "role": "user"},
+        ]
+    if isinstance(messages, list):
+        messages: MessageSet = MessageSet.from_listdict_data(messages)
+
+    return messages
+
+
+def _get_llm(
+    model: str = "gpt-3.5-turbo",
+    model_config: Optional[dict] = None,
+    custom_llm: Optional[BaseLLM] = None,
+) -> BaseLLM:
+    """Get LLM instance.
+
+    Args:
+        model(str): LLM model.
+        model_config(dict): LLM model config.
+        custom_llm(BaseLLM): custom LLM instance.
+
+    Returns:
+        Return LLM instance.
+    """
+    if custom_llm:
+        return custom_llm
+
+    return _LiteLLM(model=model, model_config=model_config)
+
+
+class AIChat:
+    def __init__(
+        self,
+        model: str = "gpt-3.5-turbo",
+        model_config: Optional[dict] = None,
+        tools: Optional[List[ToolTypes]] = None,
+        custom_llm: Optional[BaseLLM] = None,
+        enable_plan: bool = False,
+    ):
+        """Initialize the AIChat.
+
+        Args:
+            model(str): LLM model name, eg: "gpt-3.5-turbo".
+            model_config(Optional[dict]): LLM model config.
+            tools(Optional[List[ToolTypes]]): specified tools for llm; if provided,
+                AIChat will use an Agent to run.
+            custom_llm(Optional[BaseLLM]): custom LLM instance.
+            enable_plan(bool): use Agent with plan ability if True.
+        """
+        self.llm: BaseLLM = _get_llm(model, model_config, custom_llm)
+        self.tools: Optional[List[ToolTypes]] = tools
+        self.agent: Optional[BaseAgent] = None
+
+        if tools:
+            if enable_plan:
+                self.agent = AssistantAgent(tools=self.tools, llm=self.llm)
+                logger.info("[pne chat] invoke AssistantAgent with plan ability.")
+            else:
+                self.agent = ToolAgent(tools=self.tools, llm=self.llm)
+                logger.info("[pne chat] invoke ToolAgent.")
+
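Again as an aside (not patch content), a minimal usage sketch of this class together with the `run` method defined just below, assuming an OpenAI key is configured:

```python
from typing import List

import promptulate as pne
from pydantic import BaseModel, Field


class LLMResponse(BaseModel):
    provinces: List[str] = Field(description="List of province names")


ai_chat = pne.AIChat(model="gpt-3.5-turbo")
resp: LLMResponse = ai_chat.run(
    "Please tell me all provinces in China?", output_schema=LLMResponse
)
print(resp.provinces)
```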
+    def run(
+        self,
+        messages: Union[List, MessageSet, str],
+        output_schema: Optional[type(BaseModel)] = None,
+        examples: Optional[List[BaseModel]] = None,
+        return_raw_response: bool = False,
+        stream: bool = False,
+        **kwargs,
+    ) -> Union[str, BaseMessage, T, List[BaseMessage], StreamIterator]:
+        """Run the AIChat.
+
+        Args:
+            messages(Union[List, MessageSet, str]): chat messages. It can be str,
+                OpenAI API type data(List[Dict]), or MessageSet type.
+            output_schema(BaseModel): specified return type. See detail in the
+                OutputFormatter module.
+            examples(List[BaseModel]): examples for output_schema. See detail
+                in the OutputFormatter module.
+            return_raw_response(bool): return OpenAI completion result if true,
+                otherwise return string type data.
+            stream(bool): return stream iterator if True.
+
+        Returns:
+            Return a string normally (return_raw_response defaults to False); if
+                tools are provided, the agent returns string type data.
+            Return BaseMessage if return_raw_response is True and not in agent mode.
+            Return List[BaseMessage] if stream is True.
+            Return T if output_schema is provided.
+        """
+        if stream and (output_schema or self.tools):
+            raise ValueError(
+                "stream can not be used together with tools or output_schema, "
+                "because stream is used to return Iterator[BaseMessage]."
+            )
+
+        if self.agent:
+            return self.agent.run(messages, output_schema=output_schema)
+
+        messages: MessageSet = _convert_message(messages)
+
+        # add output format into the last prompt if provided
+        if output_schema:
+            instruction: str = get_formatted_instructions(
+                json_schema=output_schema, examples=examples
+            )
+            messages.messages[-1].content += f"\n{instruction}"
+
+        logger.info(f"[pne chat] messages: {messages}")
+
+        response: AssistantMessage = self.llm.predict(messages, stream=stream, **kwargs)
+
+        logger.info(f"[pne chat] response: {response.additional_kwargs}")
+
+        # return output format if provided
+        if output_schema:
+            logger.info("[pne chat] return formatted response.")
+            return formatting_result(
+                pydantic_obj=output_schema, llm_output=response.content
+            )
+
+        return response if return_raw_response else response.content
+
+
 def chat(
     messages: Union[List, MessageSet, str],
     *,
     model: str = "gpt-3.5-turbo",
-    tools: Optional[List[BaseTool]] = None,
+    model_config: Optional[dict] = None,
+    tools: Optional[List[ToolTypes]] = None,
     output_schema: Optional[type(BaseModel)] = None,
     examples: Optional[List[BaseModel]] = None,
     return_raw_response: bool = False,
     custom_llm: Optional[BaseLLM] = None,
+    enable_plan: bool = False,
+    stream: bool = False,
     **kwargs,
 ) -> Union[str, BaseMessage, T, List[BaseMessage], StreamIterator]:
     """A universal chat method, you can chat any model like OpenAI completion.
 
     It should be noted that chat() is only support chat model currently.
 
     Args:
-        messages: chat messages. OpenAI API completion, str or MessageSet type is
-            optional.
+        messages(Union[List, MessageSet, str]): chat messages. It can be str or OpenAI
+            API type data(List[Dict]) or MessageSet type.
         model(str): LLM model. Currently only support chat model.
+ model_config(Optional[dict]): LLM model config. tools(List[BaseTool] | None): specified tools for llm. output_schema(BaseModel): specified return type. See detail on: OutputFormatter. examples(List[BaseModel]): examples for output_schema. See detail @@ -47,6 +248,8 @@ def chat( return_raw_response(bool): return OpenAI completion result if true, otherwise return string type data. custom_llm(BaseLLM): You can use custom LLM if you have. + enable_plan(bool): use Agent with plan ability if True. + stream(bool): return stream iterator if True. **kwargs: litellm kwargs Returns: @@ -55,66 +258,17 @@ def chat( Return List[BaseMessage] if stream is True. Return T if output_schema is provided. """ - if kwargs.get("stream", None) and output_schema: - raise ValueError( - "stream and output_schema can't be True at the same time, " - "because stream is used to return Iterator[BaseMessage]." - ) - - # messages covert, covert to OpenAI API type chat completion - if isinstance(messages, MessageSet): - messages: List[Dict[str, str]] = messages.listdict_messages - elif isinstance(messages, str): - messages = [ - {"content": "You are a helpful assistant", "role": "system"}, - {"content": messages, "role": "user"}, - ] - - # add output format into system prompt if provide - if output_schema: - instruction = get_formatted_instructions( - json_schema=output_schema, examples=examples - ) - messages[-1]["content"] += f"\n{instruction}" - - logger.debug(f"[pne chat] messages: {messages}") - - # TODO add assistant Agent - # TODO add BaseLLM support - # chat by custom LLM and get response - if custom_llm: - response: BaseMessage = custom_llm.predict( - MessageSet.from_listdict_data(messages), **kwargs - ) - # chat by universal llm get response - else: - import litellm - - logger.info("[pne chat] chat by litellm.") - temp_response = litellm.completion(model, messages, **kwargs) - - # return stream - if kwargs.get("stream", None): - return StreamIterator( - response_stream=temp_response, - parse_content=parse_content, - return_raw_response=return_raw_response, - ) - else: - response: BaseMessage = AssistantMessage( - content=temp_response.choices[0].message.content, - additional_kwargs=temp_response.json() - if isinstance(temp_response.json(), dict) - else json.loads(temp_response.json()), - ) - - logger.debug(f"[pne chat] response: {response.additional_kwargs}") - - # return output format if provide - if output_schema: - logger.info("[pne chat] return output format.") - return formatting_result( - pydantic_obj=output_schema, llm_output=response.content - ) - - return response if return_raw_response else response.content + return AIChat( + model=model, + model_config=model_config, + tools=tools, + custom_llm=custom_llm, + enable_plan=enable_plan, + ).run( + messages=messages, + output_schema=output_schema, + examples=examples, + return_raw_response=return_raw_response, + stream=stream, + **kwargs, + ) diff --git a/promptulate/hook/base.py b/promptulate/hook/base.py index dfa038a6..41774ade 100644 --- a/promptulate/hook/base.py +++ b/promptulate/hook/base.py @@ -1,12 +1,11 @@ -import logging from typing import Callable, List, Optional, Tuple, Union from typing_extensions import Literal from promptulate.pydantic_v1 import BaseModel from promptulate.utils.core_utils import generate_unique_id +from promptulate.utils.logger import logger -logger = logging.getLogger(__name__) HOOK_TYPE = Literal["component", "instance"] # Hook component type COMPONENT_TYPE = Literal["Tool", "llm", "Agent"] diff --git 
a/promptulate/llms/base.py b/promptulate/llms/base.py index 8313c0ce..797681e7 100644 --- a/promptulate/llms/base.py +++ b/promptulate/llms/base.py @@ -31,6 +31,7 @@ class BaseLLM(BaseModel, ABC): class Config: """Configuration for this pydantic object.""" + extra = "allow" arbitrary_types_allowed = True def __init__(self, *args, **kwargs): diff --git a/promptulate/llms/erniebot/erniebot.py b/promptulate/llms/erniebot/erniebot.py index 785948ba..ca19b1ba 100644 --- a/promptulate/llms/erniebot/erniebot.py +++ b/promptulate/llms/erniebot/erniebot.py @@ -101,14 +101,11 @@ def _predict( json=body, proxies=pne_config.proxies, ) - logger.debug(f"[pne ernie url] {url}") - logger.debug(f"[pne ernie body] {body}") if response.status_code == 200: # todo enable stream mode # for chunk in response.iter_content(chunk_size=None): # logger.debug(chunk) ret_data = response.json() - logger.debug(f"[pne ernie response] {json.dumps(ret_data)}") if ret_data.get("error_code", None): raise LLMError(ret_data) diff --git a/promptulate/llms/qianfan/platform.py b/promptulate/llms/qianfan/platform.py index 5afcc2c8..6a57523c 100644 --- a/promptulate/llms/qianfan/platform.py +++ b/promptulate/llms/qianfan/platform.py @@ -21,6 +21,15 @@ def parse_content(chunk) -> (str, str): + """Parse the qianfan model chunk. + + Args: + chunk: qianfan model chunk. + + Returns: + content: The content of the chunk. + ret_data: The additional data of the chunk. + """ content = chunk["result"] ret_data = chunk["body"] return content, ret_data @@ -48,6 +57,7 @@ def __call__( ) if not self.enable_default_system_prompt: preset = "" + system = preset message_set = MessageSet( messages=[ @@ -55,6 +65,7 @@ def __call__( ] ) result = self.predict(message_set, system, **self.model_config) + if isinstance(result, AssistantMessage): return result.content else: @@ -65,6 +76,7 @@ def _predict( prompts: MessageSet, system: str = "", return_raw_response: bool = False, + stream: bool = False, *args, **kwargs, ) -> Union[str, BaseMessage, T, List[BaseMessage], StreamIterator]: @@ -94,6 +106,7 @@ def _predict( ) os.environ["QIANFAN_ACCESS_KEY"] = pne_config.get_qianfan_ak() os.environ["QIANFAN_SECRET_KEY"] = pne_config.get_qianfan_sk() + chat_comp = qianfan.ChatCompletion() response = chat_comp.do( model=self.model, @@ -102,18 +115,18 @@ def _predict( **kwargs, ) # return stream - if kwargs.get("stream", None): + if stream: return StreamIterator( response_stream=response, parse_content=parse_content, return_raw_response=return_raw_response, ) + + if response.code == 200: + ret_data = response.body + logger.debug(f"[pne ernie response] {ret_data}") + content: str = ret_data["result"] + logger.debug(f"[pne ernie answer] {content}") + return AssistantMessage(content=content, additional_kwargs=ret_data) else: - if response.code == 200: - ret_data = response.body - logger.debug(f"[pne ernie response] {ret_data}") - content: str = ret_data["result"] - logger.debug(f"[pne ernie answer] {content}") - return AssistantMessage(content=content, additional_kwargs=ret_data) - else: - raise NetWorkError(str(response.code)) + raise NetWorkError(str(response.code)) diff --git a/promptulate/output_formatter/formatter.py b/promptulate/output_formatter/formatter.py index e5708190..bc074185 100644 --- a/promptulate/output_formatter/formatter.py +++ b/promptulate/output_formatter/formatter.py @@ -1,6 +1,6 @@ import json import re -from typing import Any, Dict, List, TypeVar, Union +from typing import Any, Dict, List, Optional, Type, TypeVar, Union from promptulate.error 
import OutputParserError
 from promptulate.output_formatter.prompt import OUTPUT_FORMAT
@@ -9,7 +9,7 @@
 T = TypeVar("T", bound=BaseModel)
 
 
-def _get_schema(pydantic_obj: type(BaseModel)) -> Dict:
+def _get_schema(pydantic_obj: Type[BaseModel]) -> dict:
     """Get reduced schema from pydantic object.
 
     Args:
@@ -20,7 +20,12 @@ def _get_schema(pydantic_obj: type(BaseModel)) -> Dict:
     """
 
     # Remove useless fields.
-    reduced_schema = pydantic_obj.schema()
+    # Compatibility with the Pydantic v2 model_json_schema method; both branches
+    # feed the same field reduction below.
+    if hasattr(pydantic_obj, "model_json_schema"):
+        reduced_schema = pydantic_obj.model_json_schema()
+    else:
+        reduced_schema = pydantic_obj.schema()
+
     if "title" in reduced_schema:
         del reduced_schema["title"]
     if "type" in reduced_schema:
@@ -40,7 +45,12 @@ class OutputFormatter:
     result of a Pydantic object.
     """
 
-    def __init__(self, pydantic_obj: type(BaseModel), examples: List[BaseModel] = None):
+    def __init__(
+        self,
+        pydantic_obj: Type[BaseModel],
+        examples: Optional[List[BaseModel]] = None,
+        **kwargs,
+    ):
         """
         Initialize the OutputFormatter class.
 
@@ -48,13 +58,10 @@ def __init__(self, pydantic_obj: type(BaseModel), examples: List[BaseModel] = No
             pydantic_obj (type(BaseModel)): The Pydantic object to format.
             examples (List[BaseModel], optional): Examples of the Pydantic object.
         """
-        if not isinstance(pydantic_obj, type(BaseModel)):
-            raise ValueError(
-                f"pydantic_obj must be a Pydantic object. Got: {pydantic_obj}"
-            )
+        super().__init__(**kwargs)
 
-        self.pydantic_obj = pydantic_obj
-        self.examples = examples
+        self.pydantic_obj: Type[BaseModel] = pydantic_obj
+        self.examples: Optional[List[BaseModel]] = examples
 
     def get_formatted_instructions(self) -> str:
         return get_formatted_instructions(self.pydantic_obj, self.examples)
@@ -64,7 +71,7 @@ def formatting_result(self, llm_output: str) -> T:
 
 
 def get_formatted_instructions(
-    json_schema: Union[type(BaseModel), Dict[str, Any]],
+    json_schema: Union[Type[BaseModel], Dict[str, Any]],
     examples: List[BaseModel] = None,
 ) -> str:
     """
@@ -80,8 +87,8 @@ def get_formatted_instructions(
         str: The formatted instructions.
     """
     # If a Pydantic model is passed, extract the schema from it.
-    if isinstance(json_schema, type(BaseModel)):
-        json_schema = _get_schema(json_schema)
+    if not isinstance(json_schema, dict):
+        json_schema: dict = _get_schema(json_schema)
 
     # Ensure json with double quotes.
     schema_str = json.dumps(json_schema)
@@ -116,7 +123,13 @@ def formatting_result(pydantic_obj: type(BaseModel), llm_output: str) -> T:
         )
         json_str = match.group() if match else ""
         json_object = json.loads(json_str, strict=False)
-        return pydantic_obj.parse_obj(json_object)
+
+        # Compatibility with Pydantic v2 model_validate method.
+        if hasattr(pydantic_obj, "model_validate"):
+            return pydantic_obj.model_validate(json_object)
+        else:
+            return pydantic_obj.parse_obj(json_object)
+
     except Exception as e:
         name = pydantic_obj.__name__
         msg = f"Failed to parse {name} from completion {llm_output}. Got: {e}"
diff --git a/promptulate/pydantic_v1/__init__.py b/promptulate/pydantic_v1/__init__.py
index 0c970914..1f41bc8b 100644
--- a/promptulate/pydantic_v1/__init__.py
+++ b/promptulate/pydantic_v1/__init__.py
@@ -1,6 +1,4 @@
-from importlib import metadata
-
-## Create namespaces for pydantic v1 and v2.
+# Create namespaces for pydantic v1 and v2.
 # This code must stay at the top of the file before other modules may
 # attempt to import pydantic since it adds pydantic_v1 and pydantic_v2 to sys.modules.
 #
@@ -11,12 +9,13 @@
 # unambiguously uses either v1 or v2 API.
# * This change is easier to roll out and roll back. +from importlib import metadata + try: from pydantic.v1 import * # noqa: F403 except ImportError: from pydantic import * # noqa: F403 - try: _PYDANTIC_MAJOR_VERSION: int = int(metadata.version("pydantic").split(".")[0]) except metadata.PackageNotFoundError: diff --git a/promptulate/schema.py b/promptulate/schema.py index 16d6ba60..56eb2cf3 100644 --- a/promptulate/schema.py +++ b/promptulate/schema.py @@ -13,14 +13,15 @@ "AssistantMessage", "MessageSet", "init_chat_message_history", - "TOOL_TYPES", + "ToolTypes", + "StreamIterator", ] if TYPE_CHECKING: from langchain.tools.base import BaseTool as LangchainBaseToolType # noqa from promptulate.tools.base import BaseTool, Tool # noqa -TOOL_TYPES = Union["BaseTool", "Tool", Callable, "LangchainBaseToolType"] +ToolTypes = Union["BaseTool", "Tool", Callable, "LangchainBaseToolType"] class BaseMessage(BaseModel): @@ -178,21 +179,28 @@ class MessageSet: """ def __init__( - self, messages: List[BaseMessage], conversation_id: Optional[str] = None + self, + messages: List[BaseMessage], + conversation_id: Optional[str] = None, + additional_kwargs: Optional[dict] = None, ): self.messages: List[BaseMessage] = messages self.conversation_id: Optional[str] = conversation_id + self.additional_kwargs: dict = additional_kwargs or {} @classmethod - def from_listdict_data(cls, value: List[Dict]) -> "MessageSet": + def from_listdict_data( + cls, value: List[Dict], additional_kwargs: Optional[dict] = None + ) -> "MessageSet": """initialize MessageSet from a List[Dict] data Args: - value(List[Dict]): the example is as follow: + value(List[Dict]): the example is as follows: [ {"type": "user", "content": "This is a message1."}, {"type": "assistant", "content": "This is a message2."} ] + additional_kwargs(Optional[dict]): additional kwargs Returns: initialized MessageSet @@ -200,7 +208,7 @@ def from_listdict_data(cls, value: List[Dict]) -> "MessageSet": messages: List[BaseMessage] = [ MESSAGE_TYPE[item["role"]](content=item["content"]) for item in value ] - return cls(messages=messages) + return cls(messages=messages, additional_kwargs=additional_kwargs) @property def listdict_messages(self) -> List[Dict]: diff --git a/promptulate/tools/manager.py b/promptulate/tools/manager.py index ef15feef..16a1fb3a 100644 --- a/promptulate/tools/manager.py +++ b/promptulate/tools/manager.py @@ -2,7 +2,7 @@ import json from typing import Any, List, Optional, Union -from promptulate.schema import TOOL_TYPES +from promptulate.schema import ToolTypes from promptulate.tools.base import BaseTool, Tool, ToolImpl, function_to_tool from promptulate.tools.langchain.tools import LangchainTool @@ -31,7 +31,7 @@ def _judge_langchain_tool_and_wrap(tool: Any) -> Optional[Tool]: ) -def _initialize_tool(tool: TOOL_TYPES) -> Optional[Tool]: +def _initialize_tool(tool: ToolTypes) -> Optional[Tool]: """Initialize the tool. Args: @@ -54,7 +54,7 @@ def _initialize_tool(tool: TOOL_TYPES) -> Optional[Tool]: class ToolManager: """ToolManager helps Agent to manage tools""" - def __init__(self, tools: List[TOOL_TYPES]): + def __init__(self, tools: List[ToolTypes]): self.tools: List[Tool] = [ _initialize_tool(tool) for tool in tools diff --git a/promptulate/uacp/agent.py b/promptulate/uacp/agent.py index 095dc86a..f18818d4 100644 --- a/promptulate/uacp/agent.py +++ b/promptulate/uacp/agent.py @@ -49,6 +49,7 @@ def run( additional_input: Optional[dict] = None, ) -> Any: """Run the agent with the specified input and additional input. 
+ Args: input(Optional[str]): The input for the agent. additional_input: Additional input for the agent. @@ -81,7 +82,7 @@ def run( step.status = Status.completed self.db.update_step(task.task_id, step) - logger.info(f"[uacp] Step {step.name}: {step.json()}") + logger.info(f"[uacp] Finish step, name: {step.name} data: {step.json()}") if step.is_last: break diff --git a/promptulate/uacp/db.py b/promptulate/uacp/db.py index cc852bb0..8991686e 100644 --- a/promptulate/uacp/db.py +++ b/promptulate/uacp/db.py @@ -104,8 +104,8 @@ def create_step( additional_properties: Optional[Dict[str, Any]] = None, artifacts: Optional[List[Artifact]] = None, ) -> Step: - step_id = str(uuid.uuid4()) - artifacts = artifacts or [] + step_id: str = str(uuid.uuid4()) + artifacts: List[Artifact] = artifacts or [] step = Step( task_id=task_id, @@ -117,7 +117,7 @@ def create_step( additional_properties=additional_properties, artifacts=artifacts, ) - logger.info(f"Create step: {step}") + logger.info(f"Create step: {step.json()}") task = self.get_task(task_id) task.steps.append(step) return step diff --git a/pyproject.toml b/pyproject.toml index 06e44351..9c5b4bb9 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -8,7 +8,7 @@ description = "A powerful LLM Application development framework." name = "promptulate" readme = "README.md" repository = "https://github.com/Undertone0809/promptulate" -version = "1.14.0" +version = "1.15.0" keywords = [ "promptulate", "pne", diff --git a/tests/output_formatter/test_formatter.py b/tests/output_formatter/test_formatter.py index b0b20c0e..c358015c 100644 --- a/tests/output_formatter/test_formatter.py +++ b/tests/output_formatter/test_formatter.py @@ -66,14 +66,30 @@ def test_formatter_with_agent(): assert isinstance(response.temperature, float) -def test_init_outputformatter_with_error_pydantic_type(): - """Test the error when the pydantic_obj of OutputFormatter is not a Pydantic - object.""" +def test_formatter_with_agent_and_pydantic_v2(): + from pydantic import BaseModel, Field - with pytest.raises(ValueError) as excinfo: - OutputFormatter("test") + class V2LLMResponse(BaseModel): + city: str = Field(description="City name") + temperature: float = Field(description="Temperature in Celsius") - assert "pydantic_obj must be a Pydantic object" in str(excinfo.value) + agent = AgentForTest() + prompt = "What is the temperature in Shanghai tomorrow?" 
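+    # The plain Pydantic v2 model should be parsed via the model_validate
+    # compatibility path added in formatter.py above.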
+    response: V2LLMResponse = agent.run(instruction=prompt, output_schema=V2LLMResponse)
+    assert isinstance(response, V2LLMResponse)
+    assert isinstance(response.city, str)
+    assert isinstance(response.temperature, float)
+
+
+# FIXME: can not assert pydantic type: promptulate.pydantic_v1.BaseModel or pydantic.BaseModel  # noqa
+# def test_init_outputformatter_with_error_pydantic_type():
+#     """Test the error when the pydantic_obj of OutputFormatter is not a Pydantic
+#     object."""
+#
+#     with pytest.raises(ValueError) as excinfo:
+#         OutputFormatter("test")
+#
+#     assert "pydantic_obj must be a Pydantic object" in str(excinfo.value)
 
 
 def test_formatting_result_with_error_llm_output():
diff --git a/tests/test_chat.py b/tests/test_chat.py
index a5b06701..4baf1ee4 100644
--- a/tests/test_chat.py
+++ b/tests/test_chat.py
@@ -30,28 +30,54 @@ def _predict(self, messages: MessageSet, *args, **kwargs) -> BaseMessage:
         return AssistantMessage(content=content)
 
 
+def mock_tool():
+    """This is a mock tool."""
+    return "mock tool"
+
+
 class LLMResponse(BaseModel):
     city: str = Field(description="city name")
     temperature: float = Field(description="temperature")
 
 
+def test_init():
+    llm = FakeLLM()
+
+    # stream and output_schema can not be used at the same time.
+    with pytest.raises(ValueError):
+        chat("hello", custom_llm=llm, output_schema=LLMResponse, stream=True)
+
+    # stream and tools can not be used at the same time.
+    with pytest.raises(ValueError):
+        chat("hello", custom_llm=llm, tools=[mock_tool], stream=True)
+
+    # It is not allowed to pass MessageSet or List[Dict] type messages when using tools.
+    with pytest.raises(ValueError):
+        chat(
+            MessageSet(messages=[UserMessage(content="hello")]),
+            custom_llm=llm,
+            tools=[mock_tool],
+        )
+        chat([], custom_llm=llm, tools=[mock_tool])
+
+
 def test_custom_llm_chat():
     llm = FakeLLM()
 
     # test general chat
-    answer = chat("hello", model="fake", custom_llm=llm)
+    answer = chat("hello", custom_llm=llm)
     assert answer == "fake response"
 
     # test messages is MessageSet
     messages = MessageSet(
         messages=[UserMessage(content="hello"), AssistantMessage(content="fake")]
     )
-    answer = chat(messages, model="fake", custom_llm=llm)
+    answer = chat(messages, custom_llm=llm)
     assert answer == "fake response"
 
     # test messages is list
     messages = [{"content": "Hello, how are you?", "role": "user"}]
-    answer = chat(messages, model="fake", custom_llm=llm)
+    answer = chat(messages, custom_llm=llm)
     assert answer == "fake response"
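To close the loop on this diff, here is a hedged end-to-end sketch (not part of the patch; `get_weather` is a made-up function and a configured `OPENAI_API_KEY` is assumed). Because `ToolTypes` includes plain callables, any Python function can be handed to `pne.chat` as a tool, and `ToolManager` wraps it via `function_to_tool`:

```python
import promptulate as pne


def get_weather(city: str) -> str:
    """Query the weather of the given city."""  # the docstring typically serves as the tool description
    return f"It is sunny in {city} today."


# The agent decides when to call get_weather and folds the result into its answer.
answer: str = pne.chat(
    model="gpt-4-1106-preview",
    messages="What's the weather in Shanghai?",
    tools=[get_weather],
)
print(answer)
```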