llm-as-function

Embed an LLM into your Python function



llm-as-function is a Python library that helps you quickly build functions backed by Large Language Models (LLMs). You apply LLMFunc as a decorator to a function, annotate the function's parameters and return type, and write a docstring; llm-as-function then calls the LLM to fill in the parameters and returns output in the format you declared.

llm-as-function also supports defining a code body inside an LLM function, for precise inference control and business logic.

Get Started

pip install llm-as-function

Features

Basic usage

from llm_as_function import gpt35_func # gpt35_func uses gpt-3.5-turbo as the LLM backend
from pydantic import BaseModel, Field

# Validate and Define output types with Pydantic
class Result(BaseModel):
    emoji: str = Field(description="The output emoji")

# With the decorator, LLMFunc automatically recognizes your function's inputs,
# outputs, and docstring. The docstring is your prompt, so design it carefully.
@gpt35_func
def fool() -> Result:
    """
    You need to randomly output an emoji
    """
    pass
  
print(fool()) # {emoji: "😅"}

You can construct more complex output logic.

class Reason(BaseModel):
    where: str = Field(description="Where I can use this emoji?")
    warning: str = Field(description="Anything I should notice to use this emoji?")

class StructuredOutput(BaseModel):
    emoji: str = Field(description="The emoji")
    why: str = Field(description="Why you choose this emoji?")
    more: Reason = Field(description="More information about this emoji")

class Result(BaseModel):
    emoji: StructuredOutput = Field(description="The emoji and its related information")
    
@gpt35_func
def fool() -> Result:
    """
    You need to randomly output an emoji
    """
    pass

print(fool())

You will get output like the following:

Final(
    pack={
        'emoji': {
            'emoji': '🎉',
            'why': 'I choose this emoji because it represents celebration and excitement.',
            'more': {
                'where': 'I can use this emoji to express joy and happiness in messages or social media posts.',
                'warning': 'Be mindful of the context in which you use this emoji, as it may not be appropriate for all situations.'
            }
        }
    },
    raw_response=None
)
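Assuming `Final.unpack()` returns the `pack` dictionary shown above (an assumption; see the Docs section on `Final` below), nested Pydantic models come back as plain nested dicts, so ordinary key access works. A minimal illustration with a stand-in dict:

```python
# Hypothetical illustration: `pack` stands in for the dict that
# result.unpack() would return for the nested Result above.
pack = {
    "emoji": {
        "emoji": "🎉",
        "why": "It represents celebration and excitement.",
        "more": {
            "where": "Messages or social media posts.",
            "warning": "May not be appropriate for all situations.",
        },
    }
}

# Nested Pydantic models arrive as nested dicts, so you can drill in directly.
emoji = pack["emoji"]["emoji"]
where = pack["emoji"]["more"]["where"]
print(emoji, "-", where)
```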

Prompt variables

You can also dynamically insert variables into the prompt.

@gpt35_func
def fool2(emotion) -> Result:
    """
    You need to randomly output an emoji, the emoji should be {emotion}
    """
    pass
  
print(fool2(emotion="Happy")) # {'emoji': '😊'}
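Under the hood, the `{emotion}` placeholder presumably behaves like Python's `str.format` applied to the docstring (an assumption; the library does not document its templating here). A stdlib-only sketch of that substitution:

```python
import inspect

def render_prompt(fn, **variables):
    """Fill {named} placeholders in a function's docstring with keyword
    arguments, the way llm-as-function presumably builds its prompt
    (illustrative assumption, not the library's actual code)."""
    template = inspect.getdoc(fn)
    return template.format(**variables)

def fool2(emotion):
    """You need to randomly output an emoji, the emoji should be {emotion}"""

print(render_prompt(fool2, emotion="Happy"))
# You need to randomly output an emoji, the emoji should be Happy
```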

Merge program with LLM

Most importantly, you can insert Python code into your function. It runs before the actual LLM call, so you can do things like:

@gpt35_func
def fool() -> Result:
    """
    You need to output an emoji
    """
    print("Logging once")

More interestingly, you can invoke other functions inside it, including other LLM functions, or even the function itself (see examples/3_fibonacci.py):

from llm_as_function import gpt35_func, Final
from pydantic import BaseModel, Field

class Result(BaseModel):
    value: int = Field(description="The calculated value of the Fibonacci sequence.")

@gpt35_func
def f(x: int) -> Result:
    """You need to calculate the {x}th term of the Fibonacci sequence, given that you have the values of the two preceding terms, which are {a} and {b}. The method to calculate the {x}th term is by adding the values of the two preceding terms. Please compute the value of the {x}th term."""
    if x == 1 or x == 0:
        # `Final` is a class in `llm-as-function`; returning it means you do
        # not need the LLM to process your output. `Final` wraps a dictionary
        # whose format should match the `Result` you defined.
        return Final({"value": x})
    a = f(x=x - 1)
    b = f(x=x - 2)
    # A normal (dict) return means you are passing 'local variables' to the
    # LLM; the variables you return are inserted into your prompt.
    return {"a": a.unpack()["value"], "b": b.unpack()["value"]}

print(f(3)) # {value: 2}
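Stripping away the LLM call, the control flow of `f` above is just an ordinary recursion; the model is only asked to perform the final addition. A plain-Python equivalent of the same logic:

```python
def fib(x: int) -> dict:
    # Base case: mirrors `return Final({"value": x})` above.
    if x in (0, 1):
        return {"value": x}
    # Recursive case: compute the two preceding terms, then add them,
    # which is the step the LLM performs in the decorated version.
    a = fib(x - 1)["value"]
    b = fib(x - 2)["value"]
    return {"value": a + b}

print(fib(3))  # {'value': 2}
```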

Function calling

llm-as-function offers a similar way to give the LLM a set of function tools (see examples/5_function_calling.py):

import json

from pydantic import BaseModel, Field

class Result(BaseModel):
    summary: str = Field(description="The response summary sentence")


class GetCurrentWeatherRequest(BaseModel):
    location: str = Field(description="The city and state, e.g. San Francisco, CA")

def get_current_weather(request: GetCurrentWeatherRequest):
    """
    Get the current weather in a given location
    """
    weather_info = {
        "location": request.location,
        "temperature": "72",
        "forecast": ["sunny", "windy"],
    }
    return json.dumps(weather_info)

@gpt35_func.func(get_current_weather)
def fool() -> Result:
    """
    Search the weather of New York. And then summary the weather in one sentence.
    Be careful, you should not call the same function twice.
    """
    pass
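When the model decides to call a tool, a dispatch step like the following has to happen somewhere: the tool name and JSON-encoded arguments come back from the API, and the matching Python function is invoked with them. A stdlib-only sketch (the `dispatch` helper and its names are hypothetical, not the library's API):

```python
import json

def get_current_weather(location: str) -> str:
    # Same stub as above, without the Pydantic request wrapper.
    return json.dumps({
        "location": location,
        "temperature": "72",
        "forecast": ["sunny", "windy"],
    })

# Registry mapping tool names to callables.
TOOLS = {"get_current_weather": get_current_weather}

def dispatch(tool_call: dict) -> str:
    """Hypothetical dispatcher: look up the tool by name and call it
    with the JSON-decoded arguments the model produced."""
    fn = TOOLS[tool_call["name"]]
    args = json.loads(tool_call["arguments"])
    return fn(**args)

result = dispatch({
    "name": "get_current_weather",
    "arguments": '{"location": "New York, NY"}',
})
print(result)
```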

Parallel function calling for OpenAI is supported:

def get_current_time(request: GetCurrentTimeRequest):
    """
    Get the current time in a given location
    """
    time_info = {
        "location": request.location,
        "time": "2024/1/1",
    }
    return json.dumps(time_info)
  
@gpt35_func.func(get_current_weather).func(get_current_time)
def fool() -> Result:
    """
    Search the weather and current time of New York. And then summary the time and weather in one sentence.
    Be careful, you should not call the same function twice.
    """
    pass

Async Call

Async calls to the LLM API are supported: simply add async_call and the function becomes an async Python function (see examples/1.5_get_started.py):

import asyncio

@gpt35_func.async_call
def fool(emotion) -> Result:
    """
    You need to output an emoji, which is {emotion}
    """
    pass
    
async def async_call():
    result = await asyncio.gather(
        *[
            asyncio.create_task(fool(emotion="happy")),
            asyncio.create_task(fool(emotion="sad")),
            asyncio.create_task(fool(emotion="weird")),
        ]
    )
    print([r.unpack() for r in result])
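The concurrency pattern above is plain `asyncio.gather`. With the LLM round trip replaced by a short sleep, this self-contained sketch shows why the three calls overlap instead of running one after another:

```python
import asyncio

async def fake_llm_call(emotion: str) -> dict:
    # Stand-in for the real network round trip to the LLM (assumption:
    # the decorated async function awaits an API call internally).
    await asyncio.sleep(0.05)
    return {"emoji": emotion}

async def main() -> list:
    # gather schedules all three coroutines concurrently, so total wall
    # time is roughly one call, not three; results keep argument order.
    return await asyncio.gather(
        fake_llm_call("happy"),
        fake_llm_call("sad"),
        fake_llm_call("weird"),
    )

results = asyncio.run(main())
print(results)
```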

More demos are available in examples/.

Docs

LLMFunc

# LLMFunc currently supports the OpenAI provider
@LLMFunc(model="gpt-3.5-turbo-1106", temperature=0.3, openai_base_url=..., openai_api_key=...)
def fool() -> Result:
    ...

-----------------------------------------------------
# For convenience, llm-as-function already provides some pre-instantiated LLMFunc objects
from llm_as_function import gpt35_func, gpt4_func

@gpt35_func
def fool() -> Result:
    pass
-----------------------------------------------------
# Parse mode: ["error", "accept_raw"], default "error"
# llm-as-function may fail to parse the result format, since the LLM does not always obey it.
# When parsing fails, there are two modes you can choose from:

@LLMFunc(parse_mode="error")
def fool() -> Result:
    ...
result = fool() # When the parsing fails, fool will raise an error

@LLMFunc(parse_mode="accept_raw")
def fool() -> Result:
    ...
result = fool() # When the parsing fails, fool will not raise an error but return the raw response of LLM, refer to the `Final` class
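The two parse modes amount to the following choice when the model's reply fails JSON parsing. A stdlib-only sketch of the distinction (the `parse_reply` function is illustrative, not the library's code):

```python
import json

def parse_reply(raw: str, parse_mode: str = "error"):
    """Illustrative only: mimic the two parse modes on a raw LLM
    reply that may or may not be valid JSON."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        if parse_mode == "accept_raw":
            return raw  # hand back the unparsed string
        raise           # parse_mode == "error": propagate the failure

print(parse_reply('{"emoji": "😊"}'))        # parsed into a dict
print(parse_reply("Sure! Here is an emoji",
                  parse_mode="accept_raw"))  # raw string comes back
```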

Final

# The return value of any llm function is a `Final` class

result: Final = fool()
if result.ok():
    format_response = result.unpack() # the response will be a formatted dict
else:
    raw_response = result.unpack() # the response will be the raw string result from the LLM
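The contract above can be pictured with a minimal stand-in class (an illustration of the interface as described here, not the library's implementation):

```python
class FinalLike:
    """Minimal stand-in for llm-as-function's Final: ok() says whether
    parsing succeeded, unpack() returns either the parsed dict or the
    raw LLM string. Illustrative assumption, not the real class."""
    def __init__(self, pack=None, raw_response=None):
        self.pack = pack
        self.raw_response = raw_response

    def ok(self) -> bool:
        return self.pack is not None

    def unpack(self):
        return self.pack if self.ok() else self.raw_response

good = FinalLike(pack={"emoji": "😊"})
bad = FinalLike(raw_response="Sorry, I cannot produce JSON")
print(good.ok(), good.unpack())
print(bad.ok(), bad.unpack())
```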

FAQ

  • The formatting of the return value from llm-as-function depends on the capabilities of the model you are using. Sometimes even larger models may fail to return a parsable JSON format, which leads to an error, or to the raw response if you set parse_mode="accept_raw".

  • If you encounter rate limit errors, consider switching models or sleeping for a while after each function call.