Commit
Merge remote-tracking branch 'refs/remotes/origin/main'
shroominic committed Feb 11, 2024
2 parents 0233a39 + 19bc896 commit 49b689a
Showing 4 changed files with 12 additions and 5 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -15,7 +15,7 @@ pip install funcchain
 ## Introduction
 
 `funcchain` is the *most pythonic* way of writing cognitive systems. Leveraging pydantic models as output schemas combined with langchain in the backend allows for a seamless integration of llms into your apps.
-It utilizes perfect with OpenAI Functions or LlamaCpp grammars (json-schema-mode) for efficient structured output.
+It utilizes OpenAI Functions or LlamaCpp grammars (json-schema-mode) for efficient structured output.
 In the backend it compiles the funcchain syntax into langchain runnables so you can easily invoke, stream or batch process your pipelines.
 
 [![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/ricklamers/funcchain-demo)
@@ -112,7 +112,7 @@ class AnalysisResult(BaseModel):
     objects: list[str] = Field(description="A list of objects found in the image")
 
 # easy use of images as input with structured output
-def analyse_image(image: Image.Image) -> AnalysisResult:
+def analyse_image(image: Image) -> AnalysisResult:
     """
     Analyse the image and extract its
     theme, description and objects.
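The README hunk above is about structured output: an LLM's JSON reply is parsed into a typed `AnalysisResult`. A stdlib-only sketch of that idea (using a dataclass and a hypothetical `parse_result` helper in place of pydantic validation, since funcchain itself is not shown here):

```python
from dataclasses import dataclass, fields

# Stand-in for the pydantic model in the README; stdlib-only.
@dataclass
class AnalysisResult:
    theme: str
    description: str
    objects: list

def parse_result(raw: dict) -> AnalysisResult:
    # Reject payloads missing a schema field, mimicking pydantic validation.
    missing = [f.name for f in fields(AnalysisResult) if f.name not in raw]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return AnalysisResult(**raw)

result = parse_result(
    {"theme": "nature", "description": "a forest", "objects": ["tree", "river"]}
)
print(result.objects)  # ['tree', 'river']
```

In funcchain the return annotation (`-> AnalysisResult`) is what drives this parsing; the sketch only illustrates the schema-validation step.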
7 changes: 5 additions & 2 deletions src/funcchain/backend/settings.py
@@ -27,11 +27,12 @@ class FuncchainSettings(BaseSettings):
     retry_parse: int = 3
     retry_parse_sleep: float = 0.1
 
-    # KEYS
+    # KEYS / URLS
     openai_api_key: Optional[str] = None
     azure_api_key: Optional[str] = None
     anthropic_api_key: Optional[str] = None
     google_api_key: Optional[str] = None
+    ollama_base_url: str = "http://localhost:11434"
 
     # MODEL KWARGS
     verbose: bool = False
@@ -60,7 +61,9 @@ def openai_kwargs(self) -> dict:
         }
 
     def ollama_kwargs(self) -> dict:
-        return {}
+        return {
+            "base_url": self.ollama_base_url
+        }
 
     def llamacpp_kwargs(self) -> dict:
         return {
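The settings change makes the Ollama base URL configurable instead of hard-coding the client default. A stdlib-only sketch of the behavior (pydantic's `BaseSettings` reads `OLLAMA_BASE_URL` from the environment automatically; the `gpu-box` hostname below is hypothetical):

```python
import os

# Minimal stand-in for FuncchainSettings, stdlib-only.
class Settings:
    def __init__(self) -> None:
        # Environment override with the same default the diff adds.
        self.ollama_base_url = os.environ.get(
            "OLLAMA_BASE_URL", "http://localhost:11434"
        )

    def ollama_kwargs(self) -> dict:
        # The fix: forward the configured URL instead of returning {}.
        return {"base_url": self.ollama_base_url}

os.environ["OLLAMA_BASE_URL"] = "http://gpu-box:11434"  # hypothetical host
print(Settings().ollama_kwargs())  # {'base_url': 'http://gpu-box:11434'}
```

Before this commit `ollama_kwargs()` returned `{}`, so the model client always fell back to its own default URL and a remote Ollama server could not be targeted through settings.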
2 changes: 1 addition & 1 deletion src/funcchain/model/defaults.py
@@ -155,7 +155,7 @@ def univeral_model_selector(
     except Exception as e:
         print(e)
 
-    model_kwargs.pop("model_name")
+    model_kwargs.pop("model_name", None)
 
     if settings.openai_api_key:
         from langchain_openai.chat_models import ChatOpenAI
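The one-line change above is a defensive fix: `dict.pop` without a default raises `KeyError` when the key is absent, while passing a default makes the cleanup safe either way. Illustrated with a throwaway dict:

```python
model_kwargs = {"temperature": 0.2}

# Old call: raises when "model_name" was never set.
try:
    model_kwargs.pop("model_name")
except KeyError:
    print("old call raises KeyError")

# Fixed call: a no-op that returns the default when the key is missing.
assert model_kwargs.pop("model_name", None) is None
print(model_kwargs)  # {'temperature': 0.2}
```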
4 changes: 4 additions & 0 deletions src/funcchain/syntax/executable.py
@@ -42,6 +42,8 @@ def chain(

     # todo maybe this should be done in the prompt processor?
     system = system or settings.system_prompt
+    if system:
+        context = [SystemMessage(content=system)] + context
     instruction = instruction or from_docstring()
 
     # temp image handling
@@ -90,6 +92,8 @@ async def achain(

     # todo maybe this should be done in the prompt processor?
     system = system or settings.system_prompt
+    if system:
+        context = [SystemMessage(content=system)] + context
     instruction = instruction or from_docstring()
 
     # temp image handling
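The identical additions to `chain` and `achain` prepend the configured system prompt to the message context before the pipeline runs, and only when one is actually set. A stdlib sketch of that guard (mimicking langchain's `SystemMessage` with a namedtuple; `build_context` is a hypothetical helper, not a funcchain API):

```python
from collections import namedtuple

# Stand-ins for langchain's message classes, stdlib-only.
SystemMessage = namedtuple("SystemMessage", ["content"])
HumanMessage = namedtuple("HumanMessage", ["content"])

def build_context(system: str, context: list) -> list:
    # Mirrors the added lines: prepend only when a system prompt is configured,
    # so an empty/unset prompt leaves the context untouched.
    if system:
        context = [SystemMessage(content=system)] + context
    return context

msgs = build_context("You are terse.", [HumanMessage(content="hi")])
print([type(m).__name__ for m in msgs])  # ['SystemMessage', 'HumanMessage']
```

Building a new list (rather than `context.insert(0, ...)`) also avoids mutating the caller's list in place, which matters when the same context is reused across calls.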
