Upsonic is a reliability-focused framework designed for real-world applications. It enables trusted agent workflows in your organization through advanced reliability features, including verification layers, triangular architecture, validator agents, and output evaluation systems.
Upsonic is a next-generation framework that makes agents production-ready by solving three critical challenges:
1- Reliability: While other frameworks require expertise and complex coding for reliability features, Upsonic offers easy-to-activate reliability layers without disrupting functionality.
2- Model Context Protocol: The MCP allows you to leverage tools with various functionalities developed both officially and by third parties without requiring you to build custom tools from scratch.
3- Secure Runtime: An isolated runtime environment for executing agents.
Key features:
- Production-Ready Scalability: Deploy seamlessly on AWS, GCP, or locally using Docker.
- Task-Centric Design: Focus on practical task execution, with options for:
  - Basic tasks via LLM calls.
  - Advanced tasks with V1 agents.
  - Complex automation using V2 agents with MCP integration.
- MCP Server Support: Utilize multi-client processing for high-performance tasks.
- Tool-Calling Server: Exception-secure tool management with robust server API interactions.
- Computer Use Integration: Execute human-like tasks using Anthropic’s ‘Computer Use’ capabilities.
- Easy tool integration: Add your custom tools and MCP tools with a single line of code.
You can access our documentation at docs.upsonic.ai. All concepts and examples are available there.
- Python 3.10 or higher
- Access to OpenAI or Anthropic API keys (Azure and Bedrock are also supported)
pip install upsonic
Set your OPENAI_API_KEY
export OPENAI_API_KEY=sk-***
Start the agent
from upsonic import Task, Agent
task = Task("Who developed you?")
agent = Agent("Coder")
agent.print_do(task)
LLM output reliability is critical, particularly for numerical operations and action execution. Upsonic addresses this through a multi-layered reliability system, enabling control agents and verification rounds to ensure output accuracy.
- Verifier Agent: Validates outputs, tasks, and formats, detecting inconsistencies, numerical errors, and hallucinations.
- Editor Agent: Works with verifier feedback to revise and refine outputs until they meet quality standards.
- Rounds: Implements iterative quality improvement through scored verification cycles.
- Loops: Ensures accuracy through controlled feedback loops at critical reliability checkpoints.
from upsonic import Agent

class ReliabilityLayer:
    prevent_hallucination = 10  # enable the hallucination-prevention checks

agent = Agent("Coder", reliability_layer=ReliabilityLayer)
Upsonic officially supports the Model Context Protocol (MCP) and custom tools. You can use hundreds of MCP servers listed at https://glama.ai/mcp/servers or https://smithery.ai/. We also support plain Python functions inside a class as tools, so you can easily build your own integrations (a minimal custom-tool sketch follows the MCP example below).
from upsonic import Agent, Task, ObjectResponse

# Define Fetch MCP configuration
class FetchMCP:
    command = "uvx"
    args = ["mcp-server-fetch"]

# Create response format for web content
class WebContent(ObjectResponse):
    title: str
    content: str
    summary: str
    word_count: int

# Initialize agent
web_agent = Agent(
    "Web Content Analyzer",
    model="openai/gpt-4o",  # You can use other models
)

# Create a task to analyze a web page
task = Task(
    description="Fetch and analyze the content from url. Extract the main content, title, and create a brief summary.",
    context=["https://upsonic.ai"],
    tools=[FetchMCP],
    response_format=WebContent
)

# Usage
result = web_agent.print_do(task)
print(result.title)
print(result.summary)
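Custom tools use the same mechanism: a class whose methods are plain Python functions can be dropped straight into a Task's tools list. The sketch below is illustrative only; the MyTools class and its exchange_rate method are hypothetical names, and only the tools=[...] wiring comes from the examples above.

from upsonic import Agent, Task

# Hypothetical custom tool: ordinary Python methods grouped in a class
class MyTools:
    def exchange_rate(self, currency: str) -> float:
        """Return a fixed demo exchange rate against USD."""
        rates = {"EUR": 1.08, "GBP": 1.26}
        return rates.get(currency, 1.0)

currency_task = Task(
    "Convert 250 GBP to USD using the exchange rate tool.",
    tools=[MyTools],  # custom tool classes and MCP configs are added the same way
)

Agent("Finance Assistant").print_do(currency_task)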
Distribute tasks effectively across agents with the automated task distribution mechanism, which matches each task to the most suitable agent so that agents solve problems collaboratively. Structured outputs are essential when deploying an AI agent across apps or as a service: Upsonic uses Pydantic-based ObjectResponse classes so developers can specify the exact response format for each task.
from upsonic import Agent, Task, MultiAgent, ObjectResponse
from upsonic.tools import Search
from typing import List

# Targeted company and our company
our_company = "https://redis.io/"
targeted_url = "https://upsonic.ai/"

# Response formats
class CompanyResearch(ObjectResponse):
    industry: str
    product_focus: str
    company_values: List[str]
    recent_news: List[str]

class Mail(ObjectResponse):
    subject: str
    content: str

# Create the agents
researcher = Agent(
    "Company Researcher",
    company_url=our_company
)

strategist = Agent(
    "Outreach Strategist",
    company_url=our_company
)

# Create the tasks and connect them via context
company_task = Task(
    "Research company website and analyze key information",
    context=[targeted_url],
    tools=[Search],
    response_format=CompanyResearch
)

position_task = Task(
    "Analyze Senior Developer position context and requirements",
    context=[company_task, targeted_url],
)

message_task = Task(
    "Create personalized outreach message using research",
    context=[company_task, position_task, targeted_url],
    response_format=Mail
)

# Run the tasks over the agents
results = MultiAgent.do(
    [researcher, strategist],
    [company_task, position_task, message_task]
)

# Print the results
print(f"Company Industry: {company_task.response.industry}")
print(f"Company Focus: {company_task.response.product_focus}")
print(f"Company Values: {company_task.response.company_values}")
print(f"Company Recent News: {company_task.response.recent_news}")
print(f"Position Analysis: {position_task.response}")
print(f"Outreach Message Subject: {message_task.response.subject}")
print(f"Outreach Message Content: {message_task.response.content}")
Direct LLM calls offer faster, cheaper solutions for simple tasks. In Upsonic, you can call model providers directly, without any abstraction layer, while still organizing structured outputs. Tools can also be used with direct LLM calls.
from upsonic import Task, Direct

task1 = Task("Convert 120 minutes to hours and minutes.")
Direct.do(task1)
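Because Direct consumes the same Task objects, structured outputs and tools can be combined with a plain LLM call. The sketch below is an assumption-level example: it reuses the response_format, tools, and task.response patterns shown elsewhere in this README and presumes they behave the same way with Direct.

from upsonic import Task, Direct, ObjectResponse
from upsonic.tools import Search

# Structured output for a quick lookup performed as a direct LLM call
class CityInfo(ObjectResponse):
    country: str
    population: int

lookup = Task(
    "Find the country and approximate population of Toronto.",
    tools=[Search],
    response_format=CityInfo,
)

Direct.do(lookup)

# Assumption: the structured result is read back from task.response, as with agents
print(lookup.response.country, lookup.response.population)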
Computer Use can perform human-like actions such as moving the mouse, clicking, typing, and scrolling, so you can build tasks on top of systems that have no API, for example LinkedIn workflows or internal tools. Computer Use is currently supported only by Claude.
from upsonic.client.tools import ComputerUse
...
tools = [ComputerUse]
...
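A fuller sketch of how Computer Use plugs into the usual Task/Agent flow. This assumes a Claude model selected via the same provider/model string convention shown earlier for "openai/gpt-4o"; the exact identifier below is illustrative, not taken from the Upsonic docs.

from upsonic import Agent, Task
from upsonic.client.tools import ComputerUse

# The task is described in natural language; ComputerUse supplies mouse,
# keyboard, and screen actions, so no API integration is required.
desktop_task = Task(
    "Open the system calculator, compute 15 * 24, and report the result.",
    tools=[ComputerUse],
)

# Assumption: a Claude (Anthropic) model is required for Computer Use, and the
# identifier follows the provider/model convention used above.
desktop_agent = Agent("Desktop Operator", model="claude/claude-3-5-sonnet")
desktop_agent.print_do(desktop_task)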
We use anonymous telemetry to collect usage data, which helps us focus development on the features that matter most. You can disable it by setting the UPSONIC_TELEMETRY environment variable to false.
import os
os.environ["UPSONIC_TELEMETRY"] = "False"
- Dockerized Server Deploy
- Verifiers For Computer Use