Enhance webhook handling logic with improved error handling #2
base: main
Conversation
PR Review Summary

Overall Review:
The provided PR adds significant functionality related to tool calling across various AI providers, integrating complex message transformation and normalization logic. It spans initializing and transforming messages according to the specifications required by APIs such as AWS and Anthropic. The modifications convert and normalize tool responses and ensure they are compatible with the AI models' expected formats. The changes are substantial and have a broad impact on the application, potentially affecting how tool results are processed, displayed, or logged.

Recommendations

Recommendation #1
To address the potential inconsistencies in parsing, restructure `normalize_response` as follows:

```python
def normalize_response(self, response):
    normalized_response = ChatCompletionResponse()
    finish_reason_mapping = {
        "end_turn": "stop",
        "max_tokens": "length",
        "tool_use": "tool_calls",
    }
    normalized_response.choices[0].finish_reason = finish_reason_mapping.get(
        response.stop_reason, "stop"
    )
    # Add usage information
    normalized_response.usage = {
        "prompt_tokens": response.usage.input_tokens,
        "completion_tokens": response.usage.output_tokens,
        "total_tokens": response.usage.input_tokens + response.usage.output_tokens,
    }
    # Check if the response contains tool usage and validate format
    if response.stop_reason == "tool_use":
        tool_call = next(
            (c for c in response.content if getattr(c, "type", None) == "tool_use"),
            None,
        )
        if tool_call is not None and getattr(tool_call, "id", None):
            function = Function(name=tool_call.name, arguments=json.dumps(tool_call.input))
            tool_call_obj = ChatCompletionMessageToolCall(
                id=tool_call.id, function=function, type="function"
            )
            text_content = next(
                (c.text for c in response.content if getattr(c, "type", None) == "text"),
                "",
            )
            message = Message(
                content=text_content or None,
                tool_calls=[tool_call_obj],
                role="assistant",
                refusal=None,
            )
            normalized_response.choices[0].message = message
        else:
            raise ValueError("Expected 'tool_use' content is missing or malformed in API response.")
    return normalized_response
```

This snippet includes error handling for missing or malformed 'tool_use' data, ensuring the internal data paths are consistent before parsing.

Recommendation #2
Implement rigorous input validation and error handling around JSON operations and external data handling. Make use of secure practices such as parameterized queries or templates when constructing requests or handling data. For instance:

```python
def transform_tool_call_to_openai(self, response):
    if response.get("stopReason") != "tool_use":
        return None
    try:
        tool_calls = []
        for content in response["output"]["message"]["content"]:
            if "toolUse" in content:
                tool = content["toolUse"]
                tool_calls.append({
                    "type": "function",
                    "id": tool["toolUseId"],
                    "function": {
                        "name": tool["name"],
                        "arguments": json.dumps(tool["input"], cls=SecureJSONEncoder),
                    },
                })
    except (TypeError, ValueError) as e:
        raise ValueError(f"JSON serialization error: {str(e)}")
    return {
        "role": "assistant",
        "content": None,
        "tool_calls": tool_calls,
        "refusal": None,
    }
```

In this code snippet, failures while serializing tool input are caught and re-raised as `ValueError`s with context, so malformed tool data surfaces early rather than propagating downstream.
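The finish-reason and usage normalization above can be exercised in isolation. Below is a minimal sketch of that mapping logic, with `Usage` and `normalize_finish_and_usage` as illustrative stand-ins (neither name appears in the PR):

```python
from dataclasses import dataclass

# Provider-specific stop reasons mapped to OpenAI-style finish reasons.
FINISH_REASON_MAPPING = {
    "end_turn": "stop",
    "max_tokens": "length",
    "tool_use": "tool_calls",
}

@dataclass
class Usage:
    input_tokens: int
    output_tokens: int

def normalize_finish_and_usage(stop_reason, usage):
    # Unknown stop reasons fall back to "stop", matching the review snippet.
    finish_reason = FINISH_REASON_MAPPING.get(stop_reason, "stop")
    normalized_usage = {
        "prompt_tokens": usage.input_tokens,
        "completion_tokens": usage.output_tokens,
        "total_tokens": usage.input_tokens + usage.output_tokens,
    }
    return finish_reason, normalized_usage
```

Because this is a pure function, a unit test over it catches mapping regressions without calling any provider API.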
PR Review Summary

Overall Review:
This PR introduces a

Recommendations

Recommendation #1
Modify the `convert_openai_tools_to_anthropic` function:

```python
import logging

def convert_openai_tools_to_anthropic(openai_tools):
    anthropic_tools = []
    for tool in openai_tools:
        if tool["type"] == "function":
            function = tool["function"]
            anthropic_tool = {
                "name": function["name"],
                "description": function.get("description", ""),
                "input_schema": {
                    "type": "object",
                    "properties": function["parameters"]["properties"],
                    "required": function["parameters"].get("required", []),
                },
            }
            anthropic_tools.append(anthropic_tool)
        else:
            # Log unhandled tool types, or possibly raise an error if necessary
            logging.warning(f"Tool type {tool['type']} is not supported yet")
    return anthropic_tools
```

Be sure to import the `logging` module.

Recommendation #2
Ensure that all logging and data classes do not include or expose sensitive or identifiable information. Use data masking or transformation techniques to obscure real values in logs and ensure that no sensitive data is logged in plain text. Modify the log statements and any debug outputs to mask any potentially sensitive information:

```python
import logging

from pydantic import BaseModel

class Function(BaseModel):
    name: str
    arguments: str

    def log_function(self):
        masked_arguments = mask_sensitive_data(self.arguments)  # Assumes existence of this function
        logging.debug(f"Function called: {self.name}, with arguments: {masked_arguments}")

# Replace direct print statements with the above method call, e.g.:
#   function_instance.log_function()
```

Remember to define or implement the `mask_sensitive_data` function.

Recommendation #3
Implement additional test cases that focus on error handling for malformed or incorrect data scenarios to ensure robustness. Here's an example test case using `pytest`:

```python
import pytest

from module import data_handling_function
from pydantic import ValidationError  # or wherever the validation error is defined

def test_data_handling_with_malformed_data():
    malformed_data = "incorrect format data"
    with pytest.raises(ValidationError):
        data_handling_function(malformed_data)
```

This test checks that handling of malformed data raises a `ValidationError`.

Recommendation #4
Abstract the transformations more robustly, potentially using a factory pattern to simplify and manage the complexity of various tool formats. For example:

```python
class ToolTransformerFactory:
    @staticmethod
    def get_transformer(provider_type):
        if provider_type == "Anthropic":
            return AnthropicToolTransformer()
        elif provider_type == "AWS":
            return AWSToolTransformer()
        else:
            raise ValueError("Unsupported provider type")

class AnthropicToolTransformer:
    def transform(self, tool_data):
        # perform transformation logic specific to Anthropic
        pass

class AWSToolTransformer:
    def transform(self, tool_data):
        # perform transformation logic specific to AWS
        pass

# Usage:
transformer = ToolTransformerFactory.get_transformer("Anthropic")
transformed_data = transformer.transform(tool_data)
```

This pattern will facilitate adding support for new providers or changing existing implementations with minimal impact on other parts of the system.

Recommendation #5
Ensure logical grouping and minimal interdependencies of modules. Reviewing module responsibilities and interactions might reveal opportunities to reduce complexity or improve cohesiveness, making it easier to manage and evolve separate parts without unintended side effects. Consider reevaluating some of the finer-grained separations if they lead to excessive hopping between files or unclear relationships between components.
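Recommendation #2 above leaves `mask_sensitive_data` undefined. A possible sketch, assuming arguments arrive as a JSON object string; the set of sensitive key names below is purely illustrative:

```python
import json

# Illustrative list; a real deployment would configure this.
SENSITIVE_KEYS = {"api_key", "password", "token", "secret"}

def mask_sensitive_data(arguments: str) -> str:
    """Return a copy of a JSON arguments string with sensitive values masked."""
    try:
        data = json.loads(arguments)
    except json.JSONDecodeError:
        # Not valid JSON: redact wholesale rather than risk leaking content.
        return "<unparseable arguments redacted>"
    if isinstance(data, dict):
        for key in data:
            if key.lower() in SENSITIVE_KEYS:
                data[key] = "***"
    return json.dumps(data)
```

Masking at serialization time keeps the log call sites simple: they can log whatever `mask_sensitive_data` returns without re-checking key names.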
PR Code Suggestions Summary ✨
```python
                    message_dict
                )
                if bedrock_message:
                    formatted_messages.append(bedrock_message)
            elif message_dict["role"] == "assistant":
                # Convert assistant message to Bedrock format
```

Refactor tool handling into a dedicated method to clean up the main method and improve readability. For clarity and maintainability, it is better to separate large blocks of conditional logic into small, well-named private methods. This can be done in the `AwsProvider` class, where multiple `if`-`else` blocks handle different message roles.

```diff
                     message_dict
                 )
                 if bedrock_message:
                     formatted_messages.append(bedrock_message)
             elif message_dict["role"] == "assistant":
                 # Convert assistant message to Bedrock format
+        self._configure_tool_handling(kwargs)
```
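The `_configure_tool_handling` helper referenced in the suggestion is not shown in the diff. As one illustration of what such an extraction might do, here is a hypothetical standalone function that converts OpenAI-style tool definitions into Bedrock Converse-style `toolSpec` entries; the exact schema this PR uses is not visible in the diff, so treat the shape as an assumption:

```python
def build_bedrock_tool_config(openai_tools):
    # Hypothetical helper: convert OpenAI-style tool definitions into the
    # toolSpec shape used by Bedrock's Converse API.
    tool_specs = []
    for tool in openai_tools:
        if tool.get("type") != "function":
            continue  # skip unsupported tool types
        fn = tool["function"]
        tool_specs.append({
            "toolSpec": {
                "name": fn["name"],
                "description": fn.get("description", ""),
                "inputSchema": {"json": fn.get("parameters", {})},
            }
        })
    return {"tools": tool_specs}
```

Keeping this as a pure function (rather than mutating `kwargs` in place) makes the conversion easy to unit-test independently of the provider class.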
PR Review Summary

Overall Review:
The PR introduces significant enhancements in handling messages and tool calling across various AI providers by standardizing interactions and data formats, particularly for AWS and Anthropic. Changes include refactoring the

Recommendations

Recommendation #3
Enhance the test suite to cover various scenarios of malformed or incorrect data inputs. Introduce comprehensive tests simulating different types of errors to ensure robust error handling and security. For example, create tests for JSON transformations to simulate malformed JSON:

```python
import pytest

from your_module import some_function_handling_json, SomeSpecificException

def test_malformed_json_handling():
    malformed_json = '{unclosed_json_object'
    with pytest.raises(SomeSpecificException):
        some_function_handling_json(malformed_json)
    # Ensure the function raises an error or handles the malformed JSON gracefully.
```

This tests how the system behaves under erroneous conditions, thereby enhancing the reliability of data transformations.
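In the same spirit as the test above, the function under test can wrap `json` errors in a domain-specific exception so callers get a stable failure mode. A sketch under that assumption (`ToolArgumentError` and `parse_tool_arguments` are hypothetical names, not part of the PR):

```python
import json

class ToolArgumentError(ValueError):
    """Hypothetical exception type for malformed tool-call arguments."""

def parse_tool_arguments(raw: str) -> dict:
    # Parse a tool-call arguments string, raising a domain-specific error
    # instead of leaking json's internals to callers.
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError as e:
        raise ToolArgumentError(f"Malformed tool arguments: {e}") from e
    if not isinstance(parsed, dict):
        raise ToolArgumentError("Tool arguments must be a JSON object")
    return parsed
```

With this shape, the pytest example above only needs to assert on one exception type regardless of how the parsing internals evolve.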
PR Code Suggestions Summary ✨
```python
            # Handle Message objects
            if msg.role == "tool":
                converted_msg = {
                    "role": "user",
                    "content": [
                        {
                            "type": "tool_result",
                            "tool_use_id": msg.tool_call_id,
                            "content": msg.content,
                        }
                    ],
                }
                converted_messages.append(converted_msg)
            elif msg.role == "assistant" and msg.tool_calls:
                # Handle Message objects with tool calls
                content = []
                if msg.content:
                    content.append({"type": "text", "text": msg.content})
                for tool_call in msg.tool_calls:
                    content.append(
                        {
                            "type": "tool_use",
                            "id": tool_call.id,
                            "name": tool_call.function.name,
                            "input": json.loads(tool_call.function.arguments),
                        }
                    )
                converted_messages.append({"role": "assistant", "content": content})
            else:
                converted_messages.append(
                    {"role": msg.role, "content": msg.content}
                )

        print(converted_messages)
        response = self.client.messages.create(
            model=model, system=system_message, messages=converted_messages, **kwargs
        )
        print(response)
        return self.normalize_response(response)

    def normalize_response(self, response):
        """Normalize the response from the Anthropic API to match OpenAI's response format."""
        normalized_response = ChatCompletionResponse()
        normalized_response.choices[0].message.content = response.content[0].text

        # Map Anthropic stop_reason to OpenAI finish_reason
        finish_reason_mapping = {
            "end_turn": "stop",
            "max_tokens": "length",
            "tool_use": "tool_calls",
            # Add more mappings as needed
        }
        normalized_response.choices[0].finish_reason = finish_reason_mapping.get(
            response.stop_reason, "stop"
        )

        # Add usage information
        normalized_response.usage = {
            "prompt_tokens": response.usage.input_tokens,
            "completion_tokens": response.usage.output_tokens,
            "total_tokens": response.usage.input_tokens + response.usage.output_tokens,
        }

        # Check if the response contains tool usage
        if response.stop_reason == "tool_use":
            # Find the tool_use content
            tool_call = next(
                (content for content in response.content if content.type == "tool_use"),
                None,
            )

            if tool_call:
                function = Function(
                    name=tool_call.name, arguments=json.dumps(tool_call.input)
                )
                tool_call_obj = ChatCompletionMessageToolCall(
                    id=tool_call.id, function=function, type="function"
                )
                # Get the text content if any
                text_content = next(
                    (
                        content.text
                        for content in response.content
                        if content.type == "text"
                    ),
                    "",
                )

                message = Message(
                    content=text_content or None,
                    tool_calls=[tool_call_obj] if tool_call else None,
                    role="assistant",
                    refusal=None,
                )
                normalized_response.choices[0].message = message
                return normalized_response

        # Handle regular text response
        message = Message(
            content=response.content[0].text,
```
Reduces code duplication and enhances maintainability by refactoring repetitive conversion code into a separate function.
Abstract the repetitive conversion logic into separate functions for clarity and reuse.
```diff
         if "max_tokens" not in kwargs:
             kwargs["max_tokens"] = DEFAULT_MAX_TOKENS
         return self.normalize_response(
             self.client.messages.create(
                 model=model, system=system_message, messages=messages, **kwargs
             )
         # Handle tool calls. Convert from OpenAI tool calls to Anthropic tool calls.
         if "tools" in kwargs:
             kwargs["tools"] = convert_openai_tools_to_anthropic(kwargs["tools"])
         # Convert tool results from OpenAI format to Anthropic format
         converted_messages = []
         for msg in messages:
             if isinstance(msg, dict):
                 if msg["role"] == "tool":
                     # Convert tool result message
                     converted_msg = {
                         "role": "user",
                         "content": [
                             {
                                 "type": "tool_result",
                                 "tool_use_id": msg["tool_call_id"],
                                 "content": msg["content"],
                             }
                         ],
                     }
                     converted_messages.append(converted_msg)
                 elif msg["role"] == "assistant" and "tool_calls" in msg:
                     # Handle assistant messages with tool calls
                     content = []
                     if msg.get("content"):
                         content.append({"type": "text", "text": msg["content"]})
                     for tool_call in msg["tool_calls"]:
                         content.append(
                             {
                                 "type": "tool_use",
                                 "id": tool_call["id"],
                                 "name": tool_call["function"]["name"],
                                 "input": json.loads(tool_call["function"]["arguments"]),
                             }
                         )
                     converted_messages.append({"role": "assistant", "content": content})
                 else:
                     # Keep other messages as is
                     converted_messages.append(
                         {"role": msg["role"], "content": msg["content"]}
                     )
             else:
                 # Handle Message objects
                 if msg.role == "tool":
                     converted_msg = {
                         "role": "user",
                         "content": [
                             {
                                 "type": "tool_result",
                                 "tool_use_id": msg.tool_call_id,
                                 "content": msg.content,
                             }
                         ],
                     }
                     converted_messages.append(converted_msg)
                 elif msg.role == "assistant" and msg.tool_calls:
                     # Handle Message objects with tool calls
                     content = []
                     if msg.content:
                         content.append({"type": "text", "text": msg.content})
                     for tool_call in msg.tool_calls:
                         content.append(
                             {
                                 "type": "tool_use",
                                 "id": tool_call.id,
                                 "name": tool_call.function.name,
                                 "input": json.loads(tool_call.function.arguments),
                             }
                         )
                     converted_messages.append({"role": "assistant", "content": content})
                 else:
                     converted_messages.append(
                         {"role": msg.role, "content": msg.content}
                     )
         print(converted_messages)
         response = self.client.messages.create(
             model=model, system=system_message, messages=converted_messages, **kwargs
         )
         print(response)
         return self.normalize_response(response)
     def normalize_response(self, response):
         """Normalize the response from the Anthropic API to match OpenAI's response format."""
         normalized_response = ChatCompletionResponse()
         normalized_response.choices[0].message.content = response.content[0].text
         # Map Anthropic stop_reason to OpenAI finish_reason
         finish_reason_mapping = {
             "end_turn": "stop",
             "max_tokens": "length",
             "tool_use": "tool_calls",
             # Add more mappings as needed
         }
         normalized_response.choices[0].finish_reason = finish_reason_mapping.get(
             response.stop_reason, "stop"
         )
         # Add usage information
         normalized_response.usage = {
             "prompt_tokens": response.usage.input_tokens,
             "completion_tokens": response.usage.output_tokens,
             "total_tokens": response.usage.input_tokens + response.usage.output_tokens,
         }
         # Check if the response contains tool usage
         if response.stop_reason == "tool_use":
             # Find the tool_use content
             tool_call = next(
                 (content for content in response.content if content.type == "tool_use"),
                 None,
             )
             if tool_call:
                 function = Function(
                     name=tool_call.name, arguments=json.dumps(tool_call.input)
                 )
                 tool_call_obj = ChatCompletionMessageToolCall(
                     id=tool_call.id, function=function, type="function"
                 )
                 # Get the text content if any
                 text_content = next(
                     (
                         content.text
                         for content in response.content
                         if content.type == "text"
                     ),
                     "",
                 )
                 message = Message(
                     content=text_content or None,
                     tool_calls=[tool_call_obj] if tool_call else None,
                     role="assistant",
                     refusal=None,
                 )
                 normalized_response.choices[0].message = message
                 return normalized_response
         # Handle regular text response
         message = Message(
             content=response.content[0].text,
+        converted_messages = self.refactor_conversion_logic(messages)
```
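The extraction the suggestion calls `refactor_conversion_logic` is not defined anywhere in the diff. One way the dict-message branch could be factored out, shown as a self-contained sketch (the function name is hypothetical; `Message` objects would follow the same branches with attribute access):

```python
import json

def convert_messages_to_anthropic(messages):
    """Convert OpenAI-style message dicts into Anthropic-style content blocks."""
    converted = []
    for msg in messages:
        if msg["role"] == "tool":
            # Tool results become user messages carrying a tool_result block.
            converted.append({
                "role": "user",
                "content": [{
                    "type": "tool_result",
                    "tool_use_id": msg["tool_call_id"],
                    "content": msg["content"],
                }],
            })
        elif msg["role"] == "assistant" and msg.get("tool_calls"):
            # Assistant tool calls become tool_use blocks, preceded by any text.
            content = []
            if msg.get("content"):
                content.append({"type": "text", "text": msg["content"]})
            for tc in msg["tool_calls"]:
                content.append({
                    "type": "tool_use",
                    "id": tc["id"],
                    "name": tc["function"]["name"],
                    "input": json.loads(tc["function"]["arguments"]),
                })
            converted.append({"role": "assistant", "content": content})
        else:
            # Plain user/assistant/system messages pass through unchanged.
            converted.append({"role": msg["role"], "content": msg["content"]})
    return converted
```

Pulling the loop out of `chat_completions_create` in this way lets the conversion be tested with plain dicts, without constructing a client.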
```python
class Message:
    def __init__(self):
        self.content = None
```

Utilize enums to enforce constraints on the `role` and type fields, enhancing robustness and maintaining type safety. Use enumerations to enforce valid values and enhance type safety in the `role` and type fields.

```diff
+    content: Optional[str]
+    tool_calls: Optional[list[ChatCompletionMessageToolCall]]
+    role: Optional[Literal["user", "assistant", "system"]] = Field(..., description="Role of the message sender", enum=["user", "assistant", "system"])
+    refusal: Optional[str]
```
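The constraint suggested above can also be expressed with a proper enumeration. A sketch using a plain dataclass rather than Pydantic, so the validation step is explicit (the names are illustrative, and `tool` is added beyond the suggested literal because the PR converts tool-role messages):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Role(str, Enum):
    USER = "user"
    ASSISTANT = "assistant"
    SYSTEM = "system"
    TOOL = "tool"  # not in the suggested Literal, but used by this PR's conversions

@dataclass
class Message:
    content: Optional[str] = None
    role: Role = Role.ASSISTANT
    refusal: Optional[str] = None

    def __post_init__(self):
        # Coerce plain strings so Message(role="user") still works,
        # while rejecting values outside the enum.
        self.role = Role(self.role)
```

Because `Role` subclasses `str`, the enum members still serialize and compare like the raw strings the providers expect.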
PR Review Summary

Overall Review:
The PR introduces profound changes across the AISuite to enhance message and tool calling capabilities for various AI providers, leveraging structured models and APIs specific to providers like AWS and Anthropic. Main enhancements include significant modifications to the message classes, sophisticated JSON data handling, and the incorporation of a

Commit logs and file changes suggest iterative development and testing, focusing on incremental integration and ensuring compatibility with external APIs. This approach helps isolate features for specific providers while maintaining a generalized framework that could be adapted for additional AI service providers as needed.
Recommendations

Recommendation #1

To enhance the robustness of `normalize_response`:

def normalize_response(self, response):
    normalized_response = ChatCompletionResponse()
    finish_reason_mapping = {
        "end_turn": "stop",
        "max_tokens": "length",
        "tool_use": "tool_calls",
    }
    normalized_response.choices[0].finish_reason = finish_reason_mapping.get(response.stop_reason, "stop")
    normalized_response.usage = {
        "prompt_tokens": response.usage.input_tokens,
        "completion_tokens": response.usage.output_tokens,
        "total_tokens": response.usage.input_tokens + response.usage.output_tokens,
    }
    if response.stop_reason == "tool_use":
        tool_call = next((content for content in response.content if getattr(content, "type", None) == "tool_use"), None)
        if tool_call is not None and getattr(tool_call, "id", None):
            function = Function(name=tool_call.name, arguments=json.dumps(tool_call.input))
            tool_call_obj = ChatCompletionMessageToolCall(
                id=tool_call.id, function=function, type="function"
            )
            text_content = next(
                (content.text for content in response.content if getattr(content, "type", None) == "text"), ""
            )
            message = Message(
                content=text_content or None,
                tool_calls=[tool_call_obj],
                role="assistant",
                refusal=None,
            )
            normalized_response.choices[0].message = message
        else:
            raise ValueError("Expected 'tool_use' content is missing or malformed in API response.")
    return normalized_response

Recommendation #2

Implement rigorous input validation and sanitization within `transform_tool_call_to_openai`:

def transform_tool_call_to_openai(self, response):
    if response.get("stopReason") != "tool_use":
        return None
    try:
        tool_calls = []
        for content in response["output"]["message"]["content"]:
            if "toolUse" in content:
                tool = content["toolUse"]
                tool_calls.append({
                    "type": "function",
                    "id": tool["toolUseId"],
                    "function": {
                        "name": tool["name"],
                        "arguments": json.dumps(tool["input"], cls=SecureJSONEncoder),
                    },
                })
    except json.JSONDecodeError as e:
        raise ValueError(f"JSON parsing error: {str(e)}")
    return {
        "role": "assistant",
        "content": None,
        "tool_calls": tool_calls,
        "refusal": None,
} In the above snippet, |
PR Review Summary

Overall Review:

The PR introduces significant enhancements across multiple AI service providers within the aisuite, focusing on structured message handling and tool calling functionality. These changes span introducing new models with Pydantic and implementing a `ToolManager` utility.
Recommendations

Recommendation #1

To mitigate potential security vulnerabilities related to JSON parsing, fortify the data handling process with improved schema validation and thorough error handling. Integrate a custom JSON encoder to sanitize inputs before parsing, and ensure functional test coverage:

class SecureJSONEncoder(json.JSONEncoder):
    def default(self, o):
        # Note: json only calls default() for objects it cannot serialize
        # natively, so plain strings must be sanitized before encoding.
        if isinstance(o, str):
            return o.replace('"', '\\"').replace('<', '\\<')
        return json.JSONEncoder.default(self, o)
def transform_tool_call_to_openai(self, response):
    if response.get("stopReason") != "tool_use":
        return None
    tool_calls = []
    try:
        for content in response["output"]["message"]["content"]:
            if "toolUse" in content:
                tool = content["toolUse"]
                secure_args = json.dumps(tool["input"], cls=SecureJSONEncoder)
                tool_calls.append({
                    "type": "function",
                    "id": tool["toolUseId"],
                    "function": {
                        "name": tool["name"],
                        "arguments": secure_args,
                    },
                })
    except json.JSONDecodeError as e:
        raise ValueError(f"JSON parsing error: {str(e)}")
    return {
        "role": "assistant",
        "content": None,
        "tool_calls": tool_calls,
        "refusal": None,
    }

Properly structure your error handling and testing to validate defense against common vulnerabilities in JSON processing.

Recommendation #2

Rework the `normalize_response` method:

def normalize_response(self, response):
    normalized_response = ChatCompletionResponse()
    finish_reason_mapping = {
        "end_turn": "stop",
        "max_tokens": "length",
        "tool_use": "tool_calls",
    }
    normalized_response.choices[0].finish_reason = finish_reason_mapping.get(response.stop_reason, "stop")
    normalized_response.usage = {
        "prompt_tokens": response.usage.input_tokens,
        "completion_tokens": response.usage.output_tokens,
        "total_tokens": response.usage.input_tokens + response.usage.output_tokens,
    }
    if response.stop_reason == "tool_use":
        tool_call = next((content for content in response.content if getattr(content, "type", None) == "tool_use"), None)
        if tool_call is not None and getattr(tool_call, "id", None):
            tool_call_id = tool_call.id
            function = Function(name=tool_call.name, arguments=json.dumps(tool_call.input))
            tool_call_obj = ChatCompletionMessageToolCall(id=tool_call_id, function=function, type="function")
            text_content = next(
                (content.text for content in response.content if getattr(content, "type", None) == "text"), ""
            )
            message = Message(
                content=text_content or None,
                tool_calls=[tool_call_obj],
                role="assistant",
                refusal=None,
            )
            normalized_response.choices[0].message = message
        else:
            raise ValueError("Expected 'tool_use' content is missing or malformed in API response.")
    return normalized_response

This modification ensures that all necessary identifiers and data types are checked before processing, improving the resilience of your application against data inconsistencies and related errors.
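One detail worth checking when implementing the sanitizing encoder discussed above: in a Python string literal, `'\"'` is identical to `'"'` (the backslash is consumed by the literal), so `o.replace('"', '\"')` is a no-op; producing a literal backslash requires `'\\"'`. A quick self-contained check:

```python
s = 'say "hi"'

# '\"' in source code is just '"', so this replace maps '"' to '"' (a no-op).
noop = s.replace('"', '\"')

# '\\"' in source code is a backslash followed by '"': this actually escapes.
escaped = s.replace('"', '\\"')
```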
PR Code Suggestions Summary ✨
if response.get("stopReason") != "tool_use":
    return None

tool_calls = []
for content in response["output"]["message"]["content"]:
    if "toolUse" in content:
        tool = content["toolUse"]
        tool_calls.append(
            {
                "type": "function",
                "id": tool["toolUseId"],
                "function": {
                    "name": tool["name"],
                    "arguments": json.dumps(tool["input"]),
                },
            }
        )

if not tool_calls:
Improve code readability by using descriptive variable names for complex data structures.
Suggested change:

tool_calls = []
for content in response["output"]["message"]["content"]:
    if "toolUse" in content:
        tool = content["toolUse"]
        tool_call_details = {
            "type": "function",
            "id": tool["toolUseId"],
            "function": {
                "name": tool["name"],
                "arguments": json.dumps(tool["input"]),
            },
        }
        tool_calls.append(tool_call_details)
PR Review Summary

Overall Review:

This PR makes necessary changes to the webhook handling logic.

Recommendations

Recommendation #1

No changes needed.
PR Review Summary

Overall Review:

The Pull Request aims to enhance webhook handling logic through better error management, data validation, and structured code improvements. Key modifications encompass significant overhauls to the webhook processing system to increase reliability with improved error handling and logging mechanisms. The PR's expansive scope touches on message handling with a redefinition of the `Message` class.

Recommendations

[Configure settings at: Archie AI - Automated PR Review]
PR Code Suggestions Summary ✨
def normalize_response(self, response):
    """Normalize the response from the Bedrock API to match OpenAI's response format."""
Improve error handling by verifying the presence of necessary dictionary keys to prevent runtime errors.
Consider validating the 'message_dict' dictionary prior to access to prevent 'KeyError'.
Suggested change:

message_dict = (
    message.model_dump() if hasattr(message, "model_dump") else message
)
if "role" not in message_dict or "content" not in message_dict:
    print(f"Invalid message format: {message_dict}")
    continue
PR Review Summary

Overall Review:

This PR introduces significant architectural and functional changes to the aisuite, focusing on enhanced message frameworks and tool calling integrations. The adoption of a structured `Message` model underpins these changes.
Recommendations

Recommendation #1

To address this issue, consider enhancing `normalize_response`:

def normalize_response(self, response):
    normalized_response = ChatCompletionResponse()
    finish_reason_mapping = {
        "end_turn": "stop",
        "max_tokens": "length",
        "tool_use": "tool_calls",
    }
    normalized_response.choices[0].finish_reason = finish_reason_mapping.get(response.stop_reason, "stop")
    if response.stop_reason == "tool_use":
        tool_call = next((tc for tc in response.content if tc['type'] == "tool_use"), None)
        if tool_call and 'id' in tool_call:
            function = Function(name=tool_call['name'], arguments=json.dumps(tool_call['input']))
            tool_call_obj = ChatCompletionMessageToolCall(id=tool_call['id'], function=function, type="function")
            text_content = next((text['text'] for text in response.content if 'text' in text), "")
            message = Message(content=text_content or None, tool_calls=[tool_call_obj], role="assistant", refusal=None)
            normalized_response.choices[0].message = message
        else:
            raise ValueError("Expected 'tool_use' content is missing or malformed in API response.")
    return normalized_response

This code will introduce robust checks and a clearer mapping and management of different finish reasons.

Recommendation #2

To enhance the security of `transform_tool_call_to_openai`:

def transform_tool_call_to_openai(self, response):
    if 'stopReason' in response and response['stopReason'] == "tool_use":
        try:
            tool_calls = []
            for content in response['output']['message']['content']:
                if 'toolUse' in content:
                    tool = content['toolUse']
                    tool_calls.append({
                        'type': 'function',
                        'id': tool['toolUseId'],
                        'function': {
                            'name': tool['name'],
                            'arguments': json.dumps(tool['input'], cls=SecureJSONEncoder)
                        }
                    })
            return {
                'role': 'assistant',
                'content': None,
                'tool_calls': tool_calls,
                'refusal': None
            }
        except json.JSONDecodeError as e:
            raise ValueError(f"JSON parsing error: {e}")
    return None

Implement `SecureJSONEncoder` to sanitize tool inputs during serialization.
PR Code Suggestions Summary ✨
from aisuite.framework.message import Message
from typing import Literal, Optional


class Choice:
    def __init__(self):
Change the `finish_reason` property to use an explicit enum instead of a string literal to enforce better type safety and clarity. Use enums explicitly rather than strings for `finish_reason` to improve code reliability and maintenance.
Suggested change:

from typing import Literal, Optional
from enum import Enum


class FinishReason(Enum):
    STOP = "stop"
    TOOL_CALLS = "tool_calls"


class Choice:
    def __init__(self):
        self.message = Message()
        self.finish_reason: Optional[FinishReason] = None
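If the enum route is taken, comparisons become explicit and typo-proof. A small self-contained usage sketch (the `Message` field is omitted here to avoid the aisuite import; names mirror the suggestion above):

```python
from enum import Enum
from typing import Optional


class FinishReason(Enum):
    STOP = "stop"
    TOOL_CALLS = "tool_calls"


class Choice:
    def __init__(self):
        self.finish_reason: Optional[FinishReason] = None


choice = Choice()
choice.finish_reason = FinishReason.TOOL_CALLS

# Enum members compare by identity and expose the wire value via .value.
is_tool_call = choice.finish_reason is FinishReason.TOOL_CALLS
wire_value = choice.finish_reason.value
```

Lookup by value (`FinishReason("stop")`) also gives a single place where an unknown finish reason fails loudly instead of propagating a typo'd string.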
                    "name": tool_call.function.name,
                    "input": json.loads(tool_call.function.arguments),
                }
            )
        converted_messages.append({"role": "assistant", "content": content})
    else:
        converted_messages.append(
            {"role": msg.role, "content": msg.content}
Refactor the transformation of tool messages into a separate function for clarity and potential reuse.
Separate transformation logic into dedicated functions for better modularity and reuse.
Suggested change:

def convert_tool_message(msg):
    return {
        "role": "user",
        "content": [
            {
                "type": "tool_result",
                "tool_use_id": msg["tool_call_id"],
                "content": msg["content"],
            }
        ],
    }


for msg in messages:
    if isinstance(msg, dict) and msg["role"] == "tool":
        converted_messages.append(convert_tool_message(msg))
PR Review Summary

Overall Review:

Based on a thorough review, this PR introduces substantial enhancements aimed at bridging various AI provider interfaces through more sophisticated handling of messages and tool calls within the aisuite. It employs refined message models.

However, the extensive nature of these modifications warrants a critical analysis to ensure that the new functionalities integrate well without introducing regressions or vulnerabilities in the current system. Specific attention must be paid to the transformation and normalization processes introduced, especially those involving JSON data handling and interactive message conversions between different AI providers.
Recommendations

Recommendation #1

To address the potential inconsistencies and errors in handling the `tool_call_id`:

def normalize_response(self, response):
    # Initial response normalization setup here

    # Check that the tool_call_id exists and is correctly formatted
    if 'tool_call_id' not in response or not isinstance(response['tool_call_id'], str):
        raise ValueError("Invalid or missing 'tool_call_id' in the response")

    # Further processing using tool_call_id
    # ...
    return normalized_response

These checks will help you avoid runtime errors and ensure the integrity of the data handling process.

Recommendation #2

It is crucial to implement rigorous validation mechanisms for JSON data being parsed in `transform_tool_call_to_openai`:

import json
from jsonschema import validate, ValidationError

# Define a schema matching the Bedrock toolUse structure that is validated below
tool_call_schema = {
    "type": "object",
    "properties": {
        "toolUseId": {"type": "string"},
        "name": {"type": "string"},
        "input": {"type": "object"}
    },
    "required": ["toolUseId", "name", "input"]
}
def transform_tool_call_to_openai(self, response):
    try:
        tool_calls = []
        for content in response['output']['message']['content']:
            if 'toolUse' in content:
                # Validate the JSON structure before processing
                validate(instance=content['toolUse'], schema=tool_call_schema)
                tool_calls.append(
                    {
                        "type": "function",
                        "id": content['toolUse']['toolUseId'],
                        "function": {
                            "name": content['toolUse']['name'],
                            "arguments": json.dumps(content['toolUse']['input'])
                        }
                    }
                )
        return {
            "role": "assistant",
            "content": None,
            "tool_calls": tool_calls,
            "refusal": None
        }
    except ValidationError as e:
        raise ValueError(f"JSON validation error: {str(e)}")

This modification uses the `jsonschema` library to reject malformed tool-call payloads before they are processed.
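For environments where a `jsonschema` dependency is undesirable, the same structural guarantee can be approximated with a small stdlib-only check (a hypothetical helper, mirroring the required toolUse keys):

```python
def check_tool_use(tool_use: dict) -> None:
    """Raise ValueError unless tool_use has the keys a Bedrock toolUse entry needs."""
    # (required key, expected type) pairs for a toolUse payload
    required = [("toolUseId", str), ("name", str), ("input", dict)]
    for key, expected_type in required:
        if key not in tool_use:
            raise ValueError(f"missing required key: {key}")
        if not isinstance(tool_use[key], expected_type):
            raise ValueError(f"key {key} must be {expected_type.__name__}")
```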
PR Code Suggestions Summary ✨
class Message(BaseModel):
    content: Optional[str]
    tool_calls: Optional[list[ChatCompletionMessageToolCall]]
    role: Optional[Literal["user", "assistant", "system"]]
Improves type safety by specifying the `List` type from `typing` for a variable. Use more specific type hints for `list` or `dict` types to enhance type safety and clarity.
Suggested change:

from typing import List

tool_calls: Optional[List[ChatCompletionMessageToolCall]]
PR Review Summary

Overall Review:

This PR introduces a series of enhancements focusing on improving message handling and integrating tool call functionalities within the aisuite. It targets a unified approach to message structuring and API interactions across different AI providers such as AWS and Anthropic. The changes revolve significantly around the utilization of a Pydantic-based `Message` model.
Recommendations

Recommendation #1

For robust error handling and parsing in `normalize_response`:

def normalize_response(self, response):
    normalized_response = ChatCompletionResponse()
    finish_reason_mapping = {
        "end_turn": "stop",
        "max_tokens": "length",
        "tool_use": "tool_calls",
    }
    normalized_response.choices[0].finish_reason = finish_reason_mapping.get(response.stop_reason, "stop")
    if response.stop_reason == "tool_use":
        tool_call = next((item for item in response.content if item.type == "tool_use"), None)
        if tool_call is not None and getattr(tool_call, "id", None):
            function = Function(name=tool_call.name, arguments=json.dumps(tool_call.input))
            tool_call_obj = ChatCompletionMessageToolCall(id=tool_call.id, function=function, type="function")
            text_content = next((item.text for item in response.content if item.type == "text"), "")
            message = Message(content=text_content or None, tool_calls=[tool_call_obj], role="assistant", refusal=None)
            normalized_response.choices[0].message = message
        else:
            raise ValueError("Expected 'tool_use' content is missing or malformed in API response.")
    return normalized_response

This ensures that every potential 'tool_use' is validated before processing to prevent errors related to missing or inconsistent data.

Recommendation #2

Implement structural and secure JSON parsing methods to handle data within `transform_tool_call_to_openai`:

def transform_tool_call_to_openai(self, response):
    if response.get("stopReason") != "tool_use":
        return None
    tool_calls = []
    try:
        for content in response['output']['message']['content']:
            if 'toolUse' in content:
                tool = content['toolUse']
                tool_calls.append({
                    'type': 'function',
                    'id': tool['toolUseId'],
                    'function': {
                        'name': tool['name'],
                        'arguments': json.dumps(tool['input'], cls=SecureJSONEncoder),
                    }
                })
    except json.JSONDecodeError as e:
        raise ValueError(f'JSON parsing error: {str(e)}')
    return {
        'role': 'assistant',
        'content': None,
        'tool_calls': tool_calls,
        'refusal': None
    }
class SecureJSONEncoder(json.JSONEncoder):
    def encode(self, o):
        result = super(SecureJSONEncoder, self).encode(o)
        # Add additional encoding rules if necessary
        return result

This tries to ensure that JSON parsing is handled securely, avoiding common vulnerabilities associated with processing unvalidated input.
PR Code Suggestions Summary ✨
        "name": tool_call.function.name,
        "input": json.loads(tool_call.function.arguments),
    }
)
Added error handling for potential failures in JSON parsing. Add exception handling for JSON parsing operations.

Existing code:

"input": json.loads(tool_call["function"]["arguments"]),

Improved code:

try:
    input = json.loads(tool_call["function"]["arguments"])
except json.JSONDecodeError:
    input = {}  # Handle or log the error appropriately
Suggested change:

try:
    input = json.loads(tool_call["function"]["arguments"])
except json.JSONDecodeError:
    input = {}  # Handle or log the error appropriately
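The failure mode this suggestion guards against is easy to reproduce with malformed arguments; a minimal sketch of the fallback behavior (a hypothetical helper, not the PR's code):

```python
import json


def parse_arguments(raw: str) -> dict:
    """Parse tool-call arguments, falling back to {} on malformed JSON."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return {}  # handle or log the error appropriately
```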
class Message(BaseModel):
    content: Optional[str]
    tool_calls: Optional[list[ChatCompletionMessageToolCall]]
Modify data structures to enforce immutability where applicable. Ensure immutability and constancy where appropriate by using `tuple` for non-modifiable data.

Existing code:

tool_calls: Optional[list[ChatCompletionMessageToolCall]]

Improved code:

tool_calls: Optional[tuple[ChatCompletionMessageToolCall, ...]]
PR Review Summary

Overall Review:

This PR introduces substantial enhancements to the aisuite by implementing structured message handling and tool calling capabilities across multiple AI providers. The changes include a new Pydantic-based Message class, provider-specific transformations for tool calls, and a ToolManager utility. While the implementation shows good architecture decisions like using Pydantic for validation and separating provider-specific logic, there are critical concerns around error handling in JSON transformations and potential security vulnerabilities in data parsing that need addressing. The standardization of message formats and tool call handling across providers is a positive architectural decision.
Recommendations

Recommendation #1

Implement secure JSON parsing with validation in `transform_tool_call_to_openai`:

def transform_tool_call_to_openai(self, response):
    if response.get("stopReason") != "tool_use":
        return None
    try:
        tool_calls = []
        for content in response["output"]["message"]["content"]:
            if "toolUse" in content:
                tool = content["toolUse"]
                # Validate input against schema before processing
                validated_input = json.loads(tool["input"], cls=SecureJSONDecoder)
                tool_calls.append({
                    "type": "function",
                    "id": tool["toolUseId"],
                    "function": {
                        "name": tool["name"],
                        "arguments": json.dumps(validated_input),
                    },
                })
    except json.JSONDecodeError as e:
        raise ValueError(f"JSON parsing error: {str(e)}")
    return {
        "role": "assistant",
        "content": None,
        "tool_calls": tool_calls,
        "refusal": None,
    }

Recommendation #2

Add robust error handling and validation in `normalize_response`:

def normalize_response(self, response):
    normalized_response = ChatCompletionResponse()
    finish_reason_mapping = {
        "end_turn": "stop",
        "max_tokens": "length",
        "tool_use": "tool_calls",
    }
    normalized_response.choices[0].finish_reason = finish_reason_mapping.get(response.stop_reason, "stop")
    if response.stop_reason == "tool_use":
        tool_call = next((content for content in response.content if getattr(content, "type", None) == "tool_use"), None)
        if tool_call is not None and getattr(tool_call, "id", None):
            tool_call_id = tool_call.id
            function = Function(name=tool_call.name, arguments=json.dumps(tool_call.input))
            tool_call_obj = ChatCompletionMessageToolCall(id=tool_call_id, function=function, type="function")
            text_content = next((content.text for content in response.content if getattr(content, "type", None) == "text"), "")
            message = Message(content=text_content or None, tool_calls=[tool_call_obj], role="assistant", refusal=None)
            normalized_response.choices[0].message = message
        else:
return normalized_response Recommendation #3Continue leveraging Pydantic's validation capabilities throughout the codebase, especially in areas dealing with external data. Consider adding custom validators where needed: from pydantic import BaseModel, validator
class Message(BaseModel):
    content: Optional[str]
    tool_calls: Optional[list[ChatCompletionMessageToolCall]]
    role: Optional[Literal["user", "assistant", "system"]]
    refusal: Optional[str]

    @validator('tool_calls')
    def validate_tool_calls(cls, v):
        if v is not None:
            for tool_call in v:
                if not tool_call.function or not tool_call.function.name:
                    raise ValueError("Tool call must have a function name")
        return v
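The same invariant can be checked without Pydantic; a dependency-free sketch of the tool-call validation with stand-in types (names are assumptions, not the PR's models):

```python
class Function:
    """Stand-in for the PR's Function model."""
    def __init__(self, name, arguments):
        self.name = name
        self.arguments = arguments


class ToolCall:
    """Stand-in for ChatCompletionMessageToolCall."""
    def __init__(self, function):
        self.function = function


def validate_tool_calls(tool_calls):
    """Reject tool calls lacking a function name (mirrors the Pydantic validator)."""
    if tool_calls is not None:
        for tc in tool_calls:
            if tc.function is None or not tc.function.name:
                raise ValueError("Tool call must have a function name")
    return tool_calls
```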
PR Code Suggestions Summary ✨
for tool_call in tool_calls:
    tool_name = tool_call.function.name
    arguments = json.loads(tool_call.function.arguments)

    if tool_name not in self._tools:
        raise ValueError(f"Tool '{tool_name}' not registered.")

    tool = self._tools[tool_name]
    tool_func = tool["function"]
    param_model = tool["param_model"]
Add comprehensive error handling for tool execution to prevent crashes and provide meaningful error messages.
Suggested change:

for tool_call in tool_calls:
    tool_name = tool_call.function.name
    arguments = json.loads(tool_call.function.arguments)

    if tool_name not in self._tools:
        raise ValueError(f"Tool '{tool_name}' not registered.")

    tool = self._tools[tool_name]
    tool_func = tool["function"]
    param_model = tool["param_model"]

    try:
        validated_args = param_model(**arguments)
        result = tool_func(**validated_args.model_dump())
        results.append(result)
        messages.append(
            {
                "role": "tool",
                "name": tool_name,
                "content": json.dumps(result),
                "tool_call_id": tool_call.id,
            }
        )
    except Exception as e:
        error_result = {"error": f"Tool execution failed: {str(e)}"}
        results.append(error_result)
        messages.append(
            {
                "role": "tool",
                "name": tool_name,
                "content": json.dumps(error_result),
                "tool_call_id": tool_call.id,
            }
        )
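A condensed, runnable version of that try/except pattern over a registry of tools (a stand-in dict registry, not the actual `ToolManager`):

```python
import json


def run_tool_calls(tools: dict, calls: list) -> list:
    """Execute (name, json_args) pairs, capturing per-tool failures instead of crashing."""
    results = []
    for name, raw_args in calls:
        try:
            if name not in tools:
                raise ValueError(f"Tool '{name}' not registered.")
            results.append(tools[name](**json.loads(raw_args)))
        except Exception as e:
            results.append({"error": f"Tool execution failed: {e}"})
    return results


tools = {"add": lambda a, b: a + b}
out = run_tool_calls(tools, [("add", '{"a": 2, "b": 3}'), ("missing", "{}")])
```

The key design point is that one failing tool call yields an error result in the same position rather than aborting the whole batch.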
content.append(
    {
        "type": "tool_use",
        "id": tool_call["id"],
Add a null check and error handling for JSON parsing in tool calls to prevent crashes on invalid JSON.
Suggested change:

"id": tool_call["id"],
"input": json.loads(tool_call["function"]["arguments"]) if tool_call["function"]["arguments"] else {},
        formatted_messages.append(bedrock_message)
    elif message_dict["role"] == "assistant":
        # Convert assistant message to Bedrock format
        bedrock_message = self.transform_assistant_to_bedrock(message_dict)
        if bedrock_message:
            formatted_messages.append(bedrock_message)
    else:
        formatted_messages.append(
Add rate limiting protection with exponential backoff to handle AWS API throttling.
Suggested change:

from time import sleep

max_retries = 3
retry_count = 0
while retry_count < max_retries:
    try:
        response = self.client.converse(
            modelId=model,  # baseModelId or provisionedModelArn
            messages=formatted_messages,
            system=system_message,
            inferenceConfig=inference_config,
            additionalModelRequestFields=additional_model_request_fields,
            toolConfig=tool_config,
        )
        break
    except Exception as e:
        if "ThrottlingException" in str(e) and retry_count < max_retries - 1:
            sleep(2 ** retry_count)  # Exponential backoff
            retry_count += 1
        else:
            raise
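The retry loop's behavior can be exercised with a fake client that throttles twice before succeeding (a self-contained sketch; the sleep is shortened to zero and the client is a stand-in, not the boto3 Bedrock client):

```python
from time import sleep


class FakeClient:
    """Raises ThrottlingException-style errors a fixed number of times, then succeeds."""
    def __init__(self, failures):
        self.failures = failures
        self.attempts = 0

    def converse(self):
        self.attempts += 1
        if self.attempts <= self.failures:
            raise RuntimeError("ThrottlingException: rate exceeded")
        return {"ok": True}


def call_with_backoff(client, max_retries=3):
    retry_count = 0
    while retry_count < max_retries:
        try:
            return client.converse()
        except Exception as e:
            if "ThrottlingException" in str(e) and retry_count < max_retries - 1:
                sleep(0)  # stand-in for 2 ** retry_count seconds
                retry_count += 1
            else:
                raise


client = FakeClient(failures=2)
response = call_with_backoff(client)
```

With `failures=2` and `max_retries=3`, the third attempt succeeds; with `failures=3` the final throttle would be re-raised to the caller.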
        content.append({"type": "text", "text": msg.content})
    for tool_call in msg.tool_calls:
        content.append(
Add robust error handling for missing or invalid response fields during normalization.
Suggested change:

try:
    normalized_response.choices[0].finish_reason = finish_reason_mapping.get(
        getattr(response, 'stop_reason', None), "stop"
    )
except (AttributeError, IndexError) as e:
    normalized_response.choices[0].finish_reason = "stop"
    print(f"Warning: Error normalizing response: {str(e)}")
# Add a tool function with or without a Pydantic model.
def add_tool(self, func: Callable, param_model: Optional[Type[BaseModel]] = None):
Add validation to prevent duplicate tool registration and ensure tools are callable.
Suggested change:

def add_tool(self, func: Callable, param_model: Optional[Type[BaseModel]] = None):
    """Register a tool function with metadata. If no param_model is provided, infer from function signature."""
    # Check callability first: func.__name__ may not exist on a non-callable.
    if not callable(func):
        raise TypeError("Tool must be a callable function")
    if func.__name__ in self._tools:
        raise ValueError(f"Tool with name '{func.__name__}' is already registered")
# Convert Message object to dict if necessary
message_dict = (
    message.model_dump() if hasattr(message, "model_dump") else message
Add type validation for message conversion to catch type errors early and prevent runtime failures.
Suggested change:

if not isinstance(message, (dict, BaseModel)):
    raise TypeError(f"Message must be a dict or BaseModel, got {type(message)}")
message_dict = (
    message.model_dump() if hasattr(message, "model_dump") else message
)
def execute_tool(self, tool_calls) -> tuple[list, list]:
Add input and parameter validation for tool execution to prevent invalid tool calls.

Suggested change:

def execute_tool(self, tool_calls) -> tuple[list, list]:
converted_messages = []
for msg in messages:
    if isinstance(msg, dict):
        if msg["role"] == "tool":
            # Convert tool result message
            converted_msg = {
                "role": "user",
                "content": [
                    {
                        "type": "tool_result",
                        "tool_use_id": msg["tool_call_id"],
                        "content": msg["content"],
                    }
                ],
Optimize message conversion using a role-to-converter mapping dictionary for better performance and clarity.
Suggested change:

converted_messages = []
msg_conversion_map = {
    "tool": lambda m: {
        "role": "user",
        "content": [
            {
                "type": "tool_result",
                "tool_use_id": m["tool_call_id"],
                "content": m["content"],
            }
        ],
    }
}
for msg in messages:
    if isinstance(msg, dict):
        converter = msg_conversion_map.get(msg["role"])
        if converter:
            converted_messages.append(converter(msg))
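Run against a sample message list, the dispatch-map version produces the same tool-result shape as the original branching code (a self-contained sketch wrapping the suggestion in a function):

```python
def convert_messages(messages):
    """Convert dict messages via a role -> converter dispatch map."""
    msg_conversion_map = {
        "tool": lambda m: {
            "role": "user",
            "content": [
                {
                    "type": "tool_result",
                    "tool_use_id": m["tool_call_id"],
                    "content": m["content"],
                }
            ],
        }
    }
    converted = []
    for msg in messages:
        if isinstance(msg, dict):
            converter = msg_conversion_map.get(msg["role"])
            if converter:
                converted.append(converter(msg))
    return converted


sample = [{"role": "tool", "tool_call_id": "t1", "content": "42"}]
converted = convert_messages(sample)
```

Adding support for another role then means adding one entry to the map rather than another `elif` branch.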
PR Review Summary

Overall Review:

This PR introduces significant enhancements to the aisuite by implementing a structured message handling system and tool calling capabilities across multiple AI providers (AWS, Anthropic, etc.). The changes include a new Pydantic-based `Message` class, provider-specific transformations for tool calls, and a `ToolManager` utility.
Recommendations

Recommendation #1

Implement a secure JSON parsing mechanism with schema validation:

class SecureJSONEncoder(json.JSONEncoder):
    def default(self, o):
        # Note: json only calls default() for objects it cannot serialize
        # natively, so plain strings must be sanitized before encoding.
        if isinstance(o, str):
            return o.replace('"', '\\"').replace('<', '\\<')
        return json.JSONEncoder.default(self, o)
def transform_tool_call_to_openai(self, response):
    if response.get("stopReason") != "tool_use":
        return None
    try:
        tool_calls = []
        for content in response["output"]["message"]["content"]:
            if "toolUse" in content:
                tool = content["toolUse"]
                tool_calls.append({
                    "type": "function",
                    "id": tool["toolUseId"],
                    "function": {
                        "name": tool["name"],
                        "arguments": json.dumps(tool["input"], cls=SecureJSONEncoder),
                    },
                })
    except (TypeError, ValueError) as e:  # json.dumps raises TypeError/ValueError, not JSONDecodeError
        raise ValueError(f"JSON serialization error: {str(e)}")
    return {
        "role": "assistant",
        "content": None,
        "tool_calls": tool_calls,
        "refusal": None,
    }

Recommendation #2
Add robust validation and error handling in the normalize_response method:

def normalize_response(self, response):
    normalized_response = ChatCompletionResponse()
    finish_reason_mapping = {
        "end_turn": "stop",
        "max_tokens": "length",
        "tool_use": "tool_calls",
    }
    normalized_response.choices[0].finish_reason = finish_reason_mapping.get(response.stop_reason, "stop")
    if response.stop_reason == "tool_use":
        # Content blocks are SDK objects, so check the attribute rather than using 'in'
        tool_call = next((content for content in response.content if content.type == "tool_use"), None)
        if tool_call is not None:
            tool_call_id = tool_call.id
            function = Function(name=tool_call.name, arguments=json.dumps(tool_call.input))
            tool_call_obj = ChatCompletionMessageToolCall(id=tool_call_id, function=function, type="function")
            text_content = next((content.text for content in response.content if content.type == "text"), "")
            message = Message(content=text_content or None, tool_calls=[tool_call_obj], role="assistant", refusal=None)
            normalized_response.choices[0].message = message
        else:
            raise ValueError("Expected 'tool_use' content is missing or malformed in API response.")
    return normalized_response

Recommendation #3
Create a base transformer class that providers can extend:

class BaseMessageTransformer:
    def transform_tool_result(self, message_dict):
        raise NotImplementedError

    def transform_assistant_message(self, message_dict):
        raise NotImplementedError

    def transform_user_message(self, message_dict):
        return {
            "role": message_dict["role"],
            "content": message_dict["content"]
        }

class AnthropicMessageTransformer(BaseMessageTransformer):
    def transform_tool_result(self, message_dict):
        # Anthropic specific implementation
        pass

class AWSMessageTransformer(BaseMessageTransformer):
    def transform_tool_result(self, message_dict):
        # AWS specific implementation
        pass

Recommendation #4
Continue using Pydantic models for data validation and consider extending their usage to other parts of the codebase where structured data validation is needed.
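A minimal sketch of the Pydantic-based validation Recommendation #4 describes, applied to guarding tool arguments; the WeatherParams model and run_tool helper are illustrative assumptions, not code from this PR:

```python
import json
from pydantic import BaseModel, ValidationError

class WeatherParams(BaseModel):
    # Hypothetical parameter model for a registered tool (not from the PR).
    location: str
    unit: str = "celsius"

def run_tool(raw_arguments: str) -> dict:
    # Validate model-supplied JSON before invoking the tool function.
    try:
        params = WeatherParams(**json.loads(raw_arguments))
    except (json.JSONDecodeError, ValidationError) as e:
        return {"error": f"Invalid tool arguments: {e}"}
    return {"location": params.location, "unit": params.unit}
```

Both malformed JSON and schema violations are caught at the boundary and turned into a structured error the model can read back, instead of crashing the tool loop.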
PR Code Suggestions Summary ✨
for tool_call in tool_calls:
    tool_name = tool_call.function.name
    arguments = json.loads(tool_call.function.arguments)

    if tool_name not in self._tools:
        raise ValueError(f"Tool '{tool_name}' not registered.")

    tool = self._tools[tool_name]
    tool_func = tool["function"]
    param_model = tool["param_model"]
Add comprehensive error handling for tool execution to prevent crashes and provide meaningful error messages
    try:
        validated_args = param_model(**arguments)
        result = tool_func(**validated_args.model_dump())
        results.append(result)
        messages.append(
            {
                "role": "tool",
                "name": tool_name,
                "content": json.dumps(result),
                "tool_call_id": tool_call.id,
            }
        )
    except Exception as e:
        error_result = {"error": f"Tool execution failed: {str(e)}"}
        results.append(error_result)
        messages.append(
            {
                "role": "tool",
                "name": tool_name,
                "content": json.dumps(error_result),
                "tool_call_id": tool_call.id,
            }
        )
content.append(
    {
        "type": "tool_use",
        "id": tool_call["id"],
Add null check and error handling for JSON parsing in tool calls
"id": tool_call["id"], | |
"input": json.loads(tool_call["function"]["arguments"]) if tool_call["function"]["arguments"] else {}, |
        formatted_messages.append(bedrock_message)
    elif message_dict["role"] == "assistant":
        # Convert assistant message to Bedrock format
        bedrock_message = self.transform_assistant_to_bedrock(message_dict)
        if bedrock_message:
            formatted_messages.append(bedrock_message)
    else:
        formatted_messages.append(
Add rate limiting protection with exponential backoff to handle AWS API throttling
from time import sleep

max_retries = 3
retry_count = 0
while retry_count < max_retries:
    try:
        response = self.client.converse(
            modelId=model,  # baseModelId or provisionedModelArn
            messages=formatted_messages,
            system=system_message,
            inferenceConfig=inference_config,
            additionalModelRequestFields=additional_model_request_fields,
            toolConfig=tool_config,
        )
        break
    except Exception as e:
        if "ThrottlingException" in str(e) and retry_count < max_retries - 1:
            sleep(2 ** retry_count)  # Exponential backoff
            retry_count += 1
        else:
            raise
content.append({"type": "text", "text": msg.content}) | ||
for tool_call in msg.tool_calls: | ||
content.append( |
Add robust error handling for missing or invalid response fields during normalization
content.append({"type": "text", "text": msg.content}) | |
for tool_call in msg.tool_calls: | |
content.append( | |
try: | |
normalized_response.choices[0].finish_reason = finish_reason_mapping.get( | |
getattr(response, 'stop_reason', None), "stop" | |
) | |
except (AttributeError, IndexError) as e: | |
normalized_response.choices[0].finish_reason = "stop" | |
print(f"Warning: Error normalizing response: {str(e)}") |
# Add a tool function with or without a Pydantic model.
def add_tool(self, func: Callable, param_model: Optional[Type[BaseModel]] = None):
Add validation to prevent duplicate tool registration and ensure tools are callable
def add_tool(self, func: Callable, param_model: Optional[Type[BaseModel]] = None):
    """Register a tool function with metadata. If no param_model is provided, infer from function signature."""
    if not callable(func):  # check callability first, since non-callables may lack __name__
        raise TypeError("Tool must be a callable function")
    if func.__name__ in self._tools:
        raise ValueError(f"Tool with name '{func.__name__}' is already registered")
# Convert Message object to dict if necessary
message_dict = (
    message.model_dump() if hasattr(message, "model_dump") else message
Add type validation for message conversion to catch type errors early
if not isinstance(message, (dict, BaseModel)):
    raise TypeError(f"Message must be a dict or BaseModel, got {type(message)}")
message_dict = (
    message.model_dump() if hasattr(message, "model_dump") else message
)
def execute_tool(self, tool_calls) -> tuple[list, list]:
Add input validation for tool execution to prevent invalid tool calls

def execute_tool(self, tool_calls) -> tuple[list, list]:
    converted_messages = []
    for msg in messages:
        if isinstance(msg, dict):
            if msg["role"] == "tool":
                # Convert tool result message
                converted_msg = {
                    "role": "user",
                    "content": [
                        {
                            "type": "tool_result",
                            "tool_use_id": msg["tool_call_id"],
                            "content": msg["content"],
                        }
                    ],
Optimize message conversion using a mapping dictionary for better performance
converted_messages = []
msg_conversion_map = {
    "tool": lambda m: {
        "role": "user",
        "content": [
            {
                "type": "tool_result",
                "tool_use_id": m["tool_call_id"],
                "content": m["content"],
            }
        ],
    }
}
for msg in messages:
    if isinstance(msg, dict):
        converter = msg_conversion_map.get(msg["role"])
        if converter:
            converted_messages.append(converter(msg))
Purpose
This PR introduces improvements to the webhook handling logic in the AI Suite framework. The key changes focus on enhancing the error handling capabilities, ensuring more robust and informative responses in case of issues during webhook processing.
Critical Changes
- Added a Message class to the aisuite.framework module to represent the contents of API responses that do not conform to the OpenAI style response. This class includes fields for content, tool_calls, role, and refusal.
- Updated the AnthropicProvider class, specifically for handling tool calls. The changes include mapping the Anthropic stop_reason to the corresponding OpenAI finish_reason and including usage information in the normalized response.
- Updated the AwsProvider class, including:
- Updated the GroqProvider and MistralProvider classes, ensuring seamless integration with the framework's tool management capabilities.
===== Original PR title and description ============
Original Title: Enhance webhook handling logic with improved error handling
Original Description:
Purpose
This PR updates the webhook processing system to improve reliability and maintainability.
It introduces better error handling mechanisms and cleaner code organization.
Critical Changes
- Updated app/handlers/webhook.py with robust error handling and logging
- Refactored response formatting for consistency across different webhook events
- Added input validation for webhook payloads to prevent processing invalid requests
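The payload validation mentioned in the last bullet could look like the following sketch; the required field names and helper layout are assumptions, since app/handlers/webhook.py itself is not shown in this PR text:

```python
import json
import logging

logger = logging.getLogger(__name__)

REQUIRED_FIELDS = {"event", "payload"}  # assumed schema, not from the PR

def parse_webhook(body: str) -> dict:
    """Reject malformed webhook bodies before they reach event handlers."""
    try:
        data = json.loads(body)
    except json.JSONDecodeError:
        logger.warning("Rejected webhook: body is not valid JSON")
        raise ValueError("invalid JSON payload")
    if not isinstance(data, dict):
        raise ValueError("payload must be a JSON object")
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    return data
```

Validating once at the entry point keeps every downstream event handler free of defensive checks.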
===== Original PR title and description ============
Original Title: Enhanced messaging in AI suite with refined webhook logic for multiple providers
Original Description:
Purpose
The purpose of this PR is to enhance and refactor webhook handling logic across our suite of AI services, improving both the flexibility and extensibility of messaging components. This includes modifying default behaviors and integrating new patterns for better interaction with varying provider interfaces and handling tool calls more effectively.
Critical Changes
- aisuite/__init__.py: Exposed the Message class to facilitate its use across the suite: from .framework.message import Message.
- aisuite/framework/__init__.py: Added Message into the framework module imports, making it globally accessible across the framework context.
- aisuite/framework/choice.py: Updated the Choice class initialization logic by setting finish_reason to either 'stop' or 'tool_calls' (conditionally handled based on tool interaction), enhancing message lifecycle management. In the Choice.__init__ function, the Message constructor now takes parameters like content and role.
- aisuite/framework/message.py: Reworked the Message class, transitioning it to a pydantic BaseModel, which now includes optional tool call handling (a tool_calls list of ChatCompletionMessageToolCall). This allows the message structure to dynamically support interactions where tools are triggered as part of the message lifecycle.
- aisuite/providers/anthropic_provider.py: Updated the AnthropicProvider class to adapt webhook logic from OpenAI to Anthropic format, particularly in the chat_completions_create function. Updated the normalize_response method to follow the finish reason mapping between Anthropic and OpenAI models.
- aisuite/providers/aws_provider.py:
- Additional files like aisuite/providers/groq_provider.py, aisuite/providers/mistral_provider.py, and aisuite/utils/tool_manager.py introduce new dependencies and logic for handling message transformations as per their respective provider specifications.
- The new notebooks and Python scripts (examples/SimpleToolCalling.ipynb, examples/tool_calling.py) demonstrate the usage and functional testing of the new tool calling and messaging capabilities across different environments.
===== Original PR title and description ============
Original Title: Refactor and enhance webhook handling logic across multiple providers
Original Description:
Purpose
The PR enhances the webhook handling logic by focusing on error handling, message normalization, and supporting tool call transformations for multiple providers. It aims to ensure a consistent and error-resistant integration across OpenAI, AWS, and Anthropic providers by refactoring tool call message formats and improving response normalization processes.
Critical Changes
- aisuite/framework/message.py: Introduced Pydantic BaseModel for the Message class with optional fields for content, tool_calls, role, and refusal. Included sub-models like Function and ChatCompletionMessageToolCall to support structured tool-related data, especially for handling tool call responses.
- aisuite/providers/anthropic_provider.py: Major enhancements to the method chat_completions_create include handling conversion between OpenAI tool calls and Anthropic tool calls, normalizing responses to align with OpenAI's format, and detailed print statements for debugging purposes.
- aisuite/providers/aws_provider.py: Enhanced error handling and message normalization in the method chat_completions_create. Included transformations between AWS-specific message formats and the required framework-compatible format. Tool call configurations are handled, and responses are formatted accordingly.
- aisuite/providers/openai_provider.py: Added handling for tool calls in the method chat_completions_create. This change ensures that tool call functions can be executed and managed coherently.
- aisuite/utils/tool_manager.py: ToolManager class introduced, orchestrating tool handling, executing registered tools based on the model's tool calls, and managing tool function registrations. Features robust JSON handling and error validation.
- examples/SimpleToolCalling.ipynb: New notebook example to demonstrate the usage of the ToolManager for handling and executing tool calls within the chat model environments.
- Additional server files (aisuite/providers/*.py for Groq and Mistral): Each specific provider file adapts tool handling and message transformation to comply with their proprietary or OpenAI-compatible formats, ensuring uniform functionality across different platforms.
===== Original PR title and description ============
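The ToolManager described in the change list above can be sketched minimally as a registry keyed by function name. This simplified version (without the PR's Pydantic param models or OpenAI-format tool specs) is an illustration, not the actual aisuite/utils/tool_manager.py:

```python
import json
from typing import Callable, Dict

class ToolManager:
    """Minimal registry: register plain functions, execute model-issued tool calls."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable] = {}

    def add_tool(self, func: Callable) -> None:
        # Register under the function's own name.
        self._tools[func.__name__] = func

    def execute(self, name: str, raw_arguments: str):
        if name not in self._tools:
            raise ValueError(f"Tool '{name}' not registered.")
        arguments = json.loads(raw_arguments)  # model-provided JSON arguments
        return self._tools[name](**arguments)

def add_numbers(a: int, b: int) -> int:
    return a + b

manager = ToolManager()
manager.add_tool(add_numbers)
result = manager.execute("add_numbers", '{"a": 2, "b": 3}')
```

The real class additionally validates arguments against a Pydantic model before calling the function and serializes results back into tool messages.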
Original Title: Introduce Enhanced Message Handling and Tool Integration
Original Description:
Purpose
This PR introduces enhanced webhook handling capabilities and integrates tool call functionality within the Message framework, improving the interaction of AI models with external tools. The change aims to bridge the gap between different model types and standardize message formation.
Critical Changes
- aisuite/__init__.py: Imports and initializes the new Message class from framework.message to facilitate enhanced message capabilities.
- aisuite/framework/__init__.py: Adds the Message import to the framework initializing module to make it available throughout the framework.
- aisuite/framework/choice.py: Redefines the Choice class to include an optional finish reason and now initializes Message with parameters like tool_calls and role. It sets up the message structure to incorporate tool interactions directly.
- aisuite/framework/message.py: Expands the Message class to a BaseModel with attributes for handling tool calls and setting message roles. This adaptation is essential for managing different types of message content, including the integration of tool results directly into the chat flow.
- aisuite/providers/anthropic_provider.py: Overhauls message handling in AnthropicProvider for cases with tool interactions. Conversions between different tool calling conventions (OpenAI to Anthropic) are handled within the chat_completions_create method, emphasizing message role conversions and JSON manipulations for tool results.
- aisuite/providers/aws_provider.py: Updates to implement tool interaction handling in the AWS provider environment, specifically adapting messages to/from the Bedrock tool usage format. Note: additional detailed transformation methods (transform_tool_call_to_openai, transform_tool_result_to_bedrock) aid in these conversions and ensure tool call returns are incorporated accurately into AI model responses.
- aisuite/utils/tool_manager.py: Implements ToolManager for centralized tool function registration and execution, supporting transformation of tools to API-specific configurations. It offers the execution functionality, which turns tool requests into actionable Python function calls that can return results suitable for AI model consumption.
- Documentation and notebook examples (examples/SimpleToolCalling.ipynb and examples/tool_calling.py) provide clear use cases and setup guidelines, demonstrating the practical application of tool management with AI interactions.
- Tests (tests/utils/test_tool_manager.py) ensure robustness of the ToolManager functionalities, covering scenarios from tool registration to execution with both valid and erroneous inputs.
===== Original PR title and description ============
Original Title: Enhanced webhook handling with expanded message framework and tool call integration
Original Description:
Purpose
This PR introduces refined error handling, supports additional message types, and improves integration with external tool calling APIs. It focuses on enhancing the messaging capabilities in response handling and transforming tool call formats between OpenAI and Anthropic platforms.
Critical Changes
- aisuite/__init__.py and aisuite/framework/__init__.py: Added import statements for the Message class to facilitate tool call responses and structured message processing.
- aisuite/framework/choice.py: Extended the Choice class to include a finish reason attribute and updated the Message instantiation to include new fields such as tool_calls, facilitating detailed responses.
- aisuite/framework/message.py: Introduced a new Pydantic model structure for the Message, with fields like tool_calls and role to support structured and role-based messaging, enhancing interaction with external APIs.
- aisuite/providers/anthropic_provider.py and aisuite/providers/aws_provider.py: Major changes include methods for transforming API-specific tool call messages between OpenAI and proprietary formats (AWS, Anthropic). These modifications include handling tool results and structured API responses, translating them into the expected format of the AI suite.
- aisuite/utils/tool_manager.py: This file introduces a ToolManager class managing tool function registration, execution, and tool serialization as required by different AI models. The manager also handles transformation between internal and external tooling formats.
===== Original PR title and description ============
Original Title: Enhanced webhook error handling with message models and framework changes
Original Description:
Purpose
The PR aims to enhance our webhook error handling capabilities by introducing new models for messages, integrating tool calls processing, and refining provider interactions. The changes will allow better handling of different completion scenarios and extend support for JSON and tool result transformations.
Critical Changes
- aisuite/__init__.py and aisuite/framework/__init__.py: Added import statements for the Message model, which centralizes message handling.
- aisuite/framework/message.py: Redefined the Message class using pydantic.BaseModel to enable data validation and error handling. Added new classes Function and ChatCompletionMessageToolCall to structure tool function calls.
- aisuite/framework/choice.py: Modified the Choice class to incorporate an optional 'finish_reason' to represent different reasons for dialogue termination, such as 'stop' or 'tool_calls'. Also updated instantiation of the Message class.
- aisuite/providers/anthropic_provider.py: Extensive modifications include adapting message and tool processing to align with Anthropic's API, handling transformation between OpenAI and Anthropic tool calling conventions, and comprehensive logic to map 'stop_reason' to 'finish_reason'.
- Provider files (aws_provider.py, groq_provider.py, mistral_provider.py): Adapted providers to handle messages and tool call transformations effectively, reinforcing the handling structure across different platforms and services.
- aisuite/utils/tool_manager.py: Added a new ToolManager class to manage tools, function registrations, and execution, which supports JSON structure conversions vital for tool calling processes across different providers.
===== Original PR title and description ============
Original Title: Integration of Enhanced AI Provider with Message Framework and Tool Call Support
Original Description:
Purpose
The PR augments the AI suite by integrating advanced message handling and tool calling features into the AI provider interfaces. This enables efficient interaction and response generation capabilities, catering to different roles and message types, including sophisticated tool call handling and mappings between different tool invocation standards.
Critical Changes
- In aisuite/__init__.py, added imports for the Message class, which centralizes message structures.
- In aisuite/framework/message.py, revamped the Message class using Pydantic for enhanced data validation and defined new classes for handling function arguments and tool call structures, enabling tool-call functionalities within messages.
- In aisuite/providers/anthropic_provider.py, introduced advanced message conversion logic to support different message types and tool calls in the Anthropic API, incorporating systematic conversion and normalization between OpenAI and Anthropic tool call standards.
- Major updates in other provider files (aws_provider.py, groq_provider.py, mistral_provider.py, and openai_provider.py) to support the new Message structures and functions, ensuring system-wide consistency in handling tool responses and message translations.
- New utility added: tool_manager.py for registering and managing tool functions with Pydantic validation, enhancing tool usage in AI models.
- Added an example notebook SimpleToolCalling.ipynb to illustrate the practical application of new functionalities in a simulated environment.
===== Original PR title and description ============
Original Title: Enhanced AI Provider Integration with Message Model and Tool Call Handling
Original Description:
Purpose
Modifies the 'Message' model and develops tool call conversion for improved integration with the AI provider API. This update prepares the framework for AI provider capabilities such as handling complex AI tool usages, like translating OpenAI tool calls to Anthropic and formatting responses aligned with OpenAI standards.
Critical Changes
- aisuite/framework/message.py: Introduced a BaseModel schema for Message and subclasses for tool calls, allowing stricter type and role specifications. Critical attributes such as content, tool_calls, and role are now optional but clearly defined using Pydantic models.
- aisuite/providers/anthropic_provider.py: Added conversion logic from the OpenAI tool call format to the Anthropic format in the chat_completions_create method, permitting seamless interchange of tool calls between different AI providers.
- aisuite/__init__.py and aisuite/framework/__init__.py: Imports updated to include the new Message class, ensuring all parts of the suite acknowledge the updated schema.
- Tool Handling Docs: Links to documentation for tool calling are now included in code comments, ensuring developers can reference how tool integration is structured.
===== Original PR title and description ============
Original Title: Refine Message model and encapsulate tool calls for AI provider integration
Original Description:
Purpose
This PR introduces a new structured messaging system and APIs to support tool calls across multiple AI providers. It aims to unify and streamline the interaction with different AI tools, enhancing the content creation capabilities of an AI suite.
Critical Changes
- aisuite/framework/message.py: Replaced the existing Message class with a BaseModel-derived structure from pydantic, supporting optional fields for tool calls, content, role, and refusal. This enhances type safety and data handling.
- aisuite/providers/anthropic_provider.py and aisuite/providers/aws_provider.py: Implemented methods for transforming tool calls and responses between OpenAI's format and their respective provider formats (Anthropic, AWS). Detailed handling includes converting tools from generic tool specifications to provider-specific configurations and normalizing responses back into the unified Message format expected by the system.
- Added the Message import in aisuite/__init__.py and aisuite/framework/__init__.py to enhance framework-level integration and ensure all components recognize the Message type throughout the application.
===== Original PR title and description ============
Original Title: Enhanced Message Structure and Integration with AI Provider APIs for Tool Calls
Original Description:
Purpose
This PR introduces significant enhancements to both the internal message structure and the API integration for handling communication with various AI providers. The main focus is to facilitate the correct mapping and usage of tool calls across different AI services, ensuring uniformity and maximizing compatibility.
Critical Changes
- `aisuite/__init__.py` and `aisuite/framework/__init__.py`: Imported the `Message` class to bolster message handling capabilities.
- `aisuite/framework/choice.py`: Added optional finish reasons and enriched `Message` initialization to include new fields like `tool_calls`, which are now distinctly initialized with appropriate defaults for clearer control over message flow.
- `aisuite/framework/message.py`: Revamped the `Message` class with Pydantic support for better data validation and structure. Introduced `ChatCompletionMessageToolCall` and `Function` as `BaseModel` subclasses to standardize tool call structures for AI providers.
- `aisuite/providers/anthropic_provider.py`: Heavily modified response and request handling for the Anthropic provider to accommodate the new structured tool calls. Added normalization from the Anthropic format to OpenAI's expected format, increasing interoperability.
- `aisuite/providers/aws_provider.py`: Adjusted the AWS provider to handle message transformations between Bedrock's expected format and the general tool call structure; also included tool configurations when provided.
- `aisuite/providers/groq_provider.py`, `aisuite/providers/mistral_provider.py`, `aisuite/providers/openai_provider.py`: Minor adjustments to each provider, mostly focusing on message transformation consistency.
- `aisuite/utils/tool_manager.py`: Introduced a `ToolManager` which handles the registration and execution of tools based on the updated message structures, allowing dynamic hooking of function calls specific to AI tasks.
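The PR text describes the new `pydantic`-based `Message` model without reproducing it. A minimal sketch along the lines described above (exact field types and defaults here are assumptions, not the PR's code):

```python
from typing import List, Literal, Optional

from pydantic import BaseModel


class Function(BaseModel):
    name: str
    arguments: str  # JSON-encoded arguments, as in the OpenAI tool call format


class ChatCompletionMessageToolCall(BaseModel):
    id: str
    function: Function
    type: Literal["function"] = "function"


class Message(BaseModel):
    content: Optional[str] = None
    role: Optional[str] = None
    refusal: Optional[str] = None
    tool_calls: Optional[List[ChatCompletionMessageToolCall]] = None
```

With this shape, a provider can build an assistant message carrying tool calls and get field validation for free.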
===== Original PR title and description ============
Original Title: Refactor message structure and tool call API integration for various AI providers
Original Description:
Purpose
The PR aims to enhance the message class compatibility between different AI service providers and standardizes the handling of AI tool calls across providers, ensuring more robust and flexible interactions.
Critical Changes
- `aisuite/framework/message.py`: `Message` class reworked utilizing Pydantic for type checking and structure validation. Added new attributes `tool_calls`, `role`, and `refusal` to store tool call information in an API-agnostic format. Fields on the `BaseModel` use type hints, ensuring data conforms to the expected schema with relevant types across different providers.
- `aisuite/framework/choice.py`: `Choice` class updated to include the optional attribute `finish_reason`, allowing the AI to specify why a session or request was completed, leveraging Python's `Literal` and `Optional` types for precise state descriptions.
- `aisuite/providers/anthropic_provider.py`: Responses normalized into `ChatCompletionResponse`, including converting response content and handling tool call conversion and normalization processes.
- `aisuite/providers/aws_provider.py`
- `aisuite/providers/groq_provider.py`, `aisuite/providers/mistral_provider.py`, `aisuite/providers/openai_provider.py`
- `aisuite/utils/tool_manager.py`: `ToolManager` class added to manage registration and execution of tools, handling parameter parsing, execution, and response formatting along with error handling.
- Additional files like Jupyter notebooks under `examples/` and tests in `tests/utils/`
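The `Optional`/`Literal` typing described for `Choice` could be expressed roughly as follows; this is a sketch, and the inclusion of `"length"` alongside `"stop"` and `"tool_calls"` is an assumption mirroring OpenAI's finish reasons:

```python
from typing import Literal, Optional


class Choice:
    def __init__(self):
        # No finish reason until the provider reports why generation stopped.
        # "length" is assumed here, mirroring OpenAI's finish reasons.
        self.finish_reason: Optional[Literal["stop", "length", "tool_calls"]] = None
        # Filled with a Message once a provider response is normalized.
        self.message = None
```

The `Literal` annotation documents the allowed states; static type checkers flag any other assignment, while at runtime the attribute behaves like a plain string.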
===== Original PR title and description ============
Original Title: Enhance message structure and adapt tool call conversions across AI providers
Original Description:
Purpose
The changes introduced update the message and tool calling frameworks to support diverse AI providers more effectively. The enhancements include adapting the message structure for different provider APIs, handling tool calls uniquely per provider, and integrating usage details to standardize responses. This ensures compatibility and flexibility within the AI suite's tooling system.
Critical Changes
- `aisuite/__init__.py` and `aisuite/framework/__init__.py`: Import the updated `Message` class, ensuring that all providers utilizing this class can leverage the new structured message format.
- `aisuite/framework/message.py`: Transition the `Message` class to `pydantic.BaseModel`, making it robust for data validation. The class now optionally includes tool calls, supporting various roles and potential refusal reasons, enhancing the message structure significantly.
- `aisuite/providers/anthropic_provider.py`: Adapt message and tool call handling specifically for the Anthropic API. This includes changing kwargs handling, transforming messages to match the provider's requirements, and appropriately converting tool call responses, addressing the provider's unique `stop_reason` and formatting responses into a standardized form.
- `aisuite/providers/aws_provider.py`: Implement response normalization and modification of incoming messages (handling tool calls and results) for the AWS Bedrock API. Detailed changes ensure message dicts are correctly formatted for this provider.
- `aisuite/providers/groq_provider.py`, `aisuite/providers/mistral_provider.py`, and `aisuite/providers/openai_provider.py`: Adjustments reflect the new message handling formats and detailed transformation functions, supporting structured handling of tool calls and messages across these providers.
- `aisuite/utils/tool_manager.py`: Adds a comprehensive suite for tool management, allowing tools to be added and managed with simplified or detailed specifications. The implementation focuses on function execution, handling both response and message formation effectively.
===== Original PR title and description ============
Original Title: Integrate advanced message structuring and tool call conversion for various providers
Original Description:
Purpose
The PR is aimed at enhancing message structuring within the aisuite, particularly for handling tool calls across multiple providers like AWS, Anthropic, and more. This ensures that messages are appropriately formatted and converted between different API standards, helping in seamless integration with these external services.
Critical Changes
- `aisuite/framework/message.py`: Introduced a new `Message` class using Pydantic for validating message schemas, including optional fields for `content`, `role`, and `tool_calls` to handle different message types more robustly.
- `aisuite/providers/anthropic_provider.py`: Added complex logic to transform tool call data between the OpenAI format and Anthropic's expected format, involving detailed conversions both ways and handling tool results separately.
- `aisuite/providers/aws_provider.py`: Enhanced the AWS provider to support tool calls by converting message formats to suit Bedrock's API and processing tool-related responses, aligning them with OpenAI standards.
- `aisuite/utils/tool_manager.py`: Implemented a tool manager utility class to manage and execute tools based on calls from messages, including a method to dynamically create Pydantic models from functions and handle execution. ...
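Dynamically creating Pydantic models from functions, as mentioned for the tool manager, can be sketched with `inspect` and `pydantic.create_model`; the helper name `params_model_from_function` and the `get_weather` example are hypothetical, not the PR's actual API:

```python
import inspect

from pydantic import create_model


def params_model_from_function(func):
    """Build a pydantic model describing func's parameters (hypothetical helper)."""
    fields = {}
    for name, param in inspect.signature(func).parameters.items():
        # Default to str when a parameter carries no annotation.
        annotation = (
            param.annotation if param.annotation is not inspect.Parameter.empty else str
        )
        # Ellipsis marks the field as required in pydantic.
        default = param.default if param.default is not inspect.Parameter.empty else ...
        fields[name] = (annotation, default)
    return create_model(f"{func.__name__}_params", **fields)


def get_weather(city: str, unit: str = "celsius") -> str:
    return f"20 degrees {unit} in {city}"


WeatherParams = params_model_from_function(get_weather)
args = WeatherParams(city="Paris")  # validates types and fills defaults
```

A model built this way can validate the JSON arguments of an incoming tool call before the function is invoked, turning malformed arguments into a clear validation error.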
===== Original PR title and description ============
Original Title: Enhance message structuring and add tool call handling across providers
Original Description:
Purpose
The changes are intended to introduce unified message structuring with `pydantic` models and extend feature support for handling tool calls, ensuring compatibility and streamlined responses for multiple AI model providers. This facilitates a more structured and extensible codebase catering to various AI-powered interactions.
Critical Changes
- In `aisuite/framework/__init__.py` and `aisuite/__init__.py`, the `Message` class is now imported, setting the stage for enhanced message handling throughout the project using this unified structure.
- In `aisuite/framework/message.py`, a significant upgrade has occurred: `Message` evolved from a basic class to a `pydantic.BaseModel`, incorporating optional fields for content and roles, plus detailed structures for tool calls such as `ChatCompletionMessageToolCall`, which inherits the runtime checking and documentation benefits of `pydantic`.
- `aisuite/framework/choice.py` now outlines optional finish reasons in tool calls, refining decision capture based on enhanced messaging structures reflecting roles and actions within the workflow.
- In `aisuite/providers/anthropic_provider.py`, extensive changes support translation between OpenAI- and Anthropic-specific tool calling requirements and align API responses with the desired structured message format, utilizing the new `Message` structure.
- Enhancements in `aisuite/providers/aws_provider.py` include better message transformations for AWS-specific formatting and tool usage handling, integrating the extended message and tool handling functionality across various AI model providers.
- Each provider (Groq, Mistral, AWS, and Anthropic) under `aisuite/providers/` now includes transformation and normalization flows to fit the new `Message` format, demonstrating a commitment to uniformity and extensibility in handling AI messaging responses.
- The addition of a new file, `aisuite/utils/tool_manager.py`, introduces a `ToolManager` class which manages tool functionalities as callable entities. This development suggests an expansion of tool handling capabilities that is likely to influence future features and integrations in the AI suite.
===== Original PR title and description ============
Original Title: Enhance message handling for various providers with new data structures
Original Description:
Purpose
This PR introduces improved support and handling of messages and tool calls across different language models and service providers, increasing interoperability and flexibility in defining and transmitting messages.
Critical Changes
- `aisuite/__init__.py` and `aisuite/framework/__init__.py`: `Message` class imported to initialize proper message handling.
- `aisuite/framework/message.py`: `Message` class defined as a Pydantic model, enhancing data validation and error handling. `Function` and `ChatCompletionMessageToolCall` added to handle complex data structures for tool calls.
- `aisuite/framework/choice.py`: `finish_reason` in the `Choice` class updated to potentially handle multiple end conditions (like "stop" or "tool_calls"), and the `Message` initialization restructured to align with the new attributes.
- `aisuite/providers/anthropic_provider.py` and `aisuite/providers/aws_provider.py`
- `aisuite/providers/groq_provider.py`, `aisuite/providers/mistral_provider.py`, and `aisuite/providers/openai_provider.py`: Adapted to the new `Message` structure, ensuring compatibility and consistent data handling across different model providers.
- `aisuite/utils/tool_manager.py`: `ToolManager` class added that manages tool registry and execution, supports dynamic parameter models, and integrates well with JSON-based interoperability.

===== Original PR title and description ============
Original Title: Introduce message handling enhancements using OpenAI and AWS formats
Original Description:
Purpose
This PR introduces systematic enhancements to the handling of messages and tool calling across multiple providers like Anthropics and AWS, aligning with their specific requirements and APIs.
Critical Changes
- `aisuite/framework/message.py`: Refactored the `Message` class to use `pydantic.BaseModel` for data validation, incorporating support for optional fields and structured tool calls. Major changes include defining new `Function` and `ChatCompletionMessageToolCall` sub-models to format tool calls accurately.
- `aisuite/providers/anthropic_provider.py`: Major enhancements adjust tool handling to conform to Anthropic's API, modifying how tool calls are transformed and normalized. Added comprehensive mapping of function calls and responses between the OpenAI and Anthropic formats. Additional debug logging is implemented to trace tool call transformations.
- `aisuite/providers/aws_provider.py`: Extended the AWS provider to convert OpenAI-formatted tool calls into AWS's expected JSON payload. Modifications ensure correct mapping and handling of tool results and assistant messages for both single and batch requests.
- Additional updates across various providers (`groq_provider.py`, `mistral_provider.py`, `aws_provider.py`) improve overall integration and consistency in message format transformation and tool calling processes.

===== Original PR title and description ============
Original Title: Enhanced tool calling and message handling for multiple providers
Original Description:
Purpose
This PR introduces improved tool calling capabilities, richer message object structures, and compatibility updates across various LLM providers to better handle diverse message and tool call formats.
Critical Changes
- `aisuite/__init__.py` and `aisuite/framework/__init__.py`: `Message` class exposed to enable enhanced message handling across the suite.
- `aisuite/framework/message.py`: Refactors the `Message` class using `BaseModel` for stricter validation and introduces the `ChatCompletionMessageToolCall` class to handle structured tool responses.
- `aisuite/framework/choice.py`: Updates the `Choice` class to include optional typing and an improved default setup for message instantiation, supporting a `finish_reason` attribute for completion state tracking.
- `aisuite/providers/anthropic_provider.py` and `aisuite/providers/aws_provider.py`
- `aisuite/utils/tool_manager.py`
- `examples/SimpleToolCalling.ipynb`: Demonstrates `ToolManager` with a mock API to handle tool interactions within chat environments.
- New Dependencies:
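The OpenAI-to-Bedrock tool call conversion described for the AWS provider can be sketched as follows; the target shape is based on Bedrock's Converse API `toolUse` content block, and should be treated as an illustrative assumption rather than the PR's exact code:

```python
import json


def openai_tool_call_to_bedrock(tool_call: dict) -> dict:
    """Convert one OpenAI-style assistant tool call into a Bedrock Converse
    toolUse content block (shape assumed from the Converse API)."""
    return {
        "toolUse": {
            "toolUseId": tool_call["id"],
            "name": tool_call["function"]["name"],
            # OpenAI carries arguments as a JSON string; Bedrock expects a dict.
            "input": json.loads(tool_call["function"]["arguments"]),
        }
    }


call = {
    "id": "call_123",
    "type": "function",
    "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'},
}
block = openai_tool_call_to_bedrock(call)
```

The inverse direction (normalizing a Bedrock `toolUse` back into the OpenAI shape) would re-serialize `input` with `json.dumps`, which is the main asymmetry between the two formats.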
===== Original PR title and description ============
Original Title: Rcp/tool calling
Original Description:
None