# Refine Message model and encapsulate tool calls for AI provider integration #2
base: main
## Conversation
## PR Review Summary

**Overall Review:** The provided PR adds significant functionality related to tool calling across various AI providers, integrating complex message transformation and normalization logic. It spans initializing and transforming messages according to the specifications required by APIs such as AWS and Anthropic. The modifications involve converting and normalizing tool responses and ensuring they are compatible with the AI models' expected formats. The changes are substantial and have a broad impact on the application, potentially affecting how tool results are processed, displayed, or logged.

### Recommendation #1

To address potential inconsistencies in parsing, harden `normalize_response`:

```python
def normalize_response(self, response):
    normalized_response = ChatCompletionResponse()
    finish_reason_mapping = {
        "end_turn": "stop",
        "max_tokens": "length",
        "tool_use": "tool_calls",
    }
    normalized_response.choices[0].finish_reason = finish_reason_mapping.get(
        response.stop_reason, "stop"
    )
    # Add usage information
    normalized_response.usage = {
        "prompt_tokens": response.usage.input_tokens,
        "completion_tokens": response.usage.output_tokens,
        "total_tokens": response.usage.input_tokens + response.usage.output_tokens,
    }
    # Check if the response contains tool usage and validate its format.
    # Anthropic content blocks are objects, not dicts, so use attribute
    # access (getattr) rather than `in` membership tests.
    if response.stop_reason == "tool_use":
        tool_call = next(
            (c for c in response.content if getattr(c, "type", None) == "tool_use"),
            None,
        )
        if tool_call is not None and getattr(tool_call, "id", None):
            function = Function(name=tool_call.name, arguments=json.dumps(tool_call.input))
            tool_call_obj = ChatCompletionMessageToolCall(
                id=tool_call.id, function=function, type="function"
            )
            text_content = next(
                (c.text for c in response.content if getattr(c, "type", None) == "text"),
                "",
            )
            message = Message(
                content=text_content or None,
                tool_calls=[tool_call_obj],
                role="assistant",
                refusal=None,
            )
            normalized_response.choices[0].message = message
        else:
            raise ValueError("Expected 'tool_use' content is missing or malformed in API response.")
    return normalized_response
```

This snippet includes error handling for missing or malformed `tool_use` data, ensuring the internal data paths are consistent before parsing.

### Recommendation #2

Implement rigorous input validation and error handling around JSON operations and external data handling. Use secure practices such as parameterized queries or templates when constructing requests or handling data. For instance:

```python
def transform_tool_call_to_openai(self, response):
    if response.get("stopReason") != "tool_use":
        return None
    try:
        tool_calls = []
        for content in response["output"]["message"]["content"]:
            if "toolUse" in content:
                tool = content["toolUse"]
                tool_calls.append({
                    "type": "function",
                    "id": tool["toolUseId"],
                    "function": {
                        "name": tool["name"],
                        # SecureJSONEncoder is assumed to be defined elsewhere.
                        "arguments": json.dumps(tool["input"], cls=SecureJSONEncoder),
                    },
                })
    except (KeyError, TypeError, ValueError) as e:
        # json.dumps raises TypeError/ValueError (not JSONDecodeError) on
        # unserializable input; KeyError covers missing response fields.
        raise ValueError(f"Tool call transformation error: {e}")
    return {
        "role": "assistant",
        "content": None,
        "tool_calls": tool_calls,
        "refusal": None,
    }
```

In this code snippet, …
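Both recommendations reference a `SecureJSONEncoder` without defining it. The name, the set of sensitive keys, and the masking behavior below are assumptions rather than part of the PR; one minimal sketch redacts sensitive-looking keys before serialization:

```python
import json

# Hypothetical: which key names count as sensitive is an assumption.
SENSITIVE_KEYS = {"password", "api_key", "token", "secret"}

def _mask(value):
    """Recursively replace values of sensitive-looking keys with '***'."""
    if isinstance(value, dict):
        return {
            k: "***" if k.lower() in SENSITIVE_KEYS else _mask(v)
            for k, v in value.items()
        }
    if isinstance(value, list):
        return [_mask(v) for v in value]
    return value

class SecureJSONEncoder(json.JSONEncoder):
    """JSON encoder that masks sensitive fields before encoding."""

    def encode(self, o):
        # json.dumps(obj, cls=SecureJSONEncoder) instantiates this class
        # and calls encode(), so masking here covers that call path.
        return super().encode(_mask(o))
```

With something like this in place, `json.dumps(tool["input"], cls=SecureJSONEncoder)` serializes the payload while hiding credential-like values.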
## PR Review Summary

**Overall Review:** This PR introduces a …

### Recommendation #1

Modify the `convert_openai_tools_to_anthropic` function:

```python
def convert_openai_tools_to_anthropic(openai_tools):
    anthropic_tools = []
    for tool in openai_tools:
        if tool["type"] == "function":
            function = tool["function"]
            anthropic_tool = {
                "name": function["name"],
                "description": function.get("description", ""),
                "input_schema": {
                    "type": "object",
                    "properties": function["parameters"]["properties"],
                    "required": function["parameters"].get("required", []),
                },
            }
            anthropic_tools.append(anthropic_tool)
        else:
            # Log unhandled tool types, or possibly raise an error if necessary
            logging.warning(f"Tool type {tool['type']} is not supported yet")
    return anthropic_tools
```

Be sure to import the `logging` module.

### Recommendation #2

Ensure that logging and data classes do not include or expose sensitive or identifiable information. Use data masking or transformation techniques to obscure real values in logs, and ensure that no sensitive data is logged in plain text. Modify the log statements and any debug outputs to mask potentially sensitive information:

```python
class Function(BaseModel):
    name: str
    arguments: str

    def log_function(self):
        masked_arguments = mask_sensitive_data(self.arguments)  # Assumes existence of this function
        logging.debug(f"Function called: {self.name}, with arguments: {masked_arguments}")

# Replace direct print statements with the above method call:
function_instance.log_function()
```

Remember to define or implement the `mask_sensitive_data` helper.

### Recommendation #3

Implement additional test cases that focus on error handling for malformed or incorrect data scenarios to ensure robustness. Here's an example test case using `pytest`:

```python
import pytest
from pydantic import ValidationError  # or wherever ValidationError is defined

from module import data_handling_function

def test_data_handling_with_malformed_data():
    malformed_data = "incorrect format data"
    with pytest.raises(ValidationError):
        data_handling_function(malformed_data)
```

This test checks that handling of malformed data raises a `ValidationError` instead of silently returning a result.

### Recommendation #4

Abstract the transformations more robustly, potentially using a factory pattern to simplify and manage the complexity of various tool formats. For example:

```python
class ToolTransformerFactory:
    @staticmethod
    def get_transformer(provider_type):
        if provider_type == "Anthropic":
            return AnthropicToolTransformer()
        elif provider_type == "AWS":
            return AWSToolTransformer()
        else:
            raise ValueError("Unsupported provider type")

class AnthropicToolTransformer:
    def transform(self, tool_data):
        # perform transformation logic specific to Anthropic
        pass

class AWSToolTransformer:
    def transform(self, tool_data):
        # perform transformation logic specific to AWS
        pass

# Usage:
transformer = ToolTransformerFactory.get_transformer("Anthropic")
transformed_data = transformer.transform(tool_data)
```

This pattern will facilitate adding support for new providers or changing existing implementations with minimal impact on other parts of the system.

### Recommendation #5

Ensure logical grouping and minimal interdependencies of modules. Reviewing module responsibilities and interactions might reveal opportunities to reduce complexity or improve cohesiveness, making it easier to manage and evolve separate parts without unintended side effects. Consider reevaluating some of the finer-grained separations if they lead to excessive hopping between files or unclear relationships between components.
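To make the schema mapping in Recommendation #1 concrete, here is the converter applied to a sample OpenAI-style tool definition. The `get_weather` tool is invented for illustration; the converter body restates the review's snippet so the example is self-contained:

```python
import logging

def convert_openai_tools_to_anthropic(openai_tools):
    """Map OpenAI function-tool specs to Anthropic's input_schema shape."""
    anthropic_tools = []
    for tool in openai_tools:
        if tool["type"] == "function":
            function = tool["function"]
            anthropic_tools.append({
                "name": function["name"],
                "description": function.get("description", ""),
                "input_schema": {
                    "type": "object",
                    "properties": function["parameters"]["properties"],
                    "required": function["parameters"].get("required", []),
                },
            })
        else:
            logging.warning(f"Tool type {tool['type']} is not supported yet")
    return anthropic_tools

# Hypothetical sample input in OpenAI's tool format.
openai_tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

anthropic_tools = convert_openai_tools_to_anthropic(openai_tools)
```

The output keeps the JSON-schema `properties` and `required` intact while renaming `parameters` to `input_schema`, which is the core of the format difference.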
## PR Code Suggestions Summary ✨
Context (from the AWS provider):

```python
                    message_dict
                )
                if bedrock_message:
                    formatted_messages.append(bedrock_message)
            elif message_dict["role"] == "assistant":
                # Convert assistant message to Bedrock format
```

**Refactor tool handling into a dedicated method** to clean up the main method and improve readability. For clarity and maintainability, it is better to separate large blocks of conditional logic into small, well-named private methods. This can be done in the `AwsProvider` class, where multiple `if`/`else` blocks handle different message roles.

Suggested change:

```diff
                     message_dict
                 )
                 if bedrock_message:
                     formatted_messages.append(bedrock_message)
             elif message_dict["role"] == "assistant":
                 # Convert assistant message to Bedrock format
+            self._configure_tool_handling(kwargs)
```
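The `_configure_tool_handling` method is only named in the suggestion, not implemented in the PR. As a hedged sketch of the proposed split, role handling can be dispatched to small private methods via a lookup table; every name and message shape below is illustrative, not the PR's actual code:

```python
class AwsProviderSketch:
    """Illustrative only: shows the shape of the suggested refactor."""

    def format_messages(self, messages):
        # Dispatch each role to a dedicated private method instead of
        # one long if/elif chain in the main method.
        handlers = {
            "assistant": self._format_assistant_message,
            "tool": self._format_tool_message,
        }
        formatted = []
        for message_dict in messages:
            handler = handlers.get(message_dict["role"], self._format_plain_message)
            bedrock_message = handler(message_dict)
            if bedrock_message:
                formatted.append(bedrock_message)
        return formatted

    def _format_assistant_message(self, message_dict):
        # Placeholder for assistant-to-Bedrock conversion.
        return {"role": "assistant", "content": [{"text": message_dict["content"]}]}

    def _format_tool_message(self, message_dict):
        # Placeholder for tool-result conversion.
        return {"role": "user", "content": [{"text": message_dict["content"]}]}

    def _format_plain_message(self, message_dict):
        return {"role": message_dict["role"], "content": [{"text": message_dict["content"]}]}
```

Adding a new role then means adding one method and one table entry, rather than growing the conditional chain.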
## PR Code Suggestions Summary ✨
Context (from the Anthropic provider):

```python
                # Handle Message objects
                if msg.role == "tool":
                    converted_msg = {
                        "role": "user",
                        "content": [
                            {
                                "type": "tool_result",
                                "tool_use_id": msg.tool_call_id,
                                "content": msg.content,
                            }
                        ],
                    }
                    converted_messages.append(converted_msg)
                elif msg.role == "assistant" and msg.tool_calls:
                    # Handle Message objects with tool calls
                    content = []
                    if msg.content:
                        content.append({"type": "text", "text": msg.content})
                    for tool_call in msg.tool_calls:
                        content.append(
                            {
                                "type": "tool_use",
                                "id": tool_call.id,
                                "name": tool_call.function.name,
                                "input": json.loads(tool_call.function.arguments),
                            }
                        )
                    converted_messages.append({"role": "assistant", "content": content})
                else:
                    converted_messages.append(
                        {"role": msg.role, "content": msg.content}
                    )

        print(converted_messages)
        response = self.client.messages.create(
            model=model, system=system_message, messages=converted_messages, **kwargs
        )
        print(response)
        return self.normalize_response(response)

    def normalize_response(self, response):
        """Normalize the response from the Anthropic API to match OpenAI's response format."""
        normalized_response = ChatCompletionResponse()
        normalized_response.choices[0].message.content = response.content[0].text

        # Map Anthropic stop_reason to OpenAI finish_reason
        finish_reason_mapping = {
            "end_turn": "stop",
            "max_tokens": "length",
            "tool_use": "tool_calls",
            # Add more mappings as needed
        }
        normalized_response.choices[0].finish_reason = finish_reason_mapping.get(
            response.stop_reason, "stop"
        )

        # Add usage information
        normalized_response.usage = {
            "prompt_tokens": response.usage.input_tokens,
            "completion_tokens": response.usage.output_tokens,
            "total_tokens": response.usage.input_tokens + response.usage.output_tokens,
        }

        # Check if the response contains tool usage
        if response.stop_reason == "tool_use":
            # Find the tool_use content
            tool_call = next(
                (content for content in response.content if content.type == "tool_use"),
                None,
            )

            if tool_call:
                function = Function(
                    name=tool_call.name, arguments=json.dumps(tool_call.input)
                )
                tool_call_obj = ChatCompletionMessageToolCall(
                    id=tool_call.id, function=function, type="function"
                )
                # Get the text content if any
                text_content = next(
                    (
                        content.text
                        for content in response.content
                        if content.type == "text"
                    ),
                    "",
                )

                message = Message(
                    content=text_content or None,
                    tool_calls=[tool_call_obj] if tool_call else None,
                    role="assistant",
                    refusal=None,
                )
                normalized_response.choices[0].message = message
                return normalized_response

        # Handle regular text response
        message = Message(
            content=response.content[0].text,
```
**Reduce code duplication and enhance maintainability** by abstracting the repetitive message-conversion logic into separate functions for clarity and reuse.
Suggested change:

```diff
         if "max_tokens" not in kwargs:
             kwargs["max_tokens"] = DEFAULT_MAX_TOKENS
         return self.normalize_response(
             self.client.messages.create(
                 model=model, system=system_message, messages=messages, **kwargs
             )
         # Handle tool calls. Convert from OpenAI tool calls to Anthropic tool calls.
         if "tools" in kwargs:
             kwargs["tools"] = convert_openai_tools_to_anthropic(kwargs["tools"])
         # Convert tool results from OpenAI format to Anthropic format
         converted_messages = []
         for msg in messages:
             if isinstance(msg, dict):
                 if msg["role"] == "tool":
                     # Convert tool result message
                     converted_msg = {
                         "role": "user",
                         "content": [
                             {
                                 "type": "tool_result",
                                 "tool_use_id": msg["tool_call_id"],
                                 "content": msg["content"],
                             }
                         ],
                     }
                     converted_messages.append(converted_msg)
                 elif msg["role"] == "assistant" and "tool_calls" in msg:
                     # Handle assistant messages with tool calls
                     content = []
                     if msg.get("content"):
                         content.append({"type": "text", "text": msg["content"]})
                     for tool_call in msg["tool_calls"]:
                         content.append(
                             {
                                 "type": "tool_use",
                                 "id": tool_call["id"],
                                 "name": tool_call["function"]["name"],
                                 "input": json.loads(tool_call["function"]["arguments"]),
                             }
                         )
                     converted_messages.append({"role": "assistant", "content": content})
                 else:
                     # Keep other messages as is
                     converted_messages.append(
                         {"role": msg["role"], "content": msg["content"]}
                     )
             else:
                 # Handle Message objects
                 if msg.role == "tool":
                     converted_msg = {
                         "role": "user",
                         "content": [
                             {
                                 "type": "tool_result",
                                 "tool_use_id": msg.tool_call_id,
                                 "content": msg.content,
                             }
                         ],
                     }
                     converted_messages.append(converted_msg)
                 elif msg.role == "assistant" and msg.tool_calls:
                     # Handle Message objects with tool calls
                     content = []
                     if msg.content:
                         content.append({"type": "text", "text": msg.content})
                     for tool_call in msg.tool_calls:
                         content.append(
                             {
                                 "type": "tool_use",
                                 "id": tool_call.id,
                                 "name": tool_call.function.name,
                                 "input": json.loads(tool_call.function.arguments),
                             }
                         )
                     converted_messages.append({"role": "assistant", "content": content})
                 else:
                     converted_messages.append(
                         {"role": msg.role, "content": msg.content}
                     )
         print(converted_messages)
         response = self.client.messages.create(
             model=model, system=system_message, messages=converted_messages, **kwargs
         )
         print(response)
         return self.normalize_response(response)

     def normalize_response(self, response):
         """Normalize the response from the Anthropic API to match OpenAI's response format."""
         normalized_response = ChatCompletionResponse()
         normalized_response.choices[0].message.content = response.content[0].text
         # Map Anthropic stop_reason to OpenAI finish_reason
         finish_reason_mapping = {
             "end_turn": "stop",
             "max_tokens": "length",
             "tool_use": "tool_calls",
             # Add more mappings as needed
         }
         normalized_response.choices[0].finish_reason = finish_reason_mapping.get(
             response.stop_reason, "stop"
         )
         # Add usage information
         normalized_response.usage = {
             "prompt_tokens": response.usage.input_tokens,
             "completion_tokens": response.usage.output_tokens,
             "total_tokens": response.usage.input_tokens + response.usage.output_tokens,
         }
         # Check if the response contains tool usage
         if response.stop_reason == "tool_use":
             # Find the tool_use content
             tool_call = next(
                 (content for content in response.content if content.type == "tool_use"),
                 None,
             )
             if tool_call:
                 function = Function(
                     name=tool_call.name, arguments=json.dumps(tool_call.input)
                 )
                 tool_call_obj = ChatCompletionMessageToolCall(
                     id=tool_call.id, function=function, type="function"
                 )
                 # Get the text content if any
                 text_content = next(
                     (
                         content.text
                         for content in response.content
                         if content.type == "text"
                     ),
                     "",
                 )
                 message = Message(
                     content=text_content or None,
                     tool_calls=[tool_call_obj] if tool_call else None,
                     role="assistant",
                     refusal=None,
                 )
                 normalized_response.choices[0].message = message
                 return normalized_response
         # Handle regular text response
         message = Message(
             content=response.content[0].text,
+        converted_messages = self.refactor_conversion_logic(messages)
```
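The suggestion only names a `refactor_conversion_logic` method without defining it. A minimal standalone sketch of such a helper, covering the dict-shaped messages from the loop above (the function name is hypothetical), might look like:

```python
import json

def convert_messages_to_anthropic(messages):
    """Sketch: convert OpenAI-style dict messages to Anthropic's format.

    Hypothetical helper name; mirrors the conversion loop in the PR so the
    provider method can call one function instead of inlining the branches.
    """
    converted = []
    for msg in messages:
        if msg["role"] == "tool":
            # Tool results become user messages carrying a tool_result block.
            converted.append({
                "role": "user",
                "content": [{
                    "type": "tool_result",
                    "tool_use_id": msg["tool_call_id"],
                    "content": msg["content"],
                }],
            })
        elif msg["role"] == "assistant" and msg.get("tool_calls"):
            content = []
            if msg.get("content"):
                content.append({"type": "text", "text": msg["content"]})
            for call in msg["tool_calls"]:
                content.append({
                    "type": "tool_use",
                    "id": call["id"],
                    "name": call["function"]["name"],
                    "input": json.loads(call["function"]["arguments"]),
                })
            converted.append({"role": "assistant", "content": content})
        else:
            converted.append({"role": msg["role"], "content": msg["content"]})
    return converted
```

Handling `Message` objects as well would take a small normalization step (e.g. converting objects to dicts first), which keeps the branching in one place.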
Context:

```python
class Message:
    def __init__(self):
        self.content = None
```

**Use enumerations to enforce valid values** on the `role` and `type` fields, enhancing robustness and type safety.

Suggested change:

```diff
+    content: Optional[str]
+    tool_calls: Optional[list[ChatCompletionMessageToolCall]]
+    role: Optional[Literal["user", "assistant", "system"]] = Field(..., description="Role of the message sender", enum=["user", "assistant", "system"])
+    refusal: Optional[str]
```
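The suggestion assumes pydantic (`Field`, `Literal`). A dependency-free sketch of the same idea with a stdlib `Enum`, which rejects invalid roles at construction time (the `Role` class and its members are illustrative, not from the PR):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Role(str, Enum):
    USER = "user"
    ASSISTANT = "assistant"
    SYSTEM = "system"
    TOOL = "tool"

@dataclass
class Message:
    role: Role
    content: Optional[str] = None
    refusal: Optional[str] = None

    def __post_init__(self):
        # Coercing here means Message(role="user") works, while
        # Message(role="bogus") raises ValueError at construction time.
        self.role = Role(self.role)
```

With a `str`-backed enum, `Role.USER == "user"` still holds, so existing string comparisons in the providers keep working.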
PR Review SummaryOverall Review:The PR introduces profound changes across the AISuite to enhance message and tool calling capabilities for various AI providers, leveraging structured models and APIs specific to providers like AWS and Anthropic. Main enhancements include significant modifications to the message classes, sophisticated JSON data handling, and the incorporation of a Commit logs and file changes suggest iterative development and testing, focusing on incremental integration and ensuring compatibility with external APIs. This approach helps in isolating features for specific providers while maintaining a generalized framework that could be adapted for additional AI service providers as needed.
RecommendationsRecommendation #1To enhance the robustness of def normalize_response(self, response):
    normalized_response = ChatCompletionResponse()
    finish_reason_mapping = {
        "end_turn": "stop",
        "max_tokens": "length",
        "tool_use": "tool_calls",
    }
    normalized_response.choices[0].finish_reason = finish_reason_mapping.get(response.stop_reason, "stop")
    normalized_response.usage = {
        "prompt_tokens": response.usage.input_tokens,
        "completion_tokens": response.usage.output_tokens,
        "total_tokens": response.usage.input_tokens + response.usage.output_tokens,
    }
    if response.stop_reason == "tool_use":
        tool_call = next((content for content in response.content if getattr(content, "type", None) == "tool_use"), None)
        if tool_call is not None and getattr(tool_call, "id", None):
            function = Function(name=tool_call.name, arguments=json.dumps(tool_call.input))
            tool_call_obj = ChatCompletionMessageToolCall(
                id=tool_call.id, function=function, type="function"
            )
            text_content = next(
                (content.text for content in response.content if getattr(content, "type", None) == "text"), ""
            )
            message = Message(
                content=text_content or None,
                tool_calls=[tool_call_obj],
                role="assistant",
                refusal=None,
            )
            normalized_response.choices[0].message = message
        else:
            raise ValueError("Expected 'tool_use' content is missing or malformed in API response.")
    return normalized_response

Recommendation #2

Implement rigorous input validation and sanitization within `transform_tool_call_to_openai`:

def transform_tool_call_to_openai(self, response):
    if response.get("stopReason") != "tool_use":
        return None
    try:
        tool_calls = []
        for content in response["output"]["message"]["content"]:
            if "toolUse" in content:
                tool = content["toolUse"]
                tool_calls.append({
                    "type": "function",
                    "id": tool["toolUseId"],
                    "function": {
                        "name": tool["name"],
                        "arguments": json.dumps(tool["input"], cls=SecureJSONEncoder),
                    },
                })
    except (TypeError, ValueError) as e:
        # json.dumps raises TypeError/ValueError on bad input, not JSONDecodeError
        raise ValueError(f"JSON serialization error: {e}")
    return {
        "role": "assistant",
        "content": None,
        "tool_calls": tool_calls,
        "refusal": None,
    }

In the above snippet,
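A self-contained sketch of the Bedrock-to-OpenAI transformation discussed above, runnable against a hypothetical response dict (the sample payload below is illustrative, not real API output):

```python
import json

def transform_tool_call_to_openai(response):
    # Sketch of the transformation discussed above; `response` mimics a
    # Bedrock Converse-style payload shape.
    if response.get("stopReason") != "tool_use":
        return None
    tool_calls = []
    for content in response["output"]["message"]["content"]:
        if "toolUse" in content:
            tool = content["toolUse"]
            tool_calls.append({
                "type": "function",
                "id": tool["toolUseId"],
                "function": {
                    "name": tool["name"],
                    "arguments": json.dumps(tool["input"]),
                },
            })
    return {"role": "assistant", "content": None, "tool_calls": tool_calls, "refusal": None}

sample = {
    "stopReason": "tool_use",
    "output": {"message": {"content": [
        {"toolUse": {"toolUseId": "t1", "name": "get_weather", "input": {"city": "Paris"}}}
    ]}},
}
result = transform_tool_call_to_openai(sample)
print(result["tool_calls"][0]["function"]["name"])  # get_weather
```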
PR Review SummaryOverall Review:The PR introduces significant enhancements across multiple AI service providers within the aisuite, focusing on structured message handling and tool calling functionality. These changes span introducing new models with Pydantic, implementing a
RecommendationsRecommendation #1To mitigate potential security vulnerabilities related to JSON parsing, fortify the data handling process with improved schema validation and thorough error handling. Integrate a custom JSON encoder to sanitize inputs before parsing and ensure functional test coverage: class SecureJSONEncoder(json.JSONEncoder):
    def encode(self, o):
        # default() is only invoked for objects json cannot serialize, never
        # for strings, so escape HTML-sensitive characters in the fully
        # encoded output instead.
        return super().encode(o).replace("<", "\\u003c").replace(">", "\\u003e")
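A runnable sketch of the sanitizing-encoder idea: because `json.JSONEncoder.default()` only fires for objects the serializer cannot handle, string escaping has to be applied to the encoded output (this is a sketch of the approach, not the PR's actual code):

```python
import json

class SecureJSONEncoder(json.JSONEncoder):
    # Escape angle brackets in the encoded JSON text to reduce the risk of
    # the output being interpreted as HTML downstream.
    def encode(self, o):
        return super().encode(o).replace("<", "\\u003c").replace(">", "\\u003e")

encoded = json.dumps({"html": "<script>"}, cls=SecureJSONEncoder)
print(encoded)              # {"html": "\u003cscript\u003e"}
print(json.loads(encoded))  # {'html': '<script>'}  -- round-trips losslessly
```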
def transform_tool_call_to_openai(self, response):
    if response.get("stopReason") != "tool_use":
        return None
    tool_calls = []
    try:
        for content in response["output"]["message"]["content"]:
            if "toolUse" in content:
                tool = content["toolUse"]
                secure_args = json.dumps(tool["input"], cls=SecureJSONEncoder)
                tool_calls.append({
                    "type": "function",
                    "id": tool["toolUseId"],
                    "function": {
                        "name": tool["name"],
                        "arguments": secure_args,
                    },
                })
    except (TypeError, ValueError) as e:
        # json.dumps raises TypeError/ValueError on unserializable input
        raise ValueError(f"JSON serialization error: {e}")
    return {
        "role": "assistant",
        "content": None,
        "tool_calls": tool_calls,
        "refusal": None,
    }

Properly structure your error handling and testing to validate defense against common vulnerabilities in JSON processing.

Recommendation #2

Rework the `normalize_response` method:

def normalize_response(self, response):
    normalized_response = ChatCompletionResponse()
    finish_reason_mapping = {
        "end_turn": "stop",
        "max_tokens": "length",
        "tool_use": "tool_calls",
    }
    normalized_response.choices[0].finish_reason = finish_reason_mapping.get(response.stop_reason, "stop")
    normalized_response.usage = {
        "prompt_tokens": response.usage.input_tokens,
        "completion_tokens": response.usage.output_tokens,
        "total_tokens": response.usage.input_tokens + response.usage.output_tokens,
    }
    if response.stop_reason == "tool_use":
        tool_call = next((content for content in response.content if getattr(content, "type", None) == "tool_use"), None)
        if tool_call is not None and getattr(tool_call, "id", None):
            tool_call_id = tool_call.id
            function = Function(name=tool_call.name, arguments=json.dumps(tool_call.input))
            tool_call_obj = ChatCompletionMessageToolCall(id=tool_call_id, function=function, type="function")
            text_content = next(
                (content.text for content in response.content if getattr(content, "type", None) == "text"), ""
            )
            message = Message(
                content=text_content or None,
                tool_calls=[tool_call_obj],
                role="assistant",
                refusal=None,
            )
            normalized_response.choices[0].message = message
        else:
            raise ValueError("Expected 'tool_use' content is missing or malformed in API response.")
    return normalized_response

This modification ensures that all necessary identifiers and data types are checked before processing, improving the resilience of your application against data inconsistencies and related errors.
PR Code Suggestions Summary ✨
if response.get("stopReason") != "tool_use":
    return None

tool_calls = []
for content in response["output"]["message"]["content"]:
    if "toolUse" in content:
        tool = content["toolUse"]
        tool_calls.append(
            {
                "type": "function",
                "id": tool["toolUseId"],
                "function": {
                    "name": tool["name"],
                    "arguments": json.dumps(tool["input"]),
                },
            }
        )

if not tool_calls:
Improve code readability by using descriptive variable names for complex data structures.
if response.get("stopReason") != "tool_use":
    return None
tool_calls = []
for content in response["output"]["message"]["content"]:
    if "toolUse" in content:
        tool = content["toolUse"]
        tool_calls.append(
            {
                "type": "function",
                "id": tool["toolUseId"],
                "function": {
                    "name": tool["name"],
                    "arguments": json.dumps(tool["input"]),
                },
            }
        )
if not tool_calls:
+ tool_calls = []
+ for content in response["output"]["message"]["content"]:
+     if "toolUse" in content:
+         tool = content["toolUse"]
+         tool_call_details = {
+             "type": "function",
+             "id": tool["toolUseId"],
+             "function": {
+                 "name": tool["name"],
+                 "arguments": json.dumps(tool["input"]),
+             },
+         }
+         tool_calls.append(tool_call_details)
PR Review SummaryOverall Review:The presented PR aims to standardize message handling and introduce robust tool management for various AI providers within the aisuite. The proposed changes signify a substantial improvement in message structuring and API interaction uniformity. The adjustments stabilize and adapt the aisuite’s architecture to efficiently incorporate and translate between different provider formats. Refining message interaction through the
RecommendationsRecommendation #1To address the potential inconsistencies in parsing def normalize_response(self, response):
    normalized_response = ChatCompletionResponse()
    finish_reason_mapping = {
        "end_turn": "stop",
        "max_tokens": "length",
        "tool_use": "tool_calls",
    }
    normalized_response.choices[0].finish_reason = finish_reason_mapping.get(response.stop_reason, "stop")
    # Add usage information
    normalized_response.usage = {
        "prompt_tokens": response.usage.input_tokens,
        "completion_tokens": response.usage.output_tokens,
        "total_tokens": response.usage.input_tokens + response.usage.output_tokens,
    }
    # Check if the response contains tool usage and validate format
    if response.stop_reason == "tool_use":
        tool_call = next((content for content in response.content if getattr(content, "type", None) == "tool_use"), None)
        if tool_call is not None and getattr(tool_call, "id", None):
            tool_call_id = tool_call.id
            function = Function(name=tool_call.name, arguments=json.dumps(tool_call.input))
            tool_call_obj = ChatCompletionMessageToolCall(id=tool_call_id, function=function, type="function")
            text_content = next((content.text for content in response.content if getattr(content, "type", None) == "text"), "")
            message = Message(content=text_content or None, tool_calls=[tool_call_obj], role="assistant", refusal=None)
            normalized_response.choices[0].message = message
        else:
            raise ValueError("Expected 'tool_use' content is missing or malformed in API response.")
    return normalized_response

Recommendation #2

It's vital to implement rigorous input validation and error handling around JSON operations and external data handling. Enhance the security of JSON handling within `transform_tool_call_to_openai`:

def transform_tool_call_to_openai(self, response):
    if response.get("stopReason") != "tool_use":
        return None
    try:
        tool_calls = []
        for content in response["output"]["message"]["content"]:
            if "toolUse" in content:
                tool = content["toolUse"]
                tool_calls.append({
                    "type": "function",
                    "id": tool["toolUseId"],
                    "function": {
                        "name": tool["name"],
                        "arguments": json.dumps(tool["input"], cls=SecureJSONEncoder),
                    },
                })
    except (TypeError, ValueError) as e:
        # json.dumps raises TypeError/ValueError, not JSONDecodeError
        raise ValueError(f"JSON serialization error: {e}")
    return {
        "role": "assistant",
        "content": None,
        "tool_calls": tool_calls,
        "refusal": None,
    }

This function uses a custom `SecureJSONEncoder`.

Recommendation #3

Implement strict content validation and utilize secure parsing methods to handle JSON data:

import json
def secure_json_parse(input_data: str):
    try:
        result = json.loads(input_data)
        # Implement additional checks here to validate schema if necessary
        return result
    except json.JSONDecodeError as e:
        # Re-raise as ValueError; pydantic's ValidationError cannot be
        # constructed from a plain message string.
        raise ValueError(f"Invalid JSON data: {e}")

# Usage within any data handling context
try:
    safe_data = secure_json_parse(unsafe_data)
except ValueError as ve:
    logger.error(f"Data validation error: {ve}")
    # Handle the error accordingly

Review and augment the use of Pydantic validators, which enforce that all data meets predefined schemas.
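Worth noting for the recommendations above: `json.loads` raises `json.JSONDecodeError` on malformed text, while `json.dumps` raises `TypeError` for unserializable objects, so a `JSONDecodeError` handler around `dumps()` never fires. A quick stdlib check:

```python
import json

# json.loads raises JSONDecodeError on malformed text...
try:
    json.loads("{not valid json")
except json.JSONDecodeError as e:
    parse_error = type(e).__name__

# ...while json.dumps raises TypeError for unserializable objects.
try:
    json.dumps({"bad": object()})
except TypeError as e:
    dump_error = type(e).__name__

print(parse_error, dump_error)  # JSONDecodeError TypeError
```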
PR Code Suggestions Summary ✨
@@ -46,6 +70,7 @@ def __init__(self, **config):
Refactor repetitive message transformation logic into a helper function within the AWS provider.
Abstract repeated logic into helper functions within the AWS provider to make the method more concise and maintainable.
+ for message in messages:
+     message_dict = (
+         message.model_dump() if hasattr(message, "model_dump") else message
+     )
+     self.append_formatted_message(message_dict, formatted_messages)
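The helper-extraction idea in this suggestion can be sketched as follows (function names here are illustrative stand-ins, not the PR's actual helpers; `model_dump()` is pydantic v2's serialization method):

```python
def to_message_dict(message):
    # pydantic v2 models expose model_dump(); plain dicts pass through unchanged.
    return message.model_dump() if hasattr(message, "model_dump") else message

def format_messages(messages):
    formatted_messages = []
    for message in messages:
        formatted_messages.append(to_message_dict(message))
    return formatted_messages

print(format_messages([{"role": "user", "content": "hi"}]))
# [{'role': 'user', 'content': 'hi'}]
```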
PR Review SummaryOverall Review:The PR introduces extensive modifications aimed at enhancing message structuring and tool management across various AI providers within the aisuite. It centralizes around augmenting the
RecommendationsRecommendation #1To address the potential inconsistencies in parsing def normalize_response(self, response):
    normalized_response = ChatCompletionResponse()
    finish_reason_mapping = {
        "end_turn": "stop",
        "max_tokens": "length",
        "tool_use": "tool_calls",
    }
    normalized_response.choices[0].finish_reason = finish_reason_mapping.get(response.stop_reason, "stop")
    # Validate and parse tool use behavior
    if response.stop_reason == "tool_use":
        tool_call = next((item for item in response.content if item.type == "tool_use"), None)
        if tool_call is not None and getattr(tool_call, "id", None):
            tool_id = tool_call.id
            function = Function(
                name=tool_call.name,
                arguments=json.dumps(tool_call.input),
            )
            tool_call_obj = ChatCompletionMessageToolCall(id=tool_id, function=function, type="function")
            text_content = next(
                (item.text for item in response.content if item.type == "text"),
                ""
            )
            message = Message(
                content=text_content or None,
                tool_calls=[tool_call_obj],
                role="assistant",
                refusal=None
            )
            normalized_response.choices[0].message = message
        else:
            raise ValueError("Expected 'tool_use' content is missing or malformed in API response.")
    return normalized_response

This modification ensures more rigorous validation and error handling to better accommodate variations in `stop_reason`.

Recommendation #2

Implement rigorous input validation and error handling mechanisms around JSON operations within the `transform_tool_call_to_openai` method:

def transform_tool_call_to_openai(self, response):
    if response.get("stopReason") != "tool_use":
        return None
    try:
        tool_calls = []
        for content in response["output"]["message"]["content"]:
            if "toolUse" in content:
                tool = content["toolUse"]
                secure_args = json.dumps(tool["input"], cls=SecureJSONEncoder)
                tool_calls.append({
                    "type": "function",
                    "id": tool["toolUseId"],
                    "function": {
                        "name": tool["name"],
                        "arguments": secure_args,
                    },
                })
    except (TypeError, ValueError) as e:
        # json.dumps raises TypeError/ValueError, not JSONDecodeError
        raise ValueError(f"JSON serialization error: {e}")
    return {
        "role": "assistant",
        "content": None,
        "tool_calls": tool_calls,
        "refusal": None,
    }

Ensure you define or integrate a `SecureJSONEncoder`.
PR Code Suggestions Summary ✨
normalized_response.usage = {
    "prompt_tokens": response.usage.input_tokens,
    "completion_tokens": response.usage.output_tokens,
    "total_tokens": response.usage.input_tokens + response.usage.output_tokens,
Add null checks to tool call properties to avoid access errors.
Handling conversion for tools should include a null check before accessing the tool's name and arguments to prevent the application from crashing.
normalized_response.usage = {
    "prompt_tokens": response.usage.input_tokens,
    "completion_tokens": response.usage.output_tokens,
    "total_tokens": response.usage.input_tokens + response.usage.output_tokens,
+   "type": "tool_use",
+   "id": tool_call["id"],
+   "name": tool_call.get("function", {}).get("name", ""),
+   "input": json.loads(tool_call.get("function", {}).get("arguments", "{}")),
PR Review SummaryOverall Review:The PR introduces comprehensive enhancements in message structuring and tool management across several AI service providers, embodying significant improvements and adaptations to the aisuite's messaging and tool handling systems. Changes span from the introduction of a robust
RecommendationsRecommendation #1Modify the def transform_tool_call_to_openai(self, response):
    if response.get("stopReason") != "tool_use":
        return None
    tool_calls = []
    for content in response["output"]["message"]["content"]:
        if "toolUse" in content and content["toolUse"].get("type") == "function":
            tool = content["toolUse"]
            tool_calls.append({
                "type": "function",
                "id": tool["toolUseId"],
                "function": {
                    "name": tool["name"],
                    "arguments": json.dumps(tool["input"]),
                },
            })
        else:
            logging.warning(f"Unsupported tool type encountered: {content.get('toolUse', {}).get('type')}")
    return {"role": "assistant", "content": None, "tool_calls": tool_calls, "refusal": None}

Ensure proper logging configuration is in place if it does not already exist.

Recommendation #2

Enhance security by implementing rigorous input validation and error handling:

def transform_tool_call_to_openai(self, response):
    if response.get("stopReason") != "tool_use":
        return None
    tool_calls = []
    try:
        for content in response["output"]["message"]["content"]:
            if "toolUse" in content:
                tool = content["toolUse"]
                # Custom decoder class to validate JSON input
                validated_input = json.loads(tool["input"], cls=SecureJSONLoader)
                tool_calls.append({
                    "type": "function",
                    "id": tool["toolUseId"],
                    "function": {
                        "name": tool["name"],
                        "arguments": json.dumps(validated_input),
                    },
                })
    except json.JSONDecodeError as e:
        logging.error(f"JSON parsing error in tool call: {str(e)}")
        return None
    return {"role": "assistant", "content": None, "tool_calls": tool_calls, "refusal": None}

Create a `SecureJSONLoader` class.
PR Code Suggestions Summary ✨
PR Review SummaryOverall Review:The PR implements substantial enhancements aimed at upgrading AI provider integrations and message handling within the aisuite by introducing a structured
RecommendationsRecommendation #1To address this issue, add more rigorous validation and error checking in the def normalize_response(self, response):
    normalized_response = ChatCompletionResponse()
    finish_reason_mapping = {
        "end_turn": "stop",
        "max_tokens": "length",
        "tool_use": "tool_calls",
    }
    normalized_response.choices[0].finish_reason = finish_reason_mapping.get(response.stop_reason, "stop")
    # Validate tool usage
    if response.stop_reason == "tool_use":
        tool_call = next((content for content in response.content if getattr(content, "type", None) == "tool_use"), None)
        if tool_call is not None and getattr(tool_call, "id", None):
            tool_call_id = tool_call.id
            function = Function(name=tool_call.name, arguments=json.dumps(tool_call.input))
            tool_call_obj = ChatCompletionMessageToolCall(id=tool_call_id, function=function, type="function")
            text_content = next((content.text for content in response.content if getattr(content, "type", None) == "text"), "")
            message = Message(content=text_content or None, tool_calls=[tool_call_obj], role="assistant", refusal=None)
            normalized_response.choices[0].message = message
        else:
            raise ValueError("Expected 'tool_use' content is missing or malformed in API response.")
    return normalized_response

This modification will make sure each part of the data is properly checked and parsed to prevent failures due to unexpected format changes or missing data.

Recommendation #2

Implement rigorous input validation and error handling around JSON operations within `transform_tool_call_to_openai`:

def transform_tool_call_to_openai(self, response):
    if response.get("stopReason") != "tool_use":
        return None
    tool_calls = []
    try:
        for content in response["output"]["message"]["content"]:
            if "toolUse" in content:
                tool = content["toolUse"]
                tool_calls.append({
                    "type": "function",
                    "id": tool["toolUseId"],
                    "function": {
                        "name": tool["name"],
                        "arguments": json.dumps(tool["input"], cls=SecureJSONEncoder),
                    },
                })
    except (TypeError, ValueError) as e:
        # json.dumps raises TypeError/ValueError, not JSONDecodeError
        raise ValueError(f"JSON serialization error: {e}")
    return {
        "role": "assistant",
        "content": None,
        "tool_calls": tool_calls,
        "refusal": None,
    }

Ensure the `SecureJSONEncoder` is defined.
PR Code Suggestions Summary ✨
PR Review SummaryOverall Review:This PR introduces a robust framework for interacting with different AI providers, enhances message structuring using Pydantic models, and centralizes tool management through a new
RecommendationsRecommendation #1To mitigate these security risks, it is recommended to implement rigorous input validations, such as schema checks, and employ JSON parsing with error handling to detect and reject malformed inputs effectively. Example modification for the parsing logic using secure practices could be implemented as follows: from json import JSONDecodeError
def transform_tool_call_to_openai(self, response):
    if response.get("stopReason") != "tool_use":
        return None
    try:
        tool_calls = []
        for content in response["output"]["message"]["content"]:
            if "toolUse" in content:
                tool = content["toolUse"]
                tool_calls.append({
                    "type": "function",
                    "id": tool["toolUseId"],
                    "function": {
                        "name": tool["name"],
                        "arguments": json.dumps(tool["input"], cls=SecureJSONEncoder),
                    },
                })
    except (TypeError, ValueError) as error:
        # json.dumps raises TypeError/ValueError, not JSONDecodeError
        raise ValueError(f"Error serializing JSON data: {error}")
    return {
        "role": "assistant",
        "content": None,
        "tool_calls": tool_calls,
        "refusal": None,
    }

Ensure defining a `SecureJSONEncoder`.

Recommendation #2

It's critical to address this issue by refining the error handling and data extraction logic to prevent application failures. Recommended changes include strict checks on data existence and proper fallbacks in case of missing information:

def normalize_response(self, response):
    normalized_response = ChatCompletionResponse()
    finish_reason_mapping = {
        "end_turn": "stop",
        "max_tokens": "length",
        "tool_use": "tool_calls"
    }
    normalized_response.choices[0].finish_reason = finish_reason_mapping.get(response.stop_reason, "stop")
    # Ensure tool usage information is validated and parsed correctly
    if response.stop_reason == "tool_use":
        tool_call = next((item for item in response.content if item.type == "tool_use"), None)
        if tool_call and hasattr(tool_call, 'id'):
            function = Function(name=tool_call.name, arguments=json.dumps(tool_call.input))
            tool_call_obj = ChatCompletionMessageToolCall(
                id=tool_call.id, function=function, type="function")
            text_content = next(
                (item.text for item in response.content if item.type == "text"),
                ""
            )
            message = Message(
                content=text_content or None,
                tool_calls=[tool_call_obj],
                role="assistant",
                refusal=None
            )
            normalized_response.choices[0].message = message
        else:
            raise ValueError("Expected 'tool_use' content is missing or malformed in API response.")
    return normalized_response

Implement better validation checks for
Purpose
This PR introduces a new structured messaging system and APIs to support tool calls across multiple AI providers. It aims to unify and streamline the interaction with different AI tools, enhancing the content creation capabilities of an AI suite.
Critical Changes
- `aisuite/framework/message.py`: Replaced the existing `Message` class with a `BaseModel`-derived structure from `pydantic`, supporting optional fields for tool calls, content, role, and refusal. This enhances type safety and data handling.
- `aisuite/providers/anthropic_provider.py` and `aisuite/providers/aws_provider.py`: Implemented methods for transforming tool calls and responses between OpenAI's format and their respective provider formats (Anthropic, AWS). Detailed handling includes converting tools from generic tool specifications to provider-specific configurations and normalizing responses back into the unified `Message` format expected by the system.
- Added `Message` import in `aisuite/__init__.py` and `aisuite/framework/__init__.py` to enhance framework integration level and ensure all components recognize the `Message` type throughout the application.

===== Original PR title and description ============
Original Title: Enhanced Message Structure and Integration with AI Provider APIs for Tool Calls
Original Description:
Purpose
This PR introduces significant enhancements to both the internal message structure and the API integration for handling communication with various AI providers. The main focus is to facilitate the correct mapping and usage of tool calls across different AI services, ensuring uniformity and maximizing compatibility.
Critical Changes
- `aisuite/__init__.py` and `aisuite/framework/__init__.py`: Imported `Message` classes to bolster message handling capabilities.
- `aisuite/framework/choice.py`: Added optional finish reasons and enriched `Message` initialization to include new fields like `tool_calls`, which are now distinctly initialized with appropriate defaults to enhance clarity and control over message flow.
- `aisuite/framework/message.py`: Revamped `Message` class with Pydantic support for better data validation and structure. Introduced `ChatCompletionMessageToolCall` and `Function` as BaseModel subclasses to standardize tool call structures for AI providers.
- `aisuite/providers/anthropic_provider.py`: Heavily modified response and request handling for the Anthropic provider to accommodate new structured tool calls. Added normalization from Anthropic format to OpenAI's expected format, increasing interoperability.
- `aisuite/providers/aws_provider.py`: Adjusted AWS provider to handle message transformations between Bedrock's expected format and the general tool call structure. Also, made changes to include tool configurations if provided.
- `aisuite/providers/groq_provider.py`, `aisuite/providers/mistral_provider.py`, `aisuite/providers/openai_provider.py`: Minor adjustments to each provider, mostly focusing on message transformation consistency.
- `aisuite/utils/tool_manager.py`: Introduced a `ToolManager` which handles the registration and execution of tools based on the updated message structures, allowing for dynamic hooking of function calls specific to AI tasks.

Note:
===== Original PR title and description ============
Original Title: Refactor message structure and tool call API integration for various AI providers
Original Description:
Purpose
The PR aims to enhance the message class compatibility between different AI service providers and standardizes the handling of AI tool calls across providers, ensuring more robust and flexible interactions.
Critical Changes
- `aisuite/framework/message.py`: `Message` class utilizing Pydantic for type checking and structure validation. Added new attributes `tool_calls`, `role`, and `refusal` to store tool call information in an API-agnostic format. `BaseModel` to use type hints ensuring that data conforms to expected schema with relevant types across different providers.
- `aisuite/framework/choice.py`: `Choice` class to include optional attributes `finish_reason`, allowing the AI to specify why a session or request was completed, leveraging Python's `Literal` and `Optional` types for precise state descriptions.
- `aisuite/providers/anthropic_provider.py`: `ChatCompletionResponse`, including converting response content and handling tool call conversion and normalization processes.
- `aisuite/providers/aws_provider.py`:
- `aisuite/providers/groq_provider.py`, `aisuite/providers/mistral_provider.py`, `aisuite/providers/openai_provider.py`:
- `aisuite/utils/tool_manager.py`: `ToolManager` class to manage registration and execution of tools, handling parameter parsing, execution, and response formatting along with error handling.
- Additional files like Jupyter notebooks under `examples/` and tests in `tests/utils/`:
Original Title: Enhance message structure and adapt tool call conversions across AI providers
Original Description:
Purpose
The changes introduced update the message and tool calling frameworks to support diverse AI providers more effectively. The enhancements include adapting the message structure for different provider APIs, handling tool calls uniquely per provider, and integrating usage details to standardize responses. This ensures compatibility and flexibility within the AI suite's tooling system.
Critical Changes
- `aisuite/__init__.py` and `aisuite/framework/__init__.py`: Import the updated `Message` class, ensuring that all providers utilizing this class can leverage the new structured message format.
- `aisuite/framework/message.py`: Transition the `Message` class to utilize `pydantic.BaseModel`, making it robust for data validation. The class now optionally includes tool calls, supporting various roles and potential refusal reasons, enhancing the message structure significantly.
- `aisuite/providers/anthropic_provider.py`: Adapt message and tool call handling specifically for the Anthropic API. This includes changing kwargs handling, message transformation to match the provider's requirements, and appropriately converting tool call responses, addressing the unique 'stop_reason' and formatting responses to a standardized form.
- `aisuite/providers/aws_provider.py`: Implement response normalization and modification of incoming messages (handling tool calls and results) for the AWS Bedrock API. Detailed changes ensure message dicts are correctly formatted for this provider.
- `aisuite/providers/groq_provider.py`, `aisuite/providers/mistral_provider.py`, and `aisuite/providers/openai_provider.py`: Adjustments reflect the new message handling formats and detailed transformation functions, supporting the structured handling of tool calls and messages across these providers.
- `aisuite/utils/tool_manager.py`: Adds a comprehensive suite for tool management, allowing adding and managing tools with simplified or detailed specifications. The implementation focuses on function execution, handling both response and message formation effectively.
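The `Message` and tool-call structures described in these changes can be illustrated with stdlib dataclasses (the PR itself uses pydantic `BaseModel`; field names follow the PR description, the defaults are assumptions):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Function:
    name: str
    arguments: str  # JSON-encoded arguments, mirroring OpenAI's format

@dataclass
class ChatCompletionMessageToolCall:
    id: str
    function: Function
    type: str = "function"

@dataclass
class Message:
    role: Optional[str] = None
    content: Optional[str] = None
    tool_calls: Optional[List[ChatCompletionMessageToolCall]] = None
    refusal: Optional[str] = None

call = ChatCompletionMessageToolCall(id="t1", function=Function(name="f", arguments="{}"))
msg = Message(role="assistant", tool_calls=[call])
print(msg.tool_calls[0].function.name)  # f
```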
===== Original PR title and description ============
Original Title: Integrate advanced message structuring and tool call conversion for various providers
Original Description:
Purpose
The PR is aimed at enhancing message structuring within the aisuite, particularly for handling tool calls across multiple providers like AWS, Anthropic, and more. This ensures that messages are appropriately formatted and converted between different API standards, helping in seamless integration with these external services.
Critical Changes
- `aisuite/framework/message.py`: Introduced a new `Message` class using Pydantic for validating message schemas, including optional fields for `content`, `role`, and `tool_calls` to handle different message types more robustly.
- `aisuite/providers/anthropic_provider.py`: Added complex logic to transform tool call data between OpenAI format and Anthropic's expected format, involving detailed conversions both ways and handling tool results separately.
- `aisuite/providers/aws_provider.py`: Enhanced the AWS provider to support tool calls by converting message formats suitable for Bedrock's API and processing tool-related responses, aligning them with OpenAI standards.
- `aisuite/utils/tool_manager.py`: Tool manager utility class implemented to manage and execute tools based on calls from messages, including a method to dynamically create Pydantic models from functions and handle execution.

...
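The `ToolManager` described here might be sketched roughly as follows (a hypothetical minimal registry, not the PR's actual implementation; method names and the tool-message shape are assumptions modeled on OpenAI's tool-call format):

```python
import json

class ToolManager:
    # Hypothetical sketch: maps tool names to callables and executes them
    # from OpenAI-style tool_call dicts.
    def __init__(self):
        self._tools = {}

    def register(self, func):
        # Usable as a decorator; keys the registry by function name.
        self._tools[func.__name__] = func
        return func

    def execute(self, tool_call):
        name = tool_call["function"]["name"]
        kwargs = json.loads(tool_call["function"]["arguments"])
        result = self._tools[name](**kwargs)
        # Return an OpenAI-style "tool" role message carrying the result.
        return {"role": "tool", "tool_call_id": tool_call["id"], "content": json.dumps(result)}

manager = ToolManager()

@manager.register
def add(a, b):
    return a + b

msg = manager.execute({"id": "t1", "function": {"name": "add", "arguments": '{"a": 2, "b": 3}'}})
print(msg["content"])  # 5
```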
===== Original PR title and description ============
Original Title: Enhance message structuring and add tool call handling across providers
Original Description:
Purpose
The changes are intended to introduce unified message structuring with `pydantic` models and extend feature support for handling tool calls, ensuring compatibility and streamlining responses for multiple AI model providers. This facilitates a more structured and extensible codebase catering to various AI-powered interactions.

Critical Changes
- In `aisuite/framework/__init__.py` and `aisuite/__init__.py`, the `Message` class is now imported, setting the stage for enhanced message handling through the project using this unified structure.
- In `aisuite/framework/message.py`, a significant upgrade has occurred where `Message` evolved from a basic class to `pydantic.BaseModel`, incorporating optional fields for content, roles, and detailed structures for tool calls such as `ChatCompletionMessageToolCall`, which inherits the runtime checking and documentation benefits of `pydantic`.
- `aisuite/framework/choice.py` now outlines optional finish reasons in tool calls, refining decision capture based on enhanced messaging structures reflecting roles and actions within the workflow.
- In `aisuite/providers/anthropic_provider.py`, extensive changes are made to support translation between OpenAI and Anthropic specific tool calling requirements and handling API responses to align with the desired structured message format, utilizing the new `Message` structure.
- Enhancements in `aisuite/providers/aws_provider.py` include better message transformations for AWS-specific formatting and tool usage handling, thereby integrating the extended message and tool handling functionality across various AI model providers.
- Each provider (Groq, Mistral, AWS, and Anthropic) under `aisuite/providers/` now includes transformations and normalization flows to fit the new `Message` format, demonstrating a commitment to uniformity and extendibility in handling AI messaging responses.
- The addition of a new file `aisuite/utils/tool_manager.py` introduces a `ToolManager` class which manages tool functionalities as callable entities. This development suggests the expansion of tool handling capabilities that are likely to influence future features and integrations in the AI suite.
===== Original PR title and description ============
Original Title: Enhance message handling for various providers with new data structures
Original Description:
Purpose
This PR introduces improved support and handling of messages and tool calls across different language models and service providers, increasing interoperability and flexibility in defining and transmitting messages.
Critical Changes
- `aisuite/__init__.py` and `aisuite/framework/__init__.py`: `Message` class to initialize proper message handling.
- `aisuite/framework/message.py`: `Message` class as a Pydantic model, enhancing data validation and error handling. `Function` and `ChatCompletionMessageToolCall` to handle complex data structures for tool calls.
- `aisuite/framework/choice.py`: `finish_reason` in `Choice` class to potentially handle multiple end conditions (like "stop" or "tool_calls") and restructured the `Message` initialization to align with new attributes.
- `aisuite/providers/anthropic_provider.py` and `aisuite/providers/aws_provider.py`:
- `aisuite/providers/groq_provider.py`, `aisuite/providers/mistral_provider.py`, and `aisuite/providers/openai_provider.py`: `Message` structure, ensuring compatibility and consistent data handling across different model providers.
- `aisuite/utils/tool_manager.py`: `ToolManager` class that manages tool registry and execution, supports dynamic parameter models, and integrates well with JSON-based interoperability.

===== Original PR title and description ============
Original Title: Introduce message handling enhancements using OpenAI and AWS formats
Original Description:
Purpose
This PR introduces systematic enhancements to the handling of messages and tool calling across multiple providers like Anthropics and AWS, aligning with their specific requirements and APIs.
Critical Changes
- `aisuite/framework/message.py`: Refactored the `Message` class to use `pydantic.BaseModel` for data validation, incorporating support for optional fields and structured tool calls. Major changes include defining new `Function` and `ChatCompletionMessageToolCall` sub-models to format tool calls accurately.
- `aisuite/providers/anthropic_provider.py`: Major enhancements adjust tool handling to conform to Anthropic's API, modifying how tool calls are transformed and normalized. Added comprehensive mapping of function calls and responses between the OpenAI and Anthropic formats. Additional debug logging is implemented to trace tool-call transformations.
- `aisuite/providers/aws_provider.py`: Extended the AWS provider to convert OpenAI-formatted tool calls to AWS's expected JSON payload. Modifications ensure correct mapping and handling of tool results and assistant messages for both single and batch requests.

Additional updates across various providers (`groq_provider.py`, `mistral_provider.py`, `aws_provider.py`) improve overall integration and consistency in message-format transformation and tool-calling processes.

===== Original PR title and description ============
Original Title: Enhanced tool calling and message handling for multiple providers
Original Description:
Purpose
This PR introduces improved tool calling capabilities, richer message object structures, and compatibility updates across various LLM providers to better handle diverse message and tool call formats.
Critical Changes
- `aisuite/__init__.py` and `aisuite/framework/__init__.py`: Expose the `Message` class to enable enhanced message handling across the suite.
- `aisuite/framework/message.py`: Redefines the `Message` class using `BaseModel` for stricter validation and introduces the `ChatCompletionMessageToolCall` class to handle structured tool responses.
- `aisuite/framework/choice.py`: Updates the `Choice` class to include optional typing and an improved default setup for message instantiation, supporting a `finish_reason` attribute for completion-state tracking.
- `aisuite/providers/anthropic_provider.py` and `aisuite/providers/aws_provider.py`
- `aisuite/utils/tool_manager.py`
- `examples/SimpleToolCalling.ipynb`: Demonstrates `ToolManager` with a mock API to handle tool interactions within chat environments.

New Dependencies:
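The registry-and-execution pattern that these descriptions attribute to `ToolManager` might be sketched as below. Method names, the spec format, and the `get_weather` mock are assumptions for illustration, not the PR's actual API.

```python
# Minimal ToolManager sketch: a registry of callables with JSON-serializable
# specs, executed by name with JSON-encoded arguments.
import json
from typing import Any, Callable, Dict


class ToolManager:
    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., Any]] = {}
        self._specs: Dict[str, dict] = {}

    def register(self, func: Callable[..., Any], spec: dict) -> None:
        """Register a callable together with its JSON-schema parameter spec."""
        self._tools[func.__name__] = func
        self._specs[func.__name__] = spec

    def tools(self) -> list:
        """Return specs in the OpenAI 'tools' parameter shape."""
        return [{"type": "function", "function": s} for s in self._specs.values()]

    def execute(self, name: str, arguments: str) -> Any:
        """Run a registered tool with its JSON-encoded argument string."""
        return self._tools[name](**json.loads(arguments))


def get_weather(city: str) -> str:  # mock API, in the spirit of the notebook
    return f"Sunny in {city}"


manager = ToolManager()
manager.register(get_weather, {
    "name": "get_weather",
    "parameters": {"type": "object", "properties": {"city": {"type": "string"}}},
})
result = manager.execute("get_weather", '{"city": "Paris"}')  # "Sunny in Paris"
```

Keeping tools behind a single registry lets the chat loop pass `manager.tools()` to any provider and route every returned tool call through `manager.execute` without provider-specific glue.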
===== Original PR title and description ============
Original Title: Rcp/tool calling
Original Description:
None
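As a closing illustration of the OpenAI-to-AWS conversion that the stacked descriptions above repeatedly mention, a helper might look roughly like this. The function name is hypothetical; the output shape follows the Bedrock Converse `toolUse` content block (`toolUseId`, `name`, `input`).

```python
# Sketch: convert an OpenAI-style assistant message carrying tool_calls into
# a Bedrock Converse message whose content holds "toolUse" blocks.
import json


def openai_tool_calls_to_aws(message: dict) -> dict:
    content = []
    if message.get("content"):
        content.append({"text": message["content"]})
    for call in message.get("tool_calls", []):
        content.append({
            "toolUse": {
                "toolUseId": call["id"],
                "name": call["function"]["name"],
                # OpenAI sends arguments as a JSON string; Converse wants a dict.
                "input": json.loads(call["function"]["arguments"]),
            }
        })
    return {"role": "assistant", "content": content}


aws_msg = openai_tool_calls_to_aws({
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'},
    }],
})
```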