
"Instructor does not support multiple tool calls" Error When Using List[Model] #1111

Closed
2 tasks done
andytriboletti opened this issue Oct 23, 2024 · 5 comments

Comments

@andytriboletti

andytriboletti commented Oct 23, 2024

  • This is actually a bug report.
  • I have tried searching the documentation and have not found an answer.

What Model are you using?

Llama3.2

Describe the bug
When trying to handle multiple files using a List in my model, I'm getting an error about multiple tool calls.

Code:

class CodeFile(BaseModel):
    filename: str
    content: str

class MultiFileResponse(BaseModel):
    files: List[CodeFile]

Error message:
Error: Instructor does not support multiple tool calls, use List[Model] instead

What's the correct way to structure a model to handle multiple files when the error suggests using List[Model]? I'm trying to get multiple files (with filename and content) in a single response, but getting the "multiple tool calls" error.

The error message suggests using List[Model], but I'm already using List[CodeFile]. Would appreciate guidance on the correct way to structure this.
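For what it's worth, the model structure itself is valid; Pydantic will parse a JSON payload into the nested list without complaint, which suggests the error comes from how the client is configured rather than from the schema. A quick check, independent of instructor (the sample payload is made up):

```python
from typing import List
from pydantic import BaseModel

class CodeFile(BaseModel):
    filename: str
    content: str

class MultiFileResponse(BaseModel):
    files: List[CodeFile]

# A hand-written JSON payload matching the schema
payload = '{"files": [{"filename": "fib.py", "content": "def fib(): ..."}]}'
resp = MultiFileResponse.model_validate_json(payload)
print(resp.files[0].filename)  # fib.py
```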

To Reproduce

from pydantic import BaseModel
from typing import List
import instructor
from openai import OpenAI

class CodeFile(BaseModel):
    filename: str
    content: str

class MultiFileResponse(BaseModel):
    files: List[CodeFile]

def main():
    client = OpenAI(
        base_url="http://localhost:11434/v1",
        api_key="ollama"
    )
    
    client = instructor.patch(client)
    
    try:
        response = client.chat.completions.create(
            model="llama3.2",
            messages=[
                {"role": "system", "content": "You are a Python code generator. Return multiple Python files."},
                {"role": "user", "content": "Generate a fibonacci generator with tests"}
            ],
            response_model=MultiFileResponse
        )
        print("Response:", response)
    except Exception as e:
        print("Error:", e)

if __name__ == "__main__":
    main()

Run this script.

Expected behavior
It outputs some files.

@ivanbelenky
Contributor

ivanbelenky commented Oct 23, 2024

As in this issue, your client is wrongly created.

If you read the docs closely, you will notice that the OpenAI client is being leveraged for Ollama via an instructor.Instructor instance in this case. Furthermore, though it may not be crystal clear, the mode is set to instructor.Mode.JSON.

For LLMs without tool-calling support, not overriding the default TOOLS mode will trigger: attempt to parse tool responses --> find nothing --> throw the error you saw.

Hope this clarifies the issue.

@andytriboletti
Author

andytriboletti commented Oct 30, 2024

Thank you for the link. I was able to get my program to run. However, I noticed the program took 10 hours(!) on my MacBook Pro. Is there anything wrong? How can I speed it up? Should I open a new bug? Thank you for your help.

time python aliendevtool.py JumpSearch
Verified jumpsearch_recursive.py was created successfully
Verified jumpsearch_iterative.py was created successfully
python aliendevtool.py JumpSearch  2.06s user 0.78s system 0% cpu 10:56.93 total

Here is the code I used: https://github.com/greenrobotllc/aliendevtool/blob/main/aliendevtool.py It creates a recursive and iterative version of different algorithms.

@odellus

odellus commented Feb 12, 2025

> As in this issue, your client is wrongly created.
>
> If you read the docs closely, you will notice that the OpenAI client is being leveraged for Ollama via an instructor.Instructor instance in this case. Furthermore, though it may not be crystal clear, the mode is set to instructor.Mode.JSON.
>
> For LLMs without tool-calling support, not overriding the default TOOLS mode will trigger: attempt to parse tool responses --> find nothing --> throw the error you saw.
>
> Hope this clarifies the issue.

This link worked for me. Thank you @ivanbelenky !

@matlowai

I'm running into this same error intermittently using "mistral-small:24b-instruct-2501-q4_K_M" with Ollama when the response has None for the tool calls. This is my fix, but it might not be for everyone and may have side effects I didn't consider. It might be worth thinking about as an OLLAMA_TOOLS mode with a cls.parse_ollama_tools?

@classmethod
def parse_tools(
    cls: type[BaseModel],
    completion: ChatCompletion,
    validation_context: Optional[dict[str, Any]] = None,
    strict: Optional[bool] = None,
) -> BaseModel:
    message = completion.choices[0].message

    if hasattr(message, "refusal"):
        assert message.refusal is None, f"Unable to generate a response due to {message.refusal}"

    tool_calls = message.tool_calls or []

    # If no tool calls, examine the content
    if not tool_calls:
        if not message.content:
            raise ValueError("No tool calls and no content available in response")

        content = message.content

        # Handle plain text directly if the model has a chat_message field
        if "chat_message" in cls.model_fields:
            return cls.model_validate(
                {"chat_message": content},
                context=validation_context,
                strict=strict,
            )

        # Try JSON parsing as fallback
        try:
            content = extract_json_from_codeblock(content)
            json.loads(content)
            return cls.parse_json(completion, validation_context, strict)
        except (json.JSONDecodeError, ValueError):
            raise ValueError(
                f"Response content doesn't match expected schema and isn't valid JSON: {content}"
            )

    # Handle tool calls normally
    assert len(tool_calls) == 1, "Instructor does not support multiple tool calls, use List[Model] instead"
    tool_call = tool_calls[0]
    assert tool_call.function.name == cls.openai_schema["name"], "Tool name does not match"

    return cls.model_validate_json(
        tool_call.function.arguments,
        context=validation_context,
        strict=strict,
    )
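For reference, the `extract_json_from_codeblock` fallback used above can be approximated with a stdlib-only helper (a hypothetical sketch, not instructor's actual implementation):

```python
import json
import re

def extract_json_from_codeblock(content: str) -> str:
    """Return the payload of a ```json fenced block, or the raw text if no fence is found."""
    match = re.search(r"```(?:json)?\s*(.*?)```", content, re.DOTALL)
    return match.group(1).strip() if match else content.strip()

# A model response that wraps its JSON answer in a markdown fence:
raw = 'Here you go:\n```json\n{"filename": "fib.py", "content": "..."}\n```'
data = json.loads(extract_json_from_codeblock(raw))
print(data["filename"])  # fib.py
```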

@jxnl
Collaborator

jxnl commented Mar 5, 2025

Closing as part of repository maintenance for issues created before 2025.

@jxnl jxnl closed this as completed Mar 5, 2025
5 participants