LocalAI version:
docker image: localai/localai:v2.22.0-aio-gpu-nvidia-cuda-11
Environment, CPU architecture, OS, and Version:
docker on debian, intel i9, nvidia gpu
Describe the bug
When using functions, the AI gets stuck in a loop of function calls. It seems it does not understand the tool result.
As documented everywhere, after receiving a [TOOL_RESULT] the model should process the result and answer the user in the assistant role, not run the same function call again and again.
I'm not sure whether this is an issue with LocalAI, the model, or the chat template. I thought that maybe the tool_call_id is missing, so the model is not able to connect the tool result to the function call.
Any ideas?
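For reference, here is a minimal sketch of the message sequence I would expect the OpenAI-compatible v1/chat/completions endpoint to accept. The function name, arguments, and the call id are invented for illustration; the key point is that the tool message's tool_call_id must match the id of the assistant's tool_call so the model can link the result back to the call:

```python
import json

# Hypothetical message history for an OpenAI-compatible chat/completions
# request. All names and values below are made-up examples.
messages = [
    {"role": "user", "content": "What is the weather in Berlin?"},
    {
        # The assistant's turn that requested the tool call.
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_abc123",  # invented id for illustration
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "arguments": json.dumps({"city": "Berlin"}),
                },
            }
        ],
    },
    {
        # The tool result: tool_call_id must match the id above,
        # otherwise the model may not connect result and call.
        "role": "tool",
        "tool_call_id": "call_abc123",
        "name": "get_weather",
        "content": json.dumps({"temperature_c": 18, "condition": "sunny"}),
    },
]

payload = {"model": "my-local-model", "messages": messages}
print(json.dumps(payload, indent=2))
```

If the chat template drops or never renders the tool_call_id, the model would see an orphaned tool result, which could plausibly explain the repeated call.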
To Reproduce
Use this API call with v1/chat/completions:

The response is the same function call again:
Expected behavior
The response should come from the assistant role and process the tool/function result, instead of repeating the call.
Logs