Open WebUI: Unexpected error during intent recognition #273

Open
guilfoos opened this issue Dec 12, 2024 · 6 comments
@guilfoos

I have Open WebUI running successfully in Docker, with an Ollama engine behind it. I can log in and use the LLM as expected.

However, when I try to connect via Home Assistant, I've had no end of problems. I've been able to iteratively improve things, but I've hit a roadblock: any input in the Assist pipeline returns "Unexpected error during intent recognition".

The Open WebUI logs seem OK:

INFO [open_webui.apps.openai.main] get_all_models()
INFO [open_webui.apps.ollama.main] get_all_models()
INFO [open_webui.apps.ollama.main] url: http://host.docker.internal:7869
INFO: x.x.x.x:49364 - "POST /api/chat/completions HTTP/1.1" 200 OK

The assist pipeline logs, however, show the failure:

Logger: homeassistant.components.assist_pipeline.pipeline
Source: components/assist_pipeline/pipeline.py:1070
integration: Assist pipeline (documentation, issues)
First occurred: 9:50:36 AM (7 occurrences)
Last logged: 10:09:22 AM

Unexpected error during intent recognition
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/components/assist_pipeline/pipeline.py", line 1070, in recognize_intent
conversation_result = await conversation.async_converse(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<7 lines>...
)
^
File "/usr/src/homeassistant/homeassistant/components/conversation/agent_manager.py", line 110, in async_converse
result = await method(conversation_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/custom_components/extended_openai_conversation/init.py", line 199, in async_process
query_response = await self.query(user_input, messages, exposed_entities, 0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/custom_components/extended_openai_conversation/init.py", line 376, in query
if response.usage.total_tokens > context_threshold:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'total_tokens'
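The traceback shows that the response object's `usage` attribute is `None`: the backend returned a chat completion without token accounting, and the extension dereferences `response.usage.total_tokens` unconditionally. A defensive guard around that check might look like the following sketch (hypothetical, not the extension's actual code; the names `response` and `context_threshold` are taken from the traceback, and the dataclasses stand in for the OpenAI client's response types):

```python
# Hypothetical sketch of a defensive guard for the failing check in query()
# (init.py line 376). Open WebUI can return a completion without a "usage"
# object, so response.usage is None and .total_tokens raises AttributeError.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Usage:
    total_tokens: int

@dataclass
class ChatResponse:
    usage: Optional[Usage]

def over_context_threshold(response: ChatResponse, context_threshold: int) -> bool:
    """Return True only when usage info is present and exceeds the threshold."""
    usage = getattr(response, "usage", None)
    if usage is None:
        # No token accounting from the backend: skip context truncation
        # rather than crash the whole Assist pipeline.
        return False
    return usage.total_tokens > context_threshold

# A usage-less response (as Open WebUI returns) no longer raises:
print(over_context_threshold(ChatResponse(usage=None), 13000))    # False
print(over_context_threshold(ChatResponse(Usage(20000)), 13000))  # True
```

Whether skipping truncation or estimating token counts locally is the right fallback is a design choice for the extension; the sketch only shows why the crash happens and how to avoid it.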

The only change I've made to the service settings is the model; everything else is default. The service itself points at "http://docker-host:port/api" and uses the JWT token for the admin account in Open WebUI.

I've mucked around with other settings and have managed to produce "method not allowed" errors and model errors right in the Assist output, but I'm at my wit's end.

@denisjoshua

Following, since I receive the same error when I try to use the n8n backend.
Thanks

@roshangautam

roshangautam commented Dec 25, 2024

@guilfoos I have this extension working with my local Open WebUI/Ollama setup, so I thought I would share. Please use the API base as listed in the following documentation (it covers a different integration, but it documents the base URL that can be used for OpenAI-compatible API endpoints). Hope it works for you, good luck.
https://docs.openwebui.com/tutorials/integrations/continue-dev#add-or-update-apibase

@cl93a

cl93a commented Dec 26, 2024

I think I've tried every possible API path for my Open WebUI instance and can't get it to work :( I would really like to be able to talk to my own Open WebUI models from Home Assistant.
I tried them all:
/api
/api/v1
/ollama/v1
etc

Visiting my instance's /docs works perfectly; I can make all the API calls from that UI. But I just can't get any URL to work with extended_openai_conversation. Any other ideas on how to correctly fill in the extended_openai_conversation fields in Home Assistant?

@kiloptero

In my case it works randomly, only some of the time.

@guilfoos
Author

guilfoos commented Jan 3, 2025

After setting this aside for a bit, I moved my Ollama installation to Kubernetes, but once again I've hit this exact point. I've switched the JWT token out for an API key, but when connecting to https://hostname/api as the endpoint, I consistently get "AttributeError: 'NoneType' object has no attribute 'total_tokens'" from the assist pipeline. The only change I made from the default settings in extended_openai_conversation is the model, set to llama3, which is what I have installed and am testing with. This is an extremely frustrating issue.

Turning on debug logging, I can see the prompt (including exposed entities) and the response back from Open WebUI. So the connectivity is good, but perhaps the default prompt is no good? Or the model? I've included the tail end of my prompt (after the exposed devices are listed):

The current state of devices is provided in available devices.\nUse execute_services function only for requested action, not for current states.\nDo not execute service without user's confirmation.\nDo not restate or appreciate what user says, rather make a quick inquiry."}, {"role": "user", "content": "turn off the office lights"}]
2025-01-03 16:21:36.862 INFO (MainThread) [custom_components.extended_openai_conversation] Response {"id": "llama3:latest-de93b423-a1f2-476c-8cc2-c5f25a87e5c2", "choices": [{"finish_reason": "stop", "index": 0, "message": {"content": "flip The office lights are now turned off.", "role": "assistant"}}], "created": 1735939296, "model": "llama3:latest", "object": "chat.completion"}
2025-01-03 16:21:36.863 ERROR (MainThread) [homeassistant.components.assist_pipeline.pipeline] Unexpected error during intent recognition
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/components/assist_pipeline/pipeline.py", line 1078, in recognize_intent
conversation_result = await conversation.async_converse(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<7 lines>...
)
^
File "/usr/src/homeassistant/homeassistant/components/conversation/agent_manager.py", line 110, in async_converse
result = await method(conversation_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/custom_components/extended_openai_conversation/init.py", line 199, in async_process
query_response = await self.query(user_input, messages, exposed_entities, 0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/config/custom_components/extended_openai_conversation/init.py", line 376, in query
if response.usage.total_tokens > context_threshold:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'total_tokens'
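The debug log above shows the cause directly: the chat.completion payload Open WebUI returns has no `usage` key at all (OpenAI's own responses include one with `prompt_tokens`, `completion_tokens`, and `total_tokens`), so the OpenAI client deserializes `response.usage` as `None`. This can be checked against a trimmed copy of the logged JSON:

```python
import json

# A trimmed copy of the Open WebUI response from the debug log above.
raw = """{"id": "llama3:latest-de93b423-a1f2-476c-8cc2-c5f25a87e5c2",
 "choices": [{"finish_reason": "stop", "index": 0,
   "message": {"content": "flip The office lights are now turned off.",
               "role": "assistant"}}],
 "created": 1735939296, "model": "llama3:latest", "object": "chat.completion"}"""

response = json.loads(raw)

# No token accounting is present, which is exactly what makes
# response.usage.total_tokens blow up in the traceback above.
print("usage" in response)  # False
```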

@roshangautam - care to share your prompt? Model choice?

@guilfoos
Author

guilfoos commented Jan 3, 2025

More reading: this seems to be a function-calling problem. The logs don't indicate the functions are being provided, and I've tried the "use tools" toggle to have the extension use tools instead, since the function spec in the API has been deprecated. No change. I'm not sure how to improve my logging here to dig further.
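For anyone wanting the same visibility, Home Assistant's standard `logger` integration can raise verbosity for just the components involved. A config fragment for `configuration.yaml` (the logger names follow Home Assistant's usual `custom_components.<name>` / `homeassistant.components.<name>` convention):

```yaml
# configuration.yaml — debug logging for the integrations in this thread.
logger:
  default: warning
  logs:
    custom_components.extended_openai_conversation: debug
    homeassistant.components.assist_pipeline: debug
    homeassistant.components.conversation: debug
```

With this in place, the full prompt, the raw API response, and any function/tool payloads (or their absence) show up in the Home Assistant log.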
