
Issue: Local LLM Integration Not Available as a Conversation Agent #244

Open
haltdurchdigger opened this issue Jan 26, 2025 · 1 comment
Labels: bug (Something isn't working)

Comments

@haltdurchdigger

Description:
I’ve installed the Local LLM integration from HACS. When adding the integration, I selected Llama.cpp (HuggingFace). The model I chose is acon96/Home-3B-GGUF, and the model files were successfully downloaded to /media/models.

However, when I try to configure the voice assistant, the Local LLM option does not appear as an available conversation agent. According to the logs, the conversation platform fails to set up.

Steps to Reproduce:
1. Install the Local LLM integration via HACS.
2. Add the integration and select Llama.cpp (HuggingFace).
3. Choose the model acon96/Home-3B-GGUF.
4. Verify that the model files are downloaded to /media/models (a quick check is sketched after these steps).
5. Attempt to configure the voice assistant and look for Local LLM as a conversation agent.
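For completeness, here is a minimal Python sketch of the quick check mentioned in step 4. It only uses the /media/models path from this report (nothing integration-specific) to confirm the GGUF files are actually present and non-empty:

    # Confirm the downloaded model files exist under /media/models and are non-empty.
    from pathlib import Path

    model_dir = Path("/media/models")
    gguf_files = sorted(model_dir.rglob("*.gguf"))
    if not gguf_files:
        print(f"No .gguf files found in {model_dir}")
    for f in gguf_files:
        size_mib = f.stat().st_size / (1024 * 1024)
        print(f"{f.name}: {size_mib:.1f} MiB")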

Expected Behavior:
The Local LLM option should appear as a selectable conversation agent.

Logs:


Logger: homeassistant.components.conversation
Source: helpers/entity_platform.py:366
Integration: Conversation (documentation, issues)
First occurred: 00:51:33 (4 occurrences)
Last logged: 03:22:02

Error while setting up llama_conversation platform for conversation
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/helpers/entity_platform.py", line 366, in _async_setup_platform
    await asyncio.shield(awaitable)
  File "/config/custom_components/llama_conversation/conversation.py", line 179, in async_setup_entry
    await agent._async_load_model(entry)
  File "/config/custom_components/llama_conversation/conversation.py", line 282, in _async_load_model
    return await self.hass.async_add_executor_job(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        self._load_model, entry
        ^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/usr/local/lib/python3.13/concurrent/futures/thread.py", line 59, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/config/custom_components/llama_conversation/conversation.py", line 895, in _load_model
    validate_llama_cpp_python_installation()
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
  File "/config/custom_components/llama_conversation/utils.py", line 151, in validate_llama_cpp_python_installation
    raise Exception(f"Failed to properly initialize llama-cpp-python. (Exit code {process.exitcode}.)")
Exception: Failed to properly initialize llama-cpp-python. (Exit code -4.)
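For reference, a negative exit code from a Python multiprocessing child is the number of the signal that killed it, so exit code -4 here corresponds to SIGILL (illegal instruction). On a Celeron J4105, which has no AVX/AVX2 support, that usually means the installed llama-cpp-python wheel was built for CPU instructions this processor cannot execute. A minimal standalone sketch (a separate test script, not the integration's own code) that reproduces the same kind of check outside Home Assistant:

    # Import llama_cpp in a child process and report how the child exits.
    # A negative exit code is the signal number that killed it; -4 means SIGILL,
    # i.e. the installed wheel uses CPU instructions this machine does not support.
    import multiprocessing

    def _try_import():
        import llama_cpp  # noqa: F401  (crashes here if the build does not match the CPU)

    if __name__ == "__main__":
        proc = multiprocessing.Process(target=_try_import)
        proc.start()
        proc.join()
        print(f"llama_cpp import child exited with code {proc.exitcode}")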

Environment:
• Home Assistant Version: 2025.1.4 x86
• Local LLM Integration Version: 0.3.7
• HACS Version: 2.0.3
• Hardware: Dell Wyse 5070, Intel Celeron J4105 | 32GB DDR4

Any suggestions on troubleshooting steps or possible misconfigurations would be appreciated.

@haltdurchdigger haltdurchdigger added the bug Something isn't working label Jan 26, 2025
@acon96
Owner

acon96 commented Jan 26, 2025

Please follow the workaround listed here: https://github.com/acon96/home-llm/blob/develop/docs/Backend%20Configuration.md#wheels

I recently got a Celeron Mini PC in to try to start adding builds for those processors.
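(As a general note, not the documented workaround itself: on CPUs without AVX/AVX2 such as the J4105, rebuilding llama-cpp-python from source with the AVX instruction families disabled is another way to avoid the SIGILL crash. The CMake option names below are assumptions and differ between llama-cpp-python releases, with older releases using LLAMA_* instead of GGML_*, so check the release you install.)

    # Rough sketch: reinstall llama-cpp-python built without AVX/AVX2/FMA/F16C.
    # The CMake option names are assumptions and vary between llama-cpp-python releases.
    import os
    import subprocess
    import sys

    env = dict(os.environ)
    env["CMAKE_ARGS"] = "-DGGML_AVX=OFF -DGGML_AVX2=OFF -DGGML_FMA=OFF -DGGML_F16C=OFF"
    subprocess.run(
        [sys.executable, "-m", "pip", "install", "--no-cache-dir",
         "--force-reinstall", "llama-cpp-python"],
        env=env,
        check=True,
    )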
