Description:
I’ve installed the Local LLM integration from HACS. When adding the integration, I selected Llama.cpp (HuggingFace). The model I chose is acon96/Home-3B-GGUF, and the model files were successfully downloaded to /media/models.
However, when I try to configure the voice assistant, the Local LLM option does not appear as an available conversation agent. According to the logs, it seems that the conversation platform is not launching.
Steps to Reproduce:
1. Install the Local LLM integration via HACS.
2. Add the integration and select Llama.cpp (HuggingFace).
3. Choose the model acon96/Home-3B-GGUF.
4. Verify that the model files are downloaded to /media/models (a quick check is sketched after this list).
5. Attempt to configure the voice assistant and look for Local LLM as a conversation agent.
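For step 4, this is a minimal sketch of my own (not part of the integration) to confirm the GGUF file actually landed on disk, assuming the default /media/models download path:

# Quick sanity check: list any GGUF files in the download directory
# so we can confirm the model was really written to disk.
from pathlib import Path

models_dir = Path("/media/models")
gguf_files = sorted(models_dir.glob("*.gguf"))
if gguf_files:
    for f in gguf_files:
        print(f"{f.name}: {f.stat().st_size / 1024 / 1024:.0f} MiB")
else:
    print(f"No .gguf files found in {models_dir}")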
Expected Behavior:
The Local LLM option should appear as a selectable conversation agent.
Logs:
Logger: homeassistant.components.conversation
Source: helpers/entity_platform.py:366
Integration: Conversation (documentation, issues)
First occurred: 00:51:33 (4 occurrences)
Last logged: 03:22:02
Error while setting up llama_conversation platform for conversation
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/helpers/entity_platform.py", line 366, in _async_setup_platform
await asyncio.shield(awaitable)
File "/config/custom_components/llama_conversation/conversation.py", line 179, in async_setup_entry
await agent._async_load_model(entry)
File "/config/custom_components/llama_conversation/conversation.py", line 282, in _async_load_model
return await self.hass.async_add_executor_job(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
self._load_model, entry
^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/usr/local/lib/python3.13/concurrent/futures/thread.py", line 59, in run
result = self.fn(*self.args, **self.kwargs)
File "/config/custom_components/llama_conversation/conversation.py", line 895, in _load_model
validate_llama_cpp_python_installation()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/config/custom_components/llama_conversation/utils.py", line 151, in validate_llama_cpp_python_installation
raise Exception(f"Failed to properly initialize llama-cpp-python. (Exit code {process.exitcode}.)")
Exception: Failed to properly initialize llama-cpp-python. (Exit code -4.)
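For context, a negative exit code from Python's multiprocessing means the worker process was killed by a signal, and -4 corresponds to SIGILL (illegal instruction). That often indicates the installed llama-cpp-python build uses CPU instructions (such as AVX) that the Celeron J4105 does not support, though I have not confirmed this. The following is a minimal sketch of my own, assuming llama-cpp-python is installed in the same Python environment Home Assistant uses, that tries to reproduce the crash outside the integration:

import multiprocessing
import signal

def _try_import():
    # Importing llama_cpp loads the compiled shared library; if the wheel was
    # built for CPU features this machine lacks, the worker is killed with
    # SIGILL before the import returns.
    import llama_cpp  # noqa: F401

if __name__ == "__main__":
    proc = multiprocessing.Process(target=_try_import)
    proc.start()
    proc.join()
    # multiprocessing reports death-by-signal as a negative exit code,
    # so -4 here would match the "Exit code -4" in the log above (SIGILL == 4).
    if proc.exitcode == -signal.SIGILL:
        print("llama_cpp import was killed by SIGILL (illegal instruction)")
    else:
        print(f"llama_cpp import exited with code {proc.exitcode}")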
Environment:
• Home Assistant Version: 2025.1.4 x86
• Local LLM Integration Version: 0.3.7
• HACS Version: 2.0.3
• Hardware: Dell Wyse 5070, Intel Celeron J4105 | 32 GB DDR4
Any suggestions on troubleshooting steps or possible misconfigurations would be appreciated.