Local model via llama-cpp-python support #72
I haven't tried llama-cpp-python yet, but the error message above happens when the LLM tries to call a service with

Since I don't know much about LLMs, I don't have the right answer. I will try this as well later! Also, I want to know what prompt you have used, probably the default prompt?
I think that my model does not know anything about Home Assistant. Is there a way to provide service names with descriptions in the "tool spec"? For example, for the light domain with a list of its services?
I think so.
Maybe you can try setting enum, like this:

- spec:
    name: execute_services
    description: Use this function to execute service of devices in Home Assistant.
    parameters:
      type: object
      properties:
        list:
          type: array
          items:
            type: object
            properties:
              domain:
                type: string
                description: The domain of the service
                enum:
                  - light
                  - switch
              service:
                type: string
                description: The service to be called
                enum:
                  - turn_on
                  - turn_off
              service_data:
                type: object
                description: The service data object to indicate what to control.
                properties:
                  entity_id:
                    type: array
                    items:
                      type: string
                    description: The entity_id retrieved from available devices. It must start with domain, followed by dot character.
                required:
                  - entity_id
            required:
              - domain
              - service
              - service_data
  function:
    type: native
    name: execute_service
OK, after the model change and those fixes I've got an HA error. My debug shows:

Maybe we can trim extra characters from function names?
I think so.
Without modifying the code, it's not possible. Since providing enums in the spec is just a workaround, it would result in problem after problem.
I've changed the model and now it does not need the enum anymore.
Maybe I can try to fix this trim issue myself. Can you help me find the right place to start in your code?
I'm not certain where to put it, but this is the place that compares function names.
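For reference, a rough, hypothetical sketch of what such a fix could look like: normalize the name the model returns (strip whitespace and stray characters) before comparing it against the configured functions. The helper names and the call site below are assumptions, not the integration's actual code.

import re

def normalize_function_name(raw_name: str) -> str:
    """Strip whitespace and stray non-word characters that some local models
    wrap around function names (e.g. quotes, backticks, or trailing dots)."""
    return re.sub(r"^\W+|\W+$", "", raw_name.strip())

# Hypothetical call site: normalize before looking up the configured spec.
def find_function(raw_name: str, functions: dict):
    """Look up a configured function by its normalized name."""
    return functions.get(normalize_function_name(raw_name))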
Thanks! But I have to dig more... Any clues on those logs?
Yeah, I thought that this was an error and changed it to execute_serices, thanks! Now the extra function calling is working OK, but after the call there is this response:
After the function is called, it makes another request to the LLM to get a response message. Probably it's not aware that the function call succeeded, even though we resulted in
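For context, this is roughly how the follow-up request looks in the classic OpenAI function-calling flow. The entity_id, arguments, and result payload below are illustrative assumptions; the exact messages this integration builds may differ.

messages = [
    {"role": "user", "content": "turn on 'wyspa' light"},
    # First completion: the model answers with a function call instead of text.
    {
        "role": "assistant",
        "content": None,
        "function_call": {
            "name": "execute_services",
            "arguments": '{"list": [{"domain": "light", "service": "turn_on", '
                         '"service_data": {"entity_id": ["light.wyspa"]}}]}',
        },
    },
    # The caller runs the service and appends the result...
    {"role": "function", "name": "execute_services", "content": '{"success": true}'},
]
# ...then the whole history is sent again so the model can write the final answer.
# A model that was never trained on this flow tends to get confused at this step.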
This did not help, but:
We can't get a response message and a function call at the same time.
I'm using LocalAI and this integration works with models

Is it a model problem or an API problem, and how can it be fixed?
Maybe you can try dolphin-2.7-mixtral-8x7b, as Anto mentioned. Since I haven't tried LocalAI much, I also need to try those.
@OperKH yes, this model works: https://huggingface.co/TheBloke/dolphin-2.7-mixtral-8x7b-GGUF BUT it cannot perform function services. Have you tried the functionary v2 model? I cannot get a template for the model to work with LocalAI. Supposedly it handles functions/tools better:
@OperKH did you get any of the functions/tools to work? Or was it just communication/answers?
It does functions only. It does not chat well, if at all, if I remember correctly.
llama.cpp is now the best backend for open-source models, and llama-cpp-python (used as the Python backend for Python-powered GUIs) has built-in OpenAI API support, including function (tool) calling:
https://llama-cpp-python.readthedocs.io/en/latest/server/#function-calling
https://github.com/abetlen/llama-cpp-python#function-calling
There is also Docker support for this tool, so I wanted to get help running all of these things together.
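For illustration, here is a minimal sketch of calling that OpenAI-compatible endpoint with tools from Python. The port, model name, and the toy tool schema are assumptions; they are not what this integration actually sends.

from openai import OpenAI

# Assumes a llama-cpp-python server on localhost:8000, started with a chat
# format that supports tool calls (e.g. functionary).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="local-model",  # many single-model servers ignore this value
    messages=[{"role": "user", "content": "Turn on the kitchen light"}],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "execute_services",
                "description": "Execute a Home Assistant service.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "domain": {"type": "string"},
                        "service": {"type": "string"},
                        "service_data": {"type": "object"},
                    },
                    "required": ["domain", "service"],
                },
            },
        }
    ],
    tool_choice="auto",
)
print(response.choices[0].message.tool_calls)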
I have read #17, but that is mostly about LocalAI. LocalAI uses llama-cpp-python as a backend, so why not take a shortcut and use llama-cpp-python directly?
My docker-compose looks like this (with llama-cpp-python git-cloned; if you do not need GPU support, just use the commented #image instead of build:):
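As a rough, hypothetical sketch of such a docker-compose setup (the image tag, build path, model file, and environment variables below are assumptions to adapt to your own host, and GPU device passthrough is omitted):

services:
  llama-cpp-python:
    # image: ghcr.io/abetlen/llama-cpp-python:latest  # prebuilt image if you do not need GPU
    build: ./llama-cpp-python                          # path to the cloned repo / its GPU Dockerfile
    ports:
      - "8000:8000"
    volumes:
      - ./models:/models
    environment:
      - MODEL=/models/functionary-small-v2.2.q4_0.gguf # placeholder model file
      - CHAT_FORMAT=functionary                        # a chat format that supports function calling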
But I've got answers like:
turn on "wyspa" light
Something went wrong: Service light.on not found.
where is paris?
Something went wrong: Service location.navigate not found.
Maybe something is wrong with my prompt?