Function spec example for say_tts #176
Conversation
Thanks for your contribution!
It will be released in the next version.
Question on this, and I know it's not its intended purpose, but could this be modified and used in a way that my input device and output device are separate entities? Example: Let me know your thoughts.
I'm looking for a similar solution.
If this PR is merged, you can use input
Sure, but how would I grab the response that OpenAI is sending back?
Currently, as far as I know, the Assist pipeline doesn't support responding through another speaker.
Check out the Stream Assist integration; I am using it with this. It creates a pipeline from any camera entity or RTSP stream and lets you select the output speaker. That is the basic function. However, I wanted my Alexa to be the output, and that does not work with the TTS service calls Stream Assist makes on the output device, so I had to install the Alexa Media Player addon and use the custom command so Alexa uses "Simon says". For this to work, I have an automation that is triggered when the pipeline (Stream Assist) reports that text is detected on the STT entity; if that is true, it parses the text from the TTS entity with templating and sends it to whatever device you want, in my case Alexa. That sounds more confusing than it is, lol.
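The automation described above might be sketched roughly as follows. This is only an illustration under assumptions: the entity IDs (`sensor.stream_assist_stt`, `sensor.stream_assist_tts`) and the `notify.alexa_media_living_room` target are hypothetical placeholders, and the exact Stream Assist entity names and Alexa Media Player behavior may differ in your setup.

```yaml
# Hypothetical sketch of the automation described in the comment above.
# All entity IDs are placeholders; adjust to your Stream Assist and
# Alexa Media Player configuration.
automation:
  - alias: "Forward Stream Assist response to Alexa"
    trigger:
      # Fire when the (assumed) STT entity picks up new text
      - platform: state
        entity_id: sensor.stream_assist_stt
    condition:
      # Only continue if actual text was detected
      - condition: template
        value_template: >
          {{ trigger.to_state.state not in ['unknown', 'unavailable', ''] }}
    action:
      # Parse the response text from the (assumed) TTS entity with
      # templating and speak it on Alexa via Alexa Media Player
      - service: notify.alexa_media_living_room
        data:
          message: "Simon says {{ states('sensor.stream_assist_tts') }}"
          data:
            type: tts
```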
A flexible say_tts example. This allows the LLM to pass in any TTS entity so it can direct the message to where it is needed.
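For readers unfamiliar with the function spec format, a flexible say_tts function along these lines might look roughly like the sketch below. The parameter names, descriptions, and example entity IDs are assumptions for illustration; the actual spec merged in this PR may differ. The `tts.speak` service call targeting a TTS entity is standard Home Assistant, but check the merged example for the authoritative version.

```yaml
# Sketch of a flexible say_tts function spec.
# Parameter names and entity IDs here are illustrative assumptions.
- spec:
    name: say_tts
    description: Speak a message on a specific media player using a given TTS entity.
    parameters:
      type: object
      properties:
        message:
          type: string
          description: The message to speak.
        tts_entity_id:
          type: string
          description: The TTS entity to use, e.g. tts.cloud_say.
        media_player_entity_id:
          type: string
          description: The media player that should play the message.
      required:
        - message
        - tts_entity_id
        - media_player_entity_id
  function:
    type: script
    sequence:
      # tts.speak targets the TTS entity and receives the destination
      # media player in its data, so the LLM can route the message anywhere
      - service: tts.speak
        data:
          media_player_entity_id: "{{ media_player_entity_id }}"
          message: "{{ message }}"
        target:
          entity_id: "{{ tts_entity_id }}"
```

Because the LLM supplies `tts_entity_id` and `media_player_entity_id` at call time, one function covers every speaker in the house instead of hard-coding a single output device.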