-
if __name__ == "__main__":
    from RealtimeTTS import CoquiEngine, TextToAudioStream

    def dummy_generator():
        yield "Hey guys! These here are realtime spoken sentences based on local text synthesis. "
        yield "With a local, neuronal, cloned voice. So every spoken sentence sounds unique."

    source_voice = "my_voice_reference.wav"
    coqui_engine = CoquiEngine(voice=source_voice)

    stream = TextToAudioStream(coqui_engine)
    stream.feed(dummy_generator())
    stream.play(output_wavfile=stream.engine.engine_name + "_output.wav")

    coqui_engine.shutdown()

my_voice_reference.wav should be a 22050 Hz, mono, 16-bit WAV file.
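If your reference recording is in a different format, you can convert it first. Below is a minimal sketch using pydub (an assumption on my side; any resampling tool such as ffmpeg or sox works just as well, and the input file name is a placeholder):

# Convert an arbitrary recording into the 22050 Hz / mono / 16-bit WAV
# format expected for the reference voice.
# Assumes pydub is installed and ffmpeg is available on the PATH.
from pydub import AudioSegment

audio = AudioSegment.from_file("raw_recording.m4a")  # placeholder input file
audio = (
    audio
    .set_frame_rate(22050)  # resample to 22050 Hz
    .set_channels(1)        # downmix to mono
    .set_sample_width(2)    # 16-bit samples (2 bytes)
)
audio.export("my_voice_reference.wav", format="wav")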
-
Hello, the documentation states that it is possible to upload an audio clip of 5 to 30 seconds to use as a reference voice when generating TTS to a WAV file, but how do I pass this clip to Coqui when using RealtimeTTS?
I tried using some of the Coqui variables in example_fast_api but was unsuccessful. Do you have a simple example? I would like to take a source voice file and some text, and generate a new audio file spoken in the voice of the source file.
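A minimal sketch of what this could look like, based on the snippet above: pass the path of the reference clip as the voice argument of CoquiEngine, feed the text, and write the result to a WAV file. The file names and the sample text are placeholders, and it assumes feed() also accepts a plain string rather than only a generator:

# Clone the voice from a short reference clip and speak arbitrary text into a WAV file.
# "speaker_reference.wav" and "cloned_output.wav" are placeholder file names.
from RealtimeTTS import CoquiEngine, TextToAudioStream

if __name__ == "__main__":
    # 5-30 s reference clip, ideally 22050 Hz, mono, 16-bit
    engine = CoquiEngine(voice="speaker_reference.wav")
    stream = TextToAudioStream(engine)

    stream.feed("This sentence should come out in the reference speaker's voice.")
    stream.play(output_wavfile="cloned_output.wav")  # play and save to disk at the same time

    engine.shutdown()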