Hello, Mr. Kolja Beigel,
Thank you for your nice program, RealtimeTTS. I watched the demo, Short_RealtimeTTS_Demo.mov. It covered the following engines sequentially: Coqui TTS, Azure TTS, Elevenlabs and System TTS. The rest, gTTS and Parler TTS, weren't included in the demo, but that is no issue; the four engines demonstrated are sufficient for my purpose. The first three, Coqui TTS, Azure TTS and Elevenlabs, are very good and natural, but are they available during offline operation? Also, could the text pasted into the textbox for reading as a stream be endless, or is there a maximum limit? I would try:
Hi,
the demo is old, gTTS and Parler were integrated after that. I won't create another video for every new engine.
Azure TTS and Elevenlabs are APIs and can't be made "offline" because they are services. Coqui works offline.
RealtimeTTS does not impose a specific limit on text size. In practice, the constraints are dictated by your system's memory capacity and the capabilities of the textbox UI component, so they should be enormous; for most use cases they are unlikely to be an issue.
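For illustration, here is a minimal sketch of feeding a long (or open-ended) text stream to the offline Coqui engine. It follows the basic TextToAudioStream / CoquiEngine usage; the endless_text generator is just a hypothetical placeholder for whatever source your textbox provides:

```python
from RealtimeTTS import TextToAudioStream, CoquiEngine

def endless_text():
    """Hypothetical text source, e.g. whatever the textbox keeps supplying."""
    while True:
        yield "This sentence is synthesized locally by the Coqui engine. "

engine = CoquiEngine()              # local engine; works offline once the model is downloaded
stream = TextToAudioStream(engine)

stream.feed(endless_text())         # feed() accepts plain strings or generators
stream.play_async()                 # non-blocking playback; stream.play() would block
```

Because feed() accepts generators as well as plain strings, the text source can keep producing chunks indefinitely, and playback can be interrupted at any point with stream.stop().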
RealtimeTTS has 100k downloads and is used by huge projects like Open Interpreter 01 or Neuro.