Pickle data was truncated #99
Replies: 4 comments 3 replies
-
This probably has to do with the usage of specific_model. Does the unchanged coqui_test.py work? If you use specific_model but don't specify the local_models_path parameter, specific_model is expected to be an XTTS model that can be downloaded from Hugging Face. So only "v2.0.0", "v2.0.1", "v2.0.2" and "v2.0.3" are supported unless you use the local_models_path parameter. Greetings, Kolja
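A sketch of that rule as code (the `make_engine` helper is hypothetical, written only to illustrate the constraint; the `specific_model` and `local_models_path` parameters are the ones named above):

```python
def make_engine(specific_model="v2.0.1", local_models_path=None):
    """Hypothetical helper illustrating the rule above: without
    local_models_path, specific_model must be one of the XTTS
    releases downloadable from Hugging Face."""
    supported = ("v2.0.0", "v2.0.1", "v2.0.2", "v2.0.3")
    if local_models_path is None and specific_model not in supported:
        raise ValueError(
            f"specific_model={specific_model!r} requires local_models_path, "
            f"or use one of {supported}"
        )
    # Deferred import: only needed once the arguments are valid.
    from RealtimeTTS import CoquiEngine
    return CoquiEngine(specific_model=specific_model,
                       local_models_path=local_models_path)
```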
-
Simplest possible reason: maybe you did not wait long enough; CoquiEngine needs a bit of time to start. If this does not work, please add maximum logging to see what exactly happens:

```python
if __name__ == '__main__':
    import logging
    from RealtimeTTS import TextToAudioStream, CoquiEngine

    def dummy_generator():
        yield "Hey guys! These here are realtime spoken sentences based on local text synthesis. "
        yield "With a local, neuronal, cloned voice. So every spoken sentence sounds unique."

    logging.basicConfig(level=logging.DEBUG)
    engine = CoquiEngine(level=logging.DEBUG)
    stream = TextToAudioStream(engine)
    print("Starting to play stream")
    stream.feed(dummy_generator()).play(log_synthesized_text=True)
    engine.shutdown()
```

Please post the last lines of that output so I can see better where CoquiEngine stops in initialization.
-
Thanks for posting this. Something goes wrong within the Coqui TTS library this repo depends on. (The "xtts load_checkpoint" debug message is written directly before calling the XTTS load_checkpoint method from Coqui TTS.) This is unfortunately a bit out of scope for me, because it happens in another library, which RealtimeTTS depends on. So I suggest: first try some basic Coqui TTS examples to verify Coqui TTS is installed and working on your system, then try an example using load_checkpoint. If the load_checkpoint example works, then RealtimeTTS should too. If it does not work, it's related to the TTS library and out of scope for me; you would then need to log an issue in their GitHub repo.
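For reference, a load_checkpoint sketch along the lines of the Coqui TTS documentation (the path is a placeholder, and the exact API may differ between TTS versions):

```python
def load_xtts(checkpoint_dir: str):
    """Load an XTTS model directly with Coqui TTS, bypassing RealtimeTTS.
    checkpoint_dir must contain config.json and the model files."""
    import os
    from TTS.tts.configs.xtts_config import XttsConfig
    from TTS.tts.models.xtts import Xtts

    config = XttsConfig()
    config.load_json(os.path.join(checkpoint_dir, "config.json"))
    model = Xtts.init_from_config(config)
    model.load_checkpoint(config, checkpoint_dir=checkpoint_dir, eval=True)
    return model

# Usage (requires a downloaded XTTS model directory):
# model = load_xtts("/path/to/XTTS-v2")
```

If this raises the same error, the problem is reproducible without RealtimeTTS.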
-
My name is Joseph Dingess, known as With-Darkness on GitHub. I am eager to use your module. Following your recommendation, I started with a basic Coqui TTS example. Specifically, I am interested in using an English model rather than the multilingual models, so I proceeded accordingly.
```python
import torch
from TTS.api import TTS
import logging

OUTPUT_PATH = 'audio1.wav'

# Get device
device = "cuda" if torch.cuda.is_available() else "cpu"

# Init TTS with the target model name
tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC", progress_bar=False).to(device)

# Run TTS
tts.tts_to_file(text="Hello Guys I am potter", file_path=OUTPUT_PATH)
```
It works well, so I tried the same model with your module as follows.
```python
if __name__ == '__main__':
    import logging
    from RealtimeTTS import TextToAudioStream, CoquiEngine

    def dummy_generator():
        yield "Hey guys! These here are realtime spoken sentences based on local text synthesis. "
        yield "With a local, neuronal, cloned voice. So every spoken sentence sounds unique."

    logging.basicConfig(level=logging.DEBUG)
    engine = CoquiEngine(level=logging.DEBUG, model_name="tts_models/en/ljspeech/tacotron2-DDC", specific_model="")
    stream = TextToAudioStream(engine)
    print("Starting to play stream")
    stream.feed(dummy_generator()).play(log_synthesized_text=True)
    engine.shutdown()
```
But I still get an error like this:
[screenshot of the error attached to the original email]
Please help me. If you want to contact me outside of email, you can find me on Discord: josephdingess0926.
-
I am happy to use your RealtimeTTS package, but I faced an error I couldn't fix. I want to use CoquiEngine, so I executed coqui_test.py in the tests folder, after editing the test as follows:
```python
from RealtimeTTS import TextToAudioStream, CoquiEngine
import multiprocessing

if __name__ == '__main__':
    multiprocessing.freeze_support()  # Ensure proper multiprocessing initialization
```
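For context (a stdlib sketch, not RealtimeTTS code): the `__main__` guard matters because engines like CoquiEngine start worker processes, and on Windows each child re-imports the main module. The general pattern looks like this:

```python
import multiprocessing

def worker(queue):
    # Runs in a child process, analogous to an engine's synthesis worker.
    queue.put("hello from child")

if __name__ == "__main__":
    # Without this guard, spawning a child on Windows re-executes the module's
    # top level in the child and recursively spawns more processes.
    multiprocessing.freeze_support()  # no-op unless running as a frozen exe
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=worker, args=(queue,))
    proc.start()
    print(queue.get())
    proc.join()
```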
Unfortunately, I got an error: "pickle data was truncated". Please help me.
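For context (general Python behavior, not specific to this trace): "pickle data was truncated" is raised by Python's pickle module when a serialized byte stream ends before it is complete, which can happen when a worker process dies while handing data back to its parent. A minimal stdlib reproduction:

```python
import pickle

# Serialize a payload large enough to exercise pickle's stream handling.
payload = pickle.dumps(b"x" * 200_000)

# Simulate a stream that was cut off mid-transfer.
truncated = payload[: len(payload) // 2]

try:
    pickle.loads(truncated)
except (pickle.UnpicklingError, EOFError) as exc:
    # Depending on where the cut lands, this is typically
    # UnpicklingError("pickle data was truncated") or EOFError.
    print(f"{type(exc).__name__}: {exc}")
```

So the error usually points at the engine's child process failing during startup rather than at pickle itself.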