
can't load parler tts models #508

Open

Rubrum7 opened this issue Jan 31, 2025 · 1 comment


Rubrum7 commented Jan 31, 2025

Describe the bug

After downloading the model from the TTS engine settings, I switched the TTS engine to Parler and selected the large model (I did the same with the small version). However, when I tried to load it, I encountered this error in the console:
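The log below first reports that no model was found in the models folder, even though one was downloaded. As a quick sanity check (a minimal sketch, not AllTalk code; the folder layout and the `config.json` convention are assumptions based on how Hugging Face-style checkpoints are usually stored), you could list which subfolders actually look like complete model downloads:

```python
from pathlib import Path

def find_parler_models(models_dir: str) -> list[str]:
    """Return subfolder names that look like downloaded Parler TTS models.

    A folder counts as a model here if it contains a config.json,
    which Hugging Face-style checkpoints normally ship with.
    """
    root = Path(models_dir)
    if not root.is_dir():
        return []
    return sorted(
        p.name
        for p in root.iterdir()
        if p.is_dir() and (p / "config.json").exists()
    )
```

If this returns an empty list for the folder the engine is configured to scan, the download either failed or landed somewhere the engine does not look, which would explain the "No Models Available" entry in the dropdown.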

Text/logs
Server Ready
[AllTalk TTS]
[AllTalk TTS] Swapping TTS Engine. Please wait.
[AllTalk TTS]
[AllTalk ENG] Transcoding : ffmpeg found
[AllTalk ENG] DeepSpeed version : 0.14.0+ce78a63
[AllTalk ENG] Python Version : 3.11.11
[AllTalk ENG] PyTorch Version : 2.2.1
[AllTalk ENG] CUDA Version : 12.1
[AllTalk ENG]
[AllTalk ENG] Error: Selected model 'No Models Available' not found in the models folder.
[AllTalk TTS]
[AllTalk TTS] Server Ready
[AllTalk TTS]
[AllTalk TTS] Changing model loaded. Please wait.
[AllTalk TTS]
[AllTalk ENG] Model/Engine : parler-tts-large-v1 loading into cuda
[AllTalk ENG] Loading model with dtype: float16
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.53it/s]
[AllTalk TTS] Warning Error during request to webserver process: Status code:
HTTPConnectionPool(host='127.0.0.1', port=7851): Read timed out. (read timeout=30)
Traceback (most recent call last):
File "C:\alltalk_tts\alltalk_environment\env\Lib\site-packages\gradio\queueing.py", line 536, in process_events
response = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\alltalk_tts\alltalk_environment\env\Lib\site-packages\gradio\route_utils.py", line 322, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\alltalk_tts\alltalk_environment\env\Lib\site-packages\gradio\blocks.py", line 1945, in process_api
data = await self.postprocess_data(block_fn, result["prediction"], state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\alltalk_tts\alltalk_environment\env\Lib\site-packages\gradio\blocks.py", line 1717, in postprocess_data
self.validate_outputs(block_fn, predictions) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\alltalk_tts\alltalk_environment\env\Lib\site-packages\gradio\blocks.py", line 1691, in validate_outputs
raise ValueError(
ValueError: An event handler (change_model_loaded) didn't receive enough output values (needed: 13, received: 1).
Wanted outputs:
[<gradio.components.textbox.Textbox object at 0x0000024E7F6C5790>, <gradio.components.dropdown.Dropdown object at 0x0000024E0163B590>, <gradio.components.dropdown.Dropdown object at 0x0000024E0163A250>, <gradio.components.dropdown.Dropdown object at 0x0000024E01646550>, <gradio.components.dropdown.Dropdown object at 0x0000024E01682390>, <gradio.components.dropdown.Dropdown object at 0x0000024E016D6F90>, <gradio.components.slider.Slider object at 0x0000024E016F5E10>, <gradio.components.slider.Slider object at 0x0000024E016F6650>, <gradio.components.slider.Slider object at 0x0000024E016C1B90>, <gradio.components.slider.Slider object at 0x0000024E016F6A90>, <gradio.components.dropdown.Dropdown object at 0x0000024E016F4A90>, <gradio.components.dropdown.Dropdown object at 0x0000024E01646690>, <gradio.components.dropdown.Dropdown object at 0x0000024E01682450>]
Received outputs:
[{'status': 'error', 'message': "HTTPConnectionPool(host='127.0.0.1', port=7851): Read timed out. (read timeout=30)"}]
[AllTalk ENG] GPU Memory Used: 4469.10 MB
[AllTalk ENG] Load time : 35.50 seconds.
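Reading the traceback, the chain of failures seems to be: the model took 35.5 s to load, the Gradio UI's request to the webserver process timed out at 30 s, and the handler therefore returned a single error dict instead of values for all 13 wired output components, which is what trips Gradio's output validation. A minimal illustration of that last check (this is a simplified stand-in, not Gradio's actual `validate_outputs` implementation):

```python
def validate_outputs(expected_count: int, predictions: list) -> None:
    """Raise ValueError when a handler returns fewer values than the
    number of output components it is wired to, mirroring the check
    that produced the traceback above."""
    if len(predictions) != expected_count:
        raise ValueError(
            "An event handler didn't receive enough output values "
            f"(needed: {expected_count}, received: {len(predictions)})."
        )

# The handler was wired to 13 components but the timeout path
# returned a single error dict, so validation fails:
timeout_result = [{"status": "error",
                   "message": "Read timed out. (read timeout=30)"}]
```

This suggests the underlying problem is the 30 s read timeout being shorter than the model's load time, and the ValueError is a secondary symptom of the error dict not matching the expected outputs.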

Desktop (please complete the following information):
Installed v2 using the standalone installation.

My specs:
Processor: 12th Gen Intel(R) Core(TM) i9-12900HX 2.30 GHz
RAM: 32.0 GB
OS: Windows 11
GPU: RTX 4060


unifirer commented Feb 4, 2025

Same here; it broke the whole program and I had to reinstall.
