Describe the bug
After downloading the model from the TTS engine settings, I swapped the TTS engine to Parler and selected the large version (I did the same with the small version). However, when I try to load it, I get this error in the console:
Text/logs
Server Ready
[AllTalk TTS]
[AllTalk TTS] Swapping TTS Engine. Please wait.
[AllTalk TTS]
[AllTalk ENG] Transcoding : ffmpeg found
[AllTalk ENG] DeepSpeed version : 0.14.0+ce78a63
[AllTalk ENG] Python Version : 3.11.11
[AllTalk ENG] PyTorch Version : 2.2.1
[AllTalk ENG] CUDA Version : 12.1
[AllTalk ENG]
[AllTalk ENG] Error: Selected model 'No Models Available' not found in the models folder.
[AllTalk TTS]
[AllTalk TTS] Server Ready
[AllTalk TTS]
[AllTalk TTS] Changing model loaded. Please wait.
[AllTalk TTS]
[AllTalk ENG] Model/Engine : parler-tts-large-v1 loading into cuda
[AllTalk ENG] Loading model with dtype: float16
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.53it/s]
[AllTalk TTS] Warning Error during request to webserver process: Status code:
HTTPConnectionPool(host='127.0.0.1', port=7851): Read timed out. (read timeout=30)
Traceback (most recent call last):
File "C:\alltalk_tts\alltalk_environment\env\Lib\site-packages\gradio\queueing.py", line 536, in process_events
response = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\alltalk_tts\alltalk_environment\env\Lib\site-packages\gradio\route_utils.py", line 322, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\alltalk_tts\alltalk_environment\env\Lib\site-packages\gradio\blocks.py", line 1945, in process_api
data = await self.postprocess_data(block_fn, result["prediction"], state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\alltalk_tts\alltalk_environment\env\Lib\site-packages\gradio\blocks.py", line 1717, in postprocess_data
self.validate_outputs(block_fn, predictions) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\alltalk_tts\alltalk_environment\env\Lib\site-packages\gradio\blocks.py", line 1691, in validate_outputs
raise ValueError(
ValueError: An event handler (change_model_loaded) didn't receive enough output values (needed: 13, received: 1).
Wanted outputs:
[<gradio.components.textbox.Textbox object at 0x0000024E7F6C5790>, <gradio.components.dropdown.Dropdown object at 0x0000024E0163B590>, <gradio.components.dropdown.Dropdown object at 0x0000024E0163A250>, <gradio.components.dropdown.Dropdown object at 0x0000024E01646550>, <gradio.components.dropdown.Dropdown object at 0x0000024E01682390>, <gradio.components.dropdown.Dropdown object at 0x0000024E016D6F90>, <gradio.components.slider.Slider object at 0x0000024E016F5E10>, <gradio.components.slider.Slider object at 0x0000024E016F6650>, <gradio.components.slider.Slider object at 0x0000024E016C1B90>, <gradio.components.slider.Slider object at 0x0000024E016F6A90>, <gradio.components.dropdown.Dropdown object at 0x0000024E016F4A90>, <gradio.components.dropdown.Dropdown object at 0x0000024E01646690>, <gradio.components.dropdown.Dropdown object at 0x0000024E01682450>]
Received outputs:
[{'status': 'error', 'message': "HTTPConnectionPool(host='127.0.0.1', port=7851): Read timed out. (read timeout=30)"}]
[AllTalk ENG] GPU Memory Used: 4469.10 MB
[AllTalk ENG] Load time : 35.50 seconds.
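For what it's worth, the log itself suggests why the two errors show up together: the model load takes 35.50 seconds, but the UI's request to the webserver process uses a 30-second read timeout, so the request gives up before the (otherwise successful) load finishes, and the change_model_loaded handler ends up handing Gradio a single error dict instead of the 13 output values it is wired to. A minimal sketch of that failure mode, assuming the Gradio UI talks to the server with requests; the URL and parameter names below are illustrative guesses, not AllTalk's real API:

import requests

API_URL = "http://127.0.0.1:7851/api/reload"  # hypothetical endpoint, for illustration only

def swap_model(model_name: str, timeout_s: float = 30.0):
    # Loading parler-tts-large-v1 took ~35.5 s above, so a 30 s read timeout
    # fires even though the load itself succeeds in the engine process.
    try:
        resp = requests.get(API_URL, params={"tts_method": model_name}, timeout=timeout_s)
        resp.raise_for_status()
        return resp.json()
    except requests.exceptions.ReadTimeout as err:
        # This single error dict is what Gradio receives while expecting 13
        # outputs, hence "needed: 13, received: 1".
        return {"status": "error", "message": str(err)}

# e.g. swap_model("parler-tts-large-v1", timeout_s=60) would outlast a ~35 s load.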
Desktop (please complete the following information):
Installed v2 using the standalone installation.
My specs:
Processor: 12th Gen Intel(R) Core(TM) i9-12900HX 2.30 GHz
RAM: 32 GB
OS: Windows 11
GPU: RTX 4060