Does a video larger than 1 MB blow up a 24 GB GPU, or is it just me? #21
Comments
Your video is probably too wide. (Line 170 in 9fe1be7)
Mine doesn't work either.
As I wrote in this issue, 512x512 runs fine on a 24 GB GPU.
Okay 🫡
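The "too wide" diagnosis above can be made concrete with a rough back-of-the-envelope sketch of the self-attention activation memory at the highest-resolution UNet block. This is an illustration only: the fp16 element size, 8x latent downscale, head count, and frame batch are assumptions for the sketch, not values taken from the Rerender_A_Video code, and memory-efficient attention kernels may avoid materializing the full matrix.

```python
# Worst-case size of the (tokens x tokens) attention matrix, to illustrate
# why attention memory grows quadratically with frame resolution.
def attention_memory_gib(width, height, batch_frames=8, heads=8, bytes_per_el=2):
    """Approximate memory for one dense attention matrix, in GiB."""
    tokens = (width // 8) * (height // 8)  # latent spatial positions (8x downscale)
    attn_matrix = batch_frames * heads * tokens * tokens * bytes_per_el
    return attn_matrix / 1024**3

print(f"512x512:  {attention_memory_gib(512, 512):.1f} GiB")   # 2.0 GiB
print(f"768x1024: {attention_memory_gib(768, 1024):.1f} GiB")  # 18.0 GiB
```

Under these assumptions, going from 512x512 to 768x1024 multiplies the attention matrix by roughly 9x, which matches the pattern of 512x512 fitting in 24 GB while wider inputs do not.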
Traceback (most recent call last):
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\gradio\queueing.py", line 388, in call_prediction
output = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\gradio\route_utils.py", line 219, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\gradio\blocks.py", line 1437, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\gradio\blocks.py", line 1109, in call_function
prediction = await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\gradio\utils.py", line 650, in wrapper
response = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\webUI.py", line 159, in process
keypath = process1(*args)
^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\webUI.py", line 280, in process1
latents = inference(global_state.pipe, global_state.controlnet, global_state.frescoProc,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\src\pipe_FRESCO.py", line 201, in inference
noise_pred = pipe.unet(
^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\src\diffusion_hacked.py", line 787, in forward
sample = upsample_block(
^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\src\free_lunch_utils.py", line 346, in forward
hidden_states = attn(
^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\diffusers\models\transformer_2d.py", line 292, in forward
hidden_states = block(
^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\diffusers\models\attention.py", line 155, in forward
attn_output = self.attn1(
^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\diffusers\models\attention_processor.py", line 322, in forward
return self.processor(
^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\src\diffusion_hacked.py", line 281, in __call__
query = F.scaled_dot_product_attention(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 7.66 GiB (GPU 0; 23.99 GiB total capacity; 15.29 GiB already allocated; 2.85 GiB free; 17.22 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
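The error message itself suggests setting `max_split_size_mb` via `PYTORCH_CUDA_ALLOC_CONF` to reduce fragmentation (reserved memory here, 17.22 GiB, exceeds allocated, 15.29 GiB). A minimal sketch of how to try that, assuming the variable is set before any CUDA allocation happens; the 512 MiB value is a starting point to experiment with, not a verified fix for this repo:

```python
import os

# Must take effect before torch initializes the CUDA caching allocator,
# so place this at the very top of the entry script (e.g. before importing
# torch in webUI.py), or export it in the shell before launching.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```

If fragmentation is not the cause, reducing the input resolution (e.g. toward the 512x512 that reportedly works on 24 GB) remains the more direct remedy.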