
If the video is larger than 1M, will a 24G GPU run out of memory? Or is it just me? #21

Open

douhaohaode opened this issue Mar 22, 2024 · 4 comments

@douhaohaode

Traceback (most recent call last):
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\gradio\queueing.py", line 388, in call_prediction
output = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\gradio\route_utils.py", line 219, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\gradio\blocks.py", line 1437, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\gradio\blocks.py", line 1109, in call_function
prediction = await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\anyio_backends_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\anyio_backends_asyncio.py", line 807, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\gradio\utils.py", line 650, in wrapper
response = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\webUI.py", line 159, in process
keypath = process1(*args)
^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\webUI.py", line 280, in process1
latents = inference(global_state.pipe, global_state.controlnet, global_state.frescoProc,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\src\pipe_FRESCO.py", line 201, in inference
noise_pred = pipe.unet(
^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\src\diffusion_hacked.py", line 787, in forward
sample = upsample_block(
^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\src\free_lunch_utils.py", line 346, in forward
hidden_states = attn(
^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\diffusers\models\transformer_2d.py", line 292, in forward
hidden_states = block(
^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\diffusers\models\attention.py", line 155, in forward
attn_output = self.attn1(
^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\venv\Lib\site-packages\diffusers\models\attention_processor.py", line 322, in forward
return self.processor(
^^^^^^^^^^^^^^^
File "D:\python_project\Rerender_A_Video\src\diffusion_hacked.py", line 281, in call
query = F.scaled_dot_product_attention(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 7.66 GiB (GPU 0; 23.99 GiB total capacity; 15.29 GiB already allocated; 2.85 GiB free; 17.22 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
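
One thing the error message itself suggests is tuning the CUDA caching allocator. A minimal sketch, assuming you can edit the top of the launch script (the 512 MiB value is just an illustration, not a value recommended in this thread):

import os

# Must be set before PyTorch makes its first CUDA allocation; it caps the
# size of cached blocks the allocator will split, which can reduce
# fragmentation when reserved memory is much larger than allocated memory.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

import torch  # import torch only after the variable is set

Equivalently, set PYTORCH_CUDA_ALLOC_CONF in the shell before starting webUI.py. This only mitigates fragmentation; it does not reduce the actual activation memory.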

@williamyang1991
Owner

It's probably because your video is too wide.
You could try reducing the 512 value below a bit, e.g. 448 or 384.
Or reduce the batch size.

img = resize_image(frame, 512)
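
A minimal sketch of that change, assuming frame and resize_image are as in the repo's webUI.py (448 is the first value suggested above):

# Shrink the resize target to lower peak VRAM; try 448, then 384 if it still OOMs.
img = resize_image(frame, 448)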

@moosl

moosl commented Mar 24, 2024

Mine doesn't work either.

@williamyang1991
Owner

#28 (comment)

As I wrote in that issue, 512x512 runs fine on a 24G GPU.
If your video is 16:9, the input gets rescaled to 512x896, so the pixel count grows by 1.75×. When the GRAM matrix is computed in the middle of the pipeline, VRAM usage grows by 1.75×1.75 ≈ 3×,
and that is what blows the memory.
The fix is to crop your video to a taller aspect ratio, lower its resolution (rescale to less than 512), or use a smaller batch size.
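
A quick sanity check of the arithmetic above (plain Python, nothing from the repo):

# 16:9 input with the short side scaled to 512:
# long side = 512 * 16 / 9 ≈ 910, which per the comment above ends up at 896.
base_pixels = 512 * 512
wide_pixels = 512 * 896

pixel_ratio = wide_pixels / base_pixels  # 1.75x more pixels
gram_ratio = pixel_ratio ** 2            # GRAM matrix memory scales with the
                                         # square of the token count: ~3.06x
print(pixel_ratio, gram_ratio)           # 1.75 3.0625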

@douhaohaode
Author

As I wrote in that issue, 512x512 runs fine on a 24G GPU.
If your video is 16:9, the input gets rescaled to 512x896, so the pixel count grows by 1.75×. When the GRAM matrix is computed in the middle of the pipeline, VRAM usage grows by 1.75×1.75 ≈ 3×,
and that is what blows the memory.
The fix is to crop your video to a taller aspect ratio, lower its resolution (rescale to less than 512), or use a smaller batch size.

OK 🫡
