
VRAM memory Leak while using GPEN-BFR-512 model #6

Open
yorkane opened this issue Oct 31, 2024 · 0 comments

Comments


yorkane commented Oct 31, 2024

Easy to reproduce: after 24 images are upscaled, it throws the exception logged below:

ComfyUI Error Report

Error Details

  • Node Type: easy forLoopEnd
  • Exception Type: AttributeError
  • Exception Message: 'NoneType' object has no attribute 'get_tensor_shape'

Stack Trace

  File "/home/aigc/comfyui/execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "/home/aigc/comfyui/execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "/home/aigc/comfyui/custom_nodes/ComfyUI-0246/utils.py", line 353, in new_func
    res_value = old_func(*final_args, **kwargs)

  File "/home/aigc/comfyui/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "/home/aigc/comfyui/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))

  File "/home/aigc/comfyui/custom_nodes/ComfyUI-Facerestore-Tensorrt/__init__.py", line 93, in main
    self.engine.allocate_buffers()

  File "/home/aigc/comfyui/custom_nodes/ComfyUI-Facerestore-Tensorrt/trt_utilities.py", line 229, in allocate_buffers
    shape = self.context.get_tensor_shape(name)

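For reference, the traceback points at self.context being None inside allocate_buffers(). In TensorRT's Python API, engine.create_execution_context() returns None when it cannot allocate device memory, which would explain this crash once leaked VRAM has exhausted the card. Below is a minimal sketch of a defensive check; the allocate_buffers_safely name and the surrounding structure are assumptions for illustration, not the node's actual code.

  import tensorrt as trt

  def allocate_buffers_safely(engine: trt.ICudaEngine):
      # create_execution_context() can return None when there is not enough
      # free VRAM; checking for that turns the cryptic AttributeError above
      # into an actionable out-of-memory error.
      context = engine.create_execution_context()
      if context is None:
          raise RuntimeError("create_execution_context() returned None - likely out of VRAM")
      for i in range(engine.num_io_tensors):
          name = engine.get_tensor_name(i)
          shape = context.get_tensor_shape(name)  # the call that fails in trt_utilities.py
          print(name, shape)
      return context
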
System Information

  • ComfyUI Version: v0.2.6-1-g09fdb2b
  • Arguments: main.py --listen --port 8192 --cuda-device 1
  • OS: posix
  • Python Version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0]
  • Embedded Python: false
  • PyTorch Version: 2.4.1+cu121

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 25386352640
    • VRAM Free: 305111178
    • Torch VRAM Total: 2382364672
    • Torch VRAM Free: 8560778