BNK Unsampler exceeds allowed memory (out of memory) #21

Open
jamesmgg opened this issue Jan 15, 2024 · 3 comments

Comments

@jamesmgg

I'm getting this error when loading 150 frames... seems kind of excessive. Any ideas on how to fix this?

Error occurred when executing BNK_Unsampler:

Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 9.49 GiB
Requested : 9.89 GiB
Device limit : 23.99 GiB
Free (according to CUDA): 0 bytes
PyTorch limit (set by user-supplied memory fraction): 17179869184.00 GiB

File "/stable-diffusion/execution.py", line 154, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/stable-diffusion/execution.py", line 84, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/stable-diffusion/execution.py", line 77, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/stable-diffusion/custom_nodes/ComfyUI_Noise/nodes.py", line 236, in unsampler
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, force_full_denoise=False, denoise_mask=noise_mask, sigmas=sigmas, start_step=0, last_step=end_at_step, callback=callback)
File "/stable-diffusion/comfy/samplers.py", line 716, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/stable-diffusion/comfy/samplers.py", line 622, in sample
samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "/stable-diffusion/comfy/samplers.py", line 561, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/stable-diffusion/comfy/k_diffusion/sampling.py", line 580, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/stable-diffusion/comfy/samplers.py", line 285, in forward
out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/stable-diffusion/comfy/samplers.py", line 275, in forward
return self.apply_model(*args, **kwargs)
File "/stable-diffusion/comfy/samplers.py", line 272, in apply_model
out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
File "/stable-diffusion/comfy/samplers.py", line 252, in sampling_function
cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
File "/stable-diffusion/comfy/samplers.py", line 226, in calc_cond_uncond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
File "/stable-diffusion/comfy/model_base.py", line 85, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/stable-diffusion/custom_nodes/SeargeSDXL/modules/custom_sdxl_ksampler.py", line 70, in new_unet_forward
x0 = old_unet_forward(self, x, timesteps, context, y, control, transformer_options, **kwargs)
File "/stable-diffusion/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 854, in forward
h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
File "/stable-diffusion/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 46, in forward_timestep_embed
x = layer(x, context, transformer_options)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/stable-diffusion/comfy/ldm/modules/attention.py", line 604, in forward
x = block(x, context=context[i], transformer_options=transformer_options)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/stable-diffusion/comfy/ldm/modules/attention.py", line 431, in forward
return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
File "/stable-diffusion/comfy/ldm/modules/diffusionmodules/util.py", line 189, in checkpoint
return func(*inputs)
File "/stable-diffusion/comfy/ldm/modules/attention.py", line 541, in _forward
x = self.ff(self.norm3(x))
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/stable-diffusion/comfy/ldm/modules/attention.py", line 85, in forward
return self.net(x)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/container.py", line 215, in forward
input = module(input)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/stable-diffusion/comfy/ldm/modules/attention.py", line 64, in forward
x, gate = self.proj(x).chunk(2, dim=-1)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/stable-diffusion/comfy/ops.py", line 28, in forward
return super().forward(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
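
One thing that may be worth trying, though it has not been confirmed on this workflow: unsample the 150 frames in smaller chunks rather than as one batch, so the UNet never has to hold activations for every frame at once. A rough Python sketch, where unsample_chunk is a hypothetical stand-in for whatever invokes BNK_Unsampler on a single latent dict (ComfyUI latents are dicts holding a "samples" tensor of shape [frames, channels, height, width]):

import torch

def unsample_in_chunks(latent, unsample_chunk, chunk_size=16):
    # latent: ComfyUI-style dict {"samples": tensor of shape [frames, C, H, W]}
    # unsample_chunk: hypothetical callable that runs the unsampler on one chunk
    samples = latent["samples"]
    outputs = []
    for start in range(0, samples.shape[0], chunk_size):
        chunk = {"samples": samples[start:start + chunk_size]}
        outputs.append(unsample_chunk(chunk)["samples"])
        torch.cuda.empty_cache()  # release the allocator's cache between chunks
    return {"samples": torch.cat(outputs, dim=0)}

Note that if the workflow relies on cross-frame attention (e.g. AnimateDiff), unsampling frame groups independently will change the result, so treat this as a memory experiment rather than a drop-in fix.
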
@yukselyusuf

same issue

@orionfro

orionfro commented Jan 30, 2024

Same issue on the first run, then it works on the second re-run.
Console output: [rgthree] Using rgthree's optimized recursive execution.
Maybe a problem with low-VRAM mode?
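
To check whether the first-run failure comes from memory still held by a previous workflow, the figures in the error can be cross-checked from the Python session running ComfyUI using standard PyTorch memory calls; a minimal sketch (device 0 is the index reported in the error):

import torch

dev = torch.device("cuda:0")  # "device 0" from the error message

allocated = torch.cuda.memory_allocated(dev)  # roughly the "Currently allocated" figure
reserved = torch.cuda.memory_reserved(dev)    # allocator cache on top of live tensors
free, total = torch.cuda.mem_get_info(dev)    # what the driver reports ("Free (according to CUDA)")

print(f"allocated {allocated / 2**30:.2f} GiB, reserved {reserved / 2**30:.2f} GiB")
print(f"free {free / 2**30:.2f} GiB of {total / 2**30:.2f} GiB")

torch.cuda.empty_cache()  # return cached blocks to the driver before retrying
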

@alanhuang67

same here...
