
Quality degradation after multi-inferences #54

Open
973398769 opened this issue Nov 18, 2024 · 14 comments

Comments

@973398769

This is my current setup: helper.set_params(cache_interval=3, cache_branch_id=0). When processing a large number of images, I've noticed some quality degradation, but I'm not sure whether it's caused by DeepCache.

@973398769
Author

973398769 commented Nov 18, 2024

I would like to understand when I should enable it and when I should disable it. Do I need to disable it after every inference and re-enable it before starting a new one? Thanks a lot!

@elismasilva

> I would like to understand when I should enable it and when I should disable it. Do I need to disable it after every inference and re-enable it before starting a new one? Thanks a lot!

You only need to disable it and re-apply it for the next inference if you change the params. I ran into this bug too and solved it by doing that.

@973398769
Author

973398769 commented Nov 19, 2024 via email

@elismasilva

> I would like to understand when I should enable it and when I should disable it. Do I need to disable it after every inference and re-enable it before starting a new one? Thanks a lot!

Sorry, I did a test just now: you always need to disable it and enable it again for each inference, even if the params haven't changed.

@973398769
Author

Could you kindly assist me in understanding how to reproduce this issue? Would it be possible for you to share some sample code that could help in reproducing it?

> I would like to understand when I should enable it and when I should disable it. Do I need to disable it after every inference and re-enable it before starting a new one? Thanks a lot!

> Sorry, I did a test just now: you always need to disable it and enable it again for each inference, even if the params haven't changed.

@973398769
Author

I may only encounter the problem when testing with thousands of images. Thank you very much for your help

@elismasilva

elismasilva commented Nov 20, 2024

@973398769
just put helper.disable() before the other calls.

helper.disable()
helper.set_params(cache_interval=3, cache_branch_id=0)
helper.enable()

This needs to come before you call pipe().
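
Put together in a loop, the pattern would look roughly like this (a sketch only; prompts and generator are placeholders, and pipe/helper are the objects created in the snippets above):

# Sketch: re-arm DeepCache before every call when generating many images.
# Assumes pipe and helper were created as in the earlier snippets; prompts/generator are placeholders.
for prompt in prompts:
    helper.disable()
    helper.set_params(cache_interval=3, cache_branch_id=0)
    helper.enable()
    image = pipe(prompt, generator=generator, output_type='pil').images[0]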

@973398769
Author

I would like to reproduce the issue of quality degradation, but I only detect this problem locally when testing with thousands of images. Could you please tell me how to make this issue appear more quickly? Thank you.

@973398769
Author

973398769 commented Nov 20, 2024

This is my test pipeline:

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
).to("cuda:0")
helper = DeepCacheSDHelper(pipe=pipe)
helper.set_params(
    cache_interval=3,
    cache_branch_id=0,
)
deepcache_image = pipe(
    prompt,
    generator=generator,
    output_type='pil'
).images[0]

@elismasilva

> This is my test pipeline: (see the code block in the previous comment)

I don't know if we're talking about the same problem. I'm working with SDXL, but I reuse the pipeline without deleting the variable or restarting the script, because I'm running inside a Gradio interface. I don't know how you're doing it: whether you run this code inside a loop and reuse the same pipeline variable, or destroy the variable each time. For me, the first inference generates the image normally; if I then try to generate again without disabling the helper first, it already produces a deteriorated image. Like this:

[attached image: 2024_11_19_23_08_06_554319_6_1]
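
A stripped-down sketch of that back-to-back scenario (placeholder prompt, arbitrary seed, same pipeline object reused, helper enabled once and never disabled in between) would be something like:

# Sketch of the scenario above: helper enabled once, two consecutive calls on the same pipeline.
helper.set_params(cache_interval=3, cache_branch_id=0)
helper.enable()

first = pipe(prompt, generator=torch.Generator("cuda").manual_seed(0)).images[0]   # looks normal
second = pipe(prompt, generator=torch.Generator("cuda").manual_seed(0)).images[0]  # per the report above, comes out deteriorated unless helper.disable()/enable() is run in between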

@973398769
Author

Are you using generate_pipe.enable_model_cpu_offload() in your inference? Enabling this could cause the issue as detailed here: https://www.kaggle.com/code/ledrose/deepcache-cpu-offload-bug
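
For reference, the configuration that notebook looks at is roughly the following (a sketch, not the notebook's exact code; the prompt is a placeholder):

# Sketch of the CPU-offload setup the linked notebook investigates (not its exact code).
import torch
from diffusers import StableDiffusionPipeline
from DeepCache import DeepCacheSDHelper

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
)
pipe.enable_model_cpu_offload()  # moves submodules to the GPU only while they run

helper = DeepCacheSDHelper(pipe=pipe)
helper.set_params(cache_interval=3, cache_branch_id=0)
helper.enable()
image = pipe("a photo of an astronaut").images[0]  # placeholder prompt; per the link above, this combination can trigger the degradation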

@973398769
Author

We seem to be discussing the same issue. Although I haven't used enable_model_cpu_offload(), I still see a drop in quality when running inference on a large number of images, but it's hard to reproduce consistently.

@elismasilva

> We seem to be discussing the same issue. Although I haven't used enable_model_cpu_offload(), I still see a drop in quality when running inference on a large number of images, but it's hard to reproduce consistently.

Yes, I use CPU offload, so we need to disable it and enable it again inside the loop.

@elismasilva

> We seem to be discussing the same issue. Although I haven't used enable_model_cpu_offload(), I still see a drop in quality when running inference on a large number of images, but it's hard to reproduce consistently.

How are you generating the images? Are you using a loop, or setting the number of images in the pipeline call?
