Fix overaggressive caching for latents with the same pixel density but different dimensions, minor fix for headless mode #251
base: main
Conversation
hey sure and thanks, will check and update
Note that the caching completely ignores changes to the input image feeding the sampler. I will check whether this does the trick and let you know if it resolves the actual problem.
doesnt_calculate_-_2024-10-18_09-36-03.mp4

As an example of what I am experiencing: switching your KSampler out for a normal one makes it work again.
I patched the file in, but it still refuses to flush when a new image from the webcam is processed. You can see in my video that the images ARE unique and new, and the loader processes them fine -- but when they hit the KSampler, locked to a seed, nothing updates. Using a normal KSampler from ComfyUI resolves the issue, but I would like to stick with Efficiency nodes.
Hi Amorano - my patch only fixes the issue where latents of equal pixel density are passed in via the latent_image pipe to the KSampler. In your case the images are feeding through via the lineart controlnet and are then used to modify the positive and negative conditionings as part of the loader (see lines 179-187 in efficiency_nodes.py). So this is a different issue, but I agree it should work.

From looking at the code, both the positive and negative conditioning are hashed into the cache (see lines 520 and 521), but the problem is that only the tensor itself is hashed, not the accompanying conditioning dict, which is where the controlnet and its hint image are attached. To show this, I am using Asterr to print the value of the first tensor to the console while changing controlnet inputs: the printed tensor stays identical even as the controlnet image changes.

I don't know a lot about controlnet, so I'm not sure how you would robustly extract a hash of the input image from the object to insert into the Efficiency cache key. It may be that the sampler should take another input where a user can insert their own caching key?

In the meantime, I can give you a workaround: end_at_step is part of the cache key, but as you can see in your workflow it is set much higher than the sampler will ever reach (10,000 vs 6), so it isn't really used for anything. Convert your end_at_step to an input and connect a primitive set to e.g. autoincrement from 100 (or, if you have the nodes handy, some random number plus 100 or so), and that should ensure you always get a cache miss on trigger.

I hope this is of some use, and it may be worth opening another ticket to discuss the issue of controlnet-modified conditioning being cached too heavily.
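To make the collision concrete, here is a minimal sketch, not the Efficiency Nodes code itself: the hash helper and the "control_hint" key are illustrative stand-ins, but the list-of-[tensor, dict] structure matches ComfyUI's conditioning format, and the controlnet lives in the dict rather than the tensor.

```python
import hashlib
import torch

def tensor_only_hash(conditioning):
    # Hypothetical simplification of the cache-key logic described above:
    # only the raw bytes of the first conditioning tensor are hashed.
    tensor = conditioning[0][0]
    return hashlib.sha256(tensor.cpu().numpy().tobytes()).hexdigest()

# ComfyUI conditioning is a list of [tensor, options_dict] pairs; applying
# a controlnet modifies the options dict, leaving the tensor untouched.
cond_tensor = torch.zeros(1, 77, 768)
cond_a = [[cond_tensor, {"control_hint": "webcam_frame_1"}]]
cond_b = [[cond_tensor, {"control_hint": "webcam_frame_2"}]]

# Both keys collide: the controlnet hint never reaches the hash, so a
# new webcam frame still looks like a cache hit to the sampler.
assert tensor_only_hash(cond_a) == tensor_only_hash(cond_b)
```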
Currently TSC_KSampler.sample caches sampler results using a variety of input keys, including a hash of the input latent image. However, the key does not include the latent's shape. Back-to-back generations with empty latents of opposite portrait/landscape shapes but equal pixel counts result in a false-positive cache hit if all other parameters are held the same.
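For context, a minimal sketch of the collision (the hash helpers are hypothetical stand-ins for the node's key construction, not its actual code):

```python
import hashlib
import torch

def latent_hash(latent):
    # Hashing only the raw bytes loses the shape: any two all-zero
    # latents with the same element count yield the same digest.
    return hashlib.sha256(latent.cpu().numpy().tobytes()).hexdigest()

portrait = torch.zeros(1, 4, 96, 64)   # e.g. 512x768 image -> (1, 4, 96, 64)
landscape = torch.zeros(1, 4, 64, 96)  # e.g. 768x512 image -> (1, 4, 64, 96)

assert latent_hash(portrait) == latent_hash(landscape)  # false-positive hit

def latent_hash_with_shape(latent):
    # Folding the shape into the key disambiguates the two latents.
    h = hashlib.sha256(str(tuple(latent.shape)).encode())
    h.update(latent.cpu().numpy().tobytes())
    return h.hexdigest()

assert latent_hash_with_shape(portrait) != latent_hash_with_shape(landscape)
```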
Additionally, a guard has been added around calls to globals_cleanup() to ensure a prompt object exists: when using the node headlessly via pydn/ComfyUI-to-Python-Extension, calls to globals_cleanup() crash the server. My apologies for bundling these two fixes.
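A minimal sketch of the guard, assuming the surrounding names match the real node code in spirit (the stub and wrapper below are illustrative, not the patch itself):

```python
def globals_cleanup(prompt):
    # Stand-in for the node's real cleanup routine, which assumes
    # `prompt` is a populated dict and indexes into it.
    prompt.clear()

def safe_globals_cleanup(prompt):
    # Guard described above: when running headlessly there is no
    # prompt object, so skip cleanup instead of crashing the server.
    if prompt is not None:
        globals_cleanup(prompt)

safe_globals_cleanup(None)        # headless path: no-op instead of a crash
safe_globals_cleanup({"k": "v"})  # normal server path still cleans up
```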