-
I will give you a more detailed answer later, but for now my best guesses are:
-
I have not yet found the right settings but I'm getting closer. The idea is to rewind a few steps back with the base model itself after the upscale, since using two different models for that pass just never works. In general, rewinding with the refiner over the base model's output never gives anything good, at least in my experience.

The Perlin-based noise is still a little too strong and, with SDXL, gives me variable results depending on the subject. Kentucky houses are always crumbling down, probably because of the Perlin noise: swapping to regular noise after the upscale makes things smoother and definitely not as broken. I intend to rework the way I generate this noise.

My Comfy carbonara (you will need this small set of nodes that I used as sliders). The last image in these examples contains the most advanced version of the workflow.

With: Without:

A nicer attempt with lower persistence:

Some bottled galaxies. With: Without: With: Without:

Depending on the context, the extra intensity coming from the noise can be a nice source of details, yet it can also be destructive for photorealistic images.

Edit: omg indeed (40 steps in total):
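For anyone wondering what "Perlin-based" noise for a latent can look like in practice, here is a minimal generic sketch (not the actual node's code; all names are illustrative): it sums a few octaves of coarse Gaussian noise upsampled to the latent resolution, with `persistence` controlling how quickly the finer octaves fade, which is the knob the "lower persistence" attempt above is referring to.

```python
import torch
import torch.nn.functional as F

def fractal_latent_noise(shape, octaves=5, persistence=0.5, seed=0):
    """Perlin-like fractal noise for a latent of shape (B, C, H, W).

    Each octave is low-resolution Gaussian noise upsampled to the full
    latent size; `persistence` scales down the finer octaves. This is a
    generic sketch, not the code of the node discussed in this thread.
    """
    b, c, h, w = shape
    g = torch.Generator().manual_seed(seed)
    noise = torch.zeros(shape)
    amplitude, total = 1.0, 0.0
    for octave in range(octaves):
        # grid resolution doubles every octave, starting very coarse
        gh = max(h // (2 ** (octaves - 1 - octave)), 2)
        gw = max(w // (2 ** (octaves - 1 - octave)), 2)
        coarse = torch.randn(b, c, gh, gw, generator=g)
        layer = F.interpolate(coarse, size=(h, w), mode="bilinear", align_corners=False)
        noise += amplitude * layer
        total += amplitude
        amplitude *= persistence
    noise /= total
    # normalize to unit variance so it can stand in for regular sampler noise
    return noise / noise.std()

# example: noise for a 1024x1024 SDXL image (the latent is 1/8 of the pixel size)
latent_noise = fractal_latent_noise((1, 4, 128, 128), octaves=5, persistence=0.5)
```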
-
The best result so far comes from rewinding 10 steps with the base model before using the refiner (35 steps in total):
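To make the step arithmetic concrete, here is a rough sketch of how that kind of rewind could be laid out as step windows (in the spirit of ComfyUI's advanced KSampler start/end steps; the refiner split and all numbers below are illustrative assumptions, not the poster's exact settings):

```python
# Hypothetical step windows for "rewind 10 steps with the base model before
# the refiner" on a 35-step schedule. The refiner hand-off point is an assumption.
TOTAL_STEPS = 35
REWIND = 10
REFINER_START = 30  # assumed hand-off point to the refiner

base_first_pass = dict(start_at_step=0, end_at_step=REFINER_START,
                       return_with_leftover_noise=True)

# after the upscale / noise injection: resume with the BASE model, but
# rewound by REWIND steps so it can repair what the upscale broke
base_rewind_pass = dict(start_at_step=REFINER_START - REWIND,  # 20 -> 30
                        end_at_step=REFINER_START,
                        return_with_leftover_noise=True)

refiner_pass = dict(start_at_step=REFINER_START, end_at_step=TOTAL_STEPS,
                    return_with_leftover_noise=False)
```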
-
@ntdviet @city96 I rewrote the Perlin generator and it is now 100% stable and a lot easier to use! A little git pull or re-download of the node should get you rollin' :D
-
I think I got it this time. Thanks to @Extraltodeus and @city96 for your great updates.
-
But standard noise does not have any pattern, which defeats the purpose IMO. Using Euler (not the ancestral variant) with blurrier settings makes it more obvious. I also decided to use Euler since it's the only sampler I have some small comprehension of, and because it is not an ancestral sampler, so it relies solely on the injected noise rather than injecting its own.

It is also important to note that the base model seems a lot worse at handling the entire workflow. I don't know why, but I get a lot more upscaling artifacts and overall blurrier images than if I use a custom averaged merged model.

(Using bad settings to make things obvious.) With Perlin at upscale: Without: With: Without:

As you can see, it becomes more "smudged" with the usual noise, since it does not match what was used initially.

The nodes that I used and you might not have are here:

- Output latent size / batch size / print latent (also outputs the noise distribution): https://github.com/Extraltodeus/CustomComfyUINodes/blob/main/print_latent.py
- Tiled VAE decode with some more sliders to make it faster (overlap at 8 and tile size at 128; make it 64 if you don't have enough memory, and see the sketch below for the tiling idea): https://github.com/Extraltodeus/CustomComfyUINodes/blob/main/VAEdecodeTiledOptions.py
- The latent-by-ratio node that I am using: https://github.com/Extraltodeus/CustomComfyUINodes/blob/main/LatentByRatio_not_mine.py
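Since the tiled decode settings come up a lot, here is a rough, self-contained sketch of what "tile size 128, overlap 8" amounts to: split the latent into overlapping tiles, decode each one, and average the overlapping regions. This is not the linked node's code; `decode_fn` is a stand-in for whatever VAE decode call your setup uses, and the output layout (B, 3, H*8, W*8) is an assumption.

```python
import torch

def decode_tiled(latent, decode_fn, tile=128, overlap=8, scale=8):
    """Decode a latent (B, C, H, W) in overlapping tiles and blend the seams.

    decode_fn maps a latent tile to an image tensor (B, 3, h*scale, w*scale).
    tile/overlap are in latent pixels; the defaults mirror the settings above.
    Generic sketch, not the linked node's implementation.
    """
    b, c, h, w = latent.shape
    out = torch.zeros(b, 3, h * scale, w * scale)
    weight = torch.zeros_like(out)
    stride = max(tile - overlap, 1)
    ys = list(range(0, max(h - tile, 0) + 1, stride))
    xs = list(range(0, max(w - tile, 0) + 1, stride))
    # make sure the last row/column of tiles reaches the image border
    if ys[-1] != max(h - tile, 0):
        ys.append(max(h - tile, 0))
    if xs[-1] != max(w - tile, 0):
        xs.append(max(w - tile, 0))
    for y in ys:
        for x in xs:
            img = decode_fn(latent[:, :, y:y + tile, x:x + tile])
            oy, ox = y * scale, x * scale
            out[:, :, oy:oy + img.shape[2], ox:ox + img.shape[3]] += img
            weight[:, :, oy:oy + img.shape[2], ox:ox + img.shape[3]] += 1.0
    # pixels covered by several tiles were accumulated, so average them out
    return out / weight.clamp(min=1.0)
```

The memory savings come from only ever holding one decoded tile at a time; smaller tiles (e.g. 64) use less VRAM at the cost of more seams to blend, which matches the advice above.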
-
I tried to import it but I was missing multiple nodes, so I couldn't. I did check your workflow overall despite not being able to try it.
I mean, you're not wrong. Reinjecting the same noise pattern can also create artifacts, which in turn become weird details depending on the subject. This was more noticeable on the older/bad version I had made, as there would be garlands or flying feathers all over the place. I have also seen somewhere that using multiple seeds can indeed help enhance the image. So it's great that you were able to use the whole process in a better and smarter way! :)
May I ask what you do that requires Stable Diffusion? I'm currently looking for a source of income, and being able to monetize what I can do would be really handy.
-
Sorry, I didn't mention it: it's basically your previous workflow, just with the primitive from the custom scripts https://github.com/pythongosssss/ComfyUI-Custom-Scripts
Your latest version already gets the artifacts brushed out pretty nicely, so this is more of an artistic choice about what you want after a certain sampler pass. Introducing new noise for a resampling of 20-30% of the steps can also help fix errors (rough sketch at the end of this comment), so it all depends on the subject. Changing the choice of model and sampler can push the result to a certain degree. Example: Euler is kind of a soft brush, whereas dpmpp_2m_sde is more like a hard pencil when you need more definition.
Oh, I'm doing side gigs for a photo studio and some fashion designers. Not a big deal yet :) There are plenty of ideas for what else to do with Stable Diffusion, but none will earn serious money right away. (Edit: it's horrible to write something while a meeting is going on... so many typos and poorly written sentences :))
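Here is the resampling idea above as a minimal sketch, using the convention that the noise added when resuming at step i is scaled by sigma[i]; `denoise_fn` and the schedule are placeholders for whatever sampler and scheduler you actually use, not something taken from the workflows in this thread.

```python
import torch

def resample_tail(latent, denoise_fn, sigmas, resample_fraction=0.25, seed=0):
    """Re-noise a finished latent and redo the last `resample_fraction` of steps.

    sigmas: decreasing noise schedule of length steps + 1 (last entry 0).
    denoise_fn(latent, sigmas) runs your sampler over the given sigma segment
    and returns the denoised latent (a stand-in, not a real ComfyUI call).
    """
    steps = len(sigmas) - 1
    resume = int(steps * (1.0 - resample_fraction))   # e.g. 20 steps -> resume at 15
    g = torch.Generator().manual_seed(seed)
    fresh = torch.randn(latent.shape, generator=g)
    # adding noise scaled by sigma[resume] puts the latent back on the
    # schedule at that step, so the sampler can "repaint" the remaining 25%
    noised = latent + fresh * sigmas[resume]
    return denoise_fn(noised, sigmas[resume:])
```

A fresh seed here is what gives the sampler new information to fix errors with, instead of reinforcing the pattern that produced them.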
-
I am putting most of this into one node for the fun of it, but it's a disgusting Python mess lol. As for refiner coordination, I have posted a node on Civitai that does the job: https://civitai.com/models/121394 I will check your workflow later! :D
-
This is from the discussions here: #853 (reply in thread)
and here: city96/SD-Advanced-Noise#1
...if @Extraltodeus and @city96 want to continue
Guys, I hope you have something. I give up; I couldn't make it work for the SDXL Base+Refiner flow.
The latent upscaler is okay-ish for XL, but in conjunction with Perlin noise injection, the artifacts coming from upscaling get reinforced so much that the second sampler needs a lot of denoising for a clean image, about 50-60%. And that throws the image's composition around. I tried without leftover noise, with leftover noise, with noise injection before upscaling, and after upscaling. I can't find the sweet spot where noise injection strength and the denoise schedule work together.
Samples:
Too much denoising:
About the right denoise amount, but too much noise injection, so the white and black spots appear:
One step back for the sigma calculation, and already not enough noise is injected (see the sigma sketch after the samples):
And how the hell can this happen? Not enough noise for everything else, but the face is ultra detailed:
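On the "one step back is already not enough noise" point: the sigma schedule collapses towards zero at the end, so stepping back a single step injects almost nothing. A quick way to see it, as a minimal sketch using the Karras schedule formula (the sigma_min/sigma_max values are the commonly quoted SD/SDXL ones and are an assumption here, as is the step count):

```python
import torch

def karras_sigmas(steps, sigma_min=0.0292, sigma_max=14.6146, rho=7.0):
    """Karras et al. (2022) noise schedule; returns steps + 1 values ending in 0."""
    ramp = torch.linspace(0, 1, steps)
    min_inv, max_inv = sigma_min ** (1 / rho), sigma_max ** (1 / rho)
    sigmas = (max_inv + ramp * (min_inv - max_inv)) ** rho
    return torch.cat([sigmas, torch.zeros(1)])

sigmas = karras_sigmas(35)
for back in (1, 5, 10):
    sigma = sigmas[-1 - back].item()
    print(f"{back:>2} step(s) back -> injected noise scaled by sigma ≈ {sigma:.4f}")
```

With a schedule like this, one step back leaves you near sigma_min (barely visible noise), while rewinding on the order of 10 steps is needed before the injected noise has any real strength, which lines up with the rewind amounts that worked earlier in the thread.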