[Feature Request]: Implement X-Adapter #290
Comments
@lllyasviel What is the best way to run 2 UNets side by side now? It seems the core logic of X-Adapter is mapping SD1.5 hidden states to SDXL hidden states (adding them to the original SDXL hidden states) in the decoder part of the UNet.
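For anyone wondering what that looks like mechanically, here is a rough PyTorch sketch of the idea (the `sd15_unet` / `sdxl_unet` / `mapper_layers` names, the use of `up_blocks`, and the hook points are assumptions for illustration, not X-Adapter's actual code):

```python
import torch

@torch.no_grad()
def dual_unet_step(sd15_unet, sdxl_unet, mapper_layers,
                   latents_sd15, latents_sdxl, t, cond_sd15, cond_sdxl):
    # 1. Run the SD1.5 UNet and capture its decoder (up-block) hidden states.
    sd15_hidden = []
    capture_hooks = [
        block.register_forward_hook(lambda mod, inp, out: sd15_hidden.append(out))
        for block in sd15_unet.up_blocks
    ]
    sd15_unet(latents_sd15, t, encoder_hidden_states=cond_sd15)
    for h in capture_hooks:
        h.remove()

    # 2. Map each captured state into the SDXL feature space and add it to the
    #    output of the corresponding SDXL decoder block ("add to original
    #    SDXL hidden states").
    def make_adder(mapped):
        return lambda mod, inp, out: out + mapped

    add_hooks = [
        block.register_forward_hook(make_adder(mapper(h15)))
        for block, mapper, h15 in zip(sdxl_unet.up_blocks, mapper_layers, sd15_hidden)
    ]
    noise_pred = sdxl_unet(latents_sdxl, t, encoder_hidden_states=cond_sdxl)
    for h in add_hooks:
        h.remove()
    return noise_pred
```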
Awesome, following this topic.
@huchenlei Do they absolutely have to run side by side, or can one be loaded and unloaded, then the other? I ask because it doesn't run as-is on my 8 GB of VRAM: it OOM'ed during the second set of generation iterations. From the description/tutorial by the author I was guessing it first generates using SD1.5 and then using SDXL, but I could be wrong. I was able to make it work by changing every CUDA reference to CPU, just for testing, at a very slow ~30 minutes per image. Using
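A minimal sketch of the load-one-then-the-other pattern I had in mind, assuming two diffusers-style pipelines (`sd15_pipe` / `sdxl_pipe` are placeholders; this is not Forge's memory-management code):

```python
import gc
import torch

def run_with_offload(sd15_pipe, sdxl_pipe, prompt):
    """Keep only one pipeline on the GPU at a time to fit in limited VRAM."""
    # First pass on SD1.5 while SDXL stays on the CPU.
    sd15_pipe.to("cuda")
    low_res = sd15_pipe(prompt).images[0]
    sd15_pipe.to("cpu")
    gc.collect()
    torch.cuda.empty_cache()

    # Second pass on SDXL now that SD1.5 has been offloaded.
    sdxl_pipe.to("cuda")
    result = sdxl_pipe(prompt).images[0]
    sdxl_pipe.to("cpu")
    gc.collect()
    torch.cuda.empty_cache()
    return low_res, result
```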
I need this... right now.
+1
According to my testing, the X-Adapter result is similar to running hires fix with the SD1.5 model doing the low-res pass and the SDXL model doing the high-res pass.
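For comparison, that two-pass behaviour can be approximated with plain diffusers pipelines (the model IDs, resolutions, prompt, and strength value below are only illustrative; no X-Adapter is involved):

```python
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionXLImg2ImgPipeline

prompt = "a castle on a hill, highly detailed"  # example prompt

# Low-res pass with SD1.5.
sd15 = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
low_res = sd15(prompt, height=512, width=512).images[0]

# High-res pass: SDXL refines the upscaled SD1.5 output, much like hires fix.
sdxl = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
high_res = sdxl(prompt, image=low_res.resize((1024, 1024)), strength=0.5).images[0]
high_res.save("two_pass_result.png")
```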
There is also SD-Latent-Interposer as an alternative.
Is there an existing issue for this?
What would your feature do?
https://github.com/showlab/X-Adapter
Code for X-Adapter is finally out.
X-Adapter enables plugins pretrained on an old version (e.g., SD1.5) to work directly with the upgraded model (e.g., SDXL) without further retraining.
Is it possible to implement it into Forge?
Proposed workflow
Additional information
No response