
[Feature Request]: Implement X-Adapter #290

Open
1 task done
sashasubbbb opened this issue Feb 17, 2024 · 7 comments
Labels
enhancement New feature or request

Comments

@sashasubbbb

sashasubbbb commented Feb 17, 2024

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What would your feature do ?

https://github.com/showlab/X-Adapter
Code for X-Adapter is finally out.

X-Adapter enables plugins pretrained on an older base model (e.g., SD1.5) to work directly with an upgraded model (e.g., SDXL) without further retraining.

Is it possible to implement it into Forge?

Proposed workflow

  1. Use 1.5 Loras on SDXL models.

Additional information

No response

@sashasubbbb sashasubbbb added the enhancement New feature or request label Feb 17, 2024
@huchenlei huchenlei self-assigned this Feb 18, 2024
@huchenlei
Contributor

@lllyasviel What is the best way to run two UNets side by side now? The core logic of X-Adapter appears to be mapping SD1.5 hidden states to SDXL hidden states (adding them to the original SDXL hidden states) in the decoder part of the UNet.

https://github.com/showlab/X-Adapter/blob/d5460d3baaa3e995c18dbf6680a843c8a3a9b3f9/pipeline/pipeline_sd_xl_adapter_controlnet_img2img.py#L1088-L1227
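The fusion step described above can be sketched roughly as follows. This is a minimal NumPy illustration of the idea (project the SD1.5 hidden state into SDXL's feature space with a learned mapping, then add it to the SDXL hidden state), not the actual X-Adapter code; `fuse_hidden_states`, the tensor shapes, and the random `mapper_w` are all hypothetical stand-ins.

```python
import numpy as np

def fuse_hidden_states(sdxl_h, sd15_h, mapper_w, scale=1.0):
    # Project the SD1.5 decoder hidden state into SDXL's feature space,
    # then add it to the original SDXL hidden state.
    mapped = sd15_h @ mapper_w
    return sdxl_h + scale * mapped

rng = np.random.default_rng(0)
sdxl_h = rng.standard_normal((4, 1280))      # stand-in SDXL decoder hidden state
sd15_h = rng.standard_normal((4, 640))       # stand-in SD1.5 decoder hidden state
w = rng.standard_normal((640, 1280)) * 0.01  # stand-in for the learned mapping layer
fused = fuse_hidden_states(sdxl_h, sd15_h, w)
assert fused.shape == sdxl_h.shape
```

With `scale=0.0` the SDXL hidden state passes through unchanged, which is why additive fusion like this can be blended in or out per timestep.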

@FurkanGozukara

Awesome, following this topic.

@strawberrymelonpanda

strawberrymelonpanda commented Feb 20, 2024

@huchenlei Do they absolutely have to run side by side, or can one be loaded and unloaded, then the other?

I ask because it doesn't run as-is on my 8 GB of VRAM; it OOM'ed during the second set of generation iterations. Based on the author's description/tutorial, my guess is that it first generates using SD1.5 and then using SDXL, but I could be wrong.

I was able to make it work by changing every CUDA reference to CPU, just for testing, at a very slow ~30 minutes per image. Using inference.py --plugin_type "lora" with --adapter_guidance_start_list 0.7 and an "old pencil sketch" style LoRA, I get a decent effect.
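The manual "change every CUDA reference to CPU" edit amounts to a one-time device fallback at startup. A minimal sketch of that idea (the `resolve_device` helper is hypothetical, not part of X-Adapter or Forge):

```python
def resolve_device(requested: str, cuda_available: bool) -> str:
    # Fall back to CPU when CUDA is unavailable (or lacks VRAM),
    # instead of hard-coding "cuda" throughout the script.
    if requested.startswith("cuda") and not cuda_available:
        return "cpu"
    return requested

print(resolve_device("cuda", cuda_available=False))  # cpu
```

In a real script the `cuda_available` flag would come from something like `torch.cuda.is_available()`, and every `.to("cuda")` call would use the resolved device instead.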

[image: sample output using the "old pencil sketch" LoRA]

@huchenlei huchenlei removed their assignment Feb 28, 2024
@Gushousekai195

I need this... right now

@shitianfang

> I need this... right now

+1

@huchenlei
Contributor

According to my testing, the X-Adapter result is similar to running HR fix with the SD1.5 model doing the low-res pass and the SDXL model doing the high-res pass.

See Mikubill/sd-webui-controlnet#2652 (comment).
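The HR-fix-style workflow being compared to can be sketched as a simple two-pass pipeline: one model produces a low-res image, which is upscaled and then refined by a second model. The two callables below are hypothetical stand-ins for the SD1.5 and SDXL samplers, not real Forge APIs, and the nearest-neighbor upscale is a naive placeholder.

```python
import numpy as np

def two_pass_hires(low_res_pass, high_res_pass, upscale=2):
    # Pass 1: generate at low resolution (e.g. 512x512 with SD1.5).
    img = low_res_pass()
    # Upscale (naive nearest-neighbor; real HR fix uses a proper upscaler).
    img = np.repeat(np.repeat(img, upscale, axis=0), upscale, axis=1)
    # Pass 2: refine at full resolution (e.g. 1024x1024 with SDXL).
    return high_res_pass(img)

# Dummy passes just to exercise the shape handling.
out = two_pass_hires(lambda: np.zeros((512, 512, 3)),
                     lambda img: img)
assert out.shape == (1024, 1024, 3)
```

The comparison huchenlei draws is that X-Adapter's output quality, per the linked test, is roughly what this much simpler handoff already achieves.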

@metapea

metapea commented May 25, 2024

There is also SD-Latent-Interposer as an alternative, and the dev says an A1111 version can be implemented.

Projects
None yet
Development

No branches or pull requests

7 participants