How about supporting other Flux checkpoints? #9
Only the original Flux model is supported; how about supporting other Flux checkpoints?
Comments
PRs are welcome.
@xueqing0622 #14 should enable support for other Flux checkpoints. Feel free to try it; it could benefit from more testing!
Thanks, blepping. Can you give a workflow for your branch?
It seems a LoRA can't be used.
How about using a GGUF UNet?
blepping, thanks for the answer.
You will get warnings for the layers that don't exist in the modded version of Flux, but the LoRA may still apply to the others. In other words, it may still work to some degree even if parts of the LoRA didn't apply. You can try a LoRA and see if you like the results; pretty much all I can say is that it does something. Just for example, compare the results with no LoRA, with an illustration style LoRA, and with a painting style LoRA (example images).
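In case it helps to picture what "partially applying" means, here is a minimal sketch of the general idea, not the node's actual code: keys whose target layer no longer exists get skipped with a warning, and the remaining keys still patch the weights that do exist. The function name and the dict layout are assumptions for illustration.

```python
# Minimal sketch of "partial" LoRA application, assuming a plain dict of weight
# tensors and a LoRA stored as (down, up) low-rank pairs keyed by weight name.
import torch

def apply_lora_partially(model_sd: dict, lora_pairs: dict, strength: float = 1.0):
    applied, skipped = 0, 0
    for name, (down, up) in lora_pairs.items():
        if name not in model_sd:
            # The layer was removed in the modded model, so this key can't apply.
            print(f"warning: no layer named {name!r}, skipping this LoRA key")
            skipped += 1
            continue
        # Standard low-rank update: W += strength * (up @ down)
        model_sd[name] = model_sd[name] + strength * (up @ down).to(model_sd[name].dtype)
        applied += 1
    print(f"applied {applied} LoRA keys, skipped {skipped}")
    return model_sd
```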
It's worth a try. From what I understand, a chunk of the model in the middle is missing. Imagine something like this with normal Flux:

block 1, block 2, block 3, block 4, block 5, block 6, ...

But then we delete blocks 2, 3, 4:

block 1, block 5, block 6, ...
The LoRA still hits 1 as intended, but the rest of the model is shifted, so the part that was supposed to hit 2 now hits 5. This is just my guess; I'm not sure this explanation is correct. If it is roughly correct, then really the best we could do is skip the parts of the LoRA that want to apply to layers that no longer exist and offset the others so they at least hit the intended layer. This would be fairly difficult to do, and it's also pretty likely that the LoRA still wouldn't work properly (since it was trained on a different model and we might have to skip parts of it entirely).
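If that guess is roughly right, the "skip the missing layers and offset the rest" idea could look something like the sketch below. The block numbering matches the toy example above, and the `double_blocks.<idx>.<rest>` key format and helper are assumptions for illustration, not FluxMod's real layout.

```python
# Hypothetical sketch: remap LoRA block indices after blocks 2, 3, 4 are deleted.
# Surviving blocks get renumbered by position, so a naively loaded LoRA key for
# index 2 would actually land on what used to be block 5.
kept_blocks = [1, 5, 6, 7]
old_to_new = {old: new for new, old in enumerate(kept_blocks, start=1)}

def remap_lora_key(key: str):
    prefix, idx, rest = key.split(".", 2)
    new_idx = old_to_new.get(int(idx))
    if new_idx is None:
        return None  # this block no longer exists, so skip the LoRA key entirely
    return f"{prefix}.{new_idx}.{rest}"

print(remap_lora_key("double_blocks.5.attn"))  # -> double_blocks.2.attn
print(remap_lora_key("double_blocks.3.attn"))  # -> None (deleted block)
```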
blepping, I added it to the LoRA loader and it didn't work.
You're welcome! I think merging the LoRA into the model is the best option right now; hopefully this is only a temporary limitation, but I can't promise anything at this point. When I have some time I'll think more about how to solve this. Seems like the pull got merged, so you can switch back to the main branch. It's encouraging that merging the LoRAs seemed to work for you; trying to fix it so they at least apply to the correct (existing) layers should probably lead to similar results.
That approach would definitely be best from a user perspective; unfortunately, I think it would be hard to implement. Probably the simplest way to explain it would be to say that after you load the model with [...]

I have an idea for a different approach which I'll try when I get a chance; it should let you apply LoRAs normally, and they should work about the same as when you tried to merge them into the checkpoint.
Of course, I'm a non-professional. |
Actually, I think my explanation about the missing blocks was wrong (at least for normal FluxMod, might be sort of correct when using the mini version). From looking closer at how this stuff works, it doesn't seem obvious why merging a LoRA into a checkpoint would work better than just trying to load it normally. It seems like the warnings when trying to load the LoRA only involve parts of the model that get deleted in FluxMod so that should be the same either way. |
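One rough way to double-check that claim, assuming you can get key sets for stock Flux, the FluxMod checkpoint, and the LoRA's targets (the helper name here is made up), would be to confirm that every failing LoRA key corresponds to a layer that exists in stock Flux but was deleted in FluxMod:

```python
# Hypothetical sanity check: failing LoRA keys should all point at layers that
# exist in stock Flux but were removed in FluxMod. Any leftovers would mean the
# "same either way" reasoning above is wrong.
def check_lora_coverage(flux_keys, fluxmod_keys, lora_target_keys):
    deleted = set(flux_keys) - set(fluxmod_keys)
    failing = [k for k in lora_target_keys if k not in fluxmod_keys]
    unexplained = [k for k in failing if k not in deleted]
    print(f"{len(failing)} failing LoRA keys, "
          f"{len(unexplained)} not explained by deleted layers")
    return unexplained  # expected to be empty
```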
#15 only adds GGUF support; it doesn't do anything to improve the LoRA situation. LoRAs will also probably be even less likely to work if you use the lite patch.
Thanks for your answer; GGUF and other checkpoint support are very useful for my workflow.