
Support applying FluxMod to GGUF models #15

Open · wants to merge 2 commits into main
Conversation

blepping (Contributor) commented on Jan 7, 2025

(Image: fox courtesy of Q5_K_S Flux + FluxMod with lite patch.)

This pull adds simple GGUF support when ComfyUI-GGUF is available. (I say simple because the advanced GGUF loader allows setting options like the dequant dtype, shown below.)

(Screenshot: the advanced GGUF loader node's extra options, including dequant dtype.)

It wouldn't be hard to allow setting those parameters with FluxMod, but that would require either GGUF-specific FluxMod loader nodes or adding GGUF fields to the base nodes, where they might not apply.
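For context, here is a minimal sketch of the optional-dependency pattern this describes. The import path and the `GGMLOps` name are assumptions about how ComfyUI-GGUF is laid out, not its verified API, and `pick_operations` is an illustrative helper rather than the PR's actual code:

```python
import comfy.ops

try:
    # Import path is an assumption; the real ComfyUI-GGUF custom node
    # may need to be imported differently.
    from ComfyUI_GGUF.ops import GGMLOps
except ImportError:
    GGMLOps = None  # ComfyUI-GGUF not installed: no GGUF support

def pick_operations(is_gguf: bool):
    """Choose the ops class used to construct the model's layers."""
    if is_gguf:
        if GGMLOps is None:
            raise RuntimeError(
                "GGUF checkpoint detected but ComfyUI-GGUF is not available"
            )
        return GGMLOps
    # ComfyUI's standard ops for ordinary (non-quantized) checkpoints.
    return comfy.ops.manual_cast
```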

I also disabled the casting layers and the channels-last handling for GGUF models, since I was wary of messing with GGUF-format weights. I don't know for sure whether that was necessary.
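As a sketch of the kind of guard meant here (the function name and structure are illustrative, not the PR's actual code):

```python
import torch

def apply_optimizations(model: torch.nn.Module, is_gguf: bool) -> torch.nn.Module:
    if is_gguf:
        # GGUF tensors are stored in a packed, quantized format; casting them
        # or changing their memory layout could corrupt the weights, so skip
        # these optimizations entirely to be safe.
        return model
    # Regular checkpoints: safe to adjust memory layout and add casting layers.
    model = model.to(memory_format=torch.channels_last)
    # ... casting-layer setup for non-GGUF weights would go here ...
    return model
```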

I'm fairly sure this doesn't break non-GGUF models (it seems fine in my testing), and it works for the GGUF model I tested, both with and without the lite patch. The GGUF support is definitely not guaranteed to be optimal, but it at least enables basic use.


This also includes a few other fixes/cleanups. The copy of math.py from Comfy was unneeded: the only difference was a mask argument, but the actual model layers never pass a mask (not sure if that's intentional), so it makes no difference.

I also set the model's memory usage factor to the normal Flux value. The default in BASE seemed too low: it was easy to run out of memory for things like previews because ComfyUI wasn't estimating memory usage correctly. BASE defaults to 2.0; Flux uses 2.8. It could possibly be reduced somewhat, but I'm not sure whether the factor actually depends on the model parameters or just on the maximum memory a single operation like attention requires.
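Concretely, the fix amounts to overriding the factor on the model config class. A minimal sketch (the class name is illustrative; the 2.0 and 2.8 values are the ones quoted above):

```python
import comfy.supported_models_base

class FluxModModel(comfy.supported_models_base.BASE):
    # BASE defaults to 2.0, which underestimates Flux's peak usage and made
    # it easy to OOM on things like previews; ComfyUI's Flux config uses 2.8.
    memory_usage_factor = 2.8
```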

@blepping mentioned this pull request on Jan 7, 2025: Set Flux memory usage factor