Add layer-specific training to init flux train node? #97
Comments
Hey, thanks for your fast answer! Can you also select double and single blocks specifically?
Yes, just replace `single` with `double` in the string.
Blessings! Thanks, mate.
Hello, hope you're doing well. I have another question, please: can I use multiple blocks as a comma-separated sequence in one run? And how can I change the number of repeats, like in Kohya, where you rename the folder with a number prefix such as 10_foldername? Thanks for taking the time.
I would say something like this, but I'm not sure: `lora_unet_single_blocks_7_linear1, lora_unet_single_blocks_20_linear1`
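For anyone landing here later, a minimal sketch of how such a comma-separated list could be assembled. The module-name pattern (`lora_unet_single_blocks_{i}_...` / `lora_unet_double_blocks_{i}_...`) follows what is quoted above; whether the node wants full module names (e.g. ending in `_linear1`) or just substrings, and which input field of the Init Flux Train node accepts the string, are assumptions to verify against the node itself.

```python
# Hypothetical helper: build a comma-separated filter string for
# block-specific FLUX LoRA training, following the naming pattern
# quoted in this thread (lora_unet_single_blocks_7_linear1, ...).
# Whether a trailing-underscore substring is enough, or the full
# module name is required, is an assumption -- check the node's input.

def build_block_filter(single_blocks=(), double_blocks=()):
    """Return a comma-separated string of block-name patterns."""
    patterns = [f"lora_unet_single_blocks_{i}_" for i in single_blocks]
    patterns += [f"lora_unet_double_blocks_{i}_" for i in double_blocks]
    return ", ".join(patterns)

# Example: target single blocks 7 and 20, plus double block 2.
print(build_block_filter(single_blocks=[7, 20], double_blocks=[2]))
# -> lora_unet_single_blocks_7_, lora_unet_single_blocks_20_, lora_unet_double_blocks_2_
```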
Hello dear Kijai,
First of all, thank you so much for creating these amazing nodes and workflows! They’ve made LoRA training much more accessible and streamlined for me.
I recently came across an article discussing the concept of training only specific layers with LoRA to save time and resources. As described in that article, you can target specific layer regions for different use cases; for instance, layers 2 and 7 are ideal for training faces. It’s incredible not only because you can selectively control which layers to train, but also because the resulting LoRA files are significantly smaller, enabling super-fast training sessions.
For reference, the FLUX model consists of 37 single and double blocks, with 18 of them being double blocks, offering fine-grained control during training. You can check out all the displayed layers here:
All 37 layers.
Additionally, this Reddit article highlights how compact and efficient FLUX LoRA files can be, often smaller than 45 MB even at a network dimension (rank) of 128.
Thank you again for your time and effort—it’s truly appreciated!
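To make the file-size point concrete, here is a rough back-of-the-envelope estimate of how small a block-restricted LoRA can get. The layer shapes below and the 2-bytes-per-weight (fp16/bf16) assumption are placeholders rather than the actual FLUX dimensions, so treat the result as an order-of-magnitude illustration only.

```python
# Rough estimate of LoRA file size when only a few linear layers
# in selected blocks are trained.  Substitute the real shapes the
# trainer reports for the modules you actually target.

def lora_pair_params(in_features, out_features, rank):
    # One LoRA pair per targeted linear: A (rank x in) + B (out x rank).
    return rank * in_features + out_features * rank

rank = 128
# Assumed placeholder shapes: two targeted 3072 -> 3072 projections.
targeted = [(3072, 3072), (3072, 3072)]

total = sum(lora_pair_params(i, o, rank) for i, o in targeted)
size_mb = total * 2 / 1024**2  # 2 bytes per weight (fp16/bf16)
print(f"{total:,} LoRA params ≈ {size_mb:.1f} MB")
# -> 1,572,864 LoRA params ≈ 3.0 MB
```

Even at rank 128, restricting training to a handful of layers keeps the adapter a few megabytes, which is consistent with the sub-45 MB figures mentioned above.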