
Stable Diffusion 3.5 missing parameters when generating toml file : Blocks to swap and Fused Backward Pass #3094

Open
FurkanGozukara opened this issue Feb 23, 2025 · 7 comments


@FurkanGozukara
Contributor

I am starting to research Stable Diffusion 3.5 Large training.

Currently, Fused Backward Pass and Blocks to Swap are not saved into the toml file, so setting them in the GUI makes no difference.
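For reference, here is a minimal sketch of the entries I would expect the GUI to emit into the toml, assuming the key names follow sd-scripts' --blocks_to_swap and --fused_backward_pass flags (values illustrative):

```toml
# illustrative values; key names assumed from the sd-scripts CLI flags
blocks_to_swap = 20
fused_backward_pass = true
```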

Also, does Stable Diffusion 3.5 Large support CLIP-L, CLIP-G, or T5-XXL training?

Thank you so much for the fixes and info @bmaltais

@FurkanGozukara FurkanGozukara changed the title from "Stable Diffusion 3.5 missing parameters Blocks to swap and Fused Backward Pass" to "Stable Diffusion 3.5 missing parameters when generating toml file : Blocks to swap and Fused Backward Pass" on Feb 23, 2025
@bernardmaltais
Contributor

> Also, does Stable Diffusion 3.5 Large support CLIP-L, CLIP-G, or T5-XXL training?

This is really a question for Kohya. I don't really train SD models anymore... hence why support for updates is lacking.

I will try to see why the other two parameters are not used... if they are in the GUI they should probably take effect, unless I made a mistake with the code and they are not properly handled.
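For illustration only (a hypothetical sketch, not the actual kohya_ss code), the usual failure mode is that a GUI value never makes it into the dict that gets serialized, so it silently disappears from the generated toml:

```python
import toml

# hypothetical sketch: any GUI field omitted from this dict never
# reaches the generated toml and therefore has no effect on training
config = {
    "learning_rate": 1e-5,
    "blocks_to_swap": 20,          # must be added here...
    "fused_backward_pass": True,   # ...or the GUI setting is silently dropped
}

with open("config.toml", "w") as f:
    toml.dump(config, f)
```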

@FurkanGozukara
Contributor Author

> Also, does Stable Diffusion 3.5 Large support CLIP-L, CLIP-G, or T5-XXL training?

> This is really a question for Kohya. I don't really train SD models anymore... hence why support for updates is lacking.

> I will try to see why the other two parameters are not used... if they are in the GUI they should probably take effect, unless I made a mistake with the code and they are not properly handled.

I managed to get CLIP-L and CLIP-G training working via extra arguments.

Block swap will probably work too; I haven't tested it yet. But yes, the GUI didn't write it into the toml file.

Thanks for all the support so far.

@bernardmaltais
Contributor

bernardmaltais commented Feb 24, 2025

I fixed the issue with block_to_swap and fused_backward_pass. Just pushed an update.

Regarding the clip l and clip g parameters, are they also missing in the SD3 LoRA GUI? What are the actual parameters missing that you manually passed as extra arguments?

@FurkanGozukara
Contributor Author

> I fixed the issue with block_to_swap and fused_backward_pass. Just pushed an update.

> Regarding the clip l and clip g parameters, are they also missing in the SD3 LoRA GUI? What are the actual parameters missing that you manually passed as extra arguments?

I am doing DreamBooth training, and yes.

The text encoder learning rates are provided as te1, te2, and te3.

Here are the extra parameters I used that worked:

```
--learning_rate_te1 1e-5 --learning_rate_te2 2e-5 --learning_rate_te3 0 --train_text_encoder --use_t5xxl_cache_only
```
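For context, a minimal sketch of how these fit into an sd-scripts invocation; the text-encoder flags are the ones from my run, but the model path is a placeholder and all dataset/config arguments are elided. te1, te2, and te3 presumably map to CLIP-L, CLIP-G, and T5-XXL respectively, which matches setting te3 to 0 above to leave T5-XXL frozen:

```bash
# sketch only; dataset/config arguments elided, paths are placeholders
accelerate launch sd3_train.py \
  --pretrained_model_name_or_path /path/to/sd3.5_large.safetensors \
  --train_text_encoder \
  --learning_rate_te1 1e-5 \
  --learning_rate_te2 2e-5 \
  --learning_rate_te3 0 \
  --use_t5xxl_cache_only
```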

@bernardmaltais
Contributor

Ah... I fixed the LoRA tab... DreamBooth might still have issues... I will have to look into that one too.

@bernardmaltais
Contributor

block_to_swap and fused_backward_pass should work now for DreamBooth.

I will tackle the learning rates tomorrow.
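A quick way to verify the fix is to generate a config from the GUI and check that both keys actually land in the file (filename hypothetical):

```python
import toml

# load the GUI-generated config and confirm both keys are present
cfg = toml.load("config.toml")
print(cfg.get("blocks_to_swap"), cfg.get("fused_backward_pass"))
```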

@FurkanGozukara
Contributor Author

> block_to_swap and fused_backward_pass should work now for DreamBooth.

> I will tackle the learning rates tomorrow.

Thank you so much.

Also, do you have any ideas about this?

#3095
