-
It can help a little with flexibility of the captions and prompting, but too much of it can risk more overfitting, because the lack of guidance from the captions skews the loss calculation. I find caption_tag_dropout_rate and shuffle_caption a more effective balancing point. Each dataset, training run, and set of goals will need to be evaluated to find what balance works for you.
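To make the distinction concrete, here is a minimal sketch of what per-tag dropout plus caption shuffling does to a comma-separated caption. The function name, parameter names, and logic are illustrative assumptions for this sketch, not the trainer's actual implementation; the real scripts apply this per sample during data loading.

```python
import random

def process_caption(caption, tag_dropout_rate=0.1, shuffle=True,
                    keep_tokens=1, rng=None):
    """Illustrative sketch: drop individual tags with some probability
    and shuffle the rest, instead of dropping the whole caption."""
    rng = rng or random.Random()
    tags = [t.strip() for t in caption.split(",") if t.strip()]
    # The first `keep_tokens` tags stay fixed in place (e.g. a trigger word).
    kept, rest = tags[:keep_tokens], tags[keep_tokens:]
    # Drop each remaining tag independently with the given probability.
    rest = [t for t in rest if rng.random() >= tag_dropout_rate]
    if shuffle:
        rng.shuffle(rest)
    return ", ".join(kept + rest)
```

Because only some tags disappear on any given step, the caption still carries guidance for the loss, which is why tag-level dropout tends to overfit less than dropping the entire caption.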
-
If so, what value did you use? I've tried 5%, 10%, and training without any txt files at all; the result has never been better for me than using full captions.