Is it possible to fine-tune StableNormal for different effects? #21
Hello! I’m curious to know if StableNormal can be fine-tuned or re-trained for other types of effects beyond monocular normal estimation. Similar to how you’ve adapted StableDelight, would it be possible to tailor StableNormal for different tasks or applications? If so, could you provide some guidance or recommendations on the fine-tuning process?
Thank you!

Comments
We find that the dataset is critical when repurposing SD for pix2pix tasks or applications. So you need to collect training data pairs, such as (img, normal), (img, depth), etc.
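A minimal sketch of what such paired data might look like in code, assuming a hypothetical directory layout with matching filenames under `input/` and `target/` (this is not the authors' released data loader, just an illustration of the pairing):

```python
# Hypothetical paired image-to-image dataset for pix2pix-style fine-tuning.
# Assumes root/input/xxx.png and root/target/xxx.png share filenames.
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class PairedImageDataset(Dataset):
    def __init__(self, root, size=512):
        self.input_dir = os.path.join(root, "input")    # e.g. RGB photos
        self.target_dir = os.path.join(root, "target")  # e.g. normal / specular maps
        self.names = sorted(os.listdir(self.input_dir))
        self.tf = transforms.Compose([
            transforms.Resize((size, size)),
            transforms.ToTensor(),                      # scale to [0, 1]
            transforms.Normalize([0.5], [0.5]),         # shift to [-1, 1], as SD expects
        ])

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        x = self.tf(Image.open(os.path.join(self.input_dir, name)).convert("RGB"))
        y = self.tf(Image.open(os.path.join(self.target_dir, name)).convert("RGB"))
        return {"input": x, "target": y}
```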
Yeah, for sure! I made my own attempt using pix2pixHD, and the quality is far below what you achieved!
Could you give me some examples of your training data pairs?
Unfortunately I cannot share the data, since it belongs to the company I work for, but it was more than a thousand image <-> specular pairs. Ideally it would be something like training a LoRA over that data.
Do you mean using LoRA to fine-tune our model on your dataset?
I was just referring to fine-tuning your model; it could be creating a new model or a kind of LoRA over your base model, either option would be great. My question was just about using a different dataset, in my case image <-> specular, to somehow retrain your system as you did with StableDelight. By the way, how many pairs would be necessary?
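For the LoRA option, a rough sketch of what that could look like, following the pattern used in diffusers' LoRA training examples (the checkpoint name, rank, and learning rate below are illustrative assumptions, not the authors' recipe):

```python
# Sketch: freeze an SD-style UNet and inject trainable LoRA adapters via peft.
# Requires diffusers >= 0.21 with peft installed.
import torch
from diffusers import UNet2DConditionModel
from peft import LoraConfig

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder base model, not StableNormal's
    subfolder="unet",
)
unet.requires_grad_(False)  # keep all base weights frozen

lora_config = LoraConfig(
    r=8,                    # low-rank dimension; tune to the size of your dataset
    lora_alpha=8,
    init_lora_weights="gaussian",
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # attention projections
)
unet.add_adapter(lora_config)  # only the injected LoRA weights are now trainable

optimizer = torch.optim.AdamW(
    (p for p in unet.parameters() if p.requires_grad), lr=1e-4
)
```

The appeal of this route is that with on the order of a thousand pairs, training only the low-rank adapters is far less prone to overfitting than full fine-tuning of the base model.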
@lingtengqiu do you plan to share a way to retrain it in the future?
We plan to clean up our training code and release it ASAP. Please wait patiently :) :) :)
Great, thank you! If possible, could you tell us whether it will be full training or fine-tuning?