Issue loading a PEFT lora model #70
Comments
I replaced the get_lora_depth function, but the ff and to_out layers are still not handled correctly.
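For context, in diffusers-style transformer blocks the feed-forward and attention output projections live under names like ff.net.0.proj, ff.net.2 and attn1.to_out.0, so a lookup that only walks the attention q/k/v modules will skip them. A rough sketch of the kind of name handling that may be missing (the module suffixes below are the usual diffusers conventions, not necessarily this repo's exact keys):

```python
# Hypothetical check for the feed-forward / output-projection modules that
# an attention-only key walk would miss. Suffixes follow diffusers naming
# and are an assumption, not this repo's actual key list.
EXTRA_SUFFIXES = {
    "ff.net.0.proj",   # GEGLU input projection
    "ff.net.2",        # feed-forward output projection
    "attn1.to_out.0",  # self-attention output projection
    "attn2.to_out.0",  # cross-attention output projection
}

def needs_extra_handling(module_name: str) -> bool:
    """True if the module is one of the ff / to_out layers reported as unpatched."""
    return any(module_name.endswith(suffix) for suffix in EXTRA_SUFFIXES)
```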
Even with the latest merged PR, I am still getting different results between using the node and merging to a checkpoint. I believe there are some missing layers/weights still!
Here is a lora example I just published: I've found that I need to crank the lora strength quite a bit, i.e. up to 1.8, to get good results. Is there anything in the patching algorithm that would require such a boost in strength?
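One possible explanation: the usual LoRA update scales the low-rank product by alpha/rank before the user-facing strength is applied, and if that factor is dropped during patching, a LoRA trained with alpha != rank only reaches its intended effect at an inflated strength. A minimal sketch of the expected math, with illustrative tensor names rather than the node's actual code:

```python
import torch

def apply_lora_delta(weight, lora_up, lora_down, alpha, strength=1.0):
    """Return weight + strength * (alpha / rank) * up @ down.

    If the alpha / rank factor is omitted, the LoRA's effect is weaker than
    intended and the user has to compensate with a higher strength value.
    """
    rank = lora_down.shape[0]
    scale = alpha / rank if alpha is not None else 1.0
    delta = strength * scale * (lora_up.float() @ lora_down.float())
    return (weight.float() + delta).to(weight.dtype)
```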
When trying to load this PEFT Lora model I have multiple issues.
https://www.dropbox.com/scl/fi/30y9yn26ao8pnwch7z1ex/test_lora.zip?rlkey=r6kvgzwvrqm9tnw4jz8ctgu2f&st=x8r69pb3&dl=0
The first is that the code crashes at the line
k = new_modelpatcher.add_patches(loaded, strength)
in lora.py when using the automatic cfg node after loading the lora. Also, when I try to load the LoRA, it doesn't find all the layers it needs to patch, i.e. only 14 layers out of 574 of them... An image is generated, but the LoRA isn't applied correctly!
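For what it's worth, PEFT saves keys of the form base_model.model.<module path>.lora_A.weight / lora_B.weight, while the patcher expects entries addressed by the diffusion model's own state-dict names; if that translation only matches a few patterns, most of the 574 layers get skipped and add_patches can choke on the leftovers. A rough sketch of the kind of conversion involved (the key patterns reflect how PEFT serialises LoRA and the target naming is an assumption, not this repo's actual code):

```python
def peft_to_patch_keys(peft_sd):
    """Group PEFT lora_A / lora_B tensors by their base module name.

    PEFT stores e.g.
        base_model.model.transformer_blocks.0.attn.to_q.lora_A.weight
        base_model.model.transformer_blocks.0.attn.to_q.lora_B.weight
    while the patcher wants one entry per target weight, here assumed to be
        transformer_blocks.0.attn.to_q.weight
    """
    patches = {}
    for key, tensor in peft_sd.items():
        if ".lora_A." in key:
            base, part = key.split(".lora_A.")[0], "down"
        elif ".lora_B." in key:
            base, part = key.split(".lora_B.")[0], "up"
        else:
            continue  # skip non-LoRA tensors such as biases or config entries
        target = base.replace("base_model.model.", "", 1) + ".weight"
        patches.setdefault(target, {})[part] = tensor
    return patches
```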