
Question about merging the 7B model with a LoRA model #543

Open
ericzfguo opened this issue Dec 24, 2024 · 5 comments

Comments

@ericzfguo

Hello author, I'd like to ask: when using merge_peft_adapter.py to merge the pretrained 7B model with a trained LoRA model, how should the parameters --model_type, --tokenizer_path, and --resize_emb be set? Thank you!

[screenshot: merge]

@shibing624
Owner

Use --model_type auto; leave --tokenizer_path and --resize_emb unset.
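For context, a minimal sketch of what such a merge typically amounts to with the PEFT library; the paths, dtype, and the use of merge_and_unload() here are illustrative assumptions, not the repository's exact script:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Placeholder paths (assumptions), not the repository's defaults.
base_model_path = "path/to/base-7b"
lora_model_path = "path/to/lora-adapter"
output_dir = "path/to/merged-model"

# Load the 7B base model; AutoModelForCausalLM plays the role of --model_type auto.
base = AutoModelForCausalLM.from_pretrained(base_model_path, torch_dtype=torch.float16)

# Attach the trained LoRA adapter, then fold its weights into the base model.
model = PeftModel.from_pretrained(base, lora_model_path)
merged = model.merge_and_unload()

# Save the merged weights plus the base tokenizer. When the LoRA run did not add
# new tokens, no separate --tokenizer_path or --resize_emb is needed.
merged.save_pretrained(output_dir)
AutoTokenizer.from_pretrained(base_model_path).save_pretrained(output_dir)
```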

@ericzfguo
Author

Thank you!

@ericzfguo
Author

Hello author, one more question. After merging the trained LoRA model with the pretrained 7B model, we evaluated the result and found that the merged model performs worse than using the LoRA model on its own (on the same test data); accuracy dropped. What could cause this? Thanks!

@shibing624
Owner

They should be exactly the same, unless the parameters you used during evaluation changed.
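One way to sanity-check this is to compare logits from the merged checkpoint against the base model with the adapter applied at runtime; this is a sketch with placeholder paths and an arbitrary test prompt, assuming the merge was done as above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Placeholder paths (assumptions).
base_path = "path/to/base-7b"
lora_path = "path/to/lora-adapter"
merged_path = "path/to/merged-model"

tok = AutoTokenizer.from_pretrained(base_path)
inputs = tok("a short test prompt", return_tensors="pt")

with torch.no_grad():
    # Base model with the LoRA adapter applied on the fly.
    adapter_model = PeftModel.from_pretrained(
        AutoModelForCausalLM.from_pretrained(base_path, torch_dtype=torch.float16),
        lora_path,
    )
    logits_adapter = adapter_model(**inputs).logits

    # Merged checkpoint loaded directly.
    merged_model = AutoModelForCausalLM.from_pretrained(merged_path, torch_dtype=torch.float16)
    logits_merged = merged_model(**inputs).logits

# Differences should be at the level of float16 rounding; anything larger usually
# points to mismatched generation/evaluation settings rather than the merge itself.
print((logits_adapter - logits_merged).abs().max())
```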

@ericzfguo
Author

Got it, thank you!
