Out of memory #17
Comments
My fully specced M1 Pro with 32 GB of RAM also ran out of memory. I suspect it takes 64 GB to run this properly.
I'll give it a try. I kicked off fine-tuning before heading out today, and when I came back the machine had rebooted. Giving up on that for now.
To run python start_qa_dialogue.py directly, how did you produce the converted model it uses? Did you write the tokenizer_config.json yourself?
No need. Just run ./tools/compress_model.py and see: it compressed the 130 GB Qwen1.5-32B-Chat down to 18 GB. Then change the model path in ./main/chat.py from Qwen1.5-32B-Chat-FT-4Bit to Qwen1.5-32B-Chat-4Bit and you can run chat.py directly.
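For context on what that compression step does: the numbers above are consistent with 4-bit quantization. Roughly, 32B parameters at 4 bytes each is about 130 GB, and the same weights packed into 4 bits each come to about 16 GB, which lines up with the reported 18 GB once quantization scales and embeddings are included. Below is a minimal sketch of the same idea using mlx-lm's convert() API; this is only an assumption about what ./tools/compress_model.py does internally, and the Hugging Face repo id and output path are illustrative.

```python
# Hedged sketch, not the repo's actual compress_model.py: 4-bit quantization of the
# chat model via mlx-lm's convert() API. Repo id and output path are assumptions.
from mlx_lm import convert

convert(
    "Qwen/Qwen1.5-32B-Chat",           # source weights in full precision
    mlx_path="Qwen1.5-32B-Chat-4Bit",  # output directory chat.py would point at
    quantize=True,                     # write quantized weights
    q_bits=4,                          # 4 bits per weight
    q_group_size=64,                   # per-group scales/zero points (mlx-lm default)
)
```

The weights are unchanged in content, only stored at lower precision, which is also why no tokenizer_config.json needs to be written by hand: convert copies the tokenizer and config files into the output directory alongside the quantized weights.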
Right, you also need to change the directory path. It really runs now and I'm getting results. Thank you!
I'm curious: the model hasn't learned anything, it was only compressed, so how does it still work this well?
Try lowering --batch.
Starting training..., iters: 1000
Iter 1: Val loss 9.100, Val took 5657.756s
Iter 10: Train loss 9.754, Learning Rate 1.000e-05, It/sec 0.005, Tokens/sec 6.189, Trained Tokens 12868, Peak mem 81.864 GB
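On the out-of-memory reports: the log shows a peak of about 82 GB, which is why 32 GB machines lock up or reboot. The log format matches mlx-lm's LoRA trainer, so the usual levers are a smaller batch size (the --batch suggestion above) and fewer LoRA layers. A hedged sketch under that assumption follows; the flag names are mlx-lm's, not necessarily this repo's, and the data directory is illustrative.

```python
# Hedged sketch, assuming the fine-tuning step wraps mlx-lm's LoRA trainer.
# Smaller batches and fewer adapted layers reduce peak memory at the cost of speed.
import subprocess

subprocess.run(
    [
        "python", "-m", "mlx_lm.lora",
        "--model", "Qwen1.5-32B-Chat-4Bit",  # the quantized model from above
        "--train",
        "--data", "data",        # directory with train/valid JSONL (assumed layout)
        "--batch-size", "1",     # fewer sequences per step, lower activation memory
        "--lora-layers", "8",    # adapt fewer layers, less memory for adapters/grads
        "--iters", "1000",
    ],
    check=True,
)
```

If peak usage still exceeds physical RAM, macOS swaps heavily and the run slows to a crawl or the machine becomes unresponsive, which would explain the reboot reported earlier in the thread.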