[Bad Case]: MiniCPM3 original PyTorch .bin files fail to convert to GGUF #212
Comments
Same here. I thought the stock llama.cpp was the problem, so I cloned llama.cpp again from OpenBMB, but it still fails.
I got the conversion to work today. I also hit errors along the way; reading the traceback carefully, in my case the jsonschema and datamodel_code_generator packages were missing. After installing them and rerunning the conversion, it worked.
Could you share the versions you used? My error output contains no missing-package errors: python convert_hf_to_gguf.py models/MiniCPM3-4B --outfile models/MiniCPM3-4B-f16.gguf
INFO:hf-to-gguf:Loading model: MiniCPM3-4B
ERROR:hf-to-gguf:Model MiniCPM3ForCausalLM is not supported
@jason-ni +1. Could you also share your environment version information? If convenient, please provide the requirements file of your virtual environment.
It worked. You cannot use the official llama.cpp repo; you need OpenBMB's llama.cpp fork, and you will run into the same missing-package problem that @jason-ni described.
git checkout minicpm3. The fixed convert_hf_to_gguf.py is not yet merged into the main branch. Regards.
PR link: ggml-org/llama.cpp#9322
The latest version of llama.cpp now supports MiniCPM3; the GGUF-format models are available here.
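Since the missing-package failures above (jsonschema, datamodel_code_generator) only surfaced deep in the converter's traceback, it can help to check for them before running convert_hf_to_gguf.py. A minimal sketch; the helper name check_missing is ours, not part of llama.cpp:

```python
import importlib.util

# Packages this thread reports the converter needs beyond the basics.
REQUIRED = ["jsonschema", "datamodel_code_generator"]

def check_missing(packages):
    """Return the subset of packages that cannot be imported."""
    return [p for p in packages if importlib.util.find_spec(p) is None]

missing = check_missing(REQUIRED)
if missing:
    print("missing, try: pip install " + " ".join(missing))
else:
    print("all converter dependencies present")
```

Running this before the conversion turns a cryptic mid-conversion traceback into an explicit pip install suggestion.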
Description / 描述
Following the instructions, I set up the llama_cpp environment. When running the model conversion (command: python convert_hf_to_gguf.py cpm_model_dir/MiniCPM3-4B/ --outfile cpm_model_dir/MiniCPM3-4B/CPM-4B-F16.gguf), I got this error:
(Note: the machine's default python environment is python3.)
ERROR:hf-to-gguf:Model MiniCPM3ForCausalLM is not supported
I am not sure whether this is a version issue, or whether the current model is not supported by the latest llama_cpp version. Screenshot below:
Case Explanation / 案例解释
No response