
Can anyone help me out? #105

Open
DragonChen-strong opened this issue Feb 14, 2025 · 2 comments

Comments

@DragonChen-strong

When I deploy the web service, I get:
Traceback (most recent call last):
  File "/home/jdkj/Whisper-Finetune/infer_server.py", line 39, in <module>
    model = WhisperModel(args.model_path, device="cuda", compute_type="float16", num_workers=args.num_workers,
  File "/home/jdkj/miniconda3/envs/whisper/lib/python3.9/site-packages/faster_whisper/transcribe.py", line 647, in __init__
    self.model = ctranslate2.models.Whisper(
RuntimeError: Unable to open file 'model.bin' in model 'models/whisper-tiny-finetune'
It seems that ctranslate2's Whisper does not support safetensors. How can I solve this problem?

@yeyupiaoling
Owner

@DragonChen-strong Did you convert your model?
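
(For context: ctranslate2.models.Whisper, which faster_whisper uses under the hood, only loads CTranslate2's own converted format, i.e. a directory containing a model.bin file; it cannot open a Hugging Face checkpoint saved as safetensors. One likely fix, assuming the merged model in models/whisper-tiny-finetune is a standard Transformers checkpoint, is to convert it with CTranslate2's converter before serving, for example:

ct2-transformers-converter --model models/whisper-tiny-finetune --output_dir models/whisper-tiny-finetune-ct2 --copy_files tokenizer.json preprocessor_config.json --quantization float16

Here models/whisper-tiny-finetune-ct2 is just an example output name, and --copy_files assumes tokenizer.json and preprocessor_config.json were saved alongside the merged weights.)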

@DragonChen-strong
Author

python merge_lora.py --lora_model=output/whisper-tiny/checkpoint-best/ --output_dir=models/
This is the command I used for model fine-tuning.

python infer_server.py --host=0.0.0.0 --port=5000 --model_path=models/whisper-tiny-finetune --num_workers=2
This is the command I used to start the server.

Traceback (most recent call last):
  File "/home/jdkj/Whisper-Finetune/infer_server.py", line 39, in <module>
    model = WhisperModel(args.model_path, device="cuda", compute_type="float16", num_workers=args.num_workers,
  File "/home/jdkj/miniconda3/envs/whisper/lib/python3.9/site-packages/faster_whisper/transcribe.py", line 647, in __init__
    self.model = ctranslate2.models.Whisper(
RuntimeError: Unable to open file 'model.bin' in model 'models/whisper-tiny-finetune'

This is the error it throws.
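
(Note: of the two commands above, merge_lora.py only merges the LoRA weights into a regular Transformers checkpoint, so nothing in this sequence ever creates the model.bin that CTranslate2 looks for. Assuming the conversion step sketched earlier was run and wrote the converted model to models/whisper-tiny-finetune-ct2, the server would then be started against that directory instead, e.g.:

python infer_server.py --host=0.0.0.0 --port=5000 --model_path=models/whisper-tiny-finetune-ct2 --num_workers=2)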
