
Commit

Fix qwen export with gptq (huggingface#639)
eaidova authored Mar 28, 2024
1 parent 10ac43e commit 382d00f
Showing 1 changed file with 5 additions and 1 deletion.
6 changes: 5 additions & 1 deletion optimum/exporters/openvino/model_patcher.py
@@ -483,7 +483,11 @@ def __init__(
         model.config.bf16 = False
         model.config.fp16 = False
         if self.original_fp16 or self.original_bf16:
-            model.to(torch.float32)
+            # GPTQ models do not support casting to dtype
+            try:
+                model.to(torch.float32)
+            except Exception:
+                pass
         model.transformer.rotary_emb(2048)
 
     def __enter__(self):
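For context, a minimal standalone sketch of the pattern this commit applies: when the model was originally loaded in fp16/bf16, the patcher casts it back to float32 for export, but GPTQ-quantized models can raise on .to(dtype), so the cast is wrapped in try/except and skipped if it fails. The helper name restore_float32 and its arguments are hypothetical illustrations, not part of the patched file.

import torch
from torch import nn


def restore_float32(model: nn.Module, was_half_precision: bool) -> None:
    # Hypothetical helper mirroring the committed change: cast back to
    # float32 only when the model was loaded in half precision, and tolerate
    # quantized (e.g. GPTQ) models that reject dtype casting.
    if was_half_precision:
        try:
            model.to(torch.float32)
        except Exception:
            # GPTQ-quantized weights may not support .to(dtype); keep the
            # model in its current dtype instead of failing the export.
            pass


# Example usage (model loading elided):
# restore_float32(model, was_half_precision=True)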
