Commit

Support fp6 save & load (#11034)
cyita authored May 15, 2024
1 parent ac384e0 commit 686f603
Showing 1 changed file with 1 addition and 1 deletion.
python/llm/src/ipex_llm/transformers/low_bit_linear.py (1 addition, 1 deletion)
@@ -268,7 +268,7 @@ def ggml_q_format_convet_xpu2cpu(tensor: torch.Tensor, num_elem: int, qtype: int

     src = ctypes.c_void_p(tensor.data.data_ptr())

-    if qtype in [SYM_INT4, ASYM_INT4, SYM_INT8, NF4, NF3, FP4, FP8E4, FP8E5]:
+    if qtype in [SYM_INT4, ASYM_INT4, SYM_INT8, NF4, NF3, FP4, FP6, FP8E4, FP8E5]:
         dst_tensor = torch.empty_like(tensor)
     elif qtype == ggml_tensor_qtype["sym_int5"]:
         QK = ggml.ggml_qk_size(ggml_tensor_qtype["asym_int5"])
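For context, the sketch below shows the save/load round-trip this one-line change enables. It assumes ipex_llm's documented low-bit interface (load_in_low_bit=..., model.save_low_bit(), AutoModelForCausalLM.load_low_bit()) and an Intel GPU ("xpu") runtime; the model id and save path are placeholders. Judging from the surrounding code, ggml_q_format_convet_xpu2cpu is invoked when low-bit weights are brought back to the CPU ggml layout (e.g. for serialization), and formats whose packed size is the same on both devices get their destination buffer from torch.empty_like, while sym_int5 needs re-blocking via ggml_qk_size. Adding FP6 to the first branch lets it follow the same path as the other fixed-width packed formats.

    from ipex_llm.transformers import AutoModelForCausalLM

    # Quantize to FP6 at load time; the model id is a placeholder.
    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-2-7b-chat-hf",
        load_in_low_bit="fp6",
        trust_remote_code=True,
    )
    model = model.to("xpu")  # assumes an Intel GPU + IPEX runtime

    # Saving from the XPU converts each low-bit weight back to the CPU
    # ggml layout; with this patch FP6 is handled by the torch.empty_like
    # branch instead of falling through unhandled.
    model.save_low_bit("./llama2-7b-fp6")

    # Later: reload the already-quantized checkpoint without re-quantizing.
    model = AutoModelForCausalLM.load_low_bit("./llama2-7b-fp6")
    model = model.to("xpu")

This is a sketch of the intended usage under the stated assumptions, not code from the commit itself.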
