support batch forward for q4_k, q6_k (#11325)
rnwang04 authored Jun 14, 2024
1 parent e8dd8e9 commit 8a3247a
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion python/llm/src/ipex_llm/transformers/low_bit_linear.py
@@ -332,7 +332,7 @@ def use_batch_forward(x: torch.Tensor, qtype: int, output_len: int):
         and output_len % 32 == 0
         and device in ["arc", "flex", "pvc", "mtl"]
         and qtype in [SYM_INT4, ASYM_INT4, SYM_INT8, FP4,
-                      FP8E5, FP6, FP8E4]
+                      FP8E5, FP6, FP8E4, Q4_K, Q6_K]
         and batch_size <= 64
     )
     if hard_condition:
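
For context, below is a minimal, self-contained sketch of the gating logic this one-line change extends. It is not ipex-llm's exact code: the qtype constant values, the BATCH_FORWARD_QTYPES name, and the explicit device argument are illustrative assumptions (in the real file, the qtype constants come from the library's quantization-type bindings and the device is derived from the tensor rather than passed in).

import torch

# Hypothetical placeholder values; in ipex-llm these constants are defined
# by the quantization-type bindings, not as a simple range.
SYM_INT4, ASYM_INT4, SYM_INT8, FP4, FP8E5, FP6, FP8E4, Q4_K, Q6_K = range(9)

# qtypes eligible for the fused batch kernel; Q4_K and Q6_K are the two
# entries this commit adds.
BATCH_FORWARD_QTYPES = [SYM_INT4, ASYM_INT4, SYM_INT8, FP4,
                        FP8E5, FP6, FP8E4, Q4_K, Q6_K]

def use_batch_forward(x: torch.Tensor, qtype: int, output_len: int,
                      device: str) -> bool:
    """Sketch of the hard condition: True when the batched kernel may run."""
    batch_size = x.shape[0]
    return (
        output_len % 32 == 0                          # output dim must be 32-aligned
        and device in ["arc", "flex", "pvc", "mtl"]   # supported Intel GPU families
        and qtype in BATCH_FORWARD_QTYPES
        and batch_size <= 64                          # kernel limited to small batches
    )

# e.g. use_batch_forward(torch.randn(8, 4096, dtype=torch.half), Q4_K, 4096, "arc")
# returns True: batch of 8 <= 64, 4096 % 32 == 0, and Q4_K is now eligible.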
