Commit

quick fix qwen2 fp8 kv cache (#10135)
MeouSker77 authored Feb 8, 2024
1 parent: 3c23854 · commit: c67d363
Showing 1 changed file with 2 additions and 0 deletions.
python/llm/src/bigdl/llm/transformers/models/qwen2.py (2 additions, 0 deletions)
@@ -167,6 +167,8 @@ def qwen2_attention_forward_quantized(
 
     if q_len != 1:
         key, value = restore_fp8_kv_cache(key_states, value_states, query_states.dtype)
+        key = repeat_kv(key, self.num_key_value_groups)
+        value = repeat_kv(value, self.num_key_value_groups)
         attn_weights = torch.matmul(query_states, key.transpose(2, 3))
     else:
         import linear_q4_0
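For context: Qwen2 uses grouped-query attention, so after restore_fp8_kv_cache dequantizes the FP8 cache, key and value still carry only num_key_value_heads heads; the two added repeat_kv calls expand them by self.num_key_value_groups so they match the query heads before torch.matmul. Below is a minimal sketch of what repeat_kv does, assuming it behaves like the standard Hugging Face transformers helper of the same name (not taken from this repository):

import torch

def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
    # Expand (batch, num_kv_heads, seq_len, head_dim) into
    # (batch, num_kv_heads * n_rep, seq_len, head_dim) so grouped KV heads
    # line up with the query heads before the attention matmul.
    batch, num_kv_heads, seq_len, head_dim = hidden_states.shape
    if n_rep == 1:
        return hidden_states
    expanded = hidden_states[:, :, None, :, :].expand(
        batch, num_kv_heads, n_rep, seq_len, head_dim
    )
    return expanded.reshape(batch, num_kv_heads * n_rep, seq_len, head_dim)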
