
Commit 2c7315c: fix style
rnwang04 committed Feb 28, 2024 (parent: 136b98c)
Showing 1 changed file with 1 addition and 1 deletion.
python/llm/src/bigdl/llm/transformers/models/utils.py

@@ -274,7 +274,7 @@ def use_flash_attention(query, key, attention_mask=None):
     else:
         # TODO: below logic may change for different model
         # attention mask shape : [bsz, 1, q_len, k_len]
-        if attention_mask[0].squeeze()[0,0].item() != 0:
+        if attention_mask[0].squeeze()[0, 0].item() != 0:
             # first batch contains padding
             # otherwise we suppose it should be an upper triangular matrix
             # at the same time, the diagonal is also 0
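For context, the condition being restyled is the padding check in use_flash_attention: with the additive mask convention described in the comments (shape [bsz, 1, q_len, k_len], 0 where attention is allowed, a large negative value where masked), a purely causal mask is 0 at position [0, 0], so a non-zero value there signals padding in the first batch element. Below is a minimal sketch of that idea; first_batch_has_padding is a hypothetical helper name for illustration, not code from the repository.

import torch

def first_batch_has_padding(attention_mask: torch.Tensor) -> bool:
    # attention_mask: additive mask of shape [bsz, 1, q_len, k_len],
    # 0 where attention is allowed, a large negative value where masked.
    # For a purely causal mask (non-zero entries only strictly above the
    # diagonal), position [0, 0] is always 0; left padding makes it non-zero.
    return attention_mask[0].squeeze()[0, 0].item() != 0

q_len = 4
# Causal mask: -inf strictly above the diagonal, 0 on and below it.
causal = torch.triu(torch.full((q_len, q_len), float("-inf")), diagonal=1)
causal = causal.view(1, 1, q_len, q_len)
print(first_batch_has_padding(causal))   # False: no padding

# Simulate left padding: the first key position is masked for every query.
padded = causal.clone()
padded[0, 0, :, 0] = float("-inf")
print(first_batch_has_padding(padded))   # True

The commit itself changes no behavior: it only adds the PEP 8 space after the comma in the [0, 0] index.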
