Fix: 'Gemma2Attention' object has no attribute '_flash_attn_uses_top_left_mask' #35285

Closed
Fix: config to self
jp1924 authored Dec 16, 2024
commit 5ddb69a619e75944b4a2f43b9b0aa756ebc64d50
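The root cause: the helper was handed only the `Gemma2Config`, but several of the attributes it reads (`training`, `scaling`, `is_causal`) live on the attention module instance, not on the config, so the lookups raise `AttributeError`. Passing the module as `self` lets the helper reach instance state directly and config hyperparameters through `self.config`. Below is a minimal sketch of the failure mode using simplified stand-in classes (ToyConfig/ToyAttention are illustrative, not the real transformers definitions):

import torch.nn as nn

class ToyConfig:
    # Hyperparameters stored on the config object.
    attention_dropout = 0.1
    sliding_window = 4096

class ToyAttention(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.config = config
        # Instance attributes set on the module, not the config.
        self.scaling = 0.125
        self.is_causal = True

def dropout_from_config(config):
    # Old signature: breaks, because `training` is an nn.Module attribute.
    return config.attention_dropout if config.training else 0.0

def dropout_from_self(self):
    # Fixed signature: module state via `self`, hyperparameters via `self.config`.
    return self.config.attention_dropout if self.training else 0.0

attn = ToyAttention(ToyConfig())
print(dropout_from_self(attn))      # 0.1 -- modules default to training mode
try:
    dropout_from_config(attn.config)
except AttributeError as err:
    print(err)                      # 'ToyConfig' object has no attribute 'training'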
15 changes: 8 additions & 7 deletions src/transformers/models/gemma2/modeling_gemma2.py
@@ -199,7 +199,7 @@ def eager_attention_forward(
 
 
 def flash_attention_forward(
-    config: Gemma2Config,
+    self: 'Gemma2Attention',
     query: torch.Tensor,
     key: torch.Tensor,
     value: torch.Tensor,
@@ -218,26 +218,27 @@ def flash_attention_forward(
     key_states = key.transpose(1, 2)
     value_states = value.transpose(1, 2)
 
-    dropout_rate = config.attention_dropout if config.training else 0.0
+    dropout_rate = self.config.attention_dropout if self.training else 0.0
 
     input_dtype = query_states.dtype
     if input_dtype == torch.float32:
         query_states = query_states.to(target_dtype)
         key_states = key_states.to(target_dtype)
         value_states = value_states.to(target_dtype)
 
     attn_output = _flash_attention_forward(
         query_states,
         key_states,
         value_states,
         mask,
         seq_len,
         dropout=dropout_rate,
-        softmax_scale=config.scaling,
-        is_causal=config.is_causal,
-        sliding_window=config.sliding_window,
-        use_top_left_mask=config._flash_attn_uses_top_left_mask,
-        softcap=config.attn_logit_softcapping if is_flash_attn_greater_or_equal("2.6.0") else None,
+        softmax_scale=self.scaling,
+        is_causal=self.is_causal,
+        sliding_window=self.config.sliding_window,
+        use_top_left_mask=self.config._flash_attn_uses_top_left_mask,
+        softcap=self.config.attn_logit_softcapping if is_flash_attn_greater_or_equal("2.6.0") else None,
     )
 
     return attn_output, None
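With this change, the caller passes the attention module itself where it previously passed the config. A hypothetical call-site sketch follows; the surrounding variable names (`query_states`, `attention_mask`, `q_len`) and any keyword arguments elided from the diff are assumptions, not the exact transformers code:

# Hypothetical call site inside Gemma2Attention.forward (sketch only).
# The helper returns (attn_output, attn_weights), with attn_weights None
# under flash attention, matching the `return attn_output, None` above.
attn_output, attn_weights = flash_attention_forward(
    self,            # previously: self.config
    query_states,
    key_states,
    value_states,
    attention_mask,
    q_len,
)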