Fix 1383 Llama model on transformers=4.41 [WIP] #11280

Merged: 10 commits into intel-analytics:main on Jun 21, 2024

Conversation

songhappy (Contributor)

Description

Add llama_attention_forward_4_41 and llama_model_forward_4_41 for transformers 4.41.
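
A minimal sketch of how such version-specific forwards might be wired in, assuming the new functions replace the upstream transformers methods when transformers >= 4.41 is installed; the import path and the patching step below are illustrative, not the PR's actual code:

    # Illustrative only: dispatch the version-specific forwards added in this PR.
    # The ipex_llm import path is an assumption; adjust to the actual module layout.
    from packaging import version
    import transformers
    from transformers.models.llama import modeling_llama
    from ipex_llm.transformers.models.llama import (
        llama_attention_forward_4_41,
        llama_model_forward_4_41,
    )

    if version.parse(transformers.__version__) >= version.parse("4.41.0"):
        # Monkey-patch the upstream classes so the optimized forwards are used.
        modeling_llama.LlamaAttention.forward = llama_attention_forward_4_41
        modeling_llama.LlamaModel.forward = llama_model_forward_4_41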

@songhappy requested a review from sgwhat on June 13, 2024 05:17
@songhappy (Contributor, Author)

Tested on Max1100 and documented Llama2-7B model metrics on issue-1383; performance with transformers 4.41 is similar to 4.38.

@sgwhat (Contributor) left a comment:


The rest LGTM.

    if cache_position is not None:
        # for transformers 4.38.0
        causal_mask = attention_mask[:, :, cache_position, : kv_seq_len]
sgwhat (Contributor):

What is the reason for removing causal_mask here?

songhappy (Contributor, Author):

The diff here is being compared against the wrong place; the 4.38 path was not touched.
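
For context, a rough contrast of the two slices, based on my reading of upstream transformers (the helper names are illustrative, not functions from the PR):

    # Illustrative helpers contrasting the 4.38- and 4.41-style mask slicing.

    def slice_causal_mask_4_38(attention_mask, cache_position, kv_seq_len):
        # transformers 4.38-style: index the query dimension with cache_position.
        return attention_mask[:, :, cache_position, :kv_seq_len]

    def slice_causal_mask_4_41(attention_mask, kv_seq_len):
        # transformers 4.41-style (my reading of upstream): the mask rows already
        # line up with the current positions, so only the key length is sliced.
        return attention_mask[:, :, :, :kv_seq_len]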


    next_cache = next_decoder_cache if use_cache else None
    if return_legacy_cache:
        next_cache = next_cache.to_legacy_cache()
sgwhat (Contributor):
Need to double-check whether next_decoder_cache is a DynamicFP8Cache.
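
A minimal sketch of the kind of guard this note is asking about; the DynamicFP8Cache import path and the decision to skip conversion are assumptions, not the merged code:

    # Illustrative guard around the legacy-cache conversion; import path assumed.
    from ipex_llm.transformers.kv import DynamicFP8Cache

    next_cache = next_decoder_cache if use_cache else None
    if return_legacy_cache and next_cache is not None:
        if isinstance(next_cache, DynamicFP8Cache):
            # An FP8-quantized cache may not convert cleanly to the legacy
            # tuple-of-tensors format, so keep the Cache object as-is.
            pass
        else:
            next_cache = next_cache.to_legacy_cache()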

@songhappy merged commit 7507000 into intel-analytics:main on Jun 21, 2024
17 of 18 checks passed
RyuKosei pushed a commit to RyuKosei/ipex-llm that referenced this pull request Jul 19, 2024