Add support for jamba model with Liger Kernel #214
Open
yubofredwang wants to merge 21 commits into linkedin:main from yubofredwang:jamba-test
Commits (21)

cdd3af2 jamba liger fused linear+xentropy (winglian)
d05f82b fix apply jamba function (yubofredwang)
99d3553 lint (yubofredwang)
7602cf7 remove profiler (yubofredwang)
3357a56 Merge branch 'main' into jamba-test (yubofredwang)
204742f add jamba required deps into dev (yubofredwang)
2cef594 bump setuptool version for causal-conv1d installation (yubofredwang)
65597cf change ci yaml to install torch beforehead (yubofredwang)
e654665 Merge branch 'main' into jamba-test (yubofredwang)
674fae5 Merge branch 'main' into jamba-test (lancerts)
b7c866d Merge branch 'main' into jamba-test (lancerts)
50e1c4c Merge branch 'main' into jamba-test (lancerts)
4b247c4 Merge branch 'main' into jamba-test (yubofredwang)
cfd1eda split conv1d and mamba to separate dependencies (yubofredwang)
e46fb17 fix contributing guide (yubofredwang)
c64791f remove old deps (yubofredwang)
cf61cb2 remove jamba from test (yubofredwang)
68494df fix contribute guide (yubofredwang)
13f3cd8 Merge branch 'main' into jamba-test (lancerts)
b91dd7b Merge branch 'main' into jamba-test (lancerts)
08d5bd9 Merge branch 'main' into jamba-test (yubofredwang)
@@ -0,0 +1,169 @@
from typing import Optional, Tuple, Union

import torch
from torch.nn import CrossEntropyLoss
from transformers.modeling_outputs import MoeCausalLMOutputWithPast
from transformers.models.jamba.modeling_jamba import (
    _CONFIG_FOR_DOC,
    JAMBA_INPUTS_DOCSTRING,
    HybridMambaAttentionDynamicCache,
    load_balancing_loss_func,
)
from transformers.utils import (
    add_start_docstrings_to_model_forward,
    replace_return_docstrings,
)

from liger_kernel.transformers.fused_linear_cross_entropy import (
    LigerFusedLinearCrossEntropyLoss,
)


@add_start_docstrings_to_model_forward(JAMBA_INPUTS_DOCSTRING)
@replace_return_docstrings(
    output_type=MoeCausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC
)
def lce_forward(
    self,
    input_ids: torch.LongTensor = None,
    attention_mask: Optional[torch.Tensor] = None,
    position_ids: Optional[torch.LongTensor] = None,
    past_key_values: Optional[HybridMambaAttentionDynamicCache] = None,
    inputs_embeds: Optional[torch.FloatTensor] = None,
    labels: Optional[torch.LongTensor] = None,
    use_cache: Optional[bool] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    output_router_logits: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    cache_position: Optional[torch.LongTensor] = None,
    num_logits_to_keep: Optional[int] = None,
) -> Union[Tuple, MoeCausalLMOutputWithPast]:
    r"""
    Args:
        labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
            Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
            config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
            (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.

        num_logits_to_keep (`int` or `None`, *optional*):
            Calculate logits for the last `num_logits_to_keep` tokens. If `None`, calculate logits for all
            `input_ids`. Only last token logits are needed for generation, and calculating them only for that token
            can save memory, which becomes pretty significant for long sequences.

    Returns:

    Example:

    ```python
    >>> from transformers import AutoTokenizer, JambaForCausalLM

    >>> model = JambaForCausalLM.from_pretrained("ai21labs/Jamba-v0.1")
    >>> tokenizer = AutoTokenizer.from_pretrained("ai21labs/Jamba-v0.1")

    >>> prompt = "Hey, are you conscious? Can you talk to me?"
    >>> inputs = tokenizer(prompt, return_tensors="pt")

    >>> # Generate
    >>> generate_ids = model.generate(inputs.input_ids, max_length=30)
    >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
    "Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you."
    ```"""

    output_attentions = (
        output_attentions
        if output_attentions is not None
        else self.config.output_attentions
    )
    output_router_logits = (
        output_router_logits
        if output_router_logits is not None
        else self.config.output_router_logits
    )

    output_hidden_states = (
        output_hidden_states
        if output_hidden_states is not None
        else self.config.output_hidden_states
    )
    return_dict = (
        return_dict if return_dict is not None else self.config.use_return_dict
    )

    # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
    outputs = self.model(
        input_ids=input_ids,
        attention_mask=attention_mask,
        position_ids=position_ids,
        past_key_values=past_key_values,
        inputs_embeds=inputs_embeds,
        use_cache=use_cache,
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        output_router_logits=output_router_logits,
        cache_position=cache_position,
        return_dict=return_dict,
    )

    hidden_states = outputs[0]

    loss = None
    logits = None

    if self.training:
        shift_hidden_states = hidden_states[..., :-1, :].contiguous()
        shift_labels = labels[..., 1:].contiguous()

        # flatten tokens
        shift_hidden_states = shift_hidden_states.view(-1, self.config.hidden_size)
        shift_labels = shift_labels.view(-1)

        # fuse the lm_head projection with the cross entropy loss so the full
        # (num_tokens x vocab_size) logits tensor is never materialized
        lce = LigerFusedLinearCrossEntropyLoss()
        loss = lce(self.lm_head.weight, shift_hidden_states, shift_labels)
    else:
        if num_logits_to_keep is None:
            logits = self.lm_head(hidden_states)
        else:
            logits = self.lm_head(hidden_states[..., -num_logits_to_keep:, :])
        logits = logits.float()

        if labels is not None:
            # Shift so that tokens < n predict n
            shift_logits = logits[..., :-1, :].contiguous()
            shift_labels = labels[..., 1:].contiguous()
            # Flatten the tokens
            loss_fct = CrossEntropyLoss()
            shift_logits = shift_logits.view(-1, self.config.vocab_size)
            shift_labels = shift_labels.view(-1)
            # Enable model parallelism
            shift_labels = shift_labels.to(shift_logits.device)
            loss = loss_fct(shift_logits, shift_labels)

    aux_loss = None
    if output_router_logits:
        aux_loss = load_balancing_loss_func(
            outputs.router_logits if return_dict else outputs[-1],
            self.num_experts,
            self.num_experts_per_tok,
            attention_mask,
        )
        if labels is not None:
            loss += self.router_aux_loss_coef * aux_loss.to(
                loss.device
            )  # make sure to reside in the same device

    if not return_dict:
        output = (logits,) + outputs[1:]
        if output_router_logits:
            output = (aux_loss,) + output
        return (loss,) + output if loss is not None else output

    return MoeCausalLMOutputWithPast(
        loss=loss,
        aux_loss=aux_loss,
        logits=logits,
        past_key_values=outputs.past_key_values,
        hidden_states=outputs.hidden_states,
        attentions=outputs.attentions,
        router_logits=outputs.router_logits,
    )
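For context, Liger Kernel integrations typically wire a forward like this in by monkey patching the Hugging Face model class. A minimal sketch of what that could look like for Jamba is below; the function name `apply_liger_kernel_to_jamba` and the import path of `lce_forward` are illustrative assumptions, not necessarily the exact API this PR adds.

```python
# Hypothetical wiring sketch, following the monkey-patch pattern Liger Kernel
# uses for other models; names and import paths here are assumptions.
from transformers.models.jamba import modeling_jamba


def apply_liger_kernel_to_jamba(fused_linear_cross_entropy: bool = True) -> None:
    if fused_linear_cross_entropy:
        # lce_forward is the function from the diff above; its final module
        # path in the package is an assumption for this example.
        from liger_kernel.transformers.model.jamba import lce_forward

        # Replace the stock forward so training takes the fused
        # linear + cross entropy path and skips materializing logits.
        modeling_jamba.JambaForCausalLM.forward = lce_forward
```

With something like that in place, calling `apply_liger_kernel_to_jamba()` before `JambaForCausalLM.from_pretrained(...)` would route training-time loss through `LigerFusedLinearCrossEntropyLoss`.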
this is not good. these two dependencies seem very heavy. is there an alternative?
these won't be installed by default though. only when you run
pip install . '[test]'
are they installed. they are not heavy actually; installation takes about 15 seconds.
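For reference, one way to keep the two packages out of a default install is to declare them as optional extras, so they are only pulled in by an explicit extras install such as `pip install '.[test]'`. A rough setuptools sketch follows; the extra names and layout are illustrative assumptions, not the exact packaging changes in this PR.

```python
# Illustrative setuptools sketch (not the actual setup.py from this PR):
# mamba-ssm and causal-conv1d sit behind optional extras, so a plain
# `pip install liger-kernel` never pulls them in.
from setuptools import setup

setup(
    name="liger_kernel",  # metadata here is illustrative only
    extras_require={
        "mamba": ["mamba-ssm", "causal-conv1d"],  # Jamba-only heavy deps
        "test": ["pytest", "mamba-ssm", "causal-conv1d"],  # pulled in via '.[test]'
    },
)
```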