
Transformers 4.48 #2158

Merged (26 commits, Jan 29, 2025)

Commits
5190280
test
IlyasMoutawwakil Jan 16, 2025
6a03d76
testing tensor cache x)
IlyasMoutawwakil Jan 20, 2025
7207215
fix logger
IlyasMoutawwakil Jan 20, 2025
6261094
condition cache class usage
IlyasMoutawwakil Jan 20, 2025
822066d
update opset for beit and data2vec vision and skip flattened/fused pk…
IlyasMoutawwakil Jan 20, 2025
3ab38fd
style
IlyasMoutawwakil Jan 20, 2025
d713e5a
fix args patcher
IlyasMoutawwakil Jan 20, 2025
bf4d1f3
fix modernbert testing
IlyasMoutawwakil Jan 20, 2025
230c3a0
adapt to new whisper returned generation length
IlyasMoutawwakil Jan 20, 2025
3d5d9c9
fix is_causal in transformers
IlyasMoutawwakil Jan 20, 2025
96e2714
fix modernbert failures
IlyasMoutawwakil Jan 20, 2025
78a2dba
style
IlyasMoutawwakil Jan 20, 2025
967c6e2
traceable cache
IlyasMoutawwakil Jan 20, 2025
1d74388
use pkv index
IlyasMoutawwakil Jan 24, 2025
d452c46
add version guard and clean up other model patcher version guards
IlyasMoutawwakil Jan 24, 2025
5dcab7f
patch sdpa attention in optimum for now
IlyasMoutawwakil Jan 24, 2025
656941a
remove modernbert condition
IlyasMoutawwakil Jan 24, 2025
1bcb38f
style
IlyasMoutawwakil Jan 24, 2025
23fa20e
fix MistralModelPatcher
IlyasMoutawwakil Jan 24, 2025
24c8f4b
correctly patch gpt2 in vision encoder decoder
IlyasMoutawwakil Jan 24, 2025
3694ea4
patch sdpa attention forward everywhere
IlyasMoutawwakil Jan 26, 2025
3d7d586
fix gpt2 cross attention in seq2seq as well
IlyasMoutawwakil Jan 26, 2025
10833d8
moved traceable cache to a file for simplicity of model patcher
IlyasMoutawwakil Jan 29, 2025
9491d17
Apply suggestions from code review
IlyasMoutawwakil Jan 29, 2025
2b73129
style
IlyasMoutawwakil Jan 29, 2025
dea98a0
fix
IlyasMoutawwakil Jan 29, 2025
7 changes: 3 additions & 4 deletions optimum/exporters/onnx/model_configs.py

@@ -843,7 +843,7 @@ class DeiTOnnxConfig(ViTOnnxConfig):
 
 
 class BeitOnnxConfig(ViTOnnxConfig):
-    DEFAULT_ONNX_OPSET = 11
+    DEFAULT_ONNX_OPSET = 14  # now uses F.scaled_dot_product_attention by default for torch>=2.1.1.
 
 
 class ConvNextOnnxConfig(ViTOnnxConfig):
@@ -1573,13 +1573,12 @@ class Data2VecTextOnnxConfig(DistilBertOnnxConfig):
 
 
 class Data2VecVisionOnnxConfig(ViTOnnxConfig):
-    DEFAULT_ONNX_OPSET = 11
+    DEFAULT_ONNX_OPSET = 14  # now uses F.scaled_dot_product_attention by default for torch>=2.1.1.
 
 
 class Data2VecAudioOnnxConfig(AudioOnnxConfig):
-    NORMALIZED_CONFIG_CLASS = NormalizedConfig
     ATOL_FOR_VALIDATION = 1e-4
-    DEFAULT_ONNX_OPSET = 14  # now uses F.scaled_dot_product_attention by default for torch>=2.1.1.
+    NORMALIZED_CONFIG_CLASS = NormalizedConfig
 
 
 class PerceiverDummyInputGenerator(DummyVisionInputGenerator):
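The opset bumps in this file follow from BEiT and Data2Vec-Vision now dispatching to `F.scaled_dot_product_attention` on recent torch, whose ONNX export requires opset 14 or later. As a rough NumPy sketch of what that operator computes (no masking or dropout; independent of the actual export path):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """NumPy sketch of F.scaled_dot_product_attention:
    softmax(q @ k^T / sqrt(d)) @ v, with no mask or dropout."""
    d = q.shape[-1]
    scores = q @ np.swapaxes(k, -1, -2) / np.sqrt(d)
    # Numerically stable softmax over the key dimension.
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
q = rng.standard_normal((2, 4, 8))  # (batch, seq_len, head_dim)
k = rng.standard_normal((2, 4, 8))
v = rng.standard_normal((2, 4, 8))
out = scaled_dot_product_attention(q, k, v)
assert out.shape == (2, 4, 8)
```

This is only an illustration of the math; in the exported graph the operator is decomposed into MatMul/Softmax (and, for causal masks, ops like Trilu that first appeared in opset 14), which is why the lower default opset of 11 no longer suffices.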