[LLM][CPU] Enable dynamo GPTJ compilation #26

Open
Devjiu wants to merge 3 commits into cpu-proto from dmitriim/gptj_enable

Conversation

@Devjiu commented Feb 19, 2024

This commit enables GPTJ compilation and supports getattr in fx graph
importing.

Signed-off-by: Dmitrii Makarenko <dmitrii.makarenko@intel.com>
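
For context, a minimal sketch of driving GPT-J through TorchDynamo with a Hugging Face checkpoint. The model id and the debug backend below are illustrative assumptions, not the backend this PR wires up; the real backend imports the captured fx graph (including getattr nodes) instead of printing it.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "EleutherAI/gpt-j-6B"  # assumed checkpoint; a test may use a smaller config

def debug_backend(gm: torch.fx.GraphModule, example_inputs):
    # A dynamo backend receives the traced fx.GraphModule; here we just dump
    # the graph and fall back to eager execution.
    gm.graph.print_tabular()
    return gm.forward

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID).eval()
compiled = torch.compile(model, backend=debug_backend)

inputs = tokenizer("Hello, world", return_tensors="pt")
with torch.no_grad():
    out = compiled(**inputs)
print("[test body] out shape: ", out.logits.size())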

print("[test body] out shape: ", out.size())


import torch._dynamo as dynamo
from transformers import AutoModelForCausalLM, AutoTokenizer
@chudur-budur Mar 13, 2024

I can't seem to import transformers, where is it from? @Devjiu
Never mind, found it: pip install transformers[torch]
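
A quick way to confirm the install works (illustrative check, not part of the PR):

import torch
import transformers
print("transformers", transformers.__version__, "torch", torch.__version__)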

@chudur-budur commented Mar 13, 2024

This PR fails on ElementwiseAddScalar_TensorLiteralInt32_Module_basic and ResNext_basic with a segfault, after applying the fix in #30

Devjiu added 3 commits March 27, 2024 12:02
This commit applies a small refactor to the MLP basic e2e test.

Signed-off-by: Dmitrii Makarenko <dmitrii.makarenko@intel.com>
This commit enables GPTJ compilation and supports getattr in fx graph
importing.

Signed-off-by: Dmitrii Makarenko <dmitrii.makarenko@intel.com>
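
For context on the getattr support mentioned in the commit message, a minimal sketch (assumptions only, not code from this PR) of how get_attr nodes arise when a traced module reads a registered tensor attribute:

import torch
import torch.fx as fx

class WithBuffer(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("bias", torch.ones(4))

    def forward(self, x):
        # Reading self.bias during tracing produces a get_attr node,
        # which the fx graph importer has to handle.
        return x + self.bias

gm = fx.symbolic_trace(WithBuffer())
for node in gm.graph.nodes:
    print(node.op, node.target)  # one node should have op == "get_attr"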
@Devjiu force-pushed the dmitriim/gptj_enable branch from 0361a6c to 468d2bc on March 27, 2024 19:05