
Use the float8 and generic_float template for specialization #3606

Merged 6 commits from float8-template into develop on Nov 12, 2024

Conversation

@pfultz2 (Collaborator) commented Nov 9, 2024

This uses the generic_float template for float8, so we no longer need to add so many combinations of specializations; it should cover all of them. It will also avoid the errors when merging #3570 and #3578 together (which would otherwise cause an explosion of specializations).
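For readers unfamiliar with the approach, here is a minimal, hypothetical C++ sketch of the idea: instead of writing one explicit specialization per type combination, a single template constrained on a "generic float" trait covers every pairing. The names (`my_float8`, `is_generic_float`, `convert`) are illustrative only and are not MIGraphX's actual types or APIs.

```cpp
// Illustrative sketch only; not the MIGraphX implementation.
#include <iostream>
#include <type_traits>

// Stand-in for a narrow float type such as float8 (hypothetical).
struct my_float8
{
    float value = 0.0f;
    my_float8() = default;
    explicit my_float8(float v) : value(v) {}
    explicit operator float() const { return value; }
};

// Trait marking which types participate in the generic-float overloads.
template <class T>
struct is_generic_float : std::is_floating_point<T>
{
};

template <>
struct is_generic_float<my_float8> : std::true_type
{
};

// One constrained template replaces N*M explicit specializations:
// any pair of "generic float" types is handled by the same definition.
template <class To,
          class From,
          std::enable_if_t<is_generic_float<To>::value and is_generic_float<From>::value, int> = 0>
To convert(From x)
{
    return To(static_cast<float>(x));
}

int main()
{
    my_float8 x{2.5f};
    std::cout << convert<float>(x) << "\n";        // float8 -> float
    std::cout << convert<my_float8>(1.5f).value;   // float -> float8
}
```

Adding a new float-like type then only requires specializing the trait, not every conversion pairing, which is the kind of combinatorial saving the PR description refers to.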

@pfultz2 pfultz2 requested a review from causten as a code owner November 9, 2024 00:40

codecov bot commented Nov 9, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 92.17%. Comparing base (f5df004) to head (eaf728d).
Report is 5 commits behind head on develop.

Additional details and impacted files
```diff
@@             Coverage Diff             @@
##           develop    #3606      +/-   ##
===========================================
- Coverage    92.17%   92.17%   -0.01%
===========================================
  Files          513      513
  Lines        21536    21533       -3
===========================================
- Hits         19851    19848       -3
  Misses        1685     1685
```

☔ View full report in Codecov by Sentry.

@migraphx-bot (Collaborator) commented

| Test | Batch | Rate new (eaf728) | Rate old (f5df00) | Diff | Compare |
|---|---|---|---|---|---|
| torchvision-resnet50 | 64 | 3,257.73 | 3,258.22 | -0.02% | |
| torchvision-resnet50_fp16 | 64 | 6,987.12 | 6,994.86 | -0.11% | |
| torchvision-densenet121 | 32 | 2,435.03 | 2,438.08 | -0.12% | |
| torchvision-densenet121_fp16 | 32 | 4,101.95 | 4,082.99 | 0.46% | |
| torchvision-inceptionv3 | 32 | 1,637.85 | 1,637.00 | 0.05% | |
| torchvision-inceptionv3_fp16 | 32 | 2,762.10 | 2,762.55 | -0.02% | |
| cadene-inceptionv4 | 16 | 776.33 | 776.12 | 0.03% | |
| cadene-resnext64x4 | 16 | 808.08 | 811.97 | -0.48% | |
| slim-mobilenet | 64 | 7,533.07 | 7,534.11 | -0.01% | |
| slim-nasnetalarge | 64 | 211.49 | 211.46 | 0.02% | |
| slim-resnet50v2 | 64 | 3,504.30 | 3,503.66 | 0.02% | |
| bert-mrpc-onnx | 8 | 1,149.63 | 1,150.99 | -0.12% | |
| bert-mrpc-tf | 1 | 468.39 | 494.60 | -5.30% | 🔴 |
| pytorch-examples-wlang-gru | 1 | 418.13 | 428.98 | -2.53% | |
| pytorch-examples-wlang-lstm | 1 | 389.52 | 385.66 | 1.00% | |
| torchvision-resnet50_1 | 1 | 800.42 | 766.70 | 4.40% | 🔆 |
| cadene-dpn92_1 | 1 | 400.66 | 433.66 | -7.61% | 🔴 |
| cadene-resnext101_1 | 1 | 383.65 | 383.99 | -0.09% | |
| onnx-taau-downsample | 1 | 341.78 | 343.15 | -0.40% | |
| dlrm-criteoterabyte | 1 | 33.34 | 33.33 | 0.03% | |
| dlrm-criteoterabyte_fp16 | 1 | 52.70 | 52.72 | -0.04% | |
| agentmodel | 1 | 7,758.07 | 8,297.10 | -6.50% | 🔴 |
| unet_fp16 | 2 | 58.87 | 58.77 | 0.17% | |
| resnet50v1_fp16 | 1 | 964.74 | 948.38 | 1.72% | |
| resnet50v1_int8 | 1 | 1,022.74 | 1,046.77 | -2.30% | |
| bert_base_cased_fp16 | 64 | 1,171.28 | 1,171.13 | 0.01% | |
| bert_large_uncased_fp16 | 32 | 363.50 | 363.68 | -0.05% | |
| bert_large_fp16 | 1 | 200.57 | 197.66 | 1.47% | |
| distilgpt2_fp16 | 16 | 2,203.34 | 2,203.57 | -0.01% | |
| yolov5s | 1 | 550.17 | 549.13 | 0.19% | |
| tinyllama | 1 | 43.43 | 43.43 | 0.01% | |
| vicuna-fastchat | 1 | 174.11 | 169.54 | 2.69% | |
| whisper-tiny-encoder | 1 | 418.81 | 418.17 | 0.15% | |
| whisper-tiny-decoder | 1 | 428.95 | 428.69 | 0.06% | |

This build is not recommended to merge 🔴

@migraphx-bot (Collaborator) commented


- ✅ bert-mrpc-onnx: PASSED: MIGraphX meets tolerance
- ✅ bert-mrpc-tf: PASSED: MIGraphX meets tolerance
- ✅ pytorch-examples-wlang-gru: PASSED: MIGraphX meets tolerance
- ✅ pytorch-examples-wlang-lstm: PASSED: MIGraphX meets tolerance
- ✅ torchvision-resnet50_1: PASSED: MIGraphX meets tolerance
- ✅ cadene-dpn92_1: PASSED: MIGraphX meets tolerance
- ✅ cadene-resnext101_1: PASSED: MIGraphX meets tolerance
- ✅ dlrm-criteoterabyte: PASSED: MIGraphX meets tolerance
- ✅ agentmodel: PASSED: MIGraphX meets tolerance
- ✅ unet: PASSED: MIGraphX meets tolerance
- ✅ resnet50v1: PASSED: MIGraphX meets tolerance
- ✅ bert_base_cased_fp16: PASSED: MIGraphX meets tolerance
- 🔴 bert_large_uncased_fp16: FAILED: MIGraphX is not within tolerance - check verbose output
- ✅ bert_large: PASSED: MIGraphX meets tolerance
- ✅ yolov5s: PASSED: MIGraphX meets tolerance
- ✅ tinyllama: PASSED: MIGraphX meets tolerance
- ✅ vicuna-fastchat: PASSED: MIGraphX meets tolerance
- ✅ whisper-tiny-encoder: PASSED: MIGraphX meets tolerance
- ✅ whisper-tiny-decoder: PASSED: MIGraphX meets tolerance
- ✅ distilgpt2_fp16: PASSED: MIGraphX meets tolerance

@CharlieL7 (Collaborator) left a comment


LGTM, I'll make changes to #3570 to match.

@richagadgil (Contributor) left a comment


Will adjust BF16 PR (#3578) accordingly.

@causten merged commit 7654071 into develop on Nov 12, 2024 (31 of 35 checks passed).
@causten deleted the float8-template branch on November 12, 2024, 21:06.