Use the float8 and generic_float template for specialization #3606
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files

@@            Coverage Diff             @@
##           develop    #3606      +/-   ##
===========================================
- Coverage    92.17%   92.17%    -0.01%
===========================================
  Files          513      513
  Lines        21536    21533        -3
===========================================
- Hits         19851    19848        -3
  Misses        1685     1685

☔ View full report in Codecov by Sentry.
This build is not recommended to merge 🔴

🔴 bert_large_uncased_fp16: FAILED: MIGraphX is not within tolerance - check verbose output
LGTM, I'll make changes to #3570 to match.
Will adjust BF16 PR (#3578) accordingly.
This uses the template for float8, so we no longer need to add so many combinations of specializations; this should cover all of them. It also avoids the errors that would arise when merging #3570 and #3578 together (which would otherwise cause an explosion of specializations).
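As a rough illustration of the idea (not the actual MIGraphX code), a single partial specialization over a bit-parameterized float template can stand in for one explicit specialization per float8 variant. The `generic_float` parameters, the `is_small_float` trait, and the aliases below are assumptions made for the sketch:

```cpp
#include <cstdint>
#include <iostream>
#include <type_traits>

// Stand-in for a generic_float-style template parameterized by mantissa and
// exponent widths (names and parameters are illustrative assumptions).
template <unsigned MantissaBits, unsigned ExponentBits>
struct generic_float
{
    std::uint8_t data = 0;
};

// Example aliases mirroring common float8 formats.
using fp8e4m3 = generic_float<3, 4>;
using fp8e5m2 = generic_float<2, 5>;

// A trait that would otherwise need one explicit specialization per float8 type...
template <class T>
struct is_small_float : std::false_type {};

// ...is now covered by a single partial specialization over the template,
// so every mantissa/exponent combination matches automatically.
template <unsigned M, unsigned E>
struct is_small_float<generic_float<M, E>> : std::true_type {};

int main()
{
    std::cout << is_small_float<fp8e4m3>::value << '\n'; // 1
    std::cout << is_small_float<fp8e5m2>::value << '\n'; // 1
    std::cout << is_small_float<float>::value << '\n';   // 0
}
```

Under this pattern, adding another float8 (or bf16-style) format is just another alias of the template, with no new specializations to write.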