Enable use_batch_forward Optimization on Battlemage GPU #12516
Conversation
@@ -405,6 +405,7 @@ def use_batch_forward(x: torch.Tensor, qtype: int, output_len: int):
         or (device in ["arc", "flex"] and qtype in [SYM_INT8, FP4])
         or (device in ["arc", "flex", "mtl"] and qtype in [FP8E4])
         or (device in ["lnl"] and qtype in [SYM_INT4] and x.shape[1] % 512 == 0)
+        or (device in ["bmg"] and qtype in [SYM_INT4])
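For context, the `device` string checked here is derived from the XPU device name elsewhere in the file. A minimal sketch of that kind of mapping, assuming a helper along the lines below (illustrative names and structure, not the exact ipex-llm source):

```python
import torch

# Illustrative sketch of device-name matching; the real ipex-llm helper
# differs in names and detail.
def get_xpu_device_type() -> str:
    name = torch.xpu.get_device_name(0)
    if "Arc(TM) A" in name:   # A-Series reports a marketing name
        return "arc"
    if "[0xe20b]" in name:    # B-Series (Battlemage) reports a PCI device ID
        return "bmg"
    return "others"
```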
To support more inference and serving requirements on BMG, shall we also verify whether FP8/INT8 can benefit from `use_batch_forward`? @jason-dai
I collected FP8 data on BMG using the all-in-one benchmark. Models larger than 7B cause the machine to crash, while models smaller than 7B show a 1-2% improvement in next-token performance after enabling the batch kernel.
Should we enable the batch kernel for `qtype=FP8E5` as well?
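A hypothetical harness for the kind of comparison described above: time an average forward pass with the batch kernel enabled vs. disabled. The model handle and the toggle mechanism are assumptions for illustration, not project tooling:

```python
import time
import torch

@torch.no_grad()
def mean_forward_latency(model, input_ids, n_iters=20):
    # Warm up so one-time kernel compilation does not skew the timing.
    for _ in range(3):
        model(input_ids)
    torch.xpu.synchronize()
    start = time.perf_counter()
    for _ in range(n_iters):
        model(input_ids)
    torch.xpu.synchronize()  # wait for all queued XPU kernels to finish
    return (time.perf_counter() - start) / n_iters
```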
LGTM
Description:

Enable the `use_batch_forward` optimization on Intel® Arc™ B-Series Graphics Cards (Battlemage).

Currently, `torch.xpu.get_device_name(0)` returns `Intel(R) Graphics [0xe20b]` on Intel® Arc™ B-Series graphics cards (code-named Battlemage), unlike the Intel® Arc™ A-Series, which returns more specific names such as `Intel(R) Arc(TM) A770 Graphics`.
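As a quick check on a B-Series card (assuming a PyTorch build with Intel XPU support), this prints the generic name quoted above:

```python
import torch

# Per the description, this currently prints "Intel(R) Graphics [0xe20b]"
# on Battlemage, rather than a marketing name as on A-Series cards.
print(torch.xpu.get_device_name(0))
```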
This PR updates the device name matching logic to recognize `Intel(R) Graphics [0xe20b]` as an indicator for Battlemage GPUs, enabling the `use_batch_forward` optimization on these devices.

Changes:
- Updated the device name matching logic to recognize `Intel(R) Graphics [0xe20b]`.

Testing: