Issues: microsoft/DirectML
#687 About "pad" tuple of the arguments of the torch.nn.functional.pad function with torch_directml (opened Feb 17, 2025 by suryodasuke)
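The `pad` argument of `torch.nn.functional.pad` is a common source of confusion: it takes (before, after) pairs starting from the *last* dimension. A minimal sketch of that convention, illustrated here with NumPy's `np.pad` (which lists pads per axis, first axis first) since the ordering rule itself is backend-independent:

```python
import numpy as np

# torch.nn.functional.pad on a 2-D input takes pad=(left, right, top, bottom):
# the LAST dimension's (before, after) pair comes first. np.pad lists pads
# per axis in order, so F.pad(x, (2, 2, 1, 1)) corresponds to
# np.pad(x, ((1, 1), (2, 2))).
x = np.zeros((2, 3))
y = np.pad(x, ((1, 1), (2, 2)))  # 1 row above/below, 2 columns left/right
print(y.shape)  # (4, 7)
```

The reversed ordering lets a short tuple pad only the trailing dimensions of a higher-rank tensor.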
#686 Multiplication of a float tensor and a bool tensor produces a bool tensor on PyTorch (opened Feb 2, 2025 by kazssym)
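Under standard type-promotion rules, multiplying a floating tensor by a boolean one yields a floating result (the bool acts as a 0/1 mask); the report above says the DML backend returns bool instead. A sketch of the expected promotion, shown with NumPy since its rule matches PyTorch's CPU behavior here:

```python
import numpy as np

# A bool operand promotes to the other operand's dtype:
# float32 * bool -> float32 (the mask contributes 0.0 or 1.0).
x = np.array([1.5, -2.0, 3.25], dtype=np.float32)
mask = np.array([True, False, True])
y = x * mask
print(y.dtype)  # float32
```

A bool result would silently discard the float values, which is why the promotion direction matters for masking idioms like `x * mask`.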
#685 Undefined behavior of DML_OPERATOR_GATHER operator on Qualcomm Snapdragon X GPU and NPU (opened Jan 25, 2025 by SID37)
#683 Blue screen when running AI Dev Gallery model with shared memory (opened Jan 17, 2025 by hansmbakker)
#682 C#: need to run a program on the NPU (OnnxRuntime + DirectML + NPU) (opened Jan 14, 2025 by Gusha-nye)
#677 aten::_thnn_fused_lstm_cell not supported on the DML backend and not implemented on CPU (opened Dec 25, 2024 by cinco-de-mayonnaise)
#676 How can DML_OPERATOR_MULTIHEAD_ATTENTION map to a single metacommand? (opened Dec 17, 2024 by JianxiaoLuIntel)
#673 DirectML Allocates Excessive Memory Exceeding the Capacity of Radeon RX 6700 XT GPU (opened Dec 2, 2024 by Hedredo)
#672 Unknown error -2005270521 caught when testing the torch-directml resnet50 demo (opened Nov 30, 2024 by Basicname)
#671 'aten::linalg_inv_ex.inverse' and 'aten::_linalg_svd.U' are not currently supported on the DML backend (opened Nov 27, 2024 by shenth222)
#666 Another DxDispatch issue with QCOM Hexagon NPU on Windows (CompileOperator) (opened Nov 14, 2024 by fobrs)
#657 Crash when using DirectML to accelerate ONNX Runtime inference (opened Oct 20, 2024 by yunhaolsh)