🚀 The feature, motivation and pitch
So far, #526 has implemented the first MKL-related operator, aten.fft_c2c. That PR introduced the MKL build system in torch-xpu-ops; the MKL SDK used for building is based on the oneAPI package.
The potential issue is that we would recommend pip install mkl-dpcpp for runtime. There could be API-breaking issues when the MKL version in the oneAPI package used for building diverges from the MKL version on PyPI.
We need to unify the recommended MKL package for building and runtime.
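As a rough illustration of the kind of guard this would enable, here is a minimal Python sketch that compares the MKL version recorded at build time with the mkl-dpcpp package installed at runtime. The constant _MKL_BUILT_AGAINST and the function check_mkl_runtime are hypothetical names for this sketch, not existing torch-xpu-ops symbols; only the mkl-dpcpp package name comes from the issue above.

```python
# Hypothetical sketch: compare the MKL version the extension was built against
# (assumed to be stamped by the build system) with the pip-installed mkl-dpcpp.
from importlib import metadata

# Assumed build-time constant; not an existing torch-xpu-ops symbol.
_MKL_BUILT_AGAINST = "2024.1"

def check_mkl_runtime() -> None:
    try:
        runtime_version = metadata.version("mkl-dpcpp")  # runtime MKL from PyPI
    except metadata.PackageNotFoundError:
        raise RuntimeError(
            "mkl-dpcpp is not installed; install it with `pip install mkl-dpcpp`."
        )
    # Compare major.minor only; a gap here is where API breaks tend to appear.
    if runtime_version.split(".")[:2] != _MKL_BUILT_AGAINST.split(".")[:2]:
        raise RuntimeError(
            f"torch-xpu-ops was built against oneMKL {_MKL_BUILT_AGAINST}, "
            f"but mkl-dpcpp {runtime_version} is installed; versions should match."
        )
```

Unifying the recommended package would make such a check trivial, since build and runtime would draw the MKL libraries from the same distribution channel.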
Alternatives
No response
Additional context
No response
Is there a plan to add other oneMKL APIs, such as those from Sparse BLAS, in torch-xpu-ops? I'm seeing things like #1330, which appear to start adding sparse CSR support but aren't (or maybe can't yet be?) using oneMKL.
@gajanan-choudhary Thanks for your question!
Some sparse ops, such as _sparse_sparse_matmul and _sparse_addmm, will be implemented with oneMKL.
Since the ops in #1330 are implemented without MKL in the PyTorch CPU backend, that choice has been retained here. For now we prioritize functionality, so fewer dependencies are better. These ops may be refined to use oneMKL in the future.
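For context, a small Python sketch of the kind of calls that exercise these ops; device="xpu" assumes an XPU-enabled PyTorch build, and routing through _sparse_sparse_matmul / _sparse_addmm is the expected dispatch path rather than something verified here.

```python
import torch

# Two small sparse COO matrices; device="xpu" assumes an XPU-enabled build.
a = torch.eye(4).to_sparse().to("xpu")
b = torch.rand(4, 4).to_sparse().to("xpu")

# sparse @ sparse: the case expected to route through _sparse_sparse_matmul.
c = torch.sparse.mm(a, b)

# sparse x dense addmm: the case expected to route through _sparse_addmm.
d = torch.sparse.addmm(
    torch.zeros(4, 4, device="xpu"), a, torch.rand(4, 4, device="xpu")
)
print(c.to_dense(), d)
```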