Hi,

I am trying to run the models from the paper "Hardware Architecture and Software Stack for PIM Based on Commercial DRAM Technology: Industrial Product" using the oneMCC repo.

Following the instructions in the PimAiCompiler README, I was able to execute the HWR model with the `simpleMain` binary. However, when running the model, the LSTM module appears to be executed on the AMD GPU rather than on the PIM emulator: the LSTM layer is mapped to PyTorch's native `at::lstm` op. Is this the expected behavior, or did I miss something? A Linear layer is successfully executed on the PIM emulator (via `customAtenAddmm`/`customAtenMatmul`).

Similarly, PimAiCompiler does not seem to map element-wise addition/multiplication to custom PIM kernels.
Below is my runtime/hardware environment:
- GPU: Radeon RX Vega 64
- System: built with the modified Dockerfile in PimAiCompiler (based on `rocm/tensorflow:rocm4.0-tf2.3-dev`)
- ROCm version: 4.0.0
- Python: 3.8
- PyTorch: custom build from source based on v1.10.1