I finally found out that output1 and output2 differ significantly just because of `import torch`. Is this a bug? It causes different results when running inference with ONNX (without using torch to extract features).
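For reference, this is the kind of script that exhibits the problem (the model path and input shape below are hypothetical placeholders, not taken from the original report):

```python
# Minimal repro sketch: the same ONNX model produces different outputs
# before and after `import torch`, purely because of the import.
import numpy as np
import onnxruntime as ort

x = np.random.rand(1, 3, 112, 112).astype(np.float32)  # hypothetical input shape

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
output1 = sess.run(None, {sess.get_inputs()[0].name: x})[0]

import torch  # merely importing torch loads its own OpenMP runtime into the process

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
output2 = sess.run(None, {sess.get_inputs()[0].name: x})[0]

print(np.abs(output1 - output2).max())  # unexpectedly large difference
```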
DrewdropLife changed the title from "Different values while use 'import torch'" to "Different feature values while using 'import torch' !" on Mar 25, 2024
@DrewdropLife It sounds like you're encountering a compatibility issue between the Intel Math Kernel Library (MKL) and OpenMP when using Python scientific computing libraries such as PyTorch and TensorFlow. These libraries often depend on MKL for optimized performance, and MKL in turn relies on OpenMP for parallel acceleration.
The issue arises because only one OpenMP runtime should be active within a process. When different libraries are linked against different versions or copies of libomp (the OpenMP runtime library), conflicts can occur, leading to errors or silently different numerical results during inference.
To resolve this, it's recommended to ensure that all libraries are linked to the same version and location of libomp. This typically involves adjusting environment variables or linking paths to ensure consistency across the libraries. Failing to do so can result in slower performance or even unreliable results due to conflicting configurations of MKL and OpenMP.
If you're using Conda environments, you can manage these dependencies more easily by ensuring that all relevant packages are installed in the same environment, or by explicitly configuring environment variables to point to the correct locations for libomp.
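For example, on Linux you can check whether two different OpenMP runtimes ended up in the same process. This diagnostic sketch is not from the original thread, and it reads `/proc/self/maps`, so it won't work on macOS or Windows:

```python
# Import the libraries under suspicion first, then list the OpenMP
# runtimes mapped into this process.
import onnxruntime  # noqa: F401
import torch        # noqa: F401

with open("/proc/self/maps") as f:
    omp_libs = {line.split()[-1] for line in f
                if "omp" in line.split()[-1].lower()}

for path in sorted(omp_libs):
    print(path)  # more than one distinct libomp/libgomp here signals a conflict
```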
If you need further assistance with specific configurations or troubleshooting steps, feel free to ask!
For example, a common workaround is to force a consistent OpenMP configuration via environment variables before any of these libraries are imported. This is a sketch rather than a guaranteed fix; `KMP_DUPLICATE_LIB_OK` in particular papers over the conflict instead of resolving it, so aligning the installs (e.g. one Conda environment) is still the better solution:
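```python
# Set these BEFORE importing numpy/torch/onnxruntime, or they have no effect.
import os
os.environ["OMP_NUM_THREADS"] = "1"          # avoid divergent threading setups
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"  # Intel OpenMP escape hatch for duplicate libomp

import torch
import onnxruntime
```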