Running Ubuntu 22.04 LTS, I get the following error:
```
Traceback (most recent call last):
  File "/home/user/GFPGAN-onnxruntime-demo/torch2onnx.py", line 67, in <module>
    ort_session = onnxruntime.InferenceSession(onnx_model_path)
  File "/home/user/miniconda3/envs/richard-roop/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 396, in __init__
    raise e
  File "/home/user/miniconda3/envs/richard-roop/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 383, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/home/user/miniconda3/envs/richard-roop/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 415, in _create_inference_session
    raise ValueError(
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
```
To fix the issue, I modify line 67 in torch2onnx.py:
From:

```python
ort_session = onnxruntime.InferenceSession(onnx_model_path)
```

To:

```python
ort_session = onnxruntime.InferenceSession(
    onnx_model_path,
    providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'],
)
```
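A more portable variant (a minimal sketch, not code from this repo) asks ONNX Runtime which providers are actually compiled into the installed build, so the same script also runs on machines whose onnxruntime package lacks TensorRT or CUDA support. The model path here is a placeholder; substitute whatever torch2onnx.py produces.

```python
import onnxruntime

# Hypothetical path for illustration; use the ONNX file exported by torch2onnx.py.
onnx_model_path = "GFPGANv1.4.onnx"

# get_available_providers() returns the execution providers built into this
# ORT install, already ordered by priority (e.g. TensorRT, CUDA, then CPU).
available = onnxruntime.get_available_providers()

# Passing that list satisfies the explicit-providers requirement of ORT >= 1.9
# without hard-coding providers that may be missing on another machine.
ort_session = onnxruntime.InferenceSession(onnx_model_path, providers=available)
print("Session created with providers:", ort_session.get_providers())
```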