
"This version of TensorRT does not support dynamic axes." #2

Open
squirrel-xs opened this issue Jun 19, 2023 · 1 comment
Open

"This version of TensorRT does not support dynamic axes." #2

squirrel-xs opened this issue Jun 19, 2023 · 1 comment

Comments

@squirrel-xs
Copy link

Description

Hi Developers,

When I convert the image embedding ONNX model to a TensorRT engine, I hit this error:

```
trtexec --onnx=embedding_onnx/sam_default_embedding.onnx --workspace=4096 --saveEngine=weights/sam_default_embedding.engine
ModelImporter.cpp:777: ERROR: builtin_op_importers.cpp:4493 In function importSlice: [8] Assertion failed: (axes.allValuesKnown()) && "This version of TensorRT does not support dynamic axes."
```

Environment

TensorRT version: 8.6.1
CUDA: 11.3
cuDNN: 8.9.1.23
Operating System: Linux-x86_64
Python Version (if applicable): 3.8
PyTorch Version (if applicable): 1.12.1

The full log is shown below:
```
(pytorch) root@crowd-max:/data/zhengwenqing/segment_anything_tensorrt# trtexec --onnx=embedding_onnx/sam_default_embedding.onnx --workspace=4096 --saveEngine=weights/sam_default_embedding.engine
&&&& RUNNING TensorRT.trtexec [TensorRT v8601] # trtexec --onnx=embedding_onnx/sam_default_embedding.onnx --workspace=4096 --saveEngine=weights/sam_default_embedding.engine
[06/19/2023-03:13:15] [W] --workspace flag has been deprecated by --memPoolSize flag.
[06/19/2023-03:13:15] [I] === Model Options ===
[06/19/2023-03:13:15] [I] Format: ONNX
[06/19/2023-03:13:15] [I] Model: embedding_onnx/sam_default_embedding.onnx
[06/19/2023-03:13:15] [I] Output:
[06/19/2023-03:13:15] [I] === Build Options ===
[06/19/2023-03:13:15] [I] Max batch: explicit batch
[06/19/2023-03:13:15] [I] Memory Pools: workspace: 4096 MiB, dlaSRAM: default, dlaLocalDRAM: default, dlaGlobalDRAM: default
[06/19/2023-03:13:15] [I] minTiming: 1
[06/19/2023-03:13:15] [I] avgTiming: 8
[06/19/2023-03:13:15] [I] Precision: FP32
[06/19/2023-03:13:15] [I] LayerPrecisions:
[06/19/2023-03:13:15] [I] Layer Device Types:
[06/19/2023-03:13:15] [I] Calibration:
[06/19/2023-03:13:15] [I] Refit: Disabled
[06/19/2023-03:13:15] [I] Version Compatible: Disabled
[06/19/2023-03:13:15] [I] TensorRT runtime: full
[06/19/2023-03:13:15] [I] Lean DLL Path:
[06/19/2023-03:13:15] [I] Tempfile Controls: { in_memory: allow, temporary: allow }
[06/19/2023-03:13:15] [I] Exclude Lean Runtime: Disabled
[06/19/2023-03:13:15] [I] Sparsity: Disabled
[06/19/2023-03:13:15] [I] Safe mode: Disabled
[06/19/2023-03:13:15] [I] Build DLA standalone loadable: Disabled
[06/19/2023-03:13:15] [I] Allow GPU fallback for DLA: Disabled
[06/19/2023-03:13:15] [I] DirectIO mode: Disabled
[06/19/2023-03:13:15] [I] Restricted mode: Disabled
[06/19/2023-03:13:15] [I] Skip inference: Disabled
[06/19/2023-03:13:15] [I] Save engine: weights/sam_default_embedding.engine
[06/19/2023-03:13:15] [I] Load engine:
[06/19/2023-03:13:15] [I] Profiling verbosity: 0
[06/19/2023-03:13:15] [I] Tactic sources: Using default tactic sources
[06/19/2023-03:13:15] [I] timingCacheMode: local
[06/19/2023-03:13:15] [I] timingCacheFile:
[06/19/2023-03:13:15] [I] Heuristic: Disabled
[06/19/2023-03:13:15] [I] Preview Features: Use default preview flags.
[06/19/2023-03:13:15] [I] MaxAuxStreams: -1
[06/19/2023-03:13:15] [I] BuilderOptimizationLevel: -1
[06/19/2023-03:13:15] [I] Input(s)s format: fp32:CHW
[06/19/2023-03:13:15] [I] Output(s)s format: fp32:CHW
[06/19/2023-03:13:15] [I] Input build shapes: model
[06/19/2023-03:13:15] [I] Input calibration shapes: model
[06/19/2023-03:13:15] [I] === System Options ===
[06/19/2023-03:13:15] [I] Device: 0
[06/19/2023-03:13:15] [I] DLACore:
[06/19/2023-03:13:15] [I] Plugins:
[06/19/2023-03:13:15] [I] setPluginsToSerialize:
[06/19/2023-03:13:15] [I] dynamicPlugins:
[06/19/2023-03:13:15] [I] ignoreParsedPluginLibs: 0
[06/19/2023-03:13:15] [I]
[06/19/2023-03:13:15] [I] === Inference Options ===
[06/19/2023-03:13:15] [I] Batch: Explicit
[06/19/2023-03:13:15] [I] Input inference shapes: model
[06/19/2023-03:13:15] [I] Iterations: 10
[06/19/2023-03:13:15] [I] Duration: 3s (+ 200ms warm up)
[06/19/2023-03:13:15] [I] Sleep time: 0ms
[06/19/2023-03:13:15] [I] Idle time: 0ms
[06/19/2023-03:13:15] [I] Inference Streams: 1
[06/19/2023-03:13:15] [I] ExposeDMA: Disabled
[06/19/2023-03:13:15] [I] Data transfers: Enabled
[06/19/2023-03:13:15] [I] Spin-wait: Disabled
[06/19/2023-03:13:15] [I] Multithreading: Disabled
[06/19/2023-03:13:15] [I] CUDA Graph: Disabled
[06/19/2023-03:13:15] [I] Separate profiling: Disabled
[06/19/2023-03:13:15] [I] Time Deserialize: Disabled
[06/19/2023-03:13:15] [I] Time Refit: Disabled
[06/19/2023-03:13:15] [I] NVTX verbosity: 0
[06/19/2023-03:13:15] [I] Persistent Cache Ratio: 0
[06/19/2023-03:13:15] [I] Inputs:
[06/19/2023-03:13:15] [I] === Reporting Options ===
[06/19/2023-03:13:15] [I] Verbose: Disabled
[06/19/2023-03:13:15] [I] Averages: 10 inferences
[06/19/2023-03:13:15] [I] Percentiles: 90,95,99
[06/19/2023-03:13:15] [I] Dump refittable layers:Disabled
[06/19/2023-03:13:15] [I] Dump output: Disabled
[06/19/2023-03:13:15] [I] Profile: Disabled
[06/19/2023-03:13:15] [I] Export timing to JSON file:
[06/19/2023-03:13:15] [I] Export output to JSON file:
[06/19/2023-03:13:15] [I] Export profile to JSON file:
[06/19/2023-03:13:15] [I]
[06/19/2023-03:13:15] [I] === Device Information ===
[06/19/2023-03:13:15] [I] Selected Device: Tesla T4
[06/19/2023-03:13:15] [I] Compute Capability: 7.5
[06/19/2023-03:13:15] [I] SMs: 40
[06/19/2023-03:13:15] [I] Device Global Memory: 15109 MiB
[06/19/2023-03:13:15] [I] Shared Memory per SM: 64 KiB
[06/19/2023-03:13:15] [I] Memory Bus Width: 256 bits (ECC enabled)
[06/19/2023-03:13:15] [I] Application Compute Clock Rate: 1.59 GHz
[06/19/2023-03:13:15] [I] Application Memory Clock Rate: 5.001 GHz
[06/19/2023-03:13:15] [I]
[06/19/2023-03:13:15] [I] Note: The application clock rates do not reflect the actual clock rates that the GPU is currently running at.
[06/19/2023-03:13:15] [I]
[06/19/2023-03:13:15] [I] TensorRT version: 8.6.1
[06/19/2023-03:13:15] [I] Loading standard plugins
[06/19/2023-03:13:17] [I] [TRT] [MemUsageChange] Init CUDA: CPU +207, GPU +0, now: CPU 211, GPU 2199 (MiB)
[06/19/2023-03:13:29] [I] [TRT] [MemUsageChange] Init builder kernel library: CPU +737, GPU +172, now: CPU 1025, GPU 2371 (MiB)
[06/19/2023-03:13:29] [W] [TRT] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage and speed up TensorRT initialization. See "Lazy Loading" section of CUDA documentation https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#lazy-loading
[06/19/2023-03:13:29] [I] Start parsing network model.
[06/19/2023-03:13:30] [I] [TRT] ----------------------------------------------------------------
[06/19/2023-03:13:30] [I] [TRT] Input filename: embedding_onnx/sam_default_embedding.onnx
[06/19/2023-03:13:30] [I] [TRT] ONNX IR version: 0.0.7
[06/19/2023-03:13:30] [I] [TRT] Opset version: 14
[06/19/2023-03:13:30] [I] [TRT] Producer name: pytorch
[06/19/2023-03:13:30] [I] [TRT] Producer version: 1.12.1
[06/19/2023-03:13:30] [I] [TRT] Domain:
[06/19/2023-03:13:30] [I] [TRT] Model version: 0
[06/19/2023-03:13:30] [I] [TRT] Doc string:
[06/19/2023-03:13:30] [I] [TRT] ----------------------------------------------------------------
[06/19/2023-03:13:30] [W] [TRT] onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[06/19/2023-03:13:45] [W] [TRT] onnx2trt_utils.cpp:400: One or more weights outside the range of INT32 was clamped
[06/19/2023-03:13:45] [E] [TRT] ModelImporter.cpp:771: While parsing node number 1494 [Slice -> "onnx::Slice_1269"]:
[06/19/2023-03:13:45] [E] [TRT] ModelImporter.cpp:772: --- Begin node ---
[06/19/2023-03:13:45] [E] [TRT] ModelImporter.cpp:773: input: "onnx::Slice_1259"
input: "onnx::Slice_13652"
input: "onnx::Slice_1265"
input: "onnx::Slice_13653"
input: "onnx::Slice_1268"
output: "onnx::Slice_1269"
name: "Slice_1494"
op_type: "Slice"

[06/19/2023-03:13:45] [E] [TRT] ModelImporter.cpp:774: --- End node ---
[06/19/2023-03:13:45] [E] [TRT] ModelImporter.cpp:777: ERROR: builtin_op_importers.cpp:4493 In function importSlice:
[8] Assertion failed: (axes.allValuesKnown()) && "This version of TensorRT does not support dynamic axes."
[06/19/2023-03:13:45] [E] Failed to parse onnx file
[06/19/2023-03:13:46] [I] Finished parsing network model. Parse time: 16.2497
[06/19/2023-03:13:46] [E] Parsing model failed
[06/19/2023-03:13:46] [E] Failed to create engine from model or file.
[06/19/2023-03:13:46] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8601] # trtexec --onnx=embedding_onnx/sam_default_embedding.onnx --workspace=4096 --saveEngine=weights/sam_default_embedding.engine`

@BooHwang (Owner) commented:
It seems like you haven't specified an input dimension.
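If that is the cause, re-exporting the encoder with a fully static input shape (i.e. without passing `dynamic_axes`) should sidestep the assertion. A minimal sketch, assuming the standard segment-anything API and the usual 1x3x1024x1024 encoder input; the checkpoint path and tensor names are placeholders:

```python
import torch
from segment_anything import sam_model_registry

# Placeholder checkpoint path; use your own SAM "default" (ViT-H) weights.
sam = sam_model_registry["default"](checkpoint="weights/sam_vit_h_4b8939.pth")
sam.eval()

# A fixed 1x3x1024x1024 dummy input. Omitting `dynamic_axes` bakes every
# dimension into the ONNX graph as a static value.
dummy = torch.randn(1, 3, 1024, 1024)
torch.onnx.export(
    sam.image_encoder,
    dummy,
    "embedding_onnx/sam_default_embedding.onnx",
    opset_version=14,                   # matches the opset in the log above
    input_names=["image"],
    output_names=["image_embeddings"],
)
```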
