[defaultAllocator.cpp::deallocate::42] Error Code 1: Cuda Runtime (invalid argument) Segmentation fault (core dumped) #506

Open
Ainecop opened this issue Sep 25, 2023 · 0 comments

Device: Jetson Xavier NX Developer Kit
JetPack: 5.1.1

The inference code runs fine, but at the end of prediction it prints the following error message:

[09/25/2023-10:36:28] [TRT] [E] 1: [defaultAllocator.cpp::deallocate::42] Error Code 1: Cuda Runtime (invalid argument)

Is this something we should be worried about?

```
Loading nms.trt for TensorRT inference...
[09/25/2023-10:15:14] [TRT] [I] Loaded engine size: 147 MiB
[09/25/2023-10:15:15] [TRT] [W] Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
[09/25/2023-10:15:16] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +261, GPU +247, now: CPU 750, GPU 4081 (MiB)
[09/25/2023-10:15:17] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +82, GPU +86, now: CPU 832, GPU 4167 (MiB)
[09/25/2023-10:15:17] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +150, now: CPU 0, GPU 150 (MiB)
[09/25/2023-10:15:20] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +7, now: CPU 1307, GPU 4716 (MiB)
[09/25/2023-10:15:20] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +1, GPU +10, now: CPU 1308, GPU 4726 (MiB)
[09/25/2023-10:15:20] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +170, now: CPU 0, GPU 320 (MiB)
123
[09/25/2023-10:15:22] [TRT] [E] 1: [defaultAllocator.cpp::deallocate::42] Error Code 1: Cuda Runtime (invalid argument)
Segmentation fault (core dumped)
```
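Note the `[TRT] [W]` line above: the engine plan was apparently built on a different device than the Xavier NX it is running on, and TensorRT engines are not portable across GPU models. A common remedy is to rebuild the engine directly on the target device; one way is `trtexec` (the ONNX path below is a placeholder, assuming you still have the exported model):

```shell
# Rebuild the TensorRT engine on the Xavier NX itself so the plan matches
# the device it runs on. "model.onnx" is a placeholder for the exported model.
trtexec --onnx=model.onnx --saveEngine=nms.trt
```

This should at least make the device-mismatch warning disappear; whether it also resolves the deallocation error at shutdown is worth testing separately.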

The code is as follows:
```python
import torch

from yolort.runtime import PredictorTRT

# Load the serialized TensorRT engine
engine_path = "nms.trt"
device = torch.device("cuda")
y_runtime = PredictorTRT(engine_path, device=device)

# Perform inference on an image file
predictions = y_runtime.predict("dummy_frame.jpg")
print("123")
```
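Since the error fires only after the last line of the script, a likely cause is shutdown ordering: at interpreter exit Python destroys objects in an unspecified order, so the TensorRT allocator may try to free GPU memory after the CUDA runtime has already been torn down. A pure-Python sketch of the principle (the classes below are stand-ins, not the real TensorRT or yolort API): release the execution context first, then the engine, while the runtime is still alive, instead of relying on garbage collection at exit.

```python
# Stand-in classes illustrating teardown order, NOT the real TensorRT API.
class CudaRuntime:
    def __init__(self):
        self.alive = True


class Engine:
    def __init__(self, runtime):
        self.runtime = runtime

    def release(self):
        # Freeing engine memory after the runtime is gone reproduces the
        # "Cuda Runtime (invalid argument)" failure mode.
        if self.runtime is None or not self.runtime.alive:
            raise RuntimeError("Cuda Runtime (invalid argument)")
        self.runtime = None


class ExecutionContext:
    def __init__(self, engine):
        self.engine = engine

    def release(self):
        if self.engine.runtime is None or not self.engine.runtime.alive:
            raise RuntimeError("Cuda Runtime (invalid argument)")
        self.engine = None


runtime = CudaRuntime()
engine = Engine(runtime)
context = ExecutionContext(engine)

# Correct order: context first, then engine, while the runtime is still up.
context.release()
engine.release()
print("clean shutdown")
```

In the real script the equivalent would be explicitly dropping the predictor (e.g. `del y_runtime` after a `torch.cuda.synchronize()`) before the interpreter exits, assuming `PredictorTRT` releases its context and engine in its destructor.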