
There is no new message. I need to make clear about the steps to run the repo #19

Closed
trungUTELai opened this issue Oct 24, 2024 · 8 comments

Comments


trungUTELai commented Oct 24, 2024

I have an issue while running the repo. After I ran roslaunch d2vins quadcam.launch (for the Omni-VINS-only case), nothing happened except that the topic names appeared (I saw them when I checked rostopic list). However, when I tried to rostopic echo or rostopic hz them, no messages were published, not even on /oak_ffc_4p/assemble.
Does anything else need to run first, for example starting the oak_ffc_4p_ros_driver?

@trungUTELai trungUTELai changed the title No image received from oak-ffc-4p There is no new message. I need to make clear about the steps to run the repo Oct 24, 2024
@trungUTELai
Author

Currently, I have to run the oak_ffc_4p_ros_driver first; only then does the code in this repo start running.
Another problem is latency: why does the latency seem so high?
I ran the command: roslaunch d2vins quadcam.launch depth_gen:=true enable_pgo:=true show:=true
[screenshots of the latency readings]

@Peize-Liu
Contributor

Your device is only running at 15 W; please change the power mode to MAXN.
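For reference, on Jetson boards the power mode is switched with the stock nvpmodel and jetson_clocks tools. This is a sketch under the assumption that mode 0 selects MAXN on the Orin NX (mode IDs can differ between Jetson modules, so check nvpmodel -q first on your board):

```shell
# Switch a Jetson to its maximum power mode (sketch; mode IDs vary by module).
if command -v nvpmodel >/dev/null 2>&1; then
    sudo nvpmodel -m 0   # mode 0 is MAXN on the Orin NX
    sudo jetson_clocks   # lock CPU/GPU clocks at maximum for consistent timing
    sudo nvpmodel -q     # confirm the active power mode
else
    echo "nvpmodel not found; run this on the Jetson itself"
fi
```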

@trungUTELai
Author

@Peize-Liu, hello, thank you for the advice and for the fast reply.
I have changed the power mode and the performance improved (from ~500 ms to ~280 ms time cost). However, it is still too slow for a real-time application.

In fact, when I ran without depth_gen enabled, it was fine for real time (time cost ~13 ms).

I am using a Jetson Orin NX and only the cameras from the OAK-FFC-4P.
I am running the code with the models available in the branch.
Can you give me some guidance on how to improve the system performance?
[screenshot of the time-cost output]
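For context, a rough budget check (a sketch, assuming the 15 Hz target rate reported in the paper) shows why ~280 ms per frame cannot be real-time while ~13 ms can:

```python
# Real-time budget check: at a target rate of 15 Hz (assumed from the paper),
# each frame must be processed within 1000/15 ms.
TARGET_HZ = 15
budget_ms = 1000.0 / TARGET_HZ  # ~66.7 ms available per frame

# Measured per-frame costs from this thread (after switching to MAXN).
measured = {
    "with depth_gen": 280.0,    # ms
    "without depth_gen": 13.0,  # ms
}

for name, cost_ms in measured.items():
    verdict = "real-time" if cost_ms <= budget_ms else "too slow"
    print(f"{name}: {cost_ms:.0f} ms vs {budget_ms:.1f} ms budget -> {verdict}")
```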

@trungUTELai
Author

Regarding the official paper:
now I will try to convert the HITNET model to TensorRT + FP16 to reach the 15 Hz frequency reported in the paper.

Moreover, in one of the issues on D2SLAM, someone converted NetVLAD to TensorRT + FP16 and achieved a 6 ms inference time. Have you tried that before? If so, was the trade-off acceptable?
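As a sketch of what that manual conversion looks like (file names here are placeholders, and trtexec is the command-line tool that ships with TensorRT for building an engine from an ONNX model):

```python
# Sketch: build an FP16 TensorRT engine from an ONNX model via trtexec.
# Paths are placeholders; trtexec must be on PATH (it ships with TensorRT).
import shutil
import subprocess

def build_trtexec_cmd(onnx_path: str, engine_path: str) -> list:
    """Assemble the trtexec command line for an FP16 engine build."""
    return [
        "trtexec",
        f"--onnx={onnx_path}",
        f"--saveEngine={engine_path}",
        "--fp16",  # half precision: large speedup on Jetson GPUs
    ]

cmd = build_trtexec_cmd("hitnet.onnx", "hitnet_fp16.engine")
if shutil.which("trtexec"):
    subprocess.run(cmd, check=True)
else:
    print("trtexec not found; run this on the Jetson:", " ".join(cmd))
```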

@Peize-Liu
Contributor

The existing code will automatically convert the HITNET ONNX model to a TensorRT engine.
Can you share more information about your platform, e.g. nvidia-smi output?
As for NetVLAD, I implemented that before as well, but the performance improvement was not that evident.

@trungUTELai
Author

trungUTELai commented Oct 25, 2024

Yes, currently my Jetson doesn't have nvidia-smi. Is it necessary to install the NVIDIA driver on a Jetson Orin?

Currently, I am running from the Dockerfile:
[screenshot of the Dockerfile]

Here is some information about my platform showed by jtop:
[jtop screenshots showing the platform information]

@trungUTELai
Author

Thank you for your support. Your code runs as well as expected; I had made some mistakes while setting up the configuration, which is why the performance was not as good as expected.

@Peize-Liu
Contributor

Good to know.
