About the Lidar Data preprocessing #27
Comments
Thank you for your interest in our work. Sorry, I have been busy with some personal issues these days. The data preprocessing code is in cnn_data_pub.py. The network structure code is in custom_cnn_full.py. The training code is in drl_vo_train.py. The navigation code is in drl_vo_inference.py. Hopefully these brief descriptions give you better guidance. I will restructure the code later when I am less busy.
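For orientation, the sketch below shows the common stable-baselines3 pattern that this file layout suggests: a custom CNN feature extractor (the role of custom_cnn_full.py) plugged into PPO training (the role of drl_vo_train.py). It is a minimal, hypothetical illustration, not the repository's actual code: the environment, class names, layer sizes, and hyperparameters are placeholders, and it assumes stable-baselines3 ≥ 2.0 with the gymnasium API.

```python
# Hypothetical sketch of the custom-feature-extractor + PPO pattern; not the repo's code.
import gymnasium as gym
import numpy as np
import torch as th
import torch.nn as nn
from stable_baselines3 import PPO
from stable_baselines3.common.torch_layers import BaseFeaturesExtractor


class DummyLidarEnv(gym.Env):
    """Placeholder stand-in for the navigation environment (1x80x80 lidar-map observation)."""

    def __init__(self):
        super().__init__()
        self.observation_space = gym.spaces.Box(0.0, 1.0, shape=(1, 80, 80), dtype=np.float32)
        self.action_space = gym.spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        return self.observation_space.sample(), {}

    def step(self, action):
        # Random observations and zero reward: enough to exercise the training loop only.
        return self.observation_space.sample(), 0.0, False, False, {}


class SmallCNN(BaseFeaturesExtractor):
    """Toy feature extractor; the real network lives in custom_cnn_full.py."""

    def __init__(self, observation_space, features_dim=256):
        super().__init__(observation_space, features_dim)
        n_input_channels = observation_space.shape[0]
        self.cnn = nn.Sequential(
            nn.Conv2d(n_input_channels, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        with th.no_grad():
            n_flatten = self.cnn(th.as_tensor(observation_space.sample()[None]).float()).shape[1]
        self.linear = nn.Sequential(nn.Linear(n_flatten, features_dim), nn.ReLU())

    def forward(self, observations):
        return self.linear(self.cnn(observations))


# Register the custom extractor with PPO via policy_kwargs and run a short training loop.
policy_kwargs = dict(features_extractor_class=SmallCNN, features_extractor_kwargs=dict(features_dim=256))
model = PPO("CnnPolicy", DummyLidarEnv(), policy_kwargs=policy_kwargs, verbose=0)
model.learn(total_timesteps=1000)
```

In the real project the environment would be the ROS/Gazebo navigation setup and the extractor the full network from custom_cnn_full.py; whether drl_vo_train.py follows exactly this API is something only the repository can confirm.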
Hi zzuxzt, thank you for your prompt reply. Do you mind if I ask you another question? When I set up the turtlebot2 following your README.md, I encountered the following issue: "The following packages have unmet dependencies: ros-noetic-joystick-drivers : Depends: ros-noetic-ps3joy but it is not installable Depends: ros-noetic-wiimote but it is not installable E: Unable to correct problems, you have held broken packages." I have been searching for a solution for several days but still don't know how to resolve it. Do you have any advice? I really appreciate your guidance on the code structure and will analyze it further soon. I wish you well with your work and look forward to your reply 😁
You can try removing the ros-noetic-joystick-drivers package, since DRL-VO does not use it. I guess the reason is that the ps3joy and wiimote packages do not support Noetic, as mentioned here.
I really appreciate your help. May I contact you again if I encounter more problems?
Feel free to post your questions. I will answer them when I am free.
Hi @zzuxzt, I'm currently reading the Simulation Configuration and Hardware Configuration sections of your paper. I want to ask why you didn't both train the DRL network and run the simulations on the DGX-1 server, instead of splitting the work into a server-specific task and a desktop-specific task. Also, if there is some way to train the model without having to launch RViz or Gazebo, could you please guide me? Another point I noticed is that you used different Ubuntu and ROS versions for the training/simulation phase and the real-world hardware testing phase. Is that optional, or is there a specific reason for doing so? I assume there would be conflicts between the versions. Again, thank you for your time and kindness. I hope to hear from you soon!
Besides, I think you should update the turtlebot2 installation file, as some of the packages are no longer supported: https://raw.githubusercontent.com/zzuxzt/turtlebot2_noetic_packages/master/turtlebot2_noetic_install.sh 😁
Thanks for your reminder. I will find time to update it since it was created a few years ago and many packages have changed.
I appreciate your help!
Hi, I want to thank you and your team for your hard work on this amazing project. I'm currently reviewing your paper and code. I don't quite understand how to get the 1x80x80 lidar map from the 20x720 min & avg pooled data described in the paper. When I read your code to understand more, I struggled to find these preprocessing functions. Can you explain this further, and, if convenient, guide me through your code structure (how things are organized)? Again, I really appreciate the time and effort you have put into this project!
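For reference, here is a minimal numpy sketch of one consistent way the dimensions can work out: each 720-beam scan is split into 80 angular sectors of 9 beams, min pooling and average pooling over each sector turn the 20 scans into a 40x80 array, and repeating each row twice fills the 80x80 grid. The kernel size, row ordering, and row repetition are assumptions for illustration only; the authoritative preprocessing is in cnn_data_pub.py.

```python
# Hypothetical sketch of a 20x720 scan stack -> 1x80x80 lidar map (not the repo's actual code).
import numpy as np

def scans_to_lidar_map(scans, kernel=9, out_size=80):
    """scans: (20, 720) array holding the 20 most recent 720-beam lidar scans."""
    n_scans, n_beams = scans.shape
    assert n_beams == kernel * out_size                    # 720 = 9 * 80
    sectors = scans.reshape(n_scans, out_size, kernel)     # (20, 80, 9) angular sectors
    min_pool = sectors.min(axis=2)                         # (20, 80) closest range per sector
    avg_pool = sectors.mean(axis=2)                        # (20, 80) average range per sector
    stacked = np.concatenate([min_pool, avg_pool], axis=0)               # (40, 80)
    lidar_map = np.repeat(stacked, out_size // stacked.shape[0], axis=0)  # (80, 80)
    return lidar_map[np.newaxis]                           # (1, 80, 80), one CNN input channel

# Example with fake range data
fake_scans = np.random.uniform(0.1, 3.5, size=(20, 720)).astype(np.float32)
print(scans_to_lidar_map(fake_scans).shape)                # (1, 80, 80)
```

Whether the real code interleaves the min and average rows, normalizes the ranges, or uses a different kernel size is something only cnn_data_pub.py can confirm.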