Problem with simulation #10
Hi, have you tried visualizing all the nodes within rviz? All node topics are published, so you should be able to visualize them as markers in rviz and see whether any nodes are being recorded and processed. What are your log outputs? If there were no goal nodes, or no nodes to select a goal from, the program would encounter an indexing error.
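As a quick check, here is a minimal sketch of subscribing to a marker topic and counting what arrives; the topic name `/vis_nodes` is purely an assumption, so list the actual topics with `rostopic list` and substitute the one GDAM publishes:

```python
import rospy
from visualization_msgs.msg import MarkerArray

def nodes_callback(msg):
    # Each marker should correspond to one recorded node
    print("received %d node markers" % len(msg.markers))

rospy.init_node("node_marker_probe")
# "/vis_nodes" is a hypothetical topic name - replace it with the
# marker topic that GDAM actually publishes in your setup.
rospy.Subscriber("/vis_nodes", MarkerArray, nodes_callback)
rospy.spin()
```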
Thanks for your patience and time. In terminal 2, I run `python GDAM.py`, and here is the modified code:

```python
import torch

class Actor(nn.Module):
    ...

env = ImplementEnv(d_args)  # Set the parameters for the implementation
# device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # cuda or cpu
print("running here")
while True:
    ...
```
I didn't see any markers in rviz like those shown in the project https://github.com/reiniscimurs/DRL-robot-navigation. I checked that I set the goal as x=4, y=-1.75 in GDAM_args.py, and I used the same gazebo control lib as the DRL-robot-navigation project. I am confused that there is no output like "running here" or "running here 222" in terminal 1, but after I used Ctrl+C to stop the terminal, there were some outputs, as shown. It seems the number of lidar readings from the ROS topic /r1/front_laser/scan is 360, yet the code indexes laser_in[381:420].
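To double-check the scan size, a quick probe can be run against the topic (a minimal sketch; the topic name is the one from my setup above):

```python
import rospy
from sensor_msgs.msg import LaserScan

rospy.init_node("scan_length_probe")
# Block until one scan arrives, then report how many readings it holds
msg = rospy.wait_for_message("/r1/front_laser/scan", LaserScan)
print("number of laser readings:", len(msg.ranges))
```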
Thanks for the images and code snippets. I am a bit confused at what point the program stops. In any case, it looks like the program can load the weights and get into the step method of the environment. I assume it could be a sensor issue and a compatibility thing. Could you provide the full step method code in your implementation? In GDAM we expect two RpLidars as inputs with 1440 laser readings each. From these 1440 values we then take the 720 values that cover the 180 degree field of view in front of the laser. I assume your setup looks different, so there could be an issue with how the laser data is obtained, as this code would need to be adapted. It could also be that it is stuck in one of those while loops. What lidar are you using and how many sensor values does it have?
Thanks for your time and patience.
So the lidars for GDAM are real physical RpLidar sensors. This means you automatically get a 360 degree range with 1440 samples and have to limit the range by using only the frontal 180 degrees. This is done by collecting only the front readings in the step method: https://github.com/reiniscimurs/GDAE/blob/main/Code/GDAM_env.py#L184-L196. In DRL-robot-navigation we use simulated lidars, where you can constrain the field of view in the sensor setup: https://github.com/reiniscimurs/DRL-robot-navigation/blob/main/catkin_ws/src/multi_robot_scenario/xacro/laser/hokuyo.xacro#L42-L43
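For illustration, a hedged sketch of that idea (not the repo's exact code; it assumes a 1440-sample 360 degree scan with index 0 pointing directly behind the robot, so the frontal 180 degrees are the middle 720 samples):

```python
import numpy as np

def frontal_180(scan_ranges):
    # Assumes 1440 samples over a full 360 degree sweep with index 0
    # directly behind the robot; adjust the offsets if your lidar uses
    # a different zero-angle convention.
    ranges = np.asarray(scan_ranges)
    assert len(ranges) == 1440, "expected a 1440-sample scan"
    return ranges[360:1080]  # the middle 720 samples = frontal 180 degrees
```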
Hi, thanks for your suggestion.
Ok, that is actually a good catch. Yes, move_base is a path planner that you need to launch beforehand: https://wiki.ros.org/move_base. It is part of the navigation stack and has its own setup: https://wiki.ros.org/navigation/Tutorials/RobotSetup. Essentially, there is a callback to get the generated path from the move_base plugin in the environment code. There are two things you could try here.
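For reference, a minimal sketch of listening to the plan that move_base generates; the topic assumes the default NavfnROS global planner namespace, so adjust it to your move_base configuration:

```python
import rospy
from nav_msgs.msg import Path

def plan_callback(msg):
    # Each entry in msg.poses is a geometry_msgs/PoseStamped along the plan
    print("received global plan with %d poses" % len(msg.poses))

rospy.init_node("plan_listener")
# "/move_base/NavfnROS/plan" assumes the default global planner;
# other planners publish under their own namespace.
rospy.Subscriber("/move_base/NavfnROS/plan", Path, plan_callback)
rospy.spin()
```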
That is great! I still do not see any other possible nodes though. These should be added based on laser scans and shown as blue dots in rviz. For the network performance, I would still first suggest checking whether the laser input is actually what you expect. Without changing how the data is read and processed in the step method, I would not expect the network to work either.
At this point I'm getting lots of bugs, and the link above isn't working. Could you tell me which part of the code needs to be modified? I've trained the model, but how do I use it with GDAE?
I have been trying for a long time to get this work running in my computer simulation environment; my environment is Ubuntu 20.04 with ROS Noetic. I installed the required packages following your README file. In the test, I used these commands to set up the simulated interface:

```sh
export ROS_HOSTNAME=localhost
export ROS_MASTER_URI=http://localhost:11311
export ROS_PORT_SIM=11311
export GAZEBO_RESOURCE_PATH=~/GDAM_ws/src/multi_robot_scenario/launch
```

After that the robot remained stationary. Then I used my own map and found that the robot still did not move. I manually killed the gmapping node and started the slam_toolbox node; below is the rqt_graph.
My guess is that if there is no movement and no error messages, it is stuck at the point where it waits for incoming data (one of the while loops mentioned above).
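One way to narrow this down is to check whether any velocity commands are produced at all; a minimal sketch, with the topic name assumed (match it to your robot's command topic):

```python
import rospy
from geometry_msgs.msg import Twist

rospy.init_node("cmd_vel_probe")
try:
    # "/r1/cmd_vel" is an assumed topic name - substitute your robot's
    # velocity command topic.
    msg = rospy.wait_for_message("/r1/cmd_vel", Twist, timeout=10.0)
    print("got a command: linear %.3f, angular %.3f" % (msg.linear.x, msg.angular.z))
except rospy.ROSException:
    print("no velocity command within 10 s - the control loop is likely stuck")
```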
The first thing you probably need to solve is the error when loading the model, as it appears in your logs there. If there is an error, the model will not load and no motion will be carried out. Also, are you trying to load the model from the drl repo directly into this repo? Note that this will not work directly: the drl repo has a different input space, and it is written in PyTorch while this repo uses TensorFlow.
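A quick way to verify that the TensorFlow weights restore cleanly, before running the full pipeline, is a standalone restore; a minimal TF1-style sketch, with the checkpoint paths as placeholders for your own files:

```python
import tensorflow as tf  # TF1-style API; use tf.compat.v1 under TF2

with tf.Session() as sess:
    # "model/GDAM.meta" and "model/" are placeholders - point both at
    # your own checkpoint files.
    saver = tf.train.import_meta_graph("model/GDAM.meta")
    saver.restore(sess, tf.train.latest_checkpoint("model/"))
    print("checkpoint restored without errors")
```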
Hi, thanks for sharing your marvelous work!
I have some issues with the simulation using GDAE, because I don't have an actual car.
I have trained the model from your other work, DRL-robot-navigation, changed the corresponding code in https://github.com/reiniscimurs/GDAE/blob/fc793eda8de23bed98ba3acd32908843c535510f/Code/GDAM.py, and modified the topics in GDAM_env.py to align with my own robot's topics, as shown.
Meanwhile, slam_toolbox is used for map construction, but it seems that no goal is published and the robot cannot move at all. I checked self.nodes and found it was not empty.
Here are my rviz screenshot and rqt_graph. I tried to run `rostopic echo /global_goal_publisher`, but no response was received. Could you give me some advice about this problem?