
Listener node cannot hear the message from the talker node #550

Open
yang-yuke opened this issue Jan 6, 2022 · 10 comments
Labels
more-information-needed (Further information is required)

Comments

@yang-yuke

Hi dear ROS experts, could you please take a look at this issue? I have been grappling with it for a few days but have ended up nowhere. I would appreciate it if you could shed some light on it for me.

Bug report

Required Info:

  • Operating System:
    • Ubuntu 20.04
  • Installation type:
    • sudo apt install ros-galactic-desktop
  • Version or commit hash:
  • DDS implementation:
  • Client library (if applicable):

Steps to reproduce issue

  1. Install the ROS 2 Galactic desktop distro as instructed on this website.
  2. Source the setup.bash, then run the demo_nodes_cpp talker in terminal 1.
  3. Source the setup.bash, then run the demo_nodes_py listener in terminal 2 (see the command sketch below).
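
For reference, a minimal sketch of the exact commands (assuming the default apt installation under /opt/ros/galactic):

# Terminal 1: start the C++ talker
source /opt/ros/galactic/setup.bash
ros2 run demo_nodes_cpp talker

# Terminal 2: start the Python listener
source /opt/ros/galactic/setup.bash
ros2 run demo_nodes_py listener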

Expected behavior

Terminal 2 (listener) should be able to hear the messages from terminal 1 (talker).

Actual behavior

nothing shows up in terminal 2

Additional information

// In terminal 1, I can see the messages printed.
[screenshot]

// In terminal 2, however, it is stuck as the picture shows. No messages are printed.
[screenshot]

// I have also installed the Foxy distro. Strangely enough, if I switch to Foxy, everything works: in terminal 1, the talker messages are printed normally.
[screenshot]

// This time, in terminal 2, the messages heard by the listener are printed normally. It always succeeds once I switch to Foxy and fails once I switch back to Galactic. Does that suggest something is wrong with the Galactic distro's library?
[screenshot]



@clalancette
Contributor

Two things you can try here:

  1. Run ros2 doctor, which may give you some feedback about network configuration.
  2. In Galactic, try running both the talker and the listener using rmw_fastrtps_cpp. That is, run the commands like this:
RMW_IMPLEMENTATION=rmw_fastrtps_cpp ros2 run demo_nodes_cpp talker

That will tell us whether the problem you are having is generically in Galactic, or specifically in the default DDS vendor (which is CycloneDDS in Galactic).
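
For the first suggestion, ros2 doctor ships with ros2cli and can be run directly; a minimal sketch (the --report flag prints a fuller report, including network configuration and the active middleware):

# Quick health check of the ROS 2 setup
ros2 doctor
# Full report, including network and middleware details
ros2 doctor --report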

@yang-yuke
Author

Thanks, Chris!

I am still a bit confused by your reply. In step 2, what command should I run for the listener? Do you mean I should run the two commands below in the terminal?
1. Run the talker: RMW_IMPLEMENTATION=rmw_fastrtps_cpp ros2 run demo_nodes_cpp talker
2. Run the listener: RMW_IMPLEMENTATION=rmw_fastrtps_cpp ros2 run demo_nodes_cpp listener

Or should I run the listener with the command RMW_IMPLEMENTATION=rmw_fastrtps_cpp ros2 run demo_nodes_py listener?

@clalancette
Contributor

Either one will do. The important bit is that you are using rmw_fastrtps_cpp as the DDS vendor.
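
For completeness, a minimal sketch of the two-terminal setup (assuming the default apt install under /opt/ros/galactic; exporting the variable keeps it in effect for every subsequent command in that shell):

# Run in each terminal first
source /opt/ros/galactic/setup.bash
export RMW_IMPLEMENTATION=rmw_fastrtps_cpp

# Terminal 1
ros2 run demo_nodes_cpp talker

# Terminal 2 (either the C++ or the Python listener works)
ros2 run demo_nodes_py listener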

@yang-yuke
Author

Hello Chris,

I used the commands you gave me, and now it works normally. Could you please elaborate on the reason? Is it because the default vendor in Galactic (i.e. CycloneDDS) has a bug? Which DDS vendor are we using with rmw_fastrtps_cpp?

// Talker terminal: messages are printed normally.
[screenshot]

// Listener terminal: it now prints messages confirming that it is receiving from the talker.
[screenshot]

// Running the listener Python script also prints messages normally.
[screenshot]

@clalancette
Contributor

I used the commands you gave me, and now it works normally. Could you please elaborate on the reason? Is it because the default vendor in Galactic (i.e. CycloneDDS) has a bug?

I'm not sure what's going on here, but at least we know it has something to do with CycloneDDS. Pinging @eboasson to take a look and maybe provide some relevant information.

Which DDS vendor are we using with rmw_fastrtps_cpp?

It's using Fast-DDS instead.
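
If you want to confirm which RMW implementation (and hence which DDS vendor) a given shell is actually using, one option is to filter the doctor report for the middleware section (a sketch; the exact report wording may vary between ros2cli versions):

# Prints the active middleware, e.g. rmw_cyclonedds_cpp on Galactic by default
ros2 doctor --report | grep -i middleware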

@yang-yuke
Author

I used the commands you gave me, and now it works normally. Could you please elaborate on the reason? Is it because the default vendor in Galactic (i.e. CycloneDDS) has a bug?

I'm not sure what's going on here, but at least we know it has something to do with CycloneDDS. Pinging @eboasson to take a look and maybe provide some relevant information.

Which DDS vendor are we using with rmw_fastrtps_cpp?

It's using Fast-DDS instead.

Thanks, Chris, for your prompt reply. I have one more question: what is the default DDS vendor used by Foxy, by the way? Is it the same as Galactic's (i.e. CycloneDDS)?

@clalancette
Contributor

Thanks, Chris, for your prompt reply. I have one more question: what is the default DDS vendor used by Foxy, by the way? Is it the same as Galactic's (i.e. CycloneDDS)?

No, Foxy uses rmw_fastrtps_cpp by default.

@yang-yuke
Author

Thanks, Chris, for your prompt reply. I have one more question: what is the default DDS vendor used by Foxy, by the way? Is it the same as Galactic's (i.e. CycloneDDS)?

No, Foxy uses rmw_fastrtps_cpp by default.

I see, thank you so much Chris!

@eboasson

Hi @yang-yuke, I would guess it has something to do with your networking configuration, but let's make sure by looking at the debugging traces that Cyclone can write. (There are other steps one might take to narrow down the problem, but if it is something simple where the other methods also work, it can usually be found in the traces as well; and if it is something totally weird, then one usually ends up needing the traces anyway.)

Those traces are not the easiest thing to read, not even with a document describing them (and I only have a draft anyway), but I am happy to look at them for you. Would you be willing to share the logs with me? If you're not comfortable sharing internal IP addresses in GitHub attachments, email me directly. My address is pretty easy to find from the commit log of Cyclone 🙂

To get the traces, could you run it with CYCLONEDDS_URI="$CYCLONEDDS_URI,<Tr><C>trace</><Out>cdds.log.\${CYCLONEDDS_PID}</></>" in the environment? (This is abbreviated XML that one would normally write out in full and put in the configuration file.) This should get you one file for each process, cdds.log.PID, where PID is the process id.
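
For readers unfamiliar with Cyclone's abbreviated configuration syntax, the snippet above should expand to roughly the following XML file (a sketch based on the abbreviation, where Tr = Tracing, C = Category, Out = OutputFile), which you would then reference via CYCLONEDDS_URI=file:///path/to/cyclonedds.xml:

<?xml version="1.0" encoding="UTF-8"?>
<CycloneDDS xmlns="https://cdds.io/config">
  <Domain Id="any">
    <Tracing>
      <!-- "trace" enables all trace categories -->
      <Category>trace</Category>
      <!-- ${CYCLONEDDS_PID} is substituted with each process's id -->
      <OutputFile>cdds.log.${CYCLONEDDS_PID}</OutputFile>
    </Tracing>
  </Domain>
</CycloneDDS>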

@yang-yuke
Author

Hi @yang-yuke, I would guess it has something to do with your networking configuration, but let's make sure by looking at the debugging traces that Cyclone can write. (There are other steps one might take to narrow down the problem, but if it is something simple where the other methods also work, it can usually be found in the traces as well; and if it is something totally weird, then one usually ends up needing the traces anyway.)

Those traces are not the easiest thing to read, not even with a document describing them (and I only have a draft anyway), but I am happy to look at them for you. Would you be willing to share the logs with me? If you're not comfortable sharing internal IP addresses in GitHub attachments, email me directly. My address is pretty easy to find from the commit log of Cyclone 🙂

To get the traces, could you run it with CYCLONEDDS_URI="$CYCLONEDDS_URI,<Tr><C>trace</><Out>cdds.log.\${CYCLONEDDS_PID}</></>" in the environment? (This is abbreviated XML that one would normally write out in full and put in the configuration file.) This should get you one file for each process, cdds.log.PID, where PID is the process id.

Dear Erik,

I really appreciate your willingness to help me out. Can we sync up offline? There are a lot of things I cannot make clear here. I have sent you an email; my address is [email protected]. I hope you receive it.

I will update this thread once the problems are resolved.

@clalancette added the more-information-needed (Further information is required) label on Jan 27, 2022