
Multi-Femto Mega camera connecting issue from the network #61

Closed
Anil-Bhujel opened this issue Sep 18, 2024 · 46 comments

@Anil-Bhujel

First of all, thanks for the nice SDK. I have set up the environment and run OrbbecSDK_ROS2 on an Ubuntu machine, and it is working fine. However, I didn't see a launch file for connecting multiple Femto Mega cameras over the network. There is a launch file for multiple cameras connected through USB, but I couldn't find one for network-connected cameras. Could you please suggest how we can get one? Also, I would like to know about the compressed images from the camera: what compression technique is used, and can we set the compression level at launch time?

Anil Bhujel,
Research Associate,
Michigan State University

@jian-dong
Contributor

Hi @Anil-Bhujel

Thank you for using the Femto Mega camera. We're glad to hear that OrbbecSDK_ROS2 is running smoothly on your Ubuntu machine.

Regarding your question about connecting multiple Femto Mega cameras over a network, I would first like to know how many cameras you plan to use and the resolution at which you plan to capture data. Our RGB streams use compression techniques such as MJPEG, H.264, and H.265, while depth streams are not compressed and are only available in Y16 format. Please note that H.264 and H.265 compression methods are only supported over the network.

We will also soon be able to give you an example of setting up multiple cameras over the network.

Best regards,

@Anil-Bhujel
Author

@jian-dong
Thank you for your prompt response. Indeed, we are working to build a large computer vision dataset for Precision Livestock Farming (PLF) to attract the computer vision and AI communities to PLF. We run a series of experiments with different numbers of cameras. Currently, we have 6 in our laboratory, but we may use 2 or more at a time, and we plan to record compressed color image and uncompressed depth topics from multiple cameras into a single rosbag2 file. I am happy to hear that you are willing to assist us. Following up on your earlier response, we have the following queries and requirements:

  1. We need to synchronize the timestamps of all cameras within the network, either with the system clock or the camera clock.
  2. How can we select among the MJPEG, H.264, and H.265 compression techniques, and which one do you recommend, along with its compression ratio (if any)? Please keep in mind that our experimental livestock site is remote, and there is limited space available to store long, high-frame-rate recordings. So we plan to compress as much as possible while recording and restore the original quality for analysis in the lab.
  3. Can we restore the compressed images to their original quality during playback of the rosbag2 file using a third-party decoder?
  4. Does OrbbecSDK_ROS2 utilize the NVIDIA GPU, similar to Isaac_ros_compression?
  5. To save recording space, we plan to record video at full fps when an animal is present with some motion activity, but at a reduced fps (say, 1 fps) when there is no movement. Do you have any mechanism to detect motion or similar events and activate the camera accordingly?
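Motion-gated recording (item 5) is not something the SDK documentation advertises, so it would likely need to be approximated on the host side. A minimal sketch of the idea, assuming downsampled grayscale frames delivered as flat lists of pixel values and a hypothetical change threshold:

```python
def mean_abs_diff(prev, curr):
    """Mean absolute pixel difference between two equal-size grayscale frames."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def select_fps(prev, curr, threshold=5.0, active_fps=30, idle_fps=1):
    """Pick the recording rate: full fps when the scene changed, idle fps otherwise."""
    return active_fps if mean_abs_diff(prev, curr) > threshold else idle_fps
```

In practice this would subscribe to a downsampled color topic and throttle the bag recorder accordingly; the threshold would need tuning against lighting changes in the barn.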

Sorry for the long list of queries; these are quite critical experiments, and we are happy to use Orbbec cameras. FYI, we assigned an IP to each camera connected via the network, and a single camera works. However, when we connected all cameras to the network and ran ros2 launch orbbec_camera femto_mega.launch.py enumerate_net_device:=true, it only launched the single camera with the lowest IP address.

Once again thank you for your time and support.

@jian-dong
Contributor

Hi @Anil-Bhujel
Thank you for your detailed feedback. Since some of the technical questions you've raised are quite complex, I will consult with more specialized colleagues to provide you with a comprehensive response. Meanwhile, to better assess your needs, please provide the following information:

  1. Maximum Number of Cameras: How many cameras do you plan to use simultaneously at most during your experiments?
  2. Resolution and Data Streams: What resolution and frame rate will you use? Do you need only color images, or also depth and IR data streams?
  3. Recording Duration: How long do you plan to record during each experiment?
  4. Desired Bag File Size: Considering your storage limitations, what is the maximum acceptable size for the bag file?

This information will help us conduct a more accurate assessment. Thank you for your cooperation, and we will get back to you as soon as possible.

@Anil-Bhujel
Author

Hi @jian-dong
Please find my responses below each question.

  1. Maximum Number of Cameras: How many cameras do you plan to use simultaneously at most during your experiments?
    Mostly 2, but up to 4 at most.
  2. Resolution and Data Streams: What resolution and frame rate will you use? Do you need only color images, or also depth and IR data streams?
    During animal movement, as many frames as possible at high quality are preferred (but, depending on other constraints, at least 15 fps at HD quality); outside movement periods, HD frames at 1 fps are enough. We can compress the images before storing them, and minor loss in reconstruction is acceptable. Yes, we need RGB, depth, and IR images, along with other metadata (timestamp, frame rate, resolution, and other camera configuration).
  3. Recording Duration: How long do you plan to record during each experiment?
    At least a week per experiment, but the bag files must stay manageable, with a duration of one minute, or 5 or 10 minutes, each. Although we will record video throughout the animal cycle, unavoidable breaks in the recording are acceptable.
  4. Desired Bag File Size: Considering your storage limitations, what is the maximum acceptable size for the bag file?
    We are planning to record one-minute-long bag files for easy processing. However, we can go longer (5 or 10 minutes), as we have a 100TB NAS device to store the data, but we would prefer no single bag file larger than 10GB.

I hope that clarifies things. If you have further questions, let me know.

@Anil-Bhujel
Author

@jian-dong
Just regarding the data streams: we can exclude the IR stream if it increases complexity and memory use.

@zhonghong322

zhonghong322 commented Sep 21, 2024

In our previous test, the data size per second was:

RGB: 1920x1080, 30fps, H264, data size: 2.59MB
Depth: 640x576, 30fps, Y16, data size: 21.09MB (640x576x2x30)

From this, we estimated that the data size for 4 cameras over 10 minutes would be 55.5GB. If the frame rate is reduced to 15fps, the data size would be: 55.5 / 2 = 27.75GB.
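Those per-second figures can be sanity-checked from the stream parameters (the H.264 color rate of 2.59 MB/s is a measured value, not derived):

```python
# Depth: 640 x 576 px, 2 bytes/px (Y16), 30 fps
depth_mb_s = 640 * 576 * 2 * 30 / 2**20   # ~21.09 MiB/s, matching the quoted figure
color_mb_s = 2.59                         # measured H.264 color rate, MiB/s

per_camera = depth_mb_s + color_mb_s      # ~23.68 MiB/s per camera
total_gib = per_camera * 4 * 600 / 1024   # 4 cameras, 10 minutes (600 s)
print(round(depth_mb_s, 2), round(total_gib, 1))  # 21.09 55.5
```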

If a lossless compression algorithm like RVL is used, with a compression ratio of about 1/3, the total data size for 10 minutes (at 15fps) should be reduced to around 9GB. @jian-dong Does rosbag2 support RVL compression and decompression during playback?
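For context, RVL exploits the large zero (invalid-depth) regions and the small pixel-to-pixel deltas typical of depth maps. The published codec packs nibbles; the byte-oriented sketch below is a simplified illustration of the same zero-run plus zigzag-delta idea, not the exact RVL bitstream:

```python
def _zigzag(n):            # map a signed delta to an unsigned integer
    return n * 2 if n >= 0 else -n * 2 - 1

def _unzigzag(z):
    return z // 2 if z % 2 == 0 else -(z + 1) // 2

def _put_varint(out, v):   # 7-bit little-endian varint
    while v >= 0x80:
        out.append((v & 0x7F) | 0x80)
        v >>= 7
    out.append(v)

def _get_varint(buf, pos):
    v, shift = 0, 0
    while True:
        b = buf[pos]; pos += 1
        v |= (b & 0x7F) << shift
        if b < 0x80:
            return v, pos
        shift += 7

def rvl_encode(depth):
    """Encode a flat list of uint16 depth values as (zero-run, nonzero-run, deltas)."""
    out, i, prev = bytearray(), 0, 0
    while i < len(depth):
        zeros = 0
        while i < len(depth) and depth[i] == 0:
            zeros += 1; i += 1
        j = i
        while j < len(depth) and depth[j] != 0:
            j += 1
        _put_varint(out, zeros)          # run of invalid (zero) pixels
        _put_varint(out, j - i)          # run of valid pixels that follows
        for k in range(i, j):
            _put_varint(out, _zigzag(depth[k] - prev))
            prev = depth[k]
        i = j
    return bytes(out)

def rvl_decode(buf):
    depth, pos, prev = [], 0, 0
    while pos < len(buf):
        zeros, pos = _get_varint(buf, pos)
        nonzeros, pos = _get_varint(buf, pos)
        depth.extend([0] * zeros)
        for _ in range(nonzeros):
            z, pos = _get_varint(buf, pos)
            prev += _unzigzag(z)
            depth.append(prev)
    return depth
```

The round trip `rvl_decode(rvl_encode(frame)) == frame` is lossless, and frames with many zeros or smooth gradients compress well.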

@Anil-Bhujel
Author

Hi @zhonghong322
Thank you for the information. It would be a great achievement if we could reduce 10 minutes of video from 4 cameras to ~9GB. I am still struggling to connect multiple cameras over network IP. I hope your great team will support us on it.

Thanks in advance

@Danilrivero

I believe RVL is not supported, but you can use the MCAP file format, as shown in the rosbag2 repository, to configure your recordings and obtain better results by testing different compression options.

@Anil-Bhujel
Author

Hi @Danilrivero,
Thanks for the information. I will check that out. @jian-dong, I am still waiting for the network camera launch setup.

@Anil-Bhujel
Author

@jian-dong,
Also, I have a query regarding the camera's power requirement. I checked the camera's power adapter and found it labelled 12V 2.0A, i.e., less than 30W. But when we connected it to the Cqenpr 19-port PoE switch (30W port), the camera's response latency was very high (even higher for the depth scene). When we connected it to the 60W port, the camera's response time was much better. We found that a generic PoE switch has a 30W max output per port. Do you have test results on a generic PoE switch (30W max per port)? Please advise, as we are connecting all the cameras through PoE due to site-specific constraints. Also, what is the maximum cable length (Cat6e) we can use without compromising camera performance?

@jian-dong
Contributor

jian-dong commented Sep 26, 2024

Hi @Anil-Bhujel

I have added a sample launch file for multiple network cameras with Femto Mega. You will need to download the Orbbec Viewer first from this link: OrbbecSDK Download. Locate the IP configuration options (as shown in the image below) and set a unique IP address for each camera. Please don't enable DHCP. Connect all cameras to a network switch, and then connect the switch to your computer. Your computer's network configuration should be on the same subnet as your cameras' IP addresses. After configuring, please ping the IP addresses of your cameras to ensure they are reachable.

Once everything is set up, you can modify the launch sample file I provided to start multiple network cameras. Let’s focus on getting the multi-camera setup working first, and we can address further issues step-by-step. If you encounter any problems, please comment below on this issue. cc @xcy2011sky @zhonghong322 @jjiszjj

image
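For readers following along: a multi-camera network launch file for OrbbecSDK_ROS2 generally amounts to one camera node per device, each in its own namespace with that camera's static IP. The sketch below is illustrative only; the executable name, the `net_device_ip`/`net_device_port` parameter names, and the default port are assumptions to be checked against the actual sample file in the repository:

```python
# Sketch of a multi-network-camera launch file (names/ports are assumptions).
from launch import LaunchDescription
from launch_ros.actions import Node

CAMERA_IPS = {'camera_01': '192.168.1.10', 'camera_02': '192.168.1.11'}

def generate_launch_description():
    nodes = [
        Node(
            package='orbbec_camera',
            executable='orbbec_camera_node',    # assumed executable name
            namespace=name,                     # topics become /camera_01/..., /camera_02/...
            parameters=[{
                'enumerate_net_device': False,  # connect by explicit IP instead of discovery
                'net_device_ip': ip,            # assumed parameter name
                'net_device_port': 8090,        # assumed default Femto Mega port
            }],
        )
        for name, ip in CAMERA_IPS.items()
    ]
    return LaunchDescription(nodes)
```

Running this requires a sourced ROS 2 environment; treat it as a structural template, not a drop-in replacement for the sample file referenced above.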

@Anil-Bhujel
Author

Hi @jian-dong, @xcy2011sky, @jjiszjj , @zhonghong322
Thank you very much for your sample code. It worked fine. For now, I connected only two cameras. Can I customize it for more than two? Anyway, I will try it myself. I also tested recording and playback of the topics and found them working well. Thank you for your kind support. I have attached screenshots of some test results.

Screenshot from 2024-09-26 12-08-53
Screenshot from 2024-09-26 12-05-14

@Anil-Bhujel
Author

Successfully tested for 4 network cameras.
Screenshot from 2024-09-26 13-38-34

@jian-dong
Contributor

@Anil-Bhujel
Thank you for your feedback! I'm glad to hear that all four cameras are running smoothly. If you need any more help, please don't hesitate to reply directly to this issue. Due to the time difference, our response might be delayed, so we appreciate your understanding.

By the way, if you encounter any issues related to transmission efficiency, you can refer to the DDS tuning configuration documentation. For FastDDS, please refer to FastDDS Tuning Guide. For CycloneDDS, please check CycloneDDS Tuning Guide.

@jian-dong
Contributor

@Anil-Bhujel
Regarding the power requirement issue, I will consult with our electronics team to give you a more professional response. We will get back to you today. I appreciate your patience! cc @zhonghong322

@zhonghong322

zhonghong322 commented Sep 27, 2024

> Also, I have a query regarding the camera's power requirement. […] Do you have the testing result on a generic PoE switch (30W max per port)? […] Also, what could be the maximum cable length (Cat6e) we can use without compromising the camera performance?
@Anil-Bhujel
1. Our hardware engineers tested a single Mega connected to a PSE (Power Sourcing Equipment) device with 802.3at (24W), and it works perfectly. However, we haven't tested multiple Megas connected to a single PSE device. If the 30W setup doesn't work, the issue might be on the PSE device side rather than with our Mega. Although the PSE device supports multiple PoE devices, if each device operates simultaneously and draws high power (around 20W), the total power budget of the PSE might not be sufficient. This is why the 60W setup performs better.
2. Cable length (Cat6e): we have tested lengths over 15 meters, and it works normally. The Ethernet cable we are using is linked below. You can look for similar specifications on Amazon and try to choose a good one.
https://detail.tmall.com/item.htm?id=643056487037&priceTId=2147820017274045837478980e142e&spm=a21n57.sem.item.4.5ae43903nUafcY&utparam=%7B%22aplus_abtest%22%3A%22774849bb4671fa08f2549c0b887469df%22%7D&xxc=ad_ztc&sku_properties=1627207%3A20582712614

@Anil-Bhujel
Author

@jian-dong
Thank you for the valuable resource documents. And I understand about the different time zones; it's fine.
@zhonghong322
Thanks for the information. The cable length between the PoE switch and each camera will be less than 15 m, but from the switch to the computer it could be at least 70 m, so we are not confident about getting good data transmission. Anyway, we will try it and report the results.
@Danilrivero
Default MCAP didn't reduce the file size, and I didn't try the storage preset profile. They mention that zstd_fast is not recommended for long-term storage (which is our case), and I have doubts about retrieving quality data once compressed. Anyway, thanks for your suggestion.

@Anil-Bhujel
Author

@jian-dong,
How can we synchronize the camera frames? That is, frames should be captured at the same time, and frames in the recorded bag file must have the same timestamp in the header. Currently, we see somewhat random timestamps (see the attached screenshots). Can we set this by passing the time_domain launch parameter as "global"? We have to analyze frames taken from two sides of an animal using two cameras, so please suggest the best configuration. Also, what do the modes Free Run, Standalone, Primary, and Secondary mean in the synchronization configuration tab of OrbbecViewer?

Can we leverage the launch parameters documented at https://github.com/orbbec/OrbbecSDK_ROS2?tab=readme-ov-file#launch-parameters ?

These are our currently recorded topics. You can see the ROS 2 timestamp at the bottom of the window and the color image from each camera in the right-side windows.
Screenshot from 2024-09-27 15-08-47
Screenshot from 2024-09-27 15-16-30
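Until hardware synchronization is configured, frames from the two cameras can still be paired in post-processing by nearest header timestamp. A minimal sketch, assuming sorted per-camera timestamp lists in nanoseconds and a hypothetical pairing tolerance of 10 ms:

```python
def pair_frames(ts_a, ts_b, tolerance_ns=10_000_000):
    """Pair each timestamp in ts_a with the closest timestamp in ts_b (both sorted ascending)."""
    if not ts_b:
        return []
    pairs, j = [], 0
    for t in ts_a:
        # advance j while the next candidate in ts_b is at least as close to t
        while j + 1 < len(ts_b) and abs(ts_b[j + 1] - t) <= abs(ts_b[j] - t):
            j += 1
        if abs(ts_b[j] - t) <= tolerance_ns:
            pairs.append((t, ts_b[j]))
    return pairs
```

This only aligns timestamps after the fact; it cannot fix exposure-time skew between cameras, which is what the Primary/Secondary trigger modes address.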

@zhonghong322

> How we can synchronize the camera frames. […] Can we set it by passing the time_domain launching parameter as "global"? […] What is the meaning of Modes Free Run, Standalone, Primary, and Secondary in synchronization configuration tab in OrbbecViewer?

@Anil-Bhujel
For multi-device synchronization, please refer to this document.
https://www.orbbec.com/docs/set-up-cameras-for-external-synchronization_v1-2/

@jian-dong
Contributor

Hi @Anil-Bhujel,
Just to let you know, our team will be on National Day holiday from October 1st to October 7th. We will do our best to respond to your questions during this period, but please know that we may only be able to provide partial or delayed responses.

We appreciate your patience and understanding.

@Anil-Bhujel
Author

Hi @jian-dong
It's OK, have a nice holiday.
@zhonghong322
We have ordered the Sync Hub Pro and the Femto Mega sync adapter cables. Let's hope it solves the issue. I will keep you updated.

@Anil-Bhujel
Author

Hi @jian-dong
FastDDS tuning worked for me. Now the images coming from the ROS topic are nearly real-time. Thanks for the suggestion.

@Anil-Bhujel
Author

Anil-Bhujel commented Oct 7, 2024

Hi @jian-dong
I faced an issue selecting a different compression technique on the Femto Mega camera.
I tried to select it from OrbbecViewer. The log shows that the format and resolution have been set, but the setting is not saved permanently on the device. When I close OrbbecViewer, reopen it, and check the format, it is back to the MJPG default.
I have updated the firmware from 1.2.7 to 1.2.9, but the problem is still there. How can we select a different compression mode? Here are some screenshots.
Also, how can we select it from the launch file? Or, if we set it on the device, do we no longer need to pass it as a launch argument?

Screenshot from 2024-10-07 14-08-29

Screenshot from 2024-10-07 14-53-55

Screenshot from 2024-10-07 15-05-54

@zhonghong322

zhonghong322 commented Oct 8, 2024

> I faced an issue with selection of different compression technique in Femto Mega camera. […] How we can select the different compression mode? […] how we can select it from launch file or if we select it in the device, we don't need to pass it as launch argument?

You can configure the compression format in the ROS launch file.
DeclareLaunchArgument('color_format', default_value='MJPG'),

@Anil-Bhujel
Author

Hi @zhonghong322
Thank you, but I think there is an issue with the H264 and H265 color formats. When I pass MJPG as color_format, or omit the color_format launch parameter (the default), the node works fine: I can see the video in rqt, save topics to a ros2 bag file, play them back, and see ros2 topic echo /camera/color/image_raw/compressed output. When I pass H264 or H265 as color_format, the node starts, but when I try to visualize in rqt, record topics with ros2 bag, or echo the topic from the command line, the node starts giving errors like "Failed to convert frame to video frame". How can we record the compressed topic? And can we decode it back to original quality during playback?
The error occurs with H264 and H265 while trying to record the topics, echo them, or visualize in rqt. (Perhaps rqt couldn't decode the H264-compressed topic? But then why can't we record the topic either?)
Screenshot from 2024-10-08 16-21-24

Screenshot from 2024-10-08 16-20-23

Working launch command
Screenshot from 2024-10-08 16-17-11

@zhonghong322

> Thank you, but I think there is an issue with H264 and H265 color format. […] the node started to give error like "Failed to convert frame to video frame". How we can record the compressed topic? And can we decode to original quality while playback?

@jian-dong Can ROS support recording and playback of H264 data?

@jjiszjj
Collaborator

jjiszjj commented Oct 9, 2024

@Anil-Bhujel A tool node for femto_mega that decodes H264/H265 is provided under the femto_mega_h26x_decode branch. You can run this node to get the decoded H264/H265 video; see the tool node at this link: https://github.com/orbbec/OrbbecSDK_ROS2/blob/femto_mega_h26x_decode/orbbec_camera/tools/mega_h26x_decode_node.cpp
lQLPKGPlY8e5wIvNAeXNA5SwOcwcUHf2oCcG7Vxd3GQqAA_916_485

@Anil-Bhujel
Author

Anil-Bhujel commented Oct 10, 2024

Hi @jjiszjj
Thank you so much. Finally got it.

Screenshot from 2024-10-10 16-09-07

@Anil-Bhujel
Author

Anil-Bhujel commented Oct 11, 2024

Hi @zhonghong322
We received the Multi-Camera Sync Hub Pro and the Multi-Camera Sync Hub Pro Adapter. Do we need any configuration in the SDK too? Currently, I just used OrbbecViewer, configured synchronization by setting primary and secondary devices, and tested with two devices. The result is attached here. Can we eliminate the nanosecond-level differences too?

Screenshot from 2024-10-11 19-19-29

@jjiszjj
After adding mega_h26x_decode_node.cpp to the SDK and using color_format:=H264/H265, the camera node always published /camera/color/h26x_encoded_data, even with color_format:=MJPG and without the color_format launch parameter (default). However, the node logged the message Format: OB_FORMAT_MJPG.
Could you please check?

I used ros2 launch orbbec_camera femto_mega.launch.py color_format:=MJPG enumerate_net_device:=true and ros2 launch orbbec_camera femto_mega.launch.py enumerate_net_device:=true

However, ros2 launch orbbec_camera femto_mega.launch.py color_format:=H264/H265 enumerate_net_device:=true seems to work fine.

Also, can we get that encoded topic back to original image quality by running the decoding node?

Screenshot from 2024-10-11 19-28-18

@jjiszjj
Collaborator

jjiszjj commented Oct 12, 2024

Hi @Anil-Bhujel, thank you for your feedback. I have confirmed this issue from your report and will fix it in the near future.

@jjiszjj
Collaborator

jjiszjj commented Oct 14, 2024

Hi @Anil-Bhujel,

  1. The issue of /camera/color/h26x_encoded_data always appearing in the ros2 topic list has been resolved. For more information, please see: 405b815
  2. Regarding the image quality issue you mentioned: the RGB image quality of H264/H265 after decoding is lower than that of MJPG or YUYV. If you want to save the encoded data instead of the image data through rosbag, you can refer to the modifications to ob_camera_node.cpp in the following link to change which data is published on the topic /camera/color/h26x_encoded_data: 3cf375f

@Anil-Bhujel
Author

@jjiszjj
Thank you, it works well. Do you mean changing the published topic /camera/color/h26x_encoded_data to a custom one? Why couldn't we use the topic /camera/color/image_raw/compressed with H264/5 encoding? And in the case of MJPG or the default color format, is /camera/color/image_raw/compressed the MJPG-decoded image, or is it still in encoded format? I will test all three formats and let you know.

Further, we need to optimize power to the camera, so is there any facility to keep the camera in sleep mode when no foreground scene is present?

@Anil-Bhujel
Author

Anil-Bhujel commented Oct 18, 2024

Hi @jian-dong and @zhonghong322 ,
We received the Sync Hub Pro and the sync adapter cable for the Femto Mega. Do we need to make any amendments to the SDK software, or is setting up the synchronization configuration enough to get synced frames from multiple cameras? Also, do I need to change the camera mode accordingly in the multi_net_camera.launch.py file?
Also, the H264/5 encoder/decoder did not work when launching with multi_net_camera.launch.py.

Screenshot from 2024-10-18 13-45-53

Screenshot from 2024-10-18 13-55-56

@Anil-Bhujel
Author

Anil-Bhujel commented Oct 23, 2024

Hello @jian-dong,
We have faced a couple of issues while connecting Femto Mega cameras to the network.
Below are the setup scenarios and test results for individual cameras and for all 4 cameras on the network. Please respond ASAP on how we can get rid of these issues.
What could be the reason, and how can we receive messages at the full rate?
Scene 1:
When we connect a camera to the PoE switch, the depth frame rate received in OrbbecViewer drops to ~7-8 FPS (depth frame rate set at 30 FPS). Even when we power the camera from its own DC adapter and use only the network cable from the PoE switch, the reduced depth rate persists. This is the same in both cases (a single camera connected to the PoE switch, and all four cameras connected to the PoE switch). Moreover, we receive an even lower frame rate on the ROS 2 topic.
I have a query: if we power the camera from both the PoE cable and the DC adapter, does it draw power from both, or which supply takes priority?
Screenshot from 2024-10-23 18-05-58
All four cameras on the PoE switch, powered from their own DC adapters.
Screenshot from 2024-10-23 17-04-20
This is for a single camera.

Scene 2:
We connected an individual camera, and then all 4 cameras, to a non-PoE switch, with each camera powered by its own DC adapter. The individual camera provides 30 FPS for both color and depth images in OrbbecViewer and on the ROS 2 topics. But when we connect all 4 cameras, 2 cameras deliver the full rate (30 FPS for both color and depth) while the depth frame rate of the other 2 is reduced (~7-8 FPS).

Screenshot from 2024-10-23 17-57-46
This is for all 4 cameras connected to a non-PoE switch and powered with their own DC adapters.
Screenshot from 2024-10-23 13-52-12
This is for a single camera connected to a non-PoE switch and powered using its own DC adapter.

@jian-dong
Contributor

> We have faced couple of issues while connecting Femto Mega cameras in the network. […] Scene 1: When we connect the camera with PoE switch, the depth frame rate received in the OrbbecViewer is reduced to ~7-8 FPS (depth frame rate set at 30FPS). […] Scene 2: […] the frame rate of 2 cameras received in full rate (30FPS for both color and depth) and depth frame rate for 2 cameras received with reduce rate (~7-8 FPS).

@zhonghong322

@zhonghong322

zhonghong322 commented Oct 24, 2024

> We have faced couple of issues while connecting Femto Mega cameras in the network. […] Scene 1: When we connect the camera with PoE switch, the depth frame rate received in the OrbbecViewer is reduced to ~7-8 FPS (depth frame rate set at 30FPS). […] Scene 2: […] the frame rate of 2 cameras received in full rate (30FPS for both color and depth) and depth frame rate for 2 cameras received with reduce rate (~7-8 FPS).

@Anil-Bhujel
Scenario 1: It could be an issue with your PoE switch, since even a single Mega device connected to that switch experiences frame drops.

Scenario 2: You connected 4 Mega devices to the switch, which is then connected to your PC. I suspect both your switch and PC network ports are Gigabit Ethernet. Connecting 4 Mega devices might cause bandwidth issues. In this case, you can try reducing the frame rate to 15 fps, which should help.

Additionally, when both PoE and DC power are supplied, the device will prioritize using DC power.
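The bandwidth concern here is easy to quantify: the uncompressed Y16 depth stream alone dominates a Gigabit link. A rough estimate, ignoring color, IR, and DDS protocol overhead:

```python
# Uncompressed depth: 640 x 576 px, 16 bits/px (Y16), 30 fps
depth_mbps = 640 * 576 * 16 * 30 / 1e6   # per-camera depth bitrate in Mbps
four_cams_mbps = 4 * depth_mbps          # depth traffic alone for four cameras
print(round(depth_mbps, 1), round(four_cams_mbps, 1))  # 176.9 707.8
```

With color, IR, and transport overhead on top, four cameras approach or exceed 1 Gbps, which is why halving the frame rate helps.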

@Anil-Bhujel
Author

Anil-Bhujel commented Oct 29, 2024

@zhonghong322
Thank you so much. Although we had connected the computer's network cable to the Gigabit port of the switch, 1 Gbps is not enough to transmit all the messages from four cameras. A single Orbbec camera transmits nearly 400 Mbps when we use all topics. Now we are optimizing the network bandwidth by changing default parameters in the launch file (disabling unused topics) and also designing a higher-throughput network. For reference, I have attached a screenshot of the traffic received from a single camera with color resolution 1920x1080, depth resolution 640x576, frame rates of 30 FPS for both, and other parameters at their default values.

Regarding optimization: if I set enable_ir to false, does it affect the depth calculation?
I am only using the compressed color image and the raw depth image. In that case, which parameters can I disable to reduce network traffic without affecting the quality of the color and depth images?

This is the traffic from single Orbbec camera.
Screenshot from 2024-10-25 12-57-45

@jian-dong @jjiszjj
We found that H264/5-encoded videos are lightweight in terms of storage file size. However, the encoder/decoder node you provided works only for a single camera. Could you please update it to accept multiple network cameras? Currently, I am using the multi_net_camera.launch.py file to connect multiple network cameras.
Here is a file-size comparison between the default MJPG and H26X. As the attached results show, it saved around 300 MB for a one-minute video, which is a significant amount considering long-term recording.

Screenshot from 2024-10-21 13-12-38

@jjiszjj
Collaborator

jjiszjj commented Oct 30, 2024

@Anil-Bhujel In a subsequent version, I will adapt the node to handle H264/5-encoded video from multiple cameras.

@Anil-Bhujel
Author

@jjiszjj
When do you plan to release the next version of the H264/5 encoder for multiple network cameras? We are deploying the cameras soon.

@jjiszjj
Collaborator

jjiszjj commented Nov 9, 2024

@Anil-Bhujel
I am very sorry that I have not been able to reply in time. Due to my recent workload, the next version of the H264/5 encoder for multiple network cameras has not been released yet. I am very sorry; I will publish an update in the next few days.

@jjiszjj
Collaborator

jjiszjj commented Nov 11, 2024

Hi @Anil-Bhujel,
The H264/5 encoder for multiple network cameras has been updated.
For details, please see: 6a364f8
For how to use the new version of the node, please see the documentation: ab1faaa

@Anil-Bhujel
Author

Anil-Bhujel commented Nov 14, 2024

@jjiszjj
Thank you for your help. However, I couldn't build the package after updating those files. Here is my step-by-step approach.

  1. I downloaded the files mega_h26x_decode_params.json, ob_camera_node.cpp, and mega_h26x_decode_node.cpp from the 6a364f8 link.
  2. Created a tools folder and put the mega_h26x_decode_params.json file in the existing package. The path is: /src/OrbbecSDK_ROS2/orbbec_camera/config/tools/mega_h26x_decode_params.json (with parameter names updated for 4 cameras).
  3. Replaced the ob_camera_node.cpp file at /src/OrbbecSDK_ROS2/orbbec_camera/src/ob_camera_node.cpp.
  4. Replaced mega_h26x_decode_node.cpp at /src/OrbbecSDK_ROS2/orbbec_camera/tools/mega_h26x_decode_node.cpp.
  5. I have attached all the files and the results I got after building the package. Could you please suggest what is going wrong here? Did you miss any updated file, such as CMakeLists.txt?

orbbec_h26x_additional_files.zip

Screenshot from 2024-11-14 12-09-50

@jjiszjj
Collaborator

jjiszjj commented Nov 18, 2024

I found that the compilation failed in ob_camera_node.cpp, but I only modified some code there, and the file you provided did not produce this problem on my computer.
You can go back to the unmodified version of ob_camera_node.cpp and apply the changes manually to see whether the error reappears.
Also, I found that a comma is missing in the mega_h26x_decode_params.json you provided.
dingtalkgov_qt_clipbord_pic_2
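Syntax slips like a missing comma in a params file can be caught before a rebuild by round-tripping the text through Python's json module:

```python
import json

def validate_params(text):
    """Return 'OK', or the location and message of the first JSON syntax error."""
    try:
        json.loads(text)
        return "OK"
    except json.JSONDecodeError as e:
        return f"line {e.lineno}, col {e.colno}: {e.msg}"
```

For example, `validate_params(open('mega_h26x_decode_params.json').read())` would point at the missing comma directly.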

@Anil-Bhujel
Author

@jjiszjj
Thank you so much, it worked fine.

@Anil-Bhujel
Author

Hello,
We want to record the camera info parameters along with the color and depth topics. However, we couldn't record the camera info from the /camera_01/color/camera_info and /camera_01/depth/camera_info topics. When we tried to echo one (ros2 topic echo /camera_01/color/camera_info), it raised an error in the launch file and stopped that camera. We also tried to record the camera info using the ros2 bag record command, and the same error occurred (Error: Received signal: 11, received signal: 6, and process has died). Can you please check? I have attached a screenshot.
Screenshot from 2024-12-11 18-32-52

@jjiszjj
Collaborator

jjiszjj commented Dec 19, 2024

@Anil-Bhujel Thank you for your feedback. I found your log here: #79
