Calibration controller node process dies immediately #9
Hi, is the type of your `/livox/lidar` topic `sensor_msgs/PointCloud2`? We currently don't have an AVIA LiDAR at hand; if possible, it is best to send us your rosbag (with both the LiDAR and camera image topics recorded).
Could you post a more detailed log? It seems like something is wrong with the `python-pcl` dependency.
In RViz, I see the /livox/lidar topic as PointCloud2, but I do not believe that is the issue here. This is a Jetson Xavier AGX with JetPack 4.4, where PCL is 1.8 and VTK is 6.3 (one of the many oversights in NVIDIA's platform). So I cloned the python-pcl repository and attempted to change its setup.py to address the problem. python-pcl still does not install, but I may be able to get it to work if I remove library references like the following:
/usr/bin/ld: cannot find -lvtkexpat-6.3
Are any of those libraries needed by ACSC? If not, I can probably safely remove them, at least for this project.
Please indicate which logs you need. I don't wish to waste your time until we have confirmed that this is not a dependency problem.
I notice that you are using an AGX; if the dependencies are too complex on an embedded platform, you may just use the AGX for data collection and run the rest of the calibration on a PC with the collected data.
I have spent some time rebuilding dependencies and ensuring everything is installed. I rebuilt VTK 6.3, and python-pcl was finally able to build. Now, I am running the calibration node, and it is still failing immediately. Here is a copy of the entire log file: calibration log file
I have run rostopic list and verified that both /rgb/image_raw and /livox/lidar topics are available. Here is a copy of my data_collection.yaml (nothing was changed from the original):

```yaml
# whether use automatic chessboard localization
# scene size when labeling chessboards: xmin, xmax, ymin, ymax, zmin, zmax
BOUND: [2, 12, -4, 4, -2, 1]
# if there is pre-labeled data
RELABEL: False
# resolution of displayed point cloud projection, adjust it according to your screen size
VIRTUAL_CAM_INTRINSIC: 100
# how many lines of log are displayed on chessboard labeler GUI
LOG_DISPLAY_LINES: 6
```

Is there a way to increase the verbosity of the log to see why the calibration node is stopping?
Thanks a lot for your affirmation! There is limited useful information in the log... it will be more efficient for debugging if you send your rosbag to us at [email protected]; the rosbag needs a short sequence (about 1 minute) of recorded Livox LiDAR point cloud data and your camera image data. Since the data-root directory is being created successfully, the error may be happening in the callback functions (`cb_image`, `cb_pointcloud`). You may also try running the launch file alone, without publishing the sensor data topics; if the node still crashes immediately, then the problem may be in the UI thread (`thread_render_bird_view`). For visualizing the Livox point cloud in RViz, you need to set the driver's output format to `sensor_msgs/PointCloud2`.
Yes, that was one of the first tests I tried -- running the launch file without the sensor data topics. It dies with or without those topics. I also tried debugging with GDB, but that doesn't work with Python scripts, and PDB does not show any issues in the xterm console. I will follow your advice and try debugging cb_image, cb_pointcloud, and thread_render_bird_view in calibration_controller_node.py. I have a feeling that the issue is in the UI, because these Jetson devices ship very inconsistent library versions. JetPack 4.4 has VTK 6.3, for example, and OpenCV 4.1.1 is installed, while ROS Melodic, of course, uses 3.2. So we had to uninstall the OpenCV 3.2 core and recompile the Catkin workspace components that were broken by doing that. There really is no other way on these edge devices. This brings up a very good question: I am tired of struggling with the AGX, NX, Nano, etc. They have been nothing but a headache. Would you be able to recommend some basic hardware specs that work well with Livox-based SLAM? I am inclined to ship the AGX back to NVIDIA and just procure a laptop with a high-quality GPU.
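One debugging option that does work for native crashes inside a Python process (exit code -11 is a SIGSEGV): the `faulthandler` module, which dumps the Python stack when a fatal signal arrives. A minimal sketch, assuming it is added at the top of calibration_controller_node.py; on Python 2.7 the module is a pip backport (`pip install faulthandler`) rather than part of the standard library:

```python
import faulthandler

# Dump the Python traceback to stderr if the process receives SIGSEGV,
# SIGFPE, SIGABRT, or SIGBUS -- this points at the Python line that
# triggered the native crash (e.g. inside a PCL/VTK binding call).
faulthandler.enable()
```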
In that case, I suggest that you run the data collection and the rest of the calibration process on a PC (the computation requirement of this tool is low, so almost any PC can run it), then use the solved extrinsic parameters in your downstream task on the AGX. Since the calibration is an offline process and does not need to be done frequently, this may be a feasible solution. Actually, we also use the AGX and TX2, and the Livox-based SLAM algorithms work quite well on these platforms... During deployment of these algorithms, the most problematic part is the UI libraries; other essential libraries like PCL and OpenCV work fine.
We just tested this tool on an AGX, and it works normally; the dependencies are listed below in case they help:

System: Ubuntu 18.04
ROS version: Melodic
C++ libraries: PCL 1.8, VTK 7.1
Python libraries: Python 2.7, NumPy 1.16.2
The entire problem we have had with the AGX is that none of the libraries you mention are compiled for CUDA, so there is no acceleration. For example, we were looking at RTAB-Map as a possible solution to create RGB point clouds from Livox data, since none of the Livox SLAM solutions offer this capability. That particular project is quite slow on NVIDIA edge devices, so many of us have been trying to compile OpenCV 4.5.1 with CUDA enabled to realize performance gains. I believe that is where the problem is: your configuration is stock, apart from VTK 7.1 (AGX JetPack 4.4.1 ships VTK 6.3). Also, you have NumPy 1.16.2, but that version presents issues when trying to compile Python 2.7 bindings for a CUDA-enabled OpenCV 4. In summary, the problem is CUDA, and it is probably not there for SLAM enhancement anyway. From what I can see, we were misled into believing that NVIDIA AGX CUDA-enabled GPU computing would provide better throughput, but NVIDIA side-steps CUDA altogether: JetPack installs it on the edge devices, yet OpenCV, PCL, Eigen, etc. are not compiled for CUDA. It has been very confusing for us. It would appear that your project, too, does not leverage CUDA/GPU acceleration and works best with the default libraries. We will try removing all CUDA-enabled libraries and going back to JetPack's defaults. Perhaps then this calibration will work. In the meantime, if you could point us toward an effective way to generate RGB-enhanced point clouds from Livox data, we would be appreciative. That was the entire reason for considering your project. We wish to perform 3D reconstruction for "reality capture", migrating parts of our time-consuming photogrammetry pipeline. I have seen the CamVox project, but that requires specific hardware. I like the idea of using Livox LOAM to generate dense point clouds and, perhaps, projecting RGB camera data onto those clouds either in real time or in post-processing. Any ideas you have here would be quite valuable.
I have confirmed that the problem was the ROS Catkin workspace being compiled against OpenCV 4.5.1. That compilation is necessary for the Azure Kinect ROS driver, which has an OpenCV 4 dependency. The default ROS Melodic workspace instead uses OpenCV 3.2. With a new workspace using these defaults, we are able to launch the calibration node. But what this does mean is that the Azure Kinect cannot be calibrated with ACSC, at least not until you provide an upgrade to support ROS Noetic (which uses OpenCV 4). This appears to be a problem for many Livox-based tools now. For example, the Livox SDK has a hard dependency on ROS Melodic.
There are 3 feasible solutions for calibrating your Azure Kinect and Avia LiDAR:
Notice that the calibration is an offline process; it does not need to be performed frequently as long as your sensor placement stays fixed, so there is really no need to spend a lot of time solving dependency problems to make this tool run on one specific platform... For fusing Livox point clouds with RGB information, you may check out the implementation in the `img_to_pc` script in this repo.
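If it helps as a starting point, here is a minimal sketch of that fusion idea (not the ACSC implementation itself): project each LiDAR point into the image with the solved extrinsics and sample the pixel color. It assumes an undistorted image, 3x3 pinhole intrinsics `K`, and an extrinsic rotation `R` / translation `t` taking LiDAR coordinates into the camera frame:

```python
import numpy as np

def colorize(points_xyz, image, K, R, t):
    """Return (M, 6) array of camera-frame XYZ plus sampled BGR color."""
    cam = points_xyz @ R.T + t           # (N, 3) points in the camera frame
    cam = cam[cam[:, 2] > 0]             # keep only points in front of the camera
    uv = cam @ K.T                       # perspective projection
    uv = uv[:, :2] / uv[:, 2:3]          # normalize by depth
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    h, w = image.shape[:2]
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = image[v[valid], u[valid]]   # pixel color for each projected point
    return np.hstack([cam[valid], colors.astype(np.float64)])
```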
Thank you for the details! Before I was able to read your comment, I actually managed to get the Kinect and Livox Avia synchronized on the AGX in parallel. The approach is simple:
Now I can see both the Avia LiDAR bird's-eye view and the Kinect RGB on screen. Thank you for all your assistance with these challenges. I will report back once we have had an opportunity to calibrate.
Testing was not successful. The bird's-eye and image views displayed correctly, but the main calibration window on the AGX did not respond to keystrokes or the mouse wheel; pressing Enter did nothing. So we had to build a Linux VM with all the necessary libraries, including VTK 7.1, in order to run the calibration offline. Nonetheless, ACSC did not recognize any corners in our checkerboard. A total of 24 image/PCD pairs were captured using a printout of the checkerboard image you provided, at varying distances between 4 and 6 meters. Unfortunately, the requirement to keep obstacles more than 3 m away from the board could not be met, as I do not have that kind of space, and it is functionally impossible to go outdoors in my region at the moment. Both images and PCDs (npys) were captured without a problem; however, upon running the calibration, corners are not recognized in any of the pairs. Here are the Azure Kinect's intrinsics and distortion at 3072p:
And our chessboard section in the sample.yaml:
We measured the longest edge of the checkerboard, which was 18.5 inches (i.e., 0.4699 m). Nonetheless, this is the result:

Calibration log:

```
Localization done. min cost=55.8592336187
Optimized loss: 781.475127274
Optimized loss: 730.928944589
Optimized loss: 603.050610815
Optimized loss: 453.279894444
Optimized loss: 681.950046558
Optimized loss: 795.501110147
Optimized loss: 692.833137561
Optimized loss: 775.933379761
Optimized loss: 227.642919602
Optimized loss: 755.425766099
Optimized loss: 297.847730132
Optimized loss: 495.280161899
Optimized loss: 321.278684472
Optimized loss: 1031.14555376
Optimized loss: 685.531473613
Optimized loss: 403.712808516
Optimized loss: 445.822754019
Optimized loss: 279.453232926
Optimized loss: 333.487644462
Optimized loss: 772.225411781
Optimized loss: 960.135494077
Optimized loss: 361.263561029
Optimized loss: 383.640166802
Calculating frame: 0 / 24
Calculating frame: 1 / 24
...
Calculating frame: 23 / 24
```

If I turn on visualization, I receive multiple "mayavi not found, skip visualize" messages, and then 3-4 full-screen windows appear that are not resizable. They do not respond to key controls, but I was able to right-click on one of them and save the result: So it is clear that corners are being recognized in at least a few of the photos. I am unable to view the NPY files, as they appear to be in a proprietary format; visualizing them would be useful in pinpointing where the problem lies.
Hi, the placement of the checkerboard is wrong: it needs to be placed on a thin monopod, or suspended in the air with a thin wire. It should not be attached to any surface larger than the checkerboard itself (as yours is), or the checkerboard localization from the point cloud won't produce the right result (that's why the log constantly outputs 'Missing 3D Corners'). Please read the checkerboard-placement section of the README.
We have created another calibration board, this time with no borders, as discussed. ACSC fails to recognize 3D corners for all 18 pairs, without exception. Is there no way to verify the quality of the captured PCDs? I believe that may be where the problem is arising, but without a format that can be used for troubleshooting, we are left guessing whether there is an issue with the Livox. During capture, the monopod was visible in every single frame for both the Livox and the RGB camera. The distance used was between 4 and 8 meters, so there should not be any detection issues with the Avia. Here is a single-frame capture of the calibration board: And here is the RGB image from the same position: Any additional guidance you could provide for troubleshooting would be much appreciated. Looking at the NPY output, for example, would confirm that the data stream itself isn't the problem. We are unable to take the capture outside due to currently dangerous ambient-air conditions in our region; we are confined to this space. Also, given that we have used your calibration checkerboard image, please confirm that its full printed grid size is 0.47 m, as we have in our sample.yaml. In your instructions, you mention "length in meters of the grid's side", but this is slightly ambiguous; we take "side" to mean the left or right edge, which is the long side. Again, here are our settings: chessboard: It is non-intuitive to list a width of 7 when, in fact, it is 5, but your comments clearly state that W will always be the long side. Could setting these incorrectly cause issues with 3D corner detection? It does not seem to affect the RGB corners. If the mirror or objects in the background are confusing the detection algorithm, I can hang a black or green muslin in front of them; I haven't looked at the code in detail, so I don't know if that would help. The only other thing I can think of is that we taped the sections of the checkerboard onto a cardboard backing, and the LiDAR may be reflecting off the tape surface in some areas (on the right of the grid, for example).
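One note on the NPY files: they are NumPy's standard array format rather than anything proprietary, so a quick sanity check is easy. A minimal sketch, assuming each file stores an N x 4 array of x, y, z, intensity (verify the actual shape on your own data; the file path below is hypothetical):

```python
import numpy as np

pc = np.load("data_root/pcds/0.npy")   # hypothetical path to one captured frame
print(pc.shape, pc.dtype)              # expect roughly (N, 4), float
print("xyz min:", pc[:, :3].min(axis=0), "xyz max:", pc[:, :3].max(axis=0))
print("intensity range:", pc[:, 3].min(), pc[:, 3].max())
```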
Hello, I noticed that you mentioned setting up the environment and running ACSC on a Xavier AGX. I have recently been configuring the environment on this device as well; following the dependencies you provided and the method in that link, I successfully installed VTK 7.1 and PCL 1.8. However, when installing python-pcl, the build fails: running python setup.py build_ext -i produces the error below. Do you know what might be the cause? The environment is Ubuntu 18.04.
Hello, this problem is caused by a mismatch between the VTK path that python-pcl writes into setup.py by default and the path where VTK is installed on your machine. Please refer to the solutions for problem case 2 or case 3 in my article: https://zhuanlan.zhihu.com/p/336875349. If you still have problems, please update this issue and I will follow up promptly.
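For reference, the fix in those problem cases amounts to making python-pcl's setup.py agree with the locally built VTK. A hypothetical excerpt; the variable names are illustrative, so match them against the actual setup.py:

```python
# Point python-pcl's build at the VTK actually installed on this machine.
vtk_version = '7.1'                               # suffix of the libvtk*-7.1.so files
vtk_include_dir = '/usr/local/include/vtk-7.1'    # where the VTK headers were installed
vtk_library_dir = '/usr/local/lib'                # where the libvtk*-7.1.so files live
```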
We have finally been able to achieve what appears to be a reasonably accurate calibration result. Only 14 of the 24 pairs were recognized, but the projection validation is showing this: The first question is: why is the image color space being converted? I looked at the Python script, and I don't see methods invoked that would change it (e.g. cvtColor). The original image coming from the K4A is raw BGRA. Unfortunately, we are unable to visualize a colored point cloud, because python-pcl is no longer maintained (the authors have left the project), and it does not work with Python 3. We have installed python-pcl for Python 2.7; however, Mayavi no longer supports Python 2. If possible, could you share your Mayavi, traits, pyface, etc. versions? I did not see them in the dependency list you provided above for your AGX. I have been attempting to install them separately, directly from the Enthought repositories, but even earlier versions have Python syntax issues. In the meantime, I will attempt to modify your code to output the colored point cloud to a PCD directly. If you have any sample code to get us started, that would be helpful!
The entire day has been spent trying to derive useful data from the Livox Avia. After conferring with another colleague, it would appear that these LiDARs are not capable of producing data usable by other platforms, given their proprietary ROS driver/SDK and the absence of supporting software. For example, /livox/lidar custom messages may be broadcast and received by ROS subscribers, but even with the ACSC-derived extrinsics, those messages cannot be converted into point cloud files readable by CloudCompare, MeshLab, or any other major software that consumes point cloud data. I presume that is why you chose to stream the data into NumPy arrays; it is the only way to make sense of otherwise non-standard data streams. We were able to use your img_to_pc script to feed the python-pcl library data in the XYZRGBA format, but as mentioned above, the color is distorted in some way by the ACSC pipeline. For example, here is the original photo: And here is the image projected onto the point cloud using your script: Mayavi is a niche, visualization-only tool; we need a standardized, portable solution. Converting NumPy to PCD is not a viable method to generate a colored point cloud, as a single frame takes ~60 seconds, given the need to convert the NumPy array's RGBA into a packed np.float32. We have attempted other methods to achieve a satisfactory result, but found no solutions. For example, we applied the ACSC extrinsics to the Livox SDK lidar-camera calibration tool, and it was able to successfully project an image onto a point cloud from our original scene through the double doors: This is a very good result, considering the low-light environment at night. Unfortunately, it is an RViz snapshot; nothing can be done from there to take the point cloud offline. We modified the original code to include timestamps in the headers and then recorded the output to PCD files using the pointcloud_to_pcd tool. Those PCDs were merged into a consolidated file, and the pcl_pcd2ply tool was used to convert it into a binary PLY. Ultimately, neither the PCD nor the PLY can be consumed by CloudCompare or MeshLab. MeshLab complains: After clicking OK, the screen is blank (no points). In CloudCompare, both the PCD and the PLY load without a problem, but again, no points are displayed. I understand these are not problems specifically related to your tool, but I wanted to make sure you know that even with the extrinsic data in hand, calibrating these solid-state LiDARs with RGB cameras appears to be a fruitless endeavor, as there is no apparent way to harvest their output. We were hopeful that Livox LiDAR could be used to generate RGB-enhanced point clouds that are faithful to the original RGB stream. It is disappointing to find that this is not possible, at least not yet. Hopefully, Livoxtech will address these problems in the future, and we can use these tools in our pipeline.
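On the ~60 seconds per frame: if the conversion loops over points in Python to pack RGBA into PCL's packed-float `rgb` field, it can be vectorized. A minimal sketch, assuming uint8 arrays `r`, `g`, `b` of length N:

```python
import numpy as np

def pack_rgb(r, g, b):
    """Pack per-point uint8 R/G/B into PCL's packed-float rgb field."""
    rgb = (r.astype(np.uint32) << 16) | (g.astype(np.uint32) << 8) | b.astype(np.uint32)
    # Reinterpret the 32-bit pattern as float32, as PCL expects,
    # with no per-point Python loop.
    return rgb.view(np.float32)
```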
Hi, we are glad that this tool eventually helped you solve an accurate extrinsic. For your first question: the different colors of the projected points represent different reflectance intensities; since intensity is single-channel data, we map it to RGB with a gist_ncar-style colormap (implemented here: Line 126 in fd989bd).
For the second question, it may be because some of the RGB values of the points overflowed; you should manually constrain the RGB values to (0, 255) (or to (0, 1), depending on your visualization tool). For us, the solved extrinsic parameters are mostly used for multi-sensor-fusion-based object detection/segmentation and path planning/obstacle avoidance for self-driving vehicles. The visualization part of this project is therefore just a tool to verify the accuracy of the calibration results quickly and intuitively, and we haven't gone deep there... I am currently playing with CloudCompare, and working on utilities to directly export the fused point cloud and RGB image after calibration into files that downstream tools like CloudCompare can use directly.
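A small sketch of both points, assuming matplotlib is available: map the single-channel intensity through gist_ncar (the colormap named above) and clamp the result into a valid display range. The input file name is hypothetical:

```python
import numpy as np
import matplotlib.pyplot as plt

intensity = np.load("intensities.npy")                 # hypothetical per-point reflectance
norm = (intensity - intensity.min()) / max(np.ptp(intensity), 1e-9)
cmap = plt.get_cmap("gist_ncar")
rgb = cmap(norm)[:, :3]                                # floats in [0, 1]
rgb_u8 = np.clip(rgb * 255.0, 0, 255).astype(np.uint8) # constrained to (0, 255)
```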
I finally discovered the problem with colored point cloud export, at least in one context. If the data is stored in an object of type pcl::PointCloud<pcl::PointXYZRGB> and then saved using pcl::PCDWriter::writeBinary, the RGB data is dropped and only one field is used, for intensity (as in PointXYZI). CloudCompare then interprets this as NaN vertices. The solution is simple: include <pcl/io/ply_io.h> in the source and save the PointXYZRGB cloud using pcl::io::savePLYFileBinary. The file is then readable by CloudCompare and other tools. Here is an example in MeshLab:
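For anyone stuck without the C++ toolchain, the same end result can be reached with no PCL at all, since ASCII PLY is trivial to emit. A minimal sketch, assuming an N x 3 float `xyz` array and an N x 3 uint8 `rgb` array:

```python
import numpy as np

def save_ply_ascii(path, xyz, rgb):
    """Write XYZ + RGB points to an ASCII PLY readable by CloudCompare/MeshLab."""
    header = "\n".join([
        "ply",
        "format ascii 1.0",
        "element vertex %d" % len(xyz),
        "property float x", "property float y", "property float z",
        "property uchar red", "property uchar green", "property uchar blue",
        "end_header", "",
    ])
    with open(path, "w") as f:
        f.write(header)
        for (x, y, z), (r, g, b) in zip(xyz, rgb):
            f.write("%f %f %f %d %d %d\n" % (x, y, z, r, g, b))
```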
Thank you for the solution; the environment is now successfully installed on the Xavier. However, when I run python calibration.py --config ./configs/sample.yaml on the sample data, I get the following message:
Hello, sorry for the late reply. First, setting NUM_WORKERS to 0 in sample.yaml will fix the hang and give you more detailed error logs. At the moment this looks like a problem caused by PCL's VoxelGrid; which PCL version did you install on the AGX?
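For reference, the change is a single key in the config (key name as used above):

```yaml
# sample.yaml: run single-threaded so the real exception surfaces
# instead of a silent multiprocessing hang
NUM_WORKERS: 0
```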
Hello, the PCL version is 1.8. Preliminary debugging suggests it is blocking at the voxel-filtering call inside the locate_chessboard function.
Hello, after reflashing the Xavier I recompiled and installed VTK 7.1 and PCL 1.8.1. python-pcl now builds successfully, but import pcl fails with an undefined symbol: XXXXXXXXXX error. I followed the link you provided, https://zhuanlan.zhihu.com/p/336875349 ...
I have successfully built the ACSC workspace and, instead of passing parameters, have configured the lidar_camera_calibration.launch file as follows:
<arg name="config-path" default="/home/visionarymind/livox_ws/src/ACSC/configs/data_collection.yaml"/> <arg name="image-topic" default="/rgb/image_raw"/> <arg name="lidar-topic" default="/livox/lidar"/> <!-- <arg name="lidar-topic" default="/velodyne_points"/>--> <arg name="saving-path" default="/home/visionarymind/Documents/calibration"/> <arg name="data-tag" default="''"/>
I then launch it; this immediately produces the following error (in red):
```
[calibration_controller_node-2] process has died [pid 15643, exit code -11, cmd /home/visionarymind/livox_ws/src/ACSC/ros/livox_calibration_ws/src/calibration_data_collection/scripts/calibration_controller_node.py --config-file /home/visionarymind/livox_ws/src/ACSC/configs/data_collection.yaml --data-saving-path /home/visionarymind/Documents/calibration --overlap 50 --data-tag --image-topic /rgb/image_raw --lidar-topic /livox/lidar --lidar-id -1 __name:=calibration_controller_node __log:=/home/visionarymind/.ros/log/1de314e4-56ea-11eb-8688-48b02d2b8961/calibration_controller_node-2.log].
log file: /home/visionarymind/.ros/log/1de314e4-56ea-11eb-8688-48b02d2b8961/calibration_controller_node-2*.log
```
Any ideas what could be the problem?