The source code of CRLFnet.

Environment: Ubuntu 20.04 + ROS (Noetic) + Python 3.x
- If using Google Colab, the recommended environment is CUDA 10.2 + PyTorch 1.6; CUDA 11.3 + PyTorch 1.11 is known not to work (see the version check below).
- Please refer to INSTALL.md for the installation of `OpenPCDet`. Use the correct version of CUDA; if a build fails, delete the entire `build` folder before changing the CUDA version and rebuilding.
- Install the `ros_numpy` package manually (see the install sketch below). Source code: https://github.com/eric-wieser/ros_numpy. Installation guide: https://blog.csdn.net/mywxm/article/details/121945880
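
The environment note above can be checked from the shell. First, a quick sanity check of the CUDA/PyTorch pairing (standard CUDA and PyTorch tooling, nothing project-specific):

```bash
# print the CUDA toolkit version used for compilation
nvcc --version
# print the PyTorch version, the CUDA version it was built against, and GPU availability
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```

For the manual `ros_numpy` install, a minimal sketch assuming a catkin workspace at `~/catkin_ws` (adjust the path to your layout):

```bash
cd ~/catkin_ws/src                                       # assumption: your catkin workspace lives here
git clone https://github.com/eric-wieser/ros_numpy.git   # fetch the package source
cd ~/catkin_ws
catkin_make                                              # build the workspace so the package is found
source devel/setup.bash
```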
The following files contain absolute paths that may need to be adjusted for your machine:
| File Path | Line(s) |
| --- | --- |
| src/camera_info/get_cam_info.cpp | 26, 64, 102, 140, 170, 216, 254, 292, 330, 368 |
| src/LidCamFusion/OpenPCDet/tools/cfgs/custom_models/pointrcnn.yaml | 5 |
| src/LidCamFusion/OpenPCDet/tools/cfgs/custom_models/pv_rcnn.yaml | 5 |
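
To locate these hard-coded paths before editing, a simple `grep` over the listed files works (an illustrative sketch; the `/home/` pattern is an assumption and may need tuning for your checkout):

```bash
# list lines containing absolute paths (assumed to start with /home/) in the affected files
grep -n "/home/" \
    src/camera_info/get_cam_info.cpp \
    src/LidCamFusion/OpenPCDet/tools/cfgs/custom_models/pointrcnn.yaml \
    src/LidCamFusion/OpenPCDet/tools/cfgs/custom_models/pv_rcnn.yaml
```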
Build the project from the Dockerfile:

```bash
docker build -t [name]:tag /docker/
```

or pull the image directly:

```bash
docker pull gzzyyxy/crlfnet:yxy
```
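
To start a container from the pulled image, a typical invocation looks like this (a sketch: `--gpus all` assumes the NVIDIA Container Toolkit is installed, and the container name `crlfnet` is arbitrary):

```bash
# run an interactive shell in a GPU-enabled container
docker run -it --gpus all --name crlfnet gzzyyxy/crlfnet:yxy /bin/bash
```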
- If a GPU and CUDA are available on your device, you can set the parameter `use_cuda` to `True` in `src/site_model/config/config.yaml`.
- Please download `yolo_weights.pth` from jbox and move it to `src/site_model/src/utils/yolo/model_data` (a scripted sketch of both steps follows this list).
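
Both steps can be scripted; a sketch, assuming the key appears as `use_cuda: false` in `config.yaml` (check the file before running `sed`) and that the weights were downloaded to `~/Downloads`:

```bash
# enable CUDA in the config (assumes the key is written as "use_cuda: false")
sed -i 's/use_cuda: [Ff]alse/use_cuda: True/' src/site_model/config/config.yaml
# move the downloaded YOLO weights into place (assumes ~/Downloads as the download location)
mv ~/Downloads/yolo_weights.pth src/site_model/src/utils/yolo/model_data/
```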
The steps to run the radar-camera fusion are listed as follows.
For the last command, an optional parameter `--save` or `-s` is available if you need to save the tracks of the vehicles as images. The `--mode` or `-m` parameter has three options: `normal`, `off-yolo` and `from-save`. The `off-yolo` and `from-save` modes enable the user to run YOLO separately to simulate a higher FPS.
```bash
cd /ROOT_DIR/

# load the simulation scene
roslaunch site_model spawn.launch   # load the site
roslaunch pkg racecar.launch        # load the vehicle
rosrun pkg servo_commands.py        # control the vehicle manually
rosrun pkg keyboard_teleop.py       # use WASD to control the vehicle

# run the radar message filter
rosrun site_model radar_listener.py

# run the rad-cam fusion program
cd src/site_model
python -m src.RadCamFusion.fusion [-m MODE] [-s]
```
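
For instance, to exercise the two non-default modes using only the options documented above (the pairing of the two runs follows the description of `off-yolo` and `from-save`; the details of their interaction may differ):

```bash
cd src/site_model
# first pass: run fusion in off-yolo mode and save the outputs
python -m src.RadCamFusion.fusion -m off-yolo -s
# second pass: run fusion from the saved results
python -m src.RadCamFusion.fusion -m from-save
```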
Two commands are needed for camera calibration after `spawn.launch` is launched. The relevant files already exist in the repository, so if the poses of the model components in the `.urdf` files haven't been modified, skip this step.
```bash
rosrun site_model get_cam_info                                   # get the relevant camera parameters from gazebo
python src/site_model/src/tools/RadCamFusion/generate_calib.py   # generate the calibration from the camera parameters
```
This part uses `OpenPCDet` as the detection tool; refer to CustomDataset.md for how to train on a self-produced dataset.
Configurations for the model and the dataset need to be specified:

- Model configs: `tools/cfgs/custom_models/XXX.yaml`
- Dataset configs: `tools/cfgs/dataset_configs/custom_dataset.yaml`

Currently `pointrcnn.yaml` and `pv_rcnn.yaml` are supported.
Create dataset infos before training:
```bash
cd OpenPCDet/
python -m pcdet.datasets.custom.custom_dataset create_custom_infos tools/cfgs/dataset_configs/custom_dataset.yaml
```
The files `custom_infos_train.pkl`, `custom_dbinfos_train.pkl` and `custom_infos_test.pkl` will be saved to `data/custom`.
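
A quick way to sanity-check the generated infos is to load one of the pickles and count its entries (a sketch; it assumes each info file deserializes to a Python list, the usual OpenPCDet convention):

```bash
# print the type and number of entries in the training infos (assumes a pickled list)
python -c "import pickle; infos = pickle.load(open('data/custom/custom_infos_train.pkl', 'rb')); print(type(infos), len(infos))"
```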
Specify the model using the YAML files defined above.

```bash
cd tools/
python train.py --cfg_file path/to/config/file/
```
For example, if using PV_RCNN for training:
```bash
cd tools/
python train.py --cfg_file cfgs/custom_models/pv_rcnn.yaml --batch_size 2 --workers 4 --epochs 80
```
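
Training progress can be monitored with TensorBoard; `train.py` writes event files under the run's output directory (the exact path below assumes the default output layout seen in the prediction example later in this README):

```bash
# assumption: default output layout output/custom_models/pv_rcnn/default/
tensorboard --logdir ../output/custom_models/pv_rcnn/default/tensorboard
```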
Download a pretrained model through these links:

| Model | Time Cost | URL |
| --- | --- | --- |
| PointRCNN | ~3h | https://drive.google.com/file/d/11gTjqraBqWP3-ocsRMxfXu2R7HsM0-qm/view?usp=sharing |
| PV_RCNN | ~6h | https://drive.google.com/file/d/11gTjqraBqWP3-ocsRMxfXu2R7HsM0-qm/view?usp=sharing |
Prediction on the local dataset helps to check the result of training.

```bash
python pred.py --cfg_file path/to/config/file/ --ckpt path/to/checkpoint/ --data_path path/to/dataset/
```
For example:
```bash
python pred.py --cfg_file cfgs/custom_models/pv_rcnn.yaml --ckpt ../output/custom_models/pv_rcnn/default/ckpt/checkpoint_epoch_80.pth --data_path ../data/custom/testing/velodyne/
```
The results can be visualized in rviz.
Follow these steps for lidar-camera fusion only. Some of them need separate bash terminals. For the last command, the additional parameter `--save_result` is required if you need to save the fusion results as images.
```bash
cd /ROOT_DIR/
roslaunch site_model spawn.launch   # start the solid model

# (generate camera calibrations if needed)

python src/site_model/src/LidCamFusion/camera_listener.py       # cameras around lidars start working
python src/site_model/src/LidCamFusion/pointcloud_listener.py   # lidars start working
rosrun site_model pointcloud_combiner   # combine all the point clouds and fix their coords
cd src/site_model/
python -m src.LidCamFusion.fusion [--save_result]   # start camera-lidar fusion
```
The whole project contains several parts which need to be started up through separate commands. The following commands show how to start everything.
```bash
cd /ROOT_DIR/
source ./devel/setup.bash
roslaunch site_model spawn.launch

# (generate camera calibrations if needed)

# radar-camera fusion
rosrun site_model src/tools/radar_listener.py
cd src/site_model
python -m src.RadCamFusion.fusion [--save_result]

# lidar-camera fusion
python src/site_model/src/LidCamFusion/camera_listener.py
python src/site_model/src/LidCamFusion/pointcloud_listener.py
rosrun site_model pointcloud_combiner
cd src/site_model/
python -m src.LidCamFusion.fusion [--save_result]
```
Some problems may occur during debugging.

- Out of memory even with `batch_size=1`: open-mmlab/OpenPCDet#140
- Segmentation fault (core dumped) when running demo.py: open-mmlab/OpenPCDet#846
- "N > 0 assert failed. CUDA kernel launch blocks must be positive, but got N = 0" when training: open-mmlab/OpenPCDet#945
- `raise NotImplementedError`, NaN or Inf found in input tensor when training: open-mmlab/OpenPCDet#280
- Recall calculation bug for empty scenes (fixed upstream): open-mmlab/OpenPCDet#908
- Installation error `fatal error: THC/THC.h: No such file or directory` on `#include <THC/THC.h>`: open-mmlab/OpenPCDet#1014
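
For the last item, a common cause is that the `THC/THC.h` headers were removed in PyTorch 1.11, which matches the environment note at the top of this README. A hedged workaround is to pin an earlier PyTorch before rebuilding OpenPCDet (the version numbers follow the recommended CUDA 10.2 + PyTorch 1.6 pairing; adjust them to your CUDA version):

```bash
pip install torch==1.6.0 torchvision==0.7.0   # assumption: wheels matching CUDA 10.2
rm -rf build/                                 # clear the stale build, as noted in the install section
python setup.py develop                       # rebuild OpenPCDet
```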