This package provides an automatic, target-less LiDAR-camera extrinsic calibration method based on the Segment Anything Model (SAM). The related paper is Calib-Anything: Zero-training LiDAR-Camera Extrinsic Calibration Method Using Segment Anything. For more calibration tools, please refer to SensorsCalibration.
- PCL 1.10
- OpenCV
- Eigen 3
```shell
git clone https://github.com/OpenCalib/CalibAnything.git
cd CalibAnything
mkdir -p build && cd build
# build
cmake .. && make
```
We provide examples from two datasets. You can download the processed data from Google Drive or BaiduNetDisk:
```
# baidunetdisk
Link: https://pan.baidu.com/s/1qAt7nYw5hYoJ1qrH0JosaQ?pwd=417d
Code: 417d
```
Run the commands:

```shell
cd CalibAnything
./bin/run_lidar2camera ./data/kitti/calib.json    # kitti dataset
./bin/run_lidar2camera ./data/nuscenes/calib.json # nuscenes dataset
```
- Several pairs of time-synchronized RGB images and LiDAR point clouds (intensity is required; see the header-check sketch after this list). A single pair can also be used for calibration, but the result may be unstable.
- The camera intrinsics and an initial guess of the extrinsic.
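Since the method depends on per-point intensity, it is worth confirming that your `.pcd` files actually declare an `intensity` field before calibrating. A minimal sketch, assuming the standard PCD header layout (the example path is hypothetical):

```python
# Minimal check that a .pcd file declares an intensity field.
# Works for ASCII and binary PCD files, since the header itself
# is always plain text.
def pcd_has_intensity(path: str) -> bool:
    with open(path, "rb") as f:
        for raw in f:
            line = raw.decode("ascii", errors="ignore").strip()
            if line.startswith("FIELDS"):
                return "intensity" in line.split()[1:]
            if line.startswith("DATA"):  # end of the header
                break
    return False

print(pcd_has_intensity("./data/kitti/000000/pc/000000.pcd"))  # hypothetical path
```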
Follow the instructions in Segment Anything to generate masks for your images.
- First download a model checkpoint. You can choose `vit_l`.
- Install SAM:

```shell
# environment: python>=3.8, pytorch>=1.7, torchvision>=0.8
git clone [email protected]:facebookresearch/segment-anything.git
cd segment-anything; pip install -e .
pip install opencv-python pycocotools matplotlib onnxruntime onnx
```
- Run:

```shell
python scripts/amg.py --checkpoint <path/to/checkpoint> --model-type <model_type> --input <image_or_folder> --output <path/to/output>

# example (recommended parameters)
python scripts/amg.py --checkpoint sam_vit_l_0b3195.pth --model-type vit_l --input ./data/kitti/000000/images/ --output ./data/kitti/000000/masks/ --stability-score-thresh 0.9 --box-nms-thresh 0.5 --stability-score-offset 0.9
```
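If you prefer to generate masks programmatically rather than through `amg.py`, SAM's Python API exposes `SamAutomaticMaskGenerator`, which accepts the same thresholds as the flags above. A minimal sketch; the per-mask output naming (`000.png`, `001.png`, ...) is an assumption chosen to match the folder layout expected below:

```python
import os
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Load the vit_l checkpoint and configure the generator with the
# recommended parameters from the command above.
sam = sam_model_registry["vit_l"](checkpoint="sam_vit_l_0b3195.pth")
generator = SamAutomaticMaskGenerator(
    sam,
    stability_score_thresh=0.9,
    box_nms_thresh=0.5,
    stability_score_offset=0.9,
)

image = cv2.cvtColor(cv2.imread("images/000000.png"), cv2.COLOR_BGR2RGB)
masks = generator.generate(image)  # list of dicts with a boolean "segmentation"

# Assumed output layout: one folder per image, one PNG per mask.
os.makedirs("masks/000000", exist_ok=True)
for i, m in enumerate(masks):
    out = m["segmentation"].astype(np.uint8) * 255
    cv2.imwrite(f"masks/000000/{i:03d}.png", out)
```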
The hierarchy of your folders should be as follows:
```
YOUR_DATA_FOLDER
├─calib.json
├─pc
|  ├─000000.pcd
|  ├─000001.pcd
|  ├─...
├─images
|  ├─000000.png
|  ├─000001.png
|  ├─...
├─masks
|  ├─000000
|  |  ├─000.png
|  |  ├─001.png
|  |  ├─...
|  ├─000001
|  ├─...
```
For large masks, only the part near the edge is used. Preprocess the masks with:

```shell
python processed_mask.py -i <YOUR_DATA_FOLDER>/masks/ -o <YOUR_DATA_FOLDER>/processed_masks/
```
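Conceptually, keeping only the part of a mask near its edge can be done with morphological erosion: shrink the mask, then subtract the interior. This is a hedged illustration of the idea, not the actual implementation of `processed_mask.py`:

```python
import os
import cv2
import numpy as np

def edge_band(mask: np.ndarray, width: int = 10) -> np.ndarray:
    """Keep only mask pixels within `width` pixels of the boundary."""
    kernel = np.ones((2 * width + 1, 2 * width + 1), np.uint8)
    interior = cv2.erode(mask, kernel)   # shrink the mask by `width` pixels
    return cv2.subtract(mask, interior)  # original minus interior = edge band

mask = cv2.imread("masks/000000/000.png", cv2.IMREAD_GRAYSCALE)
os.makedirs("processed_masks/000000", exist_ok=True)
cv2.imwrite("processed_masks/000000/000.png", edge_band(mask))
```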
Content description of `calib.json`:

- `cam_K`: camera intrinsic matrix
- `cam_dist`: camera distortion coefficients `[k1, k2, p1, p2, k3, ...]`, in the same order as OpenCV
- `T_lidar_to_cam`: initial guess of the extrinsic
- `T_lidar_to_cam_gt`: ground truth of the extrinsic (used to calculate the error; if not provided, set "available" to false)
- `img_folder`: the path to the images
- `mask_folder`: the path to the masks
- `pc_folder`: the path to the point clouds
- `img_format`: the suffix of the images
- `pc_format`: the suffix of the point clouds (pcd and KITTI bin are supported)
- `file_name`: the names of the input images and point clouds
- `min_plane_point_num`: the minimum number of points in plane extraction
- `cluster_tolerance`: the spatial cluster tolerance in Euclidean clustering (set it larger if the point cloud is sparse, e.g., from a 32-beam LiDAR)
- `search_num`: the number of search iterations
- `search_range`: the search range for rotation and translation
- `point_range`: the approximate height range of the point cloud projected onto the image (the top of the image is 0.0 and the bottom is 1.0)
- `down_sample`: the voxel size for point cloud downsampling (if downsampling is not needed, set "is_valid" to false)
- `thread`: the number of threads used to reduce calibration time
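For reference, here is a hypothetical `calib.json` skeleton consistent with the fields above. All values are placeholders, and the exact schema (e.g., how the "available" and "is_valid" flags are nested) should be checked against the example files shipped in `./data/`:

```json
{
  "cam_K": [1000.0, 0.0, 960.0, 0.0, 1000.0, 540.0, 0.0, 0.0, 1.0],
  "cam_dist": [0.0, 0.0, 0.0, 0.0, 0.0],
  "T_lidar_to_cam": [[0.0, -1.0, 0.0, 0.0],
                     [0.0, 0.0, -1.0, 0.0],
                     [1.0, 0.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]],
  "T_lidar_to_cam_gt": {"available": false},
  "img_folder": "./data/kitti/000000/images/",
  "mask_folder": "./data/kitti/000000/processed_masks/",
  "pc_folder": "./data/kitti/000000/pc/",
  "img_format": ".png",
  "pc_format": ".pcd",
  "file_name": ["000000", "000001"],
  "min_plane_point_num": 1000,
  "cluster_tolerance": 0.5,
  "search_num": 3,
  "search_range": {"rotation": 1.0, "translation": 0.1},
  "point_range": [0.3, 1.0],
  "down_sample": {"is_valid": true, "voxel_size": 0.1},
  "thread": 4
}
```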
Run the calibration:

```shell
./bin/run_lidar2camera <path-to-json-file>
```
- initial projection: `init_proj.png`, `init_proj_seg.png`
- gt projection: `gt_proj.png`, `gt_proj_seg.png`
- refined projection: `refined_proj.png`, `refined_proj_seg.png`
- refined extrinsic: `extrinsic.txt`
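To sanity-check the result, you can load the refined extrinsic and project LiDAR points yourself. A small sketch, assuming `extrinsic.txt` stores the 4x4 `T_lidar_to_cam` matrix as whitespace-separated rows (verify against your output file):

```python
import numpy as np

# Assumption: extrinsic.txt holds the refined 4x4 T_lidar_to_cam matrix.
T = np.loadtxt("extrinsic.txt").reshape(4, 4)
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])  # replace with your cam_K

pts = np.array([[10.0, 1.0, -1.5]])             # LiDAR points, Nx3
pts_h = np.hstack([pts, np.ones((len(pts), 1))])
cam = (T @ pts_h.T).T[:, :3]                     # points in the camera frame
uv = (K @ cam.T).T
uv = uv[:, :2] / uv[:, 2:3]                      # perspective division
print(uv)  # pixel coordinates of the projected points
```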
If you find this project useful in your research, please consider citing:
```bibtex
@misc{luo2023calibanything,
      title={Calib-Anything: Zero-training LiDAR-Camera Extrinsic Calibration Method Using Segment Anything},
      author={Zhaotong Luo and Guohang Yan and Yikang Li},
      year={2023},
      eprint={2306.02656},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```