# LCCNet

Official PyTorch implementation of the paper "LCCNet: Lidar and Camera Self-Calibration Using Cost Volume Network". A video demonstration of the method is available at https://www.youtube.com/watch?v=UAAGjYT708A.
## Requirements

- Python 3.8 (Anaconda recommended)
- Install dependencies: `pip install -r requirements.txt`
## Pre-trained model
Pre-trained models can be downloaded from [Google Drive](https://drive.google.com/drive/folders/1VbQV3ERDeT3QbdJviNCN71yoWIItZQnl?usp=sharing).
## KITTI Odometry Dataset
The expected directory structure:

```
odometry_color$ tree -L 3 .
├── poses
│   ├── 00.txt
│   ├── 01.txt
│   ├── 02.txt
│   ├── 03.txt
│   ├── 04.txt
│   ├── 05.txt
│   ├── 06.txt
│   ├── 07.txt
│   ├── 08.txt
│   ├── 09.txt
│   └── 10.txt
└── sequences
    ├── 00
    │   ├── calib.txt
    │   ├── image_2
    │   ├── image_3
    │   ├── times.txt
    │   └── velodyne
    ├── 01
    │   ├── calib.txt
    │   ├── image_2
    │   ├── image_3
    │   ├── times.txt
    │   └── velodyne
```
## Evaluation
1. Download [KITTI odometry dataset](http://www.cvlibs.net/datasets/kitti/eval_odometry.php).
2. Change the path to the dataset in `evaluate_calib.py`.
```python
data_folder = '/path/to/the/KITTI/odometry_color/'
```
3. Create a folder named `pretrained` in the root path to store the pre-trained models.
4. Download the pre-trained models and modify the weights path in `evaluate_calib.py`.
```python
weights = [
   './pretrained/kitti_iter1.tar',
   './pretrained/kitti_iter2.tar',
   './pretrained/kitti_iter3.tar',
   './pretrained/kitti_iter4.tar',
   './pretrained/kitti_iter5.tar',
]
```
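The five checkpoints reflect the paper's iterative refinement scheme: each network predicts a residual transform that is composed onto the current extrinsic estimate. A minimal sketch of that composition loop in plain Python, with `predict` standing in for a network forward pass (the actual logic lives in `evaluate_calib.py`):

```python
# Sketch of iterative refinement; `predictors` are placeholders for the
# five trained networks, one per checkpoint. 4x4 homogeneous matrices are
# represented as nested lists to keep the example dependency-free.
def matmul4(a, b):
    """Multiply two 4x4 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def identity4():
    """4x4 identity matrix."""
    return [[float(i == j) for j in range(4)] for i in range(4)]

def refine(initial_extrinsic, predictors):
    """Compose the residual transform predicted at each iteration."""
    T = initial_extrinsic
    for predict in predictors:
        delta = predict(T)     # residual correction from the current estimate
        T = matmul4(delta, T)  # apply the correction
    return T
```

With identity predictors the estimate is unchanged; each real network tightens the calibration further than the previous one.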
5. Run the evaluation:
```bash
python evaluate_calib.py
```

## Train

```bash
python train_with_sacred.py
```
## Citation

Thank you for citing our paper if you use any of this code or datasets:

```
@article{lv2020lidar,
  title={Lidar and Camera Self-Calibration using Cost Volume Network},
  author={Lv, Xudong and Wang, Boya and Ye, Dong and Wang, Shuo},
  journal={arXiv preprint arXiv:2012.13901},
  year={2020}
}
```
## Acknowledgments

We are grateful to Daniele Cattaneo for his CMRNet GitHub repository, which we used as our initial code base.