- Problem: Depth Estimation
- Method: Self-supervised monocular depth estimation, based on Monodepth2 and HRDepth. Our proposed methods are detailed in the document folder.
- Dataset: KITTI
We ran our experiments with PyTorch 1.10.1, CUDA 11.1, Python 3.6.6, and Ubuntu 18.04.
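Monodepth2-style self-supervised training is driven by a per-pixel photometric reprojection error that mixes SSIM and L1 (weighted 0.85/0.15 in Monodepth2). A minimal NumPy sketch of that error, using a 3x3 box-filtered SSIM for brevity (the actual implementations operate on GPU tensors with average pooling):

```python
import numpy as np

def box_filter(img, k=3):
    # reflect-padded k x k mean filter over a 2D image
    pad = k // 2
    p = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # local-window SSIM map between two single-channel images in [0, 1]
    mx, my = box_filter(x), box_filter(y)
    sx = box_filter(x * x) - mx * mx
    sy = box_filter(y * y) - my * my
    sxy = box_filter(x * y) - mx * my
    num = (2 * mx * my + c1) * (2 * sxy + c2)
    den = (mx ** 2 + my ** 2 + c1) * (sx + sy + c2)
    return num / den

def photometric_error(pred, target, alpha=0.85):
    # Monodepth2 weighting: alpha * (1 - SSIM) / 2 + (1 - alpha) * L1
    l1 = np.abs(pred - target)
    ssim_term = np.clip((1 - ssim(pred, target)) / 2, 0, 1)
    return alpha * ssim_term + (1 - alpha) * l1
```

Monodepth2 additionally takes the per-pixel minimum of this error over the source frames and adds an edge-aware smoothness term on the predicted disparity.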
You can download the entire KITTI_raw dataset by running:
wget -i splits/kitti_archives_to_download.txt -P kitti_data/
Then unzip with:
cd kitti_data
unzip "*.zip"
cd ..
Warning: it weighs about 175GB, so make sure you have enough space to unzip too!
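Since the archives alone are about 175GB and extraction temporarily needs both the zips and their contents on disk, it can help to check free space before unzipping. A small sketch (the 350 GB threshold is a rough assumption, not a repo requirement):

```python
import shutil

def enough_space(path=".", need_gb=350):
    # Return free space in GB at `path` and whether it meets the threshold.
    # 350 GB is an assumed margin: ~175 GB of zips plus the extracted copies.
    free_gb = shutil.disk_usage(path).free / 1e9
    return free_gb, free_gb >= need_gb

free, ok = enough_space(".")
print(f"{free:.0f} GB free; enough: {ok}")
```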
We have two versions corresponding to VDT_Phase1 and VDT_Phase2.
To run VDT_Phase1:
CUDA_VISIBLE_DEVICES=0 python train.py --model_name densenet-hr-depth --split eigen_zhou --backbone densenet --depth_decoder hr-depth --png
To run VDT_Phase2:
CUDA_VISIBLE_DEVICES=0 python train_v2.py
To prepare the ground truth depth maps run:
python export_gt_depth.py --data_path kitti_data --split eigen
python export_gt_depth.py --data_path kitti_data --split eigen_benchmark
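If export_gt_depth.py follows the Monodepth2 convention, it writes a compressed gt_depths.npz under the split folder whose single "data" entry holds one depth map per test frame (resolutions vary slightly across KITTI frames, so it is an object array, which recent NumPy requires allow_pickle=True to load). A self-contained sketch of that format, using a synthetic file since the real one depends on your KITTI download:

```python
import os
import tempfile
import numpy as np

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "gt_depths.npz")

    # two fake depth maps with different resolutions, as in KITTI
    depths = np.empty(2, dtype=object)
    depths[0] = np.random.rand(375, 1242)
    depths[1] = np.random.rand(370, 1226)
    np.savez_compressed(path, data=depths)

    # loading mirrors what a Monodepth2-style evaluation script does
    gt_depths = np.load(path, allow_pickle=True)["data"]
    print(len(gt_depths), gt_depths[0].shape)  # 2 (375, 1242)
```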
...assuming that you have placed the KITTI dataset in the default location of ./kitti_data/.
The following example command evaluates the epoch 19 weights of a model named densenet:
python evaluate_depth.py --load_weights_folder ./densenet/models/weights_19/ --eval_mono --backbone densenet --depth_decoder hr-depth
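With --eval_mono, Monodepth2-style evaluation median-scales each prediction to the ground truth (since monocular training only recovers depth up to scale) before scoring. The seven reported numbers are the standard KITTI/eigen metrics, which can be sketched as:

```python
import numpy as np

def median_scale(gt, pred):
    # align scale-ambiguous monocular predictions to the ground truth
    return pred * np.median(gt) / np.median(pred)

def compute_depth_errors(gt, pred):
    # the standard seven metrics: abs_rel, sq_rel, rmse, rmse_log, a1, a2, a3
    thresh = np.maximum(gt / pred, pred / gt)
    a1 = (thresh < 1.25).mean()
    a2 = (thresh < 1.25 ** 2).mean()
    a3 = (thresh < 1.25 ** 3).mean()
    rmse = np.sqrt(((gt - pred) ** 2).mean())
    rmse_log = np.sqrt(((np.log(gt) - np.log(pred)) ** 2).mean())
    abs_rel = (np.abs(gt - pred) / gt).mean()
    sq_rel = (((gt - pred) ** 2) / gt).mean()
    return abs_rel, sq_rel, rmse, rmse_log, a1, a2, a3
```

KITTI evaluation also typically masks out invalid pixels and clips depths to the 1e-3 to 80 m range before these metrics are computed.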