This is the reference PyTorch implementation for training and testing depth estimation models using the method described in
Digging into Self-Supervised Monocular Depth Prediction
Clément Godard, Oisin Mac Aodha, Michael Firman and Gabriel J. Brostow
@inproceedings{monodepth2,
  title = {Digging into Self-Supervised Monocular Depth Prediction},
  author = {Cl{\'{e}}ment Godard and
            Oisin {Mac Aodha} and
            Michael Firman and
            Gabriel J. Brostow},
  booktitle = {The International Conference on Computer Vision (ICCV)},
  month = {October},
  year = {2019}
}
mkdir raw_dataset
- Download the UPB dataset into the "raw_dataset" directory. A sample of the UPB dataset is available here. Those videos are at 3 FPS. Consider downloading the original dataset and downsampling it to 10 FPS.
mkdir scene_splits
- Download the scene splits into the "scene_splits" directory. The train-validation split is available here. In the "scene_splits" directory you should have: "train_scenes.txt" and "test_scenes.txt".
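Before building the dataset, it can help to sanity-check the split files. A minimal sketch (the `check_splits` helper is hypothetical; only the directory and file names come from this README):

```python
from pathlib import Path

def check_splits(split_dir="scene_splits"):
    """Hypothetical sanity check: verify that both split files exist and
    are non-empty before running scripts/create_dataset.py."""
    split_dir = Path(split_dir)
    for name in ("train_scenes.txt", "test_scenes.txt"):
        path = split_dir / name
        if not path.is_file():
            raise FileNotFoundError(f"missing split file: {path}")
        # one scene name per line; ignore blank lines
        scenes = [line.strip() for line in path.read_text().splitlines() if line.strip()]
        if not scenes:
            raise ValueError(f"{name} is empty")
        print(f"{name}: {len(scenes)} scenes")
    return True
```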
# script to create the dataset
python3 scripts/create_dataset.py \
--src_dir raw_dataset \
--dst_dir ./dataset \
--split_dir scene_splits
- Download the pretrained model to fine-tune
# script to download pretrained model
python3 download.py
- Fine-tune the existing model
# script to train the model
python3 train.py \
--model_name finetuned_mono \
--load_weights_folder ./models/mono_640x192 \
--data_path ./dataset \
--log_dir ./logs \
--height 256 \
--width 512 \
--num_workers 4 \
--split upb \
--dataset upb \
--learning_rate 1e-6 \
--batch_size 12 \
--num_epochs 5 \
--disparity_smoothness 1e-3
Consider experimenting with "disparity_smoothness".
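The "disparity_smoothness" weight scales Monodepth2's edge-aware smoothness term, which penalises disparity gradients except where the image itself has strong gradients. A pure-Python sketch of that term for a single-channel image (the actual code operates on PyTorch tensors; this `smooth_loss` helper is illustrative only):

```python
import math

def smooth_loss(disp, img):
    """Edge-aware smoothness: penalise disparity gradients, downweighted
    by exp(-|image gradient|) so depth edges can align with image edges.
    `disp` and `img` are 2D lists of the same shape."""
    h, w = len(disp), len(disp[0])
    # mean-normalise the disparity first, as Monodepth2 does
    mean_d = sum(sum(row) for row in disp) / (h * w) + 1e-7
    d = [[v / mean_d for v in row] for row in disp]
    loss, n = 0.0, 0
    for y in range(h):            # horizontal gradients
        for x in range(w - 1):
            loss += abs(d[y][x + 1] - d[y][x]) * math.exp(-abs(img[y][x + 1] - img[y][x]))
            n += 1
    for y in range(h - 1):        # vertical gradients
        for x in range(w):
            loss += abs(d[y + 1][x] - d[y][x]) * math.exp(-abs(img[y + 1][x] - img[y][x]))
            n += 1
    return loss / n
```

A larger "disparity_smoothness" multiplier on this term produces smoother depth maps but can wash out thin structures.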
- Copy trained model
cp -r logs/finetuned_mono/models/weights_4 models/monodepth
- Get samples
# script to get some sample results
python3 scripts/results.py \
--model_name monodepth \
--models_dir ./models \
--split_dir ./splits/upb \
--dataset_dir ./dataset \
--results_dir ./results
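The network's sigmoid disparity output is rescaled between a minimum and maximum depth before visualisation. A sketch of that conversion for a single scalar, following the upstream monodepth2 `disp_to_depth` convention with its 0.1/100 defaults (illustrative, not necessarily this repository's exact API):

```python
def disp_to_depth(disp, min_depth=0.1, max_depth=100.0):
    """Map a sigmoid disparity in [0, 1] to depth in [min_depth, max_depth].
    Disparity 1 gives the nearest depth, disparity 0 the farthest."""
    min_disp = 1.0 / max_depth
    max_disp = 1.0 / min_depth
    scaled = min_disp + (max_disp - min_disp) * disp
    return 1.0 / scaled
```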
A pre-trained model (512x256 - 10FPS) is available here.