# Monodepth2.cpp

## ⏳ Training

Monocular training:

### 💾 KITTI training data

You can download the entire raw KITTI dataset by running:

```shell
wget -i kitti_archives_to_download.txt -P kitti_data/
```

Then unzip with:

```shell
cd kitti_data
unzip "*.zip"
cd ..
```

Warning: the dataset weighs about 175GB, so make sure you have enough space to unzip it too!
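If `wget` is not available, the same download-and-unzip flow can be sketched in Python using only the standard library (a sketch, not part of this repository; it assumes the `kitti_archives_to_download.txt` and `kitti_data/` paths from the commands above):

```python
import pathlib
import urllib.request
import zipfile

def read_archive_list(path):
    """Return the non-empty, non-comment URLs listed in the archive file."""
    urls = []
    for line in pathlib.Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            urls.append(line)
    return urls

def download_and_unzip(list_file="kitti_archives_to_download.txt",
                       dest="kitti_data"):
    dest = pathlib.Path(dest)
    dest.mkdir(exist_ok=True)
    for url in read_archive_list(list_file):
        archive = dest / url.rsplit("/", 1)[-1]
        if not archive.exists():  # skip archives already downloaded
            urllib.request.urlretrieve(url, archive)
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(dest)

# download_and_unzip()  # uncomment to fetch the full ~175GB dataset
```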

Our default settings expect that you have converted the PNG images to JPEG with the following command, which also deletes the raw KITTI `.png` files:

```shell
find kitti_data/ -name '*.png' | parallel 'convert -quality 92 -sampling-factor 2x2,1x1,1x1 {.}.png {.}.jpg && rm {}'
```

## To Do

- [ ] add train & inference script
- [ ] add KITTI dataloader
- [ ] distributed training
- [ ] add loss function
- [ ] add log printing

## References

## Torchscript demo

1. Download the required libraries, including OpenCV and libtorch.
2. Download my converted torchscript model, or convert your trained model into torchscript yourself. If you aren't familiar with torchscript, please check the official docs.
3. Prepare a sample image and change its path in main.cpp.
4. If you don't have GPUs available, please comment out the CUDA options in CMakeLists.txt.
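For step 4, the GPU-related part of CMakeLists.txt might look roughly like the sketch below (variable and target names here are illustrative assumptions, not the repository's actual file); commenting out the CUDA lines yields a CPU-only build:

```cmake
# Hypothetical sketch of the GPU-related section of CMakeLists.txt.
find_package(CUDA REQUIRED)        # <- comment out for a CPU-only build
find_package(Torch REQUIRED)
find_package(OpenCV REQUIRED)

add_executable(monodepth2 main.cpp)
target_link_libraries(monodepth2 ${TORCH_LIBRARIES} ${OpenCV_LIBS})
```

When building against a downloaded libtorch, pass its location via `-DCMAKE_PREFIX_PATH=/path/to/libtorch` so `find_package(Torch)` can locate it.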

## Runtime

| Model | Language | 3D Packing | Inference time / im | Link |
| --- | --- | --- | --- | --- |
| packnet_32 | libtorch | Yes | | download |
| packnet_32 | python | Yes | | download |

## Dockerfile

If you're familiar with Docker, you can run this project without installing those libraries yourself. Please remember to install nvidia-docker, because this project needs GPUs.
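A Dockerfile for this setup might look like the sketch below; the base image, package list, and build command are assumptions for illustration, not the repository's actual dockerfile:

```dockerfile
# Hypothetical sketch: CUDA base image plus the libraries the demo needs.
FROM nvidia/cuda:11.3.1-cudnn8-devel-ubuntu20.04

RUN apt-get update && apt-get install -y \
    build-essential cmake unzip wget libopencv-dev \
    && rm -rf /var/lib/apt/lists/*

# Unpack a libtorch release matching your CUDA version into /opt/libtorch
# (download it from pytorch.org; the exact URL depends on the version).

WORKDIR /workspace
COPY . .
# Build with: cmake -DCMAKE_PREFIX_PATH=/opt/libtorch . && make
```

Run the resulting image with `docker run --gpus all ...` (or via nvidia-docker on older setups) so the container can see the GPUs.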

## Converted models

You can follow to_jit.py to create your own torchscript model, or use my converted models directly. We provide the following converted models:

- monodepth2 (FP32)
- packnet-sfm (FP16)
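The conversion that to_jit.py performs can be sketched as below; the tiny network here is a stand-in for illustration, not the actual monodepth2 or packnet-sfm model:

```python
import torch

# Stand-in network; in practice you would load the trained depth model here.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3, padding=1),
    torch.nn.ReLU(),
)
model.eval()

# Trace the model with a dummy input of the expected shape, then serialize
# it so the C++ demo can load it with torch::jit::load.
example = torch.rand(1, 3, 192, 640)
with torch.no_grad():
    traced = torch.jit.trace(model, example)
traced.save("model_traced.pt")
```

Tracing records the operations executed on the example input, so models with input-dependent control flow would need `torch.jit.script` instead.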

## ONNX

We also offer an ONNX file that can be accelerated with TensorRT. The related demo code will be released soon.

- packnet-sfm (ONNX)