A complete pipeline and a Python script to implement simple occlusion effects in AR environments or videos. The theory is from the paper Fast Depth Densification for Occlusion-aware Augmented Reality, and the C++ implementation is from AR-Depth-cpp.
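Once the pipeline below produces a dense depth map per frame, occlusion reduces to a per-pixel depth test at compositing time: virtual pixels are drawn only where they are closer to the camera than the real scene. A minimal NumPy sketch of that idea, assuming a rendered virtual layer with alpha and depth (the function name and array layout are illustrative, not taken from the repositories):

```python
import numpy as np

def composite_with_occlusion(frame, virtual_rgba, virtual_depth, scene_depth):
    """Composite a rendered virtual layer over a video frame, hiding
    pixels where the real scene is closer to the camera.

    frame:         (H, W, 3) real video frame
    virtual_rgba:  (H, W, 4) rendered virtual content with alpha
    virtual_depth: (H, W)    depth of the virtual content
    scene_depth:   (H, W)    densified depth of the real scene
    """
    alpha = virtual_rgba[..., 3:4].astype(np.float64) / 255.0
    # Virtual pixels survive only where they are nearer than the scene.
    visible = (virtual_depth < scene_depth)[..., None]
    alpha = alpha * visible
    out = alpha * virtual_rgba[..., :3] + (1.0 - alpha) * frame
    return out.astype(np.uint8)
```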
1.1 Extract frames from the video with ffmpeg.
mkdir ./AR-Depth-cpp/data/frames
ffmpeg -i video.MOV -vf "scale=480:270,fps=20" ./AR-Depth-cpp/data/frames/%06d.png
1.2 Get the reconstruction files with COLMAP.
colmap automatic_reconstructor --workspace_path ./AR-Depth-cpp/data \
--image_path ./AR-Depth-cpp/data/frames --camera_model=PINHOLE \
--single_camera=1 --data_type=video
If no GPU is available or no display is attached, add --use_gpu=false.
mkdir ./AR-Depth-cpp/data/reconstruction
colmap model_converter --input_path ./AR-Depth-cpp/data/sparse/0 \
--output_path ./AR-Depth-cpp/data/reconstruction --output_type TXT
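The TXT export can be consumed directly from Python. Below is a sketch of reading camera poses from the exported images.txt; it follows the documented COLMAP TXT layout (one pose line plus one keypoint line per image), but the function name and return shape are ours, not part of fusion.py:

```python
from pathlib import Path

def read_colmap_images(path):
    """Parse camera poses from a COLMAP TXT export (images.txt).

    Each image occupies two lines; the first holds
    IMAGE_ID QW QX QY QZ TX TY TZ CAMERA_ID NAME
    and the second lists its 2D keypoints, which we skip here.
    """
    poses = {}
    lines = [l for l in Path(path).read_text().splitlines()
             if l and not l.startswith("#")]
    for header in lines[::2]:  # every other non-comment line is a pose line
        fields = header.split()
        image_id = int(fields[0])
        qw, qx, qy, qz = map(float, fields[1:5])   # world-to-camera rotation
        tx, ty, tz = map(float, fields[5:8])       # translation
        poses[fields[9]] = (image_id, (qw, qx, qy, qz), (tx, ty, tz))
    return poses
```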
cd AR_DEPTH
./AR_DEPTH
cd ..
python fusion.py
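The densification step starts from sparse depth samples obtained by projecting the reconstructed 3D points into each frame. For intuition, here is a sketch of that projection for a PINHOLE camera, using a world-to-camera quaternion and translation as stored by COLMAP (the function name and signature are illustrative, not taken from fusion.py):

```python
import numpy as np

def sparse_depth_map(points3d, qvec, tvec, fx, fy, cx, cy, width, height):
    """Project reconstructed 3D points into one frame to obtain the
    sparse depth samples that densification starts from (a simplified
    sketch; fx, fy, cx, cy come from cameras.txt for a PINHOLE model).
    """
    qw, qx, qy, qz = qvec
    # Rotation matrix from the world-to-camera unit quaternion.
    R = np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qw*qz),     2*(qx*qz + qw*qy)],
        [2*(qx*qy + qw*qz),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qw*qx)],
        [2*(qx*qz - qw*qy),     2*(qy*qz + qw*qx),     1 - 2*(qx*qx + qy*qy)],
    ])
    depth = np.zeros((height, width))
    for p in points3d:
        pc = R @ np.asarray(p) + np.asarray(tvec)  # world -> camera
        if pc[2] <= 0:
            continue  # point behind the camera
        u = int(round(fx * pc[0] / pc[2] + cx))
        v = int(round(fy * pc[1] / pc[2] + cy))
        if 0 <= u < width and 0 <= v < height:
            depth[v, u] = pc[2]  # depth along the camera z-axis
    return depth
```

With the 480x270 frames produced above, this yields a mostly-zero depth image with valid samples only at reconstructed points; fusion.py's job is to propagate those samples into a dense, edge-aware depth map.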
References
- Fast Depth Densification for Occlusion-aware Augmented Reality, Aleksander Holynski and Johannes Kopf, ACM Transactions on Graphics (Proc. SIGGRAPH Asia 2018). (https://homes.cs.washington.edu/~holynski/publications/occlusion/index.html)
- AR-Depth, (https://github.com/facebookresearch/AR-Depth)
- AR-Depth-cpp, (https://github.com/muskie82/AR-Depth-cpp)
GPLv3 license.