AR-Depth-Occlusion

A complete pipeline and a Python script to implement simple occlusion effects in AR environments or videos. The theory is from the paper Fast Depth Densification for Occlusion-aware Augmented Reality (SIGGRAPH Asia 2018), and the C++ implementation is from AR-Depth-cpp.

1. Main Steps

1.1 Prepare the video file "video.MOV" and extract the frames.
mkdir ./AR-Depth-cpp/data/frames
ffmpeg -i video.MOV -vf "scale=480:270,fps=20" ./AR-Depth-cpp/data/frames/%06d.png
1.2 Get the reconstruction files with COLMAP.
colmap automatic_reconstructor --workspace_path ./AR-Depth-cpp/data \
--image_path ./AR-Depth-cpp/data/frames --camera_model=PINHOLE \
--single_camera=1 --data_type=video

If no GPU is available or no display is attached, add --use_gpu=false to the command above.

1.3 Convert the reconstruction files to TXT format.
mkdir ./AR-Depth-cpp/data/reconstruction
colmap model_converter --input_path ./AR-Depth-cpp/data/sparse/0 \
--output_path ./AR-Depth-cpp/data/reconstruction --output_type TXT
1.4 Remove all the comment lines in the TXT files.
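
The COLMAP TXT export (cameras.txt, images.txt, points3D.txt) marks its header comments with a leading '#'. Below is a minimal helper sketch (not part of the repository) that strips those lines, assuming the export lives in ./AR-Depth-cpp/data/reconstruction:

import os

recon_dir = "./AR-Depth-cpp/data/reconstruction"
for name in ("cameras.txt", "images.txt", "points3D.txt"):
    path = os.path.join(recon_dir, name)
    # Keep every line that is not a '#' comment and write the file back in place.
    with open(path) as f:
        lines = [line for line in f if not line.lstrip().startswith("#")]
    with open(path, "w") as f:
        f.writelines(lines)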
1.5 Build AR-Depth-cpp according to ./AR-Depth-cpp/README.md, then run the executable:
cd AR_DEPTH
./AR_DEPTH
cd ..
1.6 Run the fusion script.
python fusion.py
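
The fusion step composites virtual content into each frame using the densified depth maps produced by AR-Depth-cpp. The exact contents of fusion.py are specific to this repository, but the underlying occlusion test is a per-pixel depth comparison, roughly as in the illustrative sketch below (the file names, the .npy depth format, and the fixed virtual depth are assumptions for this example only):

import cv2
import numpy as np

# Real camera frame and its densified depth map in metres (hypothetical file names).
frame = cv2.imread("./AR-Depth-cpp/data/frames/000001.png")
depth = np.load("dense_depth_000001.npy")

# A dummy virtual object: a green square rendered at a fixed depth of 2.0 m.
virtual = np.zeros_like(frame)
virtual[100:180, 200:280] = (0, 255, 0)
virtual_depth = 2.0

# Draw a virtual pixel only where it is closer to the camera than the real scene,
# so real objects in front of it correctly occlude it.
mask = (virtual.sum(axis=2) > 0) & (virtual_depth < depth)
composite = frame.copy()
composite[mask] = virtual[mask]
cv2.imwrite("composite_000001.png", composite)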

2. Reference

Aleksander Holynski and Johannes Kopf. Fast Depth Densification for Occlusion-Aware Augmented Reality. ACM Transactions on Graphics (Proc. SIGGRAPH Asia 2018).

3. License

GPLv3 license.
