🚀 Major upgrade 🚀: Migration to PyTorch v1 and Python 3.7. The code is now much more generic and easier to install.
This repository contains a modified version of the AtlasNet network (AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation), which uses a portion of the Tangent Convolutions network (Tangent Convolutions for Dense Prediction in 3D) as an encoder.
If you find this work useful in your research, please cite all of the following papers:
@inproceedings{groueix2018,
title={{AtlasNet: A Papier-M\^ach\'e Approach to Learning 3D Surface Generation}},
author={Groueix, Thibault and Fisher, Matthew and Kim, Vladimir G. and Russell, Bryan and Aubry, Mathieu},
booktitle={Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
year={2018}
}
@article{Tat2018,
author = {Maxim Tatarchenko* and Jaesik Park* and Vladlen Koltun and Qian-Yi Zhou},
title = {Tangent Convolutions for Dense Prediction in {3D}},
journal = {CVPR},
year = {2018},
}
@article{Zhou2018,
author = {Qian-Yi Zhou and Jaesik Park and Vladlen Koltun},
title = {{Open3D}: {A} Modern Library for {3D} Data Processing},
journal = {arXiv:1801.09847},
year = {2018},
}
This implementation uses PyTorch.
## Download the repository
git clone https://github.com/ThibaultGROUEIX/AtlasNet.git
cd AtlasNet
## Create python env with relevant packages
conda create --name pytorch-atlasnet python=3.7
source activate pytorch-atlasnet
pip install pandas visdom
# Install PyTorch as directed:
# https://pytorch.org/get-started/locally/
# you're done! Congrats :)
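Once PyTorch is installed, a quick sanity check (not part of the repository, just a suggestion) is to confirm the version and CUDA visibility from the new environment:

```python
import torch

# Print the installed PyTorch version and whether CUDA is visible
# from the freshly created conda environment.
print(torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```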
cd data; ./download_data.sh; cd ..
We used the ShapeNet dataset for 3D models, and rendered views from 3D-R2N2:
When using the provided data, make sure to respect the ShapeNet license.
- The point clouds from ShapeNet, with normals, go in
data/customShapeNet
- The corresponding normalized meshes (for the metro distance) go in
data/ShapeNetCorev2Normalized
- The rendered views go in
data/ShapeNetRendering
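As an optional check of the layout above, the small snippet below (not part of the repository) verifies that the three expected folders are in place; the paths simply mirror the list above:

```python
import os

# Expected data layout, mirroring the list above.
EXPECTED_DIRS = [
    "data/customShapeNet",            # ShapeNet point clouds with normals
    "data/ShapeNetCorev2Normalized",  # normalized meshes (for the metro distance)
    "data/ShapeNetRendering",         # rendered views
]

for d in EXPECTED_DIRS:
    status = "ok" if os.path.isdir(d) else "MISSING"
    print(f"{d}: {status}")
```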
The trained models and some corresponding results are also available online:
- The trained_models go in
trained_models/
In case you need the results of ICP on the PointSetGen output:
Using the custom chamfer distance halves memory usage and is a bit faster. Use it if you're short on memory, especially when training models for Single View Reconstruction.
source activate pytorch-atlasnet
cd ./extension
python setup.py install
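If the compiled extension gives you trouble, the quantity it computes can also be expressed directly in PyTorch. The sketch below is a plain (slower, more memory-hungry) reference implementation of the symmetric Chamfer distance between two batches of point clouds; it is not the custom CUDA kernel, only an illustration of what it computes:

```python
import torch

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between batched point clouds.

    a: (B, N, 3) tensor, b: (B, M, 3) tensor.
    Returns the mean squared nearest-neighbour distance in both directions.
    """
    # Pairwise squared distances between all points: (B, N, M).
    diff = a.unsqueeze(2) - b.unsqueeze(1)
    dist = (diff ** 2).sum(dim=3)
    # For every point in a, distance to its nearest point in b, and vice versa.
    dist_a_to_b = dist.min(dim=2)[0]
    dist_b_to_a = dist.min(dim=1)[0]
    return dist_a_to_b.mean() + dist_b_to_a.mean()

if __name__ == "__main__":
    # Example: two random batches of 2500 points each.
    a = torch.rand(4, 2500, 3)
    b = torch.rand(4, 2500, 3)
    print(chamfer_distance(a, b))
```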
- First, launch a visdom server:
python -m visdom.server -p 8888
- Launch the training. Check out all the options in
./training/train_AE_AtlasNet.py
python ./training/train_TangConv_AtlasNet.py --env 'AE_AtlasNet' --nb_primitives 25
- Monitor your training on http://localhost:8888/ (see the logging sketch after these steps)
- Compute some results with your trained model:
python ./inference/run_AE_AtlasNet.py
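For reference, this is roughly how a training script pushes a loss curve to the visdom server started above; the window and environment names here are just placeholders, not the exact ones used by the repository:

```python
import numpy as np
import visdom

# Connect to the visdom server launched above (port 8888).
vis = visdom.Visdom(port=8888, env='AE_AtlasNet')

def log_loss(epoch, loss_value):
    # Append one point to a loss curve; 'chamfer_loss' is a placeholder window name.
    vis.line(
        X=np.array([epoch]),
        Y=np.array([loss_value]),
        win='chamfer_loss',
        update='append',
        opts=dict(title='Chamfer loss', xlabel='epoch', ylabel='loss'),
    )

log_loss(0, 0.05)
```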
The trained models accessible here achieve the following performance, slightly better than that reported in the paper. The number reported is the Chamfer distance.
The generated 3D models' surfaces are not oriented. As a consequence, some areas will appear dark if you directly visualize the results in Meshlab. You have to incorporate your own fragment shader in Meshlab that flips the normals when they are hit by a ray from the wrong side. An example is given for the Phong BRDF.
sudo mv /usr/share/meshlab/shaders/phong.frag /usr/share/meshlab/shaders/phong.frag.bak
sudo cp auxiliary/phong.frag /usr/share/meshlab/shaders/phong.frag #restart Meshlab
Requirements: Open3D (built from source) and PyTorch.
Edit the config/config.json file to set the input/output directories for your setup, then run utils/precompute.py.
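A minimal sketch of adjusting the config programmatically, assuming it is plain JSON; the key names below ('input_dir', 'output_dir') are hypothetical and should be replaced by the keys actually present in config/config.json:

```python
import json

# Load the existing configuration.
with open("config/config.json") as f:
    cfg = json.load(f)

# NOTE: 'input_dir' and 'output_dir' are placeholder key names; use the keys
# that actually appear in config/config.json for your setup.
cfg["input_dir"] = "/path/to/your/input"
cfg["output_dir"] = "/path/to/your/output"

with open("config/config.json", "w") as f:
    json.dump(cfg, f, indent=2)
```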