Bipasha Sen* 1, Gaurav Singh* 1, Aditya Agarwal* 1, Rohith Agaram 1, Madhava Krishna 1, Srinath Sridhar 2
*denotes equal contribution, 1 International Institute of Information Technology Hyderabad, 2 Brown University
*Teaser video:* `applications_teaser.mp4`
This is the official implementation of the paper "HyP-NeRF: Learning Improved NeRF Priors using a HyperNetwork", accepted at NeurIPS 2023.
- Release pretrained checkpoints
- Code release
- Training code
- Architecture modules, renderer, Meta MRHE
- ...
Please follow the steps outlined in the torch-ngp repository to create the environment, up to and including the "Build extension" subheading.
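A condensed sketch of those setup steps is below. The commands mirror the torch-ngp README at the time of writing; treat that repository as the authoritative source and verify before running:

```shell
# Hypothetical condensed setup following torch-ngp's instructions.
git clone https://github.com/ashawkey/torch-ngp.git
cd torch-ngp
pip install -r requirements.txt
# "Build extension" step: pre-build the CUDA extensions
# instead of JIT-compiling them on first run.
bash scripts/install_ext.sh
```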
Download the ABO dataset. We use the images and transforms from abo-benchmark-material.tar and the metadata file abo-listings.tar for training. Arrange them in the following directory structure:
dataset_root
├── ABO_rendered
│   ├── B00EUL2B16
│   ├── B00IFHPVEU
│   ...
│
└── ABO_listings
    └── listings
        └── metadata
            ├── listings_0.json.gz
            ...
            └── listings_f.json.gz
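To sanity-check that your download matches this layout before launching a long training run, a small helper like the following can be used (hypothetical, not part of the HyP-NeRF codebase):

```python
from pathlib import Path

def check_abo_layout(dataset_root):
    """Return a list of problems with the expected ABO directory layout;
    an empty list means the layout looks correct."""
    root = Path(dataset_root)
    rendered = root / "ABO_rendered"
    metadata = root / "ABO_listings" / "listings" / "metadata"
    problems = []
    if not rendered.is_dir():
        problems.append("missing ABO_rendered/")
    if not metadata.is_dir():
        problems.append("missing ABO_listings/listings/metadata/")
    elif not sorted(metadata.glob("listings_*.json.gz")):
        problems.append("no listings_*.json.gz files found")
    return problems
```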
To train a model on the ABO dataset, run the following command:
CUDA_VISIBLE_DEVICES=0 python main_nerf.py <dataset_root> --workspace <workspace dir> --bound 1.0 --scale 0.8 --dt_gamma 0 --class_choice CHAIR --load_ckpt
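The core idea being trained here is a hypernetwork that maps a learned codebook entry for each object to the weights of an instance-specific NeRF. A minimal NumPy sketch of that weight-generation step is below; the sizes, the single-linear-layer hypernetwork, and the tiny one-layer target network are all illustrative placeholders, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a codebook of 8 objects with 64-dim codes,
# and a tiny target "NeRF" MLP consisting of one 3->16 layer.
num_objects, code_dim = 8, 64
in_dim, hidden = 3, 16
n_params = in_dim * hidden + hidden  # weights + biases of the target layer

codebook = rng.normal(size=(num_objects, code_dim))     # one learned code per object
hyper_w = rng.normal(size=(code_dim, n_params)) * 0.01  # hypernetwork (a single linear map here)

def generate_nerf_layer(index):
    """Map a codebook entry to the parameters of its instance-specific layer."""
    flat = codebook[index] @ hyper_w
    w = flat[: in_dim * hidden].reshape(in_dim, hidden)
    b = flat[in_dim * hidden :]
    return w, b

def tiny_nerf_forward(xyz, index):
    """Evaluate the generated layer at 3D points (ReLU activation)."""
    w, b = generate_nerf_layer(index)
    return np.maximum(xyz @ w + b, 0.0)

feats = tiny_nerf_forward(rng.normal(size=(5, 3)), index=2)
```

During training, gradients flow through the generated weights back into both the hypernetwork and the codebook, so the codes and the weight generator are learned jointly.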
To render a specific NeRF from the codebook, run the following command:
CUDA_VISIBLE_DEVICES=0 python main_nerf.py <dataset_root> --workspace <workspace dir containing the pretrained ckpt> --bound 1.0 --scale 0.8 --dt_gamma 0 --class_choice CHAIR --load_ckpt --test --test_index <codebook index>
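For example, to render several codebook entries in a row, the command can be wrapped in a loop (the dataset and workspace paths below are placeholders):

```shell
# Hypothetical loop rendering the first three codebook indices.
for idx in 0 1 2; do
  CUDA_VISIBLE_DEVICES=0 python main_nerf.py /data/abo \
    --workspace runs/chair \
    --bound 1.0 --scale 0.8 --dt_gamma 0 --class_choice CHAIR \
    --load_ckpt --test --test_index "$idx"
done
```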
Parts of this code are adapted from torch-ngp (which we use as our backbone) and INR-V. We thank the authors for releasing their source code.
If you find HyP-NeRF useful in your work, please consider citing us:
@article{hypnerf2023,
  title={HyP-NeRF: Learning Improved NeRF Priors using a HyperNetwork},
  author={Sen, Bipasha and Singh, Gaurav and Agarwal, Aditya and Agaram, Rohith and Krishna, K Madhava and Sridhar, Srinath},
  journal={NeurIPS},
  year={2023}
}