🇰🇷 한국어 / 🇬🇧 English
This is a 3D model of Hyundong, a fitness influencer, demonstrating proper plank posture. For those who want to become fitness enthusiasts, why not exercise while carefully examining and comparing your posture with the Hyundong figure extracted using the NeRF model?
DataYanolja 2022 project description video / Presentation material
| Final result (actual figure), Qualitative evaluation | NeRF 3D representation |
| --- | --- |
```
PlankHyundong/
├── nerf_quick_start.ipynb
├── notebooks
│   ├── nerf_colab.ipynb
│   ├── nerf_wandb_colab.ipynb
│   ├── colmap_colab.ipynb
│   ├── extract_mesh_colab.ipynb
│   └── sampling_colab.ipynb
└── data
    ├── video
    │   └── video.MOV
    ├── (images)
    │   └── ...
    └── (logs)
        └── ...
```
- The `notebooks` folder contains a notebook for each step of the pipeline needed to create the final result.
  - A single notebook, `nerf_quick_start.ipynb`, is provided for quickly reviewing the entire workflow.
- The `data` folder contains the pseudo-depth video and the results of each step of the pipeline.
  - The required data can be found at `data/video/video.MOV`.
  - A cell that clones the folder is included at the beginning.
  - Create subfolders named `images` and `logs` inside the `data` folder. They will store the sampled images, configs, meshes, weights, video files, etc. (see the short sketch below).
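For convenience, the two subfolders can be created with a couple of lines of Python; this is only a small helper sketch, with paths taken from the repository layout shown above.

```python
# Create the `images` and `logs` subfolders under `data/`
# (paths follow the repository layout shown above).
import os

for sub in ("images", "logs"):
    os.makedirs(os.path.join("data", sub), exist_ok=True)
```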
| Step | Output | Description |
| --- | --- | --- |
| 1️⃣ | RGB video | Collect video by rotating around the object. |
| 2️⃣ | N * RGB image set | Sample the video to obtain a set of images. |
| 3️⃣ | N * camera pose set | Camera poses are required for NeRF training. Run LLFF on the images sampled in 2️⃣. |
| 4️⃣ | Trained NeRF model file | Train the NeRF model. The 3D representation contained in the NeRF model is called an implicit 3D representation. |
| 5️⃣ | Mesh object file | Extract a mesh from the implicit 3D representation, converting it into an 'explicit' 3D representation. |
| 6️⃣ | 3D printer print file | Use slicer software to set the 3D printer parameters and prepare for printing. |
| 7️⃣ | 3D printed figure | Print the final product using a 3D printer. |
Below is an explanation of each step in the pipeline.
| Recommended: 360-degree capture | Method used in this project |
| --- | --- |

Please check the following link for recommended shooting practices and precautions.
- This script samples frames from the captured video at a fixed time interval (see the sampling sketch below).
- If the camera trajectory is long, it is better to sample frames more frequently.
- If the camera trajectory is short and the time the shutter is open (the exposure) is also short, sampling frames too frequently from the video may hurt performance.

For more details on the parameters, please check here!
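As a rough illustration of this step, the sketch below samples frames at a fixed interval with OpenCV. The input and output paths follow the repository layout above, while the interval value is only a placeholder, not the project's actual setting.

```python
# Minimal fixed-interval frame sampling sketch (illustrative, not the notebook's exact code).
import os
import cv2

VIDEO_PATH = "data/video/video.MOV"   # path from the repository layout above
OUT_DIR = "data/images"
INTERVAL_SEC = 0.5                    # placeholder interval; tune it per the notes above

os.makedirs(OUT_DIR, exist_ok=True)
cap = cv2.VideoCapture(VIDEO_PATH)
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
step = max(int(round(fps * INTERVAL_SEC)), 1)

idx = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % step == 0:                # keep one frame every `step` frames
        cv2.imwrite(os.path.join(OUT_DIR, f"{saved:04d}.jpg"), frame)
        saved += 1
    idx += 1
cap.release()
print(f"Saved {saved} frames to {OUT_DIR}")
```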
NOTE: You must use a GPU runtime.
- The input for NeRF is a set of (image, camera pose) pairs.
- To compute the camera pose for each image in a custom dataset, we use the script provided by the LLFF authors, which is based on COLMAP.
- When the script finishes, a `poses_bounds.npy` file, required for running the NeRF model, is created in the dataset folder (a quick sanity check is sketched below).

If you encounter difficulties in setting up the LLFF environment, please check here for more information.
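As a quick sanity check on the output, the snippet below loads `poses_bounds.npy` with NumPy. The (N, 17) layout, a flattened 3x5 [R|t|hwf] matrix plus near/far bounds per image, follows the LLFF convention and is stated here as an assumption rather than taken from this repository's code.

```python
# Inspect the LLFF output; the (N, 17) row layout is the LLFF convention (assumption).
import numpy as np

poses_bounds = np.load("data/poses_bounds.npy")   # path assumes the dataset folder above
print(poses_bounds.shape)                         # expected: (num_images, 17)

poses = poses_bounds[:, :15].reshape(-1, 3, 5)    # camera-to-world 3x4 plus an [H, W, focal] column
bounds = poses_bounds[:, 15:]                     # per-image near/far depth bounds
print(poses[0, :, 4])                             # [H, W, focal] of the first image
print(bounds.min(), bounds.max())
```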
NOTE: You must use a GPU runtime.
| Option | Function |
| --- | --- |
| `--no_ndc`, `--spherify`, `--lindisp` | These flags are not needed for forward-facing scenes, but are required for 360-degree scenes (see the config sketch below). |
| RGB | RGB_still | disparity |
| --- | --- | --- |

If you want to know about the wandb integration and the results of the NeRF parameter experiments, please check here!
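To make the table above concrete, here is a hedged sketch of a training config written in the configargparse format used by the official TensorFlow `run_nerf.py`. The experiment name, paths, and downsampling factor are illustrative placeholders, not the settings used in this project.

```python
# Sketch: write a 360-degree-scene config for run_nerf.py (values are placeholders).
config_360 = """\
expname = plank_360
basedir = ./data/logs
datadir = ./data
dataset_type = llff
factor = 8

# flags from the table above, required for 360-degree (inward-facing) scenes
no_ndc = True
spherify = True
lindisp = True
"""

with open("config_plank_360.txt", "w") as f:
    f.write(config_360)

# Training would then be launched with:
#   python run_nerf.py --config config_plank_360.txt
```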
NOTE: You must use a GPU runtime.
- This step loads the trained NeRF weights, extracts the surface (iso-surface) with the `PyMCubes` package, and saves the resulting `3d.obj` file; the 3D representation of the trained NeRF model is then visualized with `pyrender` to generate the `turntable.mp4` video (a minimal extraction sketch follows below).
- This notebook is based on the official NeRF repository.
- Because the NeRF model is trained on data collected directly rather than through a simulator, the extracted mesh may contain a lot of noise. In that case, remove the noise with Blender before printing the 3D object.
- If you want to know about the experimental results and considerations for the mesh-creation parameters and for mesh refinement depending on the data, please check here!
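As a rough sketch of the extraction step (not the notebook's exact code), the snippet below runs marching cubes with `PyMCubes` on a density grid assumed to have already been queried from the trained NeRF; the grid file name and threshold are hypothetical.

```python
# Iso-surface extraction sketch with PyMCubes; `sigma_grid.npy` is a hypothetical
# file holding NeRF densities sampled on a regular 3D grid, shape (N, N, N).
import numpy as np
import mcubes

sigma = np.load("data/logs/sigma_grid.npy")
threshold = 25.0                               # illustrative iso-level; tune to trade noise vs. detail

vertices, triangles = mcubes.marching_cubes(sigma, threshold)
mcubes.export_obj(vertices, triangles, "data/logs/3d.obj")
print(f"{len(vertices)} vertices, {len(triangles)} triangles")
```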
| Slicer software | Printing in progress |
| --- | --- |

To learn about the experimental results of the 3D printer options, please check here!
| Removing the raft: before | Removing the raft: after |
| --- | --- |
- Google Colab
  - All experiments by the PlankHyundong team were conducted on Google Colab Pro and Google Colab Pro+.
  - The team's notebook files are available on GitHub, where all dependencies are already defined as scripts for worry-free execution.
- Weights & Biases (wandb)
- Local Light Field Fusion (LLFF), COLMAP
- TensorFlow 1.15
  - The official NeRF repository and the modified NeRF repository with wandb integration both use TensorFlow 1.15 (a quick version check is sketched below).
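As a small optional check before launching training (not part of the team's notebooks), the runtime's TensorFlow version can be verified like this:

```python
# Verify the runtime provides TensorFlow 1.15.x, which both NeRF repos used here target.
import tensorflow as tf

assert tf.__version__.startswith("1.15"), f"Expected TensorFlow 1.15.x, got {tf.__version__}"
```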
| Google Colab | WandB | TensorFlow (1.15.x) |
| --- | --- | --- |

| Blender | Sindoh | 3DWOX1/DP203 |
| --- | --- | --- |
- This project follows the all-contributors specification and welcomes any contributions!
- Sejong University Artificial Intelligence Club (SAI)
- Project Kanban