
logo


All Contributors

Hits

πŸ‡°πŸ‡· ν•œκ΅­μ–΄ / πŸ‡¬πŸ‡§ English


Plank Hyundong 3D

This is a 3D model of Hyundong, a fitness influencer, demonstrating the proper plank posture. For those who want to become fitness enthusiasts, why not exercise by carefully examining and comparing your posture with the Hyundong figure extracted using the NeRF model? πŸ‹πŸ»

DataYanolja 2022 project description video / Presentation material

Final result

Final result (actual figure) | Qualitative evaluation (NeRF 3D representation)

Table of Contents

Quick Start

Open In Colab

```
PlankHyundong/
├── nerf_quick_start.ipynb
├── notebooks
│   ├── nerf_colab.ipynb
│   ├── nerf_wandb_colab.ipynb
│   ├── colmap_colab.ipynb
│   ├── extract_mesh_colab.ipynb
│   └── sampling_colab.ipynb
└── data
    ├── video
    │   └── video.MOV
    ├── (images)
    │   └── ...
    └── (logs)
        └── ...
```
  • The notebooks folder contains notebooks for each step of the pipeline needed to create the final result.
  • A single notebook, nerf_quick_start.ipynb, is provided for quickly reviewing the entire workflow.
  • The data folder contains the source video and the outputs of each step of the pipeline.
| Step | Content |
| --- | --- |
| 1️⃣ Video Sampling | Sampling images from a video |
| 2️⃣ Run COLMAP to get camera pose | Obtaining camera poses for the images |
| 3️⃣ Run NeRF | Training the NeRF model |
| 4️⃣ Get Mesh file | Creating and refining a mesh from the NeRF model |
  • The required data can be found at data/video/video.MOV.
  • A cell to clone the folder has been added at the beginning.
  • Create subfolders named images and logs inside the data folder. They will store the sampled images and the config, mesh, weight, and rendered video files, respectively.
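That setup step can be sketched in a few lines, assuming the notebooks are run from the repository root so that the data path matches the tree above:

```python
import os

# Create the images/ and logs/ subfolders next to data/video/video.MOV so
# the later notebooks have a place to write sampled frames, configs,
# weights, meshes, and rendered videos.
data_dir = "data"
for sub in ("images", "logs"):
    os.makedirs(os.path.join(data_dir, sub), exist_ok=True)
```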

Getting Started by Component

Pipeline

| Output | Description |
| --- | --- |
| 1️⃣ RGB video | Collect video by rotating around the object. |
| 2️⃣ N × RGB image set | Sample the video to obtain a set of images. |
| 3️⃣ N × camera pose set | Camera poses are necessary for NeRF training. Run LLFF on the sampled images from 2️⃣. |
| 4️⃣ Trained NeRF model file | Train the NeRF model. The 3D representation contained in the NeRF model is called an implicit 3D representation. |
| 5️⃣ Mesh object file | Apply a mesh to the implicit 3D representation, converting it to an 'explicit' 3D representation. |
| 6️⃣ 3D printer print file | Use slicer software to set the 3D printer parameters and prepare for printing. |
| 7️⃣ 3D printed figure | Print the final product using a 3D printer. |

Below is an explanation of each step in the pipeline.


1️⃣ Capture the Object on Video

Recommended: 360° capture | Method used in this project

βž• Please check the following link for recommended shooting practices and precautions.


2️⃣ Sampling Images from Videos

Open In Colab

  • This script samples frames from the captured video at a fixed time interval.
  • ✅ If the camera trajectory is long, it is better to sample frames more frequently.
  • ❗ If the camera trajectory is short and the exposure time is also short, sampling too frequently produces many near-identical frames, which can hurt performance.
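The interval-based sampling can be sketched as a small helper. `sample_frame_indices` and its arguments are hypothetical names, and the actual frame extraction (e.g. with OpenCV's `VideoCapture`) is left out:

```python
def sample_frame_indices(total_frames, fps, interval_s):
    """Indices of the frames to keep, spaced interval_s seconds apart."""
    step = max(1, round(fps * interval_s))  # frames between two samples
    return list(range(0, total_frames, step))

# A 10 s clip at 30 fps sampled every 0.5 s keeps every 15th frame.
indices = sample_frame_indices(total_frames=300, fps=30.0, interval_s=0.5)
```

A smaller `interval_s` keeps more frames, matching the trade-off in the bullets above.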

βž• For more details on parameters, please check here!


3️⃣ Estimating Camera Poses for Images

Open In Colab

NOTE: You must use a GPU runtime.

  • The input for NeRF is a set of (image, camera pose) pairs.
  • To compute the camera pose for each image in a custom dataset, we use the imgs2poses.py script from the LLFF authors, which runs COLMAP under the hood.
  • When the script is finished, a poses_bounds.npy file necessary for running the NeRF model is created in the dataset folder.
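The layout of that file can be illustrated with plain NumPy. The zero-filled array below is a hypothetical stand-in for what `np.load('poses_bounds.npy')` would return: one 17-value row per image, a flattened 3×5 pose matrix followed by the near/far depth bounds.

```python
import numpy as np

num_images = 4
poses_bounds = np.zeros((num_images, 17))  # stand-in for np.load(...)

poses = poses_bounds[:, :15].reshape(num_images, 3, 5)
cam_to_world = poses[:, :, :4]  # 3x4 camera-to-world matrix per image
hwf = poses[:, :, 4]            # image height, width, focal length column
bounds = poses_bounds[:, 15:]   # near/far scene bounds per image
```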

βž• If you encounter difficulties in setting up the LLFF environment, please check here for more information.


4️⃣ Training the NeRF Model

Open In Colab

NOTE: You must use a GPU runtime.

Training Options

| Option | Function |
| --- | --- |
| `--no_ndc`, `--spherify`, `--lindisp` | Not necessary for forward-facing scenes, but required for 360° scenes. |
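How those flags might be assembled can be sketched as follows; `nerf_flags` is a hypothetical helper for illustration, not part of the NeRF codebase:

```python
def nerf_flags(scene_type):
    """Extra training flags for the capture style from the table above."""
    if scene_type == "360":
        # Inward-facing 360-degree captures (like the plank video) need to
        # disable NDC, spherify the poses, and sample linearly in disparity.
        return ["--no_ndc", "--spherify", "--lindisp"]
    # Forward-facing scenes keep the NDC defaults.
    return []

flags = nerf_flags("360")
```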

Results

RGB | RGB (still) | disparity

βž• If you want to know about the wandb integration and the results of the NeRF parameter experiments, check here!

5️⃣ Creating and Refining Mesh from NeRF Model

Creating Mesh

Open In Colab

NOTE: You must use a GPU runtime.

  • This step loads the weights of the trained NeRF model, extracts the iso-surface with the PyMCubes package, and saves the result as a 3d.obj file. The mesh is then visualized with pyrender to render a turntable.mp4 video of the trained NeRF's 3D representation.
  • The source of this notebook is from the official NeRF repository.
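The extraction step can be illustrated with pure NumPy. The spherical density field below is a hypothetical stand-in for querying the trained NeRF on a grid, and only the iso-surface thresholding is shown; PyMCubes' `mcubes.marching_cubes(sigma, threshold)` would then turn the field into vertices and triangles:

```python
import numpy as np

# Stand-in for the NeRF density (sigma) sampled on a regular 3D grid:
# a soft sphere centered at the origin.
n = 64
t = np.linspace(-1.2, 1.2, n)
x, y, z = np.meshgrid(t, t, t, indexing="ij")
sigma = np.maximum(0.0, 1.0 - np.sqrt(x**2 + y**2 + z**2))

# Choosing the iso-surface level: cells above the threshold are "inside".
threshold = 0.5
occupied = sigma > threshold
inside_fraction = occupied.mean()  # share of the grid the mesh encloses
```

Raising the threshold shrinks the mesh and trims low-density noise; lowering it keeps more of the reconstruction, which is the trade-off that matters for the refinement step below.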

Refining Mesh

  • As the NeRF model is trained using data collected directly rather than through a simulator, the extracted mesh may have a lot of noise. In this case, it is necessary to remove the noise using Blender before printing the 3D object.
  • If you want to know about experimental results and considerations for mesh creation parameters and mesh refinement depending on the data, please check here!

6️⃣ Printing and Post-Processing the Figure

Printing the Figure

Slicer software | Printing in progress

βž• To learn about the experimental results of the 3D printer options, please check here!

Post-Processing the Printed Figure

Before | After
Removing the raft

Environment

  • Google COLAB
    • All experiments by the PlankHyundong team were conducted on Google COLAB Pro and Google COLAB Pro+.
    • Notebook files of the PlankHyundong team are available on GitHub, where all dependencies are already defined as scripts for worry-free execution.
  • Weights & Biases (wandb)
  • Local Light Field Fusion (LLFF), COLMAP
  • Tensorflow 1.15
Google COLAB WandB Tensorflow (1.15.x)
Blender Sindoh 3DWOX1/DP203

Team

logo-color.png