In this work, we propose GaussianBody, a novel clothed human reconstruction method based on 3D Gaussian Splatting. Compared with costly neural radiance field based models, 3D Gaussian Splatting has recently demonstrated strong performance in terms of both training time and rendering quality. However, applying the static 3D Gaussian Splatting model to dynamic human reconstruction is non-trivial due to complicated non-rigid deformations and rich cloth details. To address these challenges, our method uses explicit pose-guided deformation to associate dynamic Gaussians between the canonical space and the observation space, and introduces a physically-based prior with regularized transformations to mitigate the ambiguity between the two spaces. During training, we further propose a pose refinement strategy that updates the pose regression to compensate for inaccurate initial estimates, and a split-with-scale mechanism to enhance the density of the regressed point clouds. Experiments validate that our method achieves state-of-the-art photorealistic novel-view rendering results with high-quality details for dynamic clothed human bodies, along with explicit geometry reconstruction.
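To make the pose-guided deformation step concrete, a common realization of such a mapping between canonical and observation space is linear blend skinning (LBS). The following is a minimal sketch under that assumption, not the authors' implementation: the per-Gaussian skinning weights `W` and per-joint rigid transforms `T` (e.g. derived from an SMPL pose) are illustrative names, not symbols from the paper.

```python
# Minimal sketch (assumed LBS formulation, not the paper's code): warp
# canonical Gaussian centers into the observation space by blending
# per-joint rigid transforms with per-Gaussian skinning weights.
import torch

def deform_gaussians(mu_canonical: torch.Tensor,  # (N, 3) canonical centers
                     W: torch.Tensor,             # (N, J) skinning weights, rows sum to 1
                     T: torch.Tensor              # (J, 4, 4) joint transforms
                     ) -> torch.Tensor:
    """Warp canonical Gaussian centers into the observation space via LBS."""
    # Blend per-joint transforms into one 4x4 transform per Gaussian: (N, 4, 4).
    T_blend = torch.einsum('nj,jab->nab', W, T)
    # Apply each blended rigid transform to homogeneous center coordinates.
    mu_h = torch.cat([mu_canonical, torch.ones_like(mu_canonical[:, :1])], dim=-1)
    return torch.einsum('nab,nb->na', T_blend, mu_h)[:, :3]

# Toy usage: 2 Gaussians, 3 joints with identity transforms leave centers fixed.
mu = torch.randn(2, 3)
W = torch.softmax(torch.randn(2, 3), dim=-1)
T = torch.eye(4).expand(3, 4, 4).clone()
assert torch.allclose(deform_gaussians(mu, W, T), mu, atol=1e-6)
```

In practice, a Gaussian's covariance (its rotation and scale) would also need to be transformed by the rotational part of the blended transform; the sketch only shows the center warp.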