ASH: Animatable Gaussian Splats for Efficient and Photoreal Human Rendering

Real-time rendering of photorealistic and controllable human avatars stands as a cornerstone in Computer Vision and Graphics. While recent advances in neural implicit rendering have unlocked unprecedented photorealism for digital avatars, real-time performance has mostly been demonstrated for static scenes only. To address this, we propose ASH, an animatable Gaussian splatting approach for photorealistic rendering of dynamic humans in real-time. We parameterize the clothed human as animatable 3D Gaussians, which can be efficiently splatted into image space to generate the final rendering. However, naively learning the Gaussian parameters in 3D space poses a severe challenge in terms of compute. Instead, we attach the Gaussians onto a deformable character model, and learn their parameters in 2D texture space, which allows leveraging efficient 2D convolutional architectures that easily scale with the required number of Gaussians. We benchmark ASH with competing methods on pose-controllable avatars, demonstrating that our method outperforms existing real-time methods by a large margin and shows comparable or even better results than offline methods.
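
Below is a minimal, hypothetical sketch (not the authors' implementation) of the core idea described in the abstract: a 2D convolutional network predicts per-texel Gaussian parameters in UV texture space, and each valid texel's Gaussian is anchored to the corresponding point on the posed, deformable character mesh before splatting. All module names, channel layouts, and shapes are illustrative assumptions.

```python
# Illustrative sketch only (assumptions, not the ASH code):
# a 2D CNN predicts Gaussian parameters as a UV-space texture,
# which are then attached to posed mesh surface points.
import torch
import torch.nn as nn


class GaussianTextureNet(nn.Module):
    """Predicts 3D Gaussian parameters as a 2D texture (hypothetical layout)."""

    def __init__(self, in_channels=6, hidden=64):
        super().__init__()
        # Per texel: 3 position offsets, 3 log-scales, 4 rotation (quaternion),
        # 1 opacity, 3 RGB color -> 14 channels.
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 14, 3, padding=1),
        )

    def forward(self, pose_texture):
        # pose_texture: (B, in_channels, H, W) pose-dependent features in UV space
        return self.net(pose_texture)


def texels_to_gaussians(params, surface_points, valid_mask):
    """Attach predicted Gaussians to posed mesh surface points.

    params:         (B, 14, H, W) raw network output
    surface_points: (B, 3, H, W)  posed 3D surface position per texel
    valid_mask:     (B, H, W) bool, True where the texel maps to the mesh
    """
    p = params.permute(0, 2, 3, 1)[valid_mask]                 # (N, 14)
    anchors = surface_points.permute(0, 2, 3, 1)[valid_mask]   # (N, 3)

    positions = anchors + 0.01 * torch.tanh(p[:, 0:3])         # small offsets from the mesh
    scales = torch.exp(p[:, 3:6])                              # positive scales
    rotations = nn.functional.normalize(p[:, 6:10], dim=-1)    # unit quaternions
    opacities = torch.sigmoid(p[:, 10:11])
    colors = torch.sigmoid(p[:, 11:14])
    return positions, scales, rotations, opacities, colors


if __name__ == "__main__":
    B, H, W = 1, 256, 256
    net = GaussianTextureNet()
    pose_tex = torch.randn(B, 6, H, W)        # placeholder pose features
    surf = torch.randn(B, 3, H, W)            # placeholder posed surface positions
    mask = torch.rand(B, H, W) > 0.5          # placeholder UV validity mask
    gaussians = texels_to_gaussians(net(pose_tex), surf, mask)
    print([g.shape for g in gaussians])
```

Working in texture space keeps the prediction problem on a regular 2D grid, so the number of Gaussians scales with the texture resolution rather than with an unstructured 3D point set, which is the scalability argument made in the abstract.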
