# Drivable 3D Gaussian Avatars

We present Drivable 3D Gaussian Avatars (D3GA), the first 3D controllable model for human bodies rendered with Gaussian splats. Current photorealistic drivable avatars require either accurate 3D registrations during training, dense input images during testing, or both. Avatars based on neural radiance fields also tend to be prohibitively slow for telepresence applications. This work uses the recently presented 3D Gaussian Splatting (3DGS) technique to render realistic humans at real-time framerates, using dense calibrated multi-view videos as input. To deform those primitives, we depart from the commonly used point deformation method of linear blend skinning (LBS) and instead use a classic volumetric deformation method: cage deformations. Because joint angles and keypoints form a compact driving signal, we use them to control these deformations, making the approach more suitable for communication applications. Our experiments on nine subjects with varied body shapes, clothes, and motions achieve higher-quality results than state-of-the-art methods when using the same training and test data.
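As a rough illustration of the cage-deformation idea, the sketch below carries Gaussian centers along with a single deforming tetrahedral cage cell via barycentric weights. The toy geometry and every function name here are illustrative assumptions, not the authors' implementation, which learns per-part cages driven by pose.

```python
import numpy as np

def barycentric_weights(p, tet):
    """Barycentric coordinates of point p w.r.t. a tetrahedron tet (4, 3)."""
    # Solve T @ w[:3] = p - tet[3], where T's columns span the tet's edges.
    T = np.column_stack([tet[i] - tet[3] for i in range(3)])  # (3, 3)
    w = np.linalg.solve(T, p - tet[3])
    return np.append(w, 1.0 - w.sum())  # four weights summing to 1

def deform_points(points, tet_rest, tet_posed):
    """Move points as convex combinations of the posed cage vertices."""
    weights = np.array([barycentric_weights(p, tet_rest) for p in points])
    return weights @ tet_posed  # (N, 4) @ (4, 3) -> (N, 3)

# Toy rest-pose cage and two Gaussian centers inside it; the posed cage
# would, in the paper's setting, come from a network driven by joint
# angles and keypoints (here we just translate it for illustration).
tet_rest = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
tet_posed = tet_rest + np.array([0.1, 0.0, 0.2])
centers = np.array([[0.25, 0.25, 0.25], [0.1, 0.1, 0.1]])
print(deform_points(centers, tet_rest, tet_posed))
```

Because the weights depend only on the rest pose, they can be precomputed once per Gaussian; at test time each frame costs one small matrix product, which is consistent with the real-time framerates the abstract targets.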
