# SWAGS: Sampling Windows Adaptively for Dynamic 3D Gaussian Splatting

Novel view synthesis has shown rapid progress recently, with methods capable of producing ever more photo-realistic results. 3D Gaussian Splatting has emerged as a particularly promising method, producing high-quality renderings of static scenes and enabling interactive viewing at real-time frame rates. However, it is currently limited to static scenes. In this work, we extend 3D Gaussian Splatting to reconstruct dynamic scenes. We model the dynamics of a scene using a tunable MLP, which learns the deformation field from a canonical space to a set of 3D Gaussians per frame. To disentangle the static and dynamic parts of the scene, we learn a tunable parameter for each Gaussian, which weighs the respective MLP parameters to focus attention on the dynamic parts. This improves the model's ability to capture dynamics in scenes with an imbalance of static to dynamic regions. To handle scenes of arbitrary length whilst maintaining high rendering quality, we introduce an adaptive window sampling strategy to partition the sequence into windows based on the amount of movement in the sequence. We train a separate dynamic Gaussian Splatting model for each window, allowing the canonical representation to change, thus enabling the reconstruction of scenes with significant geometric or topological changes. Temporal consistency is enforced using a fine-tuning step with a self-supervised consistency loss on randomly sampled novel views. As a result, our method produces high-quality renderings of general dynamic scenes with competitive quantitative performance, which can be viewed in real time with our dynamic interactive viewer.
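The abstract only states that the sequence is partitioned into windows "based on the amount of movement"; the sketch below illustrates one plausible greedy interpretation of that idea. The per-frame motion scores, the motion budget, and the function name `partition_into_windows` are assumptions introduced here for illustration, not the paper's actual implementation.

```python
# Minimal sketch of an adaptive window partition: grow a window until its
# accumulated per-frame motion exceeds a budget, then start a new window.
# How "motion" is measured (e.g. mean optical-flow magnitude) is an assumption.
from typing import List, Tuple


def partition_into_windows(per_frame_motion: List[float],
                           motion_budget: float,
                           min_window_len: int = 2) -> List[Tuple[int, int]]:
    """Return [start, end) frame-index pairs covering the whole sequence."""
    windows = []
    start = 0
    accumulated = 0.0
    for i, motion in enumerate(per_frame_motion):
        accumulated += motion
        window_len = i - start + 1
        # Close the current window once the motion budget is spent,
        # but never produce windows shorter than min_window_len.
        if accumulated > motion_budget and window_len >= min_window_len:
            windows.append((start, i + 1))
            start = i + 1
            accumulated = 0.0
    if start < len(per_frame_motion):
        windows.append((start, len(per_frame_motion)))
    return windows


# Example: a burst of motion in the middle of the sequence yields shorter
# windows there, so the per-window canonical representation stays valid.
motion = [0.1, 0.1, 0.1, 0.9, 1.2, 1.0, 0.2, 0.1, 0.1]
print(partition_into_windows(motion, motion_budget=1.0))
# -> [(0, 4), (4, 6), (6, 9)]
```

In the paper's pipeline, each such window would then get its own dynamic Gaussian Splatting model with its own canonical representation, with temporal consistency across window boundaries enforced by the fine-tuning step described above.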
