2403.11447.md

File metadata and controls

5 lines (3 loc) · 2.14 KB

Motion-aware 3D Gaussian Splatting for Efficient Dynamic Scene Reconstruction

3D Gaussian Splatting (3DGS) has become an emerging tool for dynamic scene reconstruction. However, existing methods focus mainly on extending static 3DGS into a time-variant representation, while overlooking the rich motion information carried by 2D observations, thus suffering from performance degradation and model redundancy. To address the above problem, we propose a novel motion-aware enhancement framework for dynamic scene reconstruction, which mines useful motion cues from optical flow to improve different paradigms of dynamic 3DGS. Specifically, we first establish a correspondence between 3D Gaussian movements and pixel-level flow. Then a novel flow augmentation method is introduced with additional insights into uncertainty and loss collaboration. Moreover, for the prevalent deformation-based paradigm that presents a harder optimization problem, a transient-aware deformation auxiliary module is proposed. We conduct extensive experiments on both multi-view and monocular scenes to verify the merits of our work. Compared with the baselines, our method shows significant superiority in both rendering quality and efficiency.
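To make the idea of linking 3D Gaussian motion to 2D optical flow concrete, below is a minimal sketch (not the authors' code) of an uncertainty-weighted flow supervision term: per-Gaussian displacement between two timestamps is projected to the image plane and compared against a precomputed optical-flow value sampled at each Gaussian's projection. All function and variable names here (`project_points`, `flow_loss`, the exponential uncertainty weighting) are hypothetical illustrations, not the paper's actual formulation.

```python
import torch

def project_points(xyz: torch.Tensor, K: torch.Tensor, w2c: torch.Tensor) -> torch.Tensor:
    """Project N world-space points to pixel coordinates using intrinsics K and world-to-camera w2c."""
    cam = (w2c[:3, :3] @ xyz.T + w2c[:3, 3:4]).T            # (N, 3) camera-space points
    uv = (K @ cam.T).T                                       # (N, 3) homogeneous pixel coordinates
    return uv[:, :2] / uv[:, 2:3].clamp(min=1e-6)            # (N, 2) pixel coordinates

def flow_loss(xyz_t0, xyz_t1, K, w2c, flow_2d, uncertainty):
    """
    xyz_t0, xyz_t1 : (N, 3) Gaussian centers at consecutive timestamps
    flow_2d        : (N, 2) optical flow sampled at the Gaussians' t0 projections
    uncertainty    : (N,)   per-sample flow uncertainty (e.g. forward-backward consistency error)
    """
    uv0 = project_points(xyz_t0, K, w2c)
    uv1 = project_points(xyz_t1, K, w2c)
    induced_flow = uv1 - uv0                                 # 2D motion induced by the Gaussians' 3D movement
    weight = torch.exp(-uncertainty)                         # down-weight unreliable flow estimates
    return (weight * (induced_flow - flow_2d).abs().sum(dim=-1)).mean()
```

In such a scheme the loss would typically be added to the standard photometric rendering loss, so unreliable flow (high uncertainty) contributes little while confident flow guides how Gaussians move over time.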