VastGaussian: Vast 3D Gaussians for Large Scene Reconstruction

Existing NeRF-based methods for large scene reconstruction often have limitations in visual quality and rendering speed. While the recent 3D Gaussian Splatting works well on small-scale and object-centric scenes, scaling it up to large scenes poses challenges due to limited video memory, long optimization time, and noticeable appearance variations. To address these challenges, we present VastGaussian, the first method for high-quality reconstruction and real-time rendering on large scenes based on 3D Gaussian Splatting. We propose a progressive partitioning strategy to divide a large scene into multiple cells, where the training cameras and point cloud are properly distributed with an airspace-aware visibility criterion. These cells are merged into a complete scene after parallel optimization. We also introduce decoupled appearance modeling into the optimization process to reduce appearance variations in the rendered images. Our approach outperforms existing NeRF-based methods and achieves state-of-the-art results on multiple large scene datasets, enabling fast optimization and high-fidelity real-time rendering.

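The abstract describes a divide-and-conquer pipeline: partition the scene into cells, assign training cameras and points to each cell, optimize the cells in parallel, then merge them back into one scene. Below is a minimal, hypothetical sketch of that partition-then-merge flow. The uniform grid layout, the box-overlap camera test (a crude stand-in for the paper's airspace-aware visibility criterion), and all function names are illustrative assumptions, not the paper's implementation; decoupled appearance modeling is omitted.

```python
# Sketch of a partition -> parallel optimize -> merge pipeline for a large scene.
# Assumptions (not from the paper): a fixed 2D grid of cells, a simple bounding-box
# overlap test for assigning cameras, and a stubbed-out per-cell optimizer.
import numpy as np

def partition_scene(points, cam_positions, n_cells=(2, 2), margin=0.1):
    """Split the scene into a 2D grid of cells on the ground (x, y) plane.

    Each cell receives the points inside its slightly expanded bounds and the
    cameras located in a larger expanded region around it, standing in for a
    visibility-based camera selection.
    """
    mins, maxs = points[:, :2].min(axis=0), points[:, :2].max(axis=0)
    extent = maxs - mins
    cells = []
    for i in range(n_cells[0]):
        for j in range(n_cells[1]):
            lo = mins + extent * np.array([i / n_cells[0], j / n_cells[1]])
            hi = mins + extent * np.array([(i + 1) / n_cells[0], (j + 1) / n_cells[1]])
            pad = extent * margin  # overlap between neighboring cells
            in_cell = np.all((points[:, :2] >= lo - pad) &
                             (points[:, :2] <= hi + pad), axis=1)
            near_cam = np.all((cam_positions[:, :2] >= lo - 2 * pad) &
                              (cam_positions[:, :2] <= hi + 2 * pad), axis=1)
            cells.append({"points": points[in_cell],
                          "camera_ids": np.where(near_cam)[0]})
    return cells

def optimize_cell(cell):
    """Placeholder for per-cell 3D Gaussian optimization (run in parallel in practice)."""
    return cell["points"]  # a real implementation would return optimized Gaussians

def merge_cells(optimized):
    """Concatenate per-cell results into one scene (overlap duplicates would be pruned)."""
    return np.concatenate(optimized, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform(-50, 50, size=(10_000, 3))   # stand-in for a sparse SfM point cloud
    cams = rng.uniform(-50, 50, size=(200, 3))     # stand-in for training camera positions
    cells = partition_scene(pts, cams)
    scene = merge_cells([optimize_cell(c) for c in cells])
    print(f"{len(cells)} cells, merged scene has {scene.shape[0]} points")
```

The cell overlap (`margin`) is what lets independently optimized cells blend at their boundaries when merged; how the overlap and the camera/point assignment are actually chosen is specific to the paper's progressive partitioning strategy and is not reproduced here.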