ULSR-GS: Ultra Large-scale Surface Reconstruction Gaussian Splatting with Multi-View Geometric Consistency
While Gaussian Splatting (GS) delivers efficient, high-quality scene rendering and small-area surface extraction, it falls short on surface extraction from large-scale aerial imagery. To overcome this, we present ULSR-GS, a framework dedicated to high-fidelity surface extraction in ultra-large-scale scenes, addressing the limitations of existing GS-based mesh extraction methods. Specifically, we propose a point-to-photo partitioning approach combined with a multi-view optimal view matching principle to select the best training images for each sub-region. Additionally, during training, ULSR-GS employs a densification strategy based on multi-view geometric consistency to enhance surface detail. Experimental results demonstrate that ULSR-GS outperforms other state-of-the-art GS-based methods on large-scale aerial photogrammetry benchmark datasets, significantly improving surface extraction accuracy in complex urban environments.
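To make the point-to-photo partitioning idea concrete, the following is a minimal sketch, not the authors' implementation, of how images might be assigned to spatial sub-regions by counting how many of a sub-region's sparse SfM points each image observes; the inputs `points_xyz`, `point_visibility`, `region_bounds`, and the `top_k` budget are hypothetical names introduced here for illustration.

```python
# Hypothetical sketch of point-to-photo partitioning with best-view selection.
import numpy as np
from collections import Counter


def partition_images(points_xyz, point_visibility, region_bounds, top_k=200):
    """Assign the top_k most relevant images to each spatial sub-region.

    points_xyz       : (N, 3) array of sparse SfM points.
    point_visibility : list of N lists, image ids observing each point.
    region_bounds    : list of (xmin, xmax, ymin, ymax) ground-plane tiles.
    """
    assignments = {}
    for r, (xmin, xmax, ymin, ymax) in enumerate(region_bounds):
        inside = (
            (points_xyz[:, 0] >= xmin) & (points_xyz[:, 0] < xmax) &
            (points_xyz[:, 1] >= ymin) & (points_xyz[:, 1] < ymax)
        )
        # Count how many in-region points each image observes; the images with
        # the highest counts serve as the best-matching views for this tile.
        votes = Counter()
        for pid in np.flatnonzero(inside):
            votes.update(point_visibility[pid])
        assignments[r] = [img for img, _ in votes.most_common(top_k)]
    return assignments
```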
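The multi-view geometric consistency criterion can be illustrated with a similar sketch, written under our own assumptions rather than the paper's exact rule: a candidate 3D point is treated as consistent, and thus eligible for densification, only if its reprojected depth agrees with the per-view depth maps of enough neighbouring views. The structures `K`, `w2c`, and `depth` are hypothetical inputs.

```python
# Hypothetical multi-view geometric consistency test used to gate densification.
import numpy as np


def is_geometrically_consistent(p_world, views, rel_thresh=0.01, min_views=3):
    """views: list of dicts with 'K' (3x3), 'w2c' (4x4), 'depth' (H, W) arrays."""
    consistent = 0
    for v in views:
        # Transform the candidate point into this view's camera frame.
        p_cam = v["w2c"][:3, :3] @ p_world + v["w2c"][:3, 3]
        if p_cam[2] <= 0:  # behind the camera
            continue
        uv = v["K"] @ (p_cam / p_cam[2])
        col, row = int(round(uv[0])), int(round(uv[1]))
        H, W = v["depth"].shape
        if not (0 <= col < W and 0 <= row < H):
            continue
        # Accept the view if the projected depth matches the stored depth map
        # within a relative threshold.
        if abs(v["depth"][row, col] - p_cam[2]) < rel_thresh * p_cam[2]:
            consistent += 1
    return consistent >= min_views
```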