3D Gaussian Splatting (3DGS) has emerged as a promising approach for 3D scene representation, offering reduced computational overhead compared to Neural Radiance Fields (NeRF). However, 3DGS is prone to high-frequency artifacts and performs poorly under sparse viewpoint conditions, which limits its applicability in robotics and computer vision. To address these limitations, we introduce SVS-GS, a novel framework for Sparse Viewpoint Scene reconstruction that integrates a 3D Gaussian smoothing filter to suppress artifacts. Furthermore, our approach incorporates a Depth Gradient Profile Prior (DGPP) loss with a dynamic depth mask to sharpen edges, and a 2D diffusion prior with a Score Distillation Sampling (SDS) loss to enhance geometric consistency in novel view synthesis. Experimental evaluations on the MipNeRF-360 and SeaThru-NeRF datasets demonstrate that SVS-GS markedly improves 3D reconstruction from sparse viewpoints, offering a robust and efficient solution for scene understanding in robotics and computer vision applications.
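To make the artifact-suppression idea concrete, the sketch below shows one common formulation of a 3D Gaussian smoothing filter (in the style of Mip-Splatting): each primitive's covariance is dilated by an isotropic low-pass Gaussian and its opacity is rescaled accordingly. This is a minimal illustration under assumed conventions, not the paper's exact implementation; the function name `apply_3d_smoothing_filter` and the constant `filter_scale` are hypothetical (in practice the filter scale is usually derived per primitive from the maximal sampling rate across training views).

```python
import numpy as np

def apply_3d_smoothing_filter(cov3d, opacity, filter_scale):
    """Convolve a 3D Gaussian (covariance cov3d, scalar opacity) with an
    isotropic low-pass Gaussian of variance filter_scale**2.

    Returns the dilated covariance and an opacity rescaled so that the
    splat's peak contribution stays consistent after smoothing.
    Hypothetical helper for illustration only.
    """
    cov_filtered = cov3d + (filter_scale ** 2) * np.eye(3)
    # Opacity correction: ratio of the Gaussians' normalization constants.
    scale = np.sqrt(np.linalg.det(cov3d) / np.linalg.det(cov_filtered))
    return cov_filtered, opacity * scale

# Toy usage: a slightly anisotropic Gaussian smoothed with an assumed scale.
cov = np.diag([0.04, 0.02, 0.01])
new_cov, new_opacity = apply_3d_smoothing_filter(cov, opacity=0.9, filter_scale=0.1)
```

Because the added variance acts as a low-pass filter in 3D, primitives that would otherwise be smaller than the sampling interval are widened, which is what suppresses the high-frequency splatting artifacts mentioned above.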