Differentiable 3D Gaussian splatting (GS) is emerging as a prominent technique in computer vision and graphics for reconstructing 3D scenes. GS represents a scene as a set of 3D Gaussians with varying opacities and employs a computationally efficient splatting operation, along with analytical derivatives, to estimate the 3D Gaussian parameters from scene images captured at various viewpoints. Unfortunately, capturing surround-view (360° viewpoint) images is impossible or impractical in many real-world imaging scenarios, including underwater imaging, rooms inside a building, and autonomous navigation. In these restricted-baseline imaging scenarios, the GS algorithm suffers from the well-known 'missing cone' problem, which results in poor reconstruction along the depth axis. In this manuscript, we demonstrate that transient data (from sonars) allow us to address the missing cone problem by sampling high-frequency data along the depth axis. We extend the Gaussian splatting algorithm to two commonly used sonars and propose fusion algorithms that simultaneously utilize RGB camera data and sonar data. Through simulations, emulations, and hardware experiments across various imaging scenarios, we show that the proposed fusion algorithms lead to significantly better novel-view synthesis (5 dB improvement in PSNR) and 3D geometry reconstruction (60% lower Chamfer distance).