# Depth Estimation Based on 3D Gaussian Splatting Siamese Defocus

Depth estimation is a fundamental task in 3D geometry. While stereo depth can be estimated through triangulation, monocular estimation is less straightforward, as it requires integrating global and local information. Depth from Defocus (DFD) methods use camera lens models and parameters to recover depth from blurred images and have been shown to perform well. However, these methods rely on All-In-Focus (AIF) images for depth estimation, which are nearly impossible to obtain in real-world applications. To address this issue, we propose a self-supervised framework based on 3D Gaussian splatting and Siamese networks. By learning the blur levels of the same scene at different focal distances in a focal stack, the framework predicts the defocus map and Circle of Confusion (CoC) from a single defocused image and feeds the defocus map to DepthNet for monocular depth estimation. The 3D Gaussian splatting model renders defocused images using the predicted CoC, and the differences between these renderings and the real defocused images provide additional supervision signals for the Siamese Defocus self-supervised network. The framework has been validated on both synthetically blurred and real blurred datasets. Quantitative and visual experiments demonstrate that the proposed framework is highly effective as a DFD method.
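The abstract rests on two ideas: a lens-based Circle of Confusion model that ties defocus blur to depth, and a consistency signal between defocused images rendered from the predicted CoC and the captured defocused images. The sketch below is not the authors' implementation; it only illustrates the standard thin-lens CoC formula and a simple L1 photometric loss, with hypothetical function and parameter names (`coc_from_depth`, `focus_dist`, `focal_len`, `f_number`, `pixel_size`) chosen for illustration.

```python
# Minimal sketch (not the paper's code) of the thin-lens CoC model and a
# rendered-vs-real defocus consistency loss. All names are hypothetical.
import torch
import torch.nn.functional as F

def coc_from_depth(depth, focus_dist, focal_len, f_number, pixel_size):
    """Thin-lens CoC diameter (in pixels) for scene points at distance `depth`.

    Standard geometric-optics relation (all distances in metres):
        aperture = focal_len / f_number
        coc = aperture * |depth - focus_dist| / depth * focal_len / (focus_dist - focal_len)
    """
    aperture = focal_len / f_number
    coc_metres = (aperture * torch.abs(depth - focus_dist) / depth
                  * focal_len / (focus_dist - focal_len))
    return coc_metres / pixel_size  # convert sensor-plane diameter to pixels

def defocus_consistency_loss(rendered_defocused, real_defocused):
    """Self-supervision signal: a defocused image rendered from the predicted CoC
    (e.g. by the 3DGS model) should match the captured defocused image."""
    return F.l1_loss(rendered_defocused, real_defocused)

# Example: a 480x640 depth map, camera focused at 1.5 m, 50 mm lens at f/2.0.
depth = torch.rand(480, 640) * 4.0 + 0.5           # depths in [0.5, 4.5] m
coc = coc_from_depth(depth, focus_dist=1.5, focal_len=0.05,
                     f_number=2.0, pixel_size=5e-6)  # per-pixel CoC, in pixels
```

Under this model, points at the focus distance have zero CoC and blur grows with the depth mismatch, which is why a predicted defocus/CoC map can act as a proxy input for depth estimation.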
