
Enhancing Single Image to 3D Generation using Gaussian Splatting and Hybrid Diffusion Priors

3D object generation from a single image involves estimating the full 3D geometry and texture of unseen views from a single unposed RGB image captured in the wild. Accurately reconstructing an object's complete 3D structure and texture has numerous real-world applications, including robotic manipulation, grasping, 3D scene understanding, and AR/VR. Recent advances in 3D object generation reconstruct an object's 3D shape and texture by optimizing the efficient Gaussian Splatting representation, guided by pre-trained 2D or 3D diffusion models. However, a notable disparity exists between the training datasets of these models, leading to distinct differences in their outputs: 2D models generate highly detailed visuals but lack cross-view consistency in geometry and texture, whereas 3D models ensure consistency across views but often produce overly smooth textures. To address this limitation, we propose bridging the gap between 2D and 3D diffusion models by integrating a two-stage frequency-based distillation loss with Gaussian Splatting. Specifically, we leverage geometric priors in the low-frequency spectrum from a 3D diffusion model to maintain consistent geometry, and use a 2D diffusion model to refine the fidelity and texture of the generated 3D structure in the high-frequency spectrum, yielding more detailed and fine-grained results. Our approach improves geometric consistency and visual quality, outperforming the current state of the art (SOTA). In addition, we demonstrate that our method adapts readily to efficient object pose estimation and tracking.
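The abstract does not give implementation details, but the core idea (split the rendering into frequency bands and distill each band from a different prior) can be sketched. Below is a minimal, hypothetical PyTorch sketch: `lowpass` does an FFT-based frequency split, and `sds_grad_3d` / `sds_grad_2d` are placeholder stand-ins for score-distillation gradients from the pre-trained 3D and 2D diffusion models; the stage names, cutoff, and weights are assumptions, not values from the paper.

```python
# Sketch (not the authors' code): two-stage frequency-based distillation
# combining a 3D diffusion prior (low band) with a 2D diffusion prior
# (high band). The prior callables are hypothetical placeholders.
import torch
import torch.fft


def lowpass(img: torch.Tensor, cutoff: float = 0.15) -> torch.Tensor:
    """Keep only spatial frequencies below `cutoff` (fraction of the sample rate)."""
    h, w = img.shape[-2:]
    fy = torch.fft.fftfreq(h, device=img.device).abs()
    fx = torch.fft.fftfreq(w, device=img.device).abs()
    mask = ((fy[:, None] ** 2 + fx[None, :] ** 2).sqrt() < cutoff).to(img.dtype)
    spec = torch.fft.fft2(img)          # filter in the Fourier domain
    return torch.fft.ifft2(spec * mask).real


def hybrid_distillation_loss(rendered, sds_grad_3d, sds_grad_2d,
                             stage: str, w_geo: float = 1.0, w_tex: float = 0.5):
    """Assumed two-stage scheme: stage "geometry" supervises the
    low-frequency band with the 3D prior for cross-view-consistent shape;
    stage "texture" additionally supervises the high-frequency residual
    with the 2D prior for fine detail."""
    low = lowpass(rendered)
    high = rendered - low
    # SDS-style surrogate losses: multiplying the band by a detached
    # gradient makes backprop deliver exactly that gradient to the
    # Gaussian Splatting parameters behind `rendered`.
    loss = w_geo * (low * sds_grad_3d(low).detach()).mean()
    if stage == "texture":
        loss = loss + w_tex * (high * sds_grad_2d(high).detach()).mean()
    return loss


if __name__ == "__main__":
    # Stand-in for a differentiable Gaussian Splatting render, and random
    # tensors in place of real diffusion-model score gradients.
    render = torch.rand(1, 3, 64, 64, requires_grad=True)
    fake_prior = lambda x: torch.randn_like(x)
    loss = hybrid_distillation_loss(render, fake_prior, fake_prior, stage="texture")
    loss.backward()
    print(loss.item(), render.grad.abs().mean().item())
```

In this reading, the 3D prior never sees (and so never over-smooths) the high-frequency content, while the 2D prior only shapes detail on top of geometry the 3D prior has already made view-consistent.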
