3D immersive scene generation is a challenging yet critical task in computer vision and graphics. A desirable virtual 3D scene should 1) exhibit omnidirectional view consistency, and 2) allow for free exploration in complex scene hierarchies. Existing methods either rely on successive scene expansion via inpainting or employ panorama representations to capture large-FOV scene environments. However, the generated scenes suffer from semantic drift during expansion and cannot handle occlusion among scene hierarchies. To tackle these challenges, we introduce LayerPano3D, a novel framework for full-view, explorable panoramic 3D scene generation from a single text prompt. Our key insight is to decompose a reference 2D panorama into multiple layers at different depth levels, where each layer reveals the space unseen from the reference views via diffusion priors. LayerPano3D comprises multiple dedicated designs: 1) we introduce a novel text-guided anchor view synthesis pipeline for high-quality, consistent panorama generation; 2) we pioneer the Layered 3D Panorama as the underlying representation to manage complex scene hierarchies and lift it into 3D Gaussians to splat detailed 360-degree omnidirectional scenes with unconstrained viewing paths. Extensive experiments demonstrate that our framework generates state-of-the-art 3D panoramic scenes in terms of both full-view consistency and immersive exploratory experience. We believe that LayerPano3D holds promise for advancing 3D panoramic scene creation with numerous applications.
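As a minimal conceptual sketch (not the authors' implementation), the snippet below illustrates the layered-decomposition idea: an equirectangular panorama is split into depth-ordered layers by binning a per-pixel depth map into quantiles. The function name, layer count, and the NumPy quantile binning are illustrative assumptions; layer completion via diffusion inpainting and the lifting to 3D Gaussians described above are omitted.

```python
# Minimal sketch, assuming an (H, W, 3) RGB panorama and an (H, W) depth map
# (e.g. from an off-the-shelf monocular depth estimator). This is NOT the
# LayerPano3D pipeline: it only shows how pixels could be grouped into
# depth-ordered layers.
import numpy as np

def split_into_depth_layers(panorama, depth, num_layers=3):
    """Return (mask, rgb) pairs for each depth layer, ordered near to far."""
    # Quantile edges give roughly equal pixel counts per layer.
    edges = np.quantile(depth, np.linspace(0.0, 1.0, num_layers + 1))
    layer_idx = np.digitize(depth, edges[1:-1])  # values in [0, num_layers)
    layers = []
    for k in range(num_layers):
        mask = layer_idx == k
        rgb = np.where(mask[..., None], panorama, 0)  # zero out pixels outside this layer
        layers.append((mask, rgb))
    return layers

if __name__ == "__main__":
    H, W = 512, 1024  # equirectangular panoramas are typically 2:1
    pano = np.random.randint(0, 256, (H, W, 3), dtype=np.uint8)
    depth = np.random.rand(H, W).astype(np.float32)
    for i, (mask, _) in enumerate(split_into_depth_layers(pano, depth)):
        print(f"layer {i}: {mask.mean():.1%} of pixels")
```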