Text-to-3D generation is a rapidly advancing field that enables the transformation of textual descriptions into detailed 3D models. However, current approaches often neglect the intricate high-order correlations between geometry and texture within 3D objects, leading to challenges such as over-smoothness, over-saturation, and the Janus problem. In this work, we propose a method named ``3D Gaussian Generation via Hypergraph (Hyper-3DG)'', designed to capture the sophisticated high-order correlations present within 3D objects. Our framework is anchored by a well-established main flow and an essential module, named
``Geometry and Texture Hypergraph Refiner (HGRefiner)''. This module not only refines the representation of 3D Gaussians but also accelerates their update process by conducting Patch-3DGS Hypergraph Learning on both explicit attributes and latent visual features. Our framework enables the generation of finely detailed 3D objects within a cohesive optimization process, effectively avoiding degradation. Extensive experiments show that our proposed method significantly enhances the quality of 3D generation while incurring no additional computational overhead for the underlying framework.
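To make the idea of Patch-3DGS Hypergraph Learning concrete, the following is a minimal sketch, assuming patch-level features formed by concatenating explicit Gaussian attributes with latent visual features. The helper names (`build_knn_hyperedges`, `hypergraph_refine`), the k-nearest-neighbour hyperedge construction, and the single aggregation step are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of patch-level hypergraph refinement over 3D Gaussians.
# Assumption: each "patch" already has a feature vector (explicit attributes
# such as position/scale/rotation/opacity plus a latent visual embedding).
import numpy as np


def build_knn_hyperedges(feats: np.ndarray, k: int = 4) -> list[list[int]]:
    """Each patch spawns one hyperedge containing itself and its k nearest
    neighbours in feature space, so every hyperedge links k+1 patches."""
    dist = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    order = np.argsort(dist, axis=1)[:, : k + 1]  # self is always the closest
    return [list(row) for row in order]


def hypergraph_refine(feats: np.ndarray, hyperedges: list[list[int]],
                      alpha: float = 0.5) -> np.ndarray:
    """One node -> hyperedge -> node aggregation step: average member features
    into a hyperedge descriptor, scatter it back, and blend with the input."""
    agg = np.zeros_like(feats)
    cnt = np.zeros(len(feats))
    for edge in hyperedges:
        edge_feat = feats[edge].mean(axis=0)          # hyperedge descriptor
        for i in edge:
            agg[i] += edge_feat
            cnt[i] += 1
    agg /= np.maximum(cnt[:, None], 1.0)
    return (1 - alpha) * feats + alpha * agg          # refined patch features


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_patches = 32
    explicit = rng.normal(size=(n_patches, 14))   # e.g. mean/scale/rot/opacity
    latent = rng.normal(size=(n_patches, 64))     # e.g. rendered-view embedding
    feats = np.concatenate([explicit, latent], axis=1)

    edges = build_knn_hyperedges(feats, k=4)
    refined = hypergraph_refine(feats, edges)
    print(refined.shape)                          # (32, 78)
```

The refined patch features could then be mapped back to per-Gaussian updates; how that mapping and the optimization loop are realized is specific to the method described in the paper and is not shown here.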