Recent advances in 3D content creation mostly leverage optimization-based 3D generation via score distillation sampling (SDS). Although these methods have shown promising results, they typically suffer from slow per-sample optimization, which limits their practical use. In this paper, we propose DreamGaussian, a novel 3D content generation framework that achieves efficiency and quality simultaneously. Our key insight is to design a generative 3D Gaussian Splatting model with accompanying mesh extraction and texture refinement in UV space. In contrast to the occupancy pruning used in Neural Radiance Fields, we demonstrate that the progressive densification of 3D Gaussians converges significantly faster for 3D generative tasks. To further enhance texture quality and facilitate downstream applications, we introduce an efficient algorithm to convert 3D Gaussians into textured meshes, followed by a fine-tuning stage that refines the details. Extensive experiments demonstrate the superior efficiency and competitive generation quality of our approach. Notably, DreamGaussian produces high-quality textured meshes from a single-view image in just 2 minutes, roughly a 10x speedup over existing methods.
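For context, the SDS objective referenced above is the score distillation gradient introduced in DreamFusion; the abstract does not define it, so the notation below is the standard one rather than the paper's own. Here $\theta$ parameterizes the 3D representation, $x$ is a rendered image, $x_t$ its noised version at timestep $t$, $\hat{\epsilon}_\phi$ is the denoiser of a pretrained 2D diffusion prior conditioned on prompt $y$, and $w(t)$ is a timestep weighting:

$$\nabla_\theta \mathcal{L}_{\mathrm{SDS}} = \mathbb{E}_{t,\epsilon}\!\left[\, w(t)\,\big(\hat{\epsilon}_\phi(x_t;\, y,\, t) - \epsilon\big)\,\frac{\partial x}{\partial \theta} \,\right]$$

Intuitively, the diffusion prior's predicted noise is compared against the injected noise, and the residual is backpropagated through the renderer to update the 3D representation.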
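The progressive densification mentioned in the abstract follows the clone-and-split heuristic of 3D Gaussian Splatting: Gaussians in under-reconstructed regions (large view-space gradients) are either duplicated or split depending on their size. The following is a minimal, hypothetical PyTorch sketch of that idea, not the authors' implementation; the tensor names (`means`, `scales`, `grad_norm`) and the threshold values are assumptions for illustration, and opacity-based pruning is omitted.

```python
import torch

GRAD_THRESH = 0.0002   # densify Gaussians whose accumulated gradient exceeds this (illustrative)
SCALE_THRESH = 0.01    # Gaussians larger than this are split; smaller ones are cloned (illustrative)

def densify(means, scales, grad_norm):
    """One densification step over N Gaussians.

    means:     (N, 3) Gaussian centers
    scales:    (N, 3) per-axis extents
    grad_norm: (N,)   accumulated view-space positional gradient norms
    """
    hot = grad_norm > GRAD_THRESH                   # under-reconstructed regions
    big = scales.max(dim=-1).values > SCALE_THRESH
    clone_mask = hot & ~big                         # small Gaussians: duplicate in place
    split_mask = hot & big                          # large Gaussians: split into smaller ones

    cloned_means = means[clone_mask]
    cloned_scales = scales[clone_mask]

    # Split: sample two new centers inside each parent and shrink the extent.
    parent_means, parent_scales = means[split_mask], scales[split_mask]
    offsets = torch.randn_like(parent_means) * parent_scales
    split_means = torch.cat([parent_means + offsets, parent_means - offsets])
    split_scales = parent_scales.repeat(2, 1) / 1.6  # 1.6 is the shrink factor used in 3DGS

    new_means = torch.cat([means, cloned_means, split_means])
    new_scales = torch.cat([scales, cloned_scales, split_scales])
    return new_means, new_scales
```

Repeating this step at intervals during optimization grows the point set where the loss demands detail, which is the mechanism the abstract credits for faster convergence than occupancy pruning in NeRF-style grids.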