In recent years, a range of neural network-based methods for image rendering have been introduced. For instance, the widely researched neural radiance fields (NeRF) rely on a neural network to represent 3D scenes, allowing for realistic view synthesis from a small number of 2D images. However, most NeRF models are constrained by long training and inference times. In comparison, Gaussian Splatting (GS) is a novel, state-of-the-art technique for rendering points in a 3D scene by approximating their contribution to image pixels through Gaussian distributions, enabling fast training and swift, real-time rendering. A drawback of GS is the absence of a well-defined approach to conditioning it, owing to the need to condition several hundred thousand Gaussian components. To solve this, we introduce the Gaussian Mesh Splatting (GaMeS) model, a hybrid of a mesh and a Gaussian distribution that pins all Gaussian splats on the object surface (mesh). The unique contribution of our method is defining Gaussian splats solely based on their location on the mesh, allowing for automatic adjustment of position, scale, and rotation during animation. As a result, we obtain high-quality renders of novel views in real time. Furthermore, we demonstrate that in the absence of a predefined mesh, it is possible to fine-tune the initial mesh during the learning process.
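To make the core mechanism concrete, the sketch below illustrates the general idea of pinning a Gaussian to a mesh face: the splat's mean, rotation, and scale are derived entirely from the triangle's vertices, so deforming the mesh automatically repositions, re-orients, and rescales the splat. This is a minimal illustration under our own naming (the function `gaussian_from_face` and its barycentric/edge-based construction are assumptions for exposition), not the paper's exact parameterization.

```python
import numpy as np

def gaussian_from_face(v1, v2, v3, alpha=(1/3, 1/3, 1/3), eps=1e-8):
    """Derive a Gaussian's mean, rotation, and scale from one mesh triangle.

    v1, v2, v3 : (3,) vertex positions of the face.
    alpha      : barycentric weights placing the Gaussian on the face.
    Returns (mean, R, s): mean position, 3x3 rotation, per-axis scales.
    Hypothetical construction for illustration only.
    """
    # Mean: a convex (barycentric) combination of the face vertices,
    # so moving the vertices moves the Gaussian with them.
    mean = alpha[0] * v1 + alpha[1] * v2 + alpha[2] * v3

    # Rotation: an orthonormal frame built from the face normal and one edge.
    e1 = v2 - v1
    e2 = v3 - v1
    n = np.cross(e1, e2)
    n /= (np.linalg.norm(n) + eps)             # face normal
    t = e1 / (np.linalg.norm(e1) + eps)        # first in-plane axis
    b = np.cross(n, t)                         # second in-plane axis
    R = np.stack([n, t, b], axis=1)            # columns are the local axes

    # Scale: nearly flat along the normal, edge-length dependent in the plane,
    # so the splat stays glued to (and sized like) the triangle.
    e2_perp = e2 - np.dot(e2, t) * t           # in-plane component orthogonal to t
    s = np.array([eps, np.linalg.norm(e1), np.linalg.norm(e2_perp)])
    return mean, R, s

# Usage: after the mesh is deformed or animated, re-evaluating the function on
# the new vertex positions updates the Gaussian without any retraining.
v1, v2, v3 = np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.])
mean, R, s = gaussian_from_face(v1, v2, v3)
```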