In computer graphics, there is a need to recover easily modifiable representations of 3D geometry and appearance from image data. We introduce a novel method for this task based on 3D Gaussian Splatting, which enables intuitive scene editing through mesh adjustments. Starting from input images and camera poses, we reconstruct the underlying geometry with a neural Signed Distance Field and extract a high-quality mesh. Our model then estimates a set of flat Gaussians whose opacity is conditioned on the recovered neural surface. To facilitate editing, we produce a proxy representation that encodes the Gaussians' shapes and positions. Unlike other methods, our pipeline propagates modifications applied to the extracted mesh to the proxy representation, from which we recover the updated Gaussian parameters, effectively transferring the mesh edits back to the recovered appearance representation. By leveraging mesh-guided transformations, our approach simplifies 3D scene editing and improves on existing methods in both usability and the visual fidelity of edits. The complete source code for this project can be accessed at \url{this https URL}
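To make the mesh-to-Gaussian propagation concrete, here is a minimal sketch, not the authors' implementation, of one plausible proxy design: each flat Gaussian is anchored to a mesh triangle by barycentric coordinates, and after an edit its mean and rotation are re-derived from the deformed triangle's tangent frame. The function names (`triangle_frame`, `propagate_edit`) and the proxy layout are illustrative assumptions.

```python
# Hypothetical sketch of mesh-guided Gaussian updates; the paper's actual
# proxy representation may differ. Assumes each Gaussian is bound to one
# triangle via barycentric coordinates and follows its tangent frame.
import numpy as np

def triangle_frame(v0, v1, v2):
    """Orthonormal tangent frame (tangent, bitangent, normal) of a triangle."""
    t = v1 - v0
    t /= np.linalg.norm(t)
    n = np.cross(v1 - v0, v2 - v0)
    n /= np.linalg.norm(n)
    b = np.cross(n, t)
    return np.stack([t, b, n], axis=1)  # columns: t, b, n

def propagate_edit(verts_old, verts_new, faces, proxy):
    """Recover updated Gaussian means/rotations after mesh vertices move.

    proxy: list of (face_index, barycentric_coords, R_old) per Gaussian,
           where R_old is the Gaussian's 3x3 rotation on the unedited mesh.
    """
    updated = []
    for face_idx, bary, R_old in proxy:
        i, j, k = faces[face_idx]
        p_old = (verts_old[i], verts_old[j], verts_old[k])
        p_new = (verts_new[i], verts_new[j], verts_new[k])
        # New mean: same barycentric point on the deformed triangle.
        mu_new = bary[0] * p_new[0] + bary[1] * p_new[1] + bary[2] * p_new[2]
        # New rotation: re-express the Gaussian in the deformed tangent frame.
        F_old = triangle_frame(*p_old)
        F_new = triangle_frame(*p_new)
        R_new = F_new @ F_old.T @ R_old
        updated.append((mu_new, R_new))
    return updated
```

Binding Gaussians to triangle frames in this way keeps the flat components glued to the surface under edits, which is one simple reading of how mesh adjustments could drive the appearance representation described above.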