2411.17605.md

File metadata and controls

7 lines (5 loc) · 2.2 KB

Distractor-free Generalizable 3D Gaussian Splatting

We present DGGS, a novel framework addressing the previously unexplored challenge of Distractor-free Generalizable 3D Gaussian Splatting (3DGS). It accomplishes two key objectives: fortifying generalizable 3DGS against distractor-laden data during both the training and inference phases, while extending cross-scene adaptation capabilities to conventional distractor-free approaches. To achieve these objectives, DGGS introduces a scene-agnostic, reference-based mask prediction and refinement methodology during the training phase, coupled with a training view selection strategy, effectively improving distractor prediction accuracy and training stability. Moreover, to address distractor-induced voids and artifacts during the inference stage, we propose a two-stage inference framework that improves reference selection based on the predicted distractor masks, complemented by a distractor pruning module to eliminate residual distractor effects. Extensive generalization experiments demonstrate DGGS's advantages under distractor-laden conditions. Additionally, experimental results show that our scene-agnostic mask inference achieves accuracy comparable to scene-specific trained methods. Homepage is \url{this https URL}.
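The two-stage inference idea — predict per-view distractor masks, then pick the cleanest candidate views as references — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, the array shapes, and the mean-coverage scoring rule are all assumptions, since the abstract does not specify interfaces.

```python
# Hypothetical sketch of mask-based reference selection (assumed interface):
# rank candidate reference views by predicted distractor coverage, then keep
# the k views with the least coverage for the second rendering pass.
import numpy as np

def select_references(distractor_masks: np.ndarray, k: int) -> list:
    """distractor_masks: (N, H, W) array of per-view masks in [0, 1],
    where higher values mark pixels predicted to belong to distractors."""
    n = len(distractor_masks)
    # Stage 1: score each candidate view by its fraction of distractor pixels.
    scores = distractor_masks.reshape(n, -1).mean(axis=1)
    # Stage 2: keep the k views with the least distractor coverage.
    return list(np.argsort(scores)[:k])

# Toy example: 4 candidate views of an 8x8 image.
masks = np.zeros((4, 8, 8))
masks[0, :4] = 1.0   # view 0: half the pixels flagged as distractor
masks[2, :2] = 1.0   # view 2: a quarter flagged
chosen = select_references(masks, k=2)
print(chosen)        # the two views with the least distractor coverage
```

In this toy case views 1 and 3 are fully clean, so they are the ones selected; the paper's actual pipeline additionally refines the masks and prunes residual distractor Gaussians, which this sketch omits.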
