Recent advancements in high-fidelity dynamic scene reconstruction have leveraged dynamic 3D Gaussians and 4D Gaussian Splatting for realistic scene representation. However, to make these methods viable for real-time applications such as AR/VR, gaming, and rendering on low-power devices, substantial reductions in memory usage and improvements in rendering efficiency are required. While many state-of-the-art methods prioritize lightweight implementations, they struggle to handle scenes with complex motions or long sequences. In this work, we introduce Temporally Compressed 3D Gaussian Splatting (TC3DGS), a novel technique designed specifically to compress dynamic 3D Gaussian representations effectively. TC3DGS selectively prunes Gaussians based on their temporal relevance and employs gradient-aware mixed-precision quantization to dynamically compress Gaussian parameters. In a post-processing step, it additionally applies a variation of the Ramer-Douglas-Peucker algorithm to further reduce storage, representing Gaussian trajectories with sparse keypoints that are interpolated across frames. Our experiments across multiple datasets demonstrate that TC3DGS achieves up to 67× compression with minimal or no degradation in visual quality.
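The trajectory compression step builds on the Ramer-Douglas-Peucker (RDP) algorithm. As a rough illustration of the underlying idea only, the sketch below applies a standard RDP criterion to a per-Gaussian center trajectory sampled once per frame and returns the indices of keyframes to keep; intermediate frames would be recovered by interpolation. The function name `rdp_trajectory`, the NumPy implementation, and the tolerance `epsilon` are assumptions for illustration, not the paper's exact variant.

```python
import numpy as np

def rdp_trajectory(points, epsilon):
    """Simplify a 3D trajectory (T x 3 array of per-frame Gaussian centers)
    with a Ramer-Douglas-Peucker criterion: keep only keyframes whose removal
    would cause more than `epsilon` deviation from the straight line between
    the endpoints of the current span. Returns indices of kept frames."""
    points = np.asarray(points, dtype=np.float64)
    if len(points) < 3:
        return np.arange(len(points))

    start, end = points[0], points[-1]
    seg = end - start
    seg_len = np.linalg.norm(seg)
    if seg_len == 0.0:
        # Degenerate span (no net motion): use distance to the start point.
        dists = np.linalg.norm(points - start, axis=1)
    else:
        # Perpendicular distance of each point to the start-end line.
        dists = np.linalg.norm(np.cross(points - start, seg / seg_len), axis=1)

    idx = int(np.argmax(dists))
    if dists[idx] <= epsilon:
        # The whole span is well approximated by its two endpoints.
        return np.array([0, len(points) - 1])

    # Split at the worst-approximated frame and recurse on both halves.
    left = rdp_trajectory(points[: idx + 1], epsilon)
    right = rdp_trajectory(points[idx:], epsilon) + idx
    return np.unique(np.concatenate([left, right]))

# Example: a smooth sinusoidal trajectory over 60 frames needs only a few
# keyframes at a 1e-2 tolerance; static Gaussians collapse to two endpoints.
T = 60
t = np.linspace(0.0, 1.0, T)
traj = np.stack([t, np.sin(2 * np.pi * t), np.zeros(T)], axis=1)
keyframes = rdp_trajectory(traj, epsilon=1e-2)
print(f"kept {len(keyframes)}/{T} frames")
```

In this reading, storage savings come from keeping only the keyframe positions per Gaussian and reconstructing the remaining frames by interpolation at render time; nearly static Gaussians reduce to their endpoints, while fast-moving ones retain more keyframes.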