Ensuring the safety of autonomous robots, such as self-driving vehicles, requires extensive testing across diverse driving scenarios. Simulation is a key ingredient for conducting such testing in a cost-effective and scalable way. Neural rendering methods have gained popularity, as they can build simulation environments from collected logs in a data-driven manner. However, existing neural radiance field (NeRF) methods for sensor-realistic rendering of camera and lidar data suffer from low rendering speeds, limiting their applicability for large-scale testing. While 3D Gaussian Splatting (3DGS) enables real-time rendering, current methods are limited to camera data and are unable to render lidar data, which is essential for autonomous driving. To address these limitations, we propose SplatAD, the first 3DGS-based method for realistic, real-time rendering of dynamic scenes for both camera and lidar data. SplatAD accurately models key sensor-specific phenomena such as rolling shutter effects, lidar intensity, and lidar ray dropouts, using purpose-built algorithms to optimize rendering efficiency. Evaluation across three autonomous driving datasets demonstrates that SplatAD achieves state-of-the-art rendering quality, with up to +2 PSNR for novel view synthesis (NVS) and +3 PSNR for reconstruction, while increasing rendering speed over NeRF-based methods by an order of magnitude. See this https URL for our project page.