**TrafficBots: Towards World Models for Autonomous Driving Simulation and Motion Prediction**
Zhejun Zhang, Alexander Liniger, Dengxin Dai, Fisher Yu and Luc Van Gool.

TrafficBots is a multi-agent policy that generates realistic behaviors for bot agents by learning from real-world data. If you find this work useful, please cite:
```bibtex
@inproceedings{zhang2023trafficbots,
  title     = {{TrafficBots}: Towards World Models for Autonomous Driving Simulation and Motion Prediction},
  author    = {Zhang, Zhejun and Liniger, Alexander and Dai, Dengxin and Yu, Fisher and Van Gool, Luc},
  booktitle = {International Conference on Robotics and Automation (ICRA)},
  year      = {2023}
}
```
- Create the conda environment by running:
  ```bash
  conda env create -f environment.yml
  ```
- Install the Waymo Open Dataset API manually, because the pip installation of version 1.5.2 is not supported on some Linux distributions, e.g. CentOS:
  ```bash
  conda activate traffic_bots
  wget https://files.pythonhosted.org/packages/85/1d/4cdd31fc8e88c3d689a67978c41b28b6e242bd4fe6b080cf8c99663b77e4/waymo_open_dataset_tf_2_11_0-1.5.2-py3-none-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl
  mv waymo_open_dataset_tf_2_11_0-1.5.2-py3-none-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl waymo_open_dataset_tf_2_11_0-1.5.2-py3-none-any.whl
  pip install --no-deps waymo_open_dataset_tf_2_11_0-1.5.2-py3-none-any.whl
  rm waymo_open_dataset_tf_2_11_0-1.5.2-py3-none-any.whl
  ```
- We use WandB for logging. You can register an account for free; a quick login and sanity check is shown in the snippet after this list.
- Be aware that:
  - We use 6 NVIDIA RTX 2080Ti GPUs for training and a single 2080Ti for evaluation. The training takes at least 5 days to converge.
  - This repo contains only the experiments for the Waymo Motion Prediction Challenge.
  - We cannot share pre-trained models according to the terms of the Waymo Open Motion Dataset.
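To quickly confirm the setup, you can run the following snippet. `wandb login` is the standard WandB CLI command; the import check assumes the package's top-level module is named `waymo_open_dataset`:

```bash
# Sanity check: the manually installed Waymo API should be importable
# (assumes the top-level module is named waymo_open_dataset).
python -c "import waymo_open_dataset; print('waymo_open_dataset OK')"

# Authenticate the WandB CLI once; it will prompt for your account's API key.
wandb login
```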
- Download the Waymo Open Motion Dataset. We use v1.2. A hedged download-and-pack sketch follows this list.
- Run
  ```bash
  python src/pack_h5_womd.py
  ```
  or use `bash/pack_h5.sh` to pack the dataset into h5 files, which accelerates data loading during training and evaluation.
- You should pack three splits: `training`, `validation` and `testing`. Packing the `training` split takes around 2 days; `validation` and `testing` take a few hours each.
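For orientation, here is a hedged end-to-end sketch. The GCS bucket path is quoted from memory and should be verified on the official Waymo download page, and the `--dataset-dir`/`--split`/`--out-dir` flags are purely hypothetical placeholders; the real arguments are defined in `src/pack_h5_womd.py` and `bash/pack_h5.sh`:

```bash
# 1) Download WOMD v1.2 (requires a Waymo account and gcloud auth;
#    the bucket path below is an assumption, verify it on waymo.com/open).
gsutil -m cp -r "gs://waymo_open_dataset_motion_v_1_2_0/uncompressed/scenario" ./womd_scenario

# 2) Pack each split into h5 files (flag names are hypothetical;
#    check the script and bash/pack_h5.sh for the real ones).
for SPLIT in training validation testing; do
  python src/pack_h5_womd.py --dataset-dir ./womd_scenario/${SPLIT} --split ${SPLIT} --out-dir ./h5_womd
done
```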
Please refer to `bash/train.sh` for the training. Once the training converges, you can use the saved checkpoints (WandB artifacts) for validation and testing; please refer to `bash/submission.sh` for more details.
Once the validation/testing is finished, download the file `womd_joint_future_pred_K6.tar.gz` from WandB and submit it to the Waymo Motion Prediction Leaderboard. A hedged command-line sketch of this pipeline follows.
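Assuming the two provided scripts are directly runnable after you adapt their paths, and assuming the submission archive is logged as a WandB artifact (if not, download it from the web UI instead), the pipeline looks roughly like this; `<entity>/<project>` are placeholders for your own WandB account:

```bash
# Train (at least 5 days on 6x RTX 2080Ti), then run validation/testing.
bash bash/train.sh
bash bash/submission.sh

# Fetch the submission file via the WandB CLI (assumption: it is logged as an artifact;
# replace <entity>/<project>, then upload the .tar.gz to the Waymo leaderboard).
wandb artifact get <entity>/<project>/womd_joint_future_pred_K6:latest --root ./submission
```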
Due to the code refactoring, the performance now differs slightly from the numbers reported in the original paper.
| WOMD test | Soft mAP | mAP | Min ADE | Min FDE | Miss Rate | Overlap Rate |
|---|---|---|---|---|---|---|
| ICRA paper | 0.219 | 0.212 | 1.313 | 3.102 | 0.344 | 0.145 |
| Refactored | 0.199 | 0.192 | 1.319 | 3.046 | 0.380 | 0.157 |
The refactored version is less diverse than the original version, hence the higher miss rate and the lower (soft) mAP. Since the performance of TrafficBots on the Waymo Open Motion Prediction leaderboard is poor in any case, please do not spend too much effort on improving it by tweaking the hyper-parameters or the architecture. Nevertheless, if you find any bugs, please let us know by raising a GitHub issue.
There are two main reasons for the poor performance of TrafficBots compared to dedicated motion prediction methods. Firstly, to ensure scalability and efficiency, TrafficBots uses a scene-centric representation, which is known to suffer from poor prediction accuracy. Secondly, training a closed-loop policy such as TrafficBots is always more difficult than training open-loop models. Recently, we presented a new network architecture called HPTR that addresses the first problem; in fact, this repository was refactored because of HPTR and the new Waymo Sim Agents Challenge. We are working on the second problem and on the Sim Agents Challenge. Stay tuned for our new publications!
Please refer to `docs/ablation_models.md` for the configurations of the ablation models. Specifically, you can find SimNet and TrafficSim implemented on top of our backbone. You can also try out different positional encodings for the encoder, as well as different configurations for the goal/destination, the personality, and the training strategies of the world model. An illustrative example follows.
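Purely as an illustration of what selecting an ablation model might look like, assuming a Hydra-style configuration (the entry point `src/run.py` and the option names below are hypothetical placeholders; take the real names from `docs/ablation_models.md` and the scripts in `bash/`):

```bash
# Hypothetical Hydra-style overrides, for illustration only.
# Real entry point and config names: see docs/ablation_models.md and bash/train.sh.
python src/run.py model=traffic_sim model.goal_mode=destination model.personality=latent
```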
This software is made available for non-commercial use under a Creative Commons license. You can find a summary of the license here.
This work is funded by Toyota Motor Europe via the research project TRACE-Zurich (Toyota Research on Automated Cars Europe).