OpenLane is the first real-world and largest-scale 3D lane dataset to date. It collects valuable content from the public perception dataset Waymo Open Dataset and provides lane and closest-in-path object (CIPO) annotations for 1,000 segments. In short, OpenLane contains 200K frames and over 880K carefully annotated lanes. We have released the OpenLane dataset publicly to aid the research community in advancing 3D perception and autonomous driving technology. [Paper]
This repository is organized as follows. Note that OpenLane is an autonomous driving dataset; there is another repository with the same name, The-OpenROAD-Project/OpenLane.
We have released v1.0 of the OpenLane dataset, including 1,000 segments with labels for 3D/2D lanes and CIPO/scenes.
Please follow these steps to familiarize yourself with the OpenLane dataset. Create an issue if you need any further information.
You can download the entire OpenLane dataset here.
We provide evaluation tools for both lanes and CIPO, following the same data format as Waymo and the common evaluation pipeline in 2D/3D lane detection. Please refer to the Evaluation Kit Instruction.
The OpenLane dataset is built on mainstream datasets in the field of autonomous driving. In v1.0, we release annotations on the Waymo Open Dataset; in the future, we will add annotations on nuScenes. OpenLane focuses on lane detection as well as CIPO. We annotate all the lanes in each frame, including those in the opposite direction if no curbside exists in the middle. In addition to the lane detection task, we also annotate: (a) scene tags, such as weather and location; (b) the CIPO, defined as the most critical target with respect to the ego vehicle. Such a tag is quite pragmatic for subsequent modules such as planning and control, beyond the whole set of objects from perception.
We annotate lanes in the following format.
- Lane shape. Each 2D/3D lane is presented as a set of 2D/3D points.
- Lane category. Each lane has a category such as double yellow line or curb.
- Lane property. Some lanes have a property such as right or left.
- Lane tracking ID. Each lane, except curbs, has a unique ID.
- Stopline and curb.
For more annotation criteria, please refer to Lane Anno Criterion.
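As a minimal sketch of how the per-frame lane annotations above could be consumed, the snippet below parses a hypothetical JSON record and summarizes each lane's tracking ID, category, and number of 3D points. The field names (`lane_lines`, `uv`, `xyz`, `category`, `track_id`) are illustrative assumptions, not the official schema; please consult Lane Anno Criterion for the exact format.

```python
import json

# Hypothetical per-frame annotation (field names are illustrative,
# not the official OpenLane schema -- see Lane Anno Criterion).
frame_json = """
{
  "lane_lines": [
    {
      "category": 1,
      "track_id": 7,
      "uv": [[100.0, 200.0], [110.0, 190.0]],
      "xyz": [[5.0, 1.2, 0.0], [10.0, 1.3, 0.0]]
    }
  ]
}
"""

def summarize_lanes(frame):
    """Return (track_id, category, number of 3D points) per annotated lane."""
    return [(lane["track_id"], lane["category"], len(lane["xyz"]))
            for lane in frame["lane_lines"]]

frame = json.loads(frame_json)
print(summarize_lanes(frame))  # -> [(7, 1, 2)]
```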
We annotate CIPO and scenes in the following format.
- 2D bounding box with a category representing the importance level of the object.
- Scene tag. It describes the scenario in which this frame was collected.
- Weather tag. It describes the weather under which this frame was collected.
- Hours tag. It annotates the time of day at which this frame was collected.
For more annotation criteria, please refer to CIPO Anno Criterion.
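To illustrate how the CIPO and scene tags above might be used downstream, here is a hedged sketch that filters a frame's 2D boxes by importance level. The record layout and field names (`results`, `type`, the scene/weather/time tags) are assumptions for illustration only; the official schema is documented in CIPO Anno Criterion.

```python
# Hypothetical CIPO/scene record (field names are illustrative,
# not the official OpenLane schema -- see CIPO Anno Criterion).
frame = {
    "scene": "highway",
    "weather": "clear",
    "time": "daytime",
    "results": [
        # "type" stands in for the importance level; level 1 = CIPO
        {"type": 1, "x": 100, "y": 120, "width": 40, "height": 30},
        {"type": 2, "x": 300, "y": 110, "width": 25, "height": 20},
    ],
}

def objects_at_level(frame, level=1):
    """Select bounding boxes whose importance level matches."""
    return [obj for obj in frame["results"] if obj["type"] == level]

print(objects_at_level(frame, level=1))
```

A planning or control module could use such a filter to prioritize the CIPO over the full set of perceived objects.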
We provide an initial benchmark on OpenLane 2D/3D lane detection. To thoroughly evaluate models, we provide different case splits from the entire validation set: Up&Down, Curve, Extreme Weather, Night, Intersection, and Merge&Split. More details can be found in Lane Anno Criterion. Based on the Lane Eval Metric, results (F-score) of different 2D/3D methods on these cases are shown as follows.
- 2D Lane Detection
Method | All | Up&Down | Curve | Extreme Weather | Night | Intersection | Merge&Split |
---|---|---|---|---|---|---|---|
LaneATT-S | 28.3 | 25.3 | 25.8 | 32.0 | 27.6 | 14.0 | 24.3 |
LaneATT-M | 31.0 | 28.3 | 27.4 | 34.7 | 30.2 | 17.0 | 26.5 |
PersFormer | 42.0 | 40.7 | 46.3 | 43.7 | 36.1 | 28.9 | 41.2 |
CondLaneNet-S | 52.3 | 55.3 | 57.5 | 45.8 | 46.6 | 48.4 | 45.5 |
CondLaneNet-M | 55.0 | 58.5 | 59.4 | 49.2 | 48.6 | 50.7 | 47.8 |
CondLaneNet-L | 59.1 | 62.1 | 62.9 | 54.7 | 51.0 | 55.7 | 52.3 |
- 3D Lane Detection
Method | All | Up&Down | Curve | Extreme Weather | Night | Intersection | Merge&Split |
---|---|---|---|---|---|---|---|
GenLaneNet | 29.7 | 24.2 | 31.1 | 26.4 | 17.5 | 19.7 | 27.4 |
3DLaneNet | 40.2 | 37.7 | 43.2 | 43.0 | 39.3 | 29.3 | 36.5 |
PersFormer | 47.8 | 42.4 | 52.8 | 48.7 | 46.0 | 37.9 | 44.6 |
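The F-score reported above is the harmonic mean of precision and recall over matched lane predictions. The snippet below shows only that final formula as a minimal sketch; the actual Lane Eval Metric defines how a predicted lane is matched to a ground-truth lane (point-wise distance thresholds), which is omitted here.

```python
def f_score(tp, fp, fn):
    """Harmonic mean of precision and recall over matched lanes.

    tp: predicted lanes matched to ground truth
    fp: unmatched predictions
    fn: unmatched ground-truth lanes
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 80 matched lanes, 20 false positives, 40 misses
print(round(f_score(80, 20, 40) * 100, 1))  # -> 72.7
```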
Please use the following citation when referencing OpenLane:
@article{chen2022persformer,
title={PersFormer: 3D Lane Detection via Perspective Transformer and the OpenLane Benchmark},
author={Chen, Li and Sima, Chonghao and Li, Yang and Zheng, Zehan and Xu, Jiajie and Geng, Xiangwei and Li, Hongyang and He, Conghui and Shi, Jianping and Qiao, Yu and Yan, Junchi},
journal={arXiv preprint arXiv:2203.11089},
year={2022}
}
And the paper for the Waymo Open Dataset:
@inproceedings{Sun_2020_CVPR,
author = {Sun, Pei and Kretzschmar, Henrik and Dotiwalla, Xerxes and Chouard, Aurelien and Patnaik, Vijaysai and Tsui, Paul and Guo, James and Zhou, Yin and Chai, Yuning and Caine, Benjamin and Vasudevan, Vijay and Han, Wei and Ngiam, Jiquan and Zhao, Hang and Timofeev, Aleksei and Ettinger, Scott and Krivokon, Maxim and Gao, Amy and Joshi, Aditya and Zhang, Yu and Shlens, Jonathon and Chen, Zhifeng and Anguelov, Dragomir},
title = {Scalability in Perception for Autonomous Driving: Waymo Open Dataset},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}
Our dataset is based on the Waymo Open Dataset, so we distribute the data under the Creative Commons Attribution-NonCommercial-ShareAlike license. You are free to share and adapt the data, but you must give appropriate credit and may not use the work for commercial purposes. All code within this repository is under the Apache License 2.0.