# LEOD

This is the official PyTorch implementation of our CVPR 2024 paper:

**LEOD: Label-Efficient Object Detection for Event Cameras**
Ziyi Wu, Mathias Gehrig, Qing Lyu, Xudong Liu, Igor Gilitschenski
CVPR'24 | GitHub | arXiv

## TL;DR

Event cameras are bio-inspired low-latency sensors that hold great potential for safety-critical applications such as object detection in self-driving. Because event data has a very high temporal resolution (>1000 FPS), existing datasets are only annotated at a low frame rate (e.g., 4 FPS). As a result, models are trained only on these sparsely annotated frames, leading to sub-optimal performance and slow convergence. In this paper, we tackle this problem from the perspective of weakly- and semi-supervised learning. We design a novel self-training framework that pseudo-labels unannotated events with reliable model predictions, achieving SOTA performance on the two largest event-based detection benchmarks.
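At its core, self-training filters the detector's own predictions on unannotated frames by confidence and reuses the survivors as training targets. Below is a minimal sketch of that idea, assuming a detector that returns per-frame boxes, scores, and labels; all names here are hypothetical illustrations, not the repo's actual code:

```python
import torch

@torch.no_grad()
def pseudo_label(model, unlabeled_frames, conf_thresh=0.8):
    """Keep only high-confidence predictions as pseudo ground truth.

    Hypothetical sketch of the self-training idea: we assume `model`
    returns (boxes, scores, labels) per frame; the real LEOD pipeline
    is more involved (see the paper).
    """
    model.eval()
    pseudo_targets = []
    for frame in unlabeled_frames:
        boxes, scores, labels = model(frame)
        keep = scores >= conf_thresh  # reliability filter on detections
        pseudo_targets.append({"boxes": boxes[keep], "labels": labels[keep]})
    return pseudo_targets
```

These pseudo-labels can then be mixed with the sparse ground-truth annotations to supervise the frames in between labeled ones.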

## Install

This codebase builds upon RVT. Please refer to install.md for detailed instructions.

## Experiments

This codebase is tailored to Slurm GPU clusters with a preemption mechanism. Some functions in the code (e.g., automatically detecting and loading previous checkpoints) may not be needed in your setup. Please go through all fields marked with TODO in train.py in case any of them conflict with your environment. To reproduce the results in the paper, please refer to benchmark.md.
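As a rough illustration of the auto-resume behavior mentioned above, the sketch below finds the newest checkpoint in a directory so training can continue after preemption. This is a hypothetical simplification, not the actual logic in train.py:

```python
import glob
import os

def find_latest_checkpoint(ckpt_dir):
    """Return the most recently modified .ckpt file in `ckpt_dir`, or None.

    Hypothetical sketch of auto-resume after Slurm preemption;
    the real implementation in train.py may differ.
    """
    ckpts = glob.glob(os.path.join(ckpt_dir, "*.ckpt"))
    if not ckpts:
        return None  # fresh run, nothing to resume from
    return max(ckpts, key=os.path.getmtime)

# Usage (PyTorch Lightning-style API, assumed for illustration):
# resume_path = find_latest_checkpoint("./checkpoints")
# trainer.fit(model, ckpt_path=resume_path)
```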

## Citation

Please cite our paper if you find it useful in your research:

```bibtex
@inproceedings{wu2024leod,
  title={LEOD: Label-Efficient Object Detection for Event Cameras},
  author={Wu, Ziyi and Gehrig, Mathias and Lyu, Qing and Liu, Xudong and Gilitschenski, Igor},
  booktitle={CVPR},
  year={2024}
}
```

## Acknowledgement

We thank the authors of RVT, SORT, Soft Teacher, Unbiased Teacher, and all the packages we use in this repo for open-sourcing their wonderful work.

## License

LEOD is released under the MIT License. See the LICENSE file for more details.

## Contact

If you have any questions about the code, please contact Ziyi Wu at [email protected].