
LEDetection: Label-Efficient Object Detection

Why LEDetection?

LEDetection (pronounced leuh detection, as in French), short for label-efficient object detection, is an open-source toolbox for semi-supervised and few-shot object detection, two important and emerging topics in computer vision. LEDetection enables modern detection systems to do more with less hand-labeled data, alleviating their dependency on large amounts of instance-level class annotations and bounding boxes.

Highlights and Features

  • LEDetection builds on MMDetection and PyTorch, and thus inherits their world-class features, including modular design and high efficiency;
  • Use LEDetection to train contemporary, high-performance supervised MMDetection models out-of-the-box;
  • Add unlabeled data to train LEDetection models for state-of-the-art semi-supervised and few-shot detection performance.

Our goal is to expand the utility of LEDetection for CV/ML practitioners by incorporating the latest advances in self-supervised, semi-supervised, and few-shot learning to boost the accuracy of conventional supervised detectors in limited-label settings. If you find this work useful, let us know by starring this repo. Issues and PRs are also welcome!

One Toolbox - Multiple Detection Paradigms

LEDetection is versatile, supporting multiple detection paradigms: supervised, semi-supervised, few-shot, and semi-supervised few-shot.

Supervised Detection

Use LEDetection to train popular supervised detection frameworks such as Faster R-CNN, Mask R-CNN, etc. See example configs.

Semi-Supervised Detection

Add unlabeled data to your LEDetection pipeline to enable robust semi-supervised detection using the implemented Soft Teacher and SoftER Teacher models. See example configs.
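The core recipe behind Soft Teacher-style semi-supervised training has two ingredients: the teacher produces confidence-thresholded pseudo-labels on unlabeled images, and the teacher's weights track the student's via an exponential moving average (EMA). Here is a minimal, framework-agnostic sketch of both pieces in plain Python; the names and threshold are illustrative, not LEDetection's actual API.

```python
# Sketch of the teacher-student loop used by Soft Teacher-style
# semi-supervised detectors. Names and thresholds are illustrative.

def select_pseudo_labels(detections, score_thr=0.9):
    """Keep only high-confidence teacher detections as pseudo-labels."""
    return [d for d in detections if d["score"] >= score_thr]

def ema_update(teacher_weights, student_weights, momentum=0.999):
    """Exponential-moving-average update of the teacher from the student."""
    return {
        name: momentum * teacher_weights[name] + (1.0 - momentum) * w
        for name, w in student_weights.items()
    }

if __name__ == "__main__":
    dets = [{"bbox": [0, 0, 10, 10], "score": 0.95},
            {"bbox": [5, 5, 20, 20], "score": 0.40}]
    print(len(select_pseudo_labels(dets)))  # only the 0.95 box survives

    teacher, student = {"w": 1.0}, {"w": 0.0}
    print(ema_update(teacher, student)["w"])
```

In practice the student trains on labeled data plus the pseudo-labeled unlabeled data each iteration, and the EMA update keeps the teacher a slowly-moving, more stable copy of the student.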

Few-Shot Detection

LEDetection models can be re-purposed into label-efficient few-shot detectors following TFA, a simple yet effective two-stage fine-tuning approach. See example configs.
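In TFA's second stage, the detector is frozen except for the final box classification and regression layers, which are fine-tuned on a balanced few-shot set. The sketch below illustrates that parameter-selection logic with hypothetical, Faster R-CNN-style parameter names; it is not LEDetection's actual API.

```python
# Sketch of TFA's second stage: freeze everything except the final box
# classifier/regressor, then fine-tune on a balanced few-shot dataset.
# Parameter names are illustrative (Faster R-CNN-style).

FINETUNE_PREFIXES = ("roi_head.bbox_head.fc_cls", "roi_head.bbox_head.fc_reg")

def trainable_params(param_names, prefixes=FINETUNE_PREFIXES):
    """Return the subset of parameter names left unfrozen in stage two."""
    return [n for n in param_names if n.startswith(prefixes)]

if __name__ == "__main__":
    names = [
        "backbone.layer1.conv.weight",
        "rpn_head.conv.weight",
        "roi_head.bbox_head.fc_cls.weight",
        "roi_head.bbox_head.fc_reg.bias",
    ]
    print(trainable_params(names))  # only the two head layers remain trainable
```

Freezing the backbone and RPN preserves base-class features while the lightweight heads adapt to the novel classes from only a few examples.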

Semi-Supervised Few-Shot Detection

Why not combine the semi-supervised and few-shot training protocols on datasets with limited labels to enable semi-supervised few-shot detection, as described in our AISTATS 2024 paper? See example configs.

Get Started

Getting started is quick and easy:

  1. Please refer to this installation guide;
  2. Have fun with this quickstart guide.

Datasets, Models, and Reproducibility

Data

Refer to this aistats2024 documentation for a guide to reproduce the results reported in Tables 1-3 of our AISTATS 2024 paper.

We provide the semi-supervised and few-shot datasets, along with pre-trained models, via the Zenodo link above.

License

We release LEDetection under the permissive Apache 2.0 license. Any contributions made will also be subject to the same licensing.

Copyright 2023 LexisNexis Risk Solutions. All Rights Reserved.

Acknowledgments

We are grateful for the open-source contributions from many projects by the broader CV/ML communities, without which this project would not have been possible.

Citation

If you find this work useful, consider citing the related paper:

@inproceedings{TranLEDetection,
  title={LEDetection: A Simple Framework for Semi-Supervised Few-Shot Object Detection},
  author={Phi Vu Tran},
  booktitle={International Conference on Artificial Intelligence and Statistics (AISTATS)},
  publisher={PMLR},
  volume={238},
  year={2024}
}