LMC: Large Model Collaboration with Cross-assessment for Training-Free Open-Set Object Recognition, NeurIPS 2023
1Haoxuan Qu*, 1Xiaofei Hui*, 2Yujun Cai, 1Jun Liu
* equal contribution
1Singapore University of Technology and Design, 2Meta
[Paper] | [Arxiv] | [SUTD-VLG Lab]
The code is developed and tested under the following environment:
- Python 3.9
- PyTorch 2.0.0
- CUDA 11.7
You can create the environment via:
conda create -n lmc python=3.9
conda activate lmc
conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 pytorch-cuda=11.7 -c pytorch -c nvidia
pip install ftfy regex tqdm scikit-learn scipy pandas six timm
pip install transformers openai
pip install git+https://github.com/openai/CLIP.git
TinyImageNet can be downloaded by running:
cd data
bash tinyimagenet.sh
To load CLIP pre-trained weights, visit the official CLIP GitHub repo and download the "ViT-B/32" checkpoint into pretrained_model
using the download link on that page.
To load DINO pre-trained weights, visit the official DINOv2 repo and download the "ViT-B/14 distilled" checkpoint into pretrained_model
using the download link on that page.
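As a rough illustration of how the two downloaded backbones could be loaded, here is a minimal sketch. The function name `load_backbones` and the `weights_dir` path are our assumptions, not names from this repo; `clip.load` (with its `download_root` argument) and the `torch.hub` DINOv2 entry point come from those projects' public APIs.

```python
def load_backbones(weights_dir="pretrained_model", device="cpu"):
    """Hedged sketch: load frozen CLIP and DINOv2 backbones from local
    checkpoints. Imports are deferred so the function can be defined even
    before the environment above is set up."""
    import clip   # OpenAI CLIP package, installed via pip above
    import torch

    # CLIP ViT-B/32: clip.load accepts a download_root, so pointing it at
    # the local directory reuses an already-downloaded checkpoint.
    clip_model, preprocess = clip.load(
        "ViT-B/32", device=device, download_root=weights_dir
    )

    # DINOv2 ViT-B/14 distilled: torch.hub fetches the architecture and
    # caches the checkpoint locally after the first call.
    dino_model = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14")

    clip_model.eval()
    dino_model.to(device).eval()
    return clip_model, preprocess, dino_model
```

Both models are used frozen (no fine-tuning), so `eval()` mode is all that is needed before extracting features.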
To evaluate using our provided virtual open-set classes, please unzip tiny_img.zip
and run:
python tinyimagenet_eval_msp.py --save_dir path/to/save/result --image_path path/to/unzipped/images
Note that the virtual open-set classes and generated images provided here can yield slightly better results on TinyImageNet than those reported in our paper.
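The script name suggests scoring with maximum softmax probability (MSP). A minimal, dependency-free sketch of MSP-based open-set scoring follows; the layout of logits (closed-set classes first, then virtual open-set classes) and the function names are assumptions for illustration, not the repo's actual implementation.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def msp_score(logits, num_closed):
    """MSP open-set score: the maximum softmax probability over the
    closed-set classes only (assumed to be the first num_closed logits,
    with virtual open-set classes appended after them). Higher values
    indicate a known class; thresholding separates known from unknown."""
    probs = softmax(logits)
    return max(probs[:num_closed])

# Toy example: 3 closed-set classes plus 2 virtual open-set classes.
known = msp_score([5.0, 0.1, 0.2, 0.0, 0.1], num_closed=3)    # confident known
unknown = msp_score([0.5, 0.4, 0.3, 4.0, 3.5], num_closed=3)  # mass on open set
```

Because the virtual open-set classes absorb probability mass for unfamiliar images, `unknown` comes out much lower than `known`, which is what makes a simple MSP threshold effective.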
Part of our code is borrowed from ZOC. We thank the authors for releasing the code.