
# VFSOWL

(Work in progress: porting code to GitHub)

Implementation for our CVPR workshop paper listed below:

Variable Few Shot Class Incremental and Open World Learning, CVPR-Workshops 2022.

Authors: Touqeer Ahmad, Akshay Raj Dhamija, Mohsen Jafarzadeh, Steve Cruz, Ryan Rabinowitz, Chunchun Li, and Terrance E. Boult

The paper focuses on Variable Few-Shot Class Incremental Learning (VFSCIL) and Variable Few-Shot Open-World Learning (VFSOWL). Unlike earlier approaches to Few-Shot Class Incremental Learning (FSCIL), which assume a fixed number of classes (N-ways) and a fixed number of samples per class (K-shots), VFSCIL operates in a more natural/practical setting where each incremental session can have up to N classes (ways) and each class can have up to K samples (shots). VFSCIL is then extended into VFSOWL.

The approach extended for VFSCIL/VFSOWL stems from our concurrent work on FSCIL, named FeSSSS, in which we extended Continually Evolved Classifiers (CEC). For convenience, we provide our code integrated into the CEC repo, with changes made to their original code; for the licensing of CEC, please consult the CEC authors' original repo.

## Datasets

We have conducted our experiments on the miniImageNet and CUB200 datasets, which are typically employed for fixed-FSCIL. Our self-archived copies of both datasets are available from the following Google Drive link.

## Variable Few-Shot Class Incremental Learning

Here we focus the description on the CUB200 dataset; similar details apply to miniImageNet.

### Files for Incremental Sessions

First, all the incremental session files are generated using the script `random_N_Ways_K_Shots_cub200.py` for different experimental settings. We have explored the following four experimental settings:

- Up-to 10-Ways, Up-to 10-Shots (15 incremental sessions)
- Up-to 10-Ways, Up-to 5-Shots (15 incremental sessions)
- Up-to 5-Ways, Up-to 5-Shots (30 incremental sessions)
- Up-to 5-Ways, Up-to 10-Shots (30 incremental sessions)

For each experimental setting, we generate 5 experiments; the corresponding session files are provided in the respective directories inside the `experiments_cub200` directory, so that exactly the same instances we used in our experiments can be reused. The base session still comprises 100 classes with 30 samples per class, and its instances are identical to earlier work on fixed-FSCIL, e.g., CEC. More experiments for these settings, or entirely different settings, can be generated using the stand-alone script above by altering the number of increments and N-ways/K-shots accordingly.
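For intuition, here is a minimal sketch of the variable sampling idea behind `random_N_Ways_K_Shots_cub200.py`; the function name, interface, and sampling policy below are illustrative assumptions, not the script's actual code:

```python
import random

def sample_variable_session(class_pool, max_ways=10, max_shots=10, seed=None):
    """Sample one variable few-shot session: up to `max_ways` novel classes,
    each contributing up to `max_shots` training samples."""
    rng = random.Random(seed)
    n_ways = rng.randint(1, max_ways)                 # variable number of ways
    classes = rng.sample(sorted(class_pool), n_ways)  # pick the session's classes
    session = {}
    for c in classes:
        # variable number of shots, capped by how many samples the class has
        k_shots = rng.randint(1, min(max_shots, len(class_pool[c])))
        session[c] = rng.sample(class_pool[c], k_shots)
    return session

# Toy usage: a pool of 3 classes with 12 images each
pool = {f"class_{i:03d}": [f"img_{i:03d}_{j}.jpg" for j in range(12)] for i in range(3)}
print(sample_variable_session(pool, max_ways=3, max_shots=5, seed=0))
```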

## Feature Extraction

The paper focuses on concatenating self-supervised and supervised features, learned respectively from a disjoint unlabeled dataset and from the base-session data of the VFSCIL/VFSOWL setting. Both types of features are extracted and pre-saved as the first step.
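The concatenation itself is straightforward; the sketch below shows the idea with stand-in arrays (the feature dimensions and shapes are assumptions for illustration, not the repo's actual values):

```python
import numpy as np

# Stand-in arrays for the pre-saved per-image features; in the repo these
# would be loaded from the files written by the two extraction steps below.
ssl_feats = np.random.randn(500, 2048).astype(np.float32)  # self-supervised (e.g. ResNet-50)
sup_feats = np.random.randn(500, 512).astype(np.float32)   # supervised (base-session backbone)

# Concatenate along the feature axis to form the joint representation
# on which the incremental-session classifier operates.
joint_feats = np.concatenate([ssl_feats, sup_feats], axis=1)
print(joint_feats.shape)  # (500, 2560)
```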

### Self-Supervised Feature Extraction

To extract the self-supervised features, run the feature extractor `FeatureExtraction_MocoV2.py` in the respective dataset directory, providing the path to the pre-trained self-supervised model and the other required arguments. For example, for CUB200 the above file is located in the `CEC_CVPR2021/dataloader/cub200/` directory. A bash file (`temp.sh`) demonstrates running the feature extractor for all incremental sessions and saving self-supervised features for 60 random crops per training sample and one central crop per validation sample.
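The crop policy described above can be sketched as follows; the crop sizes and the dummy backbone are assumptions for illustration only:

```python
import torch
import torchvision.transforms as T
from PIL import Image
import numpy as np

# Random resized crops for training images, a single central crop for
# validation images, as described above; the exact sizes are assumptions.
random_crop = T.Compose([T.RandomResizedCrop(224), T.ToTensor()])
center_crop = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])

@torch.no_grad()
def extract_crop_features(backbone, img, train=True, n_crops=60):
    crops = [random_crop(img) for _ in range(n_crops)] if train else [center_crop(img)]
    batch = torch.stack(crops)  # (n_crops, 3, 224, 224)
    return backbone(batch)      # one feature vector per crop, to be pre-saved

# Toy usage with a synthetic image and a dummy backbone
img = Image.fromarray(np.uint8(np.random.rand(256, 256, 3) * 255))
backbone = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 224 * 224, 128))
feats = extract_crop_features(backbone, img, train=True, n_crops=4)
print(feats.shape)  # torch.Size([4, 128])
```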

### Supervised Feature Extraction

To extract the supervised features learned from the base-session data of VFSCIL, run the feature extractor `FeatureExtraction_MocoV2.py` in the respective dataset directory, providing the path to the pre-trained supervised model and the other required arguments. While any supervised model can be trained on the base-session data, we have specifically used the CEC-based learned models.
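A hypothetical sketch of loading such a supervised backbone for feature extraction follows; the checkpoint path, architecture, and key layout are all assumptions, not the repo's actual checkpoint format:

```python
import torch
import torchvision.models as models

# Load a supervised backbone trained on the base session (e.g. from a
# CEC checkpoint) and expose penultimate-layer features instead of logits.
backbone = models.resnet18(weights=None)
state = torch.load("checkpoints/cec_cub200_base.pth", map_location="cpu")
backbone.load_state_dict(state, strict=False)  # tolerate head/prefix mismatches
backbone.fc = torch.nn.Identity()              # drop the classifier head
backbone.eval()
```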

## Checkpoints for Supervised & Self-Supervised Models

The self-supervised models pre-trained on ImageNet-2012/OpenImages-v6 and the supervised models trained on the base-session data are available from the following Google Drive link. For the self-supervised features, we use models trained on ImageNet-2012 for CUB200 and on OpenImages-v6 for miniImageNet. Specifically, we use ResNet-50 models trained with DeepCluster-v2 for CUB200 and with Moco-v2 for miniImageNet. In FeSSSS we found that DeepCluster-v2 performed best for FSCIL evaluation, and OpenImages-v6 is used for the miniImageNet experiments to avoid overlap between the miniImageNet and ImageNet-2012 classes.

## BibTeX

If you find our work helpful, please cite the following:

```bibtex
@InProceedings{Ahmad_2022_CVPR,
    author    = {Ahmad, Touqeer and Dhamija, Akshay Raj and Jafarzadeh, Mohsen and Cruz, Steve and Rabinowitz, Ryan and Li, Chunchun and Boult, Terrance E.},
    title     = {Variable Few Shot Class Incremental and Open World Learning},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2022},
    pages     = {3688-3699}
}
```