Official code for the paper "Meta Learning Deep Visual Words for Fast Video Object Segmentation". If you use this code, please cite the paper:
```
@inproceedings{DBLP:journals/corr/abs-1812-01397,
  author    = {Harkirat Singh Behl and
               Mohammad Najafi and
               Anurag Arnab and
               Philip H. S. Torr},
  title     = {Meta Learning Deep Visual Words for Fast Video Object Segmentation},
  booktitle = {NeurIPS 2019 Workshop on Machine Learning for Autonomous Driving},
  year      = {2019}
}
```
PyTorch version 0.4.1 is used with Python 2.7 via Anaconda2. The required libraries and the commands to install them are listed in the file 'environment_setup.txt'.
### DAVIS-17
Download the DAVIS-17 Train and Val dataset from link. After downloading the dataset, extract it within the 'metavos' directory.
### For your own dataset
You will need to define a new dataloader class for your dataset. Please refer to the class 'DavisSiameseMAMLSet' in the file 'deeplab/dataset.py'.
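As a rough illustration, a custom dataloader class needs to expose the interface PyTorch dataloaders expect (`__len__` and `__getitem__`). The class name, constructor arguments, and returned keys below are hypothetical; mirror the real 'DavisSiameseMAMLSet' in 'deeplab/dataset.py' for the exact sample format the training and testing scripts consume.

```python
# Minimal sketch of a custom dataset class. The (frame, mask) pairing and the
# returned keys ('image', 'label') are assumptions, not the repo's actual API.
try:
    from torch.utils.data import Dataset  # used when PyTorch is installed
except ImportError:
    Dataset = object  # fallback so this sketch runs without PyTorch

class MyVideoSegSet(Dataset):
    """Each sample pairs a video frame path with its segmentation mask path."""

    def __init__(self, frame_mask_pairs):
        # frame_mask_pairs: list of (frame_path, mask_path) tuples
        self.pairs = list(frame_mask_pairs)

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        frame_path, mask_path = self.pairs[idx]
        # A real loader would read and preprocess the image and mask here
        # (e.g. with PIL or cv2) and convert them to tensors.
        return {'image': frame_path, 'label': mask_path}
```

Usage, with placeholder file names:

```python
ds = MyVideoSegSet([('frame0.jpg', 'mask0.png'), ('frame1.jpg', 'mask1.png')])
print(len(ds))          # 2
print(ds[0]['image'])   # frame0.jpg
```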
The trained model for DAVIS-17 can be downloaded from link. After downloading the weights, place them in the 'snapshots' folder.
Run the file 'meta_test.py':
```
python meta_test.py
```
To visualize the segmentations, set 'DISPLAY = 1' in the file 'meta_test.py'.
For training, we start from the Pascal pretraining weights. First, download the Pascal weights from link and store them in the 'snapshots' folder.
Run the file 'meta_train.py':
```
python meta_train.py
```