This repository contains our solution for the FG-2020 ABAW Competition.
Pretrained models can be downloaded via this link.
We aim for a unified model that solves three tasks: Facial Action Unit (FAU) prediction, Facial Expression (7 basic emotions) prediction, and Valence and Arousal prediction. We abbreviate them as FAU, EXPR, and VA.
UPDATES: The challenge leaderboard has been released. Our solution won two of the challenge tracks (FAU and VA) among the six competing teams!
To make such a demo, modify the `video_file` in `emotion_demo.py` and then run `python emotion_demo.py`. The output video will be saved under the `save_dir`.
To run this demo, MTCNN must be installed.
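The repository does not pin a specific MTCNN package. As a rough illustration only, assuming the `facenet-pytorch` implementation (which may differ from the one the demo actually imports), detecting and aligning a face in one frame looks like this:

```python
# Illustrative only: face detection/alignment with the facenet-pytorch MTCNN
# (pip install facenet-pytorch). The demo may rely on a different MTCNN package.
from facenet_pytorch import MTCNN
from PIL import Image

mtcnn = MTCNN(image_size=112, margin=0)  # 112 matches the model's input size
frame = Image.open('frame.jpg')          # one video frame
face = mtcnn(frame)                      # aligned face crop as a tensor, or None
```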
Before training, we change the data distribution of the experiment datasets by (1) importing external datasets, namely the DISFA dataset for FAU, the ExpW dataset for EXPR, and the AFEW-VA dataset for VA; and (2) resampling the minority and majority classes. Our purpose is to create a more balanced data distribution over the individual classes.
This is the data distribution of the Aff-wild2 dataset, the DISFA dataset, and the merged dataset. We resampled the merged dataset using ML-ROS, short for Multi-Label Random Oversampling.
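As a compressed sketch of the ML-ROS idea (not the exact resampling code in this repo): instances carrying minority AU labels are cloned until a cloning budget is spent. The `budget` parameter and the simplified imbalance measure below are assumptions.

```python
import numpy as np

def ml_ros(labels, budget=0.25, seed=0):
    """Simplified ML-ROS sketch. `labels` is an (N, num_AUs) binary matrix.
    Clones instances containing minority labels until budget*N extra samples
    are added; returns the oversampled list of instance indices."""
    rng = np.random.default_rng(seed)
    n = labels.shape[0]
    indices = list(range(n))
    for _ in range(int(budget * n)):
        counts = np.maximum(labels[indices].sum(axis=0), 1)  # per-AU positives
        irlbl = counts.max() / counts                        # imbalance ratio per label
        minority = np.where(irlbl > irlbl.mean())[0]         # rarer-than-average AUs
        # clone a random instance that activates at least one minority AU
        candidates = np.where(labels[:, minority].any(axis=1))[0]
        if candidates.size == 0:
            break
        indices.append(int(rng.choice(candidates)))
    return indices
```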
This is the data distribution of the Aff-wild2 dataset, the ExpW dataset, and the merged dataset. We resample the merged dataset so that the instances of each class have the same probability of appearing in one epoch.
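A minimal sketch of such equal-probability sampling using PyTorch's `WeightedRandomSampler` (the toy labels and the sampler-based approach are assumptions; the repo's resampling code may differ):

```python
import numpy as np
import torch
from torch.utils.data import WeightedRandomSampler

# Toy stand-in for the merged Aff-wild2 + ExpW expression labels (7 classes).
expr_labels = np.random.randint(0, 7, size=10_000)

# Inverse-frequency weights make every class equally likely within an epoch.
class_counts = np.bincount(expr_labels, minlength=7)
weights = 1.0 / class_counts[expr_labels]
sampler = WeightedRandomSampler(torch.as_tensor(weights, dtype=torch.double),
                                num_samples=len(expr_labels), replacement=True)
# Pass `sampler=sampler` (instead of `shuffle=True`) to the DataLoader.
```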
This is the data distribution of the Aff-wild2 dataset, the AFEW-VA dataset, and the merged dataset. We discretize the continuous valence/arousal scores in [-1, 1] into 20 bins of equal width, treat each bin as a category, and apply the oversampling/undersampling strategy.
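For concreteness, the binning can be done with `np.digitize` (a minimal sketch; the repo's exact implementation may differ):

```python
import numpy as np

# 20 equal-width bins over [-1, 1]; each bin becomes a "class" for resampling.
edges = np.linspace(-1.0, 1.0, num=21)        # 21 edges define 20 bins
valence = np.array([-0.95, -0.15, 0.05, 0.42, 0.99])
bin_ids = np.digitize(valence, edges[1:-1])   # interior edges -> ids in 0..19
print(bin_ids)                                # [ 0  8 10 14 19]
```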
For the current datasets, each dataset only contains one type of label (FAU, EXPR, or VA). Therefore we propose an algorithm that allows a deep neural network to learn multiple tasks from partial labels. The algorithm has two steps: first, we train a teacher model to perform all three tasks, where each instance is supervised only by the ground-truth label of its own task. Second, we treat the outputs of the teacher model as soft labels, and use both the soft labels and the ground truths to train the student model.
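The key point of the first step is that an instance contributes a loss only on the head of its source task. A minimal sketch of such a masked multitask loss follows; the head names and per-task losses (BCE for AUs, cross-entropy for expressions, MSE for VA) are illustrative stand-ins, not necessarily the repo's exact choices.

```python
import torch
import torch.nn.functional as F

def teacher_loss(outputs, target, task):
    """Masked multitask loss: an instance is supervised only on the head of
    the task its source dataset annotates. Head names and loss functions
    here are illustrative assumptions."""
    if task == 'FAU':   # multi-label AU activations, target: float (B, num_AUs)
        return F.binary_cross_entropy_with_logits(outputs['FAU'], target)
    if task == 'EXPR':  # 7-way expression classification, target: long (B,)
        return F.cross_entropy(outputs['EXPR'], target)
    if task == 'VA':    # valence/arousal regression, target: float (B, 2)
        return F.mse_loss(outputs['VA'], target)
    raise ValueError(f'unknown task: {task}')
```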
This is the diagram of our proposed algorithm. Given the input images and ground truths of the three tasks, we first train the teacher model using the teacher loss between the teacher outputs and the ground truth. Second, we train the student model using the student loss, which consists of two parts: one is calculated from the teacher outputs and the student outputs, and the other is calculated from the ground truth and the student outputs.
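A sketch of such a two-part student loss for the EXPR head, in the standard knowledge-distillation form; the temperature `T`, the mixing weight `alpha`, and their values are assumptions, not taken from the repo:

```python
import torch
import torch.nn.functional as F

def student_loss_expr(student_logits, teacher_logits, target, T=2.0, alpha=0.5):
    """Illustrative student loss: a distillation term on the teacher's
    softened outputs plus the usual cross-entropy on the ground truth."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction='batchmean') * T * T   # T^2 rescales gradients
    hard = F.cross_entropy(student_logits, target)
    return alpha * soft + (1 - alpha) * hard
```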
- PyTorch 1.3.1 or a higher version
- NumPy
- pytorch benchmark
- pandas, pickle, matplotlib
- Download all required datasets, then crop and align the face images;
- Create the annotation files for each dataset, using the script in the `create_annotation_file` directory;
- Change the annotation file paths in `Multitask-CNN(Multitask-CNN-RNN)/PATH/__init__.py`;
- Training: For Multitask-CNN, run `python train.py --force_balance --name image_size_112_n_students_5 --image_size 112 --pretrained_teacher_model path-to-teacher-model-if-exists`. The argument `--name` is the experiment name (save path), and `--force_balance` makes the sampled dataset more balanced. For Multitask-CNN-RNN, run `python train.py --name image_size_112_n_students_5_seq_len=32 --image_size 112 --seq_len 32 --frozen --pretrained_resnet50_model path-to-the-pretrained-Multitask-CNN-model --pretrained_teacher_model path-to-teacher-model-if-exists`.
- Validation: Run `python val.py --name image_size_112_n_students_5 --image_size 112 --teacher_model_path path-to-teacher-model --mode Validation --ensemble` for Multitask-CNN, and run `python val.py --name image_size_112_n_students_5_seq_len=32 --image_size 112 --teacher_model_path path-to-teacher-model --pretrained_resnet50_model path-to-the-pretrained-Multitask-CNN-model --mode Validation --ensemble --seq_len 32` for Multitask-CNN-RNN.
- From the results on the validation set, we obtain the best AU thresholds. Set the line `best_thresholds_over_models = []` in `test.py` to those thresholds (see the sketch after this list).
- Testing: Run `python test.py --name image_size_112_n_students_5 --image_size 112 --teacher_model_path path-to-teacher-model --mode Test --save_dir Predictions --ensemble` for Multitask-CNN, and run `python test.py --name image_size_112_n_students_5_seq_len=32 --image_size 112 --teacher_model_path path-to-teacher-model --pretrained_resnet50_model path-to-the-pretrained-Multitask-CNN-model --mode Test --ensemble --seq_len 32` for Multitask-CNN-RNN.
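For the AU threshold step above, a hypothetical helper (the function name, the grid, and the use of scikit-learn's `f1_score` are assumptions, not the repo's code) could sweep a per-AU grid over the validation outputs:

```python
import numpy as np
from sklearn.metrics import f1_score

def best_au_thresholds(probs, labels, grid=np.linspace(0.05, 0.95, 19)):
    """Hypothetical helper: per AU, keep the threshold maximizing F1 on the
    validation set. `probs` and `labels` are (N, num_AUs) arrays of
    predicted probabilities and binary ground truth."""
    thresholds = []
    for au in range(labels.shape[1]):
        f1s = [f1_score(labels[:, au], probs[:, au] > t) for t in grid]
        thresholds.append(float(grid[int(np.argmax(f1s))]))
    return thresholds  # paste into best_thresholds_over_models in test.py
```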
- Download the pretrained CNNs and unzip them.
- Crop and align the face images, and save them to a directory.
- For the CNN model, run `python run_pretrained_model.py --image_dir directory-containing-sequence-of-face-images --model_type CNN --batch_size 12 --eval_with_teacher --eval_with_students --save_dir save-directory --workers 8 --ensemble`. For the CNN-RNN model, run `python run_pretrained_model.py --image_dir directory-containing-sequence-of-face-images --model_type CNN-RNN --seq_len 32 --batch_size 6 --eval_with_teacher --eval_with_students --save_dir save-directory --workers 8 --ensemble`.
```bibtex
@inproceedings{deng2020multitask,
  title={Multitask Emotion Recognition with Incomplete Labels},
  author={Deng, Didan and Chen, Zhaokang and Shi, Bertram E},
  booktitle={2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020)},
  pages={828--835},
  year={2020},
  organization={IEEE Computer Society}
}
```