
Link to published paper associated with this code

Table of Contents
  1. About The Project
  2. Project Results
  3. Getting Started
  4. References

About The Project

This project addresses emotion recognition from facial visual signals while the participant wears a virtual reality headset. Emotions are represented on a valence and arousal scale, a multi-dimensional representation of emotion. The main contributions of this project are:

  • We propose EmoFAN-VR, a novel algorithm for emotion detection, trained to handle the partial-face problem introduced by a virtual reality headset
  • We design and record EmoVR, a novel dataset of participants displaying spontaneous emotion expressions in response to videos watched in a virtual reality environment

This video shows the valence and arousal predictions of the EmoFAN-VR algorithm on a participant wearing a virtual reality headset.

VR-demo.mov

Project Results

We further trained the EmoFAN algorithm [1] on the AffectNet dataset [2], with virtual reality occlusions applied around the eye region.

We then evaluated the resulting algorithm on the AFEW-VA dataset [3], again with virtual reality occlusions applied around the eye region.
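
For illustration only, the sketch below shows one simple way an eye-region occlusion of this kind could be applied to a face crop with NumPy. The band geometry (a horizontal black rectangle and its fractions of the image height) is an assumption made for this example, not the exact occlusion mask used in this work.

import numpy as np

def apply_vr_occlusion(face, top_frac=0.15, bottom_frac=0.55):
    # 'face' is an H x W x 3 uint8 face crop; black out a horizontal band
    # over the eye region to mimic a virtual reality headset.
    occluded = face.copy()
    h = face.shape[0]
    occluded[int(top_frac * h):int(bottom_frac * h), :, :] = 0
    return occluded

# Example on a dummy grey image.
dummy_face = np.full((256, 256, 3), 128, dtype=np.uint8)
occluded_face = apply_vr_occlusion(dummy_face)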

On the AFEW-VA dataset with virtual reality occlusions, the EmoFAN-VR algorithm outperforms the original EmoFAN algorithm by a very large margin on all metrics, setting a new baseline for AFEW-VA with VR occlusions applied. Notably, EmoFAN-VR was not fine-tuned on the AFEW-VA dataset, which shows that it generalises well to new, unseen data.
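
The exact metrics reported for this comparison are not listed in this README; for continuous valence and arousal prediction, a commonly used agreement measure is the Concordance Correlation Coefficient (CCC), sketched below purely for reference.

import numpy as np

def ccc(predictions, targets):
    # Concordance Correlation Coefficient between two 1-D arrays,
    # a standard agreement measure for continuous valence/arousal scores.
    pred_mean, targ_mean = predictions.mean(), targets.mean()
    covariance = np.mean((predictions - pred_mean) * (targets - targ_mean))
    return (2 * covariance) / (predictions.var() + targets.var() + (pred_mean - targ_mean) ** 2)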

Getting Started

To get a local copy up and running, follow the steps below.

Prerequisites

The code requires the following Python packages:

numpy 1.19.0
PIL 7.2.0 (provided by the Pillow package)
json 2.0.9 (part of the Python standard library)
imutils 0.5.4
face_alignment 1.3.4
torch 1.7.1
torchvision 0.8.2
cv2 4.5.2 (provided by the opencv-python package)
skimage 0.16.2 (provided by the scikit-image package)
matplotlib 3.2.2
seaborn 0.10.1
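
Assuming a pip-based environment, the third-party packages above can be installed with a single command. Note that the PyPI names differ from the import names for Pillow, OpenCV, and scikit-image, and that opencv-python releases carry a fourth version component, hence the wildcard:

pip install numpy==1.19.0 Pillow==7.2.0 imutils==0.5.4 face-alignment==1.3.4 torch==1.7.1 torchvision==0.8.2 "opencv-python==4.5.2.*" scikit-image==0.16.2 matplotlib==3.2.2 seaborn==0.10.1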

Data and Models

  1. There are two models you can run: the original EmoFAN model, 'emonet_8.pth', and the model created in this work, 'EmoFAN-VR.pth'. A minimal loading sketch is shown after this list.

  2. AFEW-VA Dataset: Download all twelve zip files from the AFEW-VA-database. Extract them and place all 600 files into the 'data' folder, inside a subfolder named 'AFEW-VA'.
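
A minimal sketch of how the files might be arranged and loaded is given below. The relative paths and the assumption that each '.pth' file holds a plain PyTorch state_dict are ours, not documented by the project; adjust them to match train_and_test.py.

import os
import torch

# Hypothetical locations -- adjust to wherever you placed the files.
MODEL_PATH = 'EmoFAN-VR.pth'             # or 'emonet_8.pth' for the original EmoFAN weights
AFEW_VA_DIR = os.path.join('data', 'AFEW-VA')

# Check the dataset was placed as described above.
assert os.path.isdir(AFEW_VA_DIR), 'Expected the AFEW-VA files under data/AFEW-VA'

# Assumed to be a plain state_dict; inspect the checkpoint if this fails.
state_dict = torch.load(MODEL_PATH, map_location='cpu')
print(f'Loaded {len(state_dict)} entries from {MODEL_PATH}')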

Running the Scripts

To train and test the model, run:

python3 train_and_test.py

References

[1] Toisoul, A., Kossaifi, J., Bulat, A. et al. Estimation of continuous valence and arousal levels from faces in naturalistic conditions. Nat Mach Intell 3, 42–50 (2021).

[2] Mollahosseini, A. et al. AffectNet: A Database for Facial Expression, Valence, and Arousal Computing in the Wild. IEEE Transactions on Affective Computing 10, 18–31 (2019).

[3] Kossaifi, J., Tzimiropoulos, G., Todorovic, S. & Pantic, M. AFEW-VA database for valence and arousal estimation in-the-wild. Image and Vision Computing 65, 23–36 (2017).