This repository provides Python code for decoding different motor imagery conditions from raw EEG data, using a Convolutional Neural Network (CNN).
To run the code, create and activate a dedicated Anaconda environment by typing the following into your terminal:
```
curl -O https://raw.githubusercontent.com/gifale95/eeg_motor_imagery_decoding/main/environment.yml
conda env create -f environment.yml
conda activate dnn_bci
```
Two publicly available EEG BCI datasets are decoded here: 5F and HaLT. For the decoding analysis, the 19-channel EEG signal is standardized, downsampled to 100 Hz, and each trial is epoched in the range [-250 ms, 1000 ms] relative to trial onset. The data, along with the accompanying paper, can be found in Kaya et al. (2018).
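The following is a minimal sketch of these preprocessing steps using NumPy and SciPy. The sampling rate, variable names, and dummy data are illustrative assumptions, not the repository's actual code:

```python
# A minimal preprocessing sketch: standardize, downsample to 100 Hz, and
# epoch trials in [-250 ms, 1000 ms]. All inputs below are stand-ins.
import numpy as np
from scipy.signal import decimate

sfreq = 200          # assumed original sampling rate (HFREQ files are higher)
target_sfreq = 100   # downsampling target used in the analysis

rng = np.random.default_rng(0)
eeg = rng.standard_normal((19, 60 * sfreq))   # stand-in for a 19-channel recording
onsets = np.arange(2, 55, 3) * sfreq          # stand-in trial onsets, in samples

# Standardize each channel to zero mean and unit variance
eeg = (eeg - eeg.mean(axis=1, keepdims=True)) / eeg.std(axis=1, keepdims=True)

# Downsample to 100 Hz (decimate applies an anti-aliasing filter first)
factor = sfreq // target_sfreq
eeg = decimate(eeg, factor, axis=1)
onsets = (onsets / factor).astype(int)        # convert onsets to the new rate

# Epoch each trial in the range [-250 ms, 1000 ms] relative to onset
pre, post = int(0.25 * target_sfreq), int(1.0 * target_sfreq)
epochs = np.stack([eeg[:, o - pre:o + post] for o in onsets])
# epochs.shape == (n_trials, 19, 125)
```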
The 5F dataset contains motor imagery of the movements of the five fingers of one hand: thumb, index, middle, ring, and pinkie. The following files are used for the analyses:
- 5F-SubjectA-160405-5St-SGLHand.mat
- 5F-SubjectB-160316-5St-SGLHand.mat
- 5F-SubjectC-160429-5St-SGLHand-HFREQ.mat
- 5F-SubjectE-160415-5St-SGLHand-HFREQ.mat
- 5F-SubjectF-160210-5St-SGLHand-HFREQ.mat
- 5F-SubjectG-160413-5St-SGLHand-HFREQ.mat
- 5F-SubjectI-160719-5St-SGLHand-HFREQ.mat
To run the code, add the data files to the directory `/project_dir/datasets/5f/data/`.
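The `.mat` files can be read with SciPy, as sketched below. The struct field names (`data`, `marker`, `sampFreq`) follow the dataset description in Kaya et al. (2018), so verify them against your copy of the files; the onset extraction is an illustrative assumption:

```python
# A minimal loading sketch. Each file stores a MATLAB struct "o"; its field
# names follow the dataset description in Kaya et al. (2018).
import numpy as np
from scipy.io import loadmat

mat = loadmat('/project_dir/datasets/5f/data/5F-SubjectA-160405-5St-SGLHand.mat',
              squeeze_me=True, struct_as_record=False)
o = mat['o']

eeg = o.data.T        # stored as samples x channels; transpose to channels x samples
marker = o.marker     # per-sample event code (0 = no event)
sfreq = o.sampFreq    # sampling rate in Hz (higher for the HFREQ files)
# Note: o.data may include non-EEG columns (ground/synchronization) beyond the
# 19 EEG channels; check o.chnames to select the EEG channels.

# Trial onsets are the samples where the marker switches from 0 to a class code.
# The files also contain service marker codes, which must be filtered out.
changes = np.flatnonzero((marker[1:] != marker[:-1]) & (marker[1:] > 0)) + 1
onsets, labels = changes, marker[changes]
```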
The HaLT dataset consists of six motor imagery conditions: left hand, right hand, left foot/leg, right foot/leg, tongue, and a passive/neutral state. The following files are used for the analyses:
- HaLTSubjectA1602236StLRHandLegTongue.mat
- HaLTSubjectB1602186StLRHandLegTongue.mat
- HaLTSubjectC1602246StLRHandLegTongue.mat
- HaLTSubjectE1602196StLRHandLegTongue.mat
- HaLTSubjectF1602026StLRHandLegTongue.mat
- HaLTSubjectG1603016StLRHandLegTongue.mat
- HaLTSubjectI1606096StLRHandLegTongue.mat
- HaLTSubjectJ1611216StLRHandLegTongue.mat
- HaLTSubjectK1610276StLRHandLegTongue.mat
- HaLTSubjectL1611166StLRHandLegTongue.mat
- HaLTSubjectM1611086StLRHandLegTongue.mat
To run the code, add the data files to the directory `/project_dir/datasets/halt/data/`.
The decoding analysis is performed using the shallow ConvNet architecture described in Schirrmeister et al. (2017).
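Braindecode ships an implementation of this architecture. Below is a minimal instantiation sketch; the keyword names follow a recent Braindecode release and vary across versions, so check the signature of your installed version:

```python
# A minimal sketch of instantiating the shallow ConvNet via Braindecode.
from braindecode.models import ShallowFBCSPNet

model = ShallowFBCSPNet(
    n_chans=19,                 # 19 EEG channels
    n_outputs=5,                # 5 classes for 5F (6 for HaLT)
    n_times=125,                # 1250 ms epochs at 100 Hz
    final_conv_length='auto',   # fit the final layer to the input length
)
```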
Cropped-trial decoding is analogous to a data augmentation technique: instead of full trials, the CNN is fed temporal crops of the original trials. This procedure yields more training data and has been shown to increase decoding accuracy. More information on cropped-trial decoding can be found in Schirrmeister et al. (2017), and a tutorial on the Python implementation of the method is available on the Braindecode website.
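The sketch below illustrates the idea with a sliding window over epoched trials. The crop length, stride, and dummy data are illustrative values, not the repository's settings:

```python
# A minimal cropping sketch: slide a fixed-length window across each trial;
# every crop inherits the label of its parent trial.
import numpy as np

rng = np.random.default_rng(0)
epochs = rng.standard_normal((18, 19, 125))   # stand-in epoched trials
labels = rng.integers(0, 5, 18)               # stand-in class labels

def make_crops(epochs, labels, crop_len=100, stride=5):
    crops, crop_labels = [], []
    n_samples = epochs.shape[-1]
    for start in range(0, n_samples - crop_len + 1, stride):
        crops.append(epochs[:, :, start:start + crop_len])
        crop_labels.append(labels)
    return np.concatenate(crops), np.concatenate(crop_labels)

# e.g. 125-sample trials with 100-sample crops every 5 samples -> 6 crops per trial
crops, crop_labels = make_crops(epochs, labels)
```

At test time, the predictions of all crops belonging to one trial are typically averaged to yield a single trial-level prediction.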
Inter-subject learning is a zero-shot learning approach that assesses how well a CNN trained to decode the motor imagery trials of a set of subjects generalizes to a held-out subject. In other words, it tests the feasibility of pre-trained EEG BCI devices that readily work on novel subjects without requiring any training data from those subjects.
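A leave-one-subject-out loop captures this setup. The sketch below is schematic: `train_model` and `evaluate` are hypothetical placeholders, not functions from this repository:

```python
# A schematic leave-one-subject-out split for inter-subject learning.
subjects = ['A', 'B', 'C', 'E', 'F', 'G', 'I']   # e.g. the 5F subjects

for held_out in subjects:
    train_subjects = [s for s in subjects if s != held_out]
    # Train on all trials of the remaining subjects (hypothetical helper)...
    # model = train_model(train_subjects)
    # ...and test on the held-out subject, whose data never entered training.
    # accuracy = evaluate(model, held_out)
```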
The CNN models were trained with the following hyperparameters (a minimal training-loop sketch follows the list):
- Learning rate: 0.001
- Weight decay: 0.01
- Batch size: 128
- Training epochs: 500
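The sketch below wires these values into a PyTorch training loop. The AdamW optimizer, the stand-in model, and the dummy data are assumptions for illustration, not the repository's actual training code:

```python
# A minimal training-loop sketch matching the listed hyperparameters.
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-ins keeping the sketch self-contained; replace with the shallow
# ConvNet and the real epoched data.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(19 * 125, 5))
x = torch.randn(256, 19, 125)                 # dummy epochs
y = torch.randint(0, 5, (256,))               # dummy labels
loader = DataLoader(TensorDataset(x, y), batch_size=128, shuffle=True)

# AdamW is an assumption here, chosen because a weight decay is specified.
optimizer = torch.optim.AdamW(model.parameters(), lr=0.001, weight_decay=0.01)
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(500):                      # 500 training epochs
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()
```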
Results are shown for the training epochs that yielded the highest decoding accuracies: