Explanation of Input-Feature Unit-Output Correlation Maps (Envelope-Activation Correlation) #2
How to interpret and calculate the activations? Are they different from the layer outputs?

I couldn't follow your calculation of the activations 100%. As I understood from the paper, the activations of one layer are the unit outputs, so the dimensions of the activations/unit outputs are the same as the output shape of that layer!? To calculate the correlation between the mean squared envelopes and the activations, their dimensions have to be the same, and you said that they are exactly the same since the envelopes are calculated with the receptive field size of the corresponding layer. Your deep model contains the following layers with output shapes:
I implemented a Keras version of the model with exactly the same dimensions, only using channels-last and average pooling.
In Keras I calculate the unit outputs for the pooling layers and the last convolution layer with the following code (these are the ends of the different blocks you mentioned in the paper):
Because of that, my unit outputs have the same dimensions as the output shape of the layer (channels last):
The receptive field sizes and the resulting envelope shapes of the corresponding layers are as follows:
Can you tell me how to understand the activations, if they are not (simply) the output of one layer, and how to compute them?
Hi,

For cropped decoding without padding, the model will produce as many outputs as it has inputs − receptive field size + 1. Concretely, to get cropped decoding one must appropriately replace the max pooling strides by dilations in the following layers. Code in current braindecode does that here: (it was done manually in this old repo, which is potentially harder to follow). Does that clear things up?
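For the output count, a minimal numeric sketch (the receptive field size below is a hypothetical value, not taken from the model):

```python
# Sketch of the output-count relation for cropped decoding without padding.
n_inputs = 1125             # e.g. 4.5 s at 250 Hz, as used in the paper
receptive_field_size = 500  # hypothetical value, depends on the architecture
n_outputs = n_inputs - receptive_field_size + 1
print(n_outputs)            # 626 predictions per input window in this example
```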
Hi,
No, there is no need to transform the input. In our paper we actually used [-500, +4000] ms for both trialwise and cropped decoding, which is 1125 timesteps @ 250 Hz. [0, 4000] ms should give similar, if slightly worse, results.
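A quick arithmetic check of the quoted window length:

```python
# [-500, +4000] ms at 250 Hz.
sampling_rate_hz = 250
window_ms = 4000 - (-500)                         # 4500 ms
n_samples = window_ms * sampling_rate_hz // 1000  # -> 1125 timesteps
print(n_samples)
```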
Filter to frequency bands:
braindevel/braindecode/analysis/envelopes.py, lines 153–162 @ 21f58aa
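A hedged sketch of this filtering step using scipy rather than the repo's own helpers (band edges and names here are illustrative, not the linked code):

```python
from scipy.signal import butter, filtfilt

def bandpass(data, low_hz, high_hz, fs=250.0, order=4):
    """Band-pass filter along the last (time) axis."""
    nyq = fs / 2.0
    b, a = butter(order, [low_hz / nyq, high_hz / nyq], btype="bandpass")
    return filtfilt(b, a, data, axis=-1)

# e.g. filter trials (trials x channels x time) to the alpha band:
# filtered = bandpass(trials, 7.0, 13.0)
```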
Compute envelope (absolute value of the Hilbert transform):
braindevel/braindecode/analysis/envelopes.py, line 171 @ 21f58aa
Square Envelope (square_before_mean was True in our setting) [the envelope was saved to a file and reloaded]:
braindevel/braindecode/analysis/envelopes.py, lines 30–31 @ 21f58aa
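A hedged sketch of the envelope and squaring steps (illustration only, not the linked code):

```python
import numpy as np
from scipy.signal import hilbert

def squared_envelope(filtered, square_before_mean=True):
    """Absolute value of the analytic signal, optionally squared."""
    env = np.abs(hilbert(filtered, axis=-1))
    return env ** 2 if square_before_mean else env

# env = squared_envelope(bandpass(trials, 7.0, 13.0))
```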
Compute Moving Average of the envelope within the receptive field of the corresponding layer. Basic steps:
braindevel/braindecode/analysis/envelopes.py, lines 37–41 @ 21f58aa
braindevel/braindecode/analysis/envelopes.py, line 76 @ 21f58aa
braindevel/braindecode/analysis/envelopes.py, line 95 @ 21f58aa
braindevel/braindecode/analysis/envelopes.py, lines 80–85 @ 21f58aa
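A hedged sketch of this moving average; afterwards the envelope has n_time − receptive_field_size + 1 values per channel, matching the number of unit outputs of the layer (the receptive field size in the usage line is hypothetical):

```python
import numpy as np

def receptive_field_average(env, receptive_field_size):
    """Sliding mean over the last (time) axis, mode='valid'."""
    kernel = np.ones(receptive_field_size) / receptive_field_size
    return np.apply_along_axis(
        lambda x: np.convolve(x, kernel, mode="valid"), -1, env)

# env_for_layer = receptive_field_average(env, receptive_field_size=71)
```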
Compute Correlation with Activations
For the trained model:
braindevel/braindecode/analysis/create_env_corrs.py, lines 44–45 @ 21f58aa
and for the random (untrained) model:
braindevel/braindecode/analysis/create_env_corrs.py, lines 47–48 @ 21f58aa
Compute Activations
braindevel/braindecode/analysis/create_env_corrs.py, line 60 @ 21f58aa
That is, compute per-batch activations and then aggregate them to per-trial activations in:
braindevel/braindecode/veganlasagne/layer_util.py, lines 30–54 @ 21f58aa
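A hedged sketch of that aggregation: run the forward pass batch-wise up to the layer of interest and stack the unit outputs back into one per-trial array (forward_fn is a placeholder for whatever computes a layer's outputs, not a function from the repo):

```python
import numpy as np

def per_trial_activations(forward_fn, trials, batch_size=32):
    """Collect per-batch activations and concatenate them per trial."""
    activations = []
    for start in range(0, len(trials), batch_size):
        activations.append(forward_fn(trials[start:start + batch_size]))
    return np.concatenate(activations, axis=0)  # trials x units x outputs
```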
Compute Correlation of Envelope and Activations
braindevel/braindecode/analysis/create_env_corrs.py, line 76 @ 21f58aa
braindevel/braindecode/analysis/envelopes.py, lines 59–71 @ 21f58aa
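A hedged sketch of the correlation itself, for one unit and one channel/frequency band: flatten the envelope moving average and the unit's outputs over trials and time, then take the Pearson correlation (the linked code does this for all units, channels, and bands at once):

```python
import numpy as np

def env_act_correlation(env_ma, unit_acts):
    """env_ma, unit_acts: arrays of shape (trials, n_outputs)."""
    return np.corrcoef(env_ma.ravel(), unit_acts.ravel())[0, 1]
```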
In the end, these correlations for the trained and untrained model are saved:
braindevel/braindecode/analysis/create_env_corrs.py, lines 52–53 @ 21f58aa
Once you have these correlations for the trained and untrained model, you can average across units in a layer and then compute the difference between them (trained vs. untrained model correlations). This is Figure 15 in https://onlinelibrary.wiley.com/doi/full/10.1002/hbm.23730
As a comparison, we also compute the correlations of the envelope with the class labels (no network involved!). These are shown in the rightmost plots of Figure 15, and class-resolved (per class) in Figure 14.
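A hedged sketch of that last aggregation step (the array layout and function name are assumptions made for illustration):

```python
import numpy as np

def correlation_difference(corrs_trained, corrs_untrained, unit_axis=0):
    """Average across units of a layer, then take trained minus untrained."""
    return (np.mean(corrs_trained, axis=unit_axis)
            - np.mean(corrs_untrained, axis=unit_axis))
```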