I use spiking data from 175 neurons to predict hand movements in three dimensions.
- Neural Data: recorded from a Utah Array implanted in the sensorimotor cortex of a common marmoset; spike counts are binned in 20 ms windows.
- Kinematic Data: 3D hand position, which was estimated from video data using DeepLabCut.
- Experimental Behavior: the marmoset captures live moths in a prey-capture box.
I implement an LSTM model to predict the instantaneous 3D position of the hand from the spike counts of the 175 neurons. I define a sequence length and a decoder lead time such that a window of time bins of neural activity predicts movement a short time later (50 ms). A linear readout layer after the LSTM converts the LSTM outputs to x-y-z position. A minimal sketch of this architecture is shown below.
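The sketch below illustrates the architecture in PyTorch. The class name, hidden size, and sequence length are illustrative assumptions rather than values from the notebook; the 50 ms lead time would be applied when aligning each spike window to its position target during dataset construction, which is not shown here.

```python
# Minimal sketch of the decoder described above (names and sizes are assumptions,
# not taken from the repository). An LSTM consumes a window of binned spike counts
# and a linear readout maps the final hidden state to x-y-z hand position.
import torch
import torch.nn as nn

N_NEURONS = 175  # input features per 20 ms bin
SEQ_LEN = 10     # bins of neural history per window (assumed value)

class LSTMDecoder(nn.Module):
    def __init__(self, n_inputs=N_NEURONS, hidden_size=128, num_layers=1, dropout=0.0):
        super().__init__()
        self.lstm = nn.LSTM(
            input_size=n_inputs,
            hidden_size=hidden_size,
            num_layers=num_layers,
            batch_first=True,
            dropout=dropout if num_layers > 1 else 0.0,
        )
        # Linear readout converts the LSTM output to 3D position (x, y, z).
        self.readout = nn.Linear(hidden_size, 3)

    def forward(self, spikes):
        # spikes: (batch, seq_len, n_neurons) binned spike counts
        lstm_out, _ = self.lstm(spikes)
        # Predict position from the last time step's hidden state.
        return self.readout(lstm_out[:, -1, :])

# Shape check: a batch of 32 windows, each SEQ_LEN bins of 175-neuron counts.
x = torch.randn(32, SEQ_LEN, N_NEURONS)
pred_xyz = LSTMDecoder()(x)  # shape (32, 3)
```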
I improve performance on held-out test data from r2=0.79 to r2=0.86 by (a code sketch of the resulting configuration follows the list):
- Doubling the size of the hidden layer(s).
- Adding a second hidden layer.
- Adding regularization: dropout=0.4 and weight_decay=1e-5.
- Reducing the number of training epochs required by increasing the learning rate from 0.001 to 0.01.
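A hedged sketch of how these changes might be applied, reusing the LSTMDecoder sketch above. The baseline hidden size of 128 and the choice of the Adam optimizer are assumptions; the dropout, weight decay, and learning rate values come from the list.

```python
import torch

model = LSTMDecoder(
    n_inputs=175,
    hidden_size=256,   # doubled from an assumed baseline of 128
    num_layers=2,      # second hidden (LSTM) layer
    dropout=0.4,       # dropout regularization between LSTM layers
)
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=0.01,           # increased from 0.001, so fewer epochs are needed
    weight_decay=1e-5, # L2-style weight regularization
)
```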
collect_data_from_nwb.py: samples spikes and position from the NWB file.
decoding_movement_with_LSTMs.ipynb: implementation of the LSTM model, with a longer description of model development and evaluation.