
Commit 4e11b0e

Merge pull request #122 from MouseLand/dev
Change display layout
Atika-Syeda authored Sep 8, 2023
2 parents a383298 + 5a58318 commit 4e11b0e
Showing 10 changed files with 1,256 additions and 1,186 deletions.
18 changes: 13 additions & 5 deletions README.md
@@ -27,7 +27,7 @@ Syeda, A., Zhong, L., Tung, R., Long, W., Pachitariu, M.\*, & Stringer, C.\* (20
 Stringer, C.\*, Pachitariu, M.\*, Steinmetz, N., Reddy, C. B., Carandini, M., & Harris, K. D. (2019). Spontaneous behaviors drive multidimensional, brainwide activity. <em>Science, 364</em>(6437), eaav7893.
 [[bibtex](https://scholar.googleusercontent.com/scholar.bib?q=info:DNVOkEas4K8J:scholar.google.com/&output=citation&scisdr=CgXHFLYtEMb9qP1Bt0Q:AAGBfm0AAAAAY3JHr0TJourtY6W2vbjy7opKXX2jOX9Z&scisig=AAGBfm0AAAAAY3JHryiZnvgWM1ySwd_xQ9brvQxH71UM&scisf=4&ct=citation&cd=-1&hl=en&scfhb=1)]

-The MATLAB version of the GUI is no longer supported (old [documentation](docs/svd_matlab_tutorial.md)).
+The MATLAB version of the GUI is no longer supported (see old [documentation](https://github.com/MouseLand/facemap/blob/main/docs/svd_matlab_tutorial.md)).

 ## Installation

@@ -42,6 +42,14 @@ If you are using a GPU, make sure its drivers and the cuda libraries are correct
 5. To install the minimal version of facemap, run `python -m pip install facemap`.
 6. To install facemap and the GUI, run `python -m pip install facemap[gui]`. If you're on a zsh server, you may need to use quotes around the facemap[gui] call: `python -m pip install 'facemap[gui]'`.

+For MacBook users with M1/M2 chips, please follow these instructions for installation (consolidated as a shell sequence below):
+1. Install an [Anaconda](https://www.anaconda.com/products/distribution) distribution of Python. Note that you might need to use an anaconda prompt if you did not add anaconda to the path.
+2. Open an anaconda prompt / command prompt which has `conda` for **python 3** in the path.
+3. Create a new environment with `conda create -y -n facemap python=3.9 pyqt imagecodecs pyqtgraph matplotlib`.
+4. To activate this new environment, run `conda activate facemap`.
+5. Next, install facemap in the environment: `pip install facemap`.
+6. Finally, run `python -m facemap` to launch the facemap GUI.
+
 To upgrade facemap (package [here](https://pypi.org/project/facemap/)), run the following in the environment:

 ~~~sh
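
For reference, the M1/M2 installation steps above condense into the following shell sequence (a sketch assembled from those steps, assuming conda is already installed; the environment name `facemap` and Python 3.9 come from step 3):

~~~sh
# Create a conda environment for facemap (Apple silicon, M1/M2)
conda create -y -n facemap python=3.9 pyqt imagecodecs pyqtgraph matplotlib

# Activate the environment, install facemap, and launch the GUI
conda activate facemap
pip install facemap
python -m facemap
~~~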
@@ -86,7 +94,7 @@ Facemap supports grayscale and RGB movies. The software can process multi-camera
 '.mj2','.mp4','.mkv','.avi','.mpeg','.mpg','.asf'

-For more details, please refer to the [data acquisition page](docs/data_acquisition.md).
+For more details, please refer to the [data acquisition page](https://github.com/MouseLand/facemap/blob/main/docs/data_acquisition.md).

 ## Support
@@ -118,7 +126,7 @@ Facemap provides a trained network for tracking distinct keypoints on the mouse
 Keypoints will be predicted in the selected bounding box region so please ensure the bounding box focuses on the face. See example frames [here](figs/mouse_views.png).

-For more details on using the tracker, please refer to the [GUI Instructions](docs/pose_tracking_gui_tutorial.md). See [command line interface (CLI) instructions](docs/pose_tracking_cli_tutorial.md) and for more examples, please see [tutorial notebooks](https://github.com/MouseLand/facemap/tree/dev/notebooks).
+For more details on using the tracker, please refer to the [GUI Instructions](https://github.com/MouseLand/facemap/blob/main/docs/pose_tracking_gui_tutorial.md). Check out the [notebook](https://github.com/MouseLand/facemap/blob/main/docs/notebooks/process_keypoints.ipynb) for processing keypoints in Colab.

 <p float="middle">
 <img src="https://raw.githubusercontent.com/MouseLand/facemap/main/figs/mouse_face1_keypoints.png" width="310" height="290" title="View 1" alt="view1" align="left" vspace = "10" hspace="30" style="border: 0.5px solid white" />
@@ -134,7 +142,7 @@ Facemap allows pupil tracking, blink tracking and running estimation, see more d
 You can draw ROIs to compute the motion/movie SVD within the ROI, and/or compute the full video SVD by checking `multivideo`. Then check `motSVD` and/or `movSVD` and click `process`. The processed SVD `*_proc.npy` (and optionally `*_proc.mat`) file will be saved in the output folder selected.

-For more details see [SVD python tutorial](docs/svd_python_tutorial.md) or [SVD MATLAB tutorial](docs/svd_matlab_tutorial.md).
+For more details see [SVD python tutorial](https://github.com/MouseLand/facemap/blob/main/docs/svd_python_tutorial.md) or [SVD MATLAB tutorial](https://github.com/MouseLand/facemap/blob/main/docs/svd_matlab_tutorial.md).

 ([video](https://www.youtube.com/watch?v=Rq8fEQ-DOm4) with old install instructions)
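
The `*_proc.npy` output described above can be inspected in Python. A minimal sketch, assuming the file holds a pickled dictionary; the file name `cam1_proc.npy` and the key names are illustrative assumptions, not guaranteed by this diff:

~~~python
import numpy as np

# Load the saved SVD output; "cam1_proc.npy" is a hypothetical file name,
# standing in for the "<video>_proc.npy" saved in the chosen output folder.
proc = np.load("cam1_proc.npy", allow_pickle=True).item()

# List what was saved; keys such as "motSVD" / "movSVD" are assumptions
# based on the checkboxes named above, so verify against your own file.
print(proc.keys())
~~~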
@@ -149,5 +157,5 @@ The encoding model used for prediction is described as follows:
 <img src="https://raw.githubusercontent.com/MouseLand/facemap/main/figs/encoding_model.png" width="70%" height="300" title="neural model" alt="neural model" align="center" vspace = "10" hspace="30" style="border: 0.5px solid white" />
 </p>

-Please see neural activity prediction [tutorial](docs/neural_activity_prediction_tutorial.md) for more details.
+Please see neural activity prediction [tutorial](https://github.com/MouseLand/facemap/blob/main/docs/neural_activity_prediction_tutorial.md) for more details.
23 changes: 12 additions & 11 deletions docs/neural_activity_prediction_tutorial.md
@@ -1,23 +1,24 @@
 # Neural activity prediction

-This tutorial shows how to use the deep neural network encoding model for neural prediction using mouse orofacial behavior.
+This tutorial shows how to use the deep neural network encoding model to predict neural activity based on mouse orofacial behavior.

-To process neural activity prediction using pose estimates extracted using the tracker:
+To process neural activity prediction using pose estimates extracted using the keypoint tracker:

-1. Load or process keypoints ([see pose tracking tutorial](docs/pose_tracking_gui_tutorial.md)).
-2. Select `Neural activity` from file menu to `Load neural data`.
-3. Load neural activity data (2D-array stored in *.npy) and (optionally) timestamps for neural and behavioral data (1D-array stored in *.npy) then click `Done`.
-4. Select `Run neural prediction` from the `Neural activity` file menu.
-5. Select `Keypoints` as input data and set whether the output of the model's prediction to be `neural PCs` or neural activity. Use help button to set training parameters for the model.
-5. The predicted neural activity *.npy file will be saved in the selected output folder.
+1. Load or process keypoints ([see pose tracking tutorial](https://github.com/MouseLand/facemap/blob/main/docs/pose_tracking_gui_tutorial.md)).
+2. Select `Neural activity` from the file menu.
+3. Click on `Launch neural activity window`.
+4. Select `Load neural activity` (2D-array stored in *.npy) and (optionally) timestamps for neural and behavioral data (1D-arrays stored in *.npy), then click `Done`.
+5. Once the neural data is loaded, click on `Run neural predictions`.
+6. Select `Keypoints` as input data and select one of the options for the output of the model's prediction, which can be `Neural PCs` or neural activity. Click on the `Help` button for more information.
+7. The predicted neural activity (*.npy) file will be saved in the selected output folder.

-To process neural activity prediction using pose estimates extracted using the tracker:
+To predict neural activity using SVDs from Facemap:

-1. Load or process SVDs for the video. ([see SVD tutorial](docs/svd_tutorial.md)).
+1. Load or process SVDs for the video ([see SVD tutorial](https://github.com/MouseLand/facemap/blob/main/docs/svd_python_tutorial.md)).
 2. Follow steps 2-5 above.

 Note: a linear model is used for prediction using SVDs.

-Predicted neural activity will be plotted in the bottom-right window of the GUI. You can highlight test data by selecting `Highlight test data` from the `Neural activity` file menu. Further information about neural prediction, including variance explained can be found in the saved neural prediction file.
+Predicted neural activity will be plotted in the neural activity window. Toggle `Highlight test data` to highlight time segments not used for training, i.e. test data. Further information about neural prediction, including variance explained, can be found in the saved neural prediction file (*.npy).


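The saved prediction file can be loaded the same way as other Facemap outputs. A minimal sketch, assuming the *.npy holds a pickled dictionary; the file name is hypothetical, and the tutorial above only states that the prediction and variance explained are stored inside:

~~~python
import numpy as np

# Hypothetical file name; the tutorial says the predicted neural activity
# is saved as a *.npy file in the selected output folder.
pred = np.load("neural_prediction.npy", allow_pickle=True).item()

# The exact schema is not documented here, so list the keys to locate the
# predicted activity and the variance-explained entries.
print(pred.keys())
~~~
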
39 changes: 19 additions & 20 deletions docs/neural_activity_prediction_tutorial.rst
@@ -1,37 +1,36 @@
 Neural activity prediction
 ==========================

-This tutorial shows how to use the deep neural network encoding model
-for neural prediction using mouse orofacial behavior.
+This tutorial shows how to use the deep neural network encoding model to
+predict neural activity based on mouse orofacial behavior.

 To process neural activity prediction using pose estimates extracted
-using the tracker:
+using the keypoint tracker:

 1. Load or process keypoints (`see pose tracking
    tutorial <https://github.com/MouseLand/facemap/blob/main/docs/pose_tracking_gui_tutorial.md>`__).
-2. Select ``Neural activity`` from file menu to ``Load neural data``.
-3. Load neural activity data (2D-array stored in \*.npy) and (optionally)
-   timestamps for neural and behavioral data (1D-array stored in \*.npy)
-   then click ``Done``.
-4. Select ``Run neural prediction`` from the ``Neural activity`` file
-   menu.
-5. Select ``Keypoints`` as input data and set whether the output of the
-   model’s prediction to be ``neural PCs`` or neural activity. Use help
-   button to set training parameters for the model.
-6. The predicted neural activity \*.npy file will be saved in the
+2. Select ``Neural activity`` from the file menu.
+3. Click on ``Launch neural activity window``.
+4. Select ``Load neural activity`` (2D-array stored in \*.npy) and
+   (optionally) timestamps for neural and behavioral data (1D-arrays
+   stored in \*.npy), then click ``Done``.
+5. Once the neural data is loaded, click on ``Run neural predictions``.
+6. Select ``Keypoints`` as input data and select one of the options for
+   the output of the model’s prediction, which can be ``Neural PCs`` or
+   neural activity. Click on the ``Help`` button for more information.
+7. The predicted neural activity (\*.npy) file will be saved in the
    selected output folder.

-To process neural activity prediction using pose estimates extracted
-using the tracker:
+To predict neural activity using SVDs from Facemap:

 1. Load or process SVDs for the video (`see SVD
    tutorial <https://github.com/MouseLand/facemap/blob/main/docs/svd_python_tutorial.md>`__).
 2. Follow steps 2-5 above.

 Note: a linear model is used for prediction using SVDs.

-Predicted neural activity will be plotted in the bottom-right window of
-the GUI. You can highlight test data by selecting
-``Highlight test data`` from the ``Neural activity`` file menu. Further
-information about neural prediction, including variance explained can be
-found in the saved neural prediction file.
+Predicted neural activity will be plotted in the neural activity window.
+Toggle ``Highlight test data`` to highlight time segments not used for
+training, i.e. test data. Further information about neural prediction,
+including variance explained, can be found in the saved neural
+prediction file (\*.npy).