Commit 2c834de (0 parents). This commit does not belong to any branch on this repository, and may belong to a fork outside of the repository. Showing 125 changed files with 35,703 additions and 0 deletions.
```
{}
```
Binary file not shown.
```
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: 2fbfe5c28f0eb11da3c292dfcd2c70cd
tags: 645f666f9bcd5a90fca523b33c5a78b7
```
`_build/html/_panels_static/panels-main.c949a650a448cc0ae9fd3441c0e17fb0.css` (1 addition, 0 deletions; generated file, not rendered by default).
`_build/html/_panels_static/panels-variables.06eb56fa6e07937060861dad626602ad.css` (7 additions, 0 deletions):

```css
:root {
  --tabs-color-label-active: hsla(231, 99%, 66%, 1);
  --tabs-color-label-inactive: rgba(178, 206, 245, 0.62);
  --tabs-color-overline: rgb(207, 236, 238);
  --tabs-color-underline: rgb(207, 236, 238);
  --tabs-size-label: 1rem;
}
```
# Entries

This section includes all entries for my daily journal. Entries are separated by type of work and/or stage of the research. The format for each entry is described below:

<b>Title</b><br>
<i>Date [mm/dd/yyyy], Start time [hh:mm am/pm]</i>

Synopsis: Provides an overview of my plan for the work session.

Data: Describes what I accomplished during the session, including key observations, challenges, solutions, and additional notes.

Resources: Links to pages in other parts of the Jupyter notebook and to external resources used during the session.

<i>End time [hh:mm am/pm], Minutes</i>
# Welcome
# Markdown Files

Whether you write your book's content in Jupyter Notebooks (`.ipynb`) or
in regular markdown files (`.md`), you'll write in the same flavor of markdown
called **MyST Markdown**.

## What is MyST?

MyST stands for "Markedly Structured Text". It
is a slight variation on a flavor of markdown called "CommonMark" markdown,
with small syntax extensions to allow you to write **roles** and **directives**
in the Sphinx ecosystem.

## What are roles and directives?

Roles and directives are two of the most powerful tools in Jupyter Book. They
are kind of like functions, but written in a markup language. They both
serve a similar purpose, but **roles are written in one line**, whereas
**directives span many lines**. They both accept different kinds of inputs,
and what they do with those inputs depends on the specific role or directive
being called.

### Using a directive

At its simplest, you can insert a directive into your book's content like so:

````
```{mydirectivename}
My directive content
```
````

This will only work if a directive with the name `mydirectivename` already exists
(which it doesn't). There are many pre-defined directives associated with
Jupyter Book. For example, to insert a note box into your content, you can
use the following directive:

````
```{note}
Here is a note
```
````

This results in:

```{note}
Here is a note
```

in your built book.

For more information on writing directives, see the
[MyST documentation](https://myst-parser.readthedocs.io/).

### Using a role

Roles are very similar to directives, but they are less complex and written
entirely on one line. You can insert a role into your book's content with
this pattern:

```
Some content {rolename}`and here is my role's content!`
```

Again, roles will only work if `rolename` is a valid role name. For example,
the `doc` role can be used to refer to another page in your book. You can
refer directly to another page by its relative path. For example, the
role syntax `` {doc}`intro` `` will result in: {doc}`intro`.

For more information on writing roles, see the
[MyST documentation](https://myst-parser.readthedocs.io/).

### Adding a citation

You can also cite references that are stored in a `bibtex` file. For example,
the following syntax: `` {cite}`holdgraf_evidence_2014` `` will render like
this: {cite}`holdgraf_evidence_2014`.

Moreover, you can insert a bibliography into your page with this syntax:
the `{bibliography}` directive must be used for all the `{cite}` roles to
render properly.
For example, if the references for your book are stored in `references.bib`,
then the bibliography is inserted with:

````
```{bibliography}
```
````

resulting in a rendered bibliography that looks like:

```{bibliography}
```

### Executing code in your markdown files

If you'd like to include computational content inside these markdown files,
you can use MyST Markdown to define cells that will be executed when your
book is built. Jupyter Book uses *jupytext* to do this.

First, add Jupytext metadata to the file. For example, to add Jupytext metadata
to this markdown page, run this command:

```
jupyter-book myst init markdown.md
```

Once a markdown file has Jupytext metadata in it, you can add the following
directive to run the code at build time:

````
```{code-cell}
print("Here is some code to execute")
```
````

When your book is built, the contents of any `{code-cell}` blocks will be
executed with your default Jupyter kernel, and their outputs will be displayed
in-line with the rest of your content.

For more information about executing computational content with Jupyter Book,
see [The MyST-NB documentation](https://myst-nb.readthedocs.io/).
# Deep Learning
@@ -0,0 +1,61 @@ | ||
# Intro | ||
### Basics of Convolutional Neural Networks (CNN) | ||
* Artificial neural networks made of layers | ||
* Performs single image processing function | ||
* Convolution: summarization of each region of the image or matrix | ||
* Flashlight metaphor: | ||
* Filter/neuron/kernel - array of numbers representing weights | ||
* Sliding motion - convolution | ||
* Illuminated region - receptive field | ||
* Multiply values in the filter with original pixel values; sum result | ||
* Feature map of smaller size is created | ||
* Stack different convolution modules | ||
* Units cover larger zones of input from early to late layers - tolerance to spacial translation of features/shape in image, so later layers can identify patterns or shapes independently of the original location of the shape | ||
* Features are more complex from early to late layers | ||
* Feature detectors that receive input and pass information to the next | ||
* Resemblance to primate visualization system | ||
|
||
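The flashlight-style convolution described above can be sketched in NumPy (a minimal illustration with an arbitrary 3x3 kernel, not code from the course materials):

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide the kernel over the image (the "flashlight") and, at each
    position, multiply the filter weights with the illuminated pixels
    (the receptive field) and sum the result."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1  # no padding, stride 1: a smaller feature map
    ow = image.shape[1] - kw + 1
    feature_map = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            receptive_field = image[i:i + kh, j:j + kw]
            feature_map[i, j] = np.sum(receptive_field * kernel)
    return feature_map

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[1.0, 0.0, -1.0]] * 3)  # responds to horizontal intensity change
fmap = convolve2d(image, kernel)
print(fmap.shape)  # (3, 3): the feature map is smaller than the 5x5 input
```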
### CNN Learning and Training
* The network receives input and produces an output related to the network's task (image classification, etc.)
* Weights are initially set randomly
* Supervised network - give the network the correct answer
* A working network gives a probability between 0 and 1 for each label
* Cost = 𝛴(network's answer - wanted answer)^2
* Backpropagation: weights are changed by sending the cost back through the network
* The network iteratively calculates error values for each layer and updates its parameters
* Show images of all categories to teach the network
* Each layer learns more complex patterns: contours => shapes => objects
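The cost-and-update loop above can be sketched for a single linear unit on made-up data (the learning rate, shapes, and data are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))           # 100 inputs, 3 features each
true_w = np.array([1.0, -2.0, 0.5])
y = x @ true_w                          # the "wanted answer"

w = rng.normal(size=3)                  # weights are initially set randomly
for _ in range(200):
    y_hat = x @ w                       # the network's answer
    cost = np.sum((y_hat - y) ** 2)     # Cost = sum((answer - wanted)^2)
    grad = 2 * x.T @ (y_hat - y)        # gradient of the cost w.r.t. the weights
    w -= 0.001 * grad                   # send the cost back: update the weights

print(np.round(w, 2))
```

After training, the weights have moved from their random start toward the values that minimize the cost.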
### Visualizations of CNN units
* Analyze what the network has learned
* Inspiration from neuroscience: show a specific image/pattern to a brain while recording the response of a cell
* Black-box approach
* For an artificial neural network, show many different patterns and record the responses of the network's units to try to determine the sensitivities of each cell
* Estimating receptive fields:
  * Identify regions of the image that lead to high unit activations
  * Sliding-window stimuli contain a small randomized patch at different spatial locations
  * Feed the occluded images into the same network
  * Record the change in activation compared to the original image
  * Large discrepancy = the given patch is important
  * Obtain a discrepancy map for each unit for each image shown
  * Re-center the discrepancy maps and average the calibrated maps to obtain the final receptive field for that unit
* Train the same network to solve different tasks
  * Find discriminative features relevant to categorization tasks
* Network plasticity/fine-tuning - what happens to neurons when they relearn
  * Features are forgotten
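The occlusion procedure above can be sketched with a hypothetical toy "unit" standing in for a real network activation (the unit, image, and patch size are all made up for illustration):

```python
import numpy as np

def unit_activation(image):
    # Hypothetical unit: responds to the total intensity of the image center.
    return image[2:5, 2:5].sum()

image = np.ones((7, 7))
baseline = unit_activation(image)

patch = 3  # size of the sliding occluder
discrepancy = np.zeros((7 - patch + 1, 7 - patch + 1))
for i in range(discrepancy.shape[0]):
    for j in range(discrepancy.shape[1]):
        occluded = image.copy()
        occluded[i:i + patch, j:j + patch] = 0.0  # randomized patch, here zeros
        # A large change in activation means this patch matters to the unit.
        discrepancy[i, j] = abs(baseline - unit_activation(occluded))

# The discrepancy map peaks where the occluder covers the unit's center field.
peak = np.unravel_index(np.argmax(discrepancy), discrepancy.shape)
print(peak)
```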
### Comparison of Natural (Brain) vs Artificial Neural Networks
* Use artificial networks to learn about how biological neural networks work
* Correspondence between responses to visual objects
* Algorithm-specific fMRI searchlight analysis
  * Move a spherical searchlight through the brain volume to select a local set of voxels at each location
  * A vector corresponds to the activity of the patch
  * Build a matrix of dissimilarity between pairs of image responses
* Present the object images shown to humans in fMRI to the neural network
  * Extract responses over all units of each layer for all images to build similarity matrices for each layer
  * Compare by taking the Spearman correlation between matrices
* Representational Similarity Analysis (RSA): compare responses from different sensors/data sources by measuring the difference between responses
* Spatiotemporal maps of correlations between human brain and model layers
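A toy RSA comparison along the lines above, assuming made-up "brain" and "model layer" response matrices (the rank-based Spearman here is a minimal NumPy stand-in for a library routine):

```python
import numpy as np

def rdm(responses):
    """Representational dissimilarity matrix: 1 - correlation between the
    response vectors for every pair of images."""
    return 1.0 - np.corrcoef(responses)

def spearman(a, b):
    # Spearman = Pearson correlation of the ranks (fine here: no ties).
    ranks_a = a.argsort().argsort()
    ranks_b = b.argsort().argsort()
    return np.corrcoef(ranks_a, ranks_b)[0, 1]

rng = np.random.default_rng(1)
brain = rng.normal(size=(6, 50))                  # 6 images x 50 voxels
layer = brain + 0.01 * rng.normal(size=(6, 50))   # model layer resembling it

# Compare only the upper triangles: the RDMs are symmetric, diagonals are 0.
iu = np.triu_indices(6, k=1)
rho = spearman(rdm(brain)[iu], rdm(layer)[iu])
print(rho > 0.9)
```

A high correlation indicates that the two systems represent the stimulus set with a similar geometry, even though voxels and units are not matched one-to-one.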
## Resources
* [Neuromatch Academy: Computational Neuroscience](https://compneuro.neuromatch.io/tutorials/W1D5_DeepLearning/student/W1D5_Intro.html)
# Tutorial 1: Decoding Neural Responses
### Decoding vs encoding models
* Decoding - neural activity → variable
  * How much information a brain region contains about that variable
* Encoding - variable → neural activity
  * Approximate the transformations the brain performs on input variables
  * Understand how the brain represents information
### Data
* Neural recordings in mice
* Two-photon calcium imaging to record neural activity
* Convert the imaging data into a matrix of neural responses by stimuli presented
* Bin the neural responses (one-degree bins) and compute each neuron's tuning curve
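The binning step can be sketched like so (a simulated neuron with an assumed Gaussian tuning profile, not the mouse data itself):

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 20000
stimulus = rng.uniform(0, 180, size=n_trials)   # stimulus orientation (degrees)
# Hypothetical neuron tuned to 90 degrees, plus recording noise.
response = (np.exp(-0.5 * ((stimulus - 90.0) / 20.0) ** 2)
            + 0.1 * rng.normal(size=n_trials))

# Bin the stimuli into one-degree bins; the per-bin mean response is the
# neuron's tuning curve.
bin_idx = np.digitize(stimulus, np.arange(0, 181)) - 1
tuning_curve = np.array([response[bin_idx == b].mean() for b in range(180)])

print(int(np.argmax(tuning_curve)))  # peak lands near the preferred 90 degrees
```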
### Decoding model
* Linear network with no hidden layers
  * Stimulus prediction: y = weights_out * neural response r + bias
* Fit the network by minimizing the squared error between the stimulus prediction and the true stimulus with a loss function
* Add a single hidden layer with m units to the linear model
  * y = weights_out * hidden output h + bias
  * Hidden layer: h = weights_in * neural response r + bias
* Increasing depth and width (number of units) can increase the expressivity of the model - how well it can fit complex non-linear functions
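A forward pass for the one-hidden-layer decoder above might look like this (shapes and initialization are assumptions; the tutorial itself builds the model in a deep learning framework):

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, m = 50, 10   # population size, and m hidden units

W_in = rng.normal(size=(m, n_neurons)) / np.sqrt(n_neurons)
b_in = np.zeros(m)
W_out = rng.normal(size=(1, m)) / np.sqrt(m)
b_out = np.zeros(1)

def relu(x):
    return np.maximum(0.0, x)

def decode(r):
    """Predict the stimulus y from a neural response vector r."""
    h = relu(W_in @ r + b_in)   # hidden layer: h = W_in r + bias, then ReLU
    return W_out @ h + b_out    # output: y = W_out h + bias

r = rng.normal(size=n_neurons)  # one fake population response
print(decode(r).shape)          # (1,): a single stimulus prediction
```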
### Non-linear activation functions
* Add non-linearities that allow flexible fitting
* ReLU: phi(x) = max(0, x)
* Sigmoid
* Tanh
* ReLU works well because its gradient is 1 for all x > 0 and 0 for all x < 0, so the gradient can backpropagate through the network as long as x > 0
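These activation functions, and the ReLU gradient property noted above, in a few lines of NumPy:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def relu_grad(x):
    # Exactly 1 where x > 0 and 0 where x < 0, so the gradient passes
    # through unchanged whenever the unit is active.
    return np.where(x > 0, 1.0, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-2.0, -0.5, 0.5, 2.0])
print(relu(x))        # negative inputs are clipped to zero
print(relu_grad(x))   # gradient: 0 for the first two entries, 1 for the rest
print(sigmoid(0.0), np.tanh(0.0))
```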
### Neural network depth, width, and expressivity
* Depth - number of hidden layers
* Width - number of units in each hidden layer
* Expressivity - the set of input/output transformations a network can perform, often determined by depth and width
* Cost of wider, deeper networks:
  * More parameters = more data needed
  * A highly parameterized network is more prone to overfitting and requires sophisticated optimization algorithms
### Gradient descent
1. Evaluate the loss on the training data
2. Compute the gradient of the loss through backpropagation
3. Update the network weights
#### Stochastic gradient descent
* Evaluate the loss on a random subset of samples from the full training set, called a mini-batch
* Bypasses the restrictive memory demands of GD by subsampling the training set into smaller mini-batches
* Adds some noise to the search for local minima of the loss function, which can help prevent overfitting
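The three steps above, run on random mini-batches as described, can be sketched on a toy linear model (the data, batch size, and step size are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 3))           # full training set
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true

w = np.zeros(3)
batch_size, lr = 32, 0.02
for step in range(500):
    # Mini-batch: a random subset of samples from the full training set.
    idx = rng.choice(len(X), size=batch_size, replace=False)
    Xb, yb = X[idx], y[idx]
    # 1. evaluate the loss on the batch; 2. compute its gradient;
    # 3. update the network weights.
    grad = 2.0 * Xb.T @ (Xb @ w - yb) / batch_size
    w -= lr * grad

print(np.round(w, 1))   # recovers w_true up to SGD noise
```

Each step sees only 32 of the 500 samples, which is what keeps the memory cost low and injects the noise mentioned above.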
## Resources
* [Neuromatch Academy: Computational Neuroscience](https://compneuro.neuromatch.io/tutorials/W1D5_DeepLearning/student/W1D5_Tutorial1.html)