Use the Allen Brain Observatory – Visual Coding on AWS
The Allen Brain Observatory – Visual Coding is the first standardized in vivo survey of physiological activity in the mouse visual cortex, featuring representations of visually evoked calcium responses from GCaMP6-expressing neurons in selected cortical layers, visual areas, and Cre lines. All of this data is now available to everyone in Amazon S3. We are excited to make the motion-corrected calcium fluorescence videos for all recording sessions available in S3 so that users can systematically test their own analysis algorithms on the entire data set without having to ship physical hard disks. Users also have access to the Neurodata Without Borders (NWB) files containing raw and baseline-corrected fluorescence traces extracted by the Allen Institute for comparison.
In this data set, we record from a single population of neurons while the mouse passively observes a battery of visual stimuli across three separate ~90-minute sessions. The three sessions consist of interleaved presentations of the following stimuli:
- drifting gratings, natural movies (clip 1, clip 3)
- static gratings, natural movies (clip 1), natural scenes/images
- locally sparse noise, natural movies (clip 1, clip 2)
We call the combination of the three recordings from a single population of neurons an “experiment container.” Each session has a separate NWB file with cell-level response data, for a total of three NWB files per experiment container. Learn more about the design of the experiment, the stimulus protocol of each session, and the organization of the data on our website and in our technical whitepapers.
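With the AllenSDK, you can see this structure directly. The following is a minimal sketch that lists the sessions in one experiment container; it assumes the manifest path used in the SageMaker examples later on this page:

```python
# a sketch: list the sessions that make up one experiment container
from allensdk.core.brain_observatory_cache import BrainObservatoryCache

boc = BrainObservatoryCache(manifest_file='/data/allen-brain-observatory/visual-coding-2p/manifest.json')
containers = boc.get_experiment_containers()
sessions = boc.get_ophys_experiments(experiment_container_ids=[containers[0]['id']])
for s in sessions:
    # one line per session, e.g. three_session_A, three_session_B, three_session_C
    print(s['id'], s['session_type'])
```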
The Allen Brain Observatory data set is hosted on Amazon Web Services (AWS) in an S3 bucket. In order to use the data set, you need to have an AWS account. You can create an AWS account by following these instructions.
The bucket is organized as follows:

- `manifest.json`: used by the AllenSDK to look up file paths
- `experiment_containers.csv`: metadata for each experiment container (area, imaging depth, etc.)
- `ophys_experiments.csv`: metadata for each experiment session
- `ophys_experiment_data/`
  - `<experiment_id>.nwb`: traces, running speed, etc. per experiment session
- `ophys_experiment_analysis/`
  - `<experiment_id>_<session_name>.h5`: analysis files per experiment session
- `ophys_movies/`
  - `<experiment_id>.h5`: motion-corrected video per experiment session
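Because the bucket is public, you can also browse this layout programmatically. Here is a minimal boto3 sketch using unsigned (anonymous) requests; the bucket name allen-brain-observatory is an assumption based on the paths used elsewhere on this page:

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# anonymous client: the bucket is public, so no credentials are required
s3 = boto3.client('s3', region_name='us-west-2', config=Config(signature_version=UNSIGNED))
resp = s3.list_objects_v2(Bucket='allen-brain-observatory', Prefix='visual-coding-2p/', Delimiter='/')
for prefix in resp.get('CommonPrefixes', []):
    print(prefix['Prefix'])  # top-level "folders" under visual-coding-2p/
```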
The instructions below walk through the steps necessary for creating a Jupyter notebook instance and using this data set. A couple of important points:
- The template below uses a customized AWS SageMaker template that comes preconfigured with a large number of environments. We pre-installed allensdk into the `conda_python2` and `conda_python3` environments (a quick verification snippet follows the setup instructions below).
- Make sure you create your instance in the us-west-2 region; that's where our bucket lives.
Option 1: Create a Jupyter Notebook Instance via a Launch Button
- Click on the launch button to open the AWS CloudFormation console.
- Continue clicking Next until you reach the review page.
- On the review page, check the box acknowledging that AWS CloudFormation may create IAM roles, then click Create. You will be redirected to the CloudFormation console; wait for the stack to be created.
- You can check the status of the notebook instance here.
The URL of the notebook instance is the following: https://allen-brain-observatory.notebook.us-west-2.sagemaker.aws/tree
Option 2: Create a Jupyter Notebook Instance via the AWS CLI
- Install the AWS CLI by following these instructions.
- Configure your machine to use your AWS account by following these instructions.
- Download the template from here.
- Run the following command in the directory where you downloaded the template and wait for the instance to be created:
```bash
aws cloudformation create-stack --stack-name allen-brain-observatory --template-body file://./allen-brain-observatory-sagemaker.yml --capabilities CAPABILITY_IAM
```
You can check the status of the notebook instance here.
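If you prefer to poll from code rather than the console, here is a minimal boto3 sketch, assuming the credentials you configured in step 2:

```python
import boto3

# the notebook instance is ready once the stack reports CREATE_COMPLETE
cf = boto3.client('cloudformation', region_name='us-west-2')
stack = cf.describe_stacks(StackName='allen-brain-observatory')['Stacks'][0]
print(stack['StackStatus'])
```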
The URL of the notebook instance is the following: https://allen-brain-observatory.notebook.us-west-2.sagemaker.aws/tree
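Once the notebook instance is running, open it at the URL above and confirm that allensdk is importable in the conda_python2 or conda_python3 kernel; a minimal check:

```python
# verify the pre-installed allensdk is available in the selected kernel
import allensdk
print(allensdk.__version__)
```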
Once your notebook is running, you can access frames of a video like this:
```python
# imports
import h5py
import matplotlib.pyplot as plt
from allensdk.core.brain_observatory_cache import BrainObservatoryCache
%matplotlib inline

# find an ophys experiment
boc = BrainObservatoryCache(manifest_file='/data/allen-brain-observatory/visual-coding-2p/manifest.json')
exps = boc.get_ophys_experiments()
exp = exps[0]

# pull some frames out of the motion-corrected movie
movie_path = '/data/allen-brain-observatory/visual-coding-2p/ophys_movies/ophys_experiment_%d.h5' % exp['id']
with h5py.File(movie_path, 'r') as f:
    frames = f["data"][:10, :, :]

plt.imshow(frames[0])
plt.show()
```
You can also access the dF/F traces of an experiment like this:
```python
ds = boc.get_ophys_experiment_data(exp['id'])
t, dff = ds.get_dff_traces()
plt.plot(t, dff[0])
plt.show()
```
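The same data set object exposes other per-session signals through the AllenSDK; for example, a short sketch of two more accessors:

```python
# running speed (cm/s) recorded alongside the fluorescence traces
dxcm, dxtime = ds.get_running_speed()
plt.plot(dxtime, dxcm)
plt.show()

# IDs of the segmented cells in this session
cell_ids = ds.get_cell_specimen_ids()
print(len(cell_ids), 'cells')
```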
For more detailed examples, take a look at the Allen SDK documentation page.
If you have questions about the data or the Allen SDK, open an issue on the Allen SDK issue tracker or ask a question on Stack Overflow with the ‘allen-sdk’ tag.
This data set is provided under a non-commercial use policy; see our terms of use for details: http://www.alleninstitute.org/legal/terms-use/.