Most of my lab protocols are documented in my LabArchives Notebooks and also kept on our Microsoft 365 infrastructure. I'm creating this DataLad dataset to try an 'open' format, which doesn't require authentication to view and allows others to comment on the contents. I am currently piloting a version of this repository with Quarto; you can see its deployment at the GitHub Pages site.
If you have any questions, comments, or critiques, please do not hesitate to open an issue on GitHub.
Pranav Kumar Mishra, MBBS
Post-Doctoral Research Fellow
Departments of Surgery and Orthopedic Surgery
Rush University Medical Center
The GitHub repository is the primary public-facing repository, while the GIN repository hosts the annexed files as a common data source. If you would like to view the files without having to clone/download the DataLad dataset, please visit the GIN repository.
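If you do clone the dataset (see the instructions below), one way to see how the GitHub and GIN locations fit together is to list the configured siblings and ask where a given file's content lives. This is only a sketch; the file path is a placeholder, and the sibling names depend on how you cloned.

```
# list the siblings (remotes) configured for this dataset
datalad siblings

# show which remotes hold the content of a particular annexed file (placeholder path)
git annex whereis <path/to/file>
```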
This is a DataLad dataset (id: 3d6beff7-a55d-43a2-ab44-6def6a79bde5).
This repository is a DataLad dataset. It provides fine-grained data access down to the level of individual files and allows for tracking future updates. In order to use this repository for data retrieval, DataLad is required. It is a free and open source command line tool, available for all major operating systems, that builds on Git and git-annex to allow sharing, synchronizing, and version controlling collections of large files.
More information on how to install DataLad and how to use it can be found in the DataLad Handbook.
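As a rough sketch, one common installation route is via pip (this is an assumption about your environment, not the only supported method; git-annex must be installed separately, e.g. through your system's package manager — see the Handbook for platform-specific instructions):

```
# install DataLad into the current Python environment
python -m pip install datalad

# verify the installation
datalad --version
```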
A DataLad dataset can be cloned by running

```
datalad clone <url>
```
Once a dataset is cloned, it is a light-weight directory on your local machine. At this point, it contains only small metadata and information on the identity of the files in the dataset, but not actual content of the (sometimes large) data files.
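One way to see this for yourself is to ask DataLad how much annexed data is actually present locally after the clone. A minimal sketch, assuming a freshly cloned dataset (the directory name is a placeholder):

```
cd <cloned-dataset>

# report annexed files and how much of their content is present locally vs. in total
datalad status --annex all
```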
After cloning a dataset, you can retrieve file contents by running

```
datalad get <path/to/directory/or/file>
```
This command will trigger a download of the files, directories, or subdatasets you have specified.
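If you later want to free up disk space, locally retrieved content can be dropped again and re-obtained at any time with `datalad get`. A short sketch with a placeholder path:

```
# release the local copy; the content remains available from the dataset's remotes
datalad drop <path/to/directory/or/file>
```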
DataLad datasets can contain other datasets, so-called subdatasets. If you clone the top-level dataset, subdatasets do not yet contain metadata and information on the identity of files, but appear to be empty directories. In order to retrieve file availability metadata in subdatasets, run
```
datalad get -n <path/to/subdataset>
```
Afterwards, you can browse the retrieved metadata to find out about subdataset contents, and retrieve individual files with `datalad get`. If you use `datalad get <path/to/subdataset>`, all contents of the subdataset will be downloaded at once.
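As an illustrative sketch (the paths are placeholders, not actual subdatasets of this repository), a typical subdataset workflow looks like this:

```
# list the subdatasets registered in the top-level dataset
datalad subdatasets

# obtain a subdataset's file listing without downloading content
datalad get -n <path/to/subdataset>

# retrieve a single file from within the subdataset
datalad get <path/to/subdataset/file>
```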
DataLad datasets can be updated. The command `datalad update` will fetch updates and store them on a different branch (by default `remotes/origin/master`). Running `datalad update --merge` will pull available updates and integrate them in one go.
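For instance, a cautious sketch for reviewing what changed before integrating, assuming the default `origin` remote and `master` branch:

```
# fetch updates without changing the local state
datalad update

# review incoming changes on the remote tracking branch
git log --oneline HEAD..remotes/origin/master

# integrate the updates
datalad update --merge
```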
DataLad datasets contain their history in the `git log`. By running `git log` (or a tool that displays Git history) in the dataset or on specific files, you can find out what has been done to the dataset or to individual files, by whom, and when.
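For example, the history of a single file can be inspected like this (the path is a placeholder):

```
# condensed history of one file, following renames
git log --follow --oneline -- <path/to/file>
```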