A generalized Brain Extraction Net for multimodal MRI data from rodents, nonhuman primates, and humans
See also our related work:
Z. Yu, X. Han, S. Zhang, J. Feng, T. Peng and X. -Y. Zhang, "MouseGAN++: Unsupervised Disentanglement and Contrastive Representation for Multiple MRI Modalities Synthesis and Structural Segmentation of Mouse Brain," in IEEE Transactions on Medical Imaging, 2022, doi: 10.1109/TMI.2022.3225528.
We released BEN (version 0.2) in November, based on the reviewers' suggestions and on experience accumulated in clinical practice.
(The old version has been moved to the 'doc' branch.)
New feature | Location |
---|---|
Orientation detection (note: to run MR scans in their original orientation, do not set the "-check" parameter in the commands) | utils folder |
Utility functions (visualization and postprocessing) | utils folder |
Human-T1WI-HCP (baby) pretrained weight | dataset_release folder |
Optimized BEN pipeline | BEN_DA.py, BEN_infer.py |
Visual (segmentation quality) and volumetric (brain volume) reports in an automatically generated HTML page | utils folder |
Video tutorials | Video tutorials |
Please refer to the list above.
Coincidentally, the motivation behind BEN's training strategy is somewhat consistent with that of Cellpose v2: BEN aims to quickly develop a customized model for each user's application with the help of its AdaBN module.
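As a conceptual illustration of the AdaBN idea (recomputing batch-normalization statistics on unlabeled target-domain data), here is a minimal NumPy sketch; the function names are illustrative and not BEN's actual API:

```python
import numpy as np

def adabn_normalizer(target_features, eps=1e-5):
    """AdaBN in a nutshell: replace the source-domain BatchNorm statistics
    with the mean/variance of the (unlabeled) target-domain activations."""
    mu = target_features.mean(axis=0)
    var = target_features.var(axis=0)

    def normalize(x, gamma=1.0, beta=0.0):
        # Standard BN transform, but using target-domain statistics
        return gamma * (x - mu) / np.sqrt(var + eps) + beta

    return normalize

# Toy target-domain activations with a shifted distribution
np.random.seed(0)
target = np.random.normal(loc=5.0, scale=2.0, size=(1000, 8))
normalized = adabn_normalizer(target)(target)
```

After adaptation, the target activations are re-centered and re-scaled, so the downstream layers see statistics comparable to those of the source domain — no target labels are required.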
🚀 Quick start to use BEN or replicate our experiments in 5 minutes!
The details can be found in this folder.
Visit our documentation or video tutorials for installation, tutorials and more.
An NVIDIA GPU is needed for faster inference (less than 1 sec/scan on a GTX 1080 Ti).
Requirements:
- tensorflow-gpu == 1.15.4
- Keras == 2.2.4
- numpy == 1.16
- SimpleITK == 2.0
- opencv-python == 4.1
- scikit-image == 0.16.2
Install dependencies:
git clone https://github.com/yu02019/BEN.git
cd BEN
pip install -r requirement.txt
The target domain data folder should look like this (download data from this repository/Colab, or put your own data here):
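One possible layout, using the folder names from the BEN_DA.py command arguments (illustrative only; adapt it to your own data):

```
target_domain/
├── train_folder/        # scans used to update BEN
├── label_folder/        # masks for the few labeled scans, if any
└── raw_image_folder/    # unlabeled raw target-domain scans (e.g., *.nii / *.nii.gz)
```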
- All the results below can be reproduced via our tutorial notebook.
- New weights will be saved independently for further customized applications.

Modality: T2WI -> EPI

- For this exemplar domain adaptation (DA) task, no label is used (zero-shot).
- From the top row to the third row: raw image, baseline result, BEN's result.

MR scanners with different field strengths: 11.7 T -> 7 T

- For this exemplar DA task, no label is used (zero-shot).
- From the top row to the third row: raw image, baseline result, BEN's result.

Species: Mouse -> Rat

- For this exemplar DA task, only ONE label is used.
- The segmentation results are shown in red; the ground truth is shown in orange.
- From the top row to the fifth row: raw image, zero-shot (0 labels used), fine-tuned (1 label used), BEN's result (1 label used), ground truth.
- (Optional) Simple postprocessing can be applied here, e.g., keeping only the top-K largest connected regions.
- Compared with other methods, this further demonstrates BEN's advantages.
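The optional top-K postprocessing step above can be sketched with scikit-image (already in the requirements); `keep_top_k_regions` is an illustrative helper, not BEN's actual API:

```python
import numpy as np
from skimage import measure

def keep_top_k_regions(mask, k=1):
    """Keep only the k largest connected regions of a binary brain mask."""
    labeled = measure.label(mask > 0)          # label connected regions
    if labeled.max() == 0:                     # empty mask: nothing to filter
        return mask
    sizes = np.bincount(labeled.ravel())[1:]   # voxel count per region (skip background)
    keep = np.argsort(sizes)[::-1][:k] + 1     # labels of the k largest regions
    return np.isin(labeled, keep).astype(mask.dtype)

# Example: a 16-pixel blob and a 4-pixel blob; keep only the largest
mask = np.zeros((10, 10), dtype=np.uint8)
mask[0:4, 0:4] = 1
mask[8:10, 8:10] = 1
cleaned = keep_top_k_regions(mask, k=1)
```

This removes small spurious islands (e.g., skull fragments or noise) while leaving the main brain region untouched.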
Feel free to try BEN on your own data or to deploy it in your preprocessing pipeline. Details can be found in the notebook and the video tutorials. Pretrained weights can be found in dataset_release.
# Update BEN (domain adaptation)
python BEN_DA.py -t train_folder -l label_folder -r raw_image_folder -weight pretrained_weight_path -prefix new_model_name -check check_orientation
# Run inference
python BEN_infer.py -i input_folder -o output_folder -weight model_weight_path -check check_orientation
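To call BEN from an existing Python preprocessing pipeline, the inference command above can be wrapped with `subprocess`; the wrapper names and the orientation value below are illustrative, not part of BEN's API:

```python
import subprocess
import sys

def build_infer_cmd(input_dir, output_dir, weight_path, check_orientation=None):
    """Assemble the BEN_infer.py command line shown above.
    Leave check_orientation as None to keep scans in their original orientation."""
    cmd = [sys.executable, "BEN_infer.py",
           "-i", input_dir, "-o", output_dir, "-weight", weight_path]
    if check_orientation is not None:
        cmd += ["-check", check_orientation]
    return cmd

def run_ben_inference(*args, **kwargs):
    # check=True raises CalledProcessError if BEN exits with an error
    return subprocess.run(build_infer_cmd(*args, **kwargs), check=True)

# Example command lines (not executed here)
cmd_plain = build_infer_cmd("input_folder", "output_folder", "weights.hdf5")
cmd_check = build_infer_cmd("input_folder", "output_folder", "weights.hdf5",
                            check_orientation="mouse")
```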
To further validate BEN’s generalization, we have evaluated BEN on two new external public ex-vivo MRI datasets (rTg4510 mouse: 25 ex-vivo scans, and C57BL/6 mouse: 15 ex-vivo scans). When only one label is used for BEN adaptation/retraining, impressive performance is achieved on both datasets, despite the fact that BEN was originally designed for in-vivo MRI data.
Dataset | Labels used | Description | Automatically generated reports | Video link |
---|---|---|---|---|
rTg4510 mouse | 1 | Ex-vivo scans with obvious distortion and different orientations. | youtube | |
C57BL/6 mouse | 1 | Ex-vivo scans. There is no obvious gap between the brain and the skull borderlines, making the task difficult. | youtube | |
C57BL/6 mouse | 0 (zero-shot) | Ex-vivo scans. The domain gap between ex-vivo MRI data and the in-vivo images in our training set can be so large that it compromises performance; in such cases, we suggest adding several labels to update BEN. | youtube | |
The pretrained weights used in the tutorials can be downloaded from Google Drive.
Usage details can be found in this folder.
Name | Link |
---|---|
AFNI | afni.nimh.nih.gov/afni |
ANTs | stnava.github.io/ANTs/ |
FSL | fsl.fmrib.ox.ac.uk/fsl/fslwiki |
FreeSurfer | freesurfer.net |
SPM | fil.ion.ucl.ac.uk/spm |
Nipype | pypi.org/project/nipype/ |
The details can be found in this folder.
If you find our work / datasets / pretrained models useful for your research, please consider citing:
@article{yu2022generalizable,
title={A generalizable brain extraction net (BEN) for multimodal MRI data from rodents, nonhuman primates, and humans},
author={Yu, Ziqi and Han, Xiaoyang and Xu, Wenjing and Zhang, Jie and Marr, Carsten and Shen, Dinggang and Peng, Tingying and Zhang, Xiao-Yong and Feng, Jianfeng},
journal={Elife},
volume={11},
pages={e81217},
year={2022},
publisher={eLife Sciences Publications Limited}
}
@dataset{yu_ziqi_2022_6844489,
  author    = {Yu, Ziqi and Xu, Wenjing and Zhang, Xiao-Yong},
  title     = {{A longitudinal MRI dataset of young adult C57BL6J mouse brain}},
  month     = jul,
  year      = 2022,
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.6844489},
  url       = {https://doi.org/10.5281/zenodo.6844489}
}
Disclaimer: This toolkit is for research purposes only.