
Merge pull request #80 from neuropoly/nm/create_pypi_package
Create PyPI package
NathanMolinier authored Nov 29, 2024
2 parents 88377f2 + 8e0c2f8 commit d3e3157
Showing 11 changed files with 675 additions and 157 deletions.
3 changes: 3 additions & 0 deletions .gitignore
@@ -173,3 +173,6 @@ cython_debug/
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/

# Extra files
/totalspineseg/models/nnUNet
129 changes: 72 additions & 57 deletions README.md
@@ -33,9 +33,9 @@ TotalSpineSeg uses a hybrid approach that integrates nnU-Net with an iterative a

For comparison, we also trained a single model (`Dataset103`) that outputs individual label values for each vertebra and IVD in a single step.

![Figure 1](https://github.com/user-attachments/assets/7b82d6b8-d584-47ef-8504-fe06962bb82e)
![Figure 1](https://github.com/user-attachments/assets/9017fb8e-bed5-413f-a80f-b123a97f5735)

**Figure 1**: Illustration of the hybrid method for automatic segmentation of spinal structures. (A) MRI image used to train the Step 1 model. (B) The Step 1 model outputs nine classes. (C) Individual IVDs extracted from the output labels. (D) Odd IVDs extracted from the individual IVDs. (E) MRI image and odd IVDs used as inputs to train the Step 2 model, which outputs ten classes. (F) Final segmentation with individual labels for each vertebra and IVD.
**Figure 1**: Illustration of the hybrid method for automatic segmentation of spinal structures. (A) Input MRI image. (B) Step 1 model prediction. (C) Odd IVDs extracted from the Step 1 prediction. (D) Step 2 model prediction. (E) Final segmentation with individual labels for each vertebra and IVD.

## Datasets

@@ -62,44 +62,49 @@ When not available, sacrum segmentations were generated using the [totalsegmenta

1. Open a `bash` terminal in the directory where you want to work.

1. Create the installation directory:
```bash
mkdir TotalSpineSeg
cd TotalSpineSeg
```
2. Create the installation directory:
```bash
mkdir TotalSpineSeg
cd TotalSpineSeg
```

1. Create and activate a virtual environment (highly recommended):
3. Create and activate a virtual environment using one of the following options (highly recommended):
- venv
```bash
python3 -m venv venv
source venv/bin/activate
```
- conda env
```bash
conda create -n myenv python=3.9
conda activate myenv
```

1. Clone and install this repository:
4. Install this repository using one of the following options:
- Git clone (for developers)
> **Note:** If you pull a new version from GitHub, make sure to rerun this command with the flag `--upgrade`
```bash
git clone https://github.com/neuropoly/totalspineseg.git
python3 -m pip install -e totalspineseg
```

1. For CUDA GPU support, install **PyTorch** following the instructions on their [website](https://pytorch.org/). Be sure to add the `--upgrade` flag to your installation command to replace any existing PyTorch installation.
Example:
```bash
python3 -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118 --upgrade
```

1. Set the path to TotalSpineSeg and data folders in the virtual environment:
```bash
mkdir data
export TOTALSPINESEG="$(realpath totalspineseg)"
export TOTALSPINESEG_DATA="$(realpath data)"
echo "export TOTALSPINESEG=\"$TOTALSPINESEG\"" >> venv/bin/activate
echo "export TOTALSPINESEG_DATA=\"$TOTALSPINESEG_DATA\"" >> venv/bin/activate
- PyPI installation (for inference only)
```bash
python3 -m pip install totalspineseg
```

**Note:** If you pull a new version from GitHub, make sure to reinstall the package to apply the updates using the following command:
5. For CUDA GPU support, install **PyTorch** following the instructions on their [website](https://pytorch.org/). Be sure to add the `--upgrade` flag to your installation command to replace any existing PyTorch installation.
Example:
```bash
python3 -m pip install -e $TOTALSPINESEG --upgrade
python3 -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118 --upgrade
```
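A quick way to confirm which PyTorch build ended up installed is a generic import check (this snippet is a standard PyTorch sanity check, not a TotalSpineSeg command):

```shell
# Report whether PyTorch is importable and whether CUDA is visible to it.
python3 - <<'EOF'
import importlib.util

if importlib.util.find_spec("torch") is None:
    print("torch not installed")
else:
    import torch
    print("cuda available:", torch.cuda.is_available())
EOF
```

If this prints `cuda available: False` on a GPU machine, the CPU-only wheel was likely installed and the `--index-url` above should be rechecked.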

6. **OPTIONAL STEP:** Define a folder where weights will be stored:
> By default, weights will be stored in the package under `totalspineseg/models`
```bash
mkdir data
export TOTALSPINESEG_DATA="$(realpath data)"
```
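The fallback behaviour described in this step can be sketched with standard shell parameter expansion (this is an illustration, not project code; the fallback path is an assumption based on the note about `totalspineseg/models`):

```shell
# Minimal sketch: use $TOTALSPINESEG_DATA when it is set, otherwise fall
# back to the package's models directory (path is an assumption).
WEIGHTS_DIR="${TOTALSPINESEG_DATA:-totalspineseg/models}"
echo "weights directory: $WEIGHTS_DIR"
```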

## Training

To train the TotalSpineSeg model, you will need the following hardware specifications:
@@ -109,62 +114,72 @@

Please ensure that your system meets these requirements before proceeding with the training process.

1. Make sure that the `bash` terminal is opened with the virtual environment (if used) activated (using `source <path to installation directory>/venv/bin/activate`).
1. Make sure that the `bash` terminal is opened with the virtual environment activated (see [Installation](#installation)).

1. Ensure training dependencies are installed:
```bash
apt-get install git git-annex jq -y
```
2. Ensure training dependencies are installed:
```bash
apt-get install git git-annex jq -y
```

1. Download the required datasets into `$TOTALSPINESEG_DATA/bids` (make sure you have access to the specified repositories):
```bash
bash "$TOTALSPINESEG"/scripts/download_datasets.sh
```
3. Set the path to TotalSpineSeg and data folders in the virtual environment:
```bash
mkdir data
export TOTALSPINESEG="$(realpath totalspineseg)"
export TOTALSPINESEG_DATA="$(realpath data)"
echo "export TOTALSPINESEG=\"$TOTALSPINESEG\"" >> venv/bin/activate
echo "export TOTALSPINESEG_DATA=\"$TOTALSPINESEG_DATA\"" >> venv/bin/activate
```

1. Temporary step (until all labels are pushed into the repositories) - Download labels into `$TOTALSPINESEG_DATA/bids`:
```bash
curl -L -O https://github.com/neuropoly/totalspineseg/releases/download/labels/labels_iso_bids_0924.zip
unzip -qo labels_iso_bids_0924.zip -d "$TOTALSPINESEG_DATA"
rm labels_iso_bids_0924.zip
```
4. Download the required datasets into `$TOTALSPINESEG_DATA/bids` (make sure you have access to the specified repositories):
```bash
bash "$TOTALSPINESEG"/scripts/download_datasets.sh
```

1. Prepare datasets in nnUNetv2 structure into `$TOTALSPINESEG_DATA/nnUnet`:
```bash
bash "$TOTALSPINESEG"/scripts/prepare_datasets.sh [DATASET_ID] [-noaug]
```
5. Temporary step (until all labels are pushed into the repositories) - Download labels into `$TOTALSPINESEG_DATA/bids`:
```bash
curl -L -O https://github.com/neuropoly/totalspineseg/releases/download/labels/labels_iso_bids_0924.zip
unzip -qo labels_iso_bids_0924.zip -d "$TOTALSPINESEG_DATA"
rm labels_iso_bids_0924.zip
```

6. Prepare datasets in nnUNetv2 structure into `$TOTALSPINESEG_DATA/nnUnet`:
```bash
bash "$TOTALSPINESEG"/scripts/prepare_datasets.sh [DATASET_ID] [-noaug]
```

The script optionally accepts `DATASET_ID` as the first positional argument to specify the dataset to prepare. It can be either 101, 102, 103, or all. If `all` is specified, it will prepare all datasets (101, 102, 103). By default, it will prepare datasets 101 and 102.

Additionally, you can use the `-noaug` parameter to prepare the datasets without data augmentations.

1. Train the model:
```bash
bash "$TOTALSPINESEG"/scripts/train.sh [DATASET_ID [FOLD]]
```
7. Train the model:
```bash
bash "$TOTALSPINESEG"/scripts/train.sh [DATASET_ID [FOLD]]
```
The script optionally accepts `DATASET_ID` as the first positional argument to specify the dataset to train. It can be either 101, 102, 103, or all. If `all` is specified, it will train all datasets (101, 102, 103). By default, it will train datasets 101 and 102.
Additionally, you can specify `FOLD` as the second positional argument to specify the fold. It can be either 0, 1, 2, 3, 4, 5 or all. By default, it will train with fold 0.
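The positional defaults described above follow the usual bash `${n:-default}` pattern; an illustrative standalone sketch (not the script's actual code):

```shell
# Sketch of the positional-argument defaults train.sh describes:
# no arguments means datasets 101 and 102 with fold 0.
set --                          # simulate calling the script with no arguments
DATASET_ID="${1:-101 102}"      # first positional argument, default: 101 and 102
FOLD="${2:-0}"                  # second positional argument, default: fold 0
echo "datasets: $DATASET_ID, fold: $FOLD"
# -> datasets: 101 102, fold: 0
```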

## Inference

1. Make sure that the `bash` terminal is opened with the virtual environment (if used) activated (using `source <path to installation directory>/venv/bin/activate`).
1. Make sure that the `bash` terminal is opened with the virtual environment activated (see [Installation](#installation)).
1. Run the model on a folder containing the images in .nii.gz format, or on a single .nii.gz file:
```bash
totalspineseg INPUT OUTPUT_FOLDER [--step1] [--iso]
```
2. Run the model on a folder containing the images in .nii.gz format, or on a single .nii.gz file:
> If you haven't trained the model, the script will automatically download the pre-trained models from the GitHub release.
```bash
totalspineseg INPUT OUTPUT_FOLDER [--step1] [--iso]
```
This will process the images in INPUT or the single image and save the results in OUTPUT_FOLDER. If you haven't trained the model, the script will automatically download the pre-trained models from the GitHub release.
This will process the images in INPUT or the single image and save the results in OUTPUT_FOLDER.
**Important Note:** By default, the output segmentations are resampled back to the input image space. If you prefer to obtain the outputs in the model's original 1mm isotropic resolution, especially useful for visualization purposes, we strongly recommend using the `--iso` argument.
Additionally, you can use the `--step1` parameter to run only the step 1 model, which outputs a single label for all vertebrae, including the sacrum.
For more options, you can use the `--help` parameter:
```bash
totalspineseg --help
```
```bash
totalspineseg --help
```
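Because `INPUT` may be either a folder or a single `.nii.gz` file, the set of images that would be processed can be sketched as follows (the demo paths are hypothetical, created here only for illustration):

```shell
# Hedged sketch: list the .nii.gz images corresponding to a given INPUT,
# which may be a directory or a single file (paths are hypothetical).
INPUT="demo_images"
mkdir -p "$INPUT"
touch "$INPUT/sub-01_T2w.nii.gz" "$INPUT/sub-02_T2w.nii.gz"

if [ -d "$INPUT" ]; then
    find "$INPUT" -name '*.nii.gz' | sort
else
    echo "$INPUT"
fi
```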
**Output Data Structure:**
21 changes: 8 additions & 13 deletions pyproject.toml
@@ -1,17 +1,13 @@
[project]
name = "totalspineseg"
version = "20241005"
version = "20241129"
requires-python = ">=3.9"
description = "TotalSpineSeg is a tool for automatic instance segmentation and labeling of all vertebrae, intervertebral discs (IVDs), spinal cord, and spinal canal in MRI images."
readme = "README.md"
authors = [
{ name = "Yehuda Warszawer", email = "[email protected]"},
{ name = "Nathan Molinier"},
{ name = "Jan Valosek"},
{ name = "Emanuel Shirbint"},
{ name = "Pierre-Louis Benveniste"},
{ name = "Nathan Molinier", email = "[email protected]"},
{ name = "Anat Achiron"},
{ name = "Arman Eshaghi"},
{ name = "Julien Cohen-Adad"},
]
classifiers = [
@@ -21,9 +17,6 @@ classifiers = [
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Image Recognition",
"Topic :: Scientific/Engineering :: Medical Science Apps.",
"Topic :: Scientific/Engineering :: MRI Images.",
"Topic :: Scientific/Engineering :: Spinal Cord.",
"Topic :: Scientific/Engineering :: Spine.",
]
keywords = [
'deep learning',
@@ -53,18 +46,19 @@ dependencies = [
# https://github.com/MIC-DKFZ/nnUNet/issues/2480
# --verify_dataset_integrity is not working in nnunetv2==2.4.2; update this when fixed
# https://github.com/MIC-DKFZ/nnUNet/issues/2144
"nnunetv2==2.4.2",
"nnunetv2<=2.4.2",
"psutil",
]

[project.urls]
homepage = "https://github.com/neuropoly/totalspineseg"
repository = "https://github.com/neuropoly/totalspineseg"
Dataset101_TotalSpineSeg_step1 = "https://github.com/neuropoly/totalspineseg/releases/download/r20241005/Dataset101_TotalSpineSeg_step1_r20241005.zip"
Dataset102_TotalSpineSeg_step2 = "https://github.com/neuropoly/totalspineseg/releases/download/r20241005/Dataset102_TotalSpineSeg_step2_r20241005.zip"
Dataset101_TotalSpineSeg_step1 = "https://github.com/neuropoly/totalspineseg/releases/download/r20241115/Dataset101_TotalSpineSeg_step1_r20241115.zip"
Dataset102_TotalSpineSeg_step2 = "https://github.com/neuropoly/totalspineseg/releases/download/r20241115/Dataset102_TotalSpineSeg_step2_r20241115.zip"

[project.scripts]
totalspineseg = "totalspineseg.inference:main"
totalspineseg_init = "totalspineseg.init_inference:main"
totalspineseg_cpdir = "totalspineseg.utils.cpdir:main"
totalspineseg_fill_canal = "totalspineseg.utils.fill_canal:main"
totalspineseg_augment = "totalspineseg.utils.augment:main"
@@ -81,6 +75,7 @@ totalspineseg_extract_soft = "totalspineseg.utils.extract_soft:main"
totalspineseg_extract_levels = "totalspineseg.utils.extract_levels:main"
totalspineseg_extract_alternate = "totalspineseg.utils.extract_alternate:main"
totalspineseg_install_weights = "totalspineseg.utils.install_weights:main"
totalspineseg_predict_nnunet = "totalspineseg.utils.predict_nnunet:main"

[build-system]
requires = ["pip>=23", "setuptools>=67"]
@@ -90,4 +85,4 @@ build-backend = "setuptools.build_meta"
include-package-data = true

[tool.setuptools.package-data]
'totalspineseg' = ['resources/**.json']
'totalspineseg' = ['resources/**.json']
8 changes: 6 additions & 2 deletions scripts/train.sh
@@ -63,8 +63,12 @@ export nnUNet_results="$TOTALSPINESEG_DATA"/nnUNet/results
export nnUNet_exports="$TOTALSPINESEG_DATA"/nnUNet/exports

nnUNetTrainer=${3:-nnUNetTrainer_DASegOrd0_NoMirroring}
nnUNetPlanner=${4:-nnUNetPlannerResEncL}
nnUNetPlans=${5:-nnUNetResEncUNetLPlans}
nnUNetPlanner=${4:-ExperimentPlanner}
# Note on nnUNetPlans_small configuration:
# To train with a small patch size, verify that the nnUNetPlans_small.json file
# in $nnUNet_preprocessed/Dataset10[1,2]_TotalSpineSeg_step[1,2] matches the version provided in the release.
# Make any necessary updates to this file before starting the training process.
nnUNetPlans=${5:-nnUNetPlans_small}
configuration=3d_fullres
data_identifier=nnUNetPlans_3d_fullres

7 changes: 6 additions & 1 deletion totalspineseg/__init__.py
@@ -13,4 +13,9 @@
from .utils.reorient_canonical import reorient_canonical_mp
from .utils.resample import resample, resample_mp
from .utils.transform_seg2image import transform_seg2image, transform_seg2image_mp
from .utils.install_weights import install_weights
from .utils.install_weights import install_weights
from .utils.predict_nnunet import predict_nnunet
from .utils.utils import ZIP_URLS, VERSION
from . import models

__version__ = VERSION
