Lumbar rootlets - model training on Draw Tube labels #67
The training of both models has finished. TL;DR: the predictions of the binary model (Dataset302_LumbarRootlets) on unseen subjects are reasonable, while the predictions of the semantic model (Dataset301_LumbarRootlets) are not good.

Semantic (level-specific) model (Dataset301_LumbarRootlets) training log (excerpt):

2024-07-21 01:07:55.046880: Current learning rate: 5e-05
2024-07-21 01:09:39.849596: train_loss -0.8012
2024-07-21 01:09:39.849765: val_loss -0.3613
2024-07-21 01:09:39.849864: Pseudo dice [0.0, 0.2224, 0.4656, 0.2038, 0.0393, 0.1114, 0.1499, 0.1689, 0.0]
2024-07-21 01:09:39.849931: Epoch time: 104.8 s
2024-07-21 01:09:41.051663:
2024-07-21 01:09:41.051810: Epoch 998
2024-07-21 01:09:41.051903: Current learning rate: 4e-05
2024-07-21 01:11:26.669583: train_loss -0.8066
2024-07-21 01:11:26.669759: val_loss -0.3505
2024-07-21 01:11:26.669865: Pseudo dice [0.0, 0.2126, 0.4824, 0.1682, 0.0279, 0.0927, 0.1335, 0.1464, 0.0]
2024-07-21 01:11:26.669945: Epoch time: 105.62 s
2024-07-21 01:11:27.936484:
2024-07-21 01:11:27.936840: Epoch 999
2024-07-21 01:11:27.937007: Current learning rate: 2e-05
2024-07-21 01:13:13.107770: train_loss -0.7985
2024-07-21 01:13:13.107922: val_loss -0.3357
2024-07-21 01:13:13.108025: Pseudo dice [0.0, 0.1974, 0.4752, 0.1902, 0.0334, 0.1057, 0.1582, 0.1612, 0.0]
2024-07-21 01:13:13.108104: Epoch time: 105.17 s
2024-07-21 01:13:15.006795: Training done.
2024-07-21 01:13:15.025028: Using splits from existing split file: /home/GRAMES.POLYMTL.CA/p118175/data/nnunetv2/nnUNet_preprocessed/Dataset301_LumbarRootlets/splits_final.json
2024-07-21 01:13:15.025259: The split file contains 5 splits.
2024-07-21 01:13:15.025300: Desired fold for training: 0
2024-07-21 01:13:15.025336: This split has 4 training and 2 validation cases.
2024-07-21 01:13:15.025455: predicting sub-CTS10_ses-SPanat_T2w_001
2024-07-21 01:13:15.026371: sub-CTS10_ses-SPanat_T2w_001, shape torch.Size([1, 192, 372, 1023]), rank 0
2024-07-21 01:15:05.478443: predicting sub-CTS15_ses-SPpre_T2w_001
2024-07-21 01:15:05.497945: sub-CTS15_ses-SPpre_T2w_001, shape torch.Size([1, 192, 372, 1023]), rank 0
2024-07-21 01:16:55.437292: Validation complete
2024-07-21 01:16:55.437382: Mean Validation Dice: 0.12612002796626479

The predictions of the semantic model on unseen subjects are not good. The rootlets are predicted only in the caudal part of the FOV.

Binary (all rootlets set to 1) model (Dataset302_LumbarRootlets) training log (excerpt):

2024-07-21 01:44:41.538921: Epoch 998
2024-07-21 01:44:41.539023: Current learning rate: 4e-05
2024-07-21 01:45:34.577191: train_loss -0.8987
2024-07-21 01:45:34.577362: val_loss -0.3704
2024-07-21 01:45:34.577416: Pseudo dice [0.3858]
2024-07-21 01:45:34.577475: Epoch time: 53.04 s
2024-07-21 01:45:35.760479:
2024-07-21 01:45:35.760615: Epoch 999
2024-07-21 01:45:35.760712: Current learning rate: 2e-05
2024-07-21 01:46:28.753630: train_loss -0.8903
2024-07-21 01:46:28.753798: val_loss -0.3714
2024-07-21 01:46:28.753851: Pseudo dice [0.3877]
2024-07-21 01:46:28.753914: Epoch time: 52.99 s
2024-07-21 01:46:30.514437: Training done.
2024-07-21 01:46:30.529201: Using splits from existing split file: /home/GRAMES.POLYMTL.CA/p118175/data/nnunetv2/nnUNet_preprocessed/Dataset302_LumbarRootlets/splits_final.json
2024-07-21 01:46:30.529334: The split file contains 5 splits.
2024-07-21 01:46:30.529372: Desired fold for training: 0
2024-07-21 01:46:30.529406: This split has 4 training and 2 validation cases.
2024-07-21 01:46:30.529503: predicting sub-CTS10_ses-SPanat_T2w_001
2024-07-21 01:46:30.530237: sub-CTS10_ses-SPanat_T2w_001, shape torch.Size([1, 192, 372, 1023]), rank 0
2024-07-21 01:47:25.381866: predicting sub-CTS15_ses-SPpre_T2w_001
2024-07-21 01:47:25.399373: sub-CTS15_ses-SPpre_T2w_001, shape torch.Size([1, 192, 372, 1023]), rank 0
2024-07-21 01:48:20.876391: Validation complete
2024-07-21 01:48:20.876483: Mean Validation Dice: 0.40915237629260454

The predictions of the binary model on unseen subjects are reasonable; see the example on two testing subjects below. The comparison shows predictions obtained using the older model (202) and the new binary model (302).
Interestingly, some rootlets predicted by the older model (202) were not predicted by the new model (302), and vice versa.
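For reference, the Mean Validation Dice reported in the logs above (0.126 for the semantic model vs. 0.409 for the binary model) is the overlap metric nnUNet averages over the validation cases. A minimal sketch of how such a Dice score can be computed for a binary rootlets mask, assuming nibabel is available and the prediction and ground truth share the same grid (the file names are hypothetical):

```python
import nibabel as nib
import numpy as np

def dice(pred_path, gt_path):
    """Dice coefficient between two binary NIfTI masks on the same grid."""
    pred = nib.load(pred_path).get_fdata() > 0
    gt = nib.load(gt_path).get_fdata() > 0
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom > 0 else np.nan

# Hypothetical file names -- adjust to the actual prediction/label paths
print(dice("sub-CTS10_pred.nii.gz", "sub-CTS10_label.nii.gz"))
```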
Next steps: tweak the model training parameters to improve the model.
Looking at the nnUNetPlans.json for the lumbar dataset (the axis order here is: SI, AP, RL):

"3d_fullres": {
"batch_size": 2,
"patch_size": [
64,
112,
320
],
"median_image_size_in_voxels": [
192.0,
372.0,
1023.0
],
"spacing": [
0.5,
0.29296875,
0.29296875
],

The patch size covers only a small fraction of the median image size. For example, compare with the plans of the older rootlets model:

"3d_fullres": {
"batch_size": 2,
"patch_size": [
224,
224,
48
],
"median_image_size_in_voxels": [
320.0,
320.0,
64.0
],
"spacing": [
0.800000011920929,
0.800000011920929,
0.7999992370605469
],

This brings me to the idea of cropping the images around the spinal cord (SC) before training. The contrast-agnostic model seems to work well on these images, so the SC segmentation needed for the cropping could be obtained easily.
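As a sketch of the idea only (not the crop_lumbar_data.sh script used in the next comment), the cropping amounts to taking the bounding box of the SC segmentation plus a margin; the margin value and file handling below are assumptions:

```python
import nibabel as nib
import numpy as np
from nibabel.affines import apply_affine

def crop_around_sc(image_path, sc_seg_path, out_path, margin=30):
    """Crop an image to the bounding box of a SC segmentation, dilated by
    `margin` voxels in each direction (clamped to the image extent)."""
    img = nib.load(image_path)
    seg = nib.load(sc_seg_path).get_fdata() > 0

    coords = np.argwhere(seg)
    mins = np.maximum(coords.min(axis=0) - margin, 0)
    maxs = np.minimum(coords.max(axis=0) + margin + 1, seg.shape)

    data = img.get_fdata()[mins[0]:maxs[0], mins[1]:maxs[1], mins[2]:maxs[2]]

    # Shift the affine translation so the cropped volume keeps its position
    # in physical space.
    affine = img.affine.copy()
    affine[:3, 3] = apply_affine(img.affine, mins)

    nib.save(nib.Nifti1Image(data, affine), out_path)
```

The same bounding box would have to be applied to the rootlets label of each subject so that images and labels stay aligned.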
Training on the cropped images has started: binary model (Dataset312_LumbarRootlets).

Crop data around the spinal cord:

cd ~/code/model-spinal-rootlets/
git fetch
git checkout jv/lumbar_rootlets
cd $nnUNet_raw
cp -r Dataset302_LumbarRootlets Dataset312_LumbarRootlets
cd Dataset312_LumbarRootlets/imagesTr
bash ~/code/model-spinal-rootlets/training/crop_lumbar_data.sh

Note that I had to change nnUNetPlans.json:

"3d_fullres": {
"batch_size": 2,
"patch_size": [
128,
128,
128
],
"median_image_size_in_voxels": [
192.0,
161.0,
166.5
],
"spacing": [
0.5,
0.29296875,
0.29296875
],

Start training:

bash ~/code/model-spinal-rootlets/training/run_training.sh 1 312 Dataset312_LumbarRootlets
Okay, training on the cropped images is done (binary model, Dataset312_LumbarRootlets).

Training log (excerpt):

2024-07-25 07:57:28.071485: Current learning rate: 5e-05
2024-07-25 07:58:12.384417: train_loss -0.9216
2024-07-25 07:58:12.384571: val_loss -0.3303
2024-07-25 07:58:12.384622: Pseudo dice [0.3783]
2024-07-25 07:58:12.384680: Epoch time: 44.31 s
2024-07-25 07:58:13.608948:
2024-07-25 07:58:13.609235: Epoch 998
2024-07-25 07:58:13.609570: Current learning rate: 4e-05
2024-07-25 07:58:57.742579: train_loss -0.9236
2024-07-25 07:58:57.742764: val_loss -0.3314
2024-07-25 07:58:57.742818: Pseudo dice [0.3775]
2024-07-25 07:58:57.742882: Epoch time: 44.13 s
2024-07-25 07:58:58.964358:
2024-07-25 07:58:58.964548: Epoch 999
2024-07-25 07:58:58.964673: Current learning rate: 2e-05
2024-07-25 07:59:43.275873: train_loss -0.9214
2024-07-25 07:59:43.276055: val_loss -0.3291
2024-07-25 07:59:43.276127: Pseudo dice [0.3773]
2024-07-25 07:59:43.276187: Epoch time: 44.31 s
2024-07-25 07:59:45.474522: Training done.
2024-07-25 07:59:45.489014: Using splits from existing split file: /home/GRAMES.POLYMTL.CA/p118175/data/nnunetv2/nnUNet_preprocessed/Dataset312_LumbarRootlets/splits_final.json
2024-07-25 07:59:45.489179: The split file contains 5 splits.
2024-07-25 07:59:45.489222: Desired fold for training: 0
2024-07-25 07:59:45.489260: This split has 4 training and 2 validation cases.
2024-07-25 07:59:45.489359: predicting sub-CTS10_ses-SPanat_T2w_001
2024-07-25 07:59:45.490096: sub-CTS10_ses-SPanat_T2w_001, shape torch.Size([1, 192, 163, 163]), rank 0
2024-07-25 08:00:06.051738: predicting sub-CTS15_ses-SPpre_T2w_001
2024-07-25 08:00:06.053505: sub-CTS15_ses-SPpre_T2w_001, shape torch.Size([1, 192, 160, 168]), rank 0
2024-07-25 08:00:12.400051: Validation complete
2024-07-25 08:00:12.400232: Mean Validation Dice: 0.36048916255847474

The Mean Validation Dice is 0.360, which is lower than 0.409 for the non-cropped model (binary model Dataset302_LumbarRootlets).

[Figure: predictions on sub-CTS03_ses-SPpre_acq-zoomit_T2w.nii.gz; light blue - model trained on non-cropped images (Dataset302_LumbarRootlets)]

Preliminary conclusion: training on images cropped around the SC does not increase the segmentation performance. On the contrary, it introduces a dependency on the SC segmentation used for cropping, which is a disadvantage.
Thank you for testing the model on additional images, @RaphaSchl! The 3D rendering is useful here! I had a discussion with @naga-karthik, and he suggested increasing the patch size.

Commands:

cd $nnUNet_preprocessed
cp -r Dataset312_LumbarRootlets Dataset322_LumbarRootlets

Then modify the patch_size manually in nnUNetPlans.json:

"3d_fullres": {
"data_identifier": "nnUNetPlans_3d_fullres",
"preprocessor_name": "DefaultPreprocessor",
"batch_size": 2,
"patch_size": [
192,
160,
96
],

(192: SI, 160: AP, 96: RL)

Manually change the dataset name as well, then start training:

bash ~/code/model-spinal-rootlets/training/run_training.sh 1 322 Dataset322_LumbarRootlets
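The two manual edits above could also be scripted. A minimal sketch, assuming the copied Dataset322_LumbarRootlets preprocessed folder from the cp command above; the nesting under "configurations" is the usual nnUNetPlans.json layout, but the path and keys should be double-checked against the actual file:

```python
import json
from pathlib import Path

# Assumed path of the copied preprocessed dataset (see the cp command above)
plans_path = Path("Dataset322_LumbarRootlets/nnUNetPlans.json")

plans = json.loads(plans_path.read_text())

# Point the plans at the new dataset name and enlarge the 3d_fullres patch size.
plans["dataset_name"] = "Dataset322_LumbarRootlets"
plans["configurations"]["3d_fullres"]["patch_size"] = [192, 160, 96]

plans_path.write_text(json.dumps(plans, indent=4))
```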
Okay, the training of the model with the larger patch size (Dataset322_LumbarRootlets) is done.

[Figure: comparison with the non-cropped model Dataset302_LumbarRootlets]
Notes about running the models on other testing images (done by @RaphaSchl -- thank you!):

For CTS03:
For CTS09 (remarkable!):
For CTS13:
For CTS17:
This issue summarizes model training on T2w lumbar data with relabeled rootlets using the 3D Slicer Draw Tube module. This is a follow-up of #48.
0. Data overview
We have 6 subjects with the following labels:
1. Preparing nnUNet folders
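The original commands for this step were in a collapsed details block that is not preserved in this export. As a hedged sketch of what the step involves, nnUNetv2 expects an imagesTr/labelsTr layout plus a dataset.json; the paths, case names, and label entries below are illustrative only:

```python
import json
from pathlib import Path

# Illustrative location -- in practice this lives under $nnUNet_raw
ds = Path("nnUNet_raw/Dataset301_LumbarRootlets")
(ds / "imagesTr").mkdir(parents=True, exist_ok=True)
(ds / "labelsTr").mkdir(parents=True, exist_ok=True)

# nnUNetv2 naming convention: images carry a channel suffix (_0000 for the
# single T2w channel), labels use the same case name without it, e.g.
#   imagesTr/sub-CTS10_ses-SPanat_T2w_001_0000.nii.gz
#   labelsTr/sub-CTS10_ses-SPanat_T2w_001.nii.gz

# Minimal dataset.json; the label names/values are placeholders, not the
# actual lumbar rootlet levels.
dataset_json = {
    "channel_names": {"0": "T2w"},
    "labels": {"background": 0, "rootlets": 1},
    "numTraining": 6,
    "file_ending": ".nii.gz",
}
(ds / "dataset.json").write_text(json.dumps(dataset_json, indent=4))
```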
2. Changing label values to be consecutive (this is required by nnUNet)
Original values
Removing label 18 (present only for two subjects) for now:
Recoding using recode_nii.py:
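The recode_nii.py call itself is not shown in this export; the following is only a sketch of the recoding step, mapping non-consecutive label values to consecutive integers (the mapping dictionary is a placeholder, not the actual lumbar-level mapping):

```python
import nibabel as nib
import numpy as np

# Placeholder mapping: original (non-consecutive) value -> consecutive value
MAPPING = {0: 0, 11: 1, 12: 2, 13: 3}

def recode(label_path, out_path, mapping=MAPPING):
    """Rewrite a label NIfTI so that its values are consecutive integers."""
    img = nib.load(label_path)
    data = np.asarray(img.dataobj).astype(np.int16)
    out = np.zeros_like(data)
    for old, new in mapping.items():
        out[data == old] = new
    recoded = nib.Nifti1Image(out, img.affine, img.header)
    recoded.set_data_dtype(np.int16)
    nib.save(recoded, out_path)
```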
3. Training
fold1, 4 training and 2 validation images.

Semantic (level-specific) model: Dataset301_LumbarRootlets

Binary model (all rootlets set to 1): Dataset302_LumbarRootlets
Binarize labels and modify dataset.json:
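The actual binarization command is not preserved here; a minimal sketch of the step, assuming nibabel (paths are illustrative):

```python
import nibabel as nib
import numpy as np
from pathlib import Path

# Paths are illustrative -- adjust to $nnUNet_raw/Dataset302_LumbarRootlets
for label_path in Path("Dataset302_LumbarRootlets/labelsTr").glob("*.nii.gz"):
    img = nib.load(label_path)
    binary = (np.asarray(img.dataobj) > 0).astype(np.uint8)  # all rootlets -> 1
    out = nib.Nifti1Image(binary, img.affine, img.header)
    out.set_data_dtype(np.uint8)
    nib.save(out, label_path)  # overwrite in place
```

The labels field of dataset.json then has to list only the background and a single rootlet class, e.g. {"background": 0, "rootlets": 1}.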