[PW41] Add MRSegmentator Model #90
base: main
Conversation
Great work!!
I suggested some minor changes. I'll update SegDB, and once we have that we can continue with the test routine. Thx @FJDorfner, very clean and to the point :)
{
    "type": "Segmentation",
    "classes": [
        "SPLEEN",
I'll add SPINE to SegDB. I can look up the codes, but feel free to add SCT codes if you already have them ;))
@FJDorfner We're adding a quite broad label here; can you explain whether the model delineates the entire spine or only the bone structure of the spine?
Hi @LennyN95, the model segments the bodies of the vertebrae individually. The disks in between are not included in the label. This would be the yellow label in the attached screenshot, as a visual :)
"label": "Input Image", | ||
"description": "The MR or CT scan of a patient (Thorax, Abdomen and Pelvis).", | ||
"format": "DICOM", | ||
"modality": "MRI|CT", |
- TODO @LennyN95: Add support for multi-modality inputs. Dash separation is a good idea and in line with DTQ syntax.
@FJDorfner since all comments from the previous review are now implemented, we can continue with the testing procedure.
@LennyN95 Thanks! I will continue with the testing.
Please note, we updated our base image. All mhub dependencies are now installed in a virtual environment under /app/.venv. We also simplified our test routine. Sample and reference data now have to be uploaded to Zenodo and provided in a mhub.toml file at the project root. How to create and provide these sample data is explained in the updated testing phase article of our documentation. Under doi.org/10.5281/zenodo.13785615 we provide sample data as a reference.
@LennyN95 Thanks for letting me know about the mhub update. I am currently trying to adapt the configuration to these changes in order to complete the PR. Unfortunately, running the model is not working at the moment. The DicomImporter step comes up empty even though the correct paths are configured. This is what the output looks like:

Start DicomImporter
sorting dicom data
0it [00:00, ?it/s]
. Instance [/app/data/_global]

When I run the container interactively, I can run `dicomsort -k -u /app/data/input_data /app/data/sorted_data/%SeriesInstanceUID/dicom/%SOPInstanceUID.dcm` (after activating the venv at /app/.venv) and it successfully finds and sorts the files. Do you have any idea what the problem might be here? Thanks!
RUN buildutils/import_mhub_model.sh mrsegmentator ${MHUB_MODELS_REPO}

# Install MRSegmentator
RUN uv pip install -p /app/.venv mrsegmentator
FYI, uv should use .venv by default.
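A small illustration of that default (a sketch; it assumes the working directory is /app, and the package name simply mirrors the install line above):

```
# with a .venv in the working directory (here /app/.venv), uv targets it by default
uv pip install mrsegmentator

# which should be equivalent to the explicit form used in the Dockerfile
uv pip install -p /app/.venv mrsegmentator
```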
If you want to make sure there's no dependency conflict between mhubio and mrsegmentator, you can use uv to create a custom venv and write a small CLI wrapper running in that venv that can be called from within your Runner IO Module via self.subprocess.
Example:
https://github.com/MHubAI/models/blob/main/models/pyradiomics/dockerfiles/Dockerfile#L3-L8
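A rough sketch of that pattern (the venv path is illustrative; only the uv commands and the -i flag mirror what already appears in this PR):

```
# create an isolated venv so mrsegmentator's dependencies never touch mhubio's /app/.venv
uv venv /app/mrseg_venv
uv pip install -p /app/mrseg_venv mrsegmentator

# the Runner IO Module can then call the tool through that venv's own console script,
# e.g. by passing a command along these lines to self.subprocess(...)
/app/mrseg_venv/bin/mrsegmentator -i /app/data/image.nii.gz
```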
Thanks! I don't think there is a dependency conflict, as the container and dicomsort work in interactive mode. Is there anything that I need to do to activate the .venv? When I run the container interactively dicomsort only works after activating the .venv.
RUN touch .temp_image.nii.gz
RUN mrsegmentator -i .temp_image.nii.gz; exit 0
RUN rm .temp_image.nii.gz
Could this be the issue? I am not sure if mrsegmentator is available; try uv run mrsegmentator if it's installed in the .venv.
I don't think this is the issue; these lines seem to run fine when building the container, and no error message is produced. The main problem is that the dicomsort module does not seem to find any data. I will still revise it to use uv run mrsegmentator.
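For reference, a sketch of that revision against the Dockerfile lines quoted above, assuming uv run picks up the /app/.venv environment at build time:

```
RUN touch .temp_image.nii.gz
# run once on a dummy file (presumably to pre-fetch the model weights); the command is
# expected to fail on the empty file, hence the trailing "exit 0"
RUN uv run mrsegmentator -i .temp_image.nii.gz; exit 0
RUN rm .temp_image.nii.gz
```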
@LennyN95 I have tried to troubleshoot this some more and found that running: and then, inside, running the entrypoint command as defined in the Dockerfile: works perfectly fine. Whereas when I run: it doesn't work (see attached screenshot). Edit: Edit2:
Thank you @LennyN95! The PR should now be compliant with the updated submission and testing procedure.
Hey @FJDorfner, oh... yes, all "inside the container" commands need to be after the image name. However, I see you specify one extra CLI argument
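To illustrate the ordering rule (the image name, mount paths, and the trailing placeholder argument are purely illustrative, not the exact command from this PR):

```
# anything placed after the image name is passed to the container's entrypoint
docker run --rm -it --gpus all \
  -v /path/to/dicom:/app/data/input_data \
  mhubai/mrsegmentator:latest <extra-entrypoint-argument>

# the same argument placed before the image name would be parsed as a docker option
# instead, so the entrypoint never receives it
```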
Unfortunately,
Can you identify where exactly MRSegmentator uses this? Then we can have a look together. We ran into a similar issue with nnUNetv2 and ended up providing a patch PR to allow single-process inference. So there is likely a way to make it work: in the ideal scenario you can set something like num_worker=0; at worst it might require some changes.
Thanks! MRSegmentator is based upon nnUNetv2; the multiprocessing happens in there, I think. What did you end up doing to get nnUNetv2 working? I think this would likely also fix the problem here.
We have an open PR with a patch here: MIC-DKFZ/nnUNet#2614. However, this is based on the latest version, which as of now is
Thank you for investigating and already opening a PR with nnUNetv2. Currently, MRSegmentator is based on version nnunetv2==2.2.1. For now we would suggest using the current fast configuration, which uses the split_level 1 flag, as the default configuration. In our tests this has very little impact on performance and runs perfectly without any changes to shm-size.
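As a hedged illustration of that fast configuration (only the -i flag appears elsewhere in this PR; treat the exact --split_level spelling as an assumption about the MRSegmentator CLI):

```
# run inference with the input split into parts (split_level 1) to reduce memory pressure,
# which avoids having to raise docker's default shm-size
mrsegmentator -i /app/data/image.nii.gz --split_level 1
```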
This is great, thx @FJDorfner. Then we are all set to move on with the testing?
Thanks @LennyN95! Yes, we are all set to move on with the testing. I have updated the Zenodo upload to accurately reflect the new configuration.
Pull Request to add the MRSegmentator model to mhub.