
Train a model to detect spinal cord centerline #29

Closed

jcohenadad opened this issue Jan 10, 2023 · 4 comments

Comments

@jcohenadad
Member

Given the importance of cropping the volume for training/inference, a possible strategy would be to train a model to detect the centerline that is specific to these images.

@naga-karthik
Member

Based on a brief (water cooler) conversation with @valosekj, here's a proof of concept to test the feasibility of this idea. We could formulate this as a segmentation problem and have the model output the spinal cord (SC) centerlines as segmentation NIfTIs.

Rationale - We already get decent outputs from sct_get_centerline as binary NIfTI files containing discretized SC centerlines. Moreover, these have the same shape as the input image, which sets this up conveniently as a segmentation problem. Hence, the input to the model will be a T2w sagittal image (initially), and it will be trained to output a binarized prediction of the centerline.
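To make this concrete, here's a minimal sketch of the segmentation formulation, assuming paired T2w volumes and sct_get_centerline masks are already on disk (the file names, network, and hyperparameters are illustrative, and MONAI is used purely for convenience):

```python
import nibabel as nib
import numpy as np
import torch
from monai.losses import DiceLoss
from monai.networks.nets import UNet

def load_pair(img_path, ctl_path):
    """Load a T2w volume and its binary centerline mask as (1, D, H, W) tensors."""
    img = nib.load(img_path).get_fdata(dtype=np.float32)
    ctl = nib.load(ctl_path).get_fdata(dtype=np.float32)
    img = (img - img.mean()) / (img.std() + 1e-8)  # simple z-score normalization
    return torch.from_numpy(img)[None], torch.from_numpy(ctl)[None]

# 3D U-Net with a single output channel for the binary centerline mask.
model = UNet(spatial_dims=3, in_channels=1, out_channels=1,
             channels=(16, 32, 64, 128), strides=(2, 2, 2))
loss_fn = DiceLoss(sigmoid=True)  # see the caveat below about Dice on thin targets
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a single (image, centerline) pair; the volume is assumed
# already cropped/padded so each spatial dimension is a multiple of 8.
img, ctl = load_pair("sub-01_T2w.nii.gz", "sub-01_T2w_centerline.nii.gz")
pred = model(img[None])            # add batch dim -> (1, 1, D, H, W)
loss = loss_fn(pred, ctl[None])
loss.backward()
optimizer.step()
```

One caveat worth flagging up front: the centerline occupies a vanishingly small fraction of voxels, so a dilated label or a distance/heatmap target may train more stably than plain Dice on the raw binary centerline.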

Implementation details that need to be ironed out:

  1. Usually, the input image is cropped to a manageable size using the (dilated) centerline or SC mask as the reference. How would the input be cropped if the goal itself is to predict the centerline? (See the cropping sketch after this list.)
  2. Should orientations other than sagittal be considered? If so, how? What additional information can they provide?
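For reference, the existing cropping step described in point 1 boils down to something like the following (a sketch; the file names and dilation radius are assumptions):

```python
import nibabel as nib
import numpy as np
from scipy.ndimage import binary_dilation

def crop_to_mask(img_path, mask_path, dilation_iters=10):
    """Crop a volume to the bounding box of a dilated reference mask."""
    img = nib.load(img_path).get_fdata()
    mask = nib.load(mask_path).get_fdata() > 0.5
    mask = binary_dilation(mask, iterations=dilation_iters)  # grow mask to keep context
    coords = np.argwhere(mask)
    lo = coords.min(axis=0)
    hi = coords.max(axis=0) + 1  # exclusive upper bound
    return img[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

# At inference time for a centerline model there is no reference mask yet,
# which is exactly the chicken-and-egg problem in point 1. One common
# workaround is a two-stage scheme: a coarse pass on a downsampled full-FOV
# volume to localize the cord, then this crop, then a fine pass at full
# resolution.
```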

Alternative - If the above idea does not work for some reason, here's an alternative: instead of posing it as a segmentation problem, pose it as a regression problem. This works because sct_get_centerline also outputs a .csv file containing the voxel coordinates of the centerline along with the binarized NIfTI. A model would then be trained to regress these coordinates (3 values) for each slice in the sagittal image (hoping that the result would be continuous). A centerline could then be constructed from the predicted voxel coordinates.
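Here's a minimal sketch of that regression formulation (the CSV column layout, the per-slice pairing, and the tiny network are assumptions for illustration, not sct_get_centerline's documented output format):

```python
import numpy as np
import pandas as pd
import torch
import torch.nn as nn

# Assumed layout: one row per slice, three voxel coordinates per row.
coords = torch.from_numpy(
    pd.read_csv("sub-01_T2w_centerline.csv").to_numpy(dtype=np.float32)
)

class SliceRegressor(nn.Module):
    """Tiny 2D CNN mapping a single slice to 3 centerline coordinates."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 3)  # regress (x, y, z) for this slice

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = SliceRegressor()
loss_fn = nn.SmoothL1Loss()  # more robust than MSE to the odd outlier coordinate
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Placeholder batch standing in for the extracted 2D slices of one volume.
slices = torch.randn(coords.shape[0], 1, 64, 64)
loss = loss_fn(model(slices), coords)
loss.backward()
optimizer.step()
```

A post-hoc smoothing step (e.g., fitting a spline through the predicted per-slice coordinates) could then enforce the continuity hoped for above when constructing the final centerline.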

This is a rough sketch of what I had in mind; any suggestions are welcome!

@valosekj
Member

Also relevant: ivadomed/canproco#7

@jcohenadad
Member Author

> Should orientations other than sagittal be considered? If so, how? What additional information can they provide?

I'm not sure I understand that part: at what level are you planning to enforce this orientation? Usually a specific orientation is not required (and should not be).
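For what it's worth, one common pattern for keeping the model orientation-agnostic (a sketch of a general approach, not this repo's convention) is to reorient to a canonical orientation at load time and map predictions back afterwards, so nothing is required of the user's input:

```python
import nibabel as nib

img = nib.load("sub-01_T2w.nii.gz")         # acquired in any orientation
canonical = nib.as_closest_canonical(img)   # reorient to RAS+ at load time
print(nib.aff2axcodes(img.affine), "->", nib.aff2axcodes(canonical.affine))
# The model would run on canonical.get_fdata(); the predicted mask is then
# mapped back to the original orientation before saving, so the output
# matches the input image space.
```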

@naga-karthik
Member

This is answered in this issue (comment). It would be great if you could follow up with your latest thoughts there instead!

As a result, closing this issue here to avoid duplication.
