Extend the model for ventral rootlets #30

Closed · valosekj opened this issue Jan 15, 2024 · 4 comments

@valosekj (Member) commented Jan 15, 2024

Context


Currently, the rootlet model segments only dorsal rootlets: it is nnUNet-based and segments cervical dorsal rootlets (C2-C8, i.e., 7 classes).
It would be great to extend the model to segment ventral rootlets as well:

[image]

Steps:

  1. Identify T2w images with clearly visible ventral rootlets.

Possible open-source datasets:

Possible internal datasets:

  • canproco - cervical spine, 0.8x0.5x0.5 mm
  • whole-spine - whole spine, iso 1 mm
  • marseille-rootlets - cervical spine, iso 0.8 mm
  • twh-rootlets - cervical spine, 0.4x0.3x0.4 mm
  2. Run prediction using the current M5 fold_all model to segment dorsal rootlets.
  3. Manually segment ventral rootlets (labeling tutorial here). As with the dorsal rootlets, the ventral rootlet segmentation should be level-specific (e.g., "2" for C2, "3" for C3). Point for discussion: do we want the same or different classes (voxel values) for dorsal and ventral rootlets? Different classes would double the number of classes and thus increase the model complexity. Alternatively, dorsal and ventral (and possibly also right and left) rootlets could be separated in post-processing; see the sketch after this list.
  4. Train an nnUNet/MONAI model to segment both dorsal and ventral rootlets.
  5. Compare the performance of the nnUNet/MONAI model segmenting both dorsal and ventral rootlets against the current nnUNet model segmenting only dorsal rootlets (see the per-level Dice sketch below).
  6. Evaluate the variability in the S-I positioning of the rostral/caudal rootlets across subjects, between dorsal/ventral and right/left (see the extent-measurement sketch below).
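
If dorsal and ventral rootlets end up sharing the same level-specific classes (the single-class option discussed in step 3), they could be separated afterwards. Below is a minimal post-processing sketch, assuming the rootlet labels and a spinal cord segmentation live on the same grid and the volume has been reoriented so that the second voxel axis runs along A-P; the file names and the centroid-split heuristic are assumptions, not the project's validated method.

```python
# Hypothetical post-processing sketch (not the project's method): split a
# level-specific rootlet segmentation into dorsal and ventral parts using the
# spinal cord centroid on each axial slice.
import nibabel as nib
import numpy as np

rootlets_img = nib.load("sub-001_T2w_label-rootlets.nii.gz")   # 2=C2, ..., 8=C8
cord_img = nib.load("sub-001_T2w_label-SC_seg.nii.gz")         # binary cord seg
rootlets = rootlets_img.get_fdata()
cord = cord_img.get_fdata() > 0.5

# Assumption: axis 0 = left-right, axis 1 = posterior-anterior, axis 2 = inferior-superior
# (e.g., after reorienting to RPI). Adjust the axis indices otherwise.
dorsal = np.zeros_like(rootlets)
ventral = np.zeros_like(rootlets)
for z in range(rootlets.shape[2]):
    if not cord[..., z].any() or not rootlets[..., z].any():
        continue
    _, ys = np.nonzero(cord[..., z])
    y_center = ys.mean()                          # cord centroid along A-P
    slice_lbl = rootlets[..., z]
    yy = np.arange(slice_lbl.shape[1])[np.newaxis, :]
    # Voxels posterior to the centroid -> dorsal, anterior -> ventral
    # (swap the comparison if your A-P axis runs the other way).
    dorsal[..., z] = np.where((slice_lbl > 0) & (yy < y_center), slice_lbl, 0)
    ventral[..., z] = np.where((slice_lbl > 0) & (yy >= y_center), slice_lbl, 0)

nib.save(nib.Nifti1Image(dorsal.astype(np.uint8), rootlets_img.affine), "dorsal_rootlets.nii.gz")
nib.save(nib.Nifti1Image(ventral.astype(np.uint8), rootlets_img.affine), "ventral_rootlets.nii.gz")
```

A left/right split could be done the same way along the first axis. This would keep the number of training classes at 7 while still allowing dorsal/ventral analyses downstream.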
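
For the comparison in step 5, per-level Dice against the manual ground truth is one possible metric. A hedged sketch (file names are placeholders; restricting the comparison to the dorsal classes shared by both models is up to the caller):

```python
# Hedged sketch: per-level Dice between a prediction and the manual ground truth.
import nibabel as nib
import numpy as np

def dice_per_level(pred_path, gt_path):
    """Return {level: Dice} for every level present in the ground truth."""
    pred = nib.load(pred_path).get_fdata()
    gt = nib.load(gt_path).get_fdata()
    scores = {}
    for level in np.unique(gt[gt > 0]).astype(int):
        p, g = pred == level, gt == level
        denom = p.sum() + g.sum()
        scores[int(level)] = 2 * np.logical_and(p, g).sum() / denom if denom else float("nan")
    return scores

# Hypothetical file names: evaluate both models on the same test image.
print(dice_per_level("sub-001_pred_dorsal_ventral.nii.gz", "sub-001_gt_rootlets.nii.gz"))
print(dice_per_level("sub-001_pred_dorsal_only.nii.gz", "sub-001_gt_rootlets.nii.gz"))
```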
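
For step 6, one way to quantify the S-I positioning is to measure the rostral (superior) and caudal (inferior) extent of each level in mm and compare these values across subjects and between dorsal/ventral (or left/right) segmentations. A sketch, assuming the third voxel axis is the one closest to S-I and that world (mm) coordinates are the desired unit:

```python
# Hedged sketch: rostral/caudal extent of each rootlet level along S-I, in mm.
import nibabel as nib
import numpy as np
from nibabel.affines import apply_affine

def si_extent_per_level(path):
    """Return {level: (caudal_mm, rostral_mm)}, mapping the min/max slice index
    of each level to world z coordinates via the image affine."""
    img = nib.load(path)
    data = img.get_fdata()
    extents = {}
    for level in np.unique(data[data > 0]).astype(int):
        zs = np.nonzero(data == level)[2]
        lo = apply_affine(img.affine, [0, 0, zs.min()])[2]
        hi = apply_affine(img.affine, [0, 0, zs.max()])[2]
        extents[int(level)] = (min(lo, hi), max(lo, hi))
    return extents

# Hypothetical file names for one subject; aggregate across subjects for variability.
dorsal = si_extent_per_level("sub-001_label-rootlets-dorsal.nii.gz")
ventral = si_extent_per_level("sub-001_label-rootlets-ventral.nii.gz")
for level in sorted(set(dorsal) & set(ventral)):
    d, v = dorsal[level], ventral[level]
    print(f"C{level}: dorsal {d[0]:.1f} to {d[1]:.1f} mm, ventral {v[0]:.1f} to {v[1]:.1f} mm")
```

Across subjects, the spread (e.g., standard deviation) of these per-level extents would quantify the variability mentioned in step 6.
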
@sandrinebedard (Member) commented

Why do you want to train a MONAI model now instead of the nnUNet you were doing?

@valosekj (Member, Author) commented

An idea from @naga-karthik on how to potentially segment ventral rootlets automatically: retrain the model on dorsal rootlets without A-P flipping augmentation. Then, before running the inference, flip an image along the A-P axis (sct_image -i sub-001_T2w.nii.gz -flip y -o sub-001_T2w_flip_y.nii.gz).
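
A minimal sketch of this flip-predict-flip-back idea using nibabel (equivalent to the sct_image call above); the choice of the second voxel axis as A-P and the prediction file names are assumptions:

```python
# Hedged sketch: flip the image along A-P, run the existing dorsal-only model on
# the flipped image, then flip the prediction back. Axis index and file names
# are assumptions.
import nibabel as nib
import numpy as np

img = nib.load("sub-001_T2w.nii.gz")
flipped = np.flip(img.get_fdata(), axis=1)          # axis 1 assumed to be A-P
nib.save(nib.Nifti1Image(flipped, img.affine), "sub-001_T2w_flip_y.nii.gz")

# ... run the current dorsal-rootlet model on sub-001_T2w_flip_y.nii.gz ...
# (inference command omitted; assumed to produce sub-001_T2w_flip_y_pred.nii.gz)

pred = nib.load("sub-001_T2w_flip_y_pred.nii.gz")
unflipped = np.flip(pred.get_fdata(), axis=1).astype(np.uint8)
nib.save(nib.Nifti1Image(unflipped, pred.affine), "sub-001_T2w_pred_ventral.nii.gz")
```

After flipping back, the predicted labels would correspond to ventral rootlets, since the model saw them in the position where dorsal rootlets normally appear; this only works if the A-P flipping augmentation is indeed disabled during training, as suggested above.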

@valosekj (Member, Author) commented

> Why do you want to train a MONAI model now instead of the nnUNet you were doing?

Mainly to move all the models to MONAI to reduce the number of SCT dependencies. Also, MONAI model inference seems to be faster than nnUNet's.

@valosekj (Member, Author) commented

Closing -- see summary: #42
