Pretrain on `dcm-zurich` for compression detection and finetune on `dcm-zurich-lesions` for lesion seg #3
Comments
Seems like the pre-training (3D) model for compression detection is not learning anything (see pseudo dice 0 in the graph below). Maybe a 2D model will work? Or, instead of segmenting one-voxel compression labels (which are hard to train on), how about training a classification model for compression detection?
I don't think predicting a single voxel is robust enough. I am tagging @NathanMolinier, who is working on labeling intervertebral discs and I'm sure has a lot to say about this. My two cents: start with object detection (to mitigate class imbalance), or a region-based segmentation, or a multi-channel input, in this case image + SC seg (@plbenveniste is working on this and can elaborate on pros/cons).
You're right, thanks! Indeed, Naga and I have discussed internally that predicting a single voxel is probably not the way to go.
Thanks for the ideas. We are currently considering a classification task (compressed vs. non-compressed slice) followed by placing the "compression pixel" during post-processing (for example, we automatically segment the SC and then put the pixel at the SC center of mass). Cross-referencing relevant issue: spinalcordtoolbox/spinalcordtoolbox#4333 (comment).
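The classify-then-place idea above could be sketched roughly as follows. This is a hedged illustration only: the function and variable names (`place_compression_voxels`, `sc_seg`, `compressed_slices`) are hypothetical and not from the repo; it assumes a binary SC mask and a list of axial slice indices already classified as compressed.

```python
# Hypothetical post-processing sketch: for each axial slice flagged as
# "compressed" by a classifier, place a single compression voxel at the
# center of mass of the spinal cord (SC) segmentation on that slice.
import numpy as np
from scipy import ndimage


def place_compression_voxels(sc_seg: np.ndarray, compressed_slices) -> np.ndarray:
    """sc_seg: binary SC mask of shape (X, Y, Z); compressed_slices: axial (Z)
    indices classified as compressed. Returns a mask with at most one voxel
    per compressed slice."""
    out = np.zeros_like(sc_seg, dtype=np.uint8)
    for z in compressed_slices:
        # Skip slices where the SC segmentation is empty.
        if sc_seg[..., z].any():
            x, y = ndimage.center_of_mass(sc_seg[..., z])
            out[int(round(x)), int(round(y)), z] = 1
    return out
```

This sidesteps training a segmentation model on one-voxel targets entirely: the hard (single-voxel) part is done deterministically from an SC mask that is much easier to segment.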
From my experience with the vertebral labeling project, using a segmentation algorithm such as nnUNet to identify single voxels is neither relevant nor effective. However, you could still try creating spheres centered on your voxels to improve performance, though I'm not sure that will lead to incredible results either. The main issue with single-voxel detection is the loss function: I am currently trying to replace the Dice loss with other loss functions, such as the mean squared error, to evaluate the distance between the ground truth and the predictions.
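The "spheres centered on your voxels" workaround mentioned above can be sketched as a simple label pre-processing step. A minimal sketch, assuming binary 3D point labels; the function name and the default radius are illustrative choices, not project conventions:

```python
# Hedged sketch: dilate each single-voxel ground-truth label into a small
# ball so that overlap-based losses (e.g. Dice) have a learnable target.
import numpy as np
from scipy import ndimage


def dilate_points_to_spheres(label: np.ndarray, radius: int = 3) -> np.ndarray:
    """Replace isolated labeled voxels with balls of the given radius (in voxels)."""
    # Build a spherical structuring element of shape (2*radius + 1,) * 3.
    zz, yy, xx = np.ogrid[-radius:radius + 1, -radius:radius + 1, -radius:radius + 1]
    ball = (xx**2 + yy**2 + zz**2) <= radius**2
    # Dilating a point mask with the ball stamps a ball at each labeled voxel.
    return ndimage.binary_dilation(label > 0, structure=ball).astype(np.uint8)
```

At inference time the prediction would then need to be reduced back to a point, e.g. by taking the center of mass of each connected component; as noted above, this still does not fix the underlying loss-function mismatch for point detection.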
What about pre-training the model to segment SC and then fine-tuning it for lesion segmentation? |
This issue intends to compare performance between the model trained from scratch on `dcm-zurich-lesions-*` (#1) vs. a model pretrained on `dcm-zurich` for detecting compression sites, using those pre-trained weights to fine-tune a model for lesion segmentation on the `dcm-zurich-lesions-*` datasets. Pre-training and fine-tuning are done using nnUNet to get a baseline estimate of model performance (and to see whether this idea works at all).

Working branch: `nk/dcm-zurich-pretraining`
Training script: https://github.com/ivadomed/model-seg-dcm/blob/nk/dcm-zurich-pretraining/nnunet/run_dcm_zurich_pretraining_and_finetuning.sh