# Supervised pre-training on SC segmentations using swinunetr (#12)
## Description

This issue summarizes some early experiments with supervised pre-training on SC segmentations.
WIP branch: `nk/jv_vit_unetr_ssl`
Pre-training script: `pretraining_and_finetuning/main_supervised_pretraining.py`
## Experiments

Unlike the SSL experiments done in #7 and #9, the pre-training done in this issue is supervised, done on SC segmentations.
T2w images for the supervised pre-training come from 5 datasets (`canproco`, `dcm-zurich`, `sci-colorado`, `sci-paris`, `spine-generic multi-subject`). Number of training samples: 654. Number of validation samples: 163. Details about the images are provided in `dataset-conversion/README.md`.

I'm currently training two different `swinunetr` models, both with `crop_pad_size: [64, 160, 320]` and `patch_size: [64, 64, 64]`.
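For concreteness, a minimal sketch of what this setup could look like with MONAI's `SwinUNETR` (only the patch size and the single-channel input/output follow from this issue; `feature_size` and `use_checkpoint` are assumptions, not values from the repo config):

```python
from monai.networks.nets import SwinUNETR

# Sketch of the swinunetr setup described above.
model = SwinUNETR(
    img_size=(64, 64, 64),  # matches patch_size: [64, 64, 64]
    in_channels=1,          # single T2w input channel
    out_channels=1,         # binary SC segmentation
    feature_size=48,        # assumption; not from the repo config
    use_checkpoint=True,    # assumption; gradient checkpointing saves GPU memory
)
```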
### Experiment 1 - Model with `CropForegroundd` - multiple datasets

This model uses `transforms.CropForegroundd(keys=all_keys, source_key="label_sc")` to crop everything outside the SC mask. See `pretraining_and_finetuning/transforms.py`.
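The real pipeline lives in `pretraining_and_finetuning/transforms.py`; the following is only a minimal sketch of how `CropForegroundd` fits together with the `crop_pad_size`/`patch_size` settings above (the key names and the patch-sampling parameters are assumptions):

```python
from monai import transforms

all_keys = ["image", "label_sc"]  # assumed key names

train_transforms = transforms.Compose([
    transforms.LoadImaged(keys=all_keys),
    transforms.EnsureChannelFirstd(keys=all_keys),
    # Crop everything outside the bounding box of the SC mask:
    transforms.CropForegroundd(keys=all_keys, source_key="label_sc"),
    # Pad/crop to the fixed volume size (crop_pad_size) used by both models:
    transforms.ResizeWithPadOrCropd(keys=all_keys, spatial_size=[64, 160, 320]),
    # Sample 64^3 training patches (patch_size), biased towards SC voxels:
    transforms.RandCropByPosNegLabeld(
        keys=all_keys,
        label_key="label_sc",
        spatial_size=[64, 64, 64],
        pos=2, neg=1,      # assumed sampling ratio
        num_samples=4,     # assumed number of patches per volume
    ),
])
```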
*GIF of validation samples (validation is done every 5 epochs).*
Validation hard dice dropped to zero after ~100 epochs:
*loss_plots (validation is done every 5 epochs).*
### Experiment 2 - Model without `CropForegroundd` - multiple datasets

This model does NOT use `transforms.CropForegroundd(keys=all_keys, source_key="label_sc")`.

*GIF of validation samples (validation is done every 5 epochs).*

*loss_plots (validation is done every 5 epochs).*
Model training crashed due to `OSError: [Errno 112] Host is down ...` (possibly because I'm still using `duke/temp` to load the data from?). So I resumed the training from the best checkpoint (~epoch 65). Training resumed, but then the validation hard dice dropped to zero:

*loss_plots after resume (validation is done every 5 epochs).*
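One possibility raised in the comments below is incomplete checkpoint restoration: if only the model weights are restored, the optimizer state (e.g. Adam moments) and the LR scheduler restart from scratch, which can cause exactly this kind of sudden collapse after resuming. As a hedged illustration, a full save/resume cycle in plain PyTorch (the helper names are hypothetical, not from `main_supervised_pretraining.py`):

```python
import torch

def save_checkpoint(path, model, optimizer, scheduler, epoch, best_dice):
    # Persist everything needed to continue training, not just the weights.
    torch.save(
        {
            "epoch": epoch,
            "best_dice": best_dice,
            "model_state_dict": model.state_dict(),
            "optimizer_state_dict": optimizer.state_dict(),
            "scheduler_state_dict": scheduler.state_dict(),
        },
        path,
    )

def resume_checkpoint(path, model, optimizer, scheduler, device="cuda"):
    ckpt = torch.load(path, map_location=device)
    model.load_state_dict(ckpt["model_state_dict"])
    # Restoring only the model weights while re-initializing the optimizer
    # and LR scheduler is a classic cause of a loss spike right after resume.
    optimizer.load_state_dict(ckpt["optimizer_state_dict"])
    scheduler.load_state_dict(ckpt["scheduler_state_dict"])
    return ckpt["epoch"] + 1, ckpt["best_dice"]
```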
## Comments

> Pre-training with `transforms.CropForegroundd` was crashing to zero. Details: #12

> Since both Experiment 3 - Model with [...]

@naga-karthik: I kind of don't agree with this, because I have trained on spine-generic and basel-mp2rage for contrast-agnostic and it worked fine. The crashing you report on multiple datasets might be an issue with that specific experiment: once the training stopped and resumed from a checkpoint, there might have been a problem with loading the checkpoint and resuming training. If we compare (1) spine-generic with `CropForegroundd` and (2) spine-generic + lesion datasets with `CropForegroundd`, while ensuring that the training does not crash at any point, we might reach a different conclusion!

Reply: Thanks @naga-karthik! I tried the following experiment: [...] So, now we have several pre-trained models, and I'm moving on to fine-tuning on lesions! By the way, it is hard to say what caused the training crash in #12 (comment). I'll try to figure this out later.

Follow-up question: Do you also have some pre-trained [...]