Region-based nnUNet training on PSIR and STIR images #68
3d_fullres region-based nnUNet trained on STIR and PSIR images. The results are the following:
3d_fullres region-based nnUNet trained on STIR images and PSIR images multiplied by -1. The results are the following:
To compare the two models more thoroughly, we performed inference on the test set and computed metrics on the predictions. To do so, we used:
This analysis led us to choose fold 2 of the model trained on STIR and PSIR images multiplied by -1. The following table shows the comparison in terms of performance: TO DO:
Performance of the 2d nnUNet model trained on PSIR and STIR images. Model:
Model choice: in the file nnUNet_inference_analysis.ipynb we performed an extensive analysis of the models' performance over the test set (89 images: 20% of each site). We used Anima to compute:
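As an illustration of the kind of overlap metric computed here, a voxel-wise Dice score can be sketched in plain numpy. This is a stand-in for illustration only, not Anima's implementation:

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Voxel-wise Dice overlap between a binary prediction and ground truth."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, gt).sum() / denom

# Toy example: 2 overlapping voxels, 3 predicted + 3 true -> Dice = 4/6
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt = np.array([[1, 0, 0], [0, 1, 1]])
assert abs(dice_score(pred, gt) - 2 / 3) < 1e-9
```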
The two best model folds were:
The performance comparison can be seen in the following plots. The following table displays the performance values. Also, after a visual comparison of 10 inferences, the 2d nnUNet seems to perform better than the 3d nnUNet. @valosekj @jcohenadad Any feedback?
Indeed, the 2D seems to give better performance on paper, but I find that losing the 3D information is problematic for segmentation tasks involving very small objects. I would still go with the 3D, I think.
Thanks for the feedback! Even though, from what I have seen on the inferences, the 3d aspect of the lesions is not really visible, I also think it makes more sense to use a 3d model, since it segments the spinal cord as well as the lesions. Predictions at the M12 time-point were made with both the 2d and 3d models and converted to BIDS format. They are available here:
In this issue, I detail the process used to train several region-based nnUNet models:
Each model is trained on 5 folds.
The CanProCo dataset was split into a training and a testing set (80% and 20% respectively). For the first two models, the dataset was converted to the nnUNet format using convert_BIDS_to_nnunet.py. For the last two models, it was converted using convert_BIDS_to_nnunet_with_mul_PSIR.py, which multiplies the PSIR images by -1.
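The sign flip done by convert_BIDS_to_nnunet_with_mul_PSIR.py presumably amounts to something like the following (a minimal numpy sketch of the core operation, not the actual script; in practice it would act on the data array of a nibabel image before re-saving in nnUNet format):

```python
import numpy as np

def flip_psir_sign(img_data: np.ndarray) -> np.ndarray:
    """Multiply a PSIR volume by -1 so its contrast polarity matches STIR."""
    return -1 * img_data

# Toy 2x2x2 "volume": bright voxels become dark and vice versa.
vol = np.array([[[100.0, -50.0], [0.0, 25.0]],
                [[-10.0, 5.0], [60.0, -75.0]]])
flipped = flip_psir_sign(vol)
assert np.array_equal(flipped, -vol)
```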
There are 336 images for training and 89 images for testing. The split was done so that around 80% of each site is in the training dataset and 20% of each site in the testing dataset.
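The per-site split described above can be sketched as follows (a hypothetical illustration; the site labels and seed are placeholders, not the actual CanProCo site codes or script):

```python
import random
from collections import defaultdict

def split_per_site(subjects, train_frac=0.8, seed=42):
    """Split subjects so that ~80% of *each site* goes to training.

    `subjects` is a list of (subject_id, site) tuples.
    """
    by_site = defaultdict(list)
    for subj, site in subjects:
        by_site[site].append(subj)
    rng = random.Random(seed)
    train, test = [], []
    for subjs in by_site.values():
        rng.shuffle(subjs)
        n_train = round(train_frac * len(subjs))
        train.extend(subjs[:n_train])
        test.extend(subjs[n_train:])
    return train, test

# Toy example: 10 subjects from two sites -> 8 train / 2 test, 4+1 per site.
subjects = [(f"sub-{i:02d}", "siteA" if i < 5 else "siteB") for i in range(10)]
train, test = split_per_site(subjects)
assert len(train) == 8 and len(test) == 2
```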