Balakrishnan2019TMI - combining weak supervision #8
Relevant bits for preprocessing:
That's a lot! I only have half, around 1800 images.
We should discuss this tomorrow. I have started installing FreeSurfer on the DGX.
Don't worry about the dataset size for the moment :) We can add other datasets later.
FreeSurfer is installed on the cluster: if we need to perform a lot of parcellations, that's probably the way to go. The ones I have are GIF parcellations, which were also computed on the cluster. GIF is fine, but less standard, making our experiments less replicable. The FreeSurfer parcellation pipeline is called recon-all.
Summarising: we should probably use the cluster to run the FreeSurfer pipeline on all images to 1) be reproducible and 2) follow the paper. That will give us:
Some more information from voxelmorph:
We encourage users to download and process their own data. See a list of medical imaging datasets here. Note that you likely do not need to perform all of the preprocessing steps, and indeed VoxelMorph has been used in other work with other data.
I'm not sure what those coordinates mean.
I was looking into that as well. I think the coordinates may refer to the cropping points in the 256x256x256 volume from FreeSurfer (i.e. volume[48:-48, 31:-33, 3:-29]).
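A quick sanity check of that slicing, assuming the input is a 256^3 conformed FreeSurfer volume (the function name is mine, not from the repo): the offsets yield exactly the 160 x 192 x 224 shape the VoxelMorph papers report.

```python
import numpy as np

def crop_freesurfer_volume(volume: np.ndarray) -> np.ndarray:
    """Crop a 256^3 conformed FreeSurfer volume with the offsets above.

    256 - 48 - 48 = 160, 256 - 31 - 33 = 192, 256 - 3 - 29 = 224,
    matching the 160 x 192 x 224 shape used in the VoxelMorph papers.
    """
    assert volume.shape == (256, 256, 256)
    return volume[48:-48, 31:-33, 3:-29]

# Shape check on a dummy volume:
cropped = crop_freesurfer_volume(np.zeros((256, 256, 256)))
print(cropped.shape)  # (160, 192, 224)
```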
That would make sense. Also, I just checked an output of
Thanks! I agree that FreeSurfer is better (more standardized) than using standalone algorithms for the required steps you already mentioned.
Looking at the VoxelMorph papers, they only report metrics on the 'aseg' labels (step 11 from recon-all). So, in principle, we may want to run only autorecon1 and autorecon2 if that helps speed (I know it can sometimes be quite slow).
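Restricting recon-all to the first two stages could be wrapped like this (a sketch; the subject ID and file names are hypothetical, and `-autorecon1`/`-autorecon2` cover motion correction through the subcortical `aseg` segmentation, while `-autorecon3` adds the surface-based parcellations we may not need):

```python
import subprocess  # for the commented-out invocation below

def recon_all_command(subject_id, input_path, run_stage_3=False):
    """Build a FreeSurfer recon-all invocation.

    -autorecon1 and -autorecon2 run through the subcortical
    segmentation (aseg); -autorecon3 adds the surface parcellations.
    """
    cmd = ["recon-all", "-s", subject_id, "-i", input_path,
           "-autorecon1", "-autorecon2"]
    if run_stage_3:
        cmd.append("-autorecon3")
    return cmd

# Example (not executed here; requires a FreeSurfer installation):
# subprocess.run(recon_all_command("subj01", "subj01_T1.nii.gz"), check=True)
```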
Regarding the implementation, according to the paper, we need
we should expect a Dice score around 0.766 for MSE and 0.774 for LNCC loss, so we can start with the LNCC loss. Launching experiments should be very simple with the CLI tool, but we first need to enable changing the encoder/decoder channels, as for now we assume the channels are doubled at each layer.
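For reference, LNCC can be sketched with box filters over local windows (a minimal numpy/scipy version; the function name, the `window=9` default, and the `eps` value are my assumptions, not the repo's implementation):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lncc(fixed, moving, window=9, eps=1e-5):
    """Local normalised cross-correlation between two volumes.

    Local means and (co)variances are computed with a box filter of the
    given window size; returns the mean squared local correlation, so
    identical images score close to 1.
    """
    mu_f = uniform_filter(fixed, window)
    mu_m = uniform_filter(moving, window)
    cross = uniform_filter(fixed * moving, window) - mu_f * mu_m
    var_f = uniform_filter(fixed * fixed, window) - mu_f ** 2
    var_m = uniform_filter(moving * moving, window) - mu_m ** 2
    cc = cross ** 2 / (var_f * var_m + eps)
    return float(np.mean(cc))
```

In training this would be negated (or `1 - lncc`) to obtain a loss to minimise.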
Since we are currently encountering a loss-explosion effect, we should first run VoxelMorph on our data.
Benchmark the selected experiments described in:
Balakrishnan, G., Zhao, A., Sabuncu, M.R., Guttag, J. and Dalca, A.V., 2019. Voxelmorph: a learning framework for deformable medical image registration. IEEE transactions on medical imaging, 38(8), pp.1788-1800.
This is related to #3.
Summary:
Tasks:
Unsupervised algorithms with segmentation-based weak supervision
Transformation:
predicting spatial transformation in DDF
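Applying a predicted DDF amounts to resampling the moving image at displaced coordinates; a minimal sketch (my own helper, assuming displacements in voxel units and linear interpolation):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(volume, ddf, order=1):
    """Resample a volume with a dense displacement field (DDF).

    ddf: shape (3, D, H, W), in voxel units; the sampled location is
    phi(x) = x + u(x). Linear interpolation by default.
    """
    grid = np.meshgrid(*[np.arange(s) for s in volume.shape], indexing="ij")
    coords = [g + u for g, u in zip(grid, ddf)]
    return map_coordinates(volume, coords, order=order, mode="nearest")
```

A zero field is the identity transform, and a constant unit displacement along an axis shifts the sampling grid by one voxel.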
Network and loss:
encoder-decoder with 2^4 resampling.
unsupervised loss: MSE and LNCC
regulariser: L2-norm of the displacement gradient
label loss: Dice over a fixed set of labels
difference: leaky_relu activations
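The loss terms above can be sketched as follows (a numpy illustration of the structure only; the function names and the weighting scheme in the comment are my assumptions):

```python
import numpy as np

def grad_l2(ddf):
    """L2-norm of the spatial gradient of a dense displacement field.

    ddf: shape (3, D, H, W), one displacement component per axis.
    """
    penalty = 0.0
    for comp in ddf:                 # each displacement component u_i
        for g in np.gradient(comp):  # du_i/dx, du_i/dy, du_i/dz
            penalty += np.mean(g ** 2)
    return penalty

def soft_dice(seg_a, seg_b, eps=1e-5):
    """Soft Dice between two (one-hot or probabilistic) label maps."""
    inter = 2.0 * np.sum(seg_a * seg_b)
    return inter / (np.sum(seg_a) + np.sum(seg_b) + eps)

# Hypothetical total loss; the weights lam and gam are assumptions:
# loss = image_loss + lam * grad_l2(ddf) - gam * soft_dice(warped_seg, fixed_seg)
```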
Data and experiments:
1. atlas-based registration, i.e. register each image to an atlas computed independently
2. random inter-subject pairs
3. with manual segmentation
Metrics:
Dice on warped segmentation maps
Jacobian
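The Jacobian metric counts non-positive determinants of phi(x) = x + u(x), i.e. folding voxels; a minimal numpy sketch (function name mine, displacement assumed in voxel units):

```python
import numpy as np

def jacobian_determinant(ddf):
    """Per-voxel Jacobian determinant of phi(x) = x + u(x).

    ddf: shape (3, D, H, W). Values <= 0 indicate local folding.
    """
    # grads[i][j] = du_i/dx_j, each of shape (D, H, W)
    grads = [np.gradient(ddf[i]) for i in range(3)]
    jac = np.stack([np.stack(g, axis=0) for g in grads])  # (3, 3, D, H, W)
    jac = jac + np.eye(3).reshape(3, 3, 1, 1, 1)          # J = I + grad(u)
    jac = np.moveaxis(jac, (0, 1), (-2, -1))              # (D, H, W, 3, 3)
    return np.linalg.det(jac)

det = jacobian_determinant(np.zeros((3, 8, 8, 8)))
print(float(det.mean()))  # 1.0 for the identity transform
```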