A way to assess normalization quality #32
I'm not sure I fully understand the context of this issue. The current repository is about cord segmentation on EPI scans. Why are we talking about registration to the template here?
Hi Julien, I posted this here as it is relevant to this project and the publication that will come out of it (instead of an email). Sorry that it was not clear. I thought that one of the main reasons we need to manually segment the cord is to be able to properly register EPI data to the PAM50 template (from the perspective of an SCT user who is analyzing fMRI data). Then, for someone interested in spinal fMRI analysis, wouldn't it help to see that normalization using the automated segmentation performs comparably to manual segmentation? If yes, wouldn't it be important to demonstrate this in the paper?
Indeed, it is.
Demonstrating improvements in registration is quite tricky. One thing we could do, though, is run the registration pipeline and, for each site, show the group-average EPI in the PAM50 space (one fig: one sub-panel per axial view).
Thank you, Julien! Yes, definitely! I assume that quality would be tricky to distinguish from the average alone (hopefully not!). We could maybe accompany it with some sort of heat map (like a standard deviation map) to show consistency across subjects. What do you think about the metric that was employed in the recent preprint?
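For what it's worth, here is a minimal sketch of how the group-average EPI and the across-subject standard-deviation "heat map" could be computed once each subject's mean EPI has been warped to PAM50 space. File names and directory layout are hypothetical, and this assumes the warped images are already resampled onto the same PAM50 grid:

```python
# Minimal sketch: group mean and voxel-wise SD of subjects' mean EPIs in PAM50 space.
# Paths below are hypothetical placeholders.
import glob
import nibabel as nib
import numpy as np

paths = sorted(glob.glob("results/site-01/sub-*_mean-epi_reg2PAM50.nii.gz"))
imgs = [nib.load(p) for p in paths]
data = np.stack([img.get_fdata() for img in imgs], axis=-1)  # X x Y x Z x subjects

group_mean = data.mean(axis=-1)  # group-average EPI (one sub-panel per site)
group_std = data.std(axis=-1)    # across-subject variability ("heat map")

ref = imgs[0]
nib.save(nib.Nifti1Image(group_mean, ref.affine, ref.header), "site-01_group_mean.nii.gz")
nib.save(nib.Nifti1Image(group_std, ref.affine, ref.header), "site-01_group_std.nii.gz")
```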
I wouldn't go that far-- I think that a group average per site is enough. If reviewers want more meat, we will produce more meat.
What preprint are you referring to?
Good to know, thank you!
The one that I posted above!
Dear all (@jcohenadad @kennethaweberii ),
Currently, we are using the Dice coefficient (between the manual and automatic masks) as the metric to assess the performance of the newly trained model.
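For reference, a minimal sketch of that Dice computation on binarized masks (file names are hypothetical):

```python
# Dice = 2*|A ∩ B| / (|A| + |B|) between manual and automatic cord masks.
import nibabel as nib
import numpy as np

def dice(mask_a, mask_b):
    a = np.asarray(mask_a) > 0.5
    b = np.asarray(mask_b) > 0.5
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom > 0 else np.nan

manual = nib.load("sub-01_epi_seg_manual.nii.gz").get_fdata()   # hypothetical path
auto = nib.load("sub-01_epi_seg_auto.nii.gz").get_fdata()       # hypothetical path
print(f"Dice: {dice(manual, auto):.3f}")
```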
I had a meeting with @rohanbanerjee and we were discussing possible metrics to assess the quality of EPI-to-template normalization/registration (as this is one of the most important outcomes of automated segmentation from the user perspective, right?).
Therefore, the question is: how to assess and quantify the quality of template normalization?
I encountered this recent preprint.
Here they use a multi-step normalization method and compare it with single-step normalization (details are not important for my question), using the following approach:
`
`
What do you think? Is this a valid approach to evaluate the performance of registration?