Computing values on ground truth does not give perfect scores #26
Following one of the examples in the README file, a quick test using a ground truth image as the predicted image does not give perfect scores: i.e., the Dice score, Jaccard index, precision, and recall are not 1.0. Although they are close (0.999), they should be a perfect 1.0.

Comments
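For reference, a minimal sketch of such a test, following the `write_metrics` example from the README; the file path and label list below are placeholders:

```python
import seg_metrics.seg_metrics as sg

# Compare the ground-truth mask against itself: every overlap metric
# (Dice, Jaccard, precision, recall) should then be exactly 1.0.
# "ground_truth.nii.gz" and the label list below are placeholders.
metrics = sg.write_metrics(
    labels=[1],                       # foreground label(s) in the mask
    gdth_path="ground_truth.nii.gz",  # reference segmentation
    pred_path="ground_truth.nii.gz",  # the same file as the "prediction"
    csv_file="metrics.csv",           # results are also written here
)
print(metrics)
```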
It seems to be related to the smooth parameter; changing it to smooth=0.0 gives a Dice of 1.0. Some discussion about that here. See segmentation_metrics/seg_metrics/seg_metrics.py, lines 81 to 89 at df9a231.
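To illustrate the effect, a minimal sketch assuming the smoothing term is added to the denominator only (a hypothetical form; the library's actual formula at the referenced lines may differ):

```python
import numpy as np

def dice(pred, gt, smooth=0.001):
    """Dice coefficient with a smoothing term in the denominator
    (a hypothetical form for illustration; seg_metrics may differ)."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    intersection = int(np.logical_and(pred, gt).sum())
    return 2.0 * intersection / (int(pred.sum()) + int(gt.sum()) + smooth)

gt = np.zeros((10, 10), dtype=int)
gt[2:8, 2:8] = 1  # a 6x6 foreground square

print(dice(gt, gt))              # ~0.99999: biased below 1.0 by smooth
print(dice(gt, gt, smooth=0.0))  # exactly 1.0
```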
@agarcia-ruiz thanks for the answer. I understand that it may be useful to have it to prevent a division by zero, but I am not convinced that it should be there outside the scope of training a DL model (I have gone through the discussion you pointed to, and even there it seems like for some cases the training does not work unless it is set to 0). In this context, the case should probably be dealt with in another way when a division by zero occurs.
@jhlegarreta Thank you very much for raising this question. Actually, we have had some discussion on this issue and decided to use 0.001 as the default smooth. But indeed we found that a lot of users may feel confused about it, especially when running our examples. So we will have another, deeper discussion on it and see how to solve it. A possible solution, as you mentioned, is to make it a parameter. Another solution is to make it 0 and raise an exception when a division-by-zero error occurs. Anyway, we will solve it soon. And if you have a better solution, please let us know. Best,
@jhlegarreta Another solution could be to have a default smooth value of 0, followed by a try/except block around the metrics calculation. If we hit the division-by-zero exception in the try block, we reset the smooth value to 0.001 and recalculate the metrics (and may show a warning to the users about this case). What do you think of this solution?
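A minimal, self-contained sketch of that fallback (the function name and the exact Dice form are illustrative, not the library's code):

```python
import warnings
import numpy as np

def dice_with_fallback(pred, gt, smooth=0.001):
    """Try the exact computation (smooth=0) first; on a division by
    zero (both masks empty), warn and recompute with a small smooth.
    Hypothetical helper, not part of seg_metrics."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    intersection = int(np.logical_and(pred, gt).sum())
    denom = int(pred.sum()) + int(gt.sum())
    try:
        return 2.0 * intersection / denom  # smooth = 0 path
    except ZeroDivisionError:
        warnings.warn("Empty masks; recomputing with smooth=%g" % smooth)
        # Under this denominator-only form, two empty masks score 0.0.
        return 2.0 * intersection / (denom + smooth)
```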
@Jingnan-Jia it looks like one possible workaround; I am not sure what the best approach is. It may be worthwhile to look at what other scientific tools that compute at least the DSC do. In my mind, when given the ground truth, it should clearly provide a perfect score. Providing tests on well-known cases would probably be enlightening.
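For instance, a pytest-style sanity test on one such well-known case; the return structure assumed for `write_metrics` (a mapping of metric names to per-label values) is an assumption based on the README and may differ:

```python
import numpy as np
import SimpleITK as sitk
import seg_metrics.seg_metrics as sg

def test_identical_masks_score_one(tmp_path):
    """A well-known case: a mask compared against itself must give
    Dice, Jaccard, precision, and recall of exactly 1.0."""
    mask = np.zeros((8, 8, 8), dtype=np.uint8)
    mask[2:6, 2:6, 2:6] = 1
    path = str(tmp_path / "mask.nii.gz")
    sitk.WriteImage(sitk.GetImageFromArray(mask), path)

    result = sg.write_metrics(labels=[1], gdth_path=path, pred_path=path,
                              csv_file=str(tmp_path / "out.csv"))
    # NOTE: the result structure below is an assumption; adapt as needed.
    for name in ("dice", "jaccard", "precision", "recall"):
        assert all(v == 1.0 for v in result[name])
```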
@jhlegarreta Thank you for your reply. I updated the package. By installing the latest version (v1.2.3 or later), the perfect metrics can be obtained. I removed the use of smooth=0.001 and instead handle the division-by-zero case differently. By running the code you provided you can get the following results.
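To check which version is installed before re-running the comparison (assuming the PyPI distribution name is seg-metrics):

```python
from importlib.metadata import version  # Python 3.8+

print(version("seg-metrics"))  # expect 1.2.3 or later
```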
@Jingnan-Jia thanks for the effort.