This might be unrelated, but since we're on the topic, I'm documenting it here.
I think we should compute the metrics individually for each class. That is, if we have a region-based seg/lesion label, the current script automatically computes metrics for both of them sequentially in a for loop. But in the bavaria-quebec project, I have had issues where even if the prediction has only 1 class, I see two classes in the output CSV file. When metrics are then aggregated across all subjects, this results in incorrect scores. All I'm trying to say is that the for loop iterating across unique labels is not robust. In the end, I had to separate the SC and lesion labels and compute the metrics for SC and lesions independently (to be sure).
How about we:
1. create temporary masks for each available class in the prediction mask
2. run the metrics on these temporary (single-class) masks
3. delete the temporary masks?
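The idea above could be sketched roughly as follows. This is only an illustration, not the project's actual metric code: the `per_class_dice` function, the use of Dice as the metric, and the background label `0` are all assumptions. The key point is that iterating only over classes actually present in the prediction mask means a single-class prediction can never produce scores for a phantom second class.

```python
import numpy as np

def per_class_dice(pred, gt):
    """Hypothetical sketch: binarize each class into a temporary
    single-class mask, score it against the matching binary ground-truth
    mask, then discard the temporaries. Dice is used here as a stand-in
    for whichever metrics the real script computes."""
    scores = {}
    # Iterate only over classes that actually appear in the prediction,
    # so the output cannot contain a class the prediction doesn't have.
    for cls in np.unique(pred):
        if cls == 0:  # skip background (assumed label convention)
            continue
        pred_bin = (pred == cls)  # temporary single-class mask
        gt_bin = (gt == cls)
        inter = np.logical_and(pred_bin, gt_bin).sum()
        denom = pred_bin.sum() + gt_bin.sum()
        scores[int(cls)] = 2.0 * inter / denom if denom > 0 else np.nan
        del pred_bin, gt_bin  # drop the temporary masks
    return scores
```

With this structure, a prediction containing only the lesion class yields exactly one row of metrics, regardless of how many classes the ground truth has.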
I have had issues where even if the prediction has only 1 class, I see two classes in the output csv file.
This is interesting and definitely not intended. If the prediction has only 1 class, the output CSV file should also contain only a single class. Would you have a sample subject for debugging?
Originally posted by @naga-karthik in #17 (comment)