At the moment, the model has 2 output channels, one for the axon mask and one for the myelin mask. This formulation of the problem is not ideal: the classes are not mutually exclusive, and the model produces many false positives on the background class. This could be mitigated by adding a separate output channel for the background and using one-hot encoding for the loss/metrics computations.
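To illustrate the proposed formulation, here is a minimal NumPy sketch of converting a flat axonmyelin label map into a 3-channel one-hot mask with an explicit background channel. The class values (0 = background, 127 = myelin, 255 = axon) follow the usual axonmyelin mask convention but are an assumption here; adjust them to your dataset.

```python
import numpy as np

def to_one_hot(label: np.ndarray) -> np.ndarray:
    """Convert a flat label map into a 3-channel one-hot mask.

    Assumed convention (check your dataset): 0 = background,
    127 = myelin, 255 = axon.
    """
    classes = np.array([0, 127, 255])
    # Shape (3, H, W): one mutually exclusive channel per class,
    # including a dedicated background channel.
    return np.stack([(label == c).astype(np.float32) for c in classes])

label = np.array([[0, 127], [255, 0]], dtype=np.uint8)
one_hot = to_one_hot(label)
# Channels sum to 1 everywhere because the classes are mutually exclusive.
assert one_hot.shape == (2 + 1, 2, 2)[:3] == (3, 2, 2)
assert np.all(one_hot.sum(axis=0) == 1.0)
```

With this encoding, a softmax over the channel axis forces the network to pick exactly one class per pixel, which is what rules out the axon/myelin overlap and background false positives described above.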
The explanation related to this for the ivadomed implementation can be found in this comment
For reference, the current predictions look like this
After some preliminary tests on the loader (see the updated notebook), I found it tricky to merge the labels. Fortunately, our datasets include the axonmyelin-manual labels. Note that we need to add these two transforms to the labels:
Otherwise, one-hot encoding will fail. The first transform maps the [0, 127, 255] values to [0, 1.something, 2.something], which the second transform then rounds to discrete values.
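The two steps above can be sketched as plain functions; the function names and the divisor of 127 are assumptions chosen to reproduce the [0, 1.something, 2.something] values described, not the actual transforms used in the pipeline.

```python
import numpy as np

def scale_labels(label: np.ndarray, divisor: float = 127.0) -> np.ndarray:
    # Hypothetical stand-in for the first transform: divides the raw
    # [0, 127, 255] values so they land near small integer class indices,
    # roughly [0, 1.0, 2.008].
    return label.astype(np.float32) / divisor

def round_labels(label: np.ndarray) -> np.ndarray:
    # Stand-in for the second transform: snap the scaled values to
    # discrete class indices so one-hot encoding does not fail on
    # fractional values like 2.008.
    return np.rint(label)

label = np.array([0, 127, 255], dtype=np.uint8)
print(round_labels(scale_labels(label)))  # → [0. 1. 2.]
```

Without the rounding step, 255 / 127 ≈ 2.008 would not match any integer class index, which is why one-hot encoding fails if only the first transform is applied.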