Confusion with result calculations #27

Open
shaswatpatel123 opened this issue Mar 12, 2022 · 0 comments
shaswatpatel123 commented Mar 12, 2022

While inspecting a few predictions, I came across some examples where the model predicted [0,0,0,0]. In such cases, the micro-average F1-score is 0.75. How do we calculate the FPR (False Positive Rate) and TPR (True Positive Rate) for these cases?
Suppose the correct label is "Unknown" and the predicted outcome is "[]" (i.e., the model predicted no label at all). What should we infer from such cases?
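
As a minimal sketch of what I mean (assuming scikit-learn-style metrics; the label order and the arrays below are made up purely for illustration, not taken from this repo's code):

```python
import numpy as np
from sklearn.metrics import f1_score

# Made-up example: 4 binary labels, the true label set contains only "Unknown"
y_true = np.array([[0, 0, 0, 1]])   # suppose the 4th column is "Unknown"
y_pred = np.array([[0, 0, 0, 0]])   # the model predicts no label at all

# Standard micro-averaged F1 over the positive class: no true positives -> 0.0
print(f1_score(y_true, y_pred, average="micro", zero_division=0))  # 0.0

# If every label decision is instead counted as correct/incorrect,
# 3 of the 4 cells match, which would explain the 0.75 I am seeing.
print((y_true == y_pred).mean())  # 0.75

# TPR / FPR from the per-cell confusion counts
tp = int(((y_pred == 1) & (y_true == 1)).sum())
fp = int(((y_pred == 1) & (y_true == 0)).sum())
fn = int(((y_pred == 0) & (y_true == 1)).sum())
tn = int(((y_pred == 0) & (y_true == 0)).sum())

tpr = tp / (tp + fn) if (tp + fn) else 0.0   # 0 / 1 = 0.0
fpr = fp / (fp + tn) if (fp + tn) else 0.0   # 0 / 3 = 0.0
print(tpr, fpr)
```

With an all-zero prediction both the TPR and the FPR come out as 0 here, so I am unsure how such rows should contribute to the reported FPR/TPR.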

Kindly reply as soon as possible.
Thank you.
