What Metrics do we want to have for training/testing here? #4
Comments
This sounds good to me. I guess one other basic question is whether we want to have metrics for how "well-calibrated" the predictive uncertainties are, and if so, what those should look like. If this is in scope, perhaps @karalets can provide some references / pointers?
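For concreteness, here is a minimal sketch of what such a calibration check could look like, assuming the model reports a Gaussian predictive mean and standard deviation per point; all names are illustrative, not from this repo:

```python
# Sketch: reliability check for Gaussian predictive uncertainties.
# Assumes a predictive mean `mu` and standard deviation `sigma` per test
# point; function and variable names are illustrative.
import numpy as np
from scipy.stats import norm

def calibration_curve(y_true, mu, sigma, levels=np.linspace(0.1, 0.9, 9)):
    """Observed coverage of central predictive intervals at each nominal level."""
    observed = []
    for level in levels:
        half_width = norm.ppf(0.5 + level / 2.0) * sigma  # central interval half-width
        observed.append(np.mean(np.abs(y_true - mu) <= half_width))
    return levels, np.array(observed)
```

A well-calibrated model gives observed coverage close to the nominal level; plotting one against the other is the usual reliability diagram.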
Great point, I am happy to take point on that with some references once we start having results to discuss how to evaluate such things. The general idea is the following: once we have some experiments lined up where molecules for both categories are chosen well and we start plotting results, we can discuss that subtlety.
But I would still like it if the chemists here could add some more informative and concrete metrics related to real-world usefulness. Examples are the things the Cambridge-group paper evaluates.
Personally I'd imagine real chemists care about things like false positive rates at a given cutoff and so on. But at this stage let's just suppose their needs are no different from ours.
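As a sketch of the kind of thing meant by a false positive rate at a cutoff (binarizing the prediction, e.g. "active" vs. "inactive"; names are illustrative, not from the repo):

```python
# Sketch: false positive rate when scores above a cutoff are called positive.
import numpy as np

def false_positive_rate(y_true, y_score, cutoff=0.5):
    """FPR = FP / (FP + TN); y_true is 0/1, y_score is the model's score."""
    called_positive = y_score >= cutoff
    negatives = y_true == 0
    return np.logical_and(called_positive, negatives).sum() / max(negatives.sum(), 1)
```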
I thought more about downstream quantities of interest as metrics. No clue what people care about.
Initially, we can use things like log-likelihood just to have a reasonable quantitative measure.
Over time, however, we may want more informative metrics for the performance of the deep net on the task at hand, for instance downstream metrics for a chemistry application.
While this is not pressing to do at first, I am opening this issue so we can collect ideas for the metrics we want to track and the evaluation protocols we use to compute them.
Both of those can and should also take into account the evaluation chosen in https://pubs.rsc.org/en/content/articlepdf/2019/sc/c9sc00616h as ultimately we will need to compare to it.
My first pitch is log-likelihood, as stated above.
The nice thing about this is that we can rerun the same evaluation protocols with any metric, not just LLK.
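To illustrate that point, a minimal sketch of a metric-agnostic evaluation, using held-out Gaussian predictive log-likelihood as the default; the `model.predict` interface and all names here are assumptions, not this repo's actual API:

```python
# Sketch: held-out evaluation where the metric is a plug-in, so the same
# protocol can be rerun with LLK, RMSE, a calibration score, etc.
# The model interface (mean and std per point) is an assumption.
import numpy as np
from scipy.stats import norm

def gaussian_log_likelihood(y_true, mu, sigma):
    """Mean per-point log density of held-out labels under the predictive Gaussians."""
    return norm.logpdf(y_true, loc=mu, scale=sigma).mean()

def evaluate(model, x_test, y_test, metric=gaussian_log_likelihood):
    """Run one evaluation protocol; `metric` is any callable (y, mu, sigma) -> float."""
    mu, sigma = model.predict(x_test)
    return metric(y_test, mu, sigma)
```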