Clarify naming of Bayesian solvers #41
First draft of the revised API. I am proposing collapsing BRR and ARD into a single user interface, controlled by the […].
@casv2 @cortner @bernstei @jameskermode
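A minimal Julia sketch of what such a collapsed interface could look like; the type name `BLR` and every keyword below are placeholders for illustration, not the actual ACEfit.jl API:

```julia
# Hypothetical sketch only -- struct name and keywords are placeholders, not
# the real ACEfit.jl interface.  The idea: one Bayesian linear regression
# solver whose behaviour (plain BRR vs. ARD-style sparsification) is selected
# by options rather than by separate solver types.
Base.@kwdef struct BLR
    ard::Bool             = false       # enable ARD-style per-feature priors
    factorization::Symbol = :cholesky   # how the posterior is computed
    tol::Float64          = 1e-6        # convergence tolerance on the evidence
    maxiter::Int          = 300         # cap on evidence-maximisation iterations
end
```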
I like this a lot. How will I construct a solver object? In Julia rather than JSON style?
See this, with the caveat that the name will change, it sounds like, to BLR.
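Assuming the hypothetical `BLR` type sketched above, construction would then be plain Julia keyword arguments rather than a JSON blob, e.g.:

```julia
# construction in Julia keyword-argument style (names are still placeholders)
solver = BLR(ard = true, tol = 1e-7, maxiter = 1000)
```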
Looks good. Is there a reason you’ve parametrised in terms of variances rather than precisions, as is more common?
We should be quite careful with this. When I was messing around with the Python re-implementation I got quite different performance when transforming those parameters to 1/x. That was with very ill-conditioned matrices, which may be a lot better with the purified ACE1x basis, but it's not just a matter of notation.
I agree with this point; we’ve made similar observations here. Precision seems better behaved generally.
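For concreteness, a sketch of the generic Bayesian-ridge posterior mean written in both parametrisations (not ACEfit code; the variable names and values are made up). The two forms agree algebraically, but they form and factorise different matrices, which is where ill-conditioning can make them behave differently:

```julia
# Not ACEfit code -- just the generic BRR posterior mean in both
# parametrisations, to show they agree algebraically while differing
# in which quantities are actually formed numerically.
using LinearAlgebra

A, y = randn(50, 20), randn(50)       # made-up design matrix and targets
σe2, σw2 = 1e-3, 1.0                  # noise / prior weight variances
βe, αw = 1 / σe2, 1 / σw2             # the corresponding precisions

# variance form: regularised normal equations
m_var = (A'A + (σe2 / σw2) * I) \ (A'y)

# precision form: scale by the noise precision, regularise with the weight precision
m_prec = (βe * A'A + αw * I) \ (βe * (A'y))

m_var ≈ m_prec                        # true up to round-off
```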
My personal hyperprior on the noise is: […]
When I went to translate that prior into code, it was simplest to enforce a lower bound on the variances - that's the entire explanation. Of course, we could convert the same intuition into an upper bound on the precisions, and I'm definitely open to that if it's better behaved. So I would argue that the interface should use variances, because I'd be surprised if any of us reason in terms of precisions, but for the actual optimization we should do whatever works best numerically.
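A sketch of that point, with made-up names and bound: the same intuition can be imposed either as a floor on a variance or as a ceiling on the corresponding precision, and only the numerics of the optimisation differ:

```julia
# Illustration only -- names and the bound are hypothetical.
const σ2_min = 1e-8                      # hypothetical lower bound on a variance

floor_variance(σ2) = max(σ2, σ2_min)     # optimise σ² with a lower bound ...
cap_precision(β)   = min(β, 1 / σ2_min)  # ... or β = 1/σ² with an upper bound

# the user-facing interface can keep talking in variances either way:
σ2_eff = 1 / cap_precision(1 / floor_variance(1e-12))   # == 1e-8
```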
TODO: we need to handle the transition between the overconstrained and underconstrained regimes more gracefully; that should be done as part of this refactoring.
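A hypothetical sketch of one way to make that transition explicit (names are placeholders): branch on the shape of the weighted design matrix rather than silently assuming one regime. Julia's `\` already handles both cases, but an explicit branch lets the solver report which regime it is in and take a regularised path when underconstrained:

```julia
using LinearAlgebra

function solve_ls(A::AbstractMatrix, y::AbstractVector; λ = 1e-8)
    m, n = size(A)
    if m >= n
        return A \ y                            # overconstrained: QR least squares
    else
        # underconstrained: minimum-norm / ridge-stabilised (dual) solution
        return A' * ((A * A' + λ * I) \ y)
    end
end
```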
Recording from a discussion elsewhere: "They should all get [a maxiter parameter] please, and then they should fail with a nice user-friendly message, something along the lines of 'even when the solver hasn't converged the quality of the solution may be good, please test this before changing solver parameters'."
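A sketch of that suggestion; `update!`, the function name, and the defaults are placeholders, not existing ACEfit.jl code:

```julia
# Hypothetical iterative solver loop: capped by `maxiter`, and failing softly
# with a friendly message instead of erroring out on non-convergence.
function iterate_evidence!(state; maxiter = 300, tol = 1e-6)
    for _ in 1:maxiter
        δ = update!(state)      # one evidence-maximisation step (placeholder)
        δ < tol && return state
    end
    @warn """Solver reached maxiter = $maxiter without converging.
    Even when the solver hasn't converged the quality of the solution may be
    good -- please test this before changing solver parameters."""
    return state
end
```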
I've experimented with a few ways of implementing Bayesian ridge (for example), and the proliferation of functions has gotten a bit confusing. Now that more people are starting to use them, it's crucial to reorganize and document them.