
[Tutorial] LOO-CV #41

Open · yebai opened this issue Feb 13, 2022 · 5 comments
yebai (Member) commented Feb 13, 2022

There are some excellent packages for estimating Bayesian evidence for Turing models. Supporting them would let us compare models across different priors and modelling choices; it could be a (killer) feature!

LOO-CV can be viewed as a proxy for Bayesian evidence / marginal likelihood.

See Vehtari, A., Gelman, A., & Gabry, J. (2017). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing. https://link.springer.com/article/10.1007/s11222-016-9696-4
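
For readers landing here later, one package in the Julia ecosystem that implements the PSIS-LOO method from that paper is ParetoSmooth.jl. Below is a minimal sketch of what this could look like for a Turing model; the toy model and data are illustrative, and the `psis_loo(model, chains)` call is an assumption based on ParetoSmooth's advertised Turing integration, so check the package docs for the current API.

```julia
using Turing
using ParetoSmooth

# Toy model: iid normal observations with unknown mean and scale.
@model function gauss(y)
    μ ~ Normal(0, 1)
    σ ~ truncated(Normal(0, 1); lower=0)
    y .~ Normal(μ, σ)
end

y = randn(50)
model = gauss(y)
chains = sample(model, NUTS(), 1_000)

# PSIS-LOO as in Vehtari, Gelman & Gabry (2017). Assumption: ParetoSmooth's
# Turing integration lets `psis_loo` take the model and chains directly.
loo_result = psis_loo(model, chains)
```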

storopoli (Member) commented
Do we need dedicated support? TuringGLM.jl just returns an instantiated Turing.jl model. It also re-exports Turing.jl, so you can do anything you want with the instantiated model; there is no scaffolding between you and the model once you specify it with the `turing_model` function. See the sketch below.
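
For concreteness, a minimal sketch of that workflow; the formula, column names, and sampler settings here are illustrative, not prescriptive:

```julia
using TuringGLM  # re-exports Turing.jl and the @formula macro

# Illustrative data: any Tables.jl-compatible source works.
df = (; y = randn(100), x = randn(100))

# `turing_model` returns a plain, instantiated Turing.jl model...
model = turing_model(@formula(y ~ x), df)

# ...so anything Turing.jl supports applies to it, e.g. NUTS sampling:
chain = sample(model, NUTS(), 2_000)
```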

Maybe we could convert this issue into a "call for tutorials" on those topics?

yebai (Member, Author) commented Feb 14, 2022

Yes, AIS (annealed importance sampling) and TI (thermodynamic integration) should just work. NS (nested sampling) still lacks Turing integration, but that is not related to TuringGLM, so it should be covered in a separate tutorial.

storopoli self-assigned this Feb 14, 2022
storopoli (Member) commented
OK, converting this into a tutorial issue.

ParadaCarleton (Member) commented Feb 14, 2022

> LOO-CV can be viewed as a proxy for Bayesian evidence / marginal likelihood.

Clarification, in case someone stumbles across this in the future: this isn't quite true.

The Bayesian evidence / marginal likelihood corresponds to exhaustive cross-validation, not leave-one-out. In exhaustive CV, you cross-validate over all 2^n possible train-test splits. That includes some pretty weird splits, e.g. the one where the training set has 0 data points and the test set contains all of the data; the identity below makes this explicit.
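
To spell that out (a sketch, writing y_{1:0} for the empty set):

```latex
% By the chain rule, for any ordering of the data,
\[
  \log p(y_{1:n}) = \sum_{i=1}^{n} \log p\!\left(y_i \mid y_{1:i-1}\right).
\]
% The i = 1 term scores y_1 under the prior predictive alone, so its
% "training set" is empty: exactly one of the weird splits above.
```

Averaging decompositions like this over train-test splits is what formalizes the marginal-likelihood-as-exhaustive-CV view (see the Fong & Holmes reference cited below).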

To use an analogy: LOO-CV does the same job as AIC (it estimates the expected out-of-sample loss), whereas Bayes factors do something like BIC (they estimate the probability that a model is the best one in a candidate set).

yebai (Member, Author) commented Feb 14, 2022

Thanks, @ParadaCarleton, your clarification is correct; sorry, I wasn't precise. For a good reference on this, see:

Fong, E., & Holmes, C. C. (2020). On the marginal likelihood and cross-validation. Biometrika, 107(2), 489–496. https://academic.oup.com/biomet/article/107/2/489/5715611

storopoli changed the title from "Support Bayesian evidence estimation for GLM models" to "[Tutorial] LOO-CV" on Feb 18, 2022
storopoli removed their assignment on Mar 1, 2024