Probably Approximately Correct (PAC) learning and Bayesian view #5
emptymalei started this conversation in 2. Journal Club (Machine Learning)
The first time I read about PAC was in the book *The Nature of Statistical Learning Theory* by Vapnik[^1].
PAC is a systematic theory of why learning from data is feasible at all[^2]. The idea is to quantify the error of a hypothesis learned from data: under certain conditions, e.g., a large enough dataset, the error can be made arbitrarily small with high probability. Guedj[^3] gives a concise summary of this view.
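To make "arbitrarily small error under certain conditions" concrete, here is a minimal sketch of the classic PAC sample-complexity bound for a finite hypothesis class in the realizable case, m ≥ (ln|H| + ln(1/δ))/ε. The function name and numbers are illustrative, not from the references above:

```python
import math

def pac_sample_complexity(h_size: int, epsilon: float, delta: float) -> int:
    """Number of samples sufficient so that, with probability at least
    1 - delta, empirical risk minimization over a finite hypothesis class
    of size h_size (realizable case) returns a hypothesis with true error
    at most epsilon: m >= (ln|H| + ln(1/delta)) / epsilon."""
    return math.ceil((math.log(h_size) + math.log(1.0 / delta)) / epsilon)

# Tighter error tolerance or higher confidence demands more data.
print(pac_sample_complexity(1000, 0.1, 0.05))   # → 100
```

The bound depends only logarithmically on the size of the hypothesis class and on 1/δ, which is why even large (finite) classes remain learnable with modest data.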
Bayesian learning is an important topic in machine learning: we apply Bayes' rule to the components of learning, e.g., using the posterior in the loss function. There is also a PAC theory for Bayesian-style algorithms, PAC-Bayes, which explains why they work; Guedj wrote a primer on this topic[^3].
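As a small illustration of the flavor of PAC-Bayes results, the following sketches a McAllester-style bound: with probability at least 1 − δ, the posterior's expected true risk is controlled by its empirical risk plus a complexity term involving the KL divergence from the prior. Exact constants vary across statements in the literature, so treat this as a schematic, not the definitive form from the primer:

```python
import math

def mcallester_bound(emp_risk: float, kl: float, m: int, delta: float) -> float:
    """McAllester-style PAC-Bayes bound (schematic form): with probability
    at least 1 - delta over an i.i.d. sample of size m, for the posterior rho,
      E_rho[R(h)] <= E_rho[r(h)] + sqrt((KL(rho || pi) + ln(2*sqrt(m)/delta)) / (2m)),
    where r is the empirical risk, R the true risk, and pi the prior."""
    return emp_risk + math.sqrt(
        (kl + math.log(2.0 * math.sqrt(m) / delta)) / (2.0 * m)
    )

# A posterior close to the prior (small KL) and more data both tighten the bound.
print(round(mcallester_bound(0.10, 5.0, 10_000, 0.05), 4))
```

The trade-off this exposes is the core of PAC-Bayes: a posterior that fits the data well but stays close to the prior gets a tight generalization guarantee.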
Footnotes
[^1]: Vladimir N. Vapnik. *The Nature of Statistical Learning Theory*. 2000. doi:10.1007/978-1-4757-3264-1
[^2]: Valiant LG. A theory of the learnable. Commun ACM. 1984;27: 1134–1142. doi:10.1145/1968.1972
[^3]: Guedj B. A Primer on PAC-Bayesian Learning. arXiv [stat.ML]. 2019. Available: http://arxiv.org/abs/1901.05353