This repository contains code for the inference benchmarks and results reported in several papers on Variational Bayesian Monte Carlo (VBMC) [1,2,3].
The Variational Bayesian Monte Carlo (VBMC) algorithm itself is available in its own repository.
The goal of `infbench` is to compare various sample-efficient approximate inference algorithms proposed in the machine learning literature for dealing with (moderately) expensive and potentially noisy likelihoods. In particular, we want to infer the posterior over model parameters and (an approximation of) the model evidence or marginal likelihood, that is, the normalization constant of the posterior. Crucially, we assume that the budget is of the order of up to several hundred likelihood evaluations.
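Concretely, for data $\mathcal{D}$ and model parameters $\theta$, the quantities of interest are

$$p(\theta \mid \mathcal{D}) = \frac{p(\mathcal{D} \mid \theta) \, p(\theta)}{Z}, \qquad Z = \int p(\mathcal{D} \mid \theta) \, p(\theta) \, d\theta,$$

where the model evidence $Z$ is the normalization constant of the posterior.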
Notably, this goal is more ambitious than simply finding the maximum of the posterior (the MAP estimate), a problem that we previously tackled with Bayesian Adaptive Direct Search (aka BADS).
Our first benchmark shows that existing inference algorithms perform quite poorly at reconstructing posteriors (or evaluating their normalization constants) on both synthetic and real pdfs with moderately challenging features, showing that this is a much harder problem [1,2].
Our second extensive benchmark shows that the latest version of VBMC (v1.0, June 2020) beats other state-of-the-art methods on real computational problems when dealing with noisy log-likelihood evaluations, such as those arising from simulation-based estimation techniques [3].
You can run the benchmark on one test problem in the `vbmc18` problem set as follows:
```matlab
options = struct('SpeedTests',false);
[probstruct,history] = infbench_run('vbmc18',testfun,D,[],algo,id,options);
```
The arguments are:
- `testfun` (string) is the test pdf, which can be `'cigar'`, `'lumpy'`, or `'studentt'` (synthetic test functions with different properties), or `'goris2015'` (a real model-fitting problem with neuronal data, see here).
- `D` (integer) is the dimensionality of the problem for the synthetic test functions (typical values range from `D=2` to `D=10`), and `7` or `8` for the `goris2015` test set (corresponding to two different neuron datasets).
- `algo` (string) is the inference algorithm being tested. Specific settings for the chosen inference algorithm are selected with `'algo@setting'`, e.g., `'agp@debug'`.
- `id` (integer) is the run id, used to set the random seed. It can be an array, such as `id=1:5`, for multiple consecutive runs.
- `options` (struct) sets various options for the benchmark. For a fast test, I recommend setting the field `SpeedTests` to `false`, since the initial speed tests can be quite time consuming.
The outputs are:
- `probstruct` (struct) describes the current problem in detail.
- `history` (struct array) contains summary statistics of the run for each id.
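For instance, a quick test run on the `lumpy` target in `D = 2`, repeated for three seeds, might look like the sketch below (the `'vbmc'` algorithm string here is illustrative; substitute any algorithm identifier supported by your installation):

```matlab
% Minimal sketch: skip the time-consuming initial speed tests
options = struct('SpeedTests',false);

% Run the 'lumpy' synthetic target in D = 2 with ids 1:3 (three seeds);
% the 'vbmc' algorithm string is illustrative -- use any identifier
% supported by your installation
[probstruct,history] = infbench_run('vbmc18','lumpy',2,[],'vbmc',1:3,options);

% One summary entry per run id
disp(numel(history));   % prints 3
```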
The `vbmc20` benchmark includes a number of real, challenging models and data, largely from computational and cognitive neuroscience, ranging from `D = 3` to `D = 9`. The benchmark is mostly designed to test methods that deal with noisy log-likelihood evaluations.
You can run the benchmark on one test problem in the `vbmc20` problem set as follows:
```matlab
options = struct('SpeedTests',false);
[probstruct,history] = infbench_run('vbmc20',testfun,D,noise,algo,id,options);
```
The arguments are:
- `testfun` (string) indicates the tested model, which can be `'wood2010'` (Ricker), `'krajbich2010'` (aDDM), `'acerbi2012'` (Timing), `'acerbidokka2018'` (Multisensory), `'goris2015b'` (Neuronal), or `'akrami2018b'` (Rodent). The additional model presented in the Supplement is `'price2018'` (g-and-k).
- `D` (integer) is the dataset: `D = 1` for Ricker, Timing, Rodent, and g-and-k; `D = 1` or `D = 2` for aDDM and Multisensory; `D = 107` or `D = 108` for Neuronal. For all problems (except Neuronal), add 100 to `D` to obtain a noiseless version of the problem.
- `algo` (string) is the inference algorithm being tested. For the algorithms tested in the paper [3], use `'vbmc@imiqr'`, `'vbmc@viqr'`, `'vbmc@npro'`, `'vbmc@eig'`, `'parallelgp@v3'`, or `'wsabiplus@ldet'`.
- `noise` (double) is the standard deviation of Gaussian noise added to the log-likelihood. Leave it empty (`noise = []`) for all problems except Neuronal, for which we used `noise = 2` in the benchmark (all other problems in `vbmc20` are intrinsically noisy).
For the other input and output arguments, see above.
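Putting the pieces together, a run on the Neuronal problem with the noise setting used in the paper [3] could look like this (a sketch assembled from the argument descriptions above):

```matlab
% Neuronal problem (first dataset, D = 107), Gaussian noise of SD 2 added
% to the log-likelihood, VBMC with the VIQR acquisition function, one run
options = struct('SpeedTests',false);
[probstruct,history] = infbench_run('vbmc20','goris2015b',107,2,'vbmc@viqr',1,options);
```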
Code used to generate figures in the paper [3] is available in this folder. However, you will first need to run the benchmark (due to space limitations, we cannot upload the bulk of the numerical results here).
- Acerbi, L. (2018). Variational Bayesian Monte Carlo. In Advances in Neural Information Processing Systems 31: 8222-8232. (paper + supplement on arXiv, NeurIPS Proceedings)
- Acerbi, L. (2019). An Exploration of Acquisition and Mean Functions in Variational Bayesian Monte Carlo. In Proc. Machine Learning Research 96: 1-10. 1st Symposium on Advances in Approximate Bayesian Inference, Montréal, Canada. (paper in PMLR)
- Acerbi, L. (2020). Variational Bayesian Monte Carlo with Noisy Likelihoods. To appear in Advances in Neural Information Processing Systems 33. arXiv preprint arXiv:2006.08655 (preprint on arXiv).
This repository is currently actively used for research; stay tuned for updates:
- Follow me on Twitter for updates about my work on model inference and other projects I am involved with;
- If you have questions or comments about this work, get in touch at [email protected] (putting 'infbench' in the subject of the email).
The inference benchmark is released under the terms of the GNU General Public License v3.0.