Convergence guarantees for adaptive Bayesian quadrature methods

Kanagawa, Motonobu
Risk and Statistics 2019, 2nd ISM-UUlm Joint Workshop, 8-10 October 2019, Ulm, Germany

Adaptive Bayesian quadrature (ABQ) is a powerful approach to numerical integration that empirically compares favorably with Monte Carlo integration on problems of medium dimensionality (where non-adaptive quadrature is not competitive). Its key ingredient is an acquisition function that changes as a function of previously collected values of the integrand. While this adaptivity appears to be empirically powerful, it complicates analysis, and consequently no theoretical guarantees have so far been established for this class of methods. In this work, we prove consistency for a broad class of adaptive Bayesian quadrature methods, deriving non-tight but informative convergence rates. To do so we introduce a new concept we call weak adaptivity. In guaranteeing the consistency of ABQ, weak adaptivity plays a role notionally similar to that of detailed balance and ergodicity in Markov chain Monte Carlo (MCMC), which provide sufficient conditions for the consistency of MCMC. Likewise, our results identify a large and flexible class of adaptive Bayesian quadrature rules as consistent, within which practitioners can develop empirically efficient methods.
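To make the idea concrete, the following is a minimal sketch of an ABQ loop, not the specific rules analyzed in the paper: a Gaussian-process surrogate of the integrand is updated after each evaluation, and the next evaluation point maximizes a hypothetical acquisition function (posterior variance weighted by the squared posterior mean) that depends on previously collected integrand values, which is exactly what makes the rule adaptive. The kernel, lengthscale, and acquisition choice here are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    # RBF kernel; the lengthscale 0.2 is an illustrative assumption
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def abq(f, n_iter=15, grid=None):
    # Candidate grid over the integration domain [0, 1]
    if grid is None:
        grid = np.linspace(0.0, 1.0, 201)
    X = np.array([0.5])          # initial design point
    y = f(X)
    for _ in range(n_iter):
        K = rbf(X, X) + 1e-6 * np.eye(len(X))
        Kinv = np.linalg.inv(K)
        ks = rbf(grid, X)
        mean = ks @ Kinv @ y
        var = 1.0 - np.einsum('ij,jk,ik->i', ks, Kinv, ks)
        # Adaptive acquisition: posterior variance weighted by the squared
        # posterior mean. Because it depends on the observed values y,
        # the point sequence adapts to the integrand.
        acq = var * mean ** 2
        x_next = grid[np.argmax(acq)]
        X = np.append(X, x_next)
        y = np.append(y, f(np.array([x_next])))
    # Bayesian quadrature estimate: z^T K^{-1} y, with the kernel mean
    # embedding z approximated by averaging the kernel over the grid
    K = rbf(X, X) + 1e-6 * np.eye(len(X))
    z = rbf(grid, X).mean(axis=0)
    return z @ np.linalg.solve(K, y)

# Example: a smooth integrand on [0, 1]
f = lambda x: np.exp(-3.0 * (x - 0.3) ** 2)
estimate = abq(f)
```

A non-adaptive rule would fix the design points in advance; here each new point depends on all past function values, which is the property that complicates the convergence analysis the abstract refers to.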


Type:
Conference
City:
Ulm
Date:
2019-10-08
Department:
Data Science
Eurecom Ref:
6131
Copyright:
© EURECOM. Personal use of this material is permitted. The definitive version of this paper was published in Risk and Statistics 2019, 2nd ISM-UUlm Joint Workshop, 8-10 October 2019, Ulm, Germany and is available at:
PERMALINK : https://www.eurecom.fr/publication/6131