Jonathan Huggins, Boston University


Talk Title: “Trustworthy Variational Inference”

Abstract: Black-box variational inference (BBVI) has become an increasingly attractive, fast alternative to Markov chain Monte Carlo methods for general-purpose approximate Bayesian inference. However, two major obstacles have limited the widespread use of BBVI methods, particularly in data science applications where reliable inferences are a necessity. The first is that the standard stochastic optimization methods used for black-box variational inference lack robustness across diverse model types. I will present a more robust and accurate stochastic optimization framework tailored to variational inference. The second is that variational methods lack post-hoc accuracy measures that are both theoretically justified and computationally efficient. To close this gap, I will describe new diagnostics that quantify the error of variational posterior mean and uncertainty estimates. I will conclude by showing how our optimization framework and diagnostics point toward a new and improved workflow for more trustworthy variational inference.