Master's Defense

Boosting Variational Inference: Theory and Examples

Speaker: Xiangyu Wang
xw56 at
Date: Thursday, July 21, 2016
Time: 10:30am - 12:30pm
Location: D344 LSRC, Duke


Variational inference (VI) has rapidly emerged as a popular alternative to traditional Monte Carlo algorithms for Bayesian inference due to its simplicity and efficiency. However, the application of VI has been largely limited by its stringent requirements on the approximating family, leading to inferior approximations for complicated Bayesian models. To ameliorate this limitation, we propose a boosting variational inference (BVI) approach that leverages the gradient boosting technique to improve VI. BVI seeks to improve the initial variational approximation by adaptively mixing the approximate density with a new distributional component from the base family that best aligns with the residual likelihood. The resulting mixture approximation greatly expands VI's ability to capture multimodality and complicated tail behaviors, while maintaining a similar computational cost. In addition, the consistency of BVI is guaranteed by the boosting framework, allowing tractable analysis and monitoring of its performance. This work focuses on the theoretical aspects of the new methodology, and we demonstrate the strong performance of BVI through examples.
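The boosting loop described above can be illustrated with a minimal sketch. This is not the talk's actual algorithm: it works on a 1-D grid, uses the pointwise gap p - q as a crude proxy for the residual likelihood when placing a new Gaussian component, and picks the mixing weight by a simple line search over KL(q || p). All function names and tuning choices here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of a boosting-VI loop on a 1-D grid (assumed setup,
# not the exact BVI algorithm from the talk).

def gauss(x, mu, sigma):
    """Gaussian density, the assumed base family."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

# Multimodal target that a single base-family component cannot capture.
p = 0.5 * gauss(x, -3, 1.0) + 0.5 * gauss(x, 3, 1.0)

def kl(q, p):
    """Grid approximation of KL(q || p)."""
    mask = q > 1e-12
    return np.sum(q[mask] * np.log(q[mask] / p[mask])) * dx

# Initial variational approximation: one Gaussian, covering only one mode.
q = gauss(x, -3, 1.0)

for t in range(5):
    # Crude residual proxy: place the new component at the largest gap p - q.
    mu_new = x[np.argmax(p - q)]
    h = gauss(x, mu_new, 1.0)                 # new base-family component
    # Line search over the mixing weight of the new component.
    weights = np.linspace(0.01, 0.99, 99)
    kls = [kl((1 - w) * q + w * h, p) for w in weights]
    w_best = weights[int(np.argmin(kls))]
    q_new = (1 - w_best) * q + w_best * h
    if kl(q_new, p) >= kl(q, p):              # stop when boosting no longer helps
        break
    q = q_new

print(f"final KL(q || p): {kl(q, p):.2e}")
```

On this toy target the loop adds one component at the uncovered mode and drives the KL divergence essentially to zero, illustrating how the mixture grows one base-family component at a time while each step only requires fitting a single new component.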
Advisor(s): Katherine Heller
Committee: Rong Ge, David Dunson