Robust and Automated Variational Inference

Akash Kumar Dhaka

Research output: Doctoral Thesis (Collection of Articles)

Abstract

Bayesian inference offers a sound and consistent framework for analyzing data under uncertainty. Good decision making under uncertainty requires elaborate probabilistic models of the systems around us, with the hope of obtaining calibrated probabilistic predictions under unseen conditions. The challenge is that exact inference for these models is in general intractable, which necessitates approximate inference techniques that are both fast and accurate. The success of Bayesian methods, and of any application built on them, depends to a large extent on the approximate inference algorithm chosen by the user. Variational inference has emerged as a popular approximate inference method. It casts inference as an optimization problem: the task is to find an optimal distribution, within a tractable family, as an approximation to the true intractable posterior. This optimization requires fast and unbiased gradient estimates.

The contributions of this thesis fall into two themes. The first is the application of variational inference to fitting a Gaussian process model from batches of observations that have no numerical value but are available as rankings within a set. Interestingly, the approximation of the softmax link function for multi-class Gaussian process classification can also be seen as a pairwise comparison of classes. This viewpoint helps in deriving a similar variational inference algorithm that scales Gaussian process classification to settings where the number of classes and data points is very large compared to what existing algorithms can handle. The second part of the thesis deals with automated variational inference as a general-purpose inference tool for probabilistic programs, in the context of modern programming frameworks that use automatic differentiation to compute gradients.
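The optimization view described above can be illustrated with a minimal reparameterization-gradient sketch (the toy target and all names here are illustrative, not from the thesis): a Gaussian approximation q(z) = N(mu, s²) is fitted to an unnormalized target by stochastic gradient ascent, using Monte Carlo estimates of the ELBO gradient.

```python
import numpy as np

# Toy unnormalized target: log p(z) = -0.5 * (z - 2)^2, i.e. posterior N(2, 1).
def grad_log_p(z):
    return -(z - 2.0)

rng = np.random.default_rng(0)
mu, log_s = 0.0, 0.0          # variational parameters of q(z) = N(mu, exp(log_s)^2)
lr, n_samples = 0.05, 32

for _ in range(2000):
    s = np.exp(log_s)
    eps = rng.standard_normal(n_samples)
    z = mu + s * eps          # reparameterization: z = mu + s * eps, eps ~ N(0, 1)
    g = grad_log_p(z)
    grad_mu = g.mean()                       # unbiased estimate of d ELBO / d mu
    grad_log_s = (g * s * eps).mean() + 1.0  # + 1.0 comes from the entropy term log s
    mu += lr * grad_mu
    log_s += lr * grad_log_s

print(mu, np.exp(log_s))  # should approach the true posterior mean 2 and sd 1
```

Because the gradient estimates are unbiased, the iterates fluctuate around the optimum; with a small step size the fitted mean and standard deviation settle near the true posterior values.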
Recent innovations in automatic differentiation software, together with algorithmic improvements such as noisy unbiased gradient estimation via Monte Carlo sampling and mini-batching, have made it possible to use model-agnostic, standardized, stochastic-optimization-based algorithms and to scale them to large datasets. In settings where accurate posterior inference is important, this work shows some potential pitfalls of current practice that may lead to incorrect conclusions. It provides a wide set of diagnostic tools for evaluating whether the stochastic optimization has worked well enough and whether the obtained solution is accurate enough to be used as an approximation to the true posterior. The work concludes with a set of recommendations for the end user: either use a more expressive approximating distribution, or reparameterize the model itself in the hope of obtaining a posterior that is easier to approximate.
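One simple, generic diagnostic in this spirit (a hedged illustration only, not the specific diagnostics developed in the thesis) is to quantify the Monte Carlo error of the ELBO estimate itself: if the standard error is large relative to the ELBO differences being compared, conclusions drawn from those comparisons are unreliable. In this Gaussian toy example the exact ELBO is available for comparison.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fitted approximation q(z) = N(mu, s^2); unnormalized target log p(z) = -0.5*(z-2)^2.
mu, s = 1.8, 1.2

def elbo_samples(n):
    eps = rng.standard_normal(n)
    z = mu + s * eps
    log_p = -0.5 * (z - 2.0) ** 2
    log_q = -0.5 * np.log(2 * np.pi) - np.log(s) - 0.5 * eps ** 2
    return log_p - log_q  # per-sample ELBO contributions

# Repeat the estimate with independent draws to get a Monte Carlo standard error.
reps = np.array([elbo_samples(500).mean() for _ in range(20)])
elbo_mean = reps.mean()
elbo_se = reps.std(ddof=1) / np.sqrt(len(reps))

# Closed form for this Gaussian toy, for comparison only.
elbo_exact = -0.5 * ((mu - 2.0) ** 2 + s ** 2) + 0.5 * np.log(2 * np.pi) + np.log(s) + 0.5
print(f"ELBO = {elbo_mean:.3f} +/- {elbo_se:.3f} (exact {elbo_exact:.3f})")
```

In realistic models there is no closed form, but the standard-error computation still applies and gives a first check on whether the stochastic estimates are precise enough to trust.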
Translated title of the contribution: Robust and Automated Variational Inference
Original language: English
Qualification: Doctor's degree
Awarding Institution
  • Aalto University
Supervisors/Advisors
  • Vehtari, Aki, Supervising Professor
  • Vehtari, Aki, Thesis Advisor
Publisher
Print ISBNs: 978-952-64-1076-0
Electronic ISBNs: 978-952-64-1077-7
Publication status: Published - 2022
MoE publication type: G5 Doctoral dissertation (article)

Keywords

  • Bayesian inference
  • uncertainty
  • data analysis
