Abstract
This thesis studies Bayesian inference in the context of high-dimensional and complex models. The main focus is on robustness and reliability, approached through two important topics. First, we study projection predictive inference in the context of variable selection, where the goal is to accurately identify the minimal subset of variables relevant for predicting the outcome. Second, we study variational inference and how different choices of variational family, divergence measure, and gradient estimator affect posterior inference in high-dimensional problems.

Traditionally, variable selection is carried out as part of model estimation by incorporating a penalized likelihood term or a sparsifying prior. These approaches favour sparse solutions, but the final variable selection depends on arbitrary criteria imposed by the user (e.g. thresholding the inclusion probability of a variable). Projection predictive inference instead solves variable selection and estimation in two stages. First, one builds the best-performing model possible. Then, one finds the minimal subset of variables whose predictions are closest to those of this reference model. Variable selection becomes a substantially easier problem when a very accurate prediction model is used as the reference model.

Variational inference, on the other hand, is a widely used framework for approximate inference in models where exact inference is intractable. While it has been shown to scale well to large datasets, it has limitations when dealing with high-dimensional data. One such limitation is the unreliable estimation of the objective function, which undermines the robustness of the termination criteria of these algorithms.

This thesis studies and connects these topics. We extend projection predictive inference to complex models, such as generalized multilevel models and models whose observation family does not belong to the exponential family. In such cases, the underlying projection cannot be solved exactly, and one must rely on approximate inference methods. Complementarily, we develop novel methods to ensure more robust convergence of variational inference algorithms. We also study how variational inference extrapolates to high-dimensional problems and propose a unified framework to better understand its limitations, which directly benefits our projection predictive inference work.
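The two-stage idea described above can be sketched on a toy Gaussian problem. This is a minimal illustration, not the thesis's method: for a Gaussian observation model the projection reduces to fitting each candidate submodel to the reference model's fitted values, and a greedy forward search picks, at each step, the variable that brings the projected predictions closest to the reference. All names (`ols_fit`, `mu_ref`, the data-generating coefficients) are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: only the first 2 of 10 predictors are relevant.
n, p = 200, 10
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=n)

def ols_fit(X, y):
    """Return least-squares fitted values (with an intercept)."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

# Stage 1: the reference model uses all predictors.
mu_ref = ols_fit(X, y)

# Stage 2: greedy forward search. At each step add the variable whose
# projected submodel predictions are closest to the reference predictions
# (for a Gaussian model, "projection" = refitting to mu_ref).
selected, remaining, path = [], list(range(p)), []
for _ in range(p):
    disc = lambda j: np.mean((ols_fit(X[:, selected + [j]], mu_ref) - mu_ref) ** 2)
    best = min(remaining, key=disc)
    selected.append(best)
    remaining.remove(best)
    path.append((best, np.mean((ols_fit(X[:, selected], mu_ref) - mu_ref) ** 2)))

for j, d in path:
    print(f"add x{j}: projection discrepancy {d:.4f}")
```

The discrepancy to the reference model drops sharply once the truly relevant predictors are included, after which adding variables yields only negligible gains; that elbow is what makes the minimal subset easy to read off.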
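The termination issue mentioned above comes from the objective being estimated by Monte Carlo. The following sketch (hypothetical setup, not the thesis's experiments) shows the noise directly: for a mean-field Gaussian approximation to a standard normal target, the ELBO estimate from a handful of draws fluctuates widely, and a stopping rule that compares successive noisy estimates can terminate too early or too late.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_p(z):
    """Unnormalized log-density of a standard normal target."""
    return -0.5 * np.sum(z ** 2, axis=-1)

# Mean-field Gaussian variational approximation q(z) = N(mu, diag(sigma^2)).
d = 50                       # higher dimension -> noisier estimator
mu = np.full(d, 0.1)
log_sigma = np.full(d, -0.1)

def elbo_estimate(n_draws):
    """Monte Carlo ELBO: E_q[log p(z)] + entropy of q."""
    z = mu + np.exp(log_sigma) * rng.normal(size=(n_draws, d))
    entropy = np.sum(log_sigma + 0.5 * np.log(2 * np.pi * np.e))
    return np.mean(log_p(z)) + entropy

# The estimator's spread shrinks roughly like 1/sqrt(n_draws).
for n in (1, 10, 100):
    draws = np.array([elbo_estimate(n) for _ in range(500)])
    print(f"n_draws={n:4d}  sd of ELBO estimate = {draws.std():.3f}")
```

With one draw the standard deviation of the estimate is on the same scale as typical per-iteration ELBO improvements, which is why termination rules based on raw ELBO differences are fragile in high dimensions.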
| Translated title of the contribution | Robust Bayesian Inference: variable and structure selection and variational inference |
| --- | --- |
| Original language | English |
| Qualification | Doctor's degree |
| Awarding Institution | |
| Supervisors/Advisors | |
| Publisher | |
| Print ISBNs | 978-952-64-1453-9 |
| Electronic ISBNs | 978-952-64-1454-6 |
| Publication status | Published - 2023 |
| MoE publication type | G5 Doctoral dissertation (article) |
Keywords
- computer science
- variational inference