Efficient leave-one-out cross-validation for Bayesian non-factorized normal and Student-t models

Paul Christian Bürkner*, Jonah Gabry, Aki Vehtari

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-reviewed

Abstract

Cross-validation can be used to measure a model’s predictive accuracy for the purpose of model comparison, averaging, or selection. Standard leave-one-out cross-validation (LOO-CV) requires that the observation model can be factorized into simple terms, but many important models in temporal and spatial statistics do not have this property, or become inefficient or unstable when forced into a factorized form. We derive how to efficiently compute and validate both exact and approximate LOO-CV for any Bayesian non-factorized model with a multivariate normal or Student-t distribution on the outcome values. We demonstrate the method using lagged simultaneously autoregressive (SAR) models as a case study.
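The core quantity in this setting is the conditional density p(y_i | y_{-i}, θ), which is available in closed form when the outcome is multivariate normal. Below is a minimal sketch (not the authors' implementation) of that per-draw computation, assuming hypothetical inputs y, mu_draws, and Sigma_draws holding the observed outcomes and posterior draws of the latent mean and covariance:

```python
# Minimal sketch of the exact per-draw LOO conditional densities for a
# multivariate normal observation model y ~ N(mu, Sigma), using the
# standard conditional-normal identities
#   sigma2_i  = 1 / [Sigma^{-1}]_{ii}
#   mu_{i|-i} = y_i - [Sigma^{-1} (y - mu)]_i * sigma2_i
# (hypothetical helper, not the paper's reference code)

import numpy as np
from scipy.stats import norm


def loo_conditional_logdens(y, mu_draws, Sigma_draws):
    """Per-draw log p(y_i | y_{-i}, theta^(s)).

    y           : (N,) observed outcome vector
    mu_draws    : (S, N) posterior draws of the mean vector
    Sigma_draws : (S, N, N) posterior draws of the covariance matrix
    Returns an (S, N) matrix of conditional log densities.
    """
    S, N = mu_draws.shape
    out = np.empty((S, N))
    for s in range(S):
        Q = np.linalg.inv(Sigma_draws[s])      # precision matrix Sigma^{-1}
        g = Q @ (y - mu_draws[s])              # Sigma^{-1} (y - mu)
        sigma2 = 1.0 / np.diag(Q)              # conditional variances
        mu_cond = y - g * sigma2               # conditional means
        out[s] = norm.logpdf(y, loc=mu_cond, scale=np.sqrt(sigma2))
    return out
```

In the approximate (PSIS-LOO) variant described in the paper, these per-draw conditional densities are combined with Pareto-smoothed importance-sampling weights rather than a plain average over draws, since the posterior draws condition on the held-out observation y_i.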

Original language: English
Pages (from-to): 1243–1261
Number of pages: 19
Journal: Computational Statistics
Volume: 36
Early online date: 2020
Publication status: Published - 2021
MoE publication type: A1 Journal article-refereed

Keywords

  • Bayesian inference
  • Cross-validation
  • Non-factorized models
  • Pareto-smoothed importance-sampling
  • SAR models
