Uncertainty in Bayesian Leave-One-Out Cross-Validation Based Model Comparison

Tuomas Sivula, Aki Vehtari, Måns Magnusson

Research output: Contribution to journal › Article › Scientific › peer-review


Leave-one-out cross-validation (LOO-CV) is a popular method for comparing Bayesian models based on their estimated predictive performance on new, unseen data. Estimating the uncertainty of the resulting LOO-CV estimate is a complex task, and it is known that the commonly used standard error estimate is often too small. We analyse the frequency properties of the LOO-CV estimator and study the uncertainty related to it. We provide new results on the properties of this uncertainty, both theoretically and empirically, and discuss the challenges of estimating it. We show that problematic cases include comparing models with similar predictions, misspecified models, and small data. In these cases, there is only a weak connection between the skewness of the sampling distribution and the distribution of the error of the LOO-CV estimator. We show that, in certain situations, the problematic skewness of the error distribution, which occurs when the models make similar predictions, does not fade away even as the data size grows to infinity.
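As a minimal sketch of the comparison setup discussed in the abstract: the LOO-CV difference estimate between two models is the sum of pointwise differences in leave-one-out log predictive densities, and the commonly used standard error is a normal approximation based on the sample variance of those pointwise differences. The data below are simulated placeholders, not results from the paper.

```python
import numpy as np

# Hypothetical pointwise LOO log predictive densities for two models,
# elpd_i = log p(y_i | y_{-i}); simulated here purely for illustration.
rng = np.random.default_rng(0)
n = 200
loo_a = rng.normal(-1.0, 0.5, size=n)
loo_b = loo_a + rng.normal(0.05, 0.1, size=n)  # model B slightly better

# Pairwise differences; the LOO-CV comparison estimate is their sum.
diff = loo_b - loo_a
elpd_diff = diff.sum()

# Commonly used normal-approximation standard error of the difference --
# the estimate the paper argues is often too small when the models
# make similar predictions.
se_diff = np.sqrt(n * diff.var(ddof=1))

print(f"elpd_diff = {elpd_diff:.2f}, SE = {se_diff:.2f}")
```

When the two models make similar predictions, the pointwise differences are small and correlated, which is exactly the regime where this normal-approximation SE can understate the true uncertainty.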
Original language: English
Number of pages: 88
Publication status: Submitted - 3 Sep 2020
MoE publication type: A1 Journal article-refereed


Keywords:

  • Bayesian computation
  • model comparison
  • leave-one-out cross-validation
  • uncertainty
  • asymptotics

