## Abstract

This paper proposes a way to make ELM model predictions transparent and interpretable by adding confidence intervals to the predicted outputs. In supervised learning, outputs are often random variables: they may depend on unavailable information, they may be corrupted by noise, or the projection function itself may be stochastic. The probability distribution of outputs is input-dependent, and the observed output values are samples from that distribution. ELM, however, predicts deterministic outputs. The proposed method addresses this problem by estimating predictive Confidence Intervals (CIs) at a confidence level α, such that random output values fall within these intervals with probability α. Assuming that the outputs are normally distributed, only a standard deviation is needed to compute the CI of a predicted output (the predicted output itself is the mean). Our method provides CIs for ELM predictions by estimating the standard deviation of a random output for a particular input sample. It shows good results on both a toy dataset and a real skin segmentation dataset, and compares well with existing Confidence-weighted ELM methods. On the toy dataset, the predicted CIs accurately capture the varying variance of the outputs. On the real dataset, CIs improve the precision of a classification task at the cost of recall.
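The abstract's core construction can be sketched briefly: under the normality assumption, a two-sided CI at level α follows directly from the predicted mean and the estimated standard deviation. The sketch below assumes a mean and standard deviation are already available for a given input (the paper's own method for estimating that standard deviation is not reproduced here); the function name and signature are illustrative, not from the paper.

```python
from statistics import NormalDist

def confidence_interval(mean, std, alpha=0.95):
    """Two-sided predictive CI for a normally distributed output.

    mean  -- predicted output (treated as the mean of the distribution)
    std   -- estimated standard deviation for this input sample
    alpha -- confidence level; the true output falls in [lo, hi]
             with probability alpha under the normality assumption
    """
    # z-score for the two-sided interval, e.g. ~1.96 when alpha = 0.95
    z = NormalDist().inv_cdf((1 + alpha) / 2)
    return mean - z * std, mean + z * std

# Example: a prediction of 0.7 with estimated std 0.1 at the 95% level
lo, hi = confidence_interval(0.7, 0.1, alpha=0.95)
```

In a classification setting like skin segmentation, such an interval can be used to abstain from (or down-weight) uncertain predictions, which is consistent with the reported precision gain at the cost of recall.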

| Original language | English |
|---|---|
| Pages (from-to) | 232-241 |
| Number of pages | 10 |
| Journal | Neurocomputing |
| Volume | 219 |
| DOIs | |
| Publication status | Published - 5 Jan 2017 |
| MoE publication type | A1 Journal article-refereed |

## Keywords

- Big data
- Confidence
- Confidence interval
- Extreme learning machines
- Regression
- Skin segmentation