Bayesian learning of feature spaces for multitask regression

Carlos Sevilla-Salcedo*, Ascensión Gallardo-Antolín, Vanessa Gómez-Verdejo, Emilio Parrado-Hernández

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

This paper introduces a novel approach to learning multitask regression models with constrained architecture complexity. The proposed model, named RFF-BLR, consists of a randomised feedforward neural network with two fundamental characteristics: a single hidden layer whose units implement the random Fourier features that approximate an RBF kernel, and a Bayesian formulation that optimises the weights connecting the hidden and output layers. The RFF-based hidden layer inherits the robustness of kernel methods. The Bayesian formulation enables promoting multioutput sparsity: all tasks interplay during the optimisation to select a compact subset of the hidden layer units that serve as a common non-linear mapping for every task. The experimental results show that the RFF-BLR framework can lead to significant performance improvements compared to the state-of-the-art methods in multitask nonlinear regression, especially in small-sized training dataset scenarios.
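The two ingredients described in the abstract can be illustrated with a minimal sketch: a fixed random Fourier feature map that approximates an RBF kernel (the single randomised hidden layer), followed by a sparsity-inducing Bayesian linear regression on those features. Note that this is not the authors' RFF-BLR implementation: the hidden-layer width, length-scale, and the per-task scikit-learn ARDRegression fits below are illustrative assumptions, whereas the paper's model couples all tasks in a single Bayesian formulation so that they jointly select a common subset of hidden units.

```python
# Minimal sketch (not the authors' RFF-BLR): random Fourier features that
# approximate an RBF kernel, followed by a sparsity-inducing Bayesian linear
# regression fitted per task. Widths, length-scale, and the use of
# scikit-learn's ARDRegression are illustrative assumptions.
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
n_hidden, lengthscale = 200, 1.0   # hidden-layer width and RBF length-scale (assumed values)

# Toy multitask data: two regression tasks sharing the same inputs.
X = rng.normal(size=(100, 5))
Y = np.column_stack([
    np.sin(X[:, 0]) + 0.1 * rng.normal(size=100),
    np.cos(X[:, 1]) + 0.1 * rng.normal(size=100),
])

# Random Fourier features approximating k(x, y) = exp(-||x - y||^2 / (2 * lengthscale^2)):
# fixed random weights W and phases b define the single randomised hidden layer.
W = rng.normal(scale=1.0 / lengthscale, size=(X.shape[1], n_hidden))
b = rng.uniform(0.0, 2.0 * np.pi, size=n_hidden)
phi = lambda Z: np.sqrt(2.0 / n_hidden) * np.cos(Z @ W + b)

Phi = phi(X)  # shared non-linear mapping reused by every task
models = [ARDRegression().fit(Phi, Y[:, t]) for t in range(Y.shape[1])]
Y_hat = np.column_stack([m.predict(Phi) for m in models])
```

Because the feature map is fixed after sampling W and b, only the output-layer weights are learned, which is what keeps the architecture complexity constrained; the Bayesian (ARD-style) prior then prunes hidden units whose weights shrink towards zero.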

Original language: English
Article number: 106619
Pages (from-to): 1-16
Number of pages: 16
Journal: Neural Networks
Volume: 179
DOIs
Publication status: Published - Nov 2024
MoE publication type: A1 Journal article-refereed

Keywords

  • Bayesian regression
  • Extreme learning machine
  • Kernel methods
  • Multitask regression
  • Random Fourier features
  • Random vector functional link networks
