Abstract
This paper introduces a novel approach to learning multi-task regression models with constrained architecture complexity. The proposed model, named RFF-BLR, consists of a randomised feedforward neural network with two fundamental characteristics: a single hidden layer whose units implement the random Fourier features that approximate an RBF kernel, and a Bayesian formulation that optimises the weights connecting the hidden and output layers. The RFF-based hidden layer inherits the robustness of kernel methods. The Bayesian formulation promotes multi-output sparsity: all tasks interplay during the optimisation to select a compact subset of the hidden-layer units that serve as a common non-linear mapping for every task. The experimental results show that the RFF-BLR framework can lead to significant performance improvements over state-of-the-art methods in multi-task nonlinear regression, especially in scenarios with small training datasets.
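The RFF-based hidden layer described above can be illustrated with a minimal sketch. Random Fourier features map an input x to z(x) = sqrt(2/D) cos(Wx + b), with W drawn from the Fourier transform of the RBF kernel and b uniform on [0, 2π), so that z(x)·z(y) ≈ exp(-||x − y||² / (2σ²)). All names and parameters below are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def rff_features(X, n_features=2000, sigma=1.0, seed=0):
    """Map inputs X of shape (n, d) to random Fourier features (n, n_features)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies sampled from N(0, 1/sigma^2), the spectral density of the RBF kernel
    W = rng.normal(scale=1.0 / sigma, size=(d, n_features))
    # Random phase offsets, uniform on [0, 2*pi)
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Compare the feature inner product with the exact RBF kernel for two points
x = np.array([[0.3, -0.7]])
y = np.array([[0.1, 0.4]])
approx = float(rff_features(x) @ rff_features(y).T)
exact = float(np.exp(-np.sum((x - y) ** 2) / (2.0 * 1.0 ** 2)))
```

In the model described by the abstract, these hidden-layer activations would then be fed to a Bayesian linear regression over the output weights; the sketch above only covers the kernel-approximation step.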
Original language | English |
---|---|
Article number | 106619 |
Pages (from-to) | 1-16 |
Number of pages | 16 |
Journal | Neural Networks |
Volume | 179 |
DOIs | |
Publication status | Published - Nov 2024 |
MoE publication type | A1 Journal article-refereed |
Keywords
- Bayesian regression
- Extreme learning machine
- Kernel methods
- Multitask regression
- Random Fourier features
- Random vector functional link networks