Neural Generalization of Multiple Kernel Learning

Ahmad Navid Ghanizadeh, Kamaledin Ghiasi-Shirazi*, Reza Monsefi, Mohammadreza Qaraei

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review



Multiple Kernel Learning (MKL) is a conventional way to learn the kernel function in kernel-based methods, and MKL algorithms improve the performance of kernel methods. However, these methods have lower representational capacity than deep models and fall short of them in recognition accuracy. Deep learning models can learn complex functions by applying nonlinear transformations to data through several layers. In this paper, we show that a typical MKL algorithm can be interpreted as a one-layer neural network with linear activation functions. Based on this interpretation, we propose a Neural Generalization of Multiple Kernel Learning (NGMKL), which extends the conventional MKL framework to a multi-layer neural network with nonlinear activation functions. Our experiments show that the proposed method, which has a higher capacity than traditional MKL methods, achieves higher recognition accuracy on several benchmarks.
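The abstract's interpretation of MKL as a one-layer linear network over base-kernel outputs, and its multi-layer nonlinear generalization, can be sketched as follows. The RBF base kernels, the weight shapes, and the specific two-layer form here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

# A bank of base kernels evaluated on one input pair.
# The RBF widths are illustrative assumptions, not from the paper.
def base_kernels(x, y, gammas=(0.1, 1.0, 10.0)):
    d2 = np.sum((x - y) ** 2)
    return np.array([np.exp(-g * d2) for g in gammas])

# Conventional MKL: a weighted linear combination of base kernels --
# equivalently, one layer with a linear activation over kernel outputs.
def mkl_kernel(x, y, w):
    return w @ base_kernels(x, y)

# NGMKL-style sketch: stack a hidden layer with a nonlinear activation
# on top of the base-kernel vector. W1 and w2 are hypothetical weights.
def ngmkl_kernel(x, y, W1, w2, act=np.tanh):
    h = act(W1 @ base_kernels(x, y))  # nonlinear hidden layer
    return w2 @ h                     # linear output combination

rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)
w = np.array([0.2, 0.5, 0.3])        # convex kernel weights (assumed)
k_lin = mkl_kernel(x, y, w)
W1 = rng.normal(size=(4, 3))
w2 = rng.normal(size=4)
k_deep = ngmkl_kernel(x, y, W1, w2)
```

In this sketch, learning the weights `w` recovers ordinary MKL, while training `W1` and `w2` by backpropagation corresponds to the multi-layer generalization described in the abstract.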

Original language: English
Article number: 12
Pages (from-to): 1-14
Number of pages: 14
Journal: Neural Processing Letters
Issue number: 1
Publication status: Published - Feb 2024
MoE publication type: A1 Journal article-refereed


  • Deep learning
  • Kernel methods
  • MKL
  • Multiple Kernel Learning
  • Neural networks


