Mixture kernel least mean square

Rosha Pokharel, Sohan Seth, Jose C. Principe

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

    28 Citations (Scopus)

    Abstract

    Instead of using a single kernel, several approaches that use multiple kernels have recently been proposed in the kernel learning literature, one of which is multiple kernel learning (MKL). In this paper, we propose an alternative to MKL for selecting an appropriate kernel from a pool of predefined kernels, for a family of online kernel filters called kernel adaptive filters (KAF). An alternative is needed because, in a sequential learning method where the hypothesis is updated at every incoming sample, MKL would provide a new kernel, and thus a new hypothesis in the new reproducing kernel Hilbert space (RKHS) associated with that kernel. This does not fit the KAF framework, as learning a hypothesis in a fixed RKHS is at the core of KAF algorithms. Hence, we introduce an adaptive learning method that addresses the kernel selection problem for KAF, based on a competitive mixture of models. We propose the mixture kernel least mean square (MxKLMS) adaptive filtering algorithm, in which kernel least mean square (KLMS) filters learned with different kernels act in parallel at each input instance and are competitively combined, so that the filter with the best kernel becomes an expert for each input regime. The competition among these experts is created by performance-based gating that chooses the appropriate expert locally. The individual filter parameters, as well as the weights for combining these filters, are therefore learned simultaneously in an online fashion. The results obtained suggest that the model not only selects the best kernel but also significantly improves prediction accuracy.
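    The scheme described in the abstract — several KLMS filters running in parallel, combined through a performance-based gate — can be sketched as follows. This is a minimal illustration, not the authors' exact algorithm: the Gaussian kernel widths, the step size, and in particular the gating rule (a softmax over exponentially averaged squared errors) are assumptions made for the sake of a concrete example.

    ```python
    import numpy as np

    def gaussian_kernel(x, centers, sigma):
        # Gaussian kernel between input x and all stored centers (rows)
        return np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * sigma ** 2))

    class KLMS:
        """Kernel least mean square filter with a Gaussian kernel."""
        def __init__(self, sigma, eta=0.5):
            self.sigma, self.eta = sigma, eta
            self.centers, self.coeffs = [], []

        def predict(self, x):
            if not self.centers:
                return 0.0
            k = gaussian_kernel(x, np.array(self.centers), self.sigma)
            return float(np.dot(self.coeffs, k))

        def update(self, x, error):
            # KLMS update: store the input as a new center, weighted by eta * error
            self.centers.append(x)
            self.coeffs.append(self.eta * error)

    class MxKLMS:
        """Sketch of a competitive mixture of KLMS filters.
        Gating rule (softmax of negative running squared errors) is an
        assumption, chosen to weight the locally better-performing expert."""
        def __init__(self, sigmas, eta=0.5, beta=0.9, temp=5.0):
            self.filters = [KLMS(s, eta) for s in sigmas]
            self.err_avg = np.zeros(len(sigmas))   # running squared error per expert
            self.beta, self.temp = beta, temp

        def step(self, x, d):
            preds = np.array([f.predict(x) for f in self.filters])
            # performance-based gate: lower running error -> larger weight
            gates = np.exp(-self.temp * self.err_avg)
            gates /= gates.sum()
            y = float(np.dot(gates, preds))
            errs = d - preds
            # update running errors and each expert simultaneously (online)
            self.err_avg = self.beta * self.err_avg + (1 - self.beta) * errs ** 2
            for f, e in zip(self.filters, errs):
                f.update(x, e)
            return y
    ```

    In use, the mixture is fed one sample at a time, e.g. `y = mix.step(x, d)` for input `x` and desired output `d`; after enough samples, the gate concentrates on the expert whose kernel width best matches the local input regime.
    
    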

    Original language: English
    Title of host publication: 2013 International Joint Conference on Neural Networks, IJCNN 2013
    DOIs
    Publication status: Published - 2013
    MoE publication type: A4 Article in a conference publication
    Event: International Joint Conference on Neural Networks - Dallas, United States
    Duration: 4 Aug 2013 – 9 Aug 2013

    Conference

    Conference: International Joint Conference on Neural Networks
    Abbreviated title: IJCNN
    Country: United States
    City: Dallas
    Period: 04/08/2013 – 09/08/2013

