Multilabel Classification through Random Graph Ensembles

Hongyu Su, Juho Rousu

Research output: Contribution to journal › Article › Scientific › peer-review

7 Citations (Scopus)

Abstract

We present new methods for multilabel classification, relying on ensemble learning over a collection of random output graphs imposed on the multilabel, with a kernel-based structured output learner as the base classifier. For ensemble learning, the differences among the output graphs provide the required base classifier diversity and lead to improved performance as the ensemble size increases. We study different methods of forming the ensemble prediction, including majority voting and two methods that perform inference over the graph structures before or after combining the base models into the ensemble. We put forward a theoretical explanation of the behaviour of multilabel ensembles in terms of the diversity and coherence of microlabel predictions, generalizing previous work on single-target ensembles. We compare our methods on a set of heterogeneous multilabel benchmark problems against state-of-the-art machine learning approaches, including multilabel AdaBoost and convex multitask feature learning, as well as single-target learning approaches represented by Bagging and SVM. In our experiments, the random graph ensembles are very competitive and robust, ranking first or second on most of the datasets. Overall, our results show that the proposed random graph ensembles are viable alternatives to flat multilabel and multitask learners.
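To illustrate the ensemble combination step described in the abstract, the following is a minimal sketch (in Python, with hypothetical helper names) of per-microlabel majority voting over predictions from base models, each associated with a different random output graph drawn over the label set. It is not the authors' implementation: the kernel-based structured output base learner used in the paper is replaced by dummy binary predictions, and the random-graph generator is only a simple edge sampler given for context.

```python
# Minimal sketch (assumptions, not the paper's implementation): per-microlabel
# majority voting over an ensemble whose base models are each tied to a random
# output graph over the labels. The structured output base learner itself is
# omitted; dummy binary microlabel predictions stand in for its output.
import numpy as np

def random_output_graph(n_labels, n_edges, rng):
    """Draw a random graph over the microlabels as a sorted list of (i, j) edges."""
    edges = set()
    while len(edges) < n_edges:
        i, j = rng.choice(n_labels, size=2, replace=False)
        edges.add((int(min(i, j)), int(max(i, j))))
    return sorted(edges)

def majority_vote(microlabel_preds):
    """Combine base-model predictions of shape (n_models, n_samples, n_labels),
    with entries in {0, 1}, by per-microlabel majority vote."""
    preds = np.asarray(microlabel_preds)
    votes = preds.mean(axis=0)          # fraction of base models predicting 1
    return (votes >= 0.5).astype(int)   # ties broken towards the positive label

# Usage with a hypothetical 5-model ensemble, 3 samples, 10 microlabels
rng = np.random.default_rng(0)
graphs = [random_output_graph(n_labels=10, n_edges=12, rng=rng) for _ in range(5)]
dummy_preds = rng.integers(0, 2, size=(5, 3, 10))
print(majority_vote(dummy_preds))
```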
Original language: English
Pages (from-to): 231-256
Number of pages: 26
Journal: Machine Learning
Volume: 99
Issue number: 2
DOIs
Publication status: Published - 2015
MoE publication type: A1 Journal article-refereed

