Abstract
Multi-output inference tasks, such as multi-label classification, have become increasingly important in recent years. A popular method for multi-label classification is classifier chains, in which the predictions of individual classifiers are cascaded along a chain, thus taking into account inter-label dependencies and improving overall performance. Several varieties of classifier chain methods have been introduced, and many of them perform very competitively across a wide range of benchmark datasets. However, scalability limitations become apparent on larger datasets when modelling a fully cascaded chain. In particular, these methods' strategies for discovering and modelling a good chain structure constitute a major computational bottleneck. In this paper, we present the classifier trellis (CT) method for scalable multi-label classification. We compare CT with several recently proposed classifier chain methods to show that it occupies an important niche: it is highly competitive on standard multi-label problems, yet it can also scale up to thousands or even tens of thousands of labels.
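The cascading idea described above can be illustrated with a short sketch. This is not the paper's classifier trellis method, only a minimal classifier chain in which each binary classifier sees the original features plus the predictions of earlier labels in a fixed chain order; the synthetic dataset, logistic regression base learner, and helper names are illustrative assumptions.

```python
# Minimal classifier chain sketch (assumed setup, not the paper's CT method):
# each label's classifier is trained on the features augmented with the
# predictions for all earlier labels, so inter-label dependencies along the
# chosen chain order can be exploited.
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression

X, Y = make_multilabel_classification(n_samples=200, n_features=10,
                                      n_classes=4, random_state=0)

chain_order = list(range(Y.shape[1]))  # a fixed label order; finding a good
                                       # order/structure is the costly part
models, augmented = [], X
for j in chain_order:
    clf = LogisticRegression(max_iter=1000).fit(augmented, Y[:, j])
    models.append(clf)
    # Cascade: append this label's predictions as an extra feature column.
    augmented = np.hstack([augmented, clf.predict(augmented)[:, None]])

def predict_chain(X_new):
    # Prediction cascades the same way on new data.
    feats, preds = X_new, []
    for clf in models:
        p = clf.predict(feats)
        preds.append(p)
        feats = np.hstack([feats, p[:, None]])
    return np.column_stack(preds)

print(predict_chain(X[:5]))
```

The quadratic growth of such fully cascaded feature augmentation with the number of labels is what motivates the scalability concerns raised in the abstract.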
| Original language | English |
|---|---|
| Pages (from-to) | 2096-2109 |
| Number of pages | 14 |
| Journal | Pattern Recognition |
| Volume | 48 |
| Issue number | 6 |
| DOIs | |
| Publication status | Published - 1 Jun 2015 |
| MoE publication type | A1 Journal article-refereed |
Keywords
- Bayesian networks
- Classifier chains
- Multi-label classification
- Multi-output prediction
- Structured inference