On Flat versus Hierarchical Classification in Large-Scale Taxonomies

Rohit Babbar, Ioannis Partalas, Eric Gaussier, Massih-Reza Amini

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Abstract

We study in this paper flat and hierarchical classification strategies in the context of large-scale taxonomies. To this end, we first propose a multiclass, hierarchical data dependent bound on the generalization error of classifiers deployed in large-scale taxonomies. This bound provides an explanation to several empirical results reported in the literature, related to the performance of flat and hierarchical classifiers. We then introduce another type of bound targeting the approximation error of a family of classifiers, and derive from it features used in a meta-classifier to decide which nodes to prune (or flatten) in a large-scale taxonomy. We finally illustrate the theoretical developments through several experiments conducted on two widely used taxonomies.
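For intuition, the pruning (or flattening) operation the abstract refers to can be pictured as deleting an internal node of the taxonomy and re-attaching its children directly to its parent, so that one level of local classifiers disappears. The Python sketch below is only an illustration under that reading, not the authors' implementation; the TaxonomyNode class, the node labels, and the flatten_node helper are all hypothetical.

# Minimal sketch (illustrative, not the paper's code): flattening one
# internal node of a class taxonomy by re-attaching its children to its
# parent, which is the kind of pruning decision the meta-classifier makes.

class TaxonomyNode:
    def __init__(self, label):
        self.label = label
        self.parent = None
        self.children = []

    def add_child(self, child):
        child.parent = self
        self.children.append(child)


def flatten_node(node):
    """Remove `node` from the taxonomy and attach its children to its
    parent, making the hierarchy locally flatter (one fewer level)."""
    parent = node.parent
    if parent is None:
        raise ValueError("cannot flatten the root node")
    parent.children.remove(node)
    for child in node.children:
        child.parent = parent
        parent.children.append(child)
    node.children = []
    node.parent = None


# Usage: a toy taxonomy root -> science -> {physics, biology};
# flattening `science` leaves root -> {physics, biology}.
root = TaxonomyNode("root")
science = TaxonomyNode("science")
physics = TaxonomyNode("physics")
biology = TaxonomyNode("biology")
root.add_child(science)
science.add_child(physics)
science.add_child(biology)

flatten_node(science)
print([c.label for c in root.children])  # ['physics', 'biology']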
Original language: English
Title of host publication: NIPS'13: Proceedings of the 26th International Conference on Neural Information Processing Systems
Pages: 1824-1832
Volume: 2
Publication status: Published - 2013
MoE publication type: A4 Article in a conference publication
Event: Conference on Neural Information Processing Systems - Lake Tahoe, United States
Duration: 5 Dec 2013 – 10 Dec 2013

Publication series

Name: Advances in Neural Information Processing Systems
ISSN (Print): 1049-5258

Conference

Conference: Conference on Neural Information Processing Systems
Abbreviated title: NIPS
Country: United States
City: Lake Tahoe
Period: 05/12/2013 – 10/12/2013

