Convex Surrogates for Unbiased Loss Functions in Extreme Classification With Missing Labels

Mohammadreza Mohammadnia Qaraei, Erik Schultheis, Priyanshu Gupta, Rohit Babbar*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

18 Citations (Scopus)
427 Downloads (Pure)

Abstract

Extreme Classification (XC) refers to supervised learning where each training/test instance is labeled with a small subset of relevant labels chosen from a large set of possible target labels. The framework of XC has been widely employed in web applications such as automatic labeling of web encyclopedias, prediction of related searches, and recommendation systems. While most state-of-the-art models in XC achieve high overall accuracy by performing well on the frequently occurring labels, they perform poorly on a large number of infrequent (tail) labels. This arises from two statistical challenges: (i) missing labels, as it is virtually impossible to manually assign every relevant label to an instance, and (ii) a highly imbalanced data distribution where a large fraction of labels are tail labels. In this work, we consider common loss functions that decompose over labels, and calculate unbiased estimates that compensate for missing labels according to Natarajan et al. [26]. This turns out to be disadvantageous from an optimization perspective, as important properties such as convexity and lower-boundedness are lost. To circumvent this problem, we use the fact that typical loss functions in XC are convex surrogates of the 0-1 loss, and thus propose to switch to convex surrogates of its unbiased version. These surrogates are further adapted to the label imbalance by combining them with label-frequency-based rebalancing. We show that the proposed loss functions can be easily incorporated into various frameworks for extreme classification, including (i) linear classifiers, such as DiSMEC, on sparse input data representations, (ii) an attention-based deep architecture, AttentionXML, learnt on dense GloVe embeddings, and (iii) an XLNet-based transformer model for extreme classification, APLC-XLNet. Our results demonstrate consistent improvements over the respective vanilla baseline models on the propensity-scored metrics for precision and nDCG.
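
For context, a minimal sketch of the unbiased-estimator construction referenced in the abstract, written in our own notation rather than taken from the paper: assume one-sided missing-label noise, in which a truly relevant label is observed with propensity $p_\ell$ and an irrelevant label is never spuriously marked relevant, and let $\ell(t, y)$ denote a per-label loss on score $t$ with $y \in \{+1, -1\}$. The Natarajan et al. style correction then reads

\[
\tilde{\ell}(t, -1) = \ell(t, -1),
\qquad
\tilde{\ell}(t, +1) = \frac{1}{p_\ell}\,\ell(t, +1) \;-\; \frac{1 - p_\ell}{p_\ell}\,\ell(t, -1),
\]

so that the expectation of $\tilde{\ell}$ over the observed label equals the loss on the true label. The negated second term is what can break convexity and lower-boundedness, which is why the paper proposes optimizing a convex surrogate of the unbiased 0-1 loss instead of applying this correction directly to a convex surrogate.
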
Original language: English
Title of host publication: The Web Conference 2021 - Proceedings of the World Wide Web Conference, WWW 2021
Publisher: ACM
Pages: 3711-3720
Number of pages: 10
ISBN (Electronic): 9781450383127
DOIs
Publication status: Published - 19 Apr 2021
MoE publication type: A4 Conference publication
Event: The Web Conference - Ljubljana, Slovenia
Duration: 19 Apr 2021 - 23 Apr 2021

Conference

Conference: The Web Conference
Abbreviated title: WWW
Country/Territory: Slovenia
City: Ljubljana
Period: 19/04/2021 - 23/04/2021
