Labels in Extremes: How Well Calibrated are Extreme Multi-label Classifiers?
Abstract
Extreme multilabel classification (XMLC) problems occur in settings such as related product recommendation, large-scale document tagging, or ad prediction, and are characterized by a label space that can span millions of possible labels. There are two implicit tasks that the classifier performs: \emph{evaluating} each potential label for its expected worth, and then \emph{selecting} the best candidates. For the latter task, only the relative order of scores matters, and this is what is captured by the standard evaluation procedure in the XMLC literature. However, in many practical applications, it is important to have a good estimate of the actual probability of a label being relevant, e.g., to decide whether to pay the fee to be allowed to display the corresponding ad. To judge whether an extreme classifier is indeed suited to this task, one can look, for example, at whether it returns \emph{calibrated} probabilities, which has hitherto not been done in this field. Therefore, this paper aims to establish the current status quo of calibration in XMLC by providing a systematic evaluation, comprising nine models from four different model families across seven benchmark datasets.
As a naive application of Expected Calibration Error (ECE) leads to meaningless results in long-tailed XMLC datasets, we instead introduce the notion of \emph{calibration@k} (e.g., ECE@k), which focuses on the top-$k$ probability mass, offering a more appropriate measure for evaluating probability calibration in XMLC scenarios. While we find that different models can exhibit widely varying reliability plots, we also show that post-training calibration via a computationally efficient isotonic regression method enhances model calibration without sacrificing prediction accuracy. Thus, the practitioner can choose the model family based on accuracy considerations, and leave calibration to isotonic regression.
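To make the metric concrete, below is a minimal sketch of how an ECE@k-style quantity could be computed: each instance's $k$ highest-scoring labels are pooled together with their binary relevance, and the standard binned ECE is evaluated over that pool. The function name `ece_at_k` and the pooled-top-$k$ formulation are illustrative assumptions; the paper's exact definition may differ.

```python
import numpy as np

def ece_at_k(scores, labels, k=5, n_bins=10):
    """Illustrative ECE@k: pool each instance's top-k predicted
    probabilities with their 0/1 relevance, then compute the standard
    binned Expected Calibration Error over that pool."""
    # Indices of the k highest-scoring labels per instance.
    topk = np.argsort(-scores, axis=1)[:, :k]
    rows = np.arange(scores.shape[0])[:, None]
    conf = scores[rows, topk].ravel()   # pooled confidences
    hits = labels[rows, topk].ravel()   # pooled binary relevance

    # Equal-width confidence bins over [0, 1].
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(conf, edges) - 1, 0, n_bins - 1)

    ece = 0.0
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            # |empirical accuracy - mean confidence|, weighted by bin mass.
            ece += mask.mean() * abs(hits[mask].mean() - conf[mask].mean())
    return ece
```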
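Likewise, a minimal sketch of the post-hoc calibration step, here using scikit-learn's `IsotonicRegression` fitted on pooled top-$k$ validation predictions; this is a generic recipe under those assumptions and does not reproduce the paper's own, more computationally efficient method.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def fit_topk_isotonic(val_scores, val_labels, k=5):
    """Fit a monotone score -> probability map on the pooled top-k
    predictions of a held-out validation set (illustrative recipe)."""
    topk = np.argsort(-val_scores, axis=1)[:, :k]
    rows = np.arange(val_scores.shape[0])[:, None]
    conf = val_scores[rows, topk].ravel()
    hits = val_labels[rows, topk].ravel()
    iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
    iso.fit(conf, hits)
    return iso

# Apply to test-time scores; shapes are preserved:
# iso = fit_topk_isotonic(val_scores, val_labels, k=5)
# calibrated = iso.predict(test_scores.ravel()).reshape(test_scores.shape)
```

Because the fitted map is monotone, the relative order of label scores (and hence ranking metrics such as precision@$k$) is preserved; only the probability values change, which is why calibration can improve without sacrificing prediction accuracy.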
Original language | English |
---|---|
Title of host publication | KDD 2025: Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining |
Number of pages | 21 |
Publication status | Accepted/In press - 17 Nov 2024 |
MoE publication type | A4 Conference publication |
Event | ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (no. 31), Toronto, Canada, 3 Aug 2025 → 7 Aug 2025 |
Conference
Conference | ACM SIGKDD International Conference on Knowledge Discovery and Data Mining |
---|---|
Abbreviated title | KDD |
Country/Territory | Canada |
City | Toronto |
Period | 03/08/2025 → 07/08/2025 |
Projects
ScaleX/Babbar: Scalable and Robust Representation Learning in Large output Spaces
Babbar, R. (Principal investigator)
01/09/2022 → 31/08/2026
Project: RCF Academy Project