Navigating Extremes: Dynamic Sparsity in Large Output Spaces

Nasib Ullah*, Erik Schultheis, Mike Lasby, Yani Ioannou, Rohit Babbar

*Corresponding author of this work

Research output: Conference article in proceedings › Scientific › peer-reviewed

Abstract

In recent years, Dynamic Sparse Training (DST) has emerged as an alternative to post-training pruning for generating efficient models. In principle, DST allows for a more memory-efficient training process, as it maintains sparsity throughout the entire training run. However, current DST implementations fail to capitalize on this in practice. Because sparse matrix multiplication is much less efficient than dense matrix multiplication on GPUs, most implementations simulate sparsity by masking weights. In this paper, we leverage recent advances in semi-structured sparse training to apply DST in the domain of classification with large output spaces, where memory efficiency is paramount. With a label space of possibly millions of candidates, the classification layer alone will consume several gigabytes of memory. Switching from a dense to a fixed fan-in sparse layer updated with sparse evolutionary training (SET), however, severely hampers training convergence, especially for the largest label spaces. We find that poor gradient flow from the sparse classifier to the dense text encoder makes it difficult to learn good input representations. By employing an intermediate layer or adding an auxiliary training objective, we recover most of the generalisation performance of the dense model. Overall, we demonstrate the applicability and practical benefits of DST in a challenging domain - characterized by a highly skewed label distribution that differs substantially from typical DST benchmark datasets - which enables end-to-end training with millions of labels on commodity hardware.
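To make the setup above concrete, here is a minimal PyTorch sketch of a fixed fan-in sparse classification layer with a SET-style prune-and-regrow step. This is an illustration under stated assumptions, not the paper's implementation: the names (FixedFanInSparseLinear, set_update, fan_in) are invented here, and the gather-based forward pass only simulates sparsity in the way the abstract cautions against, whereas the paper relies on semi-structured sparse kernels for real memory and speed gains.

```python
import torch
import torch.nn as nn

class FixedFanInSparseLinear(nn.Module):
    """Sparse classifier where every label keeps exactly `fan_in`
    incoming weights, stored compactly as a (num_labels, fan_in) matrix."""

    def __init__(self, in_features: int, num_labels: int, fan_in: int):
        super().__init__()
        self.in_features = in_features
        # Random initial connectivity: indices[l] lists the fan_in input
        # features wired to label l. (The loop is slow for millions of
        # labels; a real implementation would vectorize this.)
        idx = torch.stack(
            [torch.randperm(in_features)[:fan_in] for _ in range(num_labels)]
        )
        self.register_buffer("indices", idx)  # (L, K), int64
        self.weight = nn.Parameter(torch.randn(num_labels, fan_in) * fan_in ** -0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, D). Gather each label's connected features -> (B, L, K),
        # then contract against the per-label weights -> logits (B, L).
        return torch.einsum("blk,lk->bl", x[:, self.indices], self.weight)

    @torch.no_grad()
    def set_update(self, prune_frac: float = 0.3) -> None:
        """One SET step: drop the smallest-magnitude prune_frac of each
        label's weights and regrow as many random connections, keeping
        the per-label fan-in (and hence memory use) constant."""
        num_prune = int(self.weight.size(1) * prune_frac)
        if num_prune == 0:
            return
        # Per-label positions of the weakest connections.
        _, drop = torch.topk(self.weight.abs(), num_prune, dim=1, largest=False)
        # Regrow at random input features (a full implementation would
        # avoid duplicating an existing connection); new weights start at 0.
        new_idx = torch.randint(0, self.in_features, drop.shape, device=drop.device)
        self.indices.scatter_(1, drop, new_idx)
        self.weight.scatter_(1, drop, 0.0)

# Usage: a 768-d encoder output classified into 100k labels with fan-in 32
# stores 32 weights per label instead of 768, i.e. a 24x smaller classifier.
clf = FixedFanInSparseLinear(in_features=768, num_labels=100_000, fan_in=32)
logits = clf(torch.randn(4, 768))  # (4, 100_000)
clf.set_update(prune_frac=0.3)     # call periodically during training
```

The fixed fan-in is what makes the footprint predictable: the layer always holds num_labels x fan_in weights and indices, no matter which connections SET rewires.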

Original language: English
Title: Advances in Neural Information Processing Systems 37 (NeurIPS 2024)
Editors: A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, C. Zhang
Publisher: Curran Associates Inc.
Number of pages: 27
ISBN (print): 9798331314385
Publication status: Published - 2025
MoE publication type: A4 Article in conference proceedings
Event: Conference on Neural Information Processing Systems - Vancouver, Canada
Duration: 10 Dec 2024 - 15 Dec 2024
Conference number: 38
https://neurips.cc/Conferences/2024

Publication series

Name: Advances in Neural Information Processing Systems
Publisher: Curran Associates Inc.
Volume: 37
ISSN (print): 1049-5258

Conference

Conference: Conference on Neural Information Processing Systems
Abbreviation: NeurIPS
Country/Territory: Canada
City: Vancouver
Period: 10/12/2024 - 15/12/2024
Web address: https://neurips.cc/Conferences/2024
