Navigating Extremes: Dynamic Sparsity in Large Output Spaces

Nasib Ullah*, Erik Schultheis, Mike Lasby, Yani Ioannou, Rohit Babbar

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

Abstract

In recent years, Dynamic Sparse Training (DST) has emerged as an alternative to post-training pruning for generating efficient models. In principle, DST allows for a more memory-efficient training process, as it maintains sparsity throughout the entire training run. However, current DST implementations fail to capitalize on this in practice. Because sparse matrix multiplication is much less efficient than dense matrix multiplication on GPUs, most implementations simulate sparsity by masking weights. In this paper, we leverage recent advances in semi-structured sparse training to apply DST in the domain of classification with large output spaces, where memory efficiency is paramount. With a label space of possibly millions of candidates, the classification layer alone will consume several gigabytes of memory. Switching from a dense layer to a fixed fan-in sparse layer updated with sparse evolutionary training (SET), however, severely hampers training convergence, especially at the largest label spaces. We find that poor gradient flow from the sparse classifier to the dense text encoder makes it difficult to learn good input representations. By employing an intermediate layer or adding an auxiliary training objective, we recover most of the generalisation performance of the dense model. Overall, we demonstrate the applicability and practical benefits of DST in a challenging domain - characterized by a highly skewed label distribution that differs substantially from typical DST benchmark datasets - which enables end-to-end training with millions of labels on commodity hardware.
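The abstract's core mechanism - a classification layer in which every output unit keeps a small, fixed number of incoming connections, periodically rewired with SET's prune-and-regrow rule - can be illustrated with a short sketch. The PyTorch code below is purely illustrative and is not the authors' implementation (which relies on semi-structured sparse kernels for real speed-ups): the class name FixedFanInClassifier, the fan_in argument, and the zero-initialised regrowth are assumptions made for the example, and sparsity is simulated here with index tensors rather than sparse kernels.

```python
import torch
import torch.nn as nn


class FixedFanInClassifier(nn.Module):
    """Sparse classifier where every label keeps exactly `fan_in` weights.

    Illustrative sketch only: sparsity is simulated with index tensors;
    the paper uses semi-structured sparse kernels instead.
    """

    def __init__(self, embed_dim: int, num_labels: int, fan_in: int):
        super().__init__()
        self.embed_dim, self.fan_in = embed_dim, fan_in
        # For each label, the indices of its `fan_in` input connections.
        idx = torch.stack(
            [torch.randperm(embed_dim)[:fan_in] for _ in range(num_labels)]
        )
        self.register_buffer("idx", idx)
        self.weight = nn.Parameter(0.01 * torch.randn(num_labels, fan_in))
        self.bias = nn.Parameter(torch.zeros(num_labels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Gather only the connected inputs: (batch, num_labels, fan_in).
        gathered = x[:, self.idx]
        return (gathered * self.weight).sum(-1) + self.bias

    @torch.no_grad()
    def set_update(self, prune_frac: float = 0.3) -> None:
        # SET step: drop the smallest-magnitude fraction of each label's
        # weights, then regrow the same number at random input positions,
        # so the fan-in of every output unit stays constant.
        k = max(1, int(self.fan_in * prune_frac))
        _, drop = self.weight.abs().topk(k, dim=1, largest=False)
        rows = (
            torch.arange(self.weight.shape[0], device=self.idx.device)
            .unsqueeze(1)
            .expand_as(drop)
        )
        # Random regrowth (may rarely duplicate an existing connection;
        # a careful implementation would resample such collisions).
        self.idx[rows, drop] = torch.randint(
            self.embed_dim, drop.shape, device=self.idx.device
        )
        self.weight[rows, drop] = 0.0  # regrown weights start at zero


# Usage: forward pass as usual, rewire periodically during training.
clf = FixedFanInClassifier(embed_dim=768, num_labels=10_000, fan_in=32)
logits = clf(torch.randn(8, 768))  # shape (8, 10_000)
clf.set_update()
```

Under these assumptions the memory argument is easy to see: with one million labels, a 768-dimensional encoder, and fan_in=32, the layer stores 32M weights instead of the 768M a dense classifier would need - the several-gigabyte saving the abstract refers to.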

Original language: English
Title of host publication: Advances in Neural Information Processing Systems 37 (NeurIPS 2024)
Editors: A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, C. Zhang
Publisher: Curran Associates Inc.
Number of pages: 27
ISBN (Print): 9798331314385
Publication status: Published - 2025
MoE publication type: A4 Conference publication
Event: Conference on Neural Information Processing Systems - Vancouver, Canada
Duration: 10 Dec 2024 - 15 Dec 2024
Conference number: 38
https://neurips.cc/Conferences/2024

Publication series

Name: Advances in Neural Information Processing Systems
Publisher: Curran Associates Inc.
Volume: 37
ISSN (Print): 1049-5258

Conference

Conference: Conference on Neural Information Processing Systems
Abbreviated title: NeurIPS
Country/Territory: Canada
City: Vancouver
Period: 10/12/2024 - 15/12/2024
