Towards Memory-Efficient Training for Extremely Large Output Spaces: Learning with 670k Labels on a Single Commodity GPU
Abstract
In classification problems with large output spaces (up to millions of labels), the last layer can require an enormous amount of memory. Using sparse connectivity would drastically reduce the memory requirements, but, as we show below, applied naïvely it can result in substantially diminished predictive performance. Fortunately, we found that this can be mitigated by introducing a penultimate layer of intermediate size. We further demonstrate that the connectivity of the sparse layer can be constrained to constant fan-in, so that each output neuron has exactly the same number of incoming connections; this allows for more efficient implementations, especially on GPU hardware. The CUDA implementation of our approach is provided at https://github.com/xmc-aalto/ecml23-sparse.
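To illustrate the constant fan-in idea described in the abstract, below is a minimal PyTorch sketch of a sparse output layer in which every label keeps exactly `fan_in` incoming weights. The class name `FanInSparseLinear`, the random fixed connectivity, and all parameter names are illustrative assumptions; this is not the CUDA implementation from the linked repository.

```python
# Minimal sketch (not the paper's CUDA code) of a constant fan-in sparse
# output layer: every output neuron has exactly `fan_in` incoming connections,
# so indices and weights are dense (out_features, fan_in) tensors.
import torch
import torch.nn as nn


class FanInSparseLinear(nn.Module):  # illustrative name, not from the repo
    def __init__(self, in_features: int, out_features: int, fan_in: int):
        super().__init__()
        # Fixed random connectivity: row i lists the `fan_in` input units that
        # feed output neuron i. Memory is O(out_features * fan_in) instead of
        # O(out_features * in_features) for a dense last layer.
        idx = torch.stack(
            [torch.randperm(in_features)[:fan_in] for _ in range(out_features)]
        )
        self.register_buffer("idx", idx)                      # (out, fan_in)
        self.weight = nn.Parameter(0.01 * torch.randn(out_features, fan_in))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:      # x: (batch, in)
        gathered = x[:, self.idx]                             # (batch, out, fan_in)
        # Weighted sum over the fan-in dimension gives the label logits.
        return (gathered * self.weight).sum(dim=-1) + self.bias


if __name__ == "__main__":
    layer = FanInSparseLinear(in_features=2048, out_features=10_000, fan_in=32)
    print(layer(torch.randn(4, 2048)).shape)  # torch.Size([4, 10000])
```

Because every output row has the same number of non-zeros, the gather and reduction operate on regular dense tensors, which parallelizes well on GPUs. Note that materializing the intermediate `gathered` tensor for hundreds of thousands of labels would itself be expensive; a fused gather-and-reduce kernel, such as a dedicated CUDA implementation can provide, avoids that intermediate.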
Original language | English |
---|---|
Title of host publication | Machine Learning and Knowledge Discovery in Databases |
Subtitle of host publication | Research Track - European Conference, ECML PKDD 2023, Proceedings |
Editors | Danai Koutra, Claudia Plant, Manuel Gomez Rodriguez, Elena Baralis, Francesco Bonchi |
Publisher | Springer |
Pages | 689-704 |
Number of pages | 16 |
ISBN (Print) | 978-3-031-43417-4 |
DOIs | |
Publication status | Published - 2023 |
MoE publication type | A4 Conference publication |
Event | European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, Turin, Italy. Duration: 18 Sept 2023 → 22 Sept 2023 |
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Publisher | Springer |
Volume | 14171 LNAI |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Conference
Conference | European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases |
---|---|
Abbreviated title | ECML PKDD |
Country/Territory | Italy |
City | Turin |
Period | 18/09/2023 → 22/09/2023 |
Projects per year
- ScaleX/Babbar: Scalable and Robust Representation Learning in Large Output Spaces
  Babbar, R. (Principal investigator), 01/09/2022 → 31/08/2026
  Project: Academy of Finland: Other research funding
- HPC-HD/Babbar: High Performance Computing for the Detection and Analysis of Historical Discourses
  Babbar, R. (Principal investigator), 01/01/2022 → 31/12/2024
  Project: Academy of Finland: Other research funding