Abstract
In classification problems with large output spaces (up to millions of labels), the last layer can require an enormous amount of memory. Using sparse connectivity would drastically reduce the memory requirements, but, as we show below, applied naïvely it can result in substantially diminished predictive performance. Fortunately, we found that this can be mitigated by introducing an intermediate layer of moderate size. We further demonstrate that the connectivity of the sparse layer can be constrained to a constant fan-in, in the sense that each output neuron has exactly the same number of incoming connections, which allows for more efficient implementations, especially on GPU hardware. The CUDA implementation of our approach is provided at https://github.com/xmc-aalto/ecml23-sparse.
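The linked repository contains the authors' CUDA kernels. As a rough, non-authoritative sketch of the constant fan-in idea described in the abstract, the NumPy snippet below stores, for every output label, exactly `fan_in` connection indices into the intermediate layer together with the matching weights; all sizes (`hidden_dim`, `fan_in`, and the shrunken `num_labels` used for the demo) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Illustrative sketch only (not the authors' CUDA implementation): a sparse
# output layer with constant fan-in, i.e. every output neuron has exactly
# `fan_in` incoming connections. The connections and weights then fit into
# two dense (num_labels, fan_in) arrays, which keeps memory low and the
# access pattern regular.

rng = np.random.default_rng(0)

hidden_dim = 512       # size of the intermediate layer (assumed value)
num_labels = 10_000    # shrunk for the demo; the paper's setting has 670k labels
fan_in     = 32        # incoming connections per output neuron (assumed value)

# For each label: which intermediate units it connects to, and with what weight.
conn    = rng.integers(0, hidden_dim, size=(num_labels, fan_in))
weights = (0.01 * rng.standard_normal((num_labels, fan_in))).astype(np.float32)
bias    = np.zeros(num_labels, dtype=np.float32)

def sparse_output_layer(h):
    """h: (batch, hidden_dim) activations of the intermediate layer.
    Returns (batch, num_labels) logits using only fan_in weights per label."""
    gathered = h[:, conn]                        # (batch, num_labels, fan_in)
    return np.einsum("blf,lf->bl", gathered, weights) + bias

h = rng.standard_normal((4, hidden_dim)).astype(np.float32)
print(sparse_output_layer(h).shape)              # (4, 10000)

# Last-layer weight memory, dense vs. constant fan-in sparse, for the
# 670k-label setting mentioned in the title (float32 weights, int32 indices):
L = 670_000
dense_mb  = L * hidden_dim * 4 / 2**20           # ~1300 MB
sparse_mb = L * fan_in * (4 + 4) / 2**20         # ~160 MB
print(f"dense: {dense_mb:.0f} MB, sparse: {sparse_mb:.0f} MB")
```

Because every label has the same number of incoming connections, the lookup is a regular, rectangular gather over two dense arrays; this regularity is the property the abstract credits for enabling more efficient GPU implementations.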
Original language | English |
---|---|
Title | Machine Learning and Knowledge Discovery in Databases |
Subtitle | Research Track - European Conference, ECML PKDD 2023, Proceedings |
Editors | Danai Koutra, Claudia Plant, Manuel Gomez Rodriguez, Elena Baralis, Francesco Bonchi |
Publisher | Springer |
Pages | 689-704 |
Number of pages | 16 |
ISBN (Print) | 978-3-031-43417-4 |
DOI - permanent links | |
Status | Published - 2023 |
OKM publication type | A4 Article in a conference publication |
Event | European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases - Turin, Italy. Duration: 18 Sept 2023 → 22 Sept 2023 |
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Publisher | Springer |
Volume | 14171 LNAI |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Conference
Conference | European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases |
---|---|
Abbreviation | ECML PKDD |
Country/Territory | Italy |
City | Turin |
Period | 18/09/2023 → 22/09/2023 |
Fingerprint
Dive into the research topics of 'Towards Memory-Efficient Training for Extremely Large Output Spaces: Learning with 670k Labels on a Single Commodity GPU'. Together they form a unique fingerprint.
-
ScaleX/Babbar: Scalable and Robust Representation Learning in Large output Spaces
Babbar, R. (Principal investigator)
01/09/2022 → 31/08/2026
Project: RCF Academy Project
-
HPC-HD/Babbar: High Performance Computing for the Detection and Analysis of Historical Discourses
Babbar, R. (Principal investigator)
01/01/2022 → 31/12/2024
Project: RCF Academy Project targeted call