
Abstract

Sequential learning paradigms pose challenges for gradient-based deep learning due to difficulties incorporating new data and retaining prior knowledge. While Gaussian processes elegantly tackle these problems, they struggle with scalability and handling rich inputs, such as images. To address these issues, we introduce a technique that converts neural networks from weight space to function space through a dual parameterization. Our parameterization offers: (i) a way to scale function-space methods to large data sets via sparsification, (ii) retention of prior knowledge when access to past data is limited, and (iii) a mechanism to incorporate new data without retraining. Our experiments demonstrate that we can retain knowledge in continual learning and incorporate new data efficiently. We further show the method's strengths in uncertainty quantification and in guiding exploration in model-based reinforcement learning. Further information and code are available on the project website.
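To make the function-space idea concrete, the sketch below illustrates one standard way such a weight-space-to-function-space conversion can work: linearize a trained network at its weights, treat the resulting Jacobian-feature kernel as a Gaussian-process prior, and store per-datapoint dual variables so that new observations update the posterior without retraining the weights. This is a minimal sketch under linearized-Laplace regression assumptions, not the paper's actual parameterization or released code; all function names (mlp, dual_variables, predict, incorporate) are hypothetical.

```python
# Illustrative sketch: a trained network viewed in function space via
# linearization. NOT the paper's implementation; names are hypothetical.
import jax
import jax.numpy as jnp

def mlp(params, x):
    """Tiny MLP with one tanh hidden layer; scalar output per input row."""
    w1, b1, w2, b2 = params
    h = jnp.tanh(x @ w1 + b1)
    return (h @ w2 + b2).squeeze(-1)

def jacobian_features(params, X):
    """Per-example Jacobian of the output w.r.t. all weights, flattened."""
    def f_single(p, x):
        return mlp(p, x[None, :])[0]
    jac = jax.vmap(lambda x: jax.grad(f_single)(params, x))(X)
    leaves = jax.tree_util.tree_leaves(jac)
    return jnp.concatenate([l.reshape(X.shape[0], -1) for l in leaves], axis=1)

def kernel(params, X1, X2, prior_var=1.0):
    """Linearized-network kernel k(x, x') = prior_var * J(x) J(x')^T."""
    return prior_var * jacobian_features(params, X1) @ jacobian_features(params, X2).T

def dual_variables(params, X, y, noise_var=0.1):
    """Dual representation of the regression posterior: alpha plus the
    Cholesky factor of (K + noise * I) for predictive variances."""
    K = kernel(params, X, X)
    L = jnp.linalg.cholesky(K + noise_var * jnp.eye(X.shape[0]))
    resid = y - mlp(params, X)  # residuals of the (assumed) MAP fit
    alpha = jax.scipy.linalg.cho_solve((L, True), resid)
    return alpha, L

def predict(params, X_train, alpha, L, X_star):
    """Posterior mean and variance at test points, in function space."""
    K_star = kernel(params, X_star, X_train)
    mean = mlp(params, X_star) + K_star @ alpha
    v = jax.scipy.linalg.cho_solve((L, True), K_star.T)
    var = jnp.diag(kernel(params, X_star, X_star)) - jnp.sum(K_star * v.T, axis=1)
    return mean, var

def incorporate(params, X_old, y_old, X_new, y_new, noise_var=0.1):
    """Absorb new data by refreshing the dual variables; the network weights
    are untouched. (Naive O(N^3) recompute; a sparse inducing-point version
    would be needed to scale.)"""
    X = jnp.concatenate([X_old, X_new])
    y = jnp.concatenate([y_old, y_new])
    alpha, L = dual_variables(params, X, y, noise_var)
    return X, y, alpha, L

# Toy usage: random weights stand in for trained MAP weights.
k1, k2 = jax.random.split(jax.random.PRNGKey(0))
params = (jax.random.normal(k1, (1, 16)), jnp.zeros(16),
          jax.random.normal(k2, (16, 1)), jnp.zeros(1))
X = jnp.linspace(-2.0, 2.0, 20)[:, None]
y = jnp.sin(3.0 * X[:, 0])
alpha, L = dual_variables(params, X, y)
mean, var = predict(params, X, alpha, L, X)
```

In this picture, the abstract's point (iii) corresponds to refreshing the dual variables on the stacked data while the weights stay fixed, and point (i) would replace the full training set in the kernel with a small set of inducing points (a Nyström-style sparsification), which this sketch omits for brevity.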

Original language: English
Number of pages: 29
Publication status: Published - 2024
MoE publication type: Not Eligible
Event: International Conference on Learning Representations - Messe Wien Exhibition and Congress Center, Vienna, Austria
Duration: 7 May 2024 – 11 May 2024
Conference number: 12
https://iclr.cc/

Conference

Conference: International Conference on Learning Representations
Abbreviated title: ICLR
Country/Territory: Austria
City: Vienna
Period: 07/05/2024 – 11/05/2024
Internet address: https://iclr.cc/
