Abstract
The slowness principle is a concept inspired by the visual cortex of the brain. It postulates that the underlying generative factors of a quickly varying sensory signal change on a different, slower time scale. By applying this principle to state-of-the-art unsupervised representation learning methods one can learn a latent embedding to perform supervised downstream regression tasks more data efficient. In this paper, we compare different approaches to unsupervised slow representation learning such as L norm based slowness regularization and the SlowVAE, and propose a new term based on Brownian motion used in our method, the S-VAE.
We empirically evaluate these slowness regularization terms with respect to their downstream task performance and data efficiency in state estimation and behavioral cloning tasks. We find that slow representations show great performance improvements in settings where only sparse labeled training data is available. Furthermore, we present a theoretical and empirical comparison of the discussed slowness regularization terms. Finally, we discuss how the Fréchet Inception Distance (FID), commonly used to determine the generative capabilities of GANs, can predict the performance of trained models in supervised downstream tasks.
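The L1-norm-based slowness regularization mentioned in the abstract can be illustrated as a penalty on the change between latent codes of temporally adjacent frames. The sketch below is a minimal illustration of that general idea, not the paper's exact loss (the SlowVAE prior and the Brownian-motion term of the S-VAE take different forms); the function name and toy values are assumptions for demonstration.

```python
import numpy as np

def slowness_l1_penalty(z_t, z_next, weight=1.0):
    """Mean absolute change between consecutive latent codes.

    Small values indicate a representation that varies slowly in
    time; adding this term to a reconstruction loss encourages
    slow latent features. Illustrative sketch only.
    """
    return weight * np.mean(np.abs(z_next - z_t))

# Toy usage: latents for a batch of 4 consecutive frame pairs,
# each with an 8-dimensional latent code.
z_t = np.zeros((4, 8))
z_next = np.full((4, 8), 0.5)
loss = slowness_l1_penalty(z_t, z_next)  # → 0.5
```

In practice such a penalty would be computed on encoder outputs for consecutive video frames and added, with a tunable weight, to the usual autoencoder training objective.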
| Original language | English |
|---|---|
| Article number | 6299 |
| Pages (from-to) | 2297-2315 |
| Number of pages | 19 |
| Journal | Machine Learning |
| Volume | 112 |
| Issue number | 7 |
| Early online date | 25 Jan 2023 |
| DOIs | |
| Publication status | Published - Jul 2023 |
| MoE publication type | A1 Journal article-refereed |
Keywords
- Unsupervised Representation Learning
- Slowness Principle
- Data-efficient downstream tasks
Fingerprint
Dive into the research topics of 'Autoencoding Slow Representations for Semi-supervised Data-Efficient Regression'.
Projects
- 1 Finished
HBP SGA2: Human Brain Project Specific Grant Agreement 2
Kyrki, V. (Principal investigator), Struckmeier, O. (Project Member) & Tiwari, K. (Project Member)
01/04/2018 → 31/03/2020
Project: EU: Framework programmes funding