Autoencoding Slow Representations for Semi-supervised Data-Efficient Regression

Oliver Struckmeier, Kshitij Tiwari, Ville Kyrki

Research output: Contribution to journal › Article › Scientific › peer-reviewed

54 Downloads (Pure)

Abstract

The slowness principle is a concept inspired by the visual cortex of the brain. It postulates that the underlying generative factors of a quickly varying sensory signal change on a different, slower time scale. By applying this principle to state-of-the-art unsupervised representation learning methods, one can learn a latent embedding that makes supervised downstream regression tasks more data-efficient. In this paper, we compare different approaches to unsupervised slow representation learning, such as L-norm-based slowness regularization and the SlowVAE, and propose a new term based on Brownian motion, which is used in our method, the S-VAE.
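To make the comparison concrete, the following is a minimal PyTorch sketch of what such slowness terms can look like in a VAE training loss: an L_p-norm penalty on the change between consecutive latent codes, and a Brownian-motion-style term that scores latent increments under a zero-mean Gaussian whose variance grows with the time step. The function names, weights, and exact forms are illustrative assumptions, not the paper's formulation.

```python
import torch

def lp_slowness(z_t, z_tp1, p=1):
    """L_p-norm slowness term: penalize the change between
    consecutive latent codes (averaged over the batch)."""
    return torch.norm(z_tp1 - z_t, p=p, dim=-1).mean()

def brownian_slowness(z_t, z_tp1, dt=1.0):
    """Brownian-motion-style term: negative log-likelihood (up to
    additive constants) of latent increments under N(0, dt * I)."""
    return ((z_tp1 - z_t) ** 2 / (2.0 * dt)).sum(dim=-1).mean()

# Hypothetical use inside a VAE training step, where z_t and z_tp1 are
# the encodings of two consecutive frames; gamma = 1e-2 is illustrative.
z_t, z_tp1 = torch.randn(32, 16), torch.randn(32, 16)
loss_lp = 1e-2 * lp_slowness(z_t, z_tp1)
loss_bm = 1e-2 * brownian_slowness(z_t, z_tp1)
```

In practice, a term like one of these would be added to the usual reconstruction and KL objectives, so that latent codes of temporally adjacent observations are encouraged to change slowly.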
We empirically evaluate these slowness regularization terms with respect to their downstream task performance and data efficiency in state estimation and behavioral cloning tasks. We find that slow representations yield substantial performance improvements in settings where only sparse labeled training data is available. Furthermore, we present a theoretical and empirical comparison of the discussed slowness regularization terms. Finally, we discuss how the Fréchet Inception Distance (FID), commonly used to assess the generative capabilities of GANs, can predict the performance of trained models in supervised downstream tasks.
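For reference, the standard FID (Heusel et al., 2017) compares two Gaussians fitted to Inception activations of real and generated images. The NumPy/SciPy sketch below implements that standard formula on placeholder activations; it is not the paper's evaluation code.

```python
import numpy as np
from scipy import linalg

def fid(mu1, sigma1, mu2, sigma2):
    """Frechet distance between two Gaussians:
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^{1/2})."""
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    diff = mu1 - mu2
    # Discard tiny imaginary parts introduced by the matrix square root.
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean.real))

# Fit the Gaussians to activations; random placeholders stand in for
# Inception features of real and generated images.
acts_real = np.random.randn(256, 64)
acts_fake = np.random.randn(256, 64)
mu_r, sig_r = acts_real.mean(0), np.cov(acts_real, rowvar=False)
mu_f, sig_f = acts_fake.mean(0), np.cov(acts_fake, rowvar=False)
print(fid(mu_r, sig_r, mu_f, sig_f))
```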
Original language: English
Article number: 6299
Pages: 2297-2315
Number of pages: 19
Journal: Machine Learning
Volume: 112
Issue number: 7
Early online date: 25 Jan 2023
DOI - permanent links
Status: Published - Jul 2023
Ministry of Education (OKM) publication type: A1 Original article in a scientific journal

