How to Train Deep Variational Autoencoders and Probabilistic Ladder Networks

Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, Ole Winther

Research output: Working paper › Professional

Abstract

Variational autoencoders are a powerful framework for unsupervised learning. However, previous work has been restricted to shallow models with one or two layers of fully factorized stochastic latent variables, limiting the flexibility of the latent representation. We propose three advances in training algorithms for variational autoencoders that, for the first time, allow training deep models with up to five stochastic layers: (1) a structure similar to the Ladder network as the inference model, (2) a warm-up period that keeps stochastic units active in early training, and (3) the use of batch normalization. Using these improvements, we show state-of-the-art log-likelihood results for generative modeling on several benchmark datasets.
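
The sketch below illustrates two of the three ideas in a minimal form: the warm-up from point (2), where the KL term of the variational bound is scaled by a coefficient beta that ramps linearly from 0 to 1 over the first epochs so reconstruction is learned before the prior can deactivate latent units, and the batch normalization from point (3) inside the encoder and decoder. This is an assumed single-stochastic-layer toy model, not the paper's probabilistic ladder architecture, and names such as `TinyVAE` and `warmup_epochs` are illustrative choices, not from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    """One Gaussian stochastic layer; a stand-in for the deeper models in the paper."""
    def __init__(self, x_dim=784, z_dim=32, h_dim=256):
        super().__init__()
        # Batch normalization in the deterministic layers, as advocated in point (3).
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.BatchNorm1d(h_dim), nn.ReLU())
        self.enc_mu = nn.Linear(h_dim, z_dim)
        self.enc_logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.BatchNorm1d(h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def neg_elbo(logits, x, mu, logvar, beta):
    """Negative ELBO with the KL term scaled by the warm-up coefficient beta."""
    recon = F.binary_cross_entropy_with_logits(logits, x, reduction='sum') / x.size(0)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return recon + beta * kl

# Warm-up: beta ramps linearly from 0 to 1 over `warmup_epochs` (an assumed setting).
warmup_epochs = 200
model = TinyVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(500):
    beta = min(1.0, epoch / warmup_epochs)
    x = torch.rand(64, 784).bernoulli()  # placeholder batch of binarized inputs
    logits, mu, logvar = model(x)
    loss = neg_elbo(logits, x, mu, logvar, beta)
    opt.zero_grad(); loss.backward(); opt.step()
```

With beta fixed at 1 this reduces to the standard variational bound; starting it at 0 removes the incentive to collapse the approximate posterior onto the prior early on, which is the "units staying active" effect the abstract describes.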
Original language: English
Publication status: Published - 2016
MoE publication type: D4 Published development or research report or study

Keywords

  • Statistics - Machine Learning
  • Computer Science - Learning
