Deep bottleneck classifiers in supervised dimension reduction

Elina Parviainen*

*Corresponding author for this work

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

    2 Citations (Scopus)


    Deep autoencoder networks have been successfully applied in unsupervised dimension reduction. The autoencoder has a "bottleneck" middle layer of only a few hidden units, which gives a low-dimensional representation of the data when the full network is trained to minimize reconstruction error. We propose using a deep bottlenecked neural network for supervised dimension reduction. Instead of trying to reproduce the data, the network is trained to perform classification. Pretraining with restricted Boltzmann machines is combined with supervised finetuning. Finetuning with supervised cost functions has been done before, but with cost functions that scale quadratically in the number of data points. Training a bottleneck classifier scales linearly, yet gives results comparable to, or sometimes better than, two earlier supervised methods.
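    The idea can be illustrated with a minimal sketch. The paper's networks are deep and pretrained with restricted Boltzmann machines; the toy example below omits both and trains a single tanh bottleneck layer from random initialization with plain gradient descent on a cross-entropy classification loss, purely to show how class labels shape a low-dimensional bottleneck embedding. All data, dimensions, and hyperparameters here are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: 200 points in 10 dimensions; the label depends only on
    # the first two coordinates, so a 2-D bottleneck can capture it.
    n, d, k = 200, 10, 2            # samples, input dim, bottleneck dim
    X = rng.normal(size=(n, d))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    Y = np.eye(2)[y]                # one-hot labels

    # Parameters: input -> bottleneck -> softmax over 2 classes.
    W1 = rng.normal(scale=0.1, size=(d, k)); b1 = np.zeros(k)
    W2 = rng.normal(scale=0.1, size=(k, 2)); b2 = np.zeros(2)

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    lr = 1.0
    for _ in range(2000):
        H = np.tanh(X @ W1 + b1)    # bottleneck activations
        P = softmax(H @ W2 + b2)    # class probabilities
        # Backprop of mean cross-entropy; one pass is linear in n.
        dZ2 = (P - Y) / n
        dW2, db2 = H.T @ dZ2, dZ2.sum(0)
        dH = (dZ2 @ W2.T) * (1 - H**2)
        dW1, db1 = X.T @ dH, dH.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    # After training for classification, the bottleneck layer yields a
    # 2-D supervised embedding of the data.
    embedding = np.tanh(X @ W1 + b1)
    acc = (softmax(embedding @ W2 + b2).argmax(1) == y).mean()
    ```

    Because the network is trained on labels rather than reconstruction, the bottleneck coordinates are pulled toward directions that separate the classes, which is what distinguishes this supervised setting from an autoencoder's unsupervised bottleneck.
    
    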

    Original language: English
    Title of host publication: Artificial Neural Networks, ICANN 2010 - 20th International Conference, Proceedings
    Number of pages: 10
    Volume: 6354 LNCS
    Edition: PART 3
    Publication status: Published - 2010
    MoE publication type: A4 Article in a conference publication
    Event: International Conference on Artificial Neural Networks - Thessaloniki, Greece
    Duration: 15 Sep 2010 - 18 Sep 2010
    Conference number: 20

    Publication series

    Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
    Number: PART 3
    Volume: 6354 LNCS
    ISSN (Print): 0302-9743
    ISSN (Electronic): 1611-3349


    Conference: International Conference on Artificial Neural Networks
    Abbreviated title: ICANN
