Visual Interpretation of DNN-based Acoustic Models using Deep Autoencoders

Tamás Grósz*, Mikko Kurimo

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review


Abstract

In the past few years, Deep Neural Networks (DNNs) have become the state-of-the-art solution in several areas, including automatic speech recognition (ASR); unfortunately, they are generally viewed as black boxes. Recently, this has started to change, as researchers have dedicated much effort to interpreting their behavior. In this work, we concentrate on visual interpretation by depicting the hidden activation vectors of the DNN, and we propose using deep Autoencoders (DAE) to transform these hidden representations for inspection. We use multiple metrics to compare our approach with other widely used algorithms, and the results show that our approach is quite competitive. The main advantage of Autoencoders over the existing methods is that, after the training phase, they apply a fixed transformation that can be used to visualize any hidden activation vector without further optimization, which is not true for the other methods.
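The sketch below is a rough, hedged illustration of the idea described in the abstract, not the authors' implementation: a deep autoencoder with a two-dimensional bottleneck is trained to reconstruct DNN hidden activation vectors, after which the encoder acts as a fixed projection for visualising new activations. The layer sizes, optimiser settings, and the synthetic stand-in data are assumptions chosen for brevity.

```python
# Minimal sketch of visualising DNN hidden activations with a deep autoencoder (DAE).
# All hyperparameters and the random "activations" below are illustrative assumptions.
import torch
import torch.nn as nn


class DeepAutoencoder(nn.Module):
    def __init__(self, input_dim: int, bottleneck_dim: int = 2):
        super().__init__()
        # Encoder compresses an activation vector down to a 2-D code for plotting.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, bottleneck_dim),
        )
        # Decoder reconstructs the original activation vector from the code.
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck_dim, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU(),
            nn.Linear(256, input_dim),
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code


# Stand-in for hidden activation vectors collected from an acoustic-model DNN
# (e.g. one 1024-dimensional vector per speech frame).
activations = torch.randn(5000, 1024)

model = DeepAutoencoder(input_dim=1024)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):  # a few epochs are enough for a sketch
    reconstruction, _ = model(activations)
    loss = loss_fn(reconstruction, activations)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

# After training, the encoder is a fixed transformation: any new activation
# vector can be projected to 2-D without re-running an optimisation, unlike
# per-dataset methods such as t-SNE.
with torch.no_grad():
    new_points_2d = model.encoder(torch.randn(100, 1024))
print(new_points_2d.shape)  # torch.Size([100, 2])
```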
Original language: English
Title of host publication: Machine Learning Methods in Visualisation for Big Data
Subtitle of host publication: Eurographics proceedings
Editors: Daniel Archambault, Ian Nabney, Jaakko Peltonen
Pages: 25-29
ISBN (Electronic): 978-3-03868-113-7
DOIs
Publication status: Published - 2020
MoE publication type: A4 Article in a conference publication
Event: International Workshop on Machine Learning in Visualisation for Big Data - Norrköping, Sweden
Duration: 25 May 2020 - 25 May 2020

Conference

Conference: International Workshop on Machine Learning in Visualisation for Big Data
Abbreviated title: MLVis
Country: Sweden
City: Norrköping
Period: 25/05/2020 - 25/05/2020

