Neural Variational Sparse Topic Model for Sparse Explainable Text Representation

Qianqian Xie, Prayag Tiwari*, Deepak Gupta, Jimin Huang, Min Peng

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

7 Citations (Scopus)
228 Downloads (Pure)

Abstract

Texts are the major information carrier for internet users, and learning their latent representations has important research and practical value. Neural topic models have been proposed and achieve strong performance in extracting interpretable latent topics and representations of texts. However, two major limitations remain: 1) these methods generally ignore the contextual information of texts and have limited feature representation ability due to their shallow feed-forward network architectures; 2) the sparsity of representations in the topic semantic space is ignored. To address these issues, in this paper we propose a semantic reinforcement neural variational sparse topic model (SR-NSTM) for explainable and sparse latent text representation learning. Compared with existing neural topic models, SR-NSTM models the generative process of texts with probabilistic distributions parameterized by neural networks and incorporates a Bi-directional LSTM to embed contextual information at the document level. It achieves sparse posterior representations over documents and words with a zero-mean Laplace distribution, and over topics with sparsemax. Moreover, we propose a supervised extension of SR-NSTM by adding max-margin posterior regularization to tackle supervised tasks. Neural variational inference is used to learn our models efficiently. Experimental results on the Web Snippets, 20Newsgroups, BBC, and Biomedical datasets demonstrate that incorporating contextual information and revisiting the generative process improve performance, leading to competitive results for our models in learning coherent topics and explainable sparse representations of texts.
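The abstract names sparsemax as the mechanism for obtaining sparse topic distributions. As background, the standard sparsemax transformation (a Euclidean projection onto the probability simplex) can be sketched as follows; this is a generic illustration of the operator, not the authors' implementation, and the function name and NumPy-based formulation are choices made here for clarity.

```python
import numpy as np

def sparsemax(z):
    """Sparsemax: Euclidean projection of scores z onto the probability
    simplex. Unlike softmax, it can assign exactly zero probability to
    low-scoring entries, yielding sparse topic distributions."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]            # scores in descending order
    k = np.arange(1, len(z) + 1)
    cumsum = np.cumsum(z_sorted)
    # Support: largest k such that 1 + k * z_sorted[k-1] > cumsum[k-1]
    support = 1 + k * z_sorted > cumsum
    k_max = k[support][-1]
    # Threshold tau shifts scores so the kept entries sum to one
    tau = (cumsum[k_max - 1] - 1.0) / k_max
    return np.maximum(z - tau, 0.0)
```

For instance, a clearly dominant score receives all of the mass (`sparsemax([2.0, 1.0, 0.1])` zeroes out the last two entries), whereas near-uniform scores yield a dense, near-uniform output, which is the behavior that makes sparsemax attractive for interpretable topic assignments.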
Original language: English
Article number: 102614
Pages (from-to): 1-15
Number of pages: 15
Journal: Information Processing and Management
Volume: 58
Issue number: 5
DOIs
Publication status: Published - Sep 2021
MoE publication type: A1 Journal article-refereed

Keywords

  • Neural Variational Inference
  • Neural Sparse Topic Model
  • Explainable Text Representation
