Word embedding based on low-rank doubly stochastic matrix decomposition

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Details

Original language: English
Title of host publication: Neural Information Processing
Subtitle of host publication: 25th International Conference, ICONIP 2018, Siem Reap, Cambodia, December 13–16, 2018, Proceedings, Part III
Publication status: Published - 2018
MoE publication type: A4 Article in a conference publication
Event: International Conference on Neural Information Processing - Siem Reap, Cambodia
Duration: 13 Dec 2018 – 16 Dec 2018
Conference number: 25

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Publisher: Springer
Volume: 11303 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: International Conference on Neural Information Processing
Abbreviated title: ICONIP
Country: Cambodia
City: Siem Reap
Period: 13/12/2018 – 16/12/2018

Researchers

Research units

  • Norwegian University of Science and Technology

Abstract

Word embedding, which encodes words into vectors, is an important starting point in natural language processing and is commonly used in many text-based machine learning tasks. However, in most current word embedding approaches, the similarity in the embedding space is not directly optimized during learning. In this paper we propose a novel neighbor embedding method that directly learns an embedding simplex in which the similarities between the mapped words are optimal in the sense of minimal discrepancy to the input neighborhoods. Our method is built upon two-step random walks between words via topics and is thus able to better reveal the topics among the words. Experimental results indicate that our method, compared with another existing word embedding approach, is more favorable for various queries.
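The "two-step random walk between words via topics" mentioned in the abstract can be illustrated with a small sketch. This is not the paper's implementation; it only shows, under assumed notation, how a nonnegative word-topic matrix `W` (a hypothetical stand-in, with rows on the probability simplex so that `W[i, k]` plays the role of P(topic k | word i)) induces a symmetric, doubly stochastic word-word similarity matrix of low rank:

```python
import numpy as np

# Hypothetical word-topic matrix: rows lie on the probability simplex,
# i.e. W[i, k] ~ P(topic k | word i). In the paper this would be learned;
# here it is random purely for illustration.
rng = np.random.default_rng(0)
n_words, n_topics = 6, 3
W = rng.random((n_words, n_topics))
W /= W.sum(axis=1, keepdims=True)  # normalize each row to sum to 1

# Two-step walk word -> topic -> word. Dividing each column by its total
# mass s[k] = sum_i W[i, k] makes the resulting similarity matrix
# B[i, j] = sum_k W[i, k] * W[j, k] / s[k]
# symmetric with unit row sums, i.e. doubly stochastic, and of rank
# at most n_topics (low-rank, since n_topics << n_words in practice).
s = W.sum(axis=0)        # total topic occupancy over all words
B = (W / s) @ W.T        # low-rank doubly stochastic similarity
```

Row sums of `B` equal 1 because summing `W[j, k]` over `j` cancels the `1/s[k]` factor; symmetry follows from the factored form.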

Research areas

  • Nonnegative matrix factorization, Word embedding, Cluster analysis, Doubly stochastic

ID: 30347849