Abstract

Neighbor embedding (NE) aims to preserve pairwise similarities between data items and has been shown to be an effective principle for data visualization. However, even the best existing NE methods, such as stochastic neighbor embedding (SNE), may leave large-scale patterns such as clusters hidden, despite strong signals being present in the data. To address this, we propose a new cluster visualization method based on the NE principle. We first present a family of NE methods that generalizes SNE by using a non-normalized Kullback–Leibler divergence with a scale parameter. Within this family, much better cluster visualizations often appear at a parameter value different from the one corresponding to SNE. We also develop efficient software that employs asynchronous stochastic block coordinate descent to optimize the new family of objective functions. Our experimental results demonstrate that the method consistently and substantially improves the visualization of data clusters compared with state-of-the-art NE approaches. The code of our method is publicly available at https://github.com/rozyangno/sce.
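To make the objective family concrete, the sketch below evaluates a non-normalized (generalized) KL divergence D(P || sQ) with a scale parameter s, using a t-SNE-style Cauchy kernel for the low-dimensional similarities Q. The kernel choice, the handling of the diagonal, and the exact form of the scaled divergence are assumptions for illustration only; the paper's actual objective and optimizer (asynchronous stochastic block coordinate descent) may differ.

```python
import numpy as np

def generalized_kl_objective(P, Y, s, eps=1e-12):
    """Illustrative sketch (not the authors' implementation).

    Evaluates D(P || s*Q) = sum_ij [ p_ij * log(p_ij / (s * q_ij))
                                     - p_ij + s * q_ij ],
    where q_ij = 1 / (1 + ||y_i - y_j||^2) is a Cauchy kernel over
    pairwise distances in the embedding Y. Varying s traces out a
    family of objectives; one member corresponds to (normalized) SNE.
    """
    # Squared pairwise distances between embedding points.
    D2 = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    Q = 1.0 / (1.0 + D2)
    np.fill_diagonal(Q, 0.0)  # no self-similarities

    mask = P > 0  # the KL term is only defined where p_ij > 0
    obj = np.sum(P[mask] * np.log(P[mask] / (s * Q[mask] + eps)))
    obj += -P.sum() + s * Q.sum()  # non-normalized correction terms
    return obj
```

In such a family, the embedding Y would be optimized for a fixed s (or s updated alongside Y), with larger or smaller s changing how strongly the layout separates clusters.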

Original language: English
Article number: 12
Pages (from-to): 1-14
Number of pages: 14
Journal: Statistics and Computing
Volume: 33
Issue number: 1
DOIs
Publication status: Published - Feb 2023
MoE publication type: A1 Journal article-refereed

Keywords

  • Clustering
  • Information divergence
  • Neighbor embedding
  • Nonlinear dimensionality reduction
  • Stochastic optimization
  • Visualization
