Discrete Latent Structure in Neural Networks

Vlad Niculae, Caio Corro, Nikita Nangia, Tsvetomila Mihaylova, André F.T. Martins

Research output: Contribution to journal › Review Article › peer-review

Abstract

Many types of data from fields including natural language processing, computer vision, and bioinformatics are well represented by discrete, compositional structures such as trees, sequences, or matchings. Latent structure models are a powerful tool for learning to extract such representations, offering a way to incorporate structural bias, discover insight about the data, and interpret decisions. However, effective training is challenging as neural networks are typically designed for continuous computation. This text explores three broad strategies for learning with discrete latent structure: continuous relaxation, surrogate gradients, and probabilistic estimation. Our presentation relies on consistent notations for a wide range of models. As such, we reveal many new connections between latent structure learning strategies, showing how most consist of the same small set of fundamental building blocks, but use them differently, leading to substantially different applicability and properties.
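The three strategies named in the abstract can be illustrated on the simplest case of a single categorical latent variable. The sketch below is not taken from the reviewed text; it assumes PyTorch, and the function names (sample_gumbel, relaxed_one_hot, straight_through_one_hot) are hypothetical. It contrasts a continuous relaxation (a Gumbel-softmax sample) with a surrogate-gradient variant, where the forward pass is discrete but the backward pass reuses the relaxed sample's gradient.

```python
import torch
import torch.nn.functional as F

def sample_gumbel(shape, eps=1e-20):
    # Gumbel(0, 1) noise via inverse transform sampling
    u = torch.rand(shape)
    return -torch.log(-torch.log(u + eps) + eps)

def relaxed_one_hot(logits, tau=1.0):
    # Continuous relaxation: a Gumbel-softmax sample,
    # differentiable with respect to the logits everywhere
    return F.softmax((logits + sample_gumbel(logits.shape)) / tau, dim=-1)

def straight_through_one_hot(logits, tau=1.0):
    # Surrogate gradient: discrete one-hot vector on the forward pass,
    # gradient of the relaxed sample on the backward pass
    y_soft = relaxed_one_hot(logits, tau)
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)
    return y_hard + (y_soft - y_soft.detach())

# Toy usage: gradients reach the logits even though the sample is discrete.
logits = torch.randn(3, 5, requires_grad=True)
values = torch.arange(5.0)
z = straight_through_one_hot(logits, tau=0.5)
loss = (z * values).sum()
loss.backward()
print(z, logits.grad)
```

Downstream computation in the straight-through variant sees a genuine one-hot vector, while the gradient with respect to the logits is that of the relaxed sample; the third strategy, probabilistic estimation (e.g. score-function estimators), would instead keep the discrete sample outside the computation graph entirely.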

Original language: English
Pages (from-to): 99-211
Number of pages: 113
Journal: Foundations and Trends in Signal Processing
Volume: 19
Issue number: 2
Publication status: Published - 2 Jun 2025
MoE publication type: A2 Review article, Literature review, Systematic review
