Improving BERT Pretraining with Syntactic Supervision

Giorgos Tziafas, Kokos Kogkalidis, Gijs Wijnholds, Michael Moortgat

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

Abstract

Bidirectional masked Transformers have become the core building block of the current NLP landscape. Despite their impressive benchmarks, a recurring theme in recent research has been to question such models' capacity for syntactic generalization. In this work, we seek to address this question by adding a supervised, token-level supertagging objective to standard unsupervised pretraining, enabling the explicit incorporation of syntactic biases into the network's training dynamics. Our approach is straightforward to implement, incurs only a marginal computational overhead, and is general enough to adapt to a variety of settings. We apply our methodology to Lassy Large, an automatically annotated corpus of written Dutch. Our experiments suggest that our syntax-aware model performs on par with established baselines, despite Lassy Large being one order of magnitude smaller than commonly used corpora.
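The abstract describes augmenting standard masked-language-model pretraining with a supervised, token-level supertagging objective. The sketch below illustrates one plausible way to wire up such a joint objective in PyTorch: a shared encoder feeds two token-level heads, and the total loss is a weighted sum of the MLM and supertagging cross-entropies. The class name, the loss weighting scheme, and the `alpha` parameter are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class JointPretrainingModel(nn.Module):
    """Sketch of joint MLM + supertagging pretraining (assumed setup).

    `encoder` is any BERT-style module mapping token ids to contextual
    states of shape (batch, seq_len, hidden_size).
    """

    def __init__(self, encoder: nn.Module, hidden_size: int,
                 vocab_size: int, num_supertags: int, alpha: float = 1.0):
        super().__init__()
        self.encoder = encoder
        self.mlm_head = nn.Linear(hidden_size, vocab_size)        # masked-token prediction
        self.supertag_head = nn.Linear(hidden_size, num_supertags)  # per-token supertag prediction
        self.alpha = alpha                                         # weight of the syntactic term
        # positions labelled -100 (unmasked tokens / tokens without a gold supertag) are ignored
        self.loss_fn = nn.CrossEntropyLoss(ignore_index=-100)

    def forward(self, input_ids, mlm_labels, supertag_labels):
        hidden = self.encoder(input_ids)                 # (batch, seq_len, hidden_size)
        mlm_logits = self.mlm_head(hidden)               # (batch, seq_len, vocab_size)
        tag_logits = self.supertag_head(hidden)          # (batch, seq_len, num_supertags)

        mlm_loss = self.loss_fn(mlm_logits.view(-1, mlm_logits.size(-1)),
                                mlm_labels.view(-1))
        tag_loss = self.loss_fn(tag_logits.view(-1, tag_logits.size(-1)),
                                supertag_labels.view(-1))
        return mlm_loss + self.alpha * tag_loss
```

Because both objectives operate at the token level, the supertagging head adds only a single linear layer on top of the shared encoder, which is consistent with the marginal computational overhead the abstract claims.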
Original language: English
Title of host publication: Proceedings of the 2023 CLASP Conference on Learning with Small Data
Publisher: Association for Computational Linguistics
Pages: 176-184
ISBN (Electronic): 979-8-89176-000-4
Publication status: Published - 2023
MoE publication type: A4 Conference publication
Event: Learning with Small Data - Gothenburg, Sweden
Duration: 11 Sept 2023 – 12 Sept 2023

Publication series

Name: CLASP Papers in Computational Linguistics
Publisher: Association for Computational Linguistics
Volume: 5
ISSN (Electronic): 2002-9764

Conference

Conference: Learning with Small Data
Abbreviated title: LSD
Country/Territory: Sweden
City: Gothenburg
Period: 11/09/2023 – 12/09/2023
