LT@Helsinki at SemEval-2020 Task 12: Multilingual or language-specific BERT?

Marc Pàmies*, Emily Öhman, Kaisla Kajava, Jörg Tiedemann

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

Abstract

This paper presents the different models submitted by the LT@Helsinki team for SemEval-2020 Shared Task 12. Our team participated in sub-tasks A and C, titled offensive language identification and offense target identification, respectively. In both cases we used the so-called Bidirectional Encoder Representations from Transformers (BERT), a model pre-trained by Google and fine-tuned by us on the OLID dataset. The results show that offensive tweet classification is one of several language-based tasks where BERT can achieve state-of-the-art results.
Original language: English
Title of host publication: Proceedings of the 14th International Workshop on Semantic Evaluation
Publisher: International Committee on Computational Linguistics (ICCL)
Number of pages: 7
ISBN (Print): 978-1-952148-31-6
Publication status: Published - 2020
MoE publication type: A4 Conference publication
Event: International Workshop on Semantic Evaluation - Barcelona, Spain
Duration: 12 Dec 2020 - 13 Dec 2020
Internet address: http://alt.qcri.org/semeval2020/

Workshop

Workshop: International Workshop on Semantic Evaluation
Abbreviated title: SemEval
Country/Territory: Spain
City: Barcelona
Period: 12/12/2020 - 13/12/2020
Other: Collocated with The 28th International Conference on Computational Linguistics (COLING-2020)
