Conference article

Is Multilingual BERT Fluent in Language Generation?

Samuel Rönnqvist
TurkuNLP, Department of Future Technologies, University of Turku, Finland

Jenna Kanerva
TurkuNLP, Department of Future Technologies, University of Turku, Finland

Tapio Salakoski
TurkuNLP, Department of Future Technologies, University of Turku, Finland

Filip Ginter
TurkuNLP, Department of Future Technologies, University of Turku, Finland

Published in: DL4NLP 2019. Proceedings of the First NLPL Workshop on Deep Learning for Natural Language Processing, 30 September 2019, University of Turku, Turku, Finland

Linköping Electronic Conference Proceedings 163:4, p. 29-36

NEALT Proceedings Series 38:4, p. 29-36

Published: 2019-09-27

ISBN: 978-91-7929-999-6

ISSN: 1650-3686 (print), 1650-3740 (online)

Abstract

The multilingual BERT model is trained on 104 languages and meant to serve as a universal language model and tool for encoding sentences. We explore how well the model performs on several languages across several tasks: a diagnostic classification probing the embeddings for a particular syntactic property, a cloze task testing the language modelling ability to fill in gaps in a sentence, and a natural language generation task testing for the ability to produce coherent text fitting a given context. We find that the currently available multilingual BERT model is clearly inferior to the monolingual counterparts, and cannot in many cases serve as a substitute for a well-trained monolingual model. We find that the English and German models perform well at generation, whereas the multilingual model is lacking, in particular, for Nordic languages. The code of the experiments in the paper is available at: https://github.com/TurkuNLP/bert-eval
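The cloze evaluation described above can be illustrated with a minimal sketch (not the paper's own code, which is in the linked repository): it loads the pretrained multilingual BERT checkpoint via the Hugging Face transformers fill-mask pipeline and asks the model to fill a masked token in sentences from different languages; the example sentences are assumptions for illustration only.

```python
from transformers import pipeline

# Cloze-style probe: ask multilingual BERT to fill in a masked token.
fill = pipeline("fill-mask", model="bert-base-multilingual-cased")

sentences = [
    "Helsinki is the capital of [MASK].",    # English
    "Helsingfors är huvudstaden i [MASK].",  # Swedish
]

for sentence in sentences:
    print(sentence)
    # Top 3 candidate fillers with their probabilities.
    for pred in fill(sentence, top_k=3):
        print(f"  {pred['token_str']}  (score={pred['score']:.3f})")
```

Comparing the ranked fillers across languages gives a rough, qualitative sense of the gap between the model's English and Nordic-language performance that the paper quantifies.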

Keywords

natural language generation, BERT, Nordic languages, Finnish, Swedish, German, English, multilingual, generation, comparison
