Conference article

Multilingual Probing of Deep Pre-Trained Contextual Encoders

Vinit Ravishankar
Language Technology Group, Department of Informatics, University of Oslo

Memduh Gökirmak
Institute of Formal and Applied Linguistics, Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic

Lilja Øvrelid
Language Technology Group, Department of Informatics, University of Oslo

Erik Velldal
Language Technology Group, Department of Informatics, University of Oslo


In: DL4NLP 2019. Proceedings of the First NLPL Workshop on Deep Learning for Natural Language Processing, 30 September 2019, University of Turku, Turku, Finland

Linköping Electronic Conference Proceedings 163:5, pp. 37-47

NEALT Proceedings Series 38:5, pp. 37-47


Published: 2019-09-27

ISBN: 978-91-7929-999-6

ISSN: 1650-3686 (print), 1650-3740 (online)

Abstract

Encoders that generate representations based on context have, in recent years, benefited from adaptations that allow for pre-training on large text corpora. Earlier work on evaluating fixed-length sentence representations has included the use of 'probing' tasks, which use diagnostic classifiers to attempt to quantify the extent to which these encoders capture specific linguistic phenomena. The principle of probing has also resulted in extended evaluations that include relatively newer word-level pre-trained encoders. We build on probing tasks established in the literature and comprehensively evaluate and analyse, from a typological perspective amongst others, multilingual variants of existing encoders on probing datasets constructed for 6 non-English languages. Specifically, we probe each layer of multiple monolingual RNN-based ELMo models, the transformer-based BERT's cased and uncased multilingual variants, and a variant of BERT that uses a cross-lingual modelling scheme (XLM).
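To make the layer-wise probing setup concrete, below is a minimal sketch of a diagnostic classifier trained on frozen encoder representations, using multilingual cased BERT (one of the encoders named in the abstract) via the Hugging Face `transformers` library together with `scikit-learn`. The mean-pooling step, the toy sentences, and the labels are illustrative assumptions for this sketch, not the paper's actual probing datasets or exact protocol.

```python
# Minimal sketch of layer-wise probing with a diagnostic classifier.
# Assumes the `transformers` and `scikit-learn` libraries; data is a toy
# placeholder, not the probing datasets used in the paper.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased",
                                  output_hidden_states=True)
model.eval()  # the encoder stays frozen; only the probe is trained

def sentence_representations(sentences, layer):
    """Mean-pool the frozen encoder's hidden states at a given layer
    (mean pooling is one common choice, assumed here for illustration)."""
    reps = []
    with torch.no_grad():
        for sent in sentences:
            inputs = tokenizer(sent, return_tensors="pt", truncation=True)
            hidden = model(**inputs).hidden_states[layer]  # (1, seq_len, dim)
            reps.append(hidden.mean(dim=1).squeeze(0).numpy())
    return np.stack(reps)

# Toy probing task: predict a sentence property from the representation.
sents = ["Dette er en setning .", "Er dette en setning ?"]
labels = [0, 1]  # e.g. declarative vs. interrogative (placeholder labels)

# Probe every layer (index 0 is the embedding layer); higher probe accuracy
# suggests the layer encodes the probed property more directly.
for layer in range(model.config.num_hidden_layers + 1):
    X = sentence_representations(sents, layer)
    probe = LogisticRegression(max_iter=1000).fit(X, labels)
    print(f"layer {layer}: accuracy {probe.score(X, labels):.2f}")
```

In a real probing experiment the diagnostic classifier would of course be evaluated on held-out data rather than its own training set; the loop above only shows the per-layer structure of the analysis.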

Keywords

multilingual BERT, ELMo, probing, XLM
