Constraint Grammar is a hand-crafted Transformer

Anssi Yli-Jyrä
Helsinki Centre for Digital Humanities (HELDIG), P.O. Box 24, 00014 University of Helsinki, Finland


In: Proceedings of the NoDaLiDa 2019 Workshop on Constraint Grammar - Methods, Tools and Applications, 30 September 2019, Turku, Finland

Linköping Electronic Conference Proceedings 168:9, pp. 45-49

NEALT Proceedings Series 43:9, pp. 45-49


Published: 2019-12-03

ISBN: 978-91-7929-918-7

ISSN: 1650-3686 (print), 1650-3740 (online)


Deep neural networks (DNNs) and linguistic rules currently sit at opposite ends of the spectrum of NLP technologies. Until recently, it has not been known how to combine these technologies most effectively, and they have therefore been pursued by largely disjoint research communities. In this presentation, I first recall that both Constraint Grammar (CG) and vanilla RNNs have finite-state properties. I then relate CG to Google’s Transformer architecture (with its two kinds of attention) and argue that there are significant similarities between these two seemingly unrelated architectures.
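To make the attention side of the comparison concrete, below is a minimal NumPy sketch of scaled dot-product self-attention, the core mechanism of the Transformer. This is an illustrative toy, not code from the paper; the analogy drawn in the comment (attention weights as soft context conditions over the same sentence, loosely like a CG rule's context tests) is an interpretation of the abstract's claim, not an established equivalence.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: every query position computes a
    softmax-weighted mixture of the value vectors at all key positions."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (n_q, n_k) compatibility scores
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over key positions
    return weights @ V                              # weighted sum of values

# Toy example: a "sentence" of 3 token positions with 4-dimensional states.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))

# Self-attention: queries, keys and values all come from the same sequence,
# so each position conditions on the rest of the sentence -- loosely analogous
# to a CG rule whose context conditions inspect the sentence it disambiguates.
out = scaled_dot_product_attention(X, X, X)
print(out.shape)  # (3, 4)
```

The second kind of attention in the Transformer, encoder-decoder attention, uses the same function but draws K and V from a different sequence than Q.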


Keywords: constraint grammar, finite-state capacity, recurrent neural networks, self-attention, attention, rule conditions, Transformer


