Typology emerges from simplicity in representations and learning

Authors

  • Dakotah Jay Lambert, Stony Brook University
  • Jonathan Rawski, San José State University
  • Jeffrey Heinz, Stony Brook University

Keywords

model theory, subregularity, grammatical inference, formal language theory, phonology, learning complexity

Abstract

We derive well-understood and well-studied subregular classes of formal languages purely from the computational perspective of algorithmic learning problems. We parameterise the learning problem along dimensions of representation and inference strategy. Of special interest are those classes of languages whose learning algorithms are necessarily not prohibitively expensive in space and time, since learners are often exposed to adverse conditions and sparse data. Learned natural language patterns are expected to be most like the patterns in these classes, an expectation supported by previous typological and linguistic research in phonology. A second result is that the learning algorithms presented here are completely agnostic to the choice of linguistic representation. In the case of the subregular classes, the results fall out from traditional model-theoretic treatments of words as strings. The same learning algorithms, however, can be applied to model-theoretic treatments of other linguistic representations such as syntactic trees or autosegmental graphs, which opens a useful direction for future research.
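
To make the abstract's notion of an inexpensive, representation-driven learner concrete, here is an illustrative sketch only, not the authors' algorithm: a small Python learner for a Strictly 2-Local pattern, which generalises from positive data simply by recording the attested adjacent-symbol pairs, with '#' marking word boundaries. The function names (factors, learn_sl, accepts) and the toy sample are invented for this example.

    # Illustrative sketch (not the paper's algorithm): a Strictly 2-Local (SL2)
    # learner that generalises from positive examples by recording the set of
    # attested adjacent-symbol pairs (2-factors), with '#' marking word edges.

    def factors(word, k=2, edge="#"):
        """Return the set of k-factors of a word padded with boundary symbols."""
        padded = edge * (k - 1) + word + edge * (k - 1)
        return {padded[i:i + k] for i in range(len(padded) - k + 1)}

    def learn_sl(sample, k=2):
        """Learn an SL-k grammar: the union of all k-factors seen in the sample."""
        grammar = set()
        for word in sample:
            grammar |= factors(word, k)
        return grammar

    def accepts(grammar, word, k=2):
        """A word is accepted iff every one of its k-factors was attested."""
        return factors(word, k) <= grammar

    if __name__ == "__main__":
        # Toy positive data: strings over {a, b} with no 'bb' substring.
        sample = ["ab", "aba", "aab", "ba"]
        g = learn_sl(sample, k=2)
        print(accepts(g, "abab"))   # True: all 2-factors were observed
        print(accepts(g, "abba"))   # False: 'bb' was never observed

The learner runs in time linear in the size of the positive sample and stores only a finite set of factors, the kind of modest space and time cost the abstract singles out as typologically relevant.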

DOI

https://doi.org/10.15398/jlm.v9i1.262

Published

2021-08-17

How to Cite

Lambert, D. J., Rawski, J., & Heinz, J. (2021). Typology emerges from simplicity in representations and learning. Journal of Language Modelling, 9(1), 151–194. https://doi.org/10.15398/jlm.v9i1.262