Typology emerges from simplicity in representations and learning
Keywords: model theory, subregularity, grammatical inference, formal language theory, phonology, learning complexity

Abstract
We derive well-understood and well-studied subregular classes of formal languages purely from the computational perspective of algorithmic learning problems. We parameterise the learning problem along dimensions of representation and inference strategy. Of special interest are those classes of languages whose learning algorithms are necessarily not prohibitively expensive in space and time, since learners are often exposed to adverse conditions and sparse data. Learned natural language patterns are expected to be most like the patterns in these classes, an expectation supported by previous typological and linguistic research in phonology. A second result is that the learning algorithms presented here are completely agnostic to choice of linguistic representation. In the case of the subregular classes, the results fall out from traditional model-theoretic treatments of words and strings. The same learning algorithms, however, can be applied to model-theoretic treatments of other linguistic representations such as syntactic trees or autosegmental graphs, which opens a useful direction for future research.
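The abstract's point about learners that are cheap in space and time can be illustrated with a small, hypothetical sketch. The Python code below implements a strictly 2-local, string-extension-style learner that simply records the adjacent-symbol factors observed in positive data, so inference is linear in the size of the input. It is not taken from the paper; the names (`learn_sl2`, `BOUNDARY`) and the toy sample are assumptions made only for illustration.

```python
# Illustrative sketch (not from the paper): a strictly 2-local (SL2) learner.
# The grammar is just the set of attested bigrams over boundary-marked words,
# so learning requires a single linear pass over the positive data.

from typing import Iterable, Set, Tuple

BOUNDARY = "#"  # word-boundary symbol; an assumption of this sketch

def bigrams(word: str) -> Set[Tuple[str, str]]:
    """Return the set of adjacent-symbol factors of a boundary-marked word."""
    padded = [BOUNDARY] + list(word) + [BOUNDARY]
    return set(zip(padded, padded[1:]))

def learn_sl2(sample: Iterable[str]) -> Set[Tuple[str, str]]:
    """Collect every bigram observed in the positive sample; that set is the grammar."""
    grammar: Set[Tuple[str, str]] = set()
    for word in sample:
        grammar |= bigrams(word)
    return grammar

def generates(grammar: Set[Tuple[str, str]], word: str) -> bool:
    """A word is accepted iff all of its bigrams were observed during learning."""
    return bigrams(word) <= grammar

if __name__ == "__main__":
    # Hypothetical positive sample: words over {a, b} that happen to avoid "bb".
    g = learn_sl2(["ab", "aba", "ba"])
    print(generates(g, "abab"))  # True: every bigram of "abab" was attested
    print(generates(g, "abba"))  # False: the factor ("b", "b") was never seen
```

The same collect-the-observed-factors strategy extends, in principle, to factors over other model-theoretic representations, which is the direction the abstract gestures toward for trees and autosegmental graphs.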
DOI: https://doi.org/10.15398/jlm.v9i1.262
License
Copyright (c) 2021 Dakotah Jay Lambert, Jonathan Rawski, Jeffrey Heinz
This work is licensed under a Creative Commons Attribution 4.0 International License.