Modelling a subregular bias in phonological learning with Recurrent Neural Networks
Keywords: neural networks, learning bias, Formal Language Theory, phonology
A number of experiments have demonstrated what appears to be a bias in human phonological learning towards patterns that are simpler according to Formal Language Theory (Finley and Badecker 2008; Lai 2015; Avcu 2018). This paper demonstrates that a sequence-to-sequence neural network (Sutskever et al. 2014), which has no such restriction explicitly built into its architecture, can successfully capture this bias. These results suggest that a bias for patterns that are simpler according to Formal Language Theory may not need to be explicitly incorporated into models of phonological learning.
Copyright (c) 2021 Journal of Language Modelling
This work is licensed under a Creative Commons Attribution 4.0 International License.