Learning reduplication with a neural network that lacks explicit variables

Authors

  • Brandon Prickett, University of Massachusetts Amherst
  • Aaron Traylor, Brown University
  • Joe Pater, University of Massachusetts Amherst

Keywords

neural networks, reduplication, symbolic computation, connectionism, generalization, phonology

Abstract

Reduplicative linguistic patterns have been used as evidence for explicit algebraic variables in models of cognition. Here, we show that a variable-free neural network can model these patterns in a way that predicts observed human behavior. Specifically, we successfully simulate the three experiments presented by Marcus et al. (1999), as well as Endress et al.’s (2007) partial replication of one of those experiments. We then explore the model’s ability to generalize reduplicative mappings to different kinds of novel inputs. Using Berent’s (2013) scopes of generalization as a metric, we claim that the model matches the scope of generalization that has been observed in humans. We argue that these results challenge past claims about the necessity of symbolic variables in models of cognition.
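
For context, the paradigm the abstract refers to can be made concrete with a short sketch. The Python snippet below builds Marcus-et-al.-style familiarization items from one syllable inventory and test items from entirely novel syllables, so that success requires generalizing the reduplicative pattern (ABA vs. ABB) itself rather than memorizing strings. It is a toy illustration of the experimental design only, not the authors' neural model, and the syllable pools are invented for the example.

    import itertools
    import random

    # Illustrative syllable inventories (invented for this sketch).
    A_SYLLABLES = ["ga", "li", "ni", "ta"]
    B_SYLLABLES = ["ti", "na", "gi", "la"]

    def make_item(a, b, pattern):
        """Return a three-syllable string following an ABA or ABB pattern."""
        return f"{a} {b} {a}" if pattern == "ABA" else f"{a} {b} {b}"

    # Familiarization items all follow one pattern (here, ABB)...
    train = [make_item(a, b, "ABB")
             for a, b in itertools.product(A_SYLLABLES, B_SYLLABLES)]

    # ...while test items use entirely novel syllables, contrasting a
    # pattern-consistent continuation with an inconsistent one.
    NOVEL_A, NOVEL_B = ["wo", "de"], ["fe", "ko"]
    test_consistent = [make_item(a, b, "ABB")
                       for a, b in itertools.product(NOVEL_A, NOVEL_B)]
    test_inconsistent = [make_item(a, b, "ABA")
                         for a, b in itertools.product(NOVEL_A, NOVEL_B)]

    print(random.sample(train, 3))
    print(test_consistent[:2])
    print(test_inconsistent[:2])

A model that prefers the pattern-consistent test items over the inconsistent ones despite the novel syllables is generalizing in the sense the abstract describes.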

DOI

https://doi.org/10.15398/jlm.v10i1.274

Published

2022-03-31

How to Cite

Prickett, B., Traylor, A., & Pater, J. (2022). Learning reduplication with a neural network that lacks explicit variables. Journal of Language Modelling, 10(1), 1–38. https://doi.org/10.15398/jlm.v10i1.274

Issue

Vol. 10 No. 1 (2022)

Section

Articles