A number of experiments have demonstrated an apparent bias in human phonological learning for patterns that are simpler according to Formal Language Theory (Finley and Badecker 2008; Lai 2015; Avcu 2018). This paper demonstrates that a sequence-to-sequence neural network (Sutskever et al. 2014), which has no such restriction explicitly built into its architecture, can successfully capture this bias. These results suggest that a bias for patterns that are simpler according to Formal Language Theory may not need to be explicitly incorporated into models of phonological learning.
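For concreteness, the sketch below illustrates the general encoder-decoder ("sequence-to-sequence") architecture class cited above (Sutskever et al. 2014), written in PyTorch. It is not the model, data, or training setup reported in this paper; the GRU cells, hidden size, toy symbol inventory, and single teacher-forced training step are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal encoder-decoder over symbol IDs (illustrative sketch, not the paper's model)."""
    def __init__(self, vocab_size, hidden_size=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.encoder = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.decoder = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, src, tgt):
        # Encode the input string into a fixed-size hidden state.
        _, h = self.encoder(self.embed(src))
        # Decode the output string conditioned on that state.
        dec_out, _ = self.decoder(self.embed(tgt), h)
        return self.out(dec_out)  # logits over the output alphabet

# Toy usage: strings over a hypothetical 10-symbol segment inventory, mapped to
# output strings of the same length (e.g. a harmony-like transformation).
vocab_size, batch, length = 10, 32, 6
model = Seq2Seq(vocab_size)
src = torch.randint(0, vocab_size, (batch, length))
tgt = torch.randint(0, vocab_size, (batch, length))
logits = model(src, tgt)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, vocab_size), tgt.reshape(-1))
loss.backward()  # one step of backpropagation
```

In a realistic training setup the decoder input would be the target sequence shifted right behind a start symbol, and decoding at test time would be autoregressive; those details are omitted here for brevity.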