In an influential paper, Marcus et al. [1999] claimed that connectionist models cannot account for human success at learning tasks that involved generalization of abstract knowledge such as grammatical rules. This claim triggered a heated debate, centered mostly around variants of the Simple Recurrent Network model [Elman, 1990]. In our work, we revisit this unresolved debate and analyze the underlying issues from a different perspective. We argue that, in order to simulate human-like learning of grammatical rules, a neural network model should not be used as a tabula rasa, but rather, the initial wiring of the neural connections and the experience acquired prior to the actual task should be incorporated into the model. We present two meth...
Over-parameterized neural models have become dominant in Natural Language Processing. Increasing the...
Children learn their mother tongue spontaneously and effortlessly through communicative interaction ...
In this paper, we show that standard feed-forward and recurrent neural networks fail to learn abstra...
In an influential paper (“Rule Learning by Seven-Month-Old Infants”), Marcus, Vijayan, Rao and Visht...
In an influential paper (“Rule Learning by Seven-Month-Old Infants”), Marcus, Vijayan, Rao and Visht...
We present a critical review of computational models of generalization of simple grammar-like rules,...
Echo state networks (ESNs) are recurrent neural networks that can be trained efficiently be...
The ability to learn abstractions and generalise is seen as the essence of human intelligence.7 Sinc...
Echo state networks (ESNs) are recurrent neural networks that can be trained efficiently because the...
7-month-old infants with sequences of syllables generated by an artificial grammar; the infants were...
The straightforward mapping of a grammar onto a connectionist architecture is to make each grammar s...
Echo state networks (ESNs) are recurrent neural networks that can be trained efficiently because the...
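Since several of the entries above describe echo state networks, a minimal sketch may help make the training scheme concrete: the recurrent "reservoir" weights stay fixed and random, and only a linear readout is fit. The toy task, hyperparameters, and variable names below are illustrative assumptions, not taken from any of the cited papers.

    # Minimal echo state network sketch (illustrative; not the cited papers' setup).
    # Key idea: reservoir weights W_in and W are random and fixed;
    # only the linear readout W_out is trained, here via ridge regression.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_res, n_out = 1, 100, 1
    spectral_radius, ridge = 0.9, 1e-6

    # Fixed random input and reservoir weights.
    W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
    W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))  # scale toward the echo state property

    def run_reservoir(inputs):
        """Collect reservoir states for an input sequence of shape (T, n_in)."""
        x = np.zeros(n_res)
        states = []
        for u in inputs:
            x = np.tanh(W_in @ u + W @ x)
            states.append(x.copy())
        return np.array(states)

    # Toy task (assumed): predict the next value of a sine wave.
    t = np.arange(0, 60, 0.1)
    u = np.sin(t)[:, None]
    X = run_reservoir(u[:-1])   # reservoir states for u_0 .. u_{T-2}
    Y = u[1:]                   # targets u_1 .. u_{T-1}

    # Train only the readout, in closed form.
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)
    print("train MSE:", float(np.mean((X @ W_out - Y) ** 2)))

Because only W_out is optimized, training reduces to a single linear solve, which is what makes ESNs cheap to train compared with fully backpropagated recurrent networks.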
Recent studies have shown that infants have access to what would seem to be highly useful language a...
One of the hallmarks of intelligent behaviour is the ability to generalise, i.e. to make use of info...
Systematic generalization is the ability to combine known parts into novel meaning; an important asp...