In this paper, we show that standard feed-forward and recurrent neural networks fail to learn abstract patterns based on identity rules. We propose Repetition Based Pattern (RBP) extensions to neural network structures that solve this problem and answer, as well as raise, questions about integrating structures for inductive bias into neural networks. Examples of abstract patterns are the sequence patterns ABA and ABB, where A or B can be any object. These were introduced by Marcus et al. (1999), who also found that 7-month-old infants recognise these patterns in sequences that use an unfamiliar vocabulary while simple recurrent neural networks do not. This result has been contested in the literature but it is c...
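A minimal sketch of the abstract patterns described above may help: ABA and ABB are identity rules over token positions, not over specific tokens, so a sequence in an unfamiliar vocabulary can still satisfy them. The helper names below are hypothetical, chosen only for illustration.

```python
def make_sequence(pattern, a, b):
    """Instantiate an abstract pattern such as 'ABA' with concrete tokens."""
    return [a if symbol == "A" else b for symbol in pattern]

def follows_pattern(seq, pattern):
    """Check whether a 3-token sequence obeys the identity rule of `pattern`."""
    a, b = seq[0], seq[1]
    return seq == make_sequence(pattern, a, b) and a != b

# A training item and a test item drawn from an unfamiliar vocabulary:
train = make_sequence("ABA", "ga", "ti")  # ['ga', 'ti', 'ga']
test = make_sequence("ABA", "wo", "fe")   # ['wo', 'fe', 'wo']

print(follows_pattern(test, "ABA"))  # True: same rule, new tokens
print(follows_pattern(test, "ABB"))  # False
```

The point of the generalisation test is exactly this: recognising that `['wo', 'fe', 'wo']` obeys the same rule as `['ga', 'ti', 'ga']` requires comparing positions for identity, which standard networks trained on token features do not learn from the training vocabulary alone.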
Recurrent neural networks (RNNs) are a useful tool for sequence labelling tasks in natural language p...
Recurrent neural networks are nowadays successfully used in an abundance of applications, going from...
Observations of various deep neural network architectures indicate that deep networks may be spontan...
The ability to learn abstractions and generalise is seen as the essence of human intelligence. Sinc...
Deep neural networks have been widely used for various applications and have produced state-of-the-a...
Learning abstract and systematic relations has been an open issue in neural network learning for ove...
Many researchers implicitly assume that neural networks learn relations and generalise them to new u...
In an influential paper (“Rule Learning by Seven-Month-Old Infants”), Marcus, Vijayan, Rao and Visht...
In an influential paper, Marcus et al. [1999] claimed that connectionist models cannot account for h...
Basic binary relations such as equality and inequality are fundamental to relational data structures...
Sequence processing involves several tasks such as clustering, classification, prediction, and trans...
The majority of current applications of neural networks are concerned with problems in pattern recog...
Infants can discriminate between familiar and unfamiliar grammatical patterns expressed in a vocabul...