Humans can systematically generalize to novel compositions of existing concepts. Recent studies argue that neural networks appear inherently ineffective at this cognitive capacity, leading to a pessimistic view and a lack of attention to optimistic results. We revisit this controversial topic from the perspective of meaningful learning, an exceptional capability of humans to learn novel concepts by connecting them with known ones. We reassess the compositional skills of sequence-to-sequence models conditioned on the semantic links between new and old concepts. Our observations suggest that models can successfully one-shot generalize to novel concepts and compositions through semantic linking, either inductively or deductively. We demonstrat...
Over-parameterized neural models have become dominant in Natural Language Processing. Increasing the...
Neural networks have revolutionized language modeling and excelled in various downstream tasks. Howe...
Experiments in Artificial Language Learning have revealed much about the cognitive mechanisms un...
Neural networks drive the success of natural language processing. A fundamental property of language...
Generic unstructured neural networks have been shown to struggle on out-of-distribution compositiona...
Compositionality---the principle that the meaning of a complex expression is built from the meanings...
Systematic generalization is the ability to combine known parts into novel meaning; an important asp...
The power of human language and thought arises from systematic compositionality—the algebraic abilit...
In the last decade, deep artificial neural networks have achieved astounding performance in many nat...
Compositional generalization is a basic mechanism in human language learning, which current neural n...
Flexible neural sequence models outperform grammar- and automaton-based counterparts on a variety of...
Humans are remarkably flexible when understanding new sentences that include combinations of conce...
When writing programs, people have the ability to tackle a new complex task by decomposing it into s...
We present a neural-symbolic learning model of sentence production which displays strong semantic sy...
Despite the tremendous success, existing machine learning models still fall short of human-like syst...