Inductive learning is impossible without overhypotheses, or constraints on the hypotheses considered by the learner. Some of these overhypotheses must be innate, but we suggest that hierarchical Bayesian models can help to explain how the rest are acquired. To illustrate this claim, we develop models that acquire two kinds of overhypotheses: overhypotheses about feature variability (e.g., the shape bias in word learning) and overhypotheses about the grouping of categories into ontological kinds like objects and substances.
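To make the feature-variability case concrete, here is a minimal sketch of how a hierarchical Bayesian learner can acquire an overhypothesis like the shape bias. It is an illustration, not the authors' implementation: it assumes a small set of discrete shape values, a fixed uniform base distribution beta over shapes, a grid prior over a single variability hyperparameter alpha, and made-up training data in which each category is internally homogeneous in shape. After seeing several such categories, the posterior favors small alpha, and a single exemplar of a new category then licenses strong generalization of its shape.

```python
# Minimal sketch (not the authors' code) of learning an overhypothesis about
# feature variability with a Dirichlet-multinomial hierarchy.
# Assumptions for illustration: K discrete shapes, a fixed uniform base
# distribution beta, and a grid prior over the variability parameter alpha
# (small alpha = categories tend to be uniform in shape).

import numpy as np
from scipy.special import gammaln

K = 5                                  # number of possible shape values
beta = np.full(K, 1.0 / K)             # base rates over shapes (fixed, uniform)
alphas = np.logspace(-2, 2, 200)       # grid over the variability hyperparameter
prior = np.ones_like(alphas) / len(alphas)

def log_marginal(counts, alpha):
    """Dirichlet-multinomial log marginal likelihood of one category's shape
    counts, up to a constant that does not depend on alpha."""
    a = alpha * beta
    return (gammaln(a.sum()) - gammaln(a.sum() + counts.sum())
            + np.sum(gammaln(a + counts) - gammaln(a)))

# Made-up training data: eight categories, each with four exemplars of a
# single shape, so every observed category is homogeneous in shape.
categories = [np.eye(K, dtype=int)[i % K] * 4 for i in range(8)]

log_post = np.log(prior) + sum(
    np.array([log_marginal(c, a) for a in alphas]) for c in categories)
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Overhypothesis learned: posterior mass shifts toward small alpha,
# i.e. "categories tend to be homogeneous in shape".
print("posterior mean alpha:", np.dot(post, alphas))

# Generalization from one exemplar of a brand-new category with shape 0:
# predictive probability that the next exemplar also has shape 0.
pred = np.dot(post, (alphas * beta[0] + 1.0) / (alphas + 1.0))
print("P(next exemplar shares the shape):", pred)
```

The grid approximation over alpha is used only to keep the example self-contained; any standard inference scheme over the hyperparameters would serve the same purpose.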