We argue that human inductive generalization is best explained in a Bayesian framework, rather than by traditional models based on similarity computations. We go beyond previous work on Bayesian concept learning by introducing an unsupervised method for constructing flexible hypothesis spaces, and we propose a version of the Bayesian Occam's razor that trades off priors and likelihoods to prevent under- or over-generalization in these flexible spaces. We analyze two published data sets on inductive reasoning as well as the results of a new behavioral study that we have carried out.
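To make the prior/likelihood tradeoff concrete, here is a minimal sketch of Bayesian generalization with a size-principle likelihood, p(X | h) = 1/|h|^n for n consistent examples. The hypothesis space, the prior values, and the number-game-style domain (concepts over the integers 1..100) are illustrative assumptions, not the stimuli or hypothesis spaces used in the paper; the sketch only shows how small consistent hypotheses come to dominate as examples accumulate, which is one way a Bayesian Occam's razor blocks over-generalization.

```python
# Sketch of Bayesian generalization with the size principle.
# Assumed toy domain: concepts over the integers 1..100.
hypotheses = {
    "even":        {n for n in range(1, 101) if n % 2 == 0},
    "powers_of_2": {2, 4, 8, 16, 32, 64},
    "10_to_20":    set(range(10, 21)),
}
prior = {"even": 0.5, "powers_of_2": 0.3, "10_to_20": 0.2}  # assumed values

def posterior(examples):
    """Posterior over hypotheses given positive examples."""
    scores = {}
    for name, h in hypotheses.items():
        if all(x in h for x in examples):
            # Size principle: p(X | h) = 1/|h|^n, so smaller
            # consistent hypotheses receive higher likelihood.
            scores[name] = prior[name] / (len(h) ** len(examples))
        else:
            scores[name] = 0.0
    z = sum(scores.values())  # assumes at least one consistent hypothesis
    return {name: s / z for name, s in scores.items()}

def p_generalize(y, examples):
    """Probability that a new item y falls under the concept,
    averaging over the posterior."""
    post = posterior(examples)
    return sum(p for name, p in post.items() if y in hypotheses[name])

print(posterior([16]))         # one example: mass spread over hypotheses
print(posterior([16, 8, 2]))   # three examples: "powers_of_2" dominates
print(p_generalize(90, [16, 8, 2]))  # 90 is even but not a power of 2: low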
Inductive Logic Programming (ILP) involves the construction of first-order definite clause theories fro...
I consider the problem of learning concepts from small numbers of positive examples, a feat which h...
A major problem in machine learning is that of inductive bias: how to choose a learner's hypothesis...
Many of the central problems of cognitive science are problems of induction, calling for uncertain i...
We introduce a tractable family of Bayesian generalization functions. The family extends the basic m...
Inductive inference allows humans to make powerful generalizations from sparse data when learning ab...
We present a model of inductive inference that includes, as special cases, Bayesian reasoning, case-...
Both intensional and extensional background knowledge have previously been used in inductive problem...
This paper argues that Bayesian probability theory is a general method for machine learning. From tw...
Everyday inductive reasoning draws on many kinds of knowledge, including knowledge about relationshi...
The underlying idea behind the adaptive logics of inductive generalization is that most inductive re...