Being able to learn from small amounts of data is a key characteristic of human intelligence, but exactly how small? In this paper, we introduce a novel experimental paradigm that allows us to examine classification in an extremely data-scarce setting, asking whether humans can learn more categories than they have exemplars (i.e., can humans do "less-than-one shot" learning?). An experiment conducted using this paradigm reveals that people are capable of learning in such settings, and provides several insights into underlying mechanisms. First, people can accurately infer and represent high-dimensional feature spaces from very little data. Second, having inferred the relevant spaces, people use a form of prototype-based categorization (as ...
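To make the prototype-based categorization mentioned above concrete, here is a minimal illustrative sketch: each category is summarized by a single prototype point in the inferred feature space, and a new stimulus is assigned to the category whose prototype is nearest. The feature dimensions, category labels, coordinates, and Euclidean distance below are assumptions for illustration only, not the stimuli or fitted model from the experiment.

```python
import numpy as np

def classify(stimulus, prototypes):
    """Assign a stimulus to the category whose prototype is nearest
    in the (inferred) feature space, using Euclidean distance."""
    names = list(prototypes)
    dists = [np.linalg.norm(stimulus - prototypes[name]) for name in names]
    return names[int(np.argmin(dists))]

# Hypothetical 3-dimensional feature space with three category prototypes.
prototypes = {
    "A": np.array([0.0, 0.0, 0.0]),
    "B": np.array([1.0, 0.0, 0.5]),
    "C": np.array([0.5, 1.0, 0.0]),
}

print(classify(np.array([0.9, 0.1, 0.4]), prototypes))  # -> "B"
```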
This paper presents a computational model of the way humans inductively identi...
When different stimuli belong to the same category, learning about their attributes should be guided...
Models of category learning often assume that exemplar features are learned in proportion to how muc...
The curse of dimensionality, which has been widely studied in statistics and machine learning, occur...
Basic decisions, such as judging a person as a friend or foe, involve categorizing novel stimuli. Re...
Deep learning has recently driven remarkable progress in several applications, including image class...
From just a single example, we can derive quite precise intuitions about what other class members lo...
Three experiments compared the learning of lower-dimensional family resemblance categories (4 dimens...
Progress in studying human categorization has typically involved comparing generalization judgments...
We investigated human category learning from partial information provided as equivalence constraints...
Few-shot learning is often motivated by the ability of humans to learn new tas...
Learning to categorize objects in the world is more than just learning the specific facts that chara...
Understanding how humans and machines recognize novel visual concepts from few examples remains a...
In coming to understand the world—in learning concepts, acquiring language, and grasping causal rela...