We propose a novel approach to efficiently select informative samples for large-scale learning. Instead of directly feeding a learning algorithm with a very large number of samples, as is usually done to reach state-of-the-art performance, we have developed a “distillation” procedure that recursively reduces the size of an initial training set using a criterion that ensures the maximization of the information content of the selected subset. We demonstrate the performance of this procedure on two different computer vision problems. First, we show that distillation can be used to improve the traditional bootstrapping approach to object detection. Second, we apply distillation to a classification problem with artificial distortions. We s...
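As a rough illustration of the recursive selection loop described in the abstract, here is a minimal sketch in Python. The function names `distill` and `toy_score`, the parameters `keep_ratio` and `target_size`, and the margin-style scoring idea are all hypothetical stand-ins of ours; the truncated abstract does not specify the paper's actual information criterion or schedule.

```python
import numpy as np

def distill(samples, scores_fn, keep_ratio=0.5, target_size=1000):
    """Recursively shrink a training set, keeping the highest-scoring samples.

    samples:     array of training examples, one per row
    scores_fn:   callable returning one informativeness score per sample
                 (a stand-in for the paper's information criterion)
    keep_ratio:  fraction of the current set retained at each round
    target_size: recursion stops once the set is at most this large
    """
    if len(samples) <= target_size:
        return samples
    scores = scores_fn(samples)
    # Retain at least target_size samples, taking the top-scoring ones.
    n_keep = max(target_size, int(len(samples) * keep_ratio))
    top = np.argsort(scores)[-n_keep:]
    return distill(samples[top], scores_fn, keep_ratio, target_size)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100_000, 16))

    def toy_score(batch):
        # Placeholder criterion: random scores stand in for a real
        # informativeness measure (e.g. classifier margin or loss).
        return rng.random(len(batch))

    subset = distill(X, toy_score, keep_ratio=0.5, target_size=2_000)
    print(subset.shape)  # (2000, 16)
```

Note that each round rescoring happens on the surviving subset only, so the cost of the criterion shrinks geometrically with the recursion depth; that is the efficiency argument behind distilling rather than scoring the full set once.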
Efficient object detection methods have recently received great attention in remote se...
We address the problem of training Object Detection models using significantly less bounding box ann...
Classical Boosting algorithms, such as AdaBoost, build a strong classifier without concern for the c...
We present a distillation algorithm which operates on a large, unstructured, and noisy collection of...
Knowledge distillation aims at compressing deep models by transferring the lea...
There is a growing discrepancy in computer vision between large-scale models that achieve state-of-t...
Object detectors based on the sliding window technique are usually trained in two successive steps: ...
Knowledge Distillation (KD) is a well-known training paradigm in deep neural networks where knowledg...
Previous knowledge distillation (KD) methods for object detection mostly focus on feature imitation ...
In this paper we revisit the efficacy of knowledge distillation as a function matching and metric le...
Image super-resolution is one of the most appealing applications of image processing,...
Few-shot learning models learn representations with limited human annotations, and such a learning p...
To distinguish objects from non-objects in images under computational constraints, a suitable soluti...
This thesis concerns the problem of object detection, which is defined as finding all instances of a...