We propose an efficient method to learn a compact and discriminative dictionary for visual categorization, formulating dictionary learning as a graph-partition problem. First, an approximate kNN graph is computed efficiently on the data set using a divide-and-conquer strategy. Dictionary learning is then achieved by seeking a graph topology on the resulting kNN graph that maximizes a submodular objective function. Because the objective function is monotone and exhibits diminishing returns, it can be optimized by a fast greedy algorithm. Combining these two efficient ingredients yields a genuinely fast dictionary-learning algorithm, which is promising for large-scale...
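The greedy step this abstract relies on can be illustrated with a generic sketch. The concrete objective below (a facility-location function over a pairwise similarity matrix, such as one derived from a kNN graph) is an assumption for illustration, not the paper's actual objective; it is monotone and submodular, so the standard greedy algorithm carries a (1 - 1/e) approximation guarantee.

```python
import numpy as np

def greedy_dictionary(sim, k):
    """Greedy maximization of the facility-location objective
    f(A) = sum_i max_{j in A} sim[i, j], a monotone submodular
    function (assumed here as a stand-in for the paper's objective).

    sim: (n, n) pairwise similarity matrix, e.g. built from a kNN
         graph with zeros for non-neighbors.
    k:   number of dictionary atoms to select.
    """
    n = sim.shape[0]
    selected = []
    cover = np.zeros(n)  # best similarity each point has to the set so far
    for _ in range(k):
        # Marginal gain of adding candidate j: f(A ∪ {j}) - f(A).
        gains = np.maximum(sim, cover[:, None]).sum(axis=0) - cover.sum()
        gains[selected] = -np.inf  # never reselect an atom
        j = int(np.argmax(gains))
        selected.append(j)
        cover = np.maximum(cover, sim[:, j])
    return selected
```

Because the gains shrink monotonically as the selected set grows (diminishing returns), a lazy-greedy variant with a priority queue gives the same result with far fewer gain evaluations in practice.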
In this paper, we propose a novel information theoretic approach to obtain compact and discriminativ...
Classifiers based on sparse representations have recently been shown to provide excellent results in...
Many techniques in computer vision, machine learning, and statistics rely on the fact that a signal ...
Image categorization by means of bag of visual words has received increasing attention by the image ...
For the task of visual categorization, the learning model is expected to be endowed with discriminat...
The Bag-of-Visual-Words (BoVW) model is a popular approach for visual recognition. Used su...
Dictionary learning is a branch of signal processing and machine learning that...
We address the problem of learning a sparsifying synthesis dictionary over large datasets that occur...
In this paper, we propose a novel dictionary learning method in the semi-supervised setting by dynam...
In big data image/video analytics, we encounter the problem of learning an over-complete dictionary ...