We propose a sparse non-negative image coding method based on simulated annealing and matrix pseudo-inversion. We show that sparsity and non-negativity are both important for obtaining a part-based coding, and we examine the impact of each on the resulting codes. In contrast with other approaches in the literature, our method can constrain both the weights and the basis vectors, generating part-based bases suitable for image recognition and fiducial point extraction. We also speed up the algorithm with a hybrid scheme that combines simulated annealing with pseudo-inverse matrix computation.
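A minimal sketch of how such a hybrid might look, assuming a squared reconstruction error with an L1 sparsity penalty: the weights are refreshed by a pseudo-inverse (least-squares) step, while the basis is explored by simulated annealing with a Metropolis acceptance rule. The function name, parameters, and the clipping-based non-negativity projection are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sparse_nn_coding(V, r, n_iters=2000, t0=1.0, cooling=0.995,
                     lambda_sparse=0.1, seed=None):
    """Sketch of hybrid annealing / pseudo-inverse coding (illustrative only).

    V : (m, n) non-negative data matrix, one image per column.
    r : number of basis vectors (parts).
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r))   # non-negative basis vectors (columns)
    H = rng.random((r, n))   # non-negative weights

    def cost(W, H):
        # reconstruction error plus L1 sparsity penalty (assumed objective)
        return (np.linalg.norm(V - W @ H) ** 2
                + lambda_sparse * (np.abs(W).sum() + np.abs(H).sum()))

    curr_cost = cost(W, H)
    T = t0
    for _ in range(n_iters):
        # pseudo-inverse step: least-squares weights, clipped to stay non-negative
        H_new = np.clip(np.linalg.pinv(W) @ V, 0.0, None)

        # annealing step: random non-negative perturbation of the basis
        W_new = np.clip(W + T * rng.normal(size=W.shape), 0.0, None)

        c = cost(W_new, H_new)
        # accept improvements always, worse moves with Boltzmann probability
        if c < curr_cost or rng.random() < np.exp((curr_cost - c) / max(T, 1e-12)):
            W, H, curr_cost = W_new, H_new, c
        T *= cooling             # geometric cooling schedule
    return W, H
```

In this sketch the pseudo-inverse step plays the role of the fast, deterministic update that the abstract attributes to the speedup, while annealing handles the non-convex search over basis vectors; the exact schedule and penalty used in the paper may differ.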