A central concern for many learning algorithms is how to efficiently store what the algorithm has learned. We present an algorithm for the compression of Nonnegative Matrix Factorizations (NMF). Compression is achieved by embedding the factorization in an encoding routine. Performance is investigated on two standard test images, Peppers and Barbara. The proposed method achieves a compression ratio of 18:1, substantially reducing the storage cost of Nonnegative Matrix Factorizations without significantly degrading reconstruction accuracy (approximately 1–3 dB of degradation is introduced). We learn as before, but storage is cheaper.
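To make the pipeline concrete, the following is a minimal Python sketch of the general idea: factor an image with NMF, uniformly quantize the factors, and compare storage cost and PSNR against the original. This is an illustration under stated assumptions, not the paper's algorithm: the exact encoding routine is not reproduced here, and the quantizer, factorization rank, and stand-in image are all assumptions. The rank is chosen so the factor storage lands near the 18:1 figure quoted above; the paper's actual rank is not given.

```python
# Hedged sketch: NMF-based image compression via quantized factors.
# Assumptions (not from the paper): uniform 8-bit scalar quantization,
# rank 7, and a synthetic nonnegative test image standing in for
# Peppers/Barbara, which are not bundled with NumPy/scikit-learn.
import numpy as np
from sklearn.decomposition import NMF

# Synthetic smooth nonnegative 256x256 test image in [0, 255].
x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
img = 255.0 * (0.5 + 0.5 * np.sin(8 * x) * np.cos(5 * y)) ** 2
img = np.clip(img, 0, 255)

rank = 7  # assumed; for a 256x256 image, 256*256 / (2*256*7) ~ 18.3:1
model = NMF(n_components=rank, init="nndsvda", max_iter=500)
W = model.fit_transform(img)   # (256, rank), nonnegative
H = model.components_          # (rank, 256), nonnegative

def quantize(M, bits=8):
    """Uniform scalar quantization of a nonnegative matrix to `bits` bits."""
    levels = 2 ** bits - 1
    scale = M.max() / levels if M.max() > 0 else 1.0
    return np.round(M / scale).astype(np.uint8), scale

Wq, w_scale = quantize(W)
Hq, h_scale = quantize(H)
recon = (Wq * w_scale) @ (Hq * h_scale)

# Storage: original at 8 bits/pixel vs. quantized factors at 8 bits/entry.
orig_bits = img.size * 8
comp_bits = (Wq.size + Hq.size) * 8
print(f"compression ratio ~ {orig_bits / comp_bits:.1f}:1")

# PSNR of the reconstruction (peak value 255).
mse = np.mean((img - recon) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)
print(f"PSNR of quantized-NMF reconstruction: {psnr:.1f} dB")
```

The trade-off the abstract describes falls out of the rank: a smaller rank stores fewer factor entries (higher compression ratio) but lowers PSNR, which is where a degradation on the order of 1–3 dB relative to the unquantized factorization would show up.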