Kohonen's Learning Vector Quantization (LVQ) is a neural network architecture that performs nonparametric classification. It classifies observations by comparing them to k templates called Voronoi vectors. The locations of these vectors are determined from past labeled data through a learning algorithm. When learning is complete, the class of a new observation is the same as the class of the closest Voronoi vector. Hence LVQ is similar to nearest neighbors, except that instead of searching all past observations, only the k Voronoi vectors are searched. In this paper, we show that the LVQ learning algorithm converges to locally asymptotically stable equilibria of an ordinary differential equation. We show that the learning algorithm pe...
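The scheme described above can be illustrated with a minimal sketch of the classical LVQ1 rule: the winning (nearest) prototype is attracted toward a training sample of the same class and repelled from a sample of a different class, and new observations are then labeled by their nearest prototype. This is a generic illustration, not the exact variant analyzed in any one of the papers listed here; the function names and the learning rate `lr` are our own choices.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=20):
    """LVQ1: move the winning prototype toward same-class samples
    and away from different-class samples."""
    W = prototypes.astype(float).copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            # Winner = nearest prototype under Euclidean distance.
            i = np.argmin(np.linalg.norm(W - x, axis=1))
            if proto_labels[i] == label:
                W[i] += lr * (x - W[i])   # attract
            else:
                W[i] -= lr * (x - W[i])   # repel
    return W

def lvq1_predict(X, W, proto_labels):
    """Classify each sample by the label of its nearest prototype."""
    dists = np.linalg.norm(W[None, :, :] - X[:, None, :], axis=2)
    return np.asarray(proto_labels)[np.argmin(dists, axis=1)]
```

Note that, unlike k-NN, prediction cost depends only on the number of prototypes, not on the size of the training set.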
Winner-Takes-All (WTA) prescriptions for learning vector quantization (LVQ) are studied in the frame...
Prototype-based algorithms are commonly used to reduce the computational complexity of Nearest-Nei...
In this thesis we study several properties of Learning Vector Quantization. LVQ is a nonparametric d...
Learning vector quantization (LVQ) schemes constitute intuitive, powerful classification heuristics...
Learning vector quantization (LVQ) constitutes a powerful and simple method for adaptive nearest pro...
Learning vector quantization (LVQ) constitutes a powerful and intuitive method for adaptive nearest ...
Kohonen's Learning Vector Quantization (LVQ) is modified by attributing training counters to ea...
In this paper we describe OSLVQ (Optimum-Size Learning Vector Quantization), an algorithm for traini...
A novel encoding technique is proposed for the recognition of patterns using four different techniqu...
In this paper we analyze the convergence properties of a class of self-organizing neural networks, i...
The nearest neighbor (NN) classifiers, especially the k-NN algorithm, are among the simplest and yet...