In this thesis we study several properties of Learning Vector Quantization (LVQ), a nonparametric detection scheme proposed in the neural network community by Kohonen. We examine it in detail, both theoretically and experimentally, to determine its properties as a nonparametric classifier. In particular, we study the convergence of the parameter adjustment rule in LVQ, present a modification to LVQ that improves the convergence of the algorithm, show that LVQ performs as well as other classifiers on two sets of simulations, and show that the classification error associated with LVQ can be made arbitrarily small.
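The parameter adjustment rule referred to above is, in its basic (LVQ1) form, a winner-takes-all update: the prototype nearest to a training sample is pulled toward the sample when their labels agree and pushed away otherwise. Below is a minimal sketch in Python/NumPy of that plain LVQ1 rule with a geometrically decaying learning rate; the function names, initialization, and learning-rate schedule are illustrative assumptions, not the modified algorithm developed in the thesis.

```python
import numpy as np

def lvq1_fit(X, y, prototypes, proto_labels, lr=0.05, epochs=20, decay=0.98):
    """Basic LVQ1 (winner-takes-all) training.

    X: (n, d) training samples, y: (n,) labels,
    prototypes: (k, d) initial codebook, proto_labels: (k,) prototype labels.
    """
    W = np.asarray(prototypes, dtype=float).copy()
    proto_labels = np.asarray(proto_labels)
    for _ in range(epochs):
        for x, label in zip(np.asarray(X, dtype=float), np.asarray(y)):
            # Winner-takes-all: only the nearest prototype is updated.
            winner = np.argmin(np.linalg.norm(W - x, axis=1))
            # Attract the winner if the labels agree, repel it otherwise.
            sign = 1.0 if proto_labels[winner] == label else -1.0
            W[winner] += sign * lr * (x - W[winner])
        lr *= decay  # shrink the step size each epoch
    return W

def lvq_predict(X, prototypes, proto_labels):
    """Nearest-prototype classification of the rows of X."""
    X = np.asarray(X, dtype=float)
    P = np.asarray(prototypes, dtype=float)
    d = np.linalg.norm(X[:, None, :] - P[None, :, :], axis=2)
    return np.asarray(proto_labels)[np.argmin(d, axis=1)]
```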
Prototype-based algorithms are commonly used to reduce the computational complexity of Nearest-Nei...
In this paper we describe OSLVQ (Optimum-Size Learning Vector Quantization), an algorithm for traini...
Winner-Takes-All (WTA) prescriptions for learning vector quantization (LVQ) are studied in the frame...
Kohonen's Learning Vector Quantization (LVQ) is a neural network architecture that performs nonparam...
Learning vector quantization (LVQ) schemes constitute intuitive, powerful classification heuristics ...
Learning vector quantization (LVQ) constitutes a powerful and intuitive method for adaptive nearest ...
The field of machine learning concerns the design of algorithms to learn and recognize complex patte...
Learning vector quantization (LVQ) constitutes a powerful and simple method for adaptive nearest pro...
Combined compression and classification problems are becoming increasingly important in many applica...
Learning vector quantization (LVQ) is one of the most powerful approaches for prototype-based classi...
Kohonen's Learning Vector Quantization (LVQ) is modified by attributing training counters to ea...