Generalized Matrix Learning Vector Quantization (GMLVQ) relies critically on an optimization algorithm to train its model parameters. We test several schemes for automated control of the learning rate in gradient-based training and evaluate them in terms of both the performance they achieve and their practical feasibility. We find that some schemes consistently outperform others across multiple benchmark datasets, producing GMLVQ models that not only fit the training data better but also generalize better on validation data. In particular, we find that the Variance-based Stochastic Gradient Descent (vSGD) algorithm performs best across all experiments.
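To illustrate what "variance-based" control of the learning rate means here, the following is a minimal sketch of a vSGD-style per-parameter rate (after Schaul, Zhang & LeCun, 2013) on a toy noisy quadratic where the curvature is known; it is not the paper's implementation, and the fixed EMA decay `beta` and all variable names are illustrative assumptions (vSGD proper also adapts its averaging memory and estimates the curvature).

```python
import numpy as np

# Hedged sketch of a vSGD-style adaptive learning rate on a noisy
# quadratic toy loss f(w) = 0.5 * sum(h_true * (w - w_opt)**2).
# The curvature h_true is known here; vSGD estimates it in practice.
rng = np.random.default_rng(0)
h_true = np.array([1.0, 10.0])   # diagonal curvature of the toy loss
w_opt  = np.array([3.0, -2.0])   # minimizer of the toy loss
w      = np.zeros(2)

g_bar = np.zeros(2)    # exponential moving average (EMA) of the gradient
v_bar = np.ones(2)     # EMA of the squared gradient
h_bar = h_true.copy()  # curvature estimate (assumed known in this sketch)
beta  = 0.1            # fixed EMA decay; vSGD proper adapts its memory size

for t in range(500):
    # Stochastic gradient: true gradient plus Gaussian noise
    g = h_true * (w - w_opt) + rng.normal(0.0, 1.0, size=2)
    g_bar = (1 - beta) * g_bar + beta * g
    v_bar = (1 - beta) * v_bar + beta * g**2
    # vSGD rate: eta_i = g_bar_i^2 / (h_bar_i * v_bar_i); it shrinks
    # automatically as the gradient's signal-to-noise ratio drops
    eta = g_bar**2 / (h_bar * v_bar + 1e-12)
    w -= eta * g

print(w)  # approaches w_opt without any hand-tuned learning rate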