Abstract. Stochastic gradient methods are effective for solving matrix factorization problems. However, it is well known that the performance of a stochastic gradient method depends heavily on the learning-rate schedule used; a good schedule can significantly boost the training process. In this paper, motivated by past work on convex optimization that assigns a learning rate to each variable, we propose a new schedule for matrix factorization. The experiments demonstrate that the proposed schedule leads to faster convergence than existing ones. Our schedule uses the same parameter on all data sets included in our experiments, so the time spent on learning-rate selection can be significantly reduced. By applying this schedule to a state-of-the-art matrix factorization package, the resulting implementation outperforms available parallel matrix factorization packages.
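To make the per-variable idea concrete, below is a minimal sketch, not the paper's exact schedule: it assumes an AdaGrad-style per-coordinate rule in which every entry of the factor matrices keeps its own accumulated squared-gradient history and therefore its own decaying step size. All names (`mf_sgd_per_coordinate`, `eta0`, `lam`) are illustrative and not taken from the paper or any package.

```python
import numpy as np

def mf_sgd_per_coordinate(ratings, k=16, eta0=0.1, lam=0.05, epochs=20, seed=0):
    """SGD for matrix factorization with a per-coordinate learning rate.

    ratings : list of (user, item, value) triples.
    Returns P (users x k) and Q (items x k) with value ~= P[u] @ Q[i].
    """
    rng = np.random.default_rng(seed)
    n_users = 1 + max(u for u, _, _ in ratings)
    n_items = 1 + max(i for _, i, _ in ratings)
    P = 0.1 * rng.standard_normal((n_users, k))
    Q = 0.1 * rng.standard_normal((n_items, k))
    # One accumulator per model variable: each coordinate gets its own
    # step size that shrinks as that coordinate keeps receiving gradient.
    GP = np.full_like(P, 1e-8)
    GQ = np.full_like(Q, 1e-8)

    for _ in range(epochs):
        rng.shuffle(ratings)               # visit ratings in random order
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]
            gp = -err * Q[i] + lam * P[u]  # gradient w.r.t. P[u]
            gq = -err * P[u] + lam * Q[i]  # gradient w.r.t. Q[i]
            GP[u] += gp * gp
            GQ[i] += gq * gq
            P[u] -= eta0 / np.sqrt(GP[u]) * gp
            Q[i] -= eta0 / np.sqrt(GQ[i]) * gq
    return P, Q

ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 2.0), (2, 1, 1.0)]
P, Q = mf_sgd_per_coordinate(ratings, k=4, epochs=50)
print(round(float(P[0] @ Q[0]), 2))        # should approach 5.0
```

Compared with a single global rate such as eta0/sqrt(t), this kind of rule lets rarely updated rows keep large steps while frequently updated rows cool down quickly, which is the behavior the per-variable schedules from the convex-optimization literature exploit.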
Short version of https://arxiv.org/abs/1709.01427. When applied to training deep...
We consider the application of stochastic gradient descent (SGD) to the nonnegative matrix factorization...
Stochastic gradient descent is the method of choice for solving large-scale optimization problems in...
Matrix factorization is known to be an effective method for recommender systems that are given only ...
Generalized Matrix Learning Vector Quantization (GMLVQ) critically relies on the use of an optimization...
Matrix factorization (MF) has been attracting much attention due to its wide applications. However, ...
Recent work has established an empirically successful framework for adapting learning rates for stochastic...
As Web 2.0 and enterprise-cloud applications have proliferated, data mining algorithms increasingly ...
The performance of stochastic gradient descent (SGD) depends critically on how learning rates are ...
The purpose of this text is to provide an accessible introduction to a set of recently developed alg...
Gradient-based optimization algorithms, in particular their stochastic counterparts, have become by ...
In this paper, a stochastic gradient descent algorithm is proposed for the binary classification...
The paper looks at a scaled variant of the stochastic gradient descent algorithm for th...
A construct that has been receiving attention recently in reinforcement learning is stochastic facto...
We present the first accelerated randomized algorithm for solving linear systems...