Iterative methods that take steps in approximate subgradient directions have proved to be useful for stochastic learning problems over large or streaming data sets. When the objective consists of a loss function plus a nonsmooth regularization term, whose purpose is to induce structure (for example, sparsity) in the solution, the solution often lies on a low-dimensional manifold along which the regularizer is smooth. This paper shows that a regularized dual averaging algorithm can identify this manifold with high probability. This observation motivates an algorithmic strategy in which, once a near-optimal manifold is identified, we switch to an algorithm that searches only in this manifold, which typically has much lower intrinsic dimension ...
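To make the strategy described in this abstract concrete, the sketch below takes the l1-regularized case, where the manifold is simply the sparsity pattern of the iterate. It applies the standard closed-form dual averaging step for an l1 regularizer (soft-thresholding the running average of stochastic subgradients) and a simple heuristic that declares the manifold identified once the support has been unchanged for a fixed number of steps. This is a minimal illustration, not the paper's actual procedure: the function rda_l1, the grad_oracle interface, and the parameters lam, gamma, and stable_window are assumptions made for the example.

```python
import numpy as np


def rda_l1(grad_oracle, dim, lam=0.1, gamma=5.0, max_iter=5000, stable_window=50, seed=0):
    """Sketch of l1-regularized dual averaging (RDA) with a support-stability check.

    Each step soft-thresholds the running average of stochastic subgradients,
    so coordinates whose average gradient is small are set exactly to zero.
    For the l1 regularizer the "manifold" is the sparsity pattern of the
    iterate; once that pattern stops changing for `stable_window` consecutive
    steps we stop and report it, mimicking the identify-then-switch strategy.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)
    g_bar = np.zeros(dim)              # running average of subgradients
    support, unchanged = frozenset(), 0

    for t in range(1, max_iter + 1):
        g = grad_oracle(x, rng)        # stochastic subgradient of the loss at x
        g_bar += (g - g_bar) / t       # g_bar = (1/t) * sum of subgradients so far

        # Closed-form RDA step for lam*||x||_1 with a (gamma/sqrt(t)) proximal term:
        #   x_i = -(sqrt(t)/gamma) * sign(g_bar_i) * max(|g_bar_i| - lam, 0)
        shrink = np.sign(g_bar) * np.maximum(np.abs(g_bar) - lam, 0.0)
        x = -(np.sqrt(t) / gamma) * shrink

        new_support = frozenset(np.flatnonzero(x).tolist())
        unchanged = unchanged + 1 if new_support == support else 0
        support = new_support
        if unchanged >= stable_window:
            break                      # sparsity pattern stabilized: switch phase

    return x, sorted(support)


# Hypothetical usage: noisy least-squares subgradients for a sparse ground truth.
dim = 100
x_true = np.zeros(dim)
x_true[:5] = 1.0


def grad_oracle(x, rng):
    a = rng.standard_normal(dim)
    y = a @ x_true + 0.01 * rng.standard_normal()
    return (a @ x - y) * a


x_hat, manifold = rda_l1(grad_oracle, dim, lam=0.1, gamma=5.0)
```

Once the support stabilizes, a practical implementation would switch to an optimizer restricted to those coordinates, which is the lower-dimensional search phase the abstract describes.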
We develop new stochastic optimization methods that are applicable to a wide range of structured reg...
We propose an automatic approximation of the intrinsic manifold for general semi-supervised learning...
A central problem in statistical learning is to design prediction algorithms that not only perform w...
Iterative methods that calculate their steps from approximate subgradient directions have proved to ...
In this paper, an online learning algorithm is proposed as sequential stochastic approximation of a ...
In this paper, some new probabilistic upper bounds are presented for the online learning ...
Recent advances in stochastic learning, such as dual averaging schemes for prox...
Recent advances in stochastic optimization and regularized dual averaging approaches revealed a subs...
Traditional learning algorithms use only labeled data for training. However, labeled examples are of...
We introduce a new algorithm, extended regularized dual averaging (XRDA), for solving regularized st...
Many settings of unsupervised learning can be viewed as quantization problems - the minimization of ...