We study generalization properties of random features (RF) regression in high dimensions optimized by stochastic gradient descent (SGD). In this regime, we derive precise non-asymptotic error bounds for RF regression under both the constant and polynomial-decay step-size SGD settings, and observe the double descent phenomenon both theoretically and empirically. Our analysis shows how to cope with multiple sources of randomness (initialization, label noise, and data sampling, as well as stochastic gradients) in the absence of a closed-form solution, and also goes beyond the commonly used Gaussian/spherical data assumption. Our theoretical results demonstrate that, with SGD training, RF regression still generalizes well for interpolation learning, and is able ...
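As a rough illustration of the setting described above (not the paper's exact construction), the following Python sketch trains a random-features regression model by single-pass SGD under either a constant or a polynomial-decay step size and reports the test mean squared error. The data model, the ReLU feature map, the dimensions, and the step-size parameters are all illustrative assumptions.

# Minimal sketch of RF regression trained by single-pass SGD.
# All hyperparameters and the synthetic teacher below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

d, N, n, n_test = 50, 400, 2000, 1000      # input dim, random features, train/test sizes
noise_std = 0.1

# Synthetic data from a simple linear teacher (illustrative assumption).
w_star = rng.standard_normal(d) / np.sqrt(d)
X = rng.standard_normal((n, d))
y = X @ w_star + noise_std * rng.standard_normal(n)
X_test = rng.standard_normal((n_test, d))
y_test = X_test @ w_star

# Random features phi(x) = relu(W x / sqrt(d)), with W fixed at initialization.
W = rng.standard_normal((N, d))
def features(X):
    return np.maximum(X @ W.T / np.sqrt(d), 0.0)

Phi, Phi_test = features(X), features(X_test)

def sgd_rf(step0, decay=0.0):
    """Single-pass SGD on the squared loss; step at time t is step0 / (t + 1)**decay."""
    theta = np.zeros(N)                     # zero initialization of the outer-layer weights
    for t in range(n):
        phi_t, y_t = Phi[t], y[t]
        grad = (phi_t @ theta - y_t) * phi_t        # stochastic gradient of 0.5 * (f(x) - y)^2
        theta -= step0 / (t + 1) ** decay * grad
    return theta

# step0 is scaled with the number of features N to keep the updates stable (an assumption).
for label, decay in [("constant step size", 0.0), ("polynomial-decay step size", 0.5)]:
    theta = sgd_rf(step0=1.0 / N, decay=decay)
    test_mse = np.mean((Phi_test @ theta - y_test) ** 2)
    print(f"{label}: test MSE = {test_mse:.4f}")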
Sketching and stochastic gradient methods are arguably the most common techniques to derive efficien...
Stochastic Gradient Descent-Ascent (SGDA) is one of the most prominent algorithms for solving min-ma...
Interpolators -- estimators that achieve zero training error -- have attracted growing attention in ...
We prove a non-asymptotic distribution-independent lower bound for the expected mean squared general...
Recent theoretical studies illustrated that kernel ridgeless regression can guarantee good generaliz...
[previously titled "Theory of Deep Learning III: Generalization Properties of SGD"] In Theory III we...
The stochastic gradient descent (SGD) algorithm has been widely used in statistical estimation for l...
Stochastic descent methods (of the gradient and mirror varieties) have become increasingly popular i...
Stochastic Gradient Descent (SGD) is an out-of-equilibrium algorithm used extensively to train artif...
From the sampling of data to the initialisation of parameters, randomness is ubiquitous in modern Ma...
The observation that stochastic gradient descent (SGD) favors flat minima has played a fundamental r...
We study the scaling limits of stochastic gradient descent (SGD) with constant step-size in the high...
In classical statistics, the bias-variance trade-off describes how varying a model's complexity (e.g...
Understanding the implicit bias of Stochastic Gradient Descent (SGD) is one of the key challenges in...