In high dimensions, most machine learning methods become brittle when even a small fraction of the data are outliers. To address this, we introduce a new method built on a base learner, such as Bayesian regression or stochastic gradient descent, to mitigate this vulnerability. Because mini-batch gradient descent converges more robustly than batch gradient descent, we build our method around mini-batch gradient descent and call it Mini-Batch Gradient Descent with Trimming (MBGDT). Our method achieves state-of-the-art performance and greater robustness than several baselines when applied to our designed datasets.
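As a rough illustration of the idea (not the paper's exact algorithm), the sketch below applies mini-batch gradient descent to least-squares linear regression while trimming the highest-loss samples from each mini-batch before the update. The trimming rule (drop the top `trim_frac` fraction by squared error), the hyperparameter values, and the helper name `mbgdt_linear_regression` are illustrative assumptions, not the authors' specification.

```python
import numpy as np

def mbgdt_linear_regression(X, y, lr=0.01, batch_size=32, trim_frac=0.1,
                            epochs=100, seed=0):
    """Mini-batch gradient descent for least-squares regression that trims
    the highest-loss samples in each mini-batch before the parameter update."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        perm = rng.permutation(n)
        for start in range(0, n, batch_size):
            idx = perm[start:start + batch_size]
            Xb, yb = X[idx], y[idx]
            residuals = Xb @ w + b - yb
            # Keep the samples with the smallest squared errors; the rest
            # (likely outliers) are excluded from this update.
            n_keep = max(1, int(np.ceil(len(idx) * (1.0 - trim_frac))))
            keep = np.argsort(residuals ** 2)[:n_keep]
            Xk, rk = Xb[keep], residuals[keep]
            # Gradient of the mean squared error over the kept samples only.
            w -= lr * (2.0 / n_keep) * (Xk.T @ rk)
            b -= lr * (2.0 / n_keep) * rk.sum()
    return w, b

# Example: recover a linear model from data containing a few gross outliers.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=500)
y[:10] += 50.0                      # inject a small fraction of outliers
w_hat, b_hat = mbgdt_linear_regression(X, y)
```

Because the trimmed samples contribute nothing to the gradient, a few corrupted points cannot pull the update arbitrarily far, which is the intuition behind the robustness claim above.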