Presented on February 11, 2019, at 11:00 a.m. as part of the ARC12 Distinguished Lecture in the Klaus Advanced Computing Building, Room 1116. Tuo Zhao is an assistant professor in the H. Milton Stewart School of Industrial and Systems Engineering and the School of Computational Science and Engineering at Georgia Tech. His current research focuses on developing a new generation of optimization algorithms with statistical and computational guarantees, as well as user-friendly open source software for machine learning and scientific computing. Runtime: 25:25. Stochastic Gradient Descent-type (SGD) algorithms have been widely applied to many non-convex optimization problems in machine learning, e.g., training deep neural networks, variationa...
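As a concrete reference point for the SGD-type methods the talk abstract refers to, here is a minimal sketch of a plain stochastic gradient loop in Python. The least-squares loss, synthetic data, and constant step size are illustrative assumptions, not taken from the lecture; the same update template applies unchanged to non-convex losses such as deep networks.

```python
# Minimal SGD sketch (illustrative; loss, data, and constants are assumptions).
# Minimizes an averaged loss f(w) = mean_i l_i(w) by stepping along the
# gradient of one uniformly sampled term per iteration.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 10))      # synthetic data matrix (assumption)
b = rng.normal(size=100)            # synthetic targets (assumption)

def grad_i(w, i):
    """Gradient of the i-th term l_i(w) = 0.5 * (A[i] @ w - b[i])**2."""
    return (A[i] @ w - b[i]) * A[i]

w = np.zeros(10)
lr = 0.01                           # constant step size (assumption)
for t in range(1000):
    i = rng.integers(len(b))        # sample one data point uniformly
    w -= lr * grad_i(w, i)          # stochastic gradient step
```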
Thesis: Ph.D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2...
© 2019 Massachusetts Institute of Technology. For nonconvex optimization in machine learning, this a...
In this article we study fully-connected feedforward deep ReLU ANNs with an arbitrarily large number...
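For concreteness, the following is a minimal sketch of the kind of fully-connected feedforward ReLU network the article studies. The depth, layer widths, and Gaussian initialization here are illustrative assumptions, not the article's setup.

```python
# Forward pass through a fully-connected feedforward ReLU network
# (illustrative; widths, depth, and initialization are assumptions).
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward(x, weights, biases):
    """ReLU after every hidden layer; the output layer is affine."""
    for W, c in zip(weights[:-1], biases[:-1]):
        x = relu(W @ x + c)
    return weights[-1] @ x + biases[-1]

rng = np.random.default_rng(0)
widths = [5, 16, 16, 1]             # input, two hidden layers, output
weights = [rng.normal(size=(m, n)) / np.sqrt(n)
           for n, m in zip(widths[:-1], widths[1:])]
biases = [np.zeros(m) for m in widths[1:]]
y = forward(rng.normal(size=5), weights, biases)
```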
The dissertation addresses the research topics of machine learning outlined below. We developed the ...
Classical optimization techniques have found widespread use in machine learning. Convex optimization...
Machine learning has become one of the most exciting research areas in the world, with various appli...
In the past decade, deep neural networks have solved ever more complex tasks across many fronts in...
We introduce a generic scheme to solve nonconvex optimization problems using gradient-based algorith...
Learning a deep neural network requires solving a challenging optimization problem: it is a high-dim...
Nonconvex minimax problems appear frequently in emerging machine learning applications, such as gene...
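To make the minimax template concrete, here is a minimal simultaneous gradient descent-ascent (GDA) sketch on a toy quadratic saddle problem min_x max_y f(x, y). The objective and step size are illustrative assumptions; GAN training applies the same descend-in-x, ascend-in-y pattern to nonconvex losses.

```python
# Simultaneous gradient descent-ascent on a toy saddle problem
# (illustrative; the objective and step size are assumptions).

def f(x, y):
    return 0.5 * x**2 + x * y - 0.5 * y**2   # convex in x, concave in y

x, y, lr = 1.0, -1.0, 0.1
for t in range(200):
    gx = x + y                               # df/dx
    gy = x - y                               # df/dy
    x, y = x - lr * gx, y + lr * gy          # descend in x, ascend in y
print(x, y)                                  # approaches the saddle (0, 0)
```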
In this thesis, we theoretically analyze the ability of neural networks trained by gradient descent ...
We consider the fundamental problem in nonconvex optimization of efficiently reaching a stationary p...
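As a concrete computational reading of "reaching a stationary point", the sketch below runs gradient descent on an illustrative nonconvex objective until the gradient norm falls below a tolerance eps, i.e., until an eps-stationary point is found. The objective and constants are assumptions, not from the paper.

```python
# Gradient descent to eps-stationarity (illustrative; the nonconvex
# objective f(w) = sum((w**2 - 1)**2) and all constants are assumptions).
import numpy as np

def grad(w):
    """Gradient of f(w) = sum((w**2 - 1)**2)."""
    return 4.0 * w * (w**2 - 1.0)

w = np.array([0.5, -2.0, 1.5])
lr, eps = 0.01, 1e-6
while np.linalg.norm(grad(w)) > eps:    # eps-stationarity test
    w -= lr * grad(w)
```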
Machine learning and reinforcement learning have achieved tremendous success in solving problems in ...
Large-scale machine learning problems can be reduced to non-convex optimization problems if state-of...