Thesis (Ph.D.)--University of Washington, 2017-08. In this dissertation, we present the results of our research on three topics: the design and analysis of online convex optimization algorithms, convergence-rate analysis of the proximal gradient homotopy algorithm for structured convex problems, and the application of computational methods to the study of brain cells in the visual cortex of the primate brain. In our work on online optimization, we develop a systematic approach, with a clear connection to regret minimization, for the design and worst-case analysis of online optimization algorithms. We apply this approach to online experiment design problems. Our results on the convergence rate analysis of proximal gradient homotopy algorithm ex...
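The dissertation's second topic is a proximal gradient homotopy algorithm for structured convex problems. As illustrative context only, here is a minimal single-stage proximal gradient sketch (ISTA) for the l1-regularized least-squares problem such methods typically target; the function names, fixed step size, and iteration count are our own assumptions, not details taken from the dissertation.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: shrinks each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, step, iters=200):
    """Proximal gradient (ISTA) for min_x 0.5 * ||Ax - b||^2 + lam * ||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                    # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)  # proximal step on the l1 part
    return x
```

A homotopy scheme, as analyzed in the dissertation, would wrap such iterations in an outer loop that gradually decreases the regularization parameter `lam`; the sketch above shows only the inner proximal gradient step.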
Abstract. Linear optimization is often algorithmically simpler than non-linear convex optimizat...
Tracking time-varying sparse signals is a recent problem with widespread applications. Techniques de...
Blackwell approachability is an online learning setup generalizing the classical problem of regret m...
Classical optimization techniques have found widespread use in machine learning. Convex optimization...
We study the rates of growth of the regret in online convex optimization. First, we show that a simp...
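Several of the abstracts here study the growth of regret in online convex optimization. As a hedged illustration of the basic setup, the sketch below runs projected online gradient descent with a 1/sqrt(t) step size on one-dimensional quadratic losses and measures regret against the best fixed action in hindsight; the loss family, step-size schedule, and function name are our own assumptions, not taken from any of the cited works.

```python
import numpy as np

def online_gradient_descent(targets, radius=1.0):
    """Projected OGD on 1-D losses f_t(x) = (x - z_t)^2 over [-radius, radius]."""
    x, total_loss = 0.0, 0.0
    for t, z in enumerate(targets, start=1):
        total_loss += (x - z) ** 2        # suffer f_t at the current play
        grad = 2.0 * (x - z)              # gradient of f_t at x
        # Gradient step with eta_t = 1/sqrt(t), projected back onto the interval.
        x = float(np.clip(x - grad / np.sqrt(t), -radius, radius))
    # Regret: cumulative loss minus that of the best fixed action in hindsight.
    best = float(np.clip(np.mean(targets), -radius, radius))
    return total_loss - sum((best - z) ** 2 for z in targets)
```

With the standard 1/sqrt(t) schedule, the regret of this scheme grows at most on the order of sqrt(T) over T rounds, which is the sublinear rate the worst-case analyses in these abstracts concern.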
This monograph presents the main mathematical ideas in convex optimization. Starting from the funda...
We develop an online gradient algorithm for optimizing the performance of product-form networks thro...
Spurred by the enthusiasm surrounding the "Big Data" paradigm, the mathematical ...
We present a unified, black-box-style method for developing and analyzing online convex optimization...
In this research we study some online learning algorithms in the online convex optimization framewor...
We study Online Convex Optimization in the unbounded setting where neither predictions nor gradient ...
The dissertation addresses the research topics of machine learning outlined below. We developed the ...