In this paper we consider a dual gradient method for solving linear ill-posed problems $Ax = y$, where $A : X \to Y$ is a bounded linear operator from a Banach space $X$ to a Hilbert space $Y$. A strongly convex penalty function is used in the method to select a solution with the desired features. Under variational source conditions on the sought solution, convergence rates are derived when the method is terminated by either an {\it a priori} stopping rule or the discrepancy principle. We also consider an acceleration of the method as well as its various applications.
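The iteration described above can be illustrated with a minimal sketch. Assuming the quadratic penalty $R(x) = \tfrac{1}{2}\|x\|^2$ (under which the dual gradient step reduces to a Landweber-type update) and a finite-dimensional matrix $A$, the method with discrepancy-principle stopping might look as follows; the function name, parameter names, and default values are hypothetical illustrations, not the authors' implementation:

```python
import numpy as np

def dual_gradient(A, y_delta, delta, tau=1.1, mu=None, max_iter=10000):
    """Dual gradient iteration for A x = y with noisy data y_delta,
    where ||y_delta - y|| <= delta.

    With the quadratic penalty R(x) = ||x||^2 / 2, the dual step
    reduces to the Landweber-type update
        x_{k+1} = x_k - mu * A^T (A x_k - y_delta).
    Iteration stops by the discrepancy principle once
        ||A x_k - y_delta|| <= tau * delta   (tau > 1).
    All names and defaults here are illustrative assumptions.
    """
    if mu is None:
        # Step size must satisfy mu < 2 / ||A||^2 for convergence.
        mu = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        r = A @ x - y_delta                  # current residual
        if np.linalg.norm(r) <= tau * delta:
            break                            # discrepancy principle met
        x = x - mu * (A.T @ r)               # gradient step
    return x
```

On exact data ($\delta \to 0$) the stopping criterion forces the residual toward zero; for noisy data the early stopping acts as the regularization, consistent with the convergence-rate analysis sketched in the abstract.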
The convergence behavior of gradient methods for minimizing convex differentiable functions is one o...
We consider stopping rules in conjugate gradient type iteration methods for solving linear ill-posed...
In this paper we propose a distributed dual gradient algorithm for minimizing linearly constrained s...
© 2017 Informa UK Limited, trading as Taylor & Francis Group. We suggest simple implementable modif...
The gradient projection algorithm plays an important role in solving constrained convex minimization...
In this paper we introduce a new primal-dual technique for convergence analysis of gradient schemes ...
Under the error bound assumption, we establish the linear convergence rate of a gradient projection ...
We describe an algorithm for optimization of a smooth function subject to general linear constrain...
The standard assumption for proving linear convergence of first order methods for smooth convex opti...
The importance of an adequate inner loop starting point (as opposed to a sufficient inner loop stopp...
Let T be a bounded linear operator from one Hilbert space to another. A class of gradient me...
In this paper we propose the gradient descent type methods to solve convex optimization problems in ...
In this paper, we present an analysis of the convergence rate of gradient descent with a varying ste...
In this paper, we propose the gradient descent type methods to solve convex optimization problems in...