We study adaptive information for approximation of linear problems in a separable Hilbert space equipped with a probability measure μ. It is known that adaption does not help in the worst case for linear problems. We prove that adaption also does not help on the average. That is, there exists nonadaptive information which is as powerful as adaptive information. This result holds for "orthogonally invariant" measures. We provide necessary and sufficient conditions for a measure to be orthogonally invariant. Examples of orthogonally invariant measures include Gaussian measures and, in the finite-dimensional case, weighted Lebesgue measures.