Abstract We consider the moving least-squares (MLS) method within the regression learning framework, under the assumption that the sampling process satisfies the α-mixing condition. We conduct a rigorous error analysis by using probability inequalities for dependent samples in the error estimates. When the dependent samples satisfy an exponential α-mixing condition, we derive a satisfactory learning rate and error bound for the algorithm.
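The abstract does not spell out the estimator, but in its standard form MLS regression fits, around each query point, a local polynomial by weighted least squares and evaluates that polynomial at the query point; the α-mixing assumption concerns only the dependence of the samples, not the form of the estimator. The sketch below is a minimal illustration under assumed choices (Gaussian weights, bandwidth h, local degree 1, all hypothetical here), not the paper's exact construction.

```python
import numpy as np

def mls_estimate(x, X, y, h=0.5, degree=1):
    """Moving least-squares estimate at a query point x.

    Fits a local polynomial of the given degree by weighted least
    squares, with weights decaying in |x - x_i| at bandwidth h,
    then evaluates the fitted polynomial at x.
    """
    # Gaussian weights centered at the query point (one common choice).
    w = np.exp(-((X - x) ** 2) / (2 * h ** 2))
    # Polynomial design matrix in the centered variable (x_i - x),
    # so the intercept of the local fit is the estimate at x.
    P = np.vander(X - x, N=degree + 1, increasing=True)
    W = np.diag(w)
    # Solve the weighted normal equations (P^T W P) c = P^T W y.
    coef, *_ = np.linalg.lstsq(P.T @ W @ P, P.T @ W @ y, rcond=None)
    return coef[0]

# Toy usage: noisy samples of sin on [0, 3].
rng = np.random.default_rng(0)
X = rng.uniform(0, 3, size=200)
y = np.sin(X) + 0.1 * rng.standard_normal(200)
print(mls_estimate(1.5, X, y))  # close to sin(1.5) ≈ 0.997
```

The centered design matrix is a common implementation trick: because the polynomial basis is expressed in (x_i − x), the value of the local fit at x is simply its intercept coefficient.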