We study the asymptotic behavior of the logistic classifier in an abstract Hilbert space and establish its consistency under realistic conditions on the distribution of the data. The number k_n of parameters estimated by maximum quasi-likelihood is allowed to diverge, provided that k_n/n → 0 and n τ_{k_n}^4 → ∞, where n is the number of observations and τ_{k_n} is the variance of the last principal component of the data used for estimation. To the best of our knowledge, this is the only consistency result for the logistic classifier available so far when the data are assumed to come from a Hilbert space.
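As a minimal illustration of the kind of classifier studied here (a sketch under assumptions of my own, not the paper's exact estimator), the snippet below projects discretized curves onto their first k_n empirical principal components, fits a logistic model on the resulting scores, and classifies by thresholding the fitted probability at 1/2. The helper names (fit_pc_logistic, predict_pc_logistic), the synthetic data, and the use of scikit-learn are illustrative choices only; an essentially unpenalized maximum-likelihood fit stands in for the maximum quasi-likelihood estimator, with which it coincides for the binary logistic model.

# Illustrative sketch only: a plug-in logistic classifier for discretized
# Hilbert-space (functional) data, built from the first k_n empirical
# principal components. Helper names are hypothetical.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

def fit_pc_logistic(X, y, k_n):
    # X: (n, p) curves observed on a common grid; y: (n,) labels in {0, 1}.
    pca = PCA(n_components=k_n)
    scores = pca.fit_transform(X)              # scores on the first k_n PCs
    # A large C makes the fit essentially unpenalized maximum likelihood,
    # which coincides with maximum quasi-likelihood for the binary logit.
    clf = LogisticRegression(C=1e6, max_iter=1000).fit(scores, y)
    return pca, clf

def predict_pc_logistic(pca, clf, X_new):
    # Plug-in rule: label 1 whenever the estimated P(Y = 1 | X) exceeds 1/2.
    return clf.predict(pca.transform(X_new))

# Toy usage with rough Brownian-motion-like synthetic curves.
rng = np.random.default_rng(0)
n, p, k_n = 200, 100, 5                        # k_n small relative to n
X = rng.standard_normal((n, p)).cumsum(axis=1)
y = (X[:, -1] + rng.standard_normal(n) > 0).astype(int)
pca, clf = fit_pc_logistic(X, y, k_n)
print(predict_pc_logistic(pca, clf, X[:5]), y[:5])

Choosing k_n in this sketch is the practical analogue of the rate condition above: taking too many components relative to n, or components whose variance τ_{k_n} is too small, destabilizes the quasi-likelihood fit.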
Bayesian belief nets (BNs) are often used for classification tasks — typically to return the most li...
In recent years, functional linear models have attracted growing attention in statistics and machine...
The scores returned by support vector machines are often used as confidence measures in the classi...
We study maximum penalized likelihood estimation for logistic regression type problems. T...
We study point separation for the logistic regression model for Hilbert space-v...
In this paper we give a survey of the combination of classifiers. We briefly describe basic principl...
The main ideas behind the classic multivariate logistic regression model make sense when translated ...
Direct use of the likelihood function typically produces severely biased estimates when the dimensio...
The talk will focus on the problem of finite-sample null hypothesis significance testing on the mea...
Many classification algorithms are designed on the assumption that the population of interest is sta...
Let {X, X_n; n ≥ 1} be a sequence of i.i.d. random variables taking values in a real separabl...
We present a general modelling method for optimal probability prediction over future observations, i...
We investigate the generalisation performance of consistent classifiers, i.e. classifiers that are c...
We consider the least-square regression problem with regularization by a block 1-norm, i.e., a sum o...