In Part I, Shrinkage Estimation, we let X ∼ N_p(θ, σ²I), where both θ and σ² are unknown. We consider estimation of θ under squared error loss. We develop sufficient conditions on prior densities under which the corresponding generalized Bayes estimators of θ are admissible. These conditions are analogous to the sufficient conditions of Brown & Hwang (1982), which, however, cover only the case in which σ² is known. To illustrate how hierarchical priors may be selected, we also apply these sufficient conditions to a widely used hierarchical Bayes model proposed by Maruyama & Strawderman (2005) and obtain a class of admissible and minimax generalized Bayes estimators of the normal mean θ. In Part II ...
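As background for the setup above (these formulas are not part of the abstract; they are a minimal sketch written for the known-σ² case treated by Brown & Hwang (1982), whereas the unknown-σ² problem requires a joint prior on (θ, σ²)), the loss, the marginal density, and the generalized Bayes estimator take the form

$$ L(\theta,\delta)=\lVert \delta-\theta\rVert^{2}, \qquad m_{\pi}(x)=\int_{\mathbb{R}^{p}} (2\pi\sigma^{2})^{-p/2}\exp\!\Bigl(-\tfrac{\lVert x-\theta\rVert^{2}}{2\sigma^{2}}\Bigr)\,\pi(\theta)\,d\theta, $$

$$ \delta_{\pi}(x)=E[\theta\mid x]=x+\sigma^{2}\,\nabla\log m_{\pi}(x), $$

so that sufficient conditions for admissibility stated in terms of the prior π can equivalently be read as conditions on the marginal m_π.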
A wide range of statistical problems involve estimation of means or conditional means of multidimens...
In linear regression problems with many predictors, penalized regression techniques are often used t...
In various applications, we deal with high-dimensional positive-valued data that often exhibits spar...
Parameter shrinkage is known to reduce fitting and prediction errors in linear models. When the vari...
We study the causal effect of winning an Oscar Award on an actor or actress’s survival. Does the inc...
This paper builds on a simple unified representation of shrinkage Bayes estimators based on hierarch...
In Part I titled Empirical Bayes Estimation, we discuss the estimation of a heteroscedastic multivar...
Sparsity is a standard structural assumption that is made while modeling high-dimensional statistica...
In this paper a variety of shrinkage methods for estimating unknown population parameters has been c...
Abstract: We propose a generalized double Pareto prior for Bayesian shrinkage estimation and inferen...