The linear coefficient in a partially linear model with confounding variables can be estimated using double machine learning (DML). However, this DML estimator has a two-stage least squares (TSLS) interpretation and may produce overly wide confidence intervals. To address this issue, we propose a regularization and selection scheme, regsDML, which leads to narrower confidence intervals. It selects either the TSLS DML estimator or a regularization-only estimator, whichever has the smaller estimated variance; the regularization-only estimator is tailored to have a low mean squared error. The regsDML estimator is fully data driven; it converges at the parametric rate, is asymptotically Gaussian distributed, and asymptotic…
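For orientation, here is a minimal sketch of the standard partially linear instrumental-variables setup behind such a TSLS interpretation; the notation (outcome Y, treatment X, covariates W, instrument A) is an assumption of this sketch, not taken from the paper:

    Y = X\beta_0 + g(W) + \varepsilon, \qquad E[\varepsilon \mid A, W] = 0,

    R_Y = Y - E[Y \mid W], \quad R_X = X - E[X \mid W], \quad R_A = A - E[A \mid W],

    \hat\beta_{\mathrm{DML}} = \frac{\sum_i R_{A,i} R_{Y,i}}{\sum_i R_{A,i} R_{X,i}} \qquad \text{(single-instrument case)}.

After partialling out W with machine-learning regressions, the DML estimator is TSLS of R_Y on R_X with instrument R_A. In this form it inherits the instability of TSLS when R_A and R_X are only weakly correlated, which is one way the overly wide confidence intervals mentioned above can arise.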
A connection between the general linear model (GLM) with frequentist statistical testing and machine learning…
The R package DoubleML implements the double/debiased machine learning framework of Chernozhukov et al. …
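To make the framework concrete, here is a from-scratch Python sketch of the cross-fitting recipe that DoubleML packages up, for the partially linear model y = d*theta + g(X) + eps. The learners, fold count, and variance formula are illustrative choices, not the package's API:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import KFold

    def dml_plr(y, d, X, n_folds=5, seed=0):
        """Cross-fitted DML for y = d*theta + g(X) + eps (sketch)."""
        res_y = np.empty_like(y, dtype=float)   # y - E[y | X], out of fold
        res_d = np.empty_like(d, dtype=float)   # d - E[d | X], out of fold
        folds = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
        for train, test in folds.split(X):
            ml_g = RandomForestRegressor(random_state=seed).fit(X[train], y[train])
            ml_m = RandomForestRegressor(random_state=seed).fit(X[train], d[train])
            res_y[test] = y[test] - ml_g.predict(X[test])
            res_d[test] = d[test] - ml_m.predict(X[test])
        theta = res_d @ res_y / (res_d @ res_d)   # solves the orthogonal score
        psi = (res_y - theta * res_d) * res_d     # score evaluated at theta
        se = np.sqrt(np.mean(psi**2)) / (np.mean(res_d**2) * np.sqrt(len(y)))
        return theta, se

A 95% confidence interval is then theta +/- 1.96*se, mirroring the intervals the package reports.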
Maximum Likelihood (ML) in the linear model overfits when the number of predictors (M) exceeds the number of samples (N)…
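A tiny numerical illustration of that failure mode (assumed setup, not taken from the paper): with M > N the least-squares problem is underdetermined, so unregularized ML interpolates the training data, while a ridge penalty restores a well-posed problem.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 50, 200                         # fewer samples than predictors
    X = rng.normal(size=(n, m))
    beta = np.zeros(m)
    beta[:5] = 1.0                         # sparse ground truth
    y = X @ beta + rng.normal(size=n)

    # rank(X) <= n < m: lstsq returns a minimum-norm solution that
    # fits the training data exactly -- zero residual, pure overfit.
    b_ml = np.linalg.lstsq(X, y, rcond=None)[0]
    print(np.linalg.norm(y - X @ b_ml))    # ~0

    # A ridge penalty makes the normal equations invertible again.
    b_ridge = np.linalg.solve(X.T @ X + 1.0 * np.eye(m), X.T @ y)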
We propose and study a unified procedure for variable selection in partially linear models. A new type…
We explore the validity of the 2-stage least squares estimator with l1-regularization in both stages…
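A compact sketch of that estimator, with scikit-learn's Lasso standing in for the l1-regularized regressions; tuning of the two penalties and post-estimation inference are omitted:

    import numpy as np
    from sklearn.linear_model import Lasso

    def l1_tsls(y, X, Z, alpha1=0.1, alpha2=0.1):
        """2SLS with l1-regularization in both stages (sketch).
        Stage 1: Lasso of each endogenous regressor on instruments Z.
        Stage 2: Lasso of y on the stage-1 fitted values."""
        X_hat = np.column_stack([
            Lasso(alpha=alpha1).fit(Z, X[:, j]).predict(Z)
            for j in range(X.shape[1])
        ])
        return Lasso(alpha=alpha2).fit(X_hat, y).coef_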
Recent advances in the machine learning literature provide a series of new algorithms that both address …
We propose regularization methods for linear models based on the Lq-likelihood, which is a generalization of the log-likelihood…
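For reference, the object being generalized, recalled from the maximum Lq-likelihood literature rather than quoted from this abstract: the q-logarithm replaces the logarithm in the likelihood,

    L_q(u) = \frac{u^{1-q} - 1}{1 - q} \quad (q \neq 1), \qquad \lim_{q \to 1} L_q(u) = \log u,

    \ell_q(\theta) = \sum_{i=1}^{n} L_q\big( f(x_i; \theta) \big),

so the Lq-likelihood of a sample x_1, ..., x_n with density f(.; theta) reduces to the ordinary log-likelihood as q -> 1.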