The lack of interpretability remains a key barrier to the adoption of deep models in many applications. In this work, we explicitly regularize deep models so that human users might step through the process behind their predictions in little time. Specifically, we train deep time-series models so their class-probability predictions have high accuracy while being closely modeled by decision trees with few nodes. Using intuitive toy examples as well as medical tasks for treating sepsis and HIV, we demonstrate that this new tree regularization yields models that are easier for humans to simulate than models trained with simpler L1 or L2 penalties, without sacrificing predictive power.
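The core idea can be illustrated with a minimal sketch: fit a small decision tree to mimic a model's predictions and use the tree's size as an interpretability penalty. This is only an illustrative proxy, assuming scikit-learn; the actual method trains against average decision-path length via a differentiable surrogate, which is not reproduced here.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def tree_complexity(inputs, predictions, max_depth=5):
    """Fit a shallow decision tree to mimic the model's hard predictions,
    then return its node count as a rough interpretability penalty.
    (Illustrative proxy only; the paper penalizes average path length
    through a differentiable surrogate during training.)"""
    tree = DecisionTreeClassifier(max_depth=max_depth)
    tree.fit(inputs, predictions)
    return tree.tree_.node_count

# Toy example: a "model" whose decision boundary is simply x0 > 0.5.
rng = np.random.default_rng(0)
X = rng.random((200, 2))
y_hat = (X[:, 0] > 0.5).astype(int)  # stand-in for deep-model predictions
penalty = tree_complexity(X, y_hat)  # one split suffices: 3 nodes
```

A decision boundary this simple is captured by a single split (one root plus two leaves), so the penalty is small; a model with a convoluted boundary would require a larger mimic tree and incur a larger penalty, which is what the regularizer discourages.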