Interpretability is often cited as a key requirement for trustworthy machine learning. However, learning and releasing models that are inherently interpretable leaks information about the underlying training data. Since such disclosure may directly conflict with privacy, precisely quantifying the privacy impact of such a breach is a fundamental problem. For instance, previous work has shown that the structure of a decision tree can be leveraged to build a probabilistic reconstruction of its training dataset, with the uncertainty of the reconstruction being a relevant metric for the information leaked. In this paper, we propose a novel framework generalizing these probabilistic reconstructions in the sense that it can handle oth...
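To make the decision-tree leakage concrete: each root-to-leaf path fixes the values of the features it tests, so the released structure alone constrains what the training examples routed to each leaf can look like. Below is a minimal illustrative sketch, not the paper's framework: assuming a scikit-learn DecisionTreeClassifier trained on binary features, it recovers the per-leaf constraints and counts how many candidate reconstructions remain per example under a uniform prior. The toy dataset and all names are hypothetical.

```python
# Illustrative sketch only: recover per-leaf feature constraints from a
# released scikit-learn decision tree over binary (0/1) features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(100, 5))      # hypothetical binary dataset
y = (X[:, 0] ^ X[:, 2]).astype(int)        # hypothetical target
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
t = clf.tree_

def leaf_constraints(node=0, fixed=None):
    """Yield (leaf_id, {feature: value}) for every leaf, where the dict
    records the binary features fixed along the root-to-leaf path."""
    fixed = dict(fixed or {})
    if t.children_left[node] == -1:        # no children: node is a leaf
        yield node, fixed
        return
    f = int(t.feature[node])
    # A 0/1 feature split at threshold 0.5: the left branch (X[f] <= 0.5)
    # fixes the feature to 0, the right branch fixes it to 1.
    yield from leaf_constraints(t.children_left[node], {**fixed, f: 0})
    yield from leaf_constraints(t.children_right[node], {**fixed, f: 1})

d = X.shape[1]
for leaf, fixed in leaf_constraints():
    n = int(t.n_node_samples[leaf])
    free = d - len(fixed)
    # With `free` unconstrained binary features, each of the n training
    # examples routed to this leaf has 2**free equally likely candidates:
    # a crude per-leaf measure of reconstruction uncertainty.
    print(f"leaf {leaf}: {n} examples, {len(fixed)} features fixed, "
          f"2**{free} candidates per example")
```

A leaf whose path fixes many features leaves few candidates per example, i.e., a low-uncertainty reconstruction, which is why deeper trees tend to leak more about their training data.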
Machine learning models are often trained on sensitive and proprietary datasets. Yet what -- and und...
A large body of work shows that machine learning (ML) models can leak sensitive or confidential info...
Model explanations provide transparency into a trained machine learning model’...
This thesis describes contributions to the field of interpretable models in probabilistic machine le...
We look at a specific aspect of model interpretability: models often need to be constrained in size ...
A wide variety of model explanation approaches have been...
We investigate an attack on a machine learning model that predicts whether a person or household wil...
In this paper, we i...
A salient approach to interpretable machine learning is to restrict modeling to simple models. In th...
Machine learning models are increasingly utilized across impactful domains to predict individual out...
Interpretable predictions, which clarify why a machine learning model makes a particular decision, c...
Models leak information about their training data. This enables attackers to infer sensitive informa...
As automated decision-making systems are increasingly deployed in areas with personal and societal i...
In recent years, there has been an increasing involvement of artificial intelligence and machine lea...
In recent years, we have been witnessing the diffusion of AI systems based on powerful Machine Learning mo...