This paper studies model transferability when human decision subjects respond to a deployed machine learning model. In our setting, an agent or a user corresponds to a sample $(X,Y)$ drawn from a distribution $\mathcal{D}$ and will face a model $h$ and its classification result $h(X)$. Agents can modify $X$ to adapt to $h$, which will incur a distribution shift on $(X,Y)$. Therefore, when training $h$, the learner will need to consider the subsequently ``induced'' distribution when the output model is deployed. Our formulation is motivated by applications where the deployed machine learning models interact with human agents, and will ultimately face responsive and interactive data distributions. We formalize the discussions of the transfera...
Domain adaptation is the supervised learning setting in which the training and test data ar...
Discriminative learning methods for classification perform well when training and test data are draw...
Many instances of algorithmic bias are caused by distributional shifts. For example, machine learnin...
When deployed in the real world, machine learning models inevitably encounter changes in the data di...
When the test distribution differs from the training distribution, machine learning models can perfo...
The Domain Adaptation problem in machine learning occurs when the distribution generating the test d...
All machine learning algorithms that correspond to supervised and semi-supervi...
Abstract. The supervised learning paradigm assumes in general that both training and test data are s...
A common use case of machine learning in real world settings is to learn a model from historical dat...
In many practical applications data used for training a machine learning model and the deployment da...
Artificial intelligence, and in particular machine learning, is concerned with teaching computer sys...
Machine-learned components, particularly those trained using deep learning methods, are becoming int...
For example, in machine translation tasks, to achieve bidirectional translation between two language...
Domain adaptation (DA) arises as an important problem in statistical machine learning when the sourc...
Recent interest in the external validity of prediction models (i.e., the problem of different train ...