Empirical risk minimization offers well-known learning guarantees when training and test data come from the same domain. In the real world, though, we often wish to adapt a classifier from a source domain with a large amount of training data to a different target domain with very little training data. In this work we give uniform convergence bounds for algorithms that minimize a convex combination of source and target empirical risk. The bounds explicitly model the inherent trade-off between training on a large but inaccurate source data set and a small but accurate target training set. Our theory also gives results when we have multiple source domains, each of which may have a different number of instances, and we exhibit cases in which mini...
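The convex-combination objective described in this abstract can be sketched concretely. The following is a minimal illustration, not the paper's algorithm: it assumes a linear logistic model trained by plain gradient descent, and the helper name `convex_combo_erm`, the weight `alpha`, and all hyperparameters are illustrative choices, where `alpha` weights the target empirical risk against the source empirical risk.

```python
import numpy as np

def convex_combo_erm(Xs, ys, Xt, yt, alpha, lr=0.1, steps=500):
    """Minimize  alpha * target risk + (1 - alpha) * source risk
    for a linear logistic model (illustrative sketch, labels in {0, 1})."""
    w = np.zeros(Xs.shape[1])

    def grad(X, y, w):
        # Gradient of the mean logistic loss at w.
        p = 1.0 / (1.0 + np.exp(-X @ w))
        return X.T @ (p - y) / len(y)

    for _ in range(steps):
        # Convex combination of the two empirical-risk gradients.
        g = alpha * grad(Xt, yt, w) + (1 - alpha) * grad(Xs, ys, w)
        w -= lr * g
    return w

# Usage: a large, slightly shifted source set and a small target set.
rng = np.random.default_rng(0)
Xt = rng.normal(size=(30, 2)); yt = (Xt[:, 0] > 0).astype(float)
Xs = rng.normal(size=(200, 2)) + 0.5; ys = (Xs[:, 0] > 0.3).astype(float)
w = convex_combo_erm(Xs, ys, Xt, yt, alpha=0.5)
```

Sweeping `alpha` from 0 (source only) to 1 (target only) traces exactly the trade-off the bounds quantify: a small `alpha` leans on the plentiful but biased source sample, while a large `alpha` trusts the accurate but scarce target sample.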
Transfer learning, or domain adaptation, is concerned with machine learning problems in which traini...
Tight bounds are derived on the risk of models in the ensemble generated by incremental training of ...
We develop minimax optimal risk bounds for the general learning task consisting in predicting as wel...
Unsupervised domain adaptation (UDA) aims to train a target classifier with labeled samples from the...
Many domain adaptation methods are based on learning a projection or a transformation of the source ...
We obtain sharp bounds on the convergence rate of Empirical Risk Minimization performed in a convex ...
Discriminative learning methods for classification perform well when training and test data are draw...
In this paper, we provide a new framework to study the generalization bound of the learning process ...
Traditional supervised classification algorithms fail when unlabeled test data...
Plotting a learner’s average performance against the number of training samples results in a learnin...
Domain adaptation (DA) is an important and emerging field of machine learning ...
We present a new algorithm for domain adaptation improving upon a discrepancy minimization algorithm...
The Domain Adaptation problem in machine learning occurs when the distribution generating the test d...
We address the problem of algorithmic fairness: ensuring that sensitive information does not unfairl...
We present a new algorithm for domain adaptation improving upon the discrepancy minimization algorit...