In this work, we address the question: given a network of observations that are not independent, how can we measure the amount of learning information it contains? In particular, given non-independent networked examples, can we find a number n such that we can learn as much from the networked examples as from a dataset of n independently and identically drawn examples? We answer this question to a large extent by searching for an optimum in the space of all possible ways to weight the examples, and we show that approximate optimization over this space can be performed efficiently.
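To make the weighting idea concrete, the following is a minimal sketch, assuming one natural formulation: each networked example is treated as a hyperedge over the objects it involves, and the effective sample size n is taken to be the optimal value of a linear program that maximizes the total example weight while letting every object contribute at most one unit of information (a fractional matching). The function name `effective_sample_size`, the LP encoding, and the toy network are illustrative assumptions, not necessarily the paper's exact construction.

```python
# Illustrative sketch (not the paper's exact method): compute an
# effective sample size for networked examples by optimizing over
# example weights with an LP (fractional hypergraph matching).
import numpy as np
from scipy.optimize import linprog

def effective_sample_size(examples, num_objects):
    """examples: list of tuples, each tuple lists the object indices
    that one networked example involves."""
    m = len(examples)
    # One constraint per object: the weights of all examples touching
    # that object may sum to at most 1.
    A = np.zeros((num_objects, m))
    for j, objs in enumerate(examples):
        for v in objs:
            A[v, j] = 1.0
    b = np.ones(num_objects)
    # linprog minimizes, so negate the objective to maximize total weight.
    res = linprog(c=-np.ones(m), A_ub=A, b_ub=b,
                  bounds=[(0.0, 1.0)] * m, method="highs")
    return -res.fun, res.x

# Toy network: five examples sharing four objects.
examples = [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)]
n, weights = effective_sample_size(examples, num_objects=4)
print(f"effective sample size n = {n:.2f}")
print("example weights:", np.round(weights, 2))
```

Under this reading, fully independent examples (no shared objects) recover n equal to the number of examples, while heavily overlapping examples drive n down; the LP relaxation is what makes the search over weightings efficient.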