Transfer learning is a deep-learning technique that mitigates the difficulty of learning when human-annotated labels are expensive and limited. In place of such labels, it uses the previously trained weights of a well-chosen source model as the initial weights for training a model on a new target dataset. We demonstrate a novel but general technique for automatically creating such source models. We generate pseudo-labels with an efficient and extensible algorithm based on a classical result from the geometry of high dimensions, the Cayley-Menger determinant. This G2L ("geometry to label") method incrementally builds up pseudo-labels using a greedy computation of hypervolume content. We demonstrate tha...
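Since the abstract only sketches the algorithm, the following is a minimal illustrative sketch, not the authors' implementation: it computes the squared content (hypervolume) of a simplex from the Cayley-Menger determinant and wraps it in a greedy loop that grows a maximal-content simplex, one plausible reading of "incrementally builds up pseudo-labels using a greedy computation of hypervolume content". The function names (cm_content_sq, greedy_simplex), the seeding rule, and the greedy criterion are assumptions.

from math import factorial

import numpy as np

def cm_content_sq(points):
    """Squared k-dimensional content of the simplex on the k+1 rows of
    `points`, via the Cayley-Menger determinant."""
    k = len(points) - 1
    diff = points[:, None, :] - points[None, :, :]
    d2 = (diff * diff).sum(axis=-1)      # pairwise squared distances
    m = np.ones((k + 2, k + 2))          # bordered distance matrix
    m[0, 0] = 0.0
    m[1:, 1:] = d2
    coeff = (-1) ** (k + 1) / (2 ** k * factorial(k) ** 2)
    return coeff * np.linalg.det(m)

def greedy_simplex(x, n_vertices):
    """Greedily grow a simplex over the rows of `x`: at each step, add the
    point that maximizes the content of the simplex formed with the
    vertices chosen so far (an assumed stand-in for the paper's rule)."""
    # Seed with the point farthest from the centroid.
    chosen = [int(np.argmax(np.linalg.norm(x - x.mean(0), axis=1)))]
    while len(chosen) < n_vertices:
        candidates = [i for i in range(len(x)) if i not in chosen]
        vols = [cm_content_sq(x[chosen + [i]]) for i in candidates]
        chosen.append(candidates[int(np.argmax(vols))])
    return chosen

# Example: pick 5 mutually spread-out anchor points in 8-D feature space,
# e.g. as seeds for pseudo-label groups.
anchors = greedy_simplex(np.random.rand(100, 8), 5)

Note that for d-dimensional features the content degenerates to zero once a simplex has more than d + 1 vertices, so any practical variant would have to restart, project, or switch criteria before that point; how G2L handles this is not specified in the text above.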
Training with the true labels of a dataset as opposed to randomized labels leads to faster optimizat...
The following contains the four datasets described in the paper: Interpretable Geometric Deep Learni...
Transfer learning can significantly improve the sample efficiency of neural networks, by exploiting ...
Self-supervised learning (SSL) has emerged as a desirable paradigm in computer vision due to the ina...
In this paper, we propose a novel transductive pseudo-labeling based method for deep semi-supervised...
Recent advances in deep learning have relied on large, labelled datasets to train high-capacity mode...
A major impediment to the application of deep learning to real-world problems is the scarcity of lab...
This paper describes a method of domain adaptive training for semantic segmentation using multiple s...
Accepted to CVPR 2019. Semi-supervised learning is becoming increasingly important because it can comb...
Deep neural networks are susceptible to label noise. Existing methods to improve robustness, such as...
Supervised learning datasets often have privileged information, in the form of features which are av...
Pseudo Labeling is a technique used to improve the performance of semi-supervised Graph Neural Netwo...
An important goal for the generative and developmental systems (GDS) community is to show that GDS a...
Multi-label image recognition with partial labels (MLR-PL), in which some labels are known while oth...
Collecting and labeling the registered 3D point cloud is costly. As a result, 3D resources for train...