A common issue in graph learning under the semi-supervised setting is referred to as gradient scarcity. That is, learning graphs by minimizing a loss on a subset of nodes causes edges between unlabelled nodes that are far from labelled ones to receive zero gradients. The phenomenon was first described when optimizing the graph and the weights of a Graph Neural Network (GNN) with a joint optimization algorithm. In this work, we give a precise mathematical characterization of this phenomenon, and prove that it also emerges in bilevel optimization, where additional dependency exists between the parameters of the problem. While for GNNs gradient scarcity occurs due to their finite receptive field, we show that it also occurs with the Laplacian...
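To make the receptive-field argument concrete, here is a minimal PyTorch sketch; it is illustrative, not taken from the paper, and the path graph, the two-layer weighted-averaging propagation, and the single labelled node are all assumptions. Because a two-layer GNN's output at a node depends only on edges within two hops of it, the chain rule yields an exactly zero gradient for every edge farther than two hops from the labelled node.

```python
import torch

# Path graph on n nodes; edge i joins nodes i and i+1 and carries a learnable weight.
n = 8
w = torch.rand(n - 1, requires_grad=True)  # learnable edge weights
x = torch.randn(n, 1)                      # node features

def propagate(h, w):
    # One GNN-style layer: each node adds its weighted neighbours' features.
    from_left = torch.cat([torch.zeros(1, 1), w.unsqueeze(1) * h[:-1]])
    from_right = torch.cat([w.unsqueeze(1) * h[1:], torch.zeros(1, 1)])
    return h + from_left + from_right

h = x
for _ in range(2):  # two layers -> a 2-hop receptive field
    h = propagate(h, w)

# Supervised loss on node 0 only, i.e. the single "labelled" node.
loss = (h[0, 0] - 1.0) ** 2
loss.backward()

# Edges (0,1) and (1,2) lie inside the receptive field of node 0;
# every other edge receives an exactly zero gradient.
print(w.grad)
```

Running this prints nonzero gradients only for the first two entries of `w.grad`, which is the gradient scarcity pattern the abstract describes: edges far from labelled nodes never receive a learning signal.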
In recent years, deep neural networks (DNNs) have seen an important rise in p...
A lot of theoretical and empirical evidence shows that flatter local minima tend to improve gene...
Supervised learning over graphs is an intrinsically difficult problem: simultaneous learning of rele...
Graph Neural Networks (GNNs) have been studied through the lens of expressive ...
We present a novel regularization approach to train neural networks that enjoys better generalizatio...
We present Gradient Gating (G2), a novel framework for improving the performance of Graph Neural Net...
In many graph-based semi-supervised learning algorithms, edge weights are assumed to be fixed and de...
Structures or graphs are pervasive in our lives. Although deep learning has achieved tremendous succ...
Many interesting problems in machine learning are being revisited with new deep learning tools. For ...
The graph neural network (GNN) has demonstrated its superior performance in various applications. Th...
The first provably efficient algorithm for learning graph neural networks (GNNs) with one hidden lay...
This review examines gradient-based techniques to solve bilevel optimization problems. Bilevel optim...
Graph-based learning algorithms including label propagation and spectral clustering are known as the...
We study the effect of regularization in an on-line gradient-descent learning scenario for a general...
Graph Neural Networks (GNNs) have succeeded in various computer science applic...