As training deep learning models on large datasets takes substantial time and resources, it is desirable to construct a small synthetic dataset on which models can be trained to comparable effect. Recent works have explored condensing image datasets through complex bi-level optimization. For instance, dataset condensation (DC) matches network gradients w.r.t. the large real data and the small synthetic data, where the network weights are optimized for multiple steps at each outer iteration. However, existing approaches have inherent limitations: (1) they are not directly applicable to graphs, where the data is discrete; and (2) the condensation process is computationally expensive due to the involved nested optimization.
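To make the gradient-matching idea concrete, below is a minimal sketch of DC-style condensation in PyTorch. It is an illustration under simplifying assumptions, not the authors' implementation: TinyNet, grad_vector, the random stand-in data, and all loop counts and learning rates are hypothetical choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

class TinyNet(nn.Module):
    """Toy two-layer MLP standing in for the networks used in condensation."""
    def __init__(self, dim=32, classes=10):
        super().__init__()
        self.fc1 = nn.Linear(dim, 64)
        self.fc2 = nn.Linear(64, classes)

    def forward(self, x):
        return self.fc2(F.relu(self.fc1(x)))

dim, classes = 32, 10
# Random stand-ins for the large real dataset (hypothetical data).
real_x = torch.randn(512, dim)
real_y = torch.randint(0, classes, (512,))
# Learnable synthetic set: one sample per class, labels fixed.
syn_x = torch.randn(classes, dim, requires_grad=True)
syn_y = torch.arange(classes)

opt_syn = torch.optim.SGD([syn_x], lr=0.1)

def grad_vector(model, x, y, create_graph=False):
    """Flattened gradient of the classification loss w.r.t. the weights."""
    loss = F.cross_entropy(model(x), y)
    grads = torch.autograd.grad(loss, list(model.parameters()),
                                create_graph=create_graph)
    return torch.cat([g.flatten() for g in grads])

for outer in range(10):        # outer loop: fresh random network weights
    model = TinyNet(dim, classes)
    opt_net = torch.optim.SGD(model.parameters(), lr=0.01)
    for inner in range(5):     # inner loop: several weight updates per outer step
        g_real = grad_vector(model, real_x, real_y)
        g_syn = grad_vector(model, syn_x, syn_y, create_graph=True)
        # Update the synthetic data so its gradient matches the real one.
        match_loss = 1 - F.cosine_similarity(g_real, g_syn, dim=0)
        opt_syn.zero_grad()
        match_loss.backward()
        opt_syn.step()
        # Advance the network weights on the current synthetic data.
        opt_net.zero_grad()
        F.cross_entropy(model(syn_x.detach()), syn_y).backward()
        opt_net.step()
    print(f"outer {outer}: matching loss {match_loss.item():.4f}")
```

Published condensation methods typically compute the matching loss per class and per layer and average over many network initializations; the sketch collapses this into a single cosine distance between flattened gradients for brevity.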
Dataset distillation methods aim to compress a large dataset into a small set of synthetic samples, ...
The problem of predicting connections between a set of data points finds many ...
A common issue in graph learning under the semi-supervised setting is referred to as gradient scarcity...
Given the prevalence of large-scale graphs in real-world applications, the storage and time for training...
Recent studies have demonstrated that gradient matching-based dataset synthesis, or dataset condensation...
In many real-world problems (such as industrial applications, chemistry models, social network analysis...
Training on large-scale graphs has achieved remarkable results in graph representation learning, but...
The problem of graph matching under node and pairwise constraints is fundamental in areas as diverse...
Dataset condensation aims to condense a large dataset with many training samples into a small set...
Structures, or graphs, are pervasive in our lives. Although deep learning has achieved tremendous success...
This work has two main objectives, namely, to introduce a novel algorithm, called the Fast ...
The increasing amount of graph data places requirements on the efficiency and scalability of graph neural networks...
In recent years, deep neural networks (DNNs) have seen a significant rise in popularity...
Graph Convolutional Networks (GCNs) play a crucial role in graph learning tasks; however, learning g...
The undeniable computational power of artificial neural networks has granted the scientific community...