The lottery ticket hypothesis states that a randomly-initialized neural network contains a small subnetwork which, when trained in isolation, can compete with the performance of the original network. Recent theoretical works proved an even stronger version: every sufficiently overparameterized (dense) neural network contains a subnetwork that, even without training, achieves accuracy comparable to that of the trained large network. These works left open the problem of extending the result to convolutional neural networks (CNNs). In this work we provide such a generalization by showing that, with high probability, it is possible to approximate any CNN by pruning a random CNN whose size is larger by a logarithmic factor.
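To make the pruning notion concrete, the following is a minimal numpy sketch of the kind of construction this result refers to: a "subnetwork" of a random CNN is obtained by applying a fixed binary mask to the random weights, with no training of the weights themselves. The kernel size, input size, and 50% keep-rate are illustrative assumptions, not parameters from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # A randomly initialized convolution kernel (one layer of the "large" random CNN).
    kernel = rng.standard_normal((3, 3))

    # A binary pruning mask: True keeps a weight in the subnetwork, False prunes it.
    # The 50% keep-rate here is an arbitrary illustrative choice.
    mask = rng.random((3, 3)) < 0.5

    def conv2d(x, w):
        """Valid 2D cross-correlation of input x with kernel w."""
        kh, kw = w.shape
        oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
        out = np.empty((oh, ow))
        for i in range(oh):
            for j in range(ow):
                out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
        return out

    x = rng.standard_normal((8, 8))
    # The subnetwork's output: random weights, masked, never trained.
    pruned_out = conv2d(x, kernel * mask)

The theoretical result says that for a target CNN there exists such a mask (over a random CNN only logarithmically larger) whose masked output approximates the target's output; finding the mask, not training the weights, is the whole game.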
In artificial neural networks, learning from data is a computationally demanding task in which a lar...
The lottery ticket hypothesis suggests that sparse sub-networks of a given neural network, if initi...
This article proposes an original approach to the performance understanding of...
The Strong Lottery Ticket Hypothesis (SLTH) states that randomly-initialised n...
The Lottery Ticket Hypothesis continues to have a profound practical impact on the quest for small s...
The lottery ticket hypothesis conjectures the existence of sparse subnetworks of large randomly init...
Large neural networks can be pruned to a small fraction of their original size, with little loss in ...
The lottery ticket hypothesis has sparked the rapid development of pruning algorithms that perform s...
In this thesis we explore pattern mining and deep learning. Although the two are often seen as orthogonal, we show that t...
The strong lottery ticket hypothesis holds the promise that pruning randomly initialized deep neural...
The strong lottery ticket hypothesis has highlighted the potential for training deep neural networks...
The recent lottery ticket hypothesis proposes that there is at least one sub-network that matches th...
Lottery tickets (LTs) are able to discover accurate and sparse subnetworks that could be trained in i...
Accurate neural networks can be found just by pruning a randomly initialized overparameterized model...
Yes. In this paper, we investigate strong lottery tickets in generative models, the subnetworks that...