We study the impact of different pruning techniques on the representations learned by deep neural networks trained with contrastive loss functions. We find that at high sparsity levels, contrastive learning results in a higher number of misclassified examples relative to models trained with traditional cross-entropy loss. To understand this pronounced difference, we use metrics such as the number of PIEs (Hooker et al., 2019), Q-Score (Kalibhat et al., 2022), and Prediction Depth (Baldock et al., 2021) to measure the impact of pruning on the quality of the learned representation. Our analysis suggests that the schedule at which pruning is applied during training matters. We find that the negative impact of sparsity on the quality of the learned representation...
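For context, PIEs (Pruning Identified Exemplars, Hooker et al., 2019) are examples on which the modal prediction of a population of pruned models disagrees with that of the corresponding dense models. The snippet below is a minimal sketch of that definition only, not the paper's exact evaluation pipeline; the array shapes and the helper name are illustrative assumptions.

```python
import numpy as np

def pruning_identified_exemplars(dense_preds, pruned_preds):
    """Return a boolean mask of PIEs.

    dense_preds, pruned_preds: int arrays of shape (n_models, n_examples)
    holding predicted class indices from independently trained models.
    An example is a PIE when the modal (most frequent) prediction of the
    pruned population differs from that of the dense population.
    """
    def modal_class(preds):
        # Column-wise mode: most frequent predicted class per example.
        return np.array([np.bincount(col).argmax() for col in preds.T])

    dense_mode = modal_class(np.asarray(dense_preds))
    pruned_mode = modal_class(np.asarray(pruned_preds))
    return dense_mode != pruned_mode

# Toy usage: 5 dense and 5 pruned models, 100 test examples, 10 classes.
rng = np.random.default_rng(0)
dense = rng.integers(0, 10, size=(5, 100))
pruned = rng.integers(0, 10, size=(5, 100))
print("PIE count:", pruning_identified_exemplars(dense, pruned).sum())
```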