Brain-inspired event-driven processors execute deep neural networks (DNNs) in a sparsity-aware manner: the more zeros induced in the activation maps, the less computation is performed in the succeeding convolution layer. However, inducing activation sparsity in DNNs remains a challenge. To address this, we propose STAR (Sparse Thresholded Activation under partial-Regularization), a training approach that combines activation regularization with thresholding to overcome the limitations of purely threshold-based or purely regularization-based methods in improving sparsity. More precisely, we apply the sparsity penalty only to near-zero activations, which fits the activation learning behaviour during accuracy recovery, followed by thresholding to f...
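Since the abstract describes the scheme only at a high level, here is a minimal PyTorch sketch of the general idea (a sparsity penalty restricted to near-zero activations plus a hard threshold on the activation map); the function names, the near-zero band eps, the threshold theta, and the penalty weight are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the idea described above; not the authors' implementation.
# The threshold theta, the near-zero band eps, and the penalty weight are
# illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def hard_threshold(x: torch.Tensor, theta: float = 0.05) -> torch.Tensor:
    """Zero out activations below theta so the next layer sees a sparser map."""
    return torch.where(x >= theta, x, torch.zeros_like(x))

def partial_l1_penalty(x: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
    """L1 penalty applied only to near-zero activations (|x| < eps); larger,
    presumably informative activations are left unregularized."""
    near_zero = (x.abs() < eps).float()
    return (x.abs() * near_zero).sum() / x.numel()

# Toy usage: one conv layer trained with the partial penalty; the thresholded
# output is what a sparsity-aware processor would consume.
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
x = torch.randn(2, 3, 16, 16)

pre = F.relu(conv(x))                         # raw activation map
out = hard_threshold(pre, theta=0.05)         # sparse map fed to the next layer
task_loss = out.pow(2).mean()                 # stand-in for the real task loss
loss = task_loss + 1e-3 * partial_l1_penalty(pre, eps=0.1)
loss.backward()
```

Restricting the penalty to the near-zero band pushes already-small activations toward exact zero while leaving large, presumably informative activations untouched, which is the intuition behind combining partial regularization with a final threshold.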
Deep learning is finding its way into the embedded world with applications such as autonomous drivin...
Hardware accelerators for neural network inference can exploit common data properties for performanc...
Neural network training is computationally and memory intensive. Sparse training can reduce the bur...
Brain-inspired event-based processors have attracted considerable attention for edge deployment beca...
In the recent past, real-time video processing using state-of-the-art deep neural networks (DNN) has...
The growing energy and performance costs of deep learning have driven the community to reduce the si...
Deep Neural Networks (DNNs) have emerged as an important class of machine learning algorithms, provi...
While end-to-end training of Deep Neural Networks (DNNs) yields state of the art performance in an i...
Deep neural networks include millions of learnable parameters, making their deployment over resource...
The ever-increasing number of parameters in deep neural networks poses challenges for memory-limited...
Deep convolutional sparse coding (D-CSC) is a framework reminiscent of deep convolutional neural net...
Sparsifying deep neural networks is of paramount interest in many areas, espec...
In deep learning, fine-grained N:M sparsity reduces the data footprint and bandwidth of a General Ma...
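As a rough illustration of the fine-grained N:M pattern mentioned in the last abstract (for example the 2:4 sparsity supported by some GPU tensor cores), the sketch below keeps the N largest-magnitude weights in every group of M consecutive weights; the function name prune_n_m and the tensor shapes are assumptions, not code from the cited work.

```python
# Hedged sketch: enforce an N:M sparsity pattern on a weight tensor by keeping
# the N largest-magnitude entries in each group of M consecutive elements.
import torch

def prune_n_m(weight: torch.Tensor, n: int = 2, m: int = 4) -> torch.Tensor:
    flat = weight.reshape(-1, m)                        # groups of m consecutive weights
    # indices of the (m - n) smallest-magnitude entries in each group
    _, drop_idx = flat.abs().topk(m - n, dim=1, largest=False)
    mask = torch.ones_like(flat)
    mask.scatter_(1, drop_idx, 0.0)                     # zero out the dropped positions
    return (flat * mask).reshape(weight.shape)

w = torch.randn(8, 16)              # last dimension divisible by m
w_sparse = prune_n_m(w, n=2, m=4)   # every group of 4 now has at most 2 nonzeros
assert (w_sparse.reshape(-1, 4) != 0).sum(dim=1).max() <= 2
```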