This paper presents Lane Compression, a lightweight lossless compression technique for machine learning, based on a detailed study of the statistical properties of machine learning data. The proposed technique profiles machine learning data gathered ahead of run-time and partitions values bit-wise into different lanes with more distinctive statistical characteristics. The most appropriate compression technique is then chosen for each lane from a small set of low-cost compression techniques. Lane Compression’s compute and memory requirements are very low, yet it achieves a compression rate comparable to or better than Huffman coding. We evaluate and analyse Lane Compression on a wide range of machine learning networks for bo...
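To make the mechanism described in the abstract concrete, the following is a minimal sketch, assuming 8-bit values split into two 4-bit lanes and only two toy per-lane coders (a raw coder and a crude zero-suppression coder). The lane widths, the candidate coders, the profiling statistic and the selection rule here are illustrative assumptions, not the actual Lane Compression design.

```python
# Hypothetical sketch of the idea: split fixed-width values bit-wise into
# lanes, profile each lane offline, and pick a cheap per-lane coder.
from collections import Counter
import math

LANE_WIDTHS = [4, 4]  # assumption: split 8-bit values into two 4-bit lanes


def split_into_lanes(values, lane_widths=LANE_WIDTHS):
    """Partition each fixed-width integer bit-wise into lanes (LSB first)."""
    lanes = [[] for _ in lane_widths]
    for v in values:
        shift = 0
        for i, w in enumerate(lane_widths):
            lanes[i].append((v >> shift) & ((1 << w) - 1))
            shift += w
    return lanes


def entropy_bits(symbols):
    """Shannon entropy in bits per symbol, used here as a profiling proxy."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())


def choose_coder(lane, width):
    """Pick the smaller of two toy coders for this lane from profiled data.

    'raw'  : store every symbol at full lane width.
    'zero' : 1 flag bit per symbol, plus `width` bits for non-zero symbols
             (a crude zero-suppression scheme, assumed for illustration).
    """
    raw_bits = width * len(lane)
    zero_bits = len(lane) + width * sum(1 for s in lane if s != 0)
    return ("zero", zero_bits) if zero_bits < raw_bits else ("raw", raw_bits)


if __name__ == "__main__":
    # Example: mostly-small 8-bit activations; the low lane carries most of
    # the entropy, while the high lane is mostly zero and compresses well.
    data = [3, 0, 1, 130, 2, 0, 0, 5, 7, 0, 1, 0]
    for i, lane in enumerate(split_into_lanes(data)):
        coder, bits = choose_coder(lane, LANE_WIDTHS[i])
        print(f"lane {i}: entropy={entropy_bits(lane):.2f} b/sym, "
              f"coder={coder}, size={bits} bits")
```

As the abstract notes, the per-lane choice is made from data profiled ahead of run-time, so in the real technique the selection cost is paid offline rather than during inference.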
Parallel hardware accelerators, for example Graphics Processing Units, have limited on-chip memory ca...
Over the past decade, machine learning (ML) with deep neural networks (DNNs) has become ext...
In recent years, Deep Neural Networks (DNNs) have become an area of high interest due to its ground...
In the wake of the success of convolutional neural networks in image classification, object recognit...
In this thesis we seek to make advances towards the goal of effective learned compression. This enta...
Acceleration of machine learning models is proving to be an important application for FPGAs. Unfortu...
After the tremendous success of convolutional neural networks in image classification, object detect...
This thesis investigates how to improve the performance of lossless data compression hardware as a t...
In recent years, there has been an exponential rise in the quantity of data being acquired and gener...
The recent advances in deep neural networks (DNNs) make them attractive for embedded systems. Howeve...
Although neural network quantization is an essential technique for the computation and memory effi...
Real-time transfer of data streams enables many data analytics and machine learning applications ...
The recent advances in deep neural networks (DNNs) make them attractive for embedded systems. Howeve...
Part 4: Distributed AI for Resource-Constrained Platforms (DARE 2021) Workshop. International audien...
Data and Results associated with journal article: "Hardware-Efficient Compression of Neural Multi-Un...