Regardless of whether the chosen figure of merit is execution time, throughput, battery life for an embedded system, or total cost of ownership for a datacenter, today’s computers are fundamentally limited by their energy efficiency. Using specialized hardware-software solutions for particular applications or domains is a well-known approach to increasing the energy efficiency of computing systems. Reconfigurable logic in the form of Field-Programmable Gate Arrays (FPGAs) is a particularly promising substrate for hardware specialization, owing to its runtime reconfigurability, massively parallel compute fabric, and widespread availability. However, mapping computation to reconfigurable logic in a way that provides performance and efficiency benefits ...
Over the past decade, machine learning (ML) with deep neural networks (DNNs) has become ext...
This thesis introduces novel frameworks for automated customization of two classes of machine learni...
High computational complexity and a large memory footprint hinder the adoption of convolutional neural n...
Recent years have witnessed a tremendous surge of interest in accelerating sparse linear algebra app...
Research has shown that deep neural networks contain significant redundancy, and thus that high clas...
Sparse linear algebra arises in a wide variety of computational disciplines, including medical imagi...
Convolutional Neural Network (CNN) inference has gained a significant amount of traction for perform...
Research has shown that deep neural networks contain significant redundancy, and that high classific...
Deep Neural Networks (DNNs) have reached outstanding accuracy in recent years, often going beyon...
Deep Learning (DL) has become best-in-class for numerous applications but at a high computational co...
Over the last ten years, the rise of deep learning has redefined the state-of-the-art in many comput...
With the rapid proliferation of computing systems and the internet, the amount of data generated has...
The recent “Cambrian explosion” of Deep Learning (DL) algorithms in concert with the end of Moore’s ...
Deep neural networks (DNNs) have achieved remarkable success in many applications because of their powerf...
This dissertation presents an architecture to accelerate sparse matrix linear algebra, which is among...