Embedding intelligence in extreme-edge devices allows distilling raw sensor data into actionable information directly on IoT end-nodes. This computing paradigm, in which end-nodes no longer depend entirely on the Cloud, offers undeniable benefits and drives a large research area (TinyML) aimed at deploying leading Machine Learning (ML) algorithms on microcontroller-class devices. To fit the limited memory capacity of these tiny platforms, full-precision Deep Neural Networks (DNNs) are compressed into Quantized Neural Networks (QNNs) by representing their data in byte and sub-byte integer formats. However, the current generation of microcontroller systems can barely cope with the computing requirements of QNNs. This thesis tackles the ...
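The byte and sub-byte integer compression mentioned above can be illustrated with a minimal symmetric linear quantization sketch. This is a generic illustration of the technique, not the specific scheme used in the thesis; the helper names (`quantize_symmetric`, `dequantize`) are hypothetical.

```python
def quantize_symmetric(values, bits=8):
    """Map real values to signed integers of the given bit width
    using a symmetric linear scale, a common QNN quantization scheme.
    Returns the integer codes and the scale needed to recover reals."""
    qmax = 2 ** (bits - 1) - 1                  # e.g. 127 for int8, 7 for int4
    scale = max(abs(v) for v in values) / qmax or 1.0
    # Round to nearest integer and clip to the representable range.
    q = [max(-qmax - 1, min(qmax, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate real values from integer codes."""
    return [x * scale for x in q]

# A float32 weight tensor (4 bytes/value) becomes int8 codes (1 byte/value):
weights = [0.5, -1.27, 0.0, 1.0]
codes, scale = quantize_symmetric(weights, bits=8)
approx = dequantize(codes, scale)
```

Lowering `bits` to 4 or 2 shrinks storage further at the cost of coarser rounding, which is the accuracy/memory trade-off that sub-byte QNN deployment navigates.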
Deep Neural Network (DNN) inference based on quantized narrow-precision integer data represents a pr...
With the emergence of the Internet of Things (IoT), devices are generating massive amounts of data. ...
Low bit-width Quantized Neural Networks (QNNs) enable deployment of complex machine learning models ...
The deployment of Quantized Neural Networks (QNN) on advanced microcontrollers requires optimized so...
Heavily quantized fixed-point arithmetic is becoming a common approach to deploy Convolutional Neura...
We present PULP-NN, an optimized computing library for a parallel ultra-low-power tightly coupled cl...
We present PULP-NN, a multicore computing library for a parallel ultra-low-power cluster of RISC-V b...
Machine Learning (ML) functions are becoming ubiquitous in latency- and privacy-sensitive IoT applic...
Advances in high-performance computer architecture design have been a major driver for the rapid evo...
Deep Neural Networks (DNNs) computation-hungry algorithms demand hardware platforms capable of meeti...
On-chip DNN inference and training at the Extreme-Edge (TinyML) impose strict latency, throughput, a...
In-Memory Acceleration (IMA) promises major efficiency improvements in deep neural network (DNN) inf...
Machine Learning (ML) functions are becoming ubiquitous in latency- and privacy-sensitive IoT applic...
Recent success of machine learning in a broad spectrum of fields has awakened a new era of artificia...
Emerging systems for artificial intelligence (AI) are expected to rely on deep neural networks (DNNs...