As key building blocks of digital signal processing, image processing, and deep learning, adders, multi-operand adders, and multiply-accumulate (MAC) units have drawn considerable attention recently. Two popular ways to improve the performance and energy efficiency of arithmetic logic units (ALUs) are approximate computing and precision-scalable design. Approximate computing achieves better performance or energy efficiency by trading off accuracy. Precision-scalable design provides the capability to allocate just enough hardware resources to meet application requirements. In this thesis, we first present a correlation-aware predictor (CAP) based approximate adder, which utilizes the spatial-temporal correlation information of input streams to predi...
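Since carry prediction in approximate adders is central to this thesis and to several of the works below, a small sketch may help make the idea concrete. The following is a minimal, hedged illustration of block-based addition with speculative carries, assuming a generic lookahead-window scheme; it is not the CAP adder itself, and the names `approx_add`, `block_width`, and `lookahead` are invented for illustration.

```python
# Minimal sketch of speculative-carry approximate addition -- the
# general technique behind carry-prediction approximate adders.
# NOT the thesis's CAP design; parameter names are illustrative.

def approx_add(a: int, b: int, width: int = 16,
               block_width: int = 4, lookahead: int = 4) -> int:
    """Add two `width`-bit unsigned ints. Each block's carry-in is
    predicted from only `lookahead` lower bits instead of the full
    carry chain, shortening the critical path at the cost of
    occasional errors on long carry-propagation patterns."""
    mask = (1 << width) - 1
    a, b = a & mask, b & mask
    result = 0
    for lo in range(0, width, block_width):
        # Speculate the carry into this block from a limited window
        # of lower-order bits (an exact adder would use all of them).
        win_lo = max(0, lo - lookahead)
        win_bits = lo - win_lo
        if win_bits == 0:
            carry_in = 0
        else:
            wa = (a >> win_lo) & ((1 << win_bits) - 1)
            wb = (b >> win_lo) & ((1 << win_bits) - 1)
            carry_in = (wa + wb) >> win_bits  # predicted carry
        # Each block adds exactly, using the speculated carry-in.
        ba = (a >> lo) & ((1 << block_width) - 1)
        bb = (b >> lo) & ((1 << block_width) - 1)
        result |= ((ba + bb + carry_in) & ((1 << block_width) - 1)) << lo
    return result

if __name__ == "__main__":
    # A long carry chain exposes the approximation error: exact
    # 0x00FF + 0x0001 = 0x0100, but the speculative adder misses
    # the carry that must ripple past the lookahead window.
    print(hex(approx_add(0x00FF, 0x0001)))   # -> 0x0
    print(hex((0x00FF + 0x0001) & 0xFFFF))   # -> 0x100
```

The trade-off is visible directly: shrinking `lookahead` shortens the effective carry chain (and thus delay and energy) but raises the chance that a long carry ripple is missed.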
Many error-resilient applications can be approximated using multi-layer perceptrons (MLPs) with insi...
Multimedia and image processing applications may tolerate errors in calculations but still generate...
The current trend for deep learning has come with an enormous computational need for billions of Mul...
In the last decade, the need for efficiency in computing has motivated the emergence of new devic...
The need to support various machine learning (ML) algorithms on energy-constrained computing devices...
This paper presents a delay- and energy-efficient approximate adder design exploiting an effective c...
Machine learning requires an enormous amount of mathematical computation per second. Several archite...
An application that can produce a useful result despite some level of computational error is said to...
Approximate computing is a popular field where accuracy is traded for energy. It can benefit applic...
Approximate arithmetic is a new design paradigm being used in many applications which are to...
This paper proposes a novel approximate adder that exploits an error-reduced carry prediction and co...
A number of recent research efforts focus on designing accelerators for popular deep learning algorithms. ...
Approximate computing is a promising approach for reducing power consumption and design complexity i...
In recent years there has been a growing interest in hardware neural networks, which express many be...
In this paper, we present a novel approximate computing scheme suitable for realizing the energy-eff...