The objective of the proposed research is to optimize computing-in-memory (CIM) design for accelerating Deep Neural Network (DNN) algorithms. As peripheral circuits such as analog-to-digital converters (ADCs) introduce significant overhead in CIM inference designs, the research first focuses on circuit optimization for inference acceleration and proposes a resistive random access memory (RRAM) based ADC-free in-memory compute scheme. We comprehensively explore the trade-offs among different types of ADCs and investigate a new ADC design especially suited for CIM, which performs analog shift-add across multiple weight significance bits, improving throughput and energy efficiency under similar area constraints. Furthermore, we p...
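The shift-add over weight significance bits can be illustrated with a small numerical sketch (Python/NumPy; the names `bit_slice` and `cim_dot_shift_add` are illustrative, and the shift-add is performed digitally here for clarity, whereas the proposed scheme performs it in the analog domain):

```python
import numpy as np

def bit_slice(weights, n_bits=4):
    """Decompose unsigned integer weights into per-bit binary planes,
    as if each significance bit were stored in its own CIM column."""
    return [((weights >> b) & 1) for b in range(n_bits)]

def cim_dot_shift_add(inputs, weights, n_bits=4):
    """Dot product computed plane-by-plane, then combined by shift-add.
    Each plane's partial sum models one column's analog MAC result."""
    acc = 0
    for b, plane in enumerate(bit_slice(weights, n_bits)):
        partial = int(np.dot(inputs, plane))  # one bit-plane's MAC result
        acc += partial << b                   # weight the partial by 2^b
    return acc

# Sanity check against a direct dot product.
rng = np.random.default_rng(0)
x = rng.integers(0, 4, size=8)
w = rng.integers(0, 16, size=8)  # 4-bit unsigned weights
assert cim_dot_shift_add(x, w) == int(np.dot(x, w))
```

The sketch shows why the shift-add recovers the full-precision result: summing each bit-plane's partial product scaled by its significance 2^b reconstructs the original multi-bit dot product exactly.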
This work discusses memory-immersed collaborative digitization among compute-in-memory (CiM) arrays ...
Machine learning technology has achieved many remarkable results in recent years. It ...
In-Memory Acceleration (IMA) promises major efficiency improvements in deep neural network (DNN) inf...
The unprecedented growth in Deep Neural Network (DNN) model size has resulted in a massive amount...
The objective of this research is to accelerate deep neural networks (DNNs) with emerging non-volati...
New computing applications, e.g., deep neural network (DNN) training and inference, have been a driv...
Convolutional neural networks (CNNs) play a key role in deep learning applications. However, the hig...
With the increase in computational parallelism and low-power integrated circuit (IC) design, neuro...
Compute-in-memory (CIM) is an attractive solution to process the extensive workloads of multiply-and...
Compute-In-Memory (CIM) is a promising solution for accelerating DNNs at edge devices, utilizing mix...
Recently, analog compute-in-memory (CIM) architectures based on emerging analog non-volatile memory ...
As AI applications become more prevalent and powerful, the performance of deep learning neural netwo...
Compute in-memory (CIM) is a promising technique that minimizes data transport...
For decades, innovations to surmount the processor versus memory gap and move beyond conventional vo...
Deep neural networks (DNNs) have achieved unprecedented capabilities in tasks such as analysis and r...