We propose a DRAM-based computation-in-memory (CIM) architecture for binary neural networks, in which a novel charge-sharing circuit enables all logic operations and accumulation to be performed inside the sub-array at a very small area overhead (1.22%). In particular, the in-DRAM accumulation significantly reduces off-chip DRAM accesses. Our experiments show that, on the VGG-9 model for CIFAR-10, our proposed method, realized on DDR4 DRAM, achieves 2.56 times lower latency per image and 19.57 times lower energy consumption in off-chip data transfer than the existing methods, modified Ambit and DRISA, at a very small accuracy loss (0.23%).
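In a binary neural network, each binarized dot product reduces to a bitwise XNOR followed by a population count, which is the kind of operation that in-DRAM bulk logic and accumulation are built to evaluate. The following is a minimal NumPy sketch of that XNOR-popcount reduction only; it does not model the charge-sharing circuit or the sub-array mapping, and the function name binary_dot and the 0/1 bit encoding are illustrative assumptions, not taken from the paper.

import numpy as np

def binary_dot(a_bits, w_bits):
    # Binary dot product via XNOR + popcount.
    # a_bits, w_bits: 1-D uint8 arrays of 0/1, where bit b encodes the
    # value 2*b - 1 in {-1, +1}. The result equals the integer dot
    # product of the corresponding +/-1 vectors.
    # (An in-DRAM design would evaluate the XNOR and the accumulation
    # inside the sub-array; here both are ordinary NumPy operations.)
    n = a_bits.size
    matches = np.count_nonzero(~(a_bits ^ w_bits) & 1)  # popcount of XNOR
    return 2 * matches - n  # matches minus mismatches

# Usage: check against the plain +/-1 integer dot product.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=64, dtype=np.uint8)
w = rng.integers(0, 2, size=64, dtype=np.uint8)
assert binary_dot(a, w) == int(np.dot(2 * a.astype(int) - 1, 2 * w.astype(int) - 1))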
In-memory computing (IMC) has emerged as a promising technique for enhancing energy-efficiency of de...
Convolutional neural networks (CNN) have proven very effective in a variety of...
In this work, we present a novel 8T static random access memory (SRAM)-based compute-in-memory (CIM)...
Many advanced neural network inference engines are bounded by the available memory bandwidth. The co...
We present an 8-transistor and 2-capacitor (8T2C) SRAM cell-based in-memory hardware for Binary Neur...
In this paper, we explore potentials of leveraging spin-based in-memory computing platform as an acc...
As Binary Neural Networks (BNNs) started to show promising performance with limited memory and compu...
In recent years, neural network accelerators have been shown to achieve both high energy efficiency ...
The unprecedented growth in Deep Neural Network (DNN) model size has resulted in a massive amount...
With the increase in computational parallelism and low-power Integrated Circuit (IC) design, neuro...
While Deep Neural Networks (DNNs) have shown cutting-edge performance on various applications,...
As AI applications become more prevalent and powerful, the performance of deep learning neural netwo...
New computing applications, e.g., deep neural network (DNN) training and inference, have been a driv...
Compute in-memory (CIM) is a promising technique that minimizes data transport...
The proliferation of embedded Neural Processing Units (NPUs) is enabling the adoption of Tiny Machin...