SRAM-based in-memory Binary Neural Network (BNN) accelerators are garnering interest as a platform for energy-efficient edge neural network computing, thanks to their compactness in both hardware and neural network parameter size. However, previous works had to modify SRAM cells to support XNOR operations on the memory array, resulting in limited area and energy efficiency. In this work, we present a conversion method that replaces the signed inputs (+1/-1) of a BNN with unsigned inputs (1/0), and vice versa, without computation error. The method enables BNN computing on conventional 6T SRAM arrays and improves area and energy efficiency. We also demonstrate that further energy saving is possible by skewing the distribution of binary i...
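The abstract does not spell out the conversion, but a standard identity consistent with its claim maps a signed input x in {+1, -1} to an unsigned input a in {1, 0} via x = 2a - 1. The minimal NumPy sketch below (variable names are ours, not the paper's) checks that a signed binary dot product is recovered exactly from an unsigned-input accumulation plus a weight-only constant, which is what would let the accumulation run on an unmodified 6T SRAM read path:

```python
import numpy as np

# Illustrative identity (our assumption, not necessarily the paper's exact method):
#   x = 2a - 1  with x in {-1,+1}, a in {0,1}
# so for binary weights w:
#   w . x = w . (2a - 1) = 2 (w . a) - sum(w)
rng = np.random.default_rng(0)
n = 1024
w = rng.choice([-1, 1], size=n)       # binary weights stored in the array
x = rng.choice([-1, 1], size=n)       # signed binary inputs (+1/-1)
a = (x + 1) // 2                      # equivalent unsigned inputs (1/0)

signed_dot = w @ x                    # XNOR-style signed accumulation
unsigned_dot = 2 * (w @ a) - w.sum()  # unsigned accumulation + constant

assert signed_dot == unsigned_dot     # exact: no computation error
```

Since the correction term sum(w) depends only on the stored weights, it can presumably be precomputed offline and folded into each neuron's threshold, so the conversion adds no per-input cost at inference time.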
The need for running complex Machine Learning (ML) algorithms, such as Convolutional Neural Networks...
The proliferation of embedded Neural Processing Units (NPUs) is enabling the adoption of Tiny Machin...
Accelerating the inference of Convolution Neural Networks (CNNs) on edge devices is essential due to...
As Binary Neural Networks (BNNs) started to show promising performance with limited memory and compu...
Binary neural networks (BNNs) are promising to deliver accuracy comparable to conventional deep neur...
While Deep Neural Networks (DNNs) have shown cutting-edge performance on various applications,...
Magnetic RAM (MRAM)-based crossbar array has a great potential as a platform for in-memory binary ne...
Different in-memory computing paradigms enabled by emerging non-volatile memory technologies are pro...
Deploying state-of-the-art CNNs requires power-hungry processors and off-chip memory. This precludes...
Many advanced neural network inference engines are bounded by the available memory bandwidth. The co...
Implementing binary neural networks (BNNs) on computing-in-memory (CIM) hardware has several attract...
The deployment of Edge AI requires energy-efficient hardware with a minimal me...
The need for running complex Machine Learning (ML) algorithms, such as Convolutional Neural Networks...
We present an 8-transistor and 2-capacitor (8T2C) SRAM cell-based in-memory hardware for Binary Neur...