Despite the increasing popularity of deep neural networks (DNNs), they cannot be trained efficiently on existing platforms, and efforts have thus been devoted to designing dedicated hardware for DNNs. In our recent work, we provided direct support for the stochastic gradient descent (SGD) training algorithm by constructing the basic element of neural networks, the synapse, using emerging technologies, namely memristors. Because plain SGD converges slowly, more advanced optimization algorithms are commonly employed in DNN training. Therefore, DNN accelerators that only support SGD might not meet DNN training requirements. In this paper, we present a memristor-based synapse that supports the commonly used momentum algorithm. Momentum significantly i...
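The momentum algorithm referenced above is the standard heavy-ball variant of SGD, which accumulates a velocity term so that consistent gradient directions are reinforced across steps. A minimal sketch (the function name, learning rate, and momentum coefficient here are illustrative, not taken from the paper):

```python
import numpy as np

def momentum_step(w, v, grad, lr=0.01, mu=0.9):
    """One classical momentum (heavy-ball) SGD update:
    v <- mu * v - lr * grad;  w <- w + v."""
    v = mu * v - lr * grad
    w = w + v
    return w, v

# Toy example: minimize f(w) = 0.5 * ||w||^2, whose gradient is w itself.
w = np.array([4.0, -2.0])
v = np.zeros_like(w)
for _ in range(200):
    w, v = momentum_step(w, v, grad=w, lr=0.1, mu=0.9)
print(np.linalg.norm(w))  # approaches 0 as the iterates converge to the minimum
```

The velocity `v` is exactly the extra state a momentum-capable synapse circuit would have to store alongside each weight, which is what distinguishes it from an SGD-only design.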
We propose an innovative stochastic-based computing architecture to implement low-power and robust a...
The proliferation of machine learning algorithms in everyday applications such as image recognition ...
The latest Deep Learning (DL) methods for designing Deep Neural Networks (DNN) have significantly ex...
Abstract—The artificial neural network (ANN) is among the most widely used methods in data processin...
Memristor-based neuromorphic computing systems address the memory-wall issue in von Neumann architec...
The parallel updating scheme of RRAM-based analog neuromorphic systems based on sign stochastic grad...
Designing deep neural networks is an art that often involves an expensive search over candidate arch...
Memristors offer great advantages as a new hardware solution for neuromorphic computing due to their...
Spike-based learning with memristive devices in neuromorphic computing architectures typically uses ...
The momentum parameter is common within numerous optimization and local search algorithms, particula...
It is now accepted that the traditional von Neumann architecture, with processor and memory separati...
Quantized neural networks (QNNs) are being actively researched as a solution for the computational c...
Deep neural networks (DNN) have revolutionized the field of machine learning by providing unpreceden...
On metrics of density and power efficiency, neuromorphic technologies have the potential to surpass ...
Training neural networks with low-resolution synaptic weights has recently attracted much interest, and infe...