Abstract — This paper describes techniques to implement gradient-descent-based machine learning algorithms on crossbar arrays made of memristors or other analog memory devices. We introduce the Unregulated Step Descent (USD) algorithm, an approximation of the steepest descent algorithm, and discuss how it addresses various hardware implementation issues. We examine the effect of device parameters and their variability on the algorithm's performance using artificially generated and real-world datasets. In addition to providing insights into how device parameters affect learning, we illustrate how the USD algorithm partially offsets the effect of device variability. Finally, we discuss how the USD algorithm can be implemented …
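The abstract describes USD only as an approximation of steepest descent that eases hardware implementation. A common way such approximations work on analog devices is to replace the precisely scaled gradient step with a fixed-amplitude programming pulse in the direction of the gradient's sign, so no device ever needs a finely tuned analog update. The sketch below illustrates that reading; it is an assumption, not the paper's stated rule, and the names (usd_step, delta_g, g_min, g_max) are hypothetical.

```python
import numpy as np

# Hypothetical sketch of a USD-style update (assumed semantics: sign-only,
# fixed-size conductance pulses instead of scaled steepest-descent steps).

def usd_step(G, x, target, delta_g, g_min=0.0, g_max=1.0):
    """One fixed-pulse update of a conductance-encoded weight matrix G.

    G      : (n_out, n_in) crossbar conductances acting as weights
    x      : (n_in,) input vector (row voltages)
    target : (n_out,) desired output
    delta_g: fixed conductance increment per programming pulse
    """
    y = G @ x                     # analog vector-matrix multiply
    err = y - target              # output error
    grad = np.outer(err, x)      # gradient of 0.5*||y - target||^2 w.r.t. G
    # "Unregulated" step: discard the gradient magnitude, keep only its sign,
    # so every device receives the same fixed pulse.
    G_new = G - delta_g * np.sign(grad)
    return np.clip(G_new, g_min, g_max)   # devices saturate at their limits

# Toy usage on random data.
rng = np.random.default_rng(0)
G = rng.uniform(0.2, 0.8, size=(3, 4))
x = rng.uniform(-1.0, 1.0, size=4)
target = np.array([0.1, -0.2, 0.3])
for _ in range(200):
    G = usd_step(G, x, target, delta_g=0.01)
```

A sign-only rule of this kind is also consistent with the abstract's claim that USD partially offsets device variability: since the applied pulse never depends on a device's exact analog response, moderate device-to-device variation changes only the effective step size, not the update direction.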
Power density constraints and device reliability issues are driving energy-efficient, fault-tolerant ...
Recent breakthroughs suggest that local, approximate gradient descent learning is compatible with Sp...
The on-chip implementation of learning algorithms would accelerate the training of neural networks i...
Abstract — This paper discusses implementations of gradient-descent based learning algorithms on memristive cros...
Resistive crossbar arrays, despite their advantages such as parallel processing, low power, and high...
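For context on why crossbar arrays offer this parallelism: with inputs applied as voltages on the rows, Ohm's law and Kirchhoff's current law make each column current the dot product of the input vector with that column's conductances, so an entire vector-matrix multiply happens in a single analog step. The minimal numerical sketch below illustrates this; the differential-pair encoding of signed weights (G_pos minus G_neg) is a common convention assumed here, not something stated in these snippets.

```python
import numpy as np

# A crossbar computes y = v @ G in one analog step: each cross-point
# conductance G[i, j] contributes current v[i] * G[i, j] to column j,
# and the column wire sums those currents.
v = np.array([0.3, -0.1, 0.5])          # row (input) voltages
G_pos = np.array([[0.6, 0.1],
                  [0.2, 0.4],
                  [0.3, 0.7]])          # "positive" conductance array
G_neg = np.array([[0.1, 0.3],
                  [0.5, 0.2],
                  [0.2, 0.1]])          # paired array encoding negative weights
# Signed weights are commonly realized as a difference of two conductances.
I = v @ (G_pos - G_neg)                 # column currents = dot products
print(I)
```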
Digital electronics has given rise to reliable, affordable, and scalable computing devices. However,...
The neuromorphic computing architecture is inspired by the working mechanism of the human b...
The recently emerged memristor can provide not only non-volatile memory storage but also intrinsic computing f...
Machine learning models for sequence learning and processing often suffer from high energy consumpti...
In this Chapter, we review the recent progress on resistance drift mitigation techniques for resisti...
Machine learning framework for the 1-transistor 1-memristor crossbar array. Demonstrations include c...
The memristor is being considered a game changer for the realization of neuromorphic hardware systems...
Modern neuromorphic deep learning techniques, as well as unsupervised techniques like the locally co...
Memristor-based computer architectures are becoming more attractive as a possible choice of hardware...