Analog Memory-based Techniques for Accelerating Deep Neural Networks

Publish Date: Wednesday, September 23, 2020

Deep neural networks (DNNs) are the fundamental building blocks behind the explosive growth of machine learning sub-fields such as computer vision and natural language processing. Modern computer architectures are based on Von Neumann-style information processing, in which memory and compute are separate. With Moore's Law slowing and Dennard scaling at an end, data communication between memory and compute, the "Von Neumann bottleneck," now dominates system throughput and energy consumption, especially for DNN workloads. Non-Von Neumann architectures, such as those that move computation to the edge of memory crossbar arrays, can significantly reduce the cost of this data movement. Crossbar arrays of resistive non-volatile memories (NVM) offer a novel solution for deep learning tasks by computing matrix-vector multiplications directly within analog memory arrays. This highly parallel structure, with computation at the location of the data, enables fast and energy-efficient multiply-accumulate operations, the workhorse of most deep learning algorithms. In this presentation, we will discuss our Phase-Change Memory (PCM)-based analog accelerator implementations for training and inference. In both cases, DNN weights are stored within large device arrays as analog conductances. Despite the considerable imperfections of existing NVM devices, such as noise and variability, software-equivalent accuracy has been achieved on various datasets in mixed software-hardware demonstrations. We will discuss the device, circuit, and system needs, as well as the performance outlook for further technology development.
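The analog multiply-accumulate idea above can be sketched in software. The snippet below is a minimal, illustrative model only, assuming weights are mapped to conductances and a Gaussian perturbation stands in for PCM imperfections (read noise, drift, variability); the function name and noise parameter are hypothetical, not part of any actual accelerator API.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossbar_mvm(weights, voltages, noise_sigma=0.0):
    """Idealized analog matrix-vector multiply on an NVM crossbar.

    Each weight is stored as a conductance G[i, j]; applying voltages v
    along the rows produces per-column currents i_j = sum_i G[i, j] * v[i]
    (Ohm's law plus Kirchhoff's current summation), i.e. one MAC per cell,
    all in parallel. `noise_sigma` is an assumed Gaussian conductance
    perturbation standing in for device noise and variability.
    """
    g = weights + noise_sigma * rng.standard_normal(weights.shape)
    return voltages @ g  # summed column currents = matrix-vector product

# Toy example: a 2x3 "layer" stored as conductances.
W = np.array([[0.2, -0.5, 0.1],
              [0.4, -0.3, 0.6]])
x = np.array([1.0, 0.5])

ideal = crossbar_mvm(W, x)                  # exact multiply-accumulate
noisy = crossbar_mvm(W, x, noise_sigma=0.01)  # with modeled device noise
```

In a real crossbar the entire product is read out in a single step, which is what makes the approach fast and energy-efficient compared with shuttling weights across the Von Neumann bottleneck.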

Learning Objectives

Describe the current status of phase change memory-based analog DNN accelerators
Differentiate memory technology challenges between training and inference applications
Elaborate on the need for end-to-end system optimization for ultimate performance
