Combinatorial neural network training algorithm for neuromorphic computing

Authors
Date, Prasanna
Issue Date
2019-08
Type
Electronic thesis
Thesis
Language
ENG
Keywords
Computer science
Abstract
In computer science, we have realized that the end of Moore's Law is just around the corner, and that it will not be possible to sustain the exponential increase in computation speed on conventional compute platforms such as CPUs and GPUs. In recent years, a substantial portion of the compute resources on almost all devices, ranging from mobile phones to laptops, desktops, and in some cases even supercomputers, has been consumed by machine learning and deep learning tasks, for example, voice assistants on smartphones, natural language translation, and auto-tagging of photos on social media. In order to sustain the ever-increasing demand for faster and better computers, we need a radically different computing paradigm. Neuromorphic computing is a non-von Neumann computing paradigm that performs computation by emulating the human brain. It naturally lends itself to machine learning and deep learning tasks, and has been shown to consume orders of magnitude less power, and to run orders of magnitude faster, than conventional CPUs and GPUs on such tasks. In the near future, neuromorphic processors are poised to enable faster and better computing platforms by running machine learning and deep learning tasks in a fast and energy-efficient manner.
Neuromorphic systems can be either analog or digital. Analog neuromorphic systems use memristors as their functional component but, at present, are not very reliable. Digital neuromorphic systems use transistors and are much more reliable; examples include the TrueNorth Neurosynaptic System by IBM and the Loihi Neuromorphic Processor by Intel. Benchmark machine learning and deep learning problems such as MNIST, CIFAR, and ImageNet have been shown to work well on such systems. In this doctoral work, we focus on the IBM TrueNorth system and ask two questions: (i) Can IBM TrueNorth perform well on non-image datasets? and (ii) How can on-chip training be performed on TrueNorth-like digital neuromorphic systems? To answer the first question, and to demonstrate the efficacy of neuromorphic systems in the High Performance Computing (HPC) space, we use the IBM TrueNorth system to classify supercomputer failures and compare this neuromorphic approach to five other machine learning and deep learning approaches: Support Vector Machines (SVM), K-Nearest Neighbors (K-NN), Deep Neural Networks (DNN), Recurrent Neural Networks (RNN), and Logistic Regression. We show that the neuromorphic approach achieves 99.8% accuracy on the test dataset and outperforms all other approaches. Furthermore, we show that the neuromorphic approach consumes four and five orders of magnitude less power than CPUs and GPUs, respectively.
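For readers who want to reproduce the spirit of this baseline comparison, the following is a minimal sketch of the classical classifiers using scikit-learn. The feature matrix, labels, and hyperparameters are placeholders, not the thesis's actual supercomputer-failure data or preprocessing, and the RNN baseline is omitted because it requires a sequence-modeling framework.

```python
# Hypothetical sketch of the classical baselines compared against the
# neuromorphic approach. X and y are stand-ins for failure-log features
# and labels; they do not reflect the thesis's actual dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))        # stand-in feature matrix
y = rng.integers(0, 2, size=1000)      # stand-in failure labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

baselines = {
    "SVM": SVC(),
    "K-NN": KNeighborsClassifier(n_neighbors=5),
    "DNN": MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}
for name, model in baselines.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")
```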
Despite their phenomenal performance on the above tasks, digital neuromorphic systems such as IBM TrueNorth suffer from a major drawback: it is currently not possible to train deep learning models on-chip. We address this problem of on-chip training and point out that it has two parts: (i) once written onto the chip, the synaptic weights are not reconfigurable; and (ii) the backpropagation algorithm cannot be used for on-chip training because of the constraints imposed by the architecture of the TrueNorth system, so an efficient learning algorithm that can operate within these constraints needs to be developed. While the first part of the problem remains out of the scope of this doctoral work, we address the second part and propose a novel Combinatorial Neural Network Training Algorithm (CoNNTrA) for Spiking Neural Networks (SNN), which could be used to enable on-chip training on TrueNorth-like digital neuromorphic systems. CoNNTrA is a heuristic algorithm for optimizing nonlinear combinatorial functions in which the decision variables are constrained to finite discrete values. To validate CoNNTrA, we use it to train models on benchmark deep learning datasets, namely MNIST, Iris, and ImageNet, and show that the trained models take up to 32× less memory and achieve testing accuracies comparable to state-of-the-art values. Furthermore, we compare the performance of CoNNTrA to that of the backpropagation algorithm along six performance metrics as well as the Design Index, a performance metric that quantifies the degree of accuracy and overfitting of trained deep learning models.
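To make the combinatorial-training idea concrete, here is a minimal sketch of training under the stated constraint that weights may take only finite discrete values. The greedy coordinate search, the allowed weight set, and the toy data are illustrative assumptions, not the actual CoNNTrA procedure described in the thesis.

```python
# Minimal sketch in the spirit of CoNNTrA: weights are restricted to a
# finite discrete set, and a gradient-free greedy coordinate search
# replaces backpropagation. Illustrative only, not the thesis algorithm.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                   # toy inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)       # toy binary labels

ALLOWED = np.array([-1.0, -0.5, 0.0, 0.5, 1.0]) # finite discrete weight values

def loss(w):
    # Squared error of a single thresholded (spiking-like) unit.
    out = (X @ w > 0).astype(float)
    return np.mean((out - y) ** 2)

w = rng.choice(ALLOWED, size=X.shape[1])
for sweep in range(20):                         # coordinate-wise greedy sweeps
    improved = False
    for i in range(len(w)):
        best_v, best_l = w[i], loss(w)
        for v in ALLOWED:                       # try every allowed value
            w[i] = v
            l = loss(w)
            if l < best_l:
                best_v, best_l, improved = v, l, True
        w[i] = best_v                           # keep the best value found
    if not improved:                            # local optimum reached
        break

print("weights:", w, "loss:", loss(w))
```

Because the search space is finite, a procedure of this kind needs no gradients and can respect hardware constraints such as low-precision synaptic weights directly.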
Description
August 2019
School of Science
Publisher
Rensselaer Polytechnic Institute, Troy, NY