Novel greedy deep learning algorithms

Authors
Wu, Ke
Issue Date
2015-12
Type
Electronic thesis
Thesis
Language
ENG
Keywords
Computer science
Abstract
The new algorithms are designed to simulate the human learning process, in which features of different objects are identified, understood, and memorized iteratively. The main difference between the two new algorithms lies in the partition function used to split the input data into subsets: a given feature is learned from one subset rather than from the whole data set. The Greedy-By-Node (GN) algorithm is based on an additive-feature assumption that to some extent resembles boosting algorithms; the input data are sorted and partitioned by their distance to the most common feature learned by the first inner node. Subsets closer to the common feature are learned earlier, while harder problems are intrinsically covered by more inner nodes and learned at a later stage. The Greedy-By-Class-By-Node (GCN) algorithm directly utilizes the data labels and assumes that data in each class share common features. A special cache mechanism and a parameter called the "amnesia factor" are also introduced to maintain speed while providing control over the "orthogonality" between learned features. Our algorithms are orders of magnitude faster in training and create more interpretable internal representations at the node level, without sacrificing out-of-sample performance.
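The distance-based partitioning described for the GN algorithm can be sketched as follows. This is a minimal illustration, not the thesis implementation: the function name, the Euclidean distance metric, and the even split into subsets are all assumptions chosen for clarity.

```python
import numpy as np

def partition_by_feature(X, feature, n_subsets=3):
    """Sort samples by distance to a common `feature` vector and split
    them into subsets: closer (easier) samples land in earlier subsets,
    farther (harder) samples in later ones, mirroring the GN idea that
    easy subsets are learned first.
    """
    dists = np.linalg.norm(X - feature, axis=1)  # distance of each sample
    order = np.argsort(dists)                    # indices, nearest first
    return np.array_split(order, n_subsets)      # list of index subsets

# Toy usage: six 2-D samples, common feature placed at the origin
X = np.array([[0.1, 0.0], [2.0, 2.0], [0.2, 0.1],
              [5.0, 5.0], [1.0, 1.0], [3.0, 3.0]])
subsets = partition_by_feature(X, np.zeros(2), n_subsets=3)
# subsets[0] holds the samples nearest the common feature,
# subsets[-1] the hardest, most distant ones.
```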
Description
December 2015
School of Science
Publisher
Rensselaer Polytechnic Institute, Troy, NY