    Symbolic information in recurrent neural networks: Issues of representation and training

    Author
    Omlin, Christian W. P.
    View/Open
    177661_thesis.pdf (9.974 MB)
    177662_tp_and_abs.pdf (53.79 KB)
    Other Contributors
    Giles, C. Lee; Rogers, Edwin H.; Ji, Chuanyi; Krishnamoorthy, M. S.; Walker, Ellen L.
    Date Issued
    1994-12
    Subject
    Computer Science
    Degree
    PhD
    Terms of Use
    This electronic version is a licensed copy owned by Rensselaer Polytechnic Institute, Troy, NY. Copyright of original work retained by author.
    URI
    https://hdl.handle.net/20.500.13015/1805
    Abstract
    In recent years, there has been a renewed interest in artificial neural networks. The discovery of new learning algorithms has made them promising tools in diverse applications such as pattern recognition, signal processing, knowledge acquisition for expert systems, prediction of protein structures, and dynamical system modeling. Recurrent neural networks, which are able to store state information over indefinite time spans, are particularly well suited for modeling dynamical systems such as stock markets, speech, and physical systems.

    In this thesis, we examine how symbolic knowledge is represented in recurrent networks trained to recognize regular languages. We demonstrate methods for inserting and extracting the learned languages in the form of deterministic finite-state automata (DFAs). We focus our attention on the quality of the extracted grammatical rules and on how partial prior knowledge can be effectively utilized to improve training performance. We prove that DFAs can be naturally embedded in a special recurrent network architecture with higher order weights such that the finite-state encoding remains stable indefinitely. We investigate means for constructing neural DFAs which are tolerant to faults in their internal structure.
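    The abstract refers to a recurrent architecture with higher order (second-order) weights in which DFA states can be encoded. As a loose illustration only, and not the architecture or code from the thesis itself, the sketch below shows such a state update, where the next state is a sigmoid of a bilinear product of the current state vector and a one-hot input symbol; the class name, weight initialization, and acceptance convention are assumptions made here for readability.

    # Illustrative sketch (assumptions, not the thesis's code): a second-order
    # recurrent network of the kind used for DFA insertion/extraction, with
    # state update  S_j(t+1) = g( sum_{i,k} W[i,j,k] * S_i(t) * I_k(t) + b_j ).
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class SecondOrderRNN:
        def __init__(self, n_states, n_symbols, seed=0):
            rng = np.random.default_rng(seed)
            # W[i, j, k]: "higher order" weight connecting state unit i and
            # input symbol k to next-state unit j.
            self.W = rng.normal(scale=0.5, size=(n_states, n_states, n_symbols))
            self.b = np.zeros(n_states)
            self.n_symbols = n_symbols

        def step(self, state, symbol):
            onehot = np.zeros(self.n_symbols)
            onehot[symbol] = 1.0
            # Bilinear combination of current state and input symbol.
            net = np.einsum('i,ijk,k->j', state, self.W, onehot)
            return sigmoid(net + self.b)

        def accepts(self, symbols, threshold=0.5):
            # Run the string and read acceptance off a designated response
            # unit (unit 0 here, an arbitrary convention for this sketch).
            state = np.zeros(self.W.shape[0])
            state[0] = 1.0  # start state
            for s in symbols:
                state = self.step(state, s)
            return state[0] > threshold

    # Usage example: an (untrained) network classifying the string 0 1 1
    # over a binary alphabet.
    net = SecondOrderRNN(n_states=5, n_symbols=2)
    print(net.accepts([0, 1, 1]))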
    Description
    December 1994; School of Science
    Department
    Dept. of Computer Science
    Publisher
    Rensselaer Polytechnic Institute, Troy, NY
    Relationships
    Rensselaer Theses and Dissertations Online Collection
    Access
    Restricted to current Rensselaer faculty, staff and students. Access inquiries may be directed to the Rensselaer Libraries.
    Collections
    • RPI Theses Online (Complete)
