
    A dynamic computational auditory scene analysis model of hearing using artificial neural networks

    Author
    Deshpande, Nikhil
    View/Open
    179672_Deshpande_rpi_0185E_11478.pdf (86.08 MB)
    Other Contributors
    Braasch, Jonas; Bahn, Curtis; Ji, Qiang, 1963-; Krueger, Ted (Theodore Edward), 1954-; Perry, Chris (Christopher S.)
    Date Issued
    2019-05
    Subject
    Architecture
    Degree
    PhD
    Terms of Use
    This electronic version is a licensed copy owned by Rensselaer Polytechnic Institute, Troy, NY. Copyright of original work retained by author.
    URI
    https://hdl.handle.net/20.500.13015/2402
    Abstract
    Of particular note in this document is the utility of neural networks in computational binaural modeling.

    Humans exhibit various psychoacoustical phenomena when presented with a stimulus in a real room in the presence of reflections. Specifically, for a given sound source, humans are able to perceive directional cues contained within both the direct sound and reflections, and can internally resolve this information. This is known as the precedence effect. While various psychoacoustical phenomena with different properties fall under this term, this document focuses specifically on the properties of reflections that occur within a short time window of the direct sound, how the human auditory system is able to resolve such content, how a computational model can be programmed to identify the binaural cues within these reflections, and how a computer can resolve the content of these reflections.

    The human auditory system includes highly complex and robust methods of directional processing and filtering in order to extract signals from real environments. Humans rely on the auditory system for communicating, listening for potential danger, and appreciating and performing music. Using only two ears as receivers, the auditory system can extract a wide range of information from sound in physical environments. This includes the ability to parse mixed, concurrent acoustical scenes into individual streams as well as the ability to resolve competing directional information contained within room reflections. Acoustical scenes and streaming are studied under a subset of the field of acoustics known as Auditory Scene Analysis, and studies demonstrate that this ability may be reinforced by head movement. Many of the directional cues incorporated in both the earliest-arriving wavefront -- known as the direct sound -- and in delayed reflections are resolved in the auditory system and used to reinforce the content within the original stimulus. While only some of the auditory processing that allows humans to perform these functions is understood, there exists an extensive body of research offering insight into human performance in complex listening environments.
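    As a concrete illustration of the binaural-cue identification discussed in the abstract, the sketch below estimates the interaural time difference (ITD) between two ear signals by interaural cross-correlation, a standard building block of computational binaural models; it is not the dissertation's neural-network model. The function name, the 44.1 kHz sample rate, and the +/-1 ms lag window are illustrative assumptions.

    # Illustrative sketch only: ITD estimation via interaural cross-correlation.
    # Not the dissertation's model; sample rate and lag window are assumptions.
    import numpy as np

    def estimate_itd(left, right, fs=44100, max_lag_ms=1.0):
        """Estimate the interaural time difference (ITD) in seconds.

        Positive values mean the right-ear signal lags the left
        (i.e., the source is toward the listener's left).
        """
        max_lag = int(fs * max_lag_ms / 1000.0)
        # Full cross-correlation between the two ear signals.
        xcorr = np.correlate(right, left, mode="full")
        lags = np.arange(-(len(left) - 1), len(right))
        # Keep only physiologically plausible lags (~ +/-1 ms for human heads).
        keep = np.abs(lags) <= max_lag
        best_lag = lags[keep][np.argmax(xcorr[keep])]
        return best_lag / fs

    # Usage: a broadband noise burst reaching the right ear 0.5 ms late.
    fs = 44100
    rng = np.random.default_rng(0)
    burst = rng.standard_normal(fs // 10)
    delay = int(0.0005 * fs)  # 0.5 ms in samples
    left = np.concatenate([burst, np.zeros(delay)])
    right = np.concatenate([np.zeros(delay), burst])
    print(f"Estimated ITD: {estimate_itd(left, right, fs) * 1000:.2f} ms")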
    Description
    May 2019; School of Architecture
    Department
    School of Architecture
    Publisher
    Rensselaer Polytechnic Institute, Troy, NY
    Relationships
    Rensselaer Theses and Dissertations Online Collection
    Access
    Restricted to current Rensselaer faculty, staff and students. Access inquiries may be directed to the Rensselaer Libraries.
    Collections
    • RPI Theses Online (Complete)
