A dynamic computational auditory scene analysis model of hearing using artificial neural networks
Authors
Deshpande, Nikhil
Issue Date
2019-05
Type
Electronic thesis
Thesis
Language
ENG
Keywords
Architecture
Abstract
The human auditory system includes highly complex and robust methods of directional processing and filtering in order to extract signals from real environments. Humans rely on the auditory system for communicating, listening for potential danger, and appreciating and performing music. Using only two ears as receivers, the auditory system can extract a wide range of information from sound in physical environments. This includes the ability to parse mixed, concurrent acoustical scenes into individual streams, as well as the ability to resolve competing directional information contained within room reflections. Acoustical scenes and streaming are studied under a subset of the field of acoustics known as Auditory Scene Analysis, and studies demonstrate that this ability may be reinforced by head movement. Many of the directional cues incorporated in both the earliest-arriving wavefront -- known as the direct sound -- and in delayed reflections are resolved in the auditory system and used to reinforce the content of the original stimulus. While only some of the auditory processing that allows humans to perform these functions is understood, there exists an extensive body of research offering insight into human performance in complex listening environments.
Description
May 2019
School of Architecture
Publisher
Rensselaer Polytechnic Institute, Troy, NY