Using auditory cues to perceptually extract visual data in collaborative, immersive big-data display systems

Authors
Lee, Wendy
Issue Date
2017-08
Type
Electronic thesis
Thesis
Language
ENG
Keywords
Architectural sciences
Abstract
The advent of multisensory display systems, such as virtual and augmented reality, has fostered a new relationship between humans and space. Not only can these systems mimic real-world environments, but they can also create a new space typology made solely of data. In these spaces, two-dimensional information is displayed in three dimensions, requiring human senses to be used to understand virtual, attention-based elements. Studies in the field of big data have predominantly focused on visual representations and extractions of information, with little focus on sound. The goal of this research is to evaluate the most efficient methods of perceptually extracting visual data using auditory stimuli in immersive environments. Using Rensselaer's CRAIVE-Lab, a virtual reality space with 360-degree panoramic visuals and an array of 128 loudspeakers, participants were asked questions about complex visual displays while receiving a variety of auditory cues, ranging from sine tones to camera shutter sounds. Analysis of the speed and accuracy of participant responses revealed that auditory cues that were easier to localize and were positively perceived were best for data extraction and could help create more user-friendly systems in the future.
Description
August 2017
School of Architecture
Publisher
Rensselaer Polytechnic Institute, Troy, NY