A method for evaluating audio-visual scene analysis in multi-talker environments
Keywords: Auditory scene analysis, Speech perception, Virtual reality
In cocktail-party environments, listeners are able to comprehend and localize multiple simultaneous talkers. With current virtual reality (VR) technology and virtual acoustics, it has become possible to present an audio-visual cocktail-party scenario in a controlled laboratory environment. A new continuous speech corpus was designed and recorded, consisting of ten monologues from five female and five male talkers, each covering a distinctly different topic. Using an egocentric interaction method in VR, subjects were asked to label the perceived talkers according to source position and speech content while the number of simultaneously presented talkers was varied. The subjects’ accuracy in this task decreased as the number of talkers increased. When more than six talkers were present in a scene, subjects underestimated the number of talkers and their azimuth localization error increased. This method provides a new approach to gauging listeners’ ability to analyze complex audio-visual scenes.