Audiovisual integration in speech perception: a multi-stage process

Authors

  • Kasper Eskelund Cognitive Systems, Department of Informatics and Mathematical Modelling, Technical University of Denmark, DK-2800 Lyngby, Denmark
  • Jyrki Tuomainen Speech Hearing and Language Sciences, University College London, UK
  • Tobias Andersen Cognitive Systems, Department of Informatics and Mathematical Modelling, Technical University of Denmark, DK-2800 Lyngby, Denmark

Abstract

Integration of speech signals from ear and eye is a well-known feature of speech perception. This is evidenced by the McGurk illusion, in which visual speech alters auditory speech perception, and by the advantage observed in auditory speech detection when a visual signal is present. Here we investigate whether the integration of auditory and visual speech observed in these two audiovisual integration effects is a specific trait of speech perception. We further ask whether audiovisual integration occurs in a single processing stage or in multiple processing stages.

References

Barker, J. P., Berthommier, F., and Schwartz, J.-L. (1998). "Is Primitive AV Coherence an Aid to Segment the Scene?" In Proceedings of the International Conference on Auditory-Visual Speech Processing 1998 (Terrigal, Australia).

Bernstein, L., Auer, E. T. J., and Takayanagi, S. (2004). "Auditory speech detection in noise enhanced by lipreading". Speech Communication 44, 5–18.

Bregman, A. S. (1990). Auditory scene analysis: the perceptual organization of sound (MIT Press).

Eskelund, K., Tuomainen, J., and Andersen, T. S. (2010). "Multistage audiovisual integration of speech: dissociating identification and detection". Exp. Brain Res. 208, 447–457.

Grant, K. W., and Seitz, P.-F. (2000). "The use of visible speech cues for improving auditory detection of spoken sentences". J. Acoust. Soc. Am. 108, 1197–1208.

Massaro, D. W. (1998). Perceiving talking faces: from speech perception to a behavioral principle (MIT Press).

McGurk, H., and MacDonald, J. (1976). "Hearing lips and seeing voices". Nature 264, 746–748.

Nahorna, O., Berthommier, F., and Schwartz, J.-L. (2010). "Binding and unbinding in audiovisual speech fusion: Removing the McGurk effect by an incoherent preceding audiovisual context". In Proceedings of the International Conference on Auditory-Visual Speech Processing 2010 (Hakone, Kanagawa, Japan: Kumamoto University, Japan).

Nahorna, O., Berthommier, F., and Schwartz, J.-L. (2011). "Binding and unbinding in audiovisual speech fusion: Follow-up experiments on a new paradigm". In Proceedings of the International Conference on Auditory-Visual Speech Processing 2011 (Volterra, Italy: Kungliga Tekniska Högskolan, Sweden).

Remez, R., Rubin, P., Pisoni, D., and Carrell, T. (1981). "Speech perception without traditional speech cues". Science 212, 947–949.

Published

2011-12-15

How to Cite

Eskelund, K., Tuomainen, J., & Andersen, T. (2011). Audiovisual integration in speech perception: a multi-stage process. Proceedings of the International Symposium on Auditory and Audiological Research, 3, 119–126. Retrieved from http://proceedings.isaar.eu/index.php/isaarproc/article/view/2011-14

Section

2011/1. Indicators of hearing impairment and measures of speech perception