Linguistic scene analysis and the importance of synergy
Abstract
This chapter explores the possibility that speech is decoded using cross-spectral and cross-modal integration strategies that are inherently synergistic. Combining information from separate spectral channels or across modalities may result in far greater intelligibility and phonetic recognition than predicted by linear-integration models. This is because decoding speech relies on multi-tier processing strategies that are opportunistic and idiosyncratic. Models incorporating synergistic integration are more likely to predict linguistic comprehension than conventional, linear approaches, particularly in challenging listening conditions.
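The contrast between linear-integration and synergistic models can be made concrete with a small sketch. Below is a minimal illustration, using entirely hypothetical numbers (not data from the chapter): an independent-channels model predicts two-band recognition by assuming a consonant is missed only when both spectral bands fail on their own, and "synergy" means the observed combined score exceeds that prediction.

```python
# Illustrative sketch with hypothetical numbers (not data from the chapter):
# compare an independent-channels ("linear") prediction of two-band
# consonant recognition against a hypothetical observed combined score
# that exceeds it, i.e. cross-spectral synergy.

def independent_channels_prediction(p_low: float, p_high: float) -> float:
    """Predicted recognition if the two spectral bands contribute
    independently: a consonant is missed only when both bands fail."""
    return 1.0 - (1.0 - p_low) * (1.0 - p_high)

# Hypothetical single-band recognition scores (proportion correct)
p_low, p_high = 0.35, 0.40

predicted = independent_channels_prediction(p_low, p_high)  # about 0.61
observed = 0.85  # hypothetical combined score illustrating super-additivity

print(f"independent-channels prediction: {predicted:.2f}")
print(f"hypothetical observed combined:  {observed:.2f}")
print("synergistic" if observed > predicted else "at or below linear prediction")
```

Under this framing, any observed combined score above the independent-channels prediction indicates that the listener is extracting information from the joint pattern across bands that neither band carries alone, which is the kind of behavior a linear-integration model cannot capture.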