Using fNIRS to study audio-visual speech integration in post-lingually deafened cochlear implant users

Authors

  • Xin Zhou Bionics Institute, Melbourne, Australia; Medical Bionics Department, University of Melbourne, Melbourne, Australia
  • Hamish Innes-Brown Bionics Institute, Melbourne, Australia; Medical Bionics Department, University of Melbourne, Melbourne, Australia
  • Colette McKay Bionics Institute, Melbourne, Australia; Medical Bionics Department, University of Melbourne, Melbourne, Australia

Keywords:

fNIRS, cochlear implant, audio-visual integration

Abstract

The aim of this experiment was to investigate differences in audio-visual (AV) speech integration between cochlear implant (CI) users and normal-hearing (NH) listeners using behavioural and functional near-infrared spectroscopy (fNIRS) measures. Participants were 16 post-lingually deafened adult CI users and 13 age-matched NH listeners. Participants’ response accuracy in audio-alone (A), visual-alone (V), and AV modalities was measured with closed-set /aCa/ non-words and with open-set CNC words. Behavioural AV integration was quantified using a probability model and a cue integration model that predicted participants’ AV performance given minimal or optimal integration. Brain activation was measured with fNIRS while participants listened to or watched A, V, or AV speech with or without multi-talker babble. In the fNIRS data, evidence of AV integration was assessed by testing the principle of inverse effectiveness (PoIE), comparing the difference in activation between the A and AV modalities in two brain regions across quiet and noise conditions. Behavioural AV integration was similar in the two groups for CNC words, but was poorer in the CI group than in the NH group for consonant perception. Testing the PoIE, our fNIRS data did not demonstrate AV integration in either NH listeners or CI users.
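The two quantitative criteria described above can be made concrete with a short sketch. The Python snippet below is illustrative only, not the authors’ analysis code, and all numbers in it are hypothetical: it shows the probability-summation prediction of AV accuracy from unimodal scores, in the spirit of Blamey et al. (1989), and the percent multisensory-enhancement metric that underlies PoIE tests, following Meredith and Stein (1983).

    # Illustrative sketch (not the paper's analysis code): two common ways to
    # quantify audio-visual (AV) integration from unimodal measures.

    def probability_model(p_a: float, p_v: float) -> float:
        """Predicted AV accuracy if auditory and visual cues are used
        independently (probability summation, cf. Blamey et al., 1989):
        p_AV = p_A + p_V - p_A * p_V. Observed AV scores above this
        prediction suggest integration beyond independent cue use."""
        return p_a + p_v - p_a * p_v

    def multisensory_enhancement(resp_a: float, resp_v: float,
                                 resp_av: float) -> float:
        """Percent enhancement of the multisensory response over the
        strongest unisensory response (cf. Meredith and Stein, 1983)."""
        best_unisensory = max(resp_a, resp_v)
        return 100.0 * (resp_av - best_unisensory) / best_unisensory

    # Hypothetical example: accuracies of 0.55 (A-alone) and 0.30 (V-alone)
    # predict 0.685 correct under independent cue combination.
    print(probability_model(0.55, 0.30))            # 0.685
    # Hypothetical activation levels in quiet vs. in babble:
    print(multisensory_enhancement(1.0, 0.6, 1.1))  # 10% enhancement in quiet
    print(multisensory_enhancement(0.5, 0.4, 0.7))  # 40% enhancement in noise

Under the PoIE, the enhancement computed this way should be larger in noise, where the unisensory responses are weaker, than in quiet; as reported above, this pattern was not observed in either group.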

References

Anderson, C.A., Lazard, D.S., and Hartley, D.E.H. (2016). “Plasticity in bilateral superior temporal cortex: effects of deafness and cochlear implantation on auditory and visual speech processing,” Hear. Res., 343, 138-149.

Blamey, P.J., Cowan, R.S., et al. (1989). “Speech perception using combinations of auditory, visual, and tactile information,” J. Rehabil. Res. Dev., 26, 15-24.

Diederich, A., Colonius, H., et al. (2008). “Assessing age-related multisensory enhancement with the time-window-of-integration model,” Neuropsychologia, 46, 2556-2562.

Groppe, D.M., Urbach, T.P., et al. (2011). “Mass univariate analysis of event-related brain potentials/fields I: A critical tutorial review,” Psychophysiology, 48, 1711-1725.

Holmes, N.P. (2007). “The law of inverse effectiveness in neurons and behaviour: multisensory integration versus normal variability,” Neuropsychologia, 45, 3340-3345.

James, T.W., Stevenson, R.A., et al. (2012). “Inverse effectiveness and BOLD fMRI,” The New Handbook of Multisensory Processes (Stein BE, ed.), pp. 207-222.

Laurienti, P.J., Perrault, T.J., et al. (2005). “On the use of superadditivity as a metric for characterizing multisensory integration in functional neuroimaging studies,” Exp. Brain Res., 166, 289-297. doi: 10.1007/s00221-005-2370-2

Meredith, M.A., and Stein, B.E. (1983). “Interactions among converging sensory inputs in the superior colliculus,” Science, 221, 389-391.

Perrault, T.J., Vaughan, J.W., et al. (2005). “Superior colliculus neurons use distinct operational modes in the integration of multisensory stimuli,” J. Neurophysiol., 93, 2575-2586.

Peterson, G.E., and Lehiste, I. (1962). “Revised CNC lists for auditory tests,” J. Speech Hear. Disord., 27, 62-70.

Rouger, J., Lagleyre, S., et al. (2007). “Evidence that cochlear-implanted deaf patients are better multisensory integrators,” Proc. Natl. Acad. Sci. USA, 104, 7295-7300. doi: 10.1073/pnas.0609419104

Schierholz, I., Finke, M., et al. (2015). “Enhanced audio–visual interactions in the auditory cortex of elderly cochlear-implant users,” Hear. Res., 328, 133-147.

Stevenson, R.A., and James, T.W. (2009). “Audiovisual integration in human superior temporal sulcus: Inverse effectiveness and the neural processing of speech and object recognition,” NeuroImage, 44, 1210-1223. doi: 10.1016/j.neuroimage.2008.09.034

Published

2017-12-18

Citation/Export

Zhou, X., Innes-Brown, H., & McKay, C. (2017). Using fNIRS to study audio-visual speech integration in post-lingually deafened cochlear implant users. Proceedings of the International Symposium on Auditory and Audiological Research, 6, 55–62. Retrieved from https://proceedings.isaar.eu/index.php/isaarproc/article/view/2017-08

Section

2017/2. Neural mechanisms, modeling, and physiological correlates of adaptation