The neural processing of phonemes is shaped by linguistic analysis

Authors

  • Tobias Overath, Duke Institute for Brain Sciences, Duke University, Durham, NC, USA; Center for Cognitive Neuroscience, Duke University, Durham, NC, USA; Department of Psychology and Neuroscience, Duke University, Durham, NC, USA
  • Jackson C. Lee, Duke Institute for Brain Sciences, Duke University, Durham, NC, USA

Abstract

Speech perception entails the mapping of the acoustic waveform to its linguistic representation. For this transformation to succeed, the speech signal needs to be tracked across multiple temporal scales to decode linguistic units ranging from phonemes to sentences. Here, we investigate how linguistic knowledge and the temporal scale of linguistic analysis influence the neural processing of a fundamental linguistic unit, the phoneme. To control the linguistic scale of analysis, we use a novel speech-quilting algorithm (Overath et al., 2015) that manipulates the acoustic structure available at different linguistic units (phoneme, syllable, word). To control the linguistic content independently of the temporal acoustic structure, we construct speech quilts from both a familiar (English) and a foreign (Korean) language. We recorded electroencephalography in healthy participants and show that the neural response to phonemes, the phoneme-related potential, is shaped by linguistic context in a familiar language, but not in a foreign language. The results suggest that the processing of the acoustic properties of a fundamental linguistic unit, the phoneme, is already shaped by linguistic analysis.
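In outline, quilting segments the source signal at the boundaries of a chosen linguistic unit (phoneme, syllable, or word; boundaries can be obtained from forced aligners, e.g. Yuan and Liberman, 2008; Yoon and Kang, 2013) and reassembles the segments in a new order, preserving acoustic structure within units while disrupting it across them. The Python sketch below illustrates only this core reordering step; make_quilt is a hypothetical helper written for this page, and the original algorithm (Overath et al., 2015) additionally chooses the segment order to minimize acoustic discontinuities and smooths the joins pitch-synchronously (Moulines and Charpentier, 1990), which this sketch omits.

    # Minimal sketch of the segment-reordering step in speech quilting.
    # Assumes segment boundaries (sample indices) are already known,
    # e.g. from a forced aligner; boundary matching and pitch-synchronous
    # smoothing from the full algorithm are deliberately left out.
    import numpy as np

    def make_quilt(signal, boundaries, rng=None):
        """Reorder the segments of `signal` delimited by `boundaries`
        (sample indices, including 0 and len(signal)) at random."""
        rng = np.random.default_rng(rng)
        segments = [signal[b0:b1]
                    for b0, b1 in zip(boundaries[:-1], boundaries[1:])]
        order = rng.permutation(len(segments))
        return np.concatenate([segments[i] for i in order])

    # Example: quilt 1 s of noise at pseudo-"phoneme" (50-ms) boundaries.
    fs = 16000
    x = np.random.default_rng(0).standard_normal(fs)
    bounds = list(range(0, fs + 1, int(0.05 * fs)))
    quilt = make_quilt(x, bounds, rng=1)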

References

Brainard, D.H. (1997). “The Psychophysics Toolbox,” Spat. Vis., 10, 433-436.

Delorme, A., and Makeig, S. (2004). “EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis,” J. Neurosci. Methods, 134, 9-21.

Friederici, A.D., and Gierhan, A.M.E. (2013). “The language network,” Curr. Opin. Neurobiol., 23, 250-254.

Hasson, U., Yang, E., Vallines, I., Heeger, D.J., and Rubin, N. (2008). “A hierarchy of temporal receptive windows in human cortex,” J. Neurosci., 28, 2539-2550.

Hickok, G., and Poeppel, D. (2007). “The cortical organization of speech processing,” Nat. Rev. Neurosci., 8, 393-402.

Khalighinejad, B., da Silva, G.C., and Mesgarani, N. (2017). “Dynamic encoding of acoustic features in neural responses to continuous speech,” J. Neurosci., 37, 2176-2185.

Kocagoncu, E., Clarke, A., Devereux, B.J., and Tyler, L.K. (2017). “Decoding the cortical dynamics of sound-meaning mapping,” J. Neurosci., 37, 1312-1319.

Ladefoged, P., and Johnson, K. (2010). A Course in Phonetics. Boston: Wadsworth.

Lerner, Y., Honey, C.J., Silbert, L.J., and Hasson, U. (2011). “Topographic mapping of a hierarchy of temporal receptive windows using a narrated story,” J. Neurosci., 31, 2906-2915.

Martin, B.A., Tremblay, K.L., and Korczak, P. (2008). “Speech evoked potentials: from the laboratory to the clinic,” Ear Hearing, 29, 285-313.

Molinaro, N., Lizarazu, M., Lallier, M., Bourguignon, M., and Carreiras, M. (2016). “Out-of-synchrony speech entrainment in developmental dyslexia,” Hum. Brain Mapp., 37, 2767-2783.

Moulines, E., and Charpentier, F. (1990). “Pitch-synchronous waveform processing techniques for text-to-speech synthesis using diphones,” Speech Commun., 9, 453-467.

Overath, T., McDermott, J.H., Zarate, J.M., and Poeppel, D. (2015). “The cortical analysis of speech-specific temporal structure revealed by responses to sound quilts,” Nat. Neurosci., 18, 903-911.

Park, H., Ince, R.A., Schyns, P.G., Thut, G., and Gross, J. (2015). “Frontal top-down signals increase coupling of auditory low-frequency oscillations to continuous speech in human listeners,” Curr. Biol., 25, 1649-1653.

Phillips, C., Pellathy, T., Marantz, A., Yellin, E., Wexler, K., Poeppel, D., McGinnis, M., and Roberts, T. (2000). “Auditory cortex accesses phonological categories: an MEG mismatch study,” J. Cogn. Neurosci., 12, 1038-1055.

Plack, C.J., Barker, D., and Prendergast, G. (2014). “Perceptual consequences of "hidden" hearing loss,” Trends Hear., 18, 1-11.

Poeppel, D. (2003). “The analysis of speech in different temporal integration windows: cerebral lateralization as 'asymmetric sampling in time',” Speech Commun., 41, 245-255.

Rosen, S. (1992). “Temporal information in speech: acoustic, auditory and linguistic aspects,” Philos. Trans. R. Soc. Lond. B. Biol. Sci., 336, 367-373.

Sanders, L.D., and Neville, H.J. (2003). “An ERP study of continuous speech processing: I. Segmentation, semantics, and syntax in native speakers,” Brain Res. Cogn. Brain Res., 15, 228-240.

Sohn, H.-M. (1999). The Korean Language. Cambridge: Cambridge University Press.

Stevens, K.N. (2000). Acoustic Phonetics. Cambridge, MA: MIT Press.

Tremblay, K.L., Friesen, L., Martin, B.A., and Wright, R. (2003). “Test-retest reliability of cortical evoked potentials using naturally produced speech sounds,” Ear Hearing, 24, 225-232.

Woldorff, M.G., Liotti, M., Seabolt, M., Busse, L., Lancaster, J.L., and Fox, P.T. (2002). “The temporal dynamics of the effects in occipital cortex of visual-spatial selective attention,” Brain Res. Cogn. Brain Res., 15, 1-15.

Yoon, T.-J., and Kang, Y. (2013). “The Korean Phonetic Aligner Program Suite,” http://korean.utsc.utoronto.ca/kpa/

Yuan, J., and Liberman, M. (2008). “Speaker identification on the SCOTUS corpus,” J. Acoust. Soc. Am., 123, 3878.

Published

2018-01-11

How to Cite

Overath, T., & Lee, J. C. (2018). The neural processing of phonemes is shaped by linguistic analysis. Proceedings of the International Symposium on Auditory and Audiological Research, 6, 107–116. Retrieved from https://proceedings.isaar.eu/index.php/isaarproc/article/view/2017-13

Section

2017/2. Neural mechanisms, modeling, and physiological correlates of adaptation