A computational model of sound recognition used to analyze the capacity and adaptability in learning vowel classes

Authors

  • Jeffrey Spencer, Department of Electrical and Electronic Engineering, University of Melbourne, Melbourne, Australia; Centre for Neural Engineering, University of Melbourne, Melbourne, Australia
  • Neil McLachlan, Centre for Music, Mind, and Wellbeing, Melbourne School of Psychological Sciences, University of Melbourne, Melbourne, Australia
  • David B. Grayden, Department of Electrical and Electronic Engineering, University of Melbourne, Melbourne, Australia; Centre for Neural Engineering, University of Melbourne, Melbourne, Australia

Abstract

Sound recognition is likely to initiate early in auditory processing, comparing stored representations (spectrotemporal templates) against spectral information from auditory brainstem responses over time. A computational model of sound recognition is developed using neurobiologically plausible operations. The number of templates required for the computational model to correctly recognize 10 Klatt-synthesized vowels is determined to be around 1250 when the model is trained with random fundamental frequencies from the male pitch range and randomized variation of the first three formants of each vowel. To investigate the model's ability to adapt to noise and to previously unheard vowel utterances, test sets of 1000 randomly generated Klatt vowels in babble are generated at signal-to-noise ratios (SNRs) of 20 dB, 10 dB, 5 dB, 0 dB, and −5 dB. The vowel recognition rates at these SNRs are 99.7%, 99.6%, 97.0%, 77.6%, and 54.0%, respectively. A test set of four vowel recordings from four speakers is also evaluated without noise, giving a 100% recognition rate. These data suggest that storing auditory representations of speech at the spectrotemporal resolution of the auditory nerve, over a typical range of spoken pitch, does not require excessive memory or computing resources to implement on parallel computer systems.
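
The noise-mixing and template-matching procedure implied by these test conditions can be sketched in a few lines of Python (the language cited in Oliphant, 2007). The sketch below is illustrative only, not the authors' implementation: mix_at_snr scales a babble-like noise signal to a target SNR before adding it to a vowel, and recognize compares an incoming spectrotemporal pattern against stored templates using a normalized dot product as a hypothetical stand-in for the model's neurobiologically plausible comparison.

    import numpy as np

    def mix_at_snr(signal, noise, snr_db):
        # Scale the noise so the signal-to-noise ratio equals snr_db.
        p_signal = np.mean(signal ** 2)
        p_noise = np.mean(noise ** 2)
        gain = np.sqrt(p_signal / (p_noise * 10.0 ** (snr_db / 10.0)))
        return signal + gain * noise

    def recognize(pattern, templates):
        # Return the index of the stored template most similar to the
        # input pattern; both are 2-D arrays (frequency channels x time
        # frames). Similarity is a normalized dot product.
        def unit(x):
            return x / (np.linalg.norm(x) + 1e-12)
        scores = [np.sum(unit(pattern) * unit(t)) for t in templates]
        return int(np.argmax(scores))

    # Toy usage: three random "templates" and a noisy copy of the second.
    rng = np.random.default_rng(0)
    templates = [rng.random((64, 20)) for _ in range(3)]
    noisy = mix_at_snr(templates[1], rng.random((64, 20)) - 0.5, snr_db=5)
    print(recognize(noisy, templates))  # expected output: 1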

References

De Wachter, M., Matton, M., Demuynck, K., Wambacq, P., Cools, R., and Van Compernolle, D. (2007). “Template-based continuous speech recognition”, IEEE T. Audio Speech, 15, 1377-1390.

Deng, L. and Strik, H. (2007). “Structure-based and template-based automatic speech recognition – Comparing parametric and non-parametric approaches”, in Interspeech 2007, pp. 2608-2611.

Fairbanks, G., and Grubb, P. (1961). “A psychophysical investigation of vowel formants”, J. Speech Hear. Res., 4, 203-219.

Hawks, J.W., and Miller, J.D. (1995). “A formant bandwidth estimation procedure for vowel synthesis”, J. Acoust. Soc. Am., 97, 1343-1344.

Hillenbrand, J., and Gayvert, R.T. (1993). “Identification of steady-state vowels synthesized from the Peterson and Barney measurements”, J. Acoust. Soc. Am., 94, 668-674.

Kalinli, O., Seltzer, M.L., Droppo, J., and Acero, A. (2010). “Noise adaptive training for robust automatic speech recognition”, IEEE T. Audio Speech, 18, 1889-1901.

Klatt, D.H. (1980). “Software for a cascade/parallel formant synthesizer”, J. Acoust. Soc. Am., 67, 971-995.

Klatt, D.H., and Klatt, L.C. (1990). “Analysis, synthesis, and perception of voice quality variations among female and male talkers”, J. Acoust. Soc. Am., 87, 820-857.

Krishnamoorthy, K., and Mathew, T. (2009). “The multivariate normal distribution”, in Statistical Tolerance Regions: Theory, Applications, and Computation (John Wiley & Sons, Inc.), pp. 225-247.

McLachlan, N. (2009). “A computational model of human pitch strength and height judgments”, Hear. Res., 249, 23-35.

McLachlan, N., and Wilson, S. (2010). “The central role of recognition in auditory perception: a neurobiological model”, Psychol. Rev., 117, 175-196.

McLachlan, N. (2011). “A neurocognitive model of recognition and pitch segregation”, J. Acoust. Soc. Am., 130, 2845-2854.

Mi, L., Tao, S., Wang, W., Dong, Q., Jin, S.-H., and Liu, C. (2013). “English vowel identification in long-term speech-shaped noise and multi-talker babble for English and Chinese listeners”, J. Acoust. Soc. Am., 133, EL391-EL397.

Moore, B.C.J. (2003). An Introduction to the Psychology of Hearing, 3rd Ed. (Academic Press).

Neel, A.T. (2008). “Vowel space characteristics and vowel identification accuracy”, J. Speech Lang. Hear. Res., 51, 574-585.

Oliphant, T.E. (2007). “Python for scientific computing”, Comput. Sci. Eng., 9, 10-20.

Pearce, D., and Hirsch, H.-G. (2000). “The Aurora experimental framework for the performance evaluation of speech recognition systems under noisy conditions”, in ISCA ITRW ASR-2000 (Paris, France), pp. 181-188.

Peterson, G.E., and Barney, H.L. (1952). “Control methods used in a study of the vowels”, J. Acoust. Soc. Am., 24, 175-184.

Rabiner, L.R. (1989). “A tutorial on hidden Markov models and selected applications in speech recognition”, Proc. IEEE, 77, 257-286.

Schouten, B., Gerrits, E., and Van Hessen, A. (2003). “The end of categorical perception as we know it”, Speech Commun., 41, 71-80.

Slaney, M. (1993). “An efficient implementation of the Patterson-Holdsworth auditory filter bank”, Apple Computer Technical Report #35.

Viemeister, N.F., and Wakefield, G.H. (1991). “Temporal integration and multiple looks”, J. Acoust. Soc. Am., 90, 858-865.

Published

2013-12-15

How to Cite

Spencer, J., McLachlan, N., & Grayden, D. B. (2013). A computational model of sound recognition used to analyze the capacity and adaptability in learning vowel classes. Proceedings of the International Symposium on Auditory and Audiological Research, 4, 113–120. Retrieved from https://proceedings.isaar.eu/index.php/isaarproc/article/view/2013-12

Issue

Vol. 4 (2013)

Section

2013/2. Physiological correlates and modeling of auditory plasticity