Learning about perception of temporal fine structure by building audio codecs

Authors

  • Lars Villemoes Dolby Sweden AB, Stockholm, Sweden
  • Arijit Biswas Dolby Germany GmbH, Nürnberg, Germany
  • Heiko Purnhagen Dolby Sweden AB, Stockholm, Sweden
  • Heidi-Maria Lehtonen Dolby Sweden AB, Stockholm, Sweden

Keywords:

audio coding, quality assessment, audio synthesis, texture

Abstract

The goal of audio coding is to efficiently describe an auditory experience while enabling a faithful reconstruction for the listener. The subjective quality relative to the original is measured by established psychoacoustic tests (BS.1116, 2015; BS.1534, 2015), and the description cost is measured in number of bits. Since it is much cheaper to describe coarse-scale signal properties than temporal fine structure (TFS), tools such as noise fill, spectral extension, binaural cue coding, and machine learning have raised the performance of audio codecs far beyond the first generation based on masking principles (e.g., mp3). In this evolution, codec developers have acquired implicit knowledge about hearing, but it has become increasingly difficult to construct tools that predict subjective quality. For example, it is not yet known which aspects of the TFS must be preserved for the listening impression to remain intact. To explore these issues, we study models of auditory representations with the mindset of audio coding. Given a method to solve the inverse problem of creating a signal with a specified representation, evaluation by listening can immediately reveal strengths and weaknesses of a candidate model.
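
To make the analysis-synthesis idea concrete, the sketch below (our own illustration in Python with NumPy/SciPy, not taken from the paper) uses a magnitude spectrogram as a stand-in candidate representation and a Griffin-Lim-style iteration as the inverse method; listening to the resynthesized signal then exposes what the representation discards, for instance the phase-borne temporal fine structure.

# Minimal analysis-synthesis sketch: compute a candidate auditory-like
# representation (here simply an STFT magnitude), solve the inverse problem
# of finding a signal with that representation, and judge the result by ear.
# All names below are illustrative, not from the paper.
import numpy as np
from scipy.signal import stft, istft

def representation(x, fs, nperseg=512):
    # Candidate model: magnitude spectrogram (the phase, and with it much of
    # the TFS, is discarded by this representation).
    _, _, X = stft(x, fs=fs, nperseg=nperseg)
    return np.abs(X)

def invert(target_mag, fs, n_samples, nperseg=512, n_iter=100, seed=0):
    # Griffin-Lim-style projections: alternately impose the target magnitude
    # and the consistency of a time-domain signal.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_samples)              # start from noise
    for _ in range(n_iter):
        _, _, X = stft(x, fs=fs, nperseg=nperseg)
        X = target_mag * np.exp(1j * np.angle(X))   # keep phase, force magnitude
        _, x = istft(X, fs=fs, nperseg=nperseg)
        x = x[:n_samples]
    return x

if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    original = np.sin(2 * np.pi * 440 * t)          # any test item works here
    resynth = invert(representation(original, fs), fs, len(original))
    # Informal listening test: write `original` and `resynth` to WAV files
    # (e.g. with scipy.io.wavfile.write) and compare them by ear.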

References

Brandenburg, K., Faller, C., Herre, J., Johnston, J. D., and Kleijn, W. B. (2013). “Perceptual Coding of High-Quality Digital Audio,” Proc. IEEE, 101, 1905-1919, doi: 10.1109/JPROC.2013.2263371
BS.1116 (2015). “Methods for the subjective assessment of small impairments in audio systems,” Recommendation ITU-R BS.1116-3. Retrieved from: https://www.itu.int/rec/R-REC-BS.1116/en
BS.1387 (2001). “Method for objective measurements of perceived audio quality,” Recommendation ITU-R BS.1387-1. Retrieved from: https://www.itu.int/rec/R-REC-BS.1387
BS.1534 (2015). “Method for the subjective assessment of intermediate quality levels of coding systems,” Recommendation ITU-R BS.1534-3. Retrieved from: https://www.itu.int/rec/R-REC-BS.1534
Dau, T., Püschel, D., and Kohlrausch, A. (1996). “A quantitative model of the ‘effective’ signal processing in the auditory system. I. Model structure,” J. Acoust. Soc. Am., 99, 3615-3622, doi: 10.1121/1.414959
Decorsière, R., Søndergaard, P. L., MacDonald, E. N., and Dau, T. (2015). “Inversion of Auditory Spectrograms, Traditional Spectrograms, and Other Envelope Representations,” IEEE/ACM Trans. Audio Speech Lang. Process., 23, 46-56, doi: 10.1109/TASLP.2014.2367821
Herre, J., and Dick, S. (2019). “Psychoacoustic Models for Perceptual Audio Coding—A Tutorial Review,” Appl. Sci., 9, 2854, doi: 10.3390/app9142854
Kleijn, W. B., Lim, F. S. C., Luebs, A., Skoglund, J., Stimberg, F., Wang, Q., and Walters, T. C. (2018). “Wavenet Based Low Rate Speech Coding,” IEEE ICASSP, 676-680, doi: 10.1109/ICASSP.2018.8462529
Klejsa, J., Hedelin, P., Zhou, C., Fejgin, R., and Villemoes, L. (2019). “High-quality Speech Coding with Sample RNN,” IEEE ICASSP, 7155-7159, doi: 10.1109/ICASSP.2019.8682435 (Samples retrieved from: https://sigport.org/documents/high-quality-speech-coding-sample-rnn)
McDermott, J. H., Oxenham, A. J., and Simoncelli, E. P. (2009). “Sound texture synthesis via filter statistics,” IEEE WASPAA, 297-300, doi: 10.1109/ASPAA.2009.5346467
Meddis, R., and O'Mard, L. (1997). “A unitary model of pitch perception,” J. Acoust. Soc. Am., 102, 1811-1820, doi: 10.1121/1.420088
Moore, B. (2019). “The roles of temporal envelope and fine structure information in auditory perception,” Acoust. Sci. Technol., 40, 61-83, doi: 10.1250/ast.40.61
P.863 (2018). “Perceptual objective listening quality prediction,” Recommendation ITU-T P.863. Retrieved from: https://www.itu.int/rec/T-REC-P.863
Slaney, M. (1995). “Pattern playback from 1950 to 1995,” Proc. IEEE Int. Conf. Syst. Man. Cybern., 4, 3519-3524, doi: 10.1109/ICSMC.1995.538332

Published

2020-04-14

Citation/Export

Villemoes, L., Biswas, A., Purnhagen, H., & Lehtonen, H.-M. (2020). Learning about perception of temporal fine structure by building audio codecs. Proceedings of the International Symposium on Auditory and Audiological Research, 7, 141–148. Retrieved from https://proceedings.isaar.eu/index.php/isaarproc/article/view/2019-17

Section

2019/3. Machine listening and intelligent auditory signal processing