Perceptual differentiation modeling explains phoneme mispronunciation by non-native speakers
Koniaris, Christos (KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology; Centre for Speech Technology, CTT)
Engwall, Olov (KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology; Centre for Speech Technology, CTT). ORCID iD: 0000-0003-4532-014X
2011 (English). In: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, 2011, p. 5704-5707. Conference paper, published paper (refereed).
Abstract [en]

One of the difficulties in second language (L2) learning is a weak ability to discriminate between acoustic diversity within an L2 phoneme category and differences between categories. In this paper, we describe a general method to quantitatively measure the perceptual difference between a group of native speakers and individual non-native speakers. This task normally requires subjective listening tests and/or a thorough linguistic study. We instead use a fully automated method based on a psychoacoustic auditory model. For a given phoneme class, we measure the similarity between the Euclidean space spanned by the power spectrum of a native speech signal and the Euclidean space spanned by the auditory model output. We do the same for a non-native speech signal. By comparing the two similarity measurements, we identify problematic phonemes for a given speaker. To validate the method, we apply it to groups of non-native speakers with various first language (L1) backgrounds. Our results agree with theoretical findings from linguistic studies in the literature.
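To make the comparison concrete, the Python sketch below shows one way such a perceptual differentiation ratio could be computed. This record gives no implementation details, so the choice of distance-matrix correlation as the similarity measure, the function names, and the reading of ratios below one are illustrative assumptions, not the authors' actual formulation.

    import numpy as np

    def pairwise_distances(frames):
        # All pairwise Euclidean distances between feature frames (N x D).
        diff = frames[:, None, :] - frames[None, :, :]
        return np.sqrt((diff ** 2).sum(axis=-1))

    def geometry_similarity(spectral_frames, auditory_frames):
        # Correlate the two distance matrices: how closely the geometry of
        # the power-spectrum space matches that of the auditory-model space.
        d_spec = pairwise_distances(spectral_frames)
        d_aud = pairwise_distances(auditory_frames)
        iu = np.triu_indices_from(d_spec, k=1)  # upper triangle, no diagonal
        return np.corrcoef(d_spec[iu], d_aud[iu])[0, 1]

    def perceptual_differentiation_ratio(native_spec, native_aud,
                                         learner_spec, learner_aud):
        # Hypothetical ratio for one phoneme class: values well below 1
        # suggest the learner's spectral geometry deviates from the native
        # perceptual geometry, flagging a problematic phoneme.
        return (geometry_similarity(learner_spec, learner_aud)
                / geometry_similarity(native_spec, native_aud))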

Place, publisher, year, edition, pages
2011. p. 5704-5707
Series
International Conference on Acoustics, Speech and Signal Processing (ICASSP), ISSN 1520-6149
Keywords [en]
second language learning, auditory model, distortion measure, perceptual differentiation ratio, phoneme
National Category
Other Computer and Information Science; Computer Sciences; Signal Processing
Identifiers
URN: urn:nbn:se:kth:diva-39053
DOI: 10.1109/ICASSP.2011.5947655
ISI: 000296062406103
Scopus ID: 2-s2.0-80051656916
ISBN: 978-1-4577-0537-3 (print)
OAI: oai:DiVA.org:kth-39053
DiVA, id: diva2:439332
Conference
36th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2011; Prague; 22-27 May 2011
Note
QC 20111117. Available from: 2011-09-07. Created: 2011-09-07. Last updated: 2024-03-18. Bibliographically approved.
In thesis
1. Perceptually motivated speech recognition and mispronunciation detection
2012 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

This doctoral thesis is the result of research performed in two fields of speech technology: speech recognition and mispronunciation detection. Although the two areas are clearly distinguishable, the proposed approaches share a common hypothesis based on psychoacoustic processing of speech signals. The conjecture is that the human auditory periphery provides a relatively good separation of different sound classes. Hence, it is possible to combine recent findings from psychoacoustic perception with mathematical and computational tools to model the auditory sensitivity to small changes in the speech signal.

The performance of an automatic speech recognition system depends strongly on the representation used in the front-end. If the extracted features do not include all relevant information, the performance of the classification stage is inherently suboptimal. The work described in Papers A, B and C is motivated by the fact that humans perform better at speech recognition than machines, particularly in noisy environments. The goal is to use knowledge of human perception in the selection and optimization of speech features for speech recognition. These papers show that maximizing the similarity of the Euclidean geometry of the features to the geometry of the perceptual domain is a powerful tool for selecting or optimizing features. Experiments with a practical speech recognizer confirm the validity of the principle. An approach to improving mel-frequency cepstrum coefficients (MFCCs) through offline optimization is also shown. The method has three advantages: i) it is computationally inexpensive, ii) it does not use the auditory model directly, thus avoiding its computational cost, and iii) most importantly, it provides better recognition performance than traditional MFCCs in both clean and noisy conditions.
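As a rough illustration of such an offline optimization, the sketch below searches for a diagonal reweighting of the MFCC dimensions whose distance geometry best matches that of an auditory-model representation. The diagonal parameterization, the random search, and the correlation objective are all assumptions made for the sketch; the thesis's actual optimization procedure is not described in this record.

    import numpy as np

    def upper_distances(frames):
        # Pairwise Euclidean distances between frames, upper triangle only.
        d = np.sqrt(((frames[:, None, :] - frames[None, :, :]) ** 2).sum(-1))
        return d[np.triu_indices_from(d, k=1)]

    def learn_mfcc_weights(mfcc, auditory, n_trials=500, seed=0):
        # Offline: find per-dimension weights for the MFCCs that maximize
        # the correlation between MFCC-space and auditory-space geometry.
        # The costly auditory model runs only here; at recognition time the
        # learned weights are applied as a cheap elementwise product.
        rng = np.random.default_rng(seed)
        d_aud = upper_distances(auditory)
        best_w, best_r = np.ones(mfcc.shape[1]), -np.inf
        for _ in range(n_trials):
            w = rng.uniform(0.1, 2.0, size=mfcc.shape[1])
            r = np.corrcoef(upper_distances(mfcc * w), d_aud)[0, 1]
            if r > best_r:
                best_w, best_r = w, r
        return best_w, best_r

Once learned, the weights stand in for the auditory model at runtime, which is what would make such a method computationally inexpensive.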

The second task concerns automatic pronunciation error detection. The research, described in Papers D, E and F, is motivated by the observation that almost all native speakers perceive, relatively easily, the acoustic characteristics of their own language when it is produced by speakers of that language. Small variations within a phoneme category, which sometimes differ between phonemes, do not significantly change the perception of the language's own sounds. Several methods are introduced, based on similarity measures between the Euclidean space spanned by the acoustic representations of the speech signal and the Euclidean space spanned by an auditory model output, to identify the problematic phonemes for a given speaker. The methods are tested on groups of speakers with different language backgrounds and evaluated against a theoretical linguistic study, showing that they can capture many of the phonemes that speakers of each language mispronounce. Finally, a listening test on the same dataset verifies the validity of these methods.
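Reusing the helpers from the first sketch, a hypothetical end-to-end use on synthetic data would rank a learner's phonemes by their differentiation ratio. Everything here (the phoneme set, the dimensions, and a random projection standing in for an auditory model) is made up for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    phonemes = ["i:", "e", "u"]

    # Synthetic stand-ins: 40 frames of 32-dim spectra per phoneme, mapped
    # to a 24-dim "auditory" space by a fixed random projection.
    project = rng.standard_normal((32, 24))
    ratios = {}
    for ph in phonemes:
        native = rng.standard_normal((40, 32))
        learner = rng.standard_normal((40, 32))
        ratios[ph] = perceptual_differentiation_ratio(
            native, native @ project, learner, learner @ project)

    # Lowest ratios flag the phonemes the learner most likely mispronounces.
    print(sorted(ratios, key=ratios.get))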

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2012. p. xxi, 79
Series
Trita-CSC-A, ISSN 1653-5723; 2012:10
Keywords
feature extraction, feature selection, auditory models, MFCCs, speech recognition, distortion measures, perturbation analysis, psychoacoustics, human perception, sensitivity matrix, pronunciation error detection, phoneme, second language, perceptual assessment
National Category
Computer Sciences; Signal Processing; Media and Communication Technology; Other Computer and Information Science
Identifiers
URN: urn:nbn:se:kth:diva-102321
ISBN: 978-91-7501-468-5
Public defence
2012-10-05, A2, Östermalmsgatan 26, KTH, Stockholm, 10:00 (English)
Projects
European Union FP6-034362 research project ACORNS; Computer-Animated language Teachers (CALATea)
Note

QC 20120914. Available from: 2012-09-14. Created: 2012-09-13. Last updated: 2022-06-24. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text; Scopus

Authority records

Engwall, Olov
