Units for Dynamic Vocal Tract Length Normalization
Elenius, Daniel
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH. (TAL)
Blomberg, Mats
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH. (TAL)
(English) Manuscript (preprint) (Other academic)
Abstract [en]

A novel method to account for dynamic speaker characteristic properties in a speech recognition system is presented. The estimated trajectory of a property can be constrained to be constant or to have a limited rate of change within a phone or a sub-phone state, or be allowed to change between individual speech frames. The constraints are implemented by extending each state in the HMM by a number of property-specific sub-states transformed from the original model. The connections in the transition matrix of the extended model define the possible slopes of the trajectory. Constraints on its dynamic range during an utterance are implemented by decomposing the trajectory into a static and a dynamic component. Results are presented on vocal tract length normalization in connected-digit recognition of children's speech using models trained on male adult speech. The word error rate was reduced by 10% relative compared with the conventional utterance-specific warping factor.
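
The sub-state construction described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: it assumes a discrete grid of warping factors and a row-stochastic HMM transition matrix, and the function and parameter names (extend_transition_matrix, max_step) are invented for the example. Each original state is replicated into one sub-state per warping factor, and the largest allowed change in warp index per transition bounds the slope of the warp trajectory, with max_step=0 keeping the warp constant.

```python
import numpy as np

def extend_transition_matrix(A, num_warps, max_step=1):
    """Replicate each HMM state into `num_warps` warp-specific sub-states.

    A         : (S, S) transition matrix of the original model
    num_warps : number of discrete warping factors on the grid
    max_step  : largest allowed change of the warp index per transition
                (0 forces a constant warp; larger values allow steeper
                 warp trajectories)

    Returns an (S*num_warps, S*num_warps) matrix in which a move from
    sub-state (s, k) to (s', k') is allowed only when the original model
    allows s -> s' and |k - k'| <= max_step.
    """
    S = A.shape[0]
    K = num_warps
    A_ext = np.zeros((S * K, S * K))
    for s in range(S):
        for s_next in range(S):
            if A[s, s_next] == 0.0:
                continue
            for k in range(K):
                lo, hi = max(0, k - max_step), min(K, k + max_step + 1)
                # spread the original probability uniformly over the reachable
                # warp indices, so each row stays a probability distribution
                A_ext[s * K + k, s_next * K + lo:s_next * K + hi] = (
                    A[s, s_next] / (hi - lo)
                )
    return A_ext
```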

Keyword [en]
speech recognition, VTLN, dynamic modelling
National Category
Language Technology (Computational Linguistics)
Identifiers
URN: urn:nbn:se:kth:diva-12242
OAI: oai:DiVA.org:kth-12242
DiVA: diva2:306690
Projects
KOBRA
Note
QC 20110502
Available from: 2010-04-08 Created: 2010-03-30 Last updated: 2011-05-02
Bibliographically approved
In thesis
1. Accounting for Individual Speaker Properties in Automatic Speech Recognition
2010 (English) Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

In this work, speaker characteristic modeling has been applied in the fields of automatic speech recognition (ASR) and automatic speaker verification (ASV). In ASR, a key problem is that acoustic mismatch between training and test conditions degrades classification performance. In this work, a child exemplifies a speaker not represented in the training data, and methods to reduce the spectral mismatch are devised and evaluated. To reduce the acoustic mismatch, predictive modeling based on spectral speech transformation is applied. Following this approach, a model suitable for a target speaker not well represented in the training data is estimated and synthesized by applying vocal tract predictive modeling (VTPM). In this thesis, the traditional static modeling on the utterance level is extended to dynamic modeling. This is accomplished by operating also on sub-utterance units, such as phonemes, phone realizations, sub-phone realizations and sound frames.
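
The model-domain transformation can be illustrated with a short, hedged sketch: it shows spectral warping of a Gaussian mean in general, not the thesis's VTPM procedure, and it assumes for simplicity that the mean lives in the log filterbank domain. The helper name warp_log_spectral_mean and the convention for alpha are assumptions; the warping factor itself would be supplied by the predictive model.

```python
import numpy as np

def warp_log_spectral_mean(mean_logspec, alpha):
    """Resample a log filterbank mean vector under a linear frequency warp.

    New bin i takes the value of old bin i*alpha, so alpha < 1 moves spectral
    content toward higher frequencies, roughly what is needed when predicting
    a shorter vocal tract (e.g. a child) from an adult-trained model.
    """
    n = len(mean_logspec)
    source = np.clip(np.arange(n) * alpha, 0, n - 1)  # where each new bin reads from
    return np.interp(source, np.arange(n), np.asarray(mean_logspec, dtype=float))
```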

Initial experiments show that adaptation of an acoustic model trained on adult speech significantly reduced the word error rate of ASR for children, but not to the level of a model trained on children’s speech. Multi-speaker-group training provided an acoustic model that performed recognition for both adults and children within the same model at almost the same accuracy as speaker-group-dedicated models, with no added model complexity. In the analysis of the causes of errors, the body height of the child was shown to be correlated with the word error rate.

A further result is that the computationally demanding iterative recognition process in standard VTLN can be replaced by synthetically extending the vocal tract length distribution in the training data. A multi-warp model is trained on the extended data and recognition is performed in a single pass. The accuracy is similar to that of the standard technique.
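
A sketch of this data-domain alternative is given below. The piecewise-linear warp is the common VTLN warping function, but the cut-off fraction, the warp-factor grid and the helper names are assumptions made for illustration; the thesis's actual feature pipeline is not reproduced here.

```python
import numpy as np

def piecewise_linear_warp(spectrum, alpha, f_cut=0.85):
    """Apply a piecewise-linear VTLN warp to one magnitude spectrum.

    Bins below the fraction `f_cut` of the Nyquist bin are scaled by `alpha`;
    the remainder is warped linearly so that the last bin maps to itself.
    """
    n = len(spectrum)
    bins = np.arange(n, dtype=float)
    cut = f_cut * (n - 1)
    warped = np.where(
        bins <= cut,
        alpha * bins,
        alpha * cut + (bins - cut) * (n - 1 - alpha * cut) / (n - 1 - cut),
    )
    return np.interp(np.clip(warped, 0, n - 1), bins, spectrum)

def extend_training_set(spectra, warp_factors=(0.88, 0.94, 1.0, 1.06, 1.12)):
    """Pool warped copies of the training spectra so that a single multi-warp
    model sees an artificially broadened vocal tract length distribution.
    (In practice the transcription labels are carried along with each copy.)"""
    return [piecewise_linear_warp(s, a) for s in spectra for a in warp_factors]
```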

A concluding experiment in ASR shows that the word error rate can be reduced by extending a static vocal tract length compensation parameter into a temporal parameter track. A key component in reaching this improvement was a novel joint two-level optimization process. In the process, the track was determined as a composition of a static and a dynamic component, which were simultaneously optimized on the utterance and sub-utterance levels, respectively. This had the principal advantage of limiting the modulation amplitude of the track to what is realistic for an individual speaker. The recognition error rate was reduced by 10% relative compared with that of a standard utterance-specific estimation technique.
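
A brute-force sketch of the two-level idea follows; it is purely illustrative. A hypothetical score(unit, warp) log-likelihood stands in for the recogniser, the search is a plain grid search rather than the joint optimization embedded in decoding that the thesis describes, and the span of the per-unit deviation grid plays the role of the bounded modulation amplitude.

```python
import numpy as np

def optimize_warp_track(units, score, static_grid, delta_grid):
    """Grid-search sketch of two-level (static + dynamic) warp estimation.

    units       : sub-utterance units (e.g. phone segments) of one utterance
    score(u, w) : hypothetical log-likelihood of unit u decoded with warp w
    static_grid : candidate utterance-level (static) warp factors
    delta_grid  : candidate per-unit deviations; its span bounds the
                  modulation amplitude of the dynamic component
    """
    best_total, best_track = -np.inf, None
    for alpha0 in static_grid:
        # dynamic component: per-unit deviation chosen given the static warp
        track = [alpha0 + max(delta_grid, key=lambda d: score(u, alpha0 + d))
                 for u in units]
        total = sum(score(u, w) for u, w in zip(units, track))
        if total > best_total:
            best_total, best_track = total, track
    return best_track
```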

The techniques devised and evaluated can also be applied to other speaker characteristic properties that exhibit a dynamic nature.

An excursion into ASV led to the proposal of a statistical speaker population model. The model represents an alternative approach for determining the reject/accept threshold in an ASV system, instead of the commonly used direct estimation on a set of client and impostor utterances. This is especially valuable in applications where a low false reject or false accept rate is required. In these cases, the number of errors is often too small to estimate a reliable threshold using the direct method. The results are encouraging but need to be verified on a larger database.
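
The general idea of replacing direct error counting with a fitted score model can be sketched as below. This is not the specific speaker population model proposed in the thesis: the Gaussian assumption on impostor scores and the function name threshold_for_target_far are illustrative choices only.

```python
import numpy as np
from scipy.stats import norm

def threshold_for_target_far(impostor_scores, target_far=1e-4):
    """Place the accept/reject threshold using a fitted impostor score model.

    Rather than counting the very few false accepts directly, fit a Gaussian
    to the impostor scores and put the threshold at the quantile where the
    modelled false-accept rate equals `target_far` (scores above the
    threshold are accepted).
    """
    mu = float(np.mean(impostor_scores))
    sigma = float(np.std(impostor_scores))
    return norm.ppf(1.0 - target_far, loc=mu, scale=sigma)
```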

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2010. xiv, 43 p.
Series
Trita-CSC-A, ISSN 1653-5723 ; 2010:05
Keyword
MAP, MLLR, VTLN, speaker characteristics, dynamic modeling, child
National Category
Language Technology (Computational Linguistics)
Identifiers
URN: urn:nbn:se:kth:diva-12258
ISBN: 978-91-7415-605-8
Presentation
2010-04-23, Fantum, KTH, Lindstedtsvägen 24, SE-100 44 STOCKHOLM, SWEDEN, 15:15 (English)
Projects
Pf-Star, KOBRA
Note
QC 20110502
Available from: 2010-04-08 Created: 2010-03-30 Last updated: 2011-05-02
Bibliographically approved
