2009 (English). In: Speech Communication, ISSN 0167-6393, Vol. 51, no. 3, pp. 195-209. Article in journal (Refereed). Published.
It has been shown that acoustic-to-articulatory inversion, i.e. estimation of the articulatory configuration from the corresponding acoustic signal, can be greatly improved by adding visual features extracted from the speaker's face. For the inversion method to be usable in a realistic application, it should be possible to obtain these features from monocular frontal-view video, without requiring the speaker to wear special markers. In this study, we investigate the importance of visual cues for inversion. Experiments with motion capture data of the face show that important articulatory information can be extracted using only a few face measures that mimic the information obtainable with a video-based method. We also show that the depth cue for these measures is not critical, which means that the relevant information can be extracted from frontal video. We further present a video-based face feature extraction method that yields similar improvements in inversion quality. Rather than tracking points on the face, it represents the appearance of the mouth area using independent component images. These findings are important for applications that need a simple audiovisual-to-articulatory inversion technique, e.g. articulatory phonetics training for second language learners or hearing-impaired persons.
Keywords: Speech inversion, Articulatory inversion, Computer vision, Speech recognition
Identifiers
URN: urn:nbn:se:kth:diva-18162
DOI: 10.1016/j.specom.2008.07.005
ISI: 000263203900001
ScopusID: 2-s2.0-58149191558
OAI: oai:DiVA.org:kth-18162
DiVA: diva2:336208
QC 20100525. Available from: 2010-08-05. Created: 2010-08-05. Last updated: 2011-01-11. Bibliographically approved.