Reconstructing Tongue Movements from Audio and Video
2006 (English). In: INTERSPEECH 2006 and 9th International Conference on Spoken Language Processing, Vols. 1-5. Baixas: ISCA (International Speech Communication Association), 2006, pp. 2238-2241. Conference paper (Refereed)
This paper presents an approach to articulatory inversion using audio and video of the user's face, requiring no special markers. The video is stabilized with respect to the face, and the mouth region is cropped out. The mouth image is projected into a learned independent component subspace to obtain a low-dimensional representation of the mouth appearance. The inversion problem is treated as one of regression: a non-linear regressor using relevance vector machines is trained on a dataset of simultaneous images of a subject's face, acoustic features, and positions of magnetic coils glued to the subject's tongue. The results show the benefit of using both cues for inversion. We envisage the inversion method as part of a pronunciation training system with articulatory feedback.
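The pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustration with synthetic data, not the paper's implementation: the image sizes, feature dimensions, and acoustic features (MFCCs) are assumptions, and since scikit-learn provides no relevance vector machine, kernel ridge regression stands in as a comparable kernel-based regressor.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 cropped mouth images (16x16 grayscale, flattened)
# and the (x, y) position of one magnetic coil on the tongue per frame.
images = rng.normal(size=(200, 256))
coil_xy = rng.normal(size=(200, 2))

# Step 1: learn an independent component subspace of mouth appearance,
# giving a low-dimensional representation of each mouth image.
ica = FastICA(n_components=8, random_state=0, max_iter=1000)
appearance = ica.fit_transform(images)

# Step 2 (assumed): concatenate per-frame acoustic features, e.g. MFCCs.
mfcc = rng.normal(size=(200, 13))
features = np.hstack([appearance, mfcc])

# Step 3: non-linear regression from audio-visual features to coil
# positions. The paper uses relevance vector machines; kernel ridge
# regression is used here only as a stand-in kernel method.
model = KernelRidge(kernel="rbf", alpha=1.0)
model.fit(features, coil_xy)
pred = model.predict(features)
print(pred.shape)  # (200, 2)
```

In use, the regressor would be trained on frames with known coil positions (from electromagnetic articulography) and then applied to new audio-video input to recover tongue movements.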
Place, publisher, year, edition, pages
Baixas: ISCA (International Speech Communication Association), 2006, pp. 2238-2241.
Keywords: audio-visual to articulatory inversion
National Category: Computer and Information Science
Identifiers
URN: urn:nbn:se:kth:diva-38182
ISI: 000269965901297
Scopus ID: 2-s2.0-34548378893
ISBN: 978-1-60423-449-7
OAI: oai:DiVA.org:kth-38182
DiVA: diva2:436171
Conference: 9th International Conference on Spoken Language Processing / INTERSPEECH 2006, Pittsburgh
QC 20110822. Available from: 2011-08-22. Created: 2011-08-22. Last updated: 2011-12-01. Bibliographically approved.