Synthetic visual speech driven from auditory speech
1999 (English). In: Proceedings of Audio-Visual Speech Processing (AVSP'99), 1999. Conference paper, Published paper (Refereed)
Abstract [en]
We have developed two different methods for using auditory, telephone speech to drive the movements of a synthetic face. In the first method, Hidden Markov Models (HMMs) were trained on a phonetically transcribed telephone speech database. The output of the HMMs was then fed into a rule-based visual speech synthesizer as a string of phonemes together with time labels. In the second method, Artificial Neural Networks (ANNs) were trained on the same database to map acoustic parameters directly to facial control parameters. These target parameter trajectories were generated by using phoneme strings from a database as input to the visual speech synthesizer. The two methods were evaluated through audiovisual intelligibility tests with ten hearing-impaired persons, and compared to “ideal” articulations (where no recognition was involved), to a natural face, and to the intelligibility of the audio alone. It was found that the HMM method performs considerably better than the audio-alone condition (54% and 34% keywords correct, respectively), but not as well as the “ideal” articulating artificial face (64%). The intelligibility for the ANN method was 34% keywords correct.
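To make the second approach more concrete, the sketch below illustrates the general idea of a frame-wise mapping from acoustic features to facial control parameters. It is a minimal illustration only: the network size, feature dimensions, and parameter count are assumptions, and the weights are random rather than trained, so it does not reproduce the authors' actual implementation.

```python
# Illustrative sketch (not the authors' code): a frame-wise neural mapping
# from acoustic feature vectors to facial control parameters, in the spirit
# of the ANN method described in the abstract. All dimensions are assumed.
import numpy as np

N_ACOUSTIC = 16   # acoustic parameters per frame (assumed)
N_FACIAL = 10     # facial control parameters, e.g. jaw opening, lip rounding (assumed)
N_HIDDEN = 32     # hidden layer size (assumed)

rng = np.random.default_rng(0)

# Randomly initialised weights stand in for a trained network.
W1 = rng.standard_normal((N_ACOUSTIC, N_HIDDEN)) * 0.1
b1 = np.zeros(N_HIDDEN)
W2 = rng.standard_normal((N_HIDDEN, N_FACIAL)) * 0.1
b2 = np.zeros(N_FACIAL)

def acoustic_to_facial(frames: np.ndarray) -> np.ndarray:
    """Map a (T, N_ACOUSTIC) sequence of acoustic frames to a
    (T, N_FACIAL) sequence of facial control parameter values."""
    hidden = np.tanh(frames @ W1 + b1)   # non-linear hidden layer
    return np.tanh(hidden @ W2 + b2)     # parameter trajectories in [-1, 1]

# Example: 100 frames of (simulated) telephone-speech features.
trajectories = acoustic_to_facial(rng.standard_normal((100, N_ACOUSTIC)))
print(trajectories.shape)  # (100, N_FACIAL)
```

In such a setup the predicted trajectories would drive the synthetic face directly, frame by frame, without an intermediate phoneme recognition step as in the HMM method.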
Place, publisher, year, edition, pages
1999.
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:kth:diva-53577
OAI: oai:DiVA.org:kth-53577
DiVA, id: diva2:470371
Conference
Auditory-Visual Speech Processing (AVSP'99), Santa Cruz, CA, USA, August 7-10, 1999
Note
QC 20120103
Available from: 2011-12-28 Created: 2011-12-28 Last updated: 2022-06-24 Bibliographically approved