Articulatory synthesis using corpus-based estimation of line spectrum pairs
2005 (English). In: 9th European Conference on Speech Communication and Technology, 2005, pp. 1909–1912. Conference paper (Refereed)
A new articulatory synthesis method is presented, in which the speech signal is generated through a statistical estimation of its relation to articulatory parameters. A corpus containing acoustic material and simultaneous recordings of tongue and facial movements was used to train and test the articulatory synthesis of VCV words and short sentences. Tongue and facial motion data, captured with electromagnetic articulography and three-dimensional optical motion tracking, respectively, define the articulatory parameters of a talking head. These articulatory parameters are then used as estimators of the speech signal, represented by line spectrum pairs. The statistical link between the articulatory parameters and the speech signal was established using either linear estimation or artificial neural networks. The results show that linear estimation was sufficient only to synthesize identifiable vowels, not consonants, whereas the neural networks gave a perceptually better synthesis.
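The linear-estimation variant described in the abstract can be sketched as an ordinary least-squares regression from articulatory parameters to line spectrum pair (LSP) coefficients per analysis frame. The sketch below uses synthetic data and assumed dimensions (12 articulatory parameters, 10 LSP coefficients, i.e. an assumed LPC order of 10); none of these numbers, nor the variable names, come from the paper itself.

```python
import numpy as np

# Hedged sketch: linear estimation of line spectrum pairs (LSPs) from
# articulatory parameters. The corpus here is synthetic; in the study the
# articulatory data came from electromagnetic articulography and 3D
# optical motion tracking, and the LSPs from the recorded audio.

rng = np.random.default_rng(0)

n_frames = 500   # analysis frames in the (synthetic) training corpus
n_artic = 12     # articulatory parameters per frame (assumed dimension)
n_lsp = 10       # LSP coefficients per frame (assumed LPC order 10)

# Synthetic "corpus": articulatory trajectories plus a hidden linear map.
A = rng.normal(size=(n_frames, n_artic))          # articulatory parameters
W_true = rng.normal(size=(n_artic + 1, n_lsp))    # unknown mapping (+ bias)
A1 = np.hstack([A, np.ones((n_frames, 1))])       # append bias column
Y = A1 @ W_true + 0.01 * rng.normal(size=(n_frames, n_lsp))  # noisy LSPs

# Linear estimation: least-squares fit of LSPs from articulatory data.
W_hat, *_ = np.linalg.lstsq(A1, Y, rcond=None)

# Predict LSPs for the same articulatory frames and measure the fit.
Y_pred = A1 @ W_hat
rmse = np.sqrt(np.mean((Y_pred - Y) ** 2))
print(f"training RMSE: {rmse:.4f}")
```

In the paper this linear mapping was only sufficient for identifiable vowels; a nonlinear regressor (the artificial neural networks mentioned above) would replace the `lstsq` step to capture the consonant dynamics.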
Place, publisher, year, edition, pages
2005, pp. 1909–1912
Computer Science; Language Technology (Computational Linguistics)
Identifiers: URN: urn:nbn:se:kth:diva-51881; ScopusID: 2-s2.0-33745213765; OAI: oai:DiVA.org:kth-51881; DiVA: diva2:465175
9th European Conference on Speech Communication and Technology, Lisbon, 4–8 September 2005