Articulatory synthesis using corpus-based estimation of line spectrum pairs
2005 (English). In: 9th European Conference on Speech Communication and Technology, 2005, p. 1909-1912. Conference paper, Published paper (Refereed)
Abstract [en]
An attempt to define a new articulatory synthesis method, in which the speech signal is generated through a statistical estimation of its relation to articulatory parameters, is presented. A corpus containing acoustic material and simultaneous recordings of tongue and facial movements was used to train and test the articulatory synthesis of VCV words and short sentences. Tongue and facial motion data, captured with electromagnetic articulography and three-dimensional optical motion tracking, respectively, define the articulatory parameters of a talking head. These articulatory parameters are then used as estimators of the speech signal, represented by line spectrum pairs. The statistical link between the articulatory parameters and the speech signal was established using either linear estimation or artificial neural networks. The results show that linear estimation was sufficient to synthesize identifiable vowels, but not consonants, whereas the neural networks gave a perceptually better synthesis.
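The linear-estimation variant described in the abstract can be sketched as an ordinary least-squares mapping from per-frame articulatory parameters to line spectrum pair (LSP) vectors. This is a minimal illustration on synthetic data; the dimensions, the bias term, and the data itself are assumptions, not the paper's actual configuration.

```python
import numpy as np

# Sketch of frame-wise linear estimation: predict each frame's LSP vector
# from simultaneous articulatory parameters (e.g. EMA tongue coils plus
# optical face markers). Sizes below are illustrative only.
rng = np.random.default_rng(0)
n_frames, n_artic, n_lsp = 500, 12, 10

# Synthetic training data generated from a known linear relation plus noise,
# standing in for the corpus's articulatory trajectories and LSP targets.
X = rng.standard_normal((n_frames, n_artic))
W_true = rng.standard_normal((n_artic, n_lsp))
Y = X @ W_true + 0.01 * rng.standard_normal((n_frames, n_lsp))

# Augment with a bias column and solve the least-squares problem.
Xb = np.hstack([X, np.ones((n_frames, 1))])
W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)

# Predict LSP vectors for unseen articulatory frames.
X_new = rng.standard_normal((5, n_artic))
lsp_pred = np.hstack([X_new, np.ones((5, 1))]) @ W
print(lsp_pred.shape)  # one LSP vector per frame
```

The paper's finding, that such a linear map captures vowels but not consonants, motivates replacing the single matrix `W` with a nonlinear regressor such as an artificial neural network trained on the same frame pairs.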
Place, publisher, year, edition, pages
2005. p. 1909-1912
National Category
Computer Sciences; Language Technology (Computational Linguistics)
Identifiers
URN: urn:nbn:se:kth:diva-51881
Scopus ID: 2-s2.0-33745213765
OAI: oai:DiVA.org:kth-51881
DiVA, id: diva2:465175
Conference
9th European Conference on Speech Communication and Technology, Lisbon, 4-8 September 2005
Note
QC 20120111.
Available from: 2011-12-14. Created: 2011-12-14. Last updated: 2022-06-24. Bibliographically approved.