Using a Biomechanical Model and Articulatory Data for the Numerical Production of Vowels
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH. ORCID iD: 0000-0002-8991-1016
GTM Grup de recerca en Tecnologies Mèdia, La Salle, Universitat Ramon Llull, Barcelona, Spain.
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH. ORCID iD: 0000-0003-4532-014X
GTM Grup de recerca en Tecnologies Mèdia, La Salle, Universitat Ramon Llull, Barcelona, Spain.
2016 (English). In: Interspeech 2016, 2016, p. 3569-3573. Conference paper, Published paper (Refereed)
Abstract [en]

We introduce a framework to study speech production using a biomechanical model of the human vocal tract, ArtiSynth. Electromagnetic articulography data was used as input to an inverse tracking simulation that estimates muscle activations to generate 3D jaw and tongue postures corresponding to the target articulator positions. For acoustic simulations, the vocal tract geometry is needed, but since the vocal tract is a cavity rather than a physical object, its geometry does not explicitly exist in a biomechanical model. A fully automatic method to extract the 3D geometry (surface mesh) of the vocal tract by blending geometries of the relevant articulators has therefore been developed. This automatic extraction procedure is essential, since a method with manual intervention is not feasible for large numbers of simulations or for the generation of dynamic sounds, such as diphthongs. We then simulated the vocal tract acoustics using the Finite Element Method (FEM). This requires a high-quality vocal tract mesh without irregular geometry or self-intersections. We demonstrate that the framework is applicable to acoustic FEM simulations of a wide range of vocal tract deformations. In particular, we present results for cardinal vowel production, with muscle activations, vocal tract geometry, and acoustic simulations.
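The abstract does not state the governing equations. As a hedged illustration of what an FEM acoustic simulation of the vocal tract typically solves (an assumption for this record, not a formulation quoted from the paper), the acoustic pressure p in the vocal tract cavity \Omega can be modelled by the lossless wave equation

\[
\frac{\partial^2 p}{\partial t^2} - c^2 \nabla^2 p = 0 \quad \text{in } \Omega,
\]

with, for example, a particle-velocity source u_g at the glottal inlet and a radiation impedance Z_r at the lip opening,

\[
\frac{\partial p}{\partial n} = -\rho\,\frac{\partial u_g}{\partial t}
\quad \text{(glottal inlet)}, \qquad
\frac{\partial p}{\partial n} = -\frac{\rho}{Z_r}\,\frac{\partial p}{\partial t}
\quad \text{(lip opening)},
\]

where c is the speed of sound, \rho the air density and n the outward normal; the boundary conditions actually used in the paper may differ.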

Place, publisher, year, edition, pages
2016. p. 3569-3573
Keywords [en]
speech production, biomechanical articulatory model, vocal tract geometry, vocal tract acoustics, Finite Element Method
National Category
Computer Sciences; Fluid Mechanics and Acoustics
Identifiers
URN: urn:nbn:se:kth:diva-192602
DOI: 10.21437/Interspeech.2016-1500
ISI: 000409394402095
Scopus ID: 2-s2.0-84994364959
OAI: oai:DiVA.org:kth-192602
DiVA id: diva2:971288
Conference
Interspeech, 8-12 Sep 2016, San Francisco
Projects
EUNISON
Note

QC 20160920

Available from: 2016-09-15. Created: 2016-09-15. Last updated: 2018-11-16. Bibliographically approved.
In thesis
1. Computational Modeling of the Vocal Tract: Applications to Speech Production
2018 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Human speech production is a complex process, involving neuromuscular control signals, the effects of the articulators' biomechanical properties, and acoustic wave propagation in a vocal tract tube of intricate shape. Modeling these phenomena may play an important role in advancing our understanding of the mechanisms involved, and may also have future medical applications, e.g., guiding doctors in diagnosis, treatment planning and surgery prediction for related disorders such as oral cancer, cleft palate, obstructive sleep apnea and dysphagia.

A more complete understanding requires models that represent the phenomena as faithfully as possible. Due to the complexity of such modeling, simplifications have nevertheless been used extensively in speech production research: phonetic descriptors (such as the position and degree of the most constricted part of the vocal tract) are used as control signals, the articulators are represented by two-dimensional geometrical models, the vocal tract is treated as a smooth tube in which plane wave propagation is assumed, and so on.
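As a concrete example of the plane-wave simplification mentioned above (added here for illustration; the equation is not part of the thesis text), reducing the vocal tract to its cross-sectional area function A(x) along the tube axis x leads to the one-dimensional Webster horn equation for the acoustic pressure p(x, t):

\[
\frac{1}{A(x)} \frac{\partial}{\partial x}\!\left( A(x)\, \frac{\partial p}{\partial x} \right)
= \frac{1}{c^2}\, \frac{\partial^2 p}{\partial t^2},
\]

which captures only plane-wave propagation and therefore ignores the higher-order modes and geometric asymmetries that a full three-dimensional simulation can represent.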

This thesis aims, firstly, to investigate the consequences of such simplifications and, secondly, to contribute to a unified modeling of the speech production process by connecting three-dimensional biomechanical modeling of the upper airway with three-dimensional acoustic simulations. The investigation of the simplifying assumptions demonstrated the influence of vocal tract geometry features, such as shape representation, bending and lip shape, on its acoustic characteristics, and showed that the type of modeling, geometrical or biomechanical, affects the spatial trajectories of the articulators as well as the transition of formant frequencies in the spectrogram.

The unification of biomechanical and acoustic modeling in three dimensions makes it possible to realistically control the acoustic output of dynamic sounds, such as vowel-vowel utterances, through the contraction of the relevant muscles. This contraction moves and shapes the speech articulators, which in turn define the vocal tract tube in which the wave propagation occurs. The main contribution of the thesis in this line of work is a novel and complex method that automatically reconstructs the shape of the vocal tract from the biomechanical model. This step is essential to link biomechanical and acoustic simulations, since the vocal tract, which anatomically is a cavity enclosed by different structures, is only implicitly defined in a biomechanical model composed of several distinct articulators.
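For readers who want a feel for this reconstruction step, the following is a minimal conceptual sketch only: the thesis uses a blending-based method whose details are not reproduced here, and the mesh file names, bounding-box extents and the choice of the trimesh library are all assumptions made for illustration. It shows the general idea of recovering, through Boolean operations, an air cavity that exists only implicitly between articulator surfaces.

    # Conceptual sketch, not the thesis algorithm. File names, extents and the
    # use of the trimesh library are hypothetical.
    import trimesh

    # Hypothetical articulator surface meshes exported from the biomechanical model.
    articulators = [trimesh.load(name) for name in
                    ("jaw.stl", "tongue.stl", "palate.stl", "pharynx_wall.stl")]

    # Merge the tissue surfaces into one solid (requires a Boolean engine such as
    # Blender or OpenSCAD to be available to trimesh).
    tissue = trimesh.boolean.union(articulators)

    # A box assumed to enclose the oral and pharyngeal region (extents in metres,
    # chosen arbitrarily here); it must be aligned with the articulator meshes.
    region = trimesh.creation.box(extents=(0.08, 0.06, 0.15))

    # Whatever part of the region is not occupied by tissue approximates the air
    # cavity; an FEM acoustic solver would then mesh this volume.
    vocal_tract = trimesh.boolean.difference([region, tissue])
    vocal_tract.export("vocal_tract.stl")

In the thesis itself this step has to run fully automatically for every time frame of a dynamic utterance, and the resulting surface must be clean enough for FEM meshing.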

Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2018. p. 105
Series
TRITA-EECS-AVL ; 2018:90
Keywords
vocal tract, upper airway, speech production, biomechanical model, acoustic model, vocal tract reconstruction
National Category
Computer Sciences
Research subject
Speech and Music Communication
Identifiers
urn:nbn:se:kth:diva-239071 (URN)
978-91-7873-021-6 (ISBN)
Public defence
2018-12-07, D2, Lindstedtsvägen 5, Stockholm, 14:00 (English)
Note

QC 20181116

Available from: 2018-11-16. Created: 2018-11-16. Last updated: 2018-11-16. Bibliographically approved.

Open Access in DiVA

fulltext (1412 kB)
File information
File name: FULLTEXT01.pdf
File size: 1412 kB
Type: fulltext
Mimetype: application/pdf
Checksum (SHA-512): 264ed7ef651a6641f049f5c269da281efa5d0f293a9c90bcded964ab9946d5944f43e5874a523e7da2188c78417563fc7dd4b0fa96cfb017b2e9d96645f5f53a

Other links
Publisher's full text | Scopus | Published version
Authority records
Dabbaghchian, Saeed

Search in DiVA
By author/editor: Dabbaghchian, Saeed; Engwall, Olov
By organisation: Speech, Music and Hearing, TMH
Computer Sciences; Fluid Mechanics and Acoustics
