Audiovisual speech inversion by switching dynamical modeling governed by a hidden Markov process
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Speech Technology, CTT.
NTU, Athens, Greece.
2008 (English). In: Proceedings of EUSIPCO, 2008. Conference paper, Published paper (Refereed)
Abstract [en]

We propose a unified framework to recover articulation from audiovisual speech. The nonlinear audiovisual-to-articulatory mapping is modeled by means of a switching linear dynamical system. Switching is governed by a state sequence determined via a Hidden Markov Model alignment process. Mel Frequency Cepstral Coefficients are extracted from audio while visual analysis is performed using Active Appearance Models. The articulatory state is represented by the coordinates of points on important articulators, e.g., tongue and lips. To evaluate our inversion approach, instead of just using the conventional correlation coefficients and root mean squared errors, we introduce a novel evaluation scheme that is more specific to the inversion problem. Prediction errors in the positions of the articulators are weighted differently depending on their relative importance in the production of the corresponding sound. The applied weights are determined by an articulatory classification analysis using Support Vector Machines with a radial basis function kernel. Experiments are conducted on the audiovisual-articulatory MOCHA database.
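
Illustration of the weighted evaluation scheme described in the abstract. The Python sketch below is not the authors' code; the function names, array shapes, and the use of scikit-learn are assumptions. It derives per-articulator importance scores from an RBF-kernel SVM classification analysis and uses them to weight the root mean squared error of predicted articulator trajectories.

    import numpy as np
    from sklearn.svm import SVC  # RBF-kernel support vector classifier

    def articulator_weights(articulator_coords, phone_labels):
        # Hypothetical weighting analysis: score how well each articulator
        # coordinate alone separates the phone classes with an RBF-kernel SVM,
        # then normalize the scores so the weights sum to one.
        scores = []
        for k in range(articulator_coords.shape[1]):
            clf = SVC(kernel="rbf")
            clf.fit(articulator_coords[:, [k]], phone_labels)
            scores.append(clf.score(articulator_coords[:, [k]], phone_labels))
        scores = np.asarray(scores)
        return scores / scores.sum()

    def weighted_rmse(true_traj, pred_traj, weights):
        # Weighted root mean squared error over articulator coordinates.
        # true_traj, pred_traj: shape (frames, coords); weights: shape (coords,).
        sq_err = (true_traj - pred_traj) ** 2
        return float(np.sqrt(np.sum(weights * sq_err.mean(axis=0))))

    if __name__ == "__main__":
        # Example usage with random stand-in data (MOCHA-style EMA trajectories assumed).
        rng = np.random.default_rng(0)
        coords = rng.normal(size=(500, 8))        # 8 articulator coordinates
        labels = rng.integers(0, 5, size=500)     # 5 phone classes
        w = articulator_weights(coords, labels)
        pred = coords + rng.normal(scale=0.1, size=coords.shape)
        print("weighted RMSE:", weighted_rmse(coords, pred, w))

Weighting the error this way penalizes inaccuracies more heavily for articulators that matter most for the sound being produced, which is the motivation given in the abstract for moving beyond plain correlation coefficients and RMSE.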

Place, publisher, year, edition, pages
2008.
Series
European Signal Processing Conference, ISSN 2219-5491
Keywords [en]
Active appearance models, Audio-visual speech, Classification analysis, Correlation coefficient, Dynamical modeling, Evaluation scheme, Hidden Markov process, Inversion problems, Mel-frequency cepstral coefficients, Prediction errors, Radial basis functions, Root mean squared errors, State sequences, Switching linear dynamical systems, Unified framework, Visual analysis
National Category
Computer Sciences; Language Technology (Computational Linguistics)
Identifiers
URN: urn:nbn:se:kth:diva-52053
Scopus ID: 2-s2.0-84863731362
OAI: oai:DiVA.org:kth-52053
DiVA, id: diva2:465347
Conference
16th European Signal Processing Conference, EUSIPCO 2008, Lausanne, Switzerland, 25-29 August 2008
Note

QC 20141013

Available from: 2011-12-14. Created: 2011-12-14. Last updated: 2022-06-24. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Scopus

Search in DiVA

By author/editor
Ananthakrishnan, Gopal; Engwall, Olov
By organisation
Speech Communication and Technology; Centre for Speech Technology, CTT
Computer Sciences; Language Technology (Computational Linguistics)
