201. Öhman, Tobias; Salvi, Giampiero
    KTH, Superseded Departments, Speech, Music and Hearing.
    Using HMMs and ANNs for mapping acoustic to visual speech. 1999. In: TMH-QPSR, Vol. 40, no. 1-2, p. 45-50. Article in journal (Other academic)
    Abstract [en]

    In this paper we present two different methods for mapping auditory, telephone-quality speech to visual parameter trajectories specifying the movements of an animated synthetic face. In the first method, Hidden Markov Models (HMMs) were used to obtain phoneme strings and time labels. These were then transformed by rules into parameter trajectories for visual speech synthesis. In the second method, Artificial Neural Networks (ANNs) were trained to map acoustic parameters directly to synthesis parameters. Speaker-independent HMMs were trained on a phonetically transcribed telephone speech database. Different underlying units of speech were modelled by the HMMs, such as monophones, diphones, triphones, and visemes. The ANNs were trained on male, female, and mixed speakers.

    The HMM method and the ANN method were evaluated through audio-visual intelligibility tests with ten hearing-impaired persons, and compared to “ideal” articulations (where no recognition was involved), to a natural face, and to the intelligibility of the audio alone. It was found that the HMM method performs considerably better than the audio-alone condition (54% and 34% keywords correct, respectively), but not as well as the “ideal” articulating artificial face (64%). The intelligibility for the ANN method was 34% keywords correct.
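
    The second (ANN) method described in the abstract amounts to frame-wise regression from acoustic features to visual synthesis parameters. Below is a minimal, hypothetical sketch of that idea in Python/NumPy: the feature dimensions, network size, and training loop are illustrative assumptions, not the paper's actual configuration.

    import numpy as np

    # Illustrative assumptions: 13 acoustic features per frame (e.g. cepstral
    # coefficients) mapped to 8 visual articulation parameters (jaw opening,
    # lip rounding, ...) through one tanh hidden layer.
    N_ACOUSTIC, N_VISUAL, N_HIDDEN, LR = 13, 8, 32, 0.01

    rng = np.random.default_rng(0)
    W1 = rng.normal(0.0, 0.1, (N_HIDDEN, N_ACOUSTIC)); b1 = np.zeros(N_HIDDEN)
    W2 = rng.normal(0.0, 0.1, (N_VISUAL, N_HIDDEN));   b2 = np.zeros(N_VISUAL)

    def forward(X):
        """X: (frames, N_ACOUSTIC) acoustics -> (frames, N_VISUAL) parameters."""
        H = np.tanh(X @ W1.T + b1)        # hidden activations, one row per frame
        return H @ W2.T + b2, H           # linear outputs = parameter values

    def train_step(X, Y):
        """One gradient-descent step on mean squared error."""
        global W1, b1, W2, b2
        P, H = forward(X)
        E = (P - Y) / len(X)              # output error, scaled by batch size
        dH = (E @ W2) * (1.0 - H ** 2)    # backprop through tanh (before update)
        W2 -= LR * (E.T @ H);  b2 -= LR * E.sum(0)
        W1 -= LR * (dH.T @ X); b1 -= LR * dH.sum(0)
        return float(((P - Y) ** 2).mean())

    # Toy usage with random stand-in data; a real system would train on
    # time-aligned pairs of acoustic features and measured face parameters.
    X = rng.normal(size=(500, N_ACOUSTIC))
    Y = rng.normal(size=(500, N_VISUAL))
    for epoch in range(100):
        loss = train_step(X, Y)
    trajectories, _ = forward(X)          # per-frame visual parameter trajectories

    Applied frame by frame over an utterance, the network output forms the visual parameter trajectories that drive the synthetic face; the HMM-based method instead recognizes a phoneme string with time labels first and converts it to trajectories by rule.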
