Is there a McGurk effect for tongue reading?
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Speech Technology, CTT.
ORCID iD: 0000-0003-4532-014X
2010 (English). In: Proceedings of AVSP: International Conference on Audio-Visual Speech Processing, 2010. Conference paper, Published paper (Refereed).
Abstract [en]

Previous studies on tongue reading, i.e., speech perception of degraded audio supported by animations of tongue movements, have indicated that the support is weak initially and that subjects need training to learn to interpret the movements. This paper investigates if the learning is of the animation templates as such or if subjects learn to retrieve articulatory knowledge that they already have. Matching and conflicting animations of tongue movements were presented randomly together with the auditory speech signal at three different levels of noise in a consonant identification test. The average recognition rate over the three noise levels was significantly higher for the matched audiovisual condition than for the conflicting and the auditory-only conditions. Audiovisual integration effects were also found for conflicting stimuli. However, the visual modality is given much less weight in the perception than for a normal face view, and intersubject differences in the use of visual information are large.

Place, publisher, year, edition, pages
2010.
Keyword [en]
McGurk, audiovisual speech perception, augmented reality
National Category
Computer Science
Language Technology (Computational Linguistics)
Identifiers
URN: urn:nbn:se:kth:diva-52167
OAI: oai:DiVA.org:kth-52167
DiVA: diva2:465462
Conference
Auditory-Visual Speech Processing (AVSP) 2010. Hakone, Kanagawa, Japan. September 30-October 3, 2010
Note

QC 20120111. tmh_import_11_12_14

Available from: 2011-12-14. Created: 2011-12-14. Last updated: 2016-05-25. Bibliographically approved.

Open Access in DiVA

No full text

Other links

http://www.isca-speech.org/archive/avsp10/papers/av10_S2-2.pdf

Search in DiVA

By author/editor
Engwall, Olov
By organisation
Speech Communication and Technology; Centre for Speech Technology, CTT
Computer Science; Language Technology (Computational Linguistics)
