Visual phonemic ambiguity and speechreading
Beskow, Jonas. KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology. ORCID iD: 0000-0003-1399-6604
2006 (English). In: Journal of Speech, Language and Hearing Research, ISSN 1092-4388, E-ISSN 1558-9102, Vol. 49, no 4, p. 835-847. Article in journal (Refereed). Published.
Abstract [en]

Purpose: To study the role of visual perception of phonemes in visual perception of sentences and words among normal-hearing individuals. Method: Twenty-four normal-hearing adults identified consonants, words, and sentences spoken by either a human or a synthetic talker. The synthetic talker was programmed with identical parameters within phoneme groups, hypothetically resulting in simplified articulation. Proportions of correctly identified phonemes per participant, condition, and task, as well as sensitivity to single consonants and clusters of consonants, were measured. Groups of mutually exclusive consonants were used for sensitivity analyses and hierarchical cluster analyses. Results: Consonant identification performance did not differ as a function of talker, nor did average sensitivity to single consonants. The bilabial and labiodental clusters were most readily identified and cohesive for both talkers. Word and sentence identification was better for the human talker than for the synthetic talker. The participants were more sensitive to the clusters of the least visible consonants with the human talker than with the synthetic talker. Conclusions: It is suggested that the ability to distinguish between clusters of the least visually distinct phonemes is important in speechreading. Specifically, it reduces the number of candidates and thereby facilitates lexical identification.

Place, publisher, year, edition, pages
2006. Vol. 49, no 4, p. 835-847
Keywords [en]
speechreading, articulation, students, normal hearing, word-recognition, normal-hearing, lexical distinctiveness, speech-perception, displayed emotion, performance, language, visemes
Identifiers
URN: urn:nbn:se:kth:diva-15951
DOI: 10.1044/1092-4388(2006/059)
ISI: 000240117200010
PubMedID: 16908878
Scopus ID: 2-s2.0-34248695615
OAI: oai:DiVA.org:kth-15951
DiVA, id: diva2:333993
Note
QC 20100525. Available from: 2010-08-05. Created: 2010-08-05. Last updated: 2022-06-25. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
PubMed
Scopus

Authority records

Beskow, Jonas

