Visual phonemic ambiguity and speechreading
KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology. ORCID iD: 0000-0003-1399-6604
2006 (English). In: Journal of Speech, Language and Hearing Research, ISSN 1092-4388, E-ISSN 1558-9102, Vol. 49, no. 4, pp. 835-847. Article in journal (Refereed). Published.
Abstract [en]

Purpose: To study the role of visual perception of phonemes in visual perception of sentences and words among normal-hearing individuals. Method: Twenty-four normal-hearing adults identified consonants, words, and sentences, spoken by either a human or a synthetic talker. The synthetic talker was programmed with identical parameters within phoneme groups, hypothetically resulting in simplified articulation. Proportions of correctly identified phonemes per participant, condition, and task, as well as sensitivity to single consonants and clusters of consonants, were measured. Groups of mutually exclusive consonants were used for sensitivity analyses and hierarchical cluster analyses. Results: Consonant identification performance did not differ as a function of talker, nor did average sensitivity to single consonants. The bilabial and labiodental clusters were most readily identified and cohesive for both talkers. Word and sentence identification was better for the human talker than for the synthetic talker. The participants were more sensitive to the clusters of the least visible consonants with the human talker than with the synthetic talker. Conclusions: It is suggested that the ability to distinguish between clusters of the least visually distinct phonemes is important in speechreading. Specifically, it reduces the number of lexical candidates and thereby facilitates word identification.

Place, publisher, year, edition, pages
2006. Vol. 49, no. 4, pp. 835-847.
Keyword [en]
speechreading, articulation, students, normal hearing, word-recognition, normal-hearing, lexical distinctiveness, speech-perception, displayed emotion, performance, language, visemes
URN: urn:nbn:se:kth:diva-15951
DOI: 10.1044/1092-4388(2006/059)
ISI: 000240117200010
Scopus ID: 2-s2.0-34248695615
OAI: diva2:333993
QC 20100525. Available from: 2010-08-05. Created: 2010-08-05. Bibliographically approved.


Author: Beskow, Jonas