Spoken and non-verbal interaction experiments with a social robot
Beskow, Jonas
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH
ORCID iD: 0000-0003-1399-6604
2016 (English). In: The Journal of the Acoustical Society of America, Acoustical Society of America, 2016, Vol. 140, no. 3005. Conference paper, oral presentation with published abstract (Refereed)
Abstract [en]

During recent years, we have witnessed the start of a revolution in personal robotics. Once associated with highly specialized manufacturing tasks, robots are rapidly becoming part of our everyday lives. The potential of these systems is far-reaching: from co-worker robots that operate and collaborate with humans side by side to robotic tutors in schools that interact with humans in a shared environment. All of these scenarios require systems that are able to act and react in a social way. Evidence suggests that robots should leverage channels of communication that humans understand, despite differences in physical form and capabilities. We have developed Furhat, a social robot that conveys several important aspects of human face-to-face interaction, such as visual speech, facial expression, and eye gaze, by means of facial animation retro-projected onto a physical mask. In this presentation, we cover a series of experiments that attempt to quantify the effect of our social robot and compare it with other interaction modalities. We show that performance on a number of tasks, ranging from low-level audio-visual speech perception to vocabulary learning, improves compared to unimodal (e.g., audio-only) settings or 2D virtual avatars.

Place, publisher, year, edition, pages
Acoustical Society of America, 2016. Vol. 140, no. 3005
National Category
Computer graphics and computer vision
Identifiers
URN: urn:nbn:se:kth:diva-268360
DOI: 10.1121/1.4969317
OAI: oai:DiVA.org:kth-268360
DiVA, id: diva2:1394162
Conference
Acoustical Society of America
Note

QC 20200513

Available from: 2020-02-18. Created: 2020-02-18. Last updated: 2025-02-07. Bibliographically approved.

Open Access in DiVA
No full text in DiVA

Other links
Publisher's full text

Authority records

Beskow, Jonas
