kth.se Publications
Perception of Emotions in Human and Robot Faces: Is the Eye Region Enough?
Mishra, Chinmaya. Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Furhat Robotics AB, Stockholm, Sweden. ORCID iD: 0000-0002-9223-1230
Skantze, Gabriel. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH; Furhat Robotics AB, Stockholm, Sweden. ORCID iD: 0000-0002-8579-1790
Hagoort, Peter. Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands. ORCID iD: 0000-0001-7280-7549
Verdonschot, Rinus. Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Maastricht University, Maastricht, The Netherlands. ORCID iD: 0000-0002-7124-4091
2025 (English). In: Social Robotics: 16th International Conference, ICSR + AI 2024, Proceedings. Springer Nature, 2025, pp. 290-303. Conference paper, published paper (refereed).
Abstract [en]

The increased interest in developing next-generation social robots has raised questions about the factors affecting the perception of robot emotions. This study investigates the impact of robot appearance (human-like, mechanical) and face region (full face, eye region) on human perception of robot emotions. A between-subjects user study (N = 305) was conducted in which participants were asked to identify the emotions displayed in videos of robot faces, as well as a human baseline. Our findings reveal three important insights for effective social robot face design in Human-Robot Interaction (HRI): first, robots equipped with a back-projected, fully animated face – regardless of whether they are more human-like or more mechanical-looking – demonstrate a capacity for emotional expression comparable to that of humans. Second, the recognition accuracy of emotional expressions in both humans and robots declines when only the eye region is visible. Last, within the constraint of only the eye region being visible, robots with more human-like features significantly enhance emotion recognition.

Place, publisher, year, edition, pages
Springer Nature, 2025, pp. 290-303.
Keywords [en]
Affective Robots, Design and Human Factors, Emotion Recognition, Emotional Robotics, Human-Robot Interaction, Posture and Facial Expressions
National Category
Robotics and automation; Human Computer Interaction; Computer graphics and computer vision
Identifiers
URN: urn:nbn:se:kth:diva-362500
DOI: 10.1007/978-981-96-3522-1_26
Scopus ID: 2-s2.0-105002048335
OAI: oai:DiVA.org:kth-362500
DiVA id: diva2:1952948
Conference
16th International Conference on Social Robotics, ICSR + AI 2024, Odense, Denmark, October 23-26, 2024
Note

Part of ISBN 9789819635214
Available from: 2025-04-16. Created: 2025-04-16. Last updated: 2025-04-25. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Skantze, Gabriel

Search in DiVA

By author/editor
Mishra, Chinmaya; Skantze, Gabriel; Hagoort, Peter; Verdonschot, Rinus
By organisation
Speech, Music and Hearing, TMH
Robotics and automation; Human Computer Interaction; Computer graphics and computer vision
