When is it right for a robot to be wrong? Children trust a robot over a human in a selective trust task
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0002-6158-4818
Jacobs University, Germany.
Griffith University, Australia.
2024 (English). In: Computers in Human Behavior, ISSN 0747-5632, E-ISSN 1873-7692, Vol. 157, article id 108229. Article in journal (Refereed). Published.
Abstract [en]

Little is known about how children perceive, trust and learn from social robots compared to humans. The goal of this study was to compare a robot and a human agent in a selective trust task across different combinations of reliability (both reliable, only human reliable, or only robot reliable). 111 children, aged 3 to 6 years, participated in an online study where they viewed videos of a human and a robot labelling both familiar and novel objects. We found that, although children preferred to endorse a novel object label from the agent who previously labelled familiar objects correctly, when both the human and the robot were reliable they were biased more towards the robot. Their social evaluations also tended much more strongly towards a general robot preference. Children's conceptualisations of the agents making a mistake also differed, such that an unreliable human was selected as doing things on purpose, but not an unreliable robot. These findings suggest that children's perceptions of a robot's reliability are separate from their evaluation of its desirability as a social interaction partner and its perceived agency. Further, they indicate that a robot making a mistake does not necessarily reduce children's desire to interact with it as a social agent.

Place, publisher, year, edition, pages
Elsevier BV, 2024. Vol. 157, article id 108229
Keywords [en]
Human–robot-interaction, Liking, Mistakes, Social cognition, Social learning, Trust
National Category
Human Computer Interaction; Robotics and Automation
Identifiers
URN: urn:nbn:se:kth:diva-366548
DOI: 10.1016/j.chb.2024.108229
ISI: 001239074500001
Scopus ID: 2-s2.0-85190256145
OAI: oai:DiVA.org:kth-366548
DiVA id: diva2:1982536
Note

QC 20250708

Available from: 2025-07-08. Created: 2025-07-08. Last updated: 2025-07-08. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | Scopus

Authority records

Stower, Rebecca
