How Did We Miss This?: A Case Study on Unintended Biases in Robot Social Behavior
2023 (English). In: HRI 2023: Companion of the ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2023, p. 11-20. Conference paper, published paper (refereed)
Abstract [en]
As societies grow increasingly conscious of the human social biases implicit in most of our interactions, the development of automated robot social behavior continues to treat these issues as little more than an afterthought. In the present work, we describe how, while following typical design procedures in the field, we unintentionally implemented robot listener behavior that was biased with respect to participant gender. In a post-hoc analysis of data collected in a between-subject user study (n=60), we find that both a rule-based and a deep learning-based listener behavior model produced a higher number of backchannels (listener feedback, such as nodding or vocal utterances) when the participant identified as male. We investigate the cause of this bias in both models and discuss the implications of our findings. Further, we propose approaches for addressing algorithmic fairness, as well as preventative measures to avoid developing biased social robot behavior.
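The post-hoc analysis described above amounts to grouping per-participant backchannel counts by gender and comparing the group means for each listener model. A minimal sketch of that comparison is shown below; the counts and group labels are invented for illustration and do not reproduce the study's actual data.

```python
# Hypothetical sketch of a per-group backchannel comparison.
# Each value is the number of backchannels one listener model
# produced for one participant; the numbers are illustrative only.

from statistics import mean

backchannels = {
    "male":   [14, 11, 16, 12, 15],
    "female": [9, 10, 8, 11, 7],
}

def mean_count(group: str) -> float:
    """Average number of backchannels produced for a participant group."""
    return mean(backchannels[group])

gap = mean_count("male") - mean_count("female")
print(f"male mean={mean_count('male'):.1f}, "
      f"female mean={mean_count('female'):.1f}, gap={gap:.1f}")
```

In the study itself, such a raw gap would of course be followed by a statistical test before concluding the behavior model is biased; this sketch only illustrates the shape of the comparison.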
Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023, p. 11-20
Keywords [en]
AI fairness, ethical HRI, gender bias, machine learning, non-verbal behaviors
National Category
Human-Computer Interaction; Robotics and Automation
Identifiers
URN: urn:nbn:se:kth:diva-333371
DOI: 10.1145/3568294.3580032
ISI: 001054975700002
Scopus ID: 2-s2.0-85150450065
OAI: oai:DiVA.org:kth-333371
DiVA id: diva2:1785060
Conference
18th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI 2023), Stockholm, Sweden, March 13-16, 2023
Note
Part of ISBN 9781450399708
Available from: 2023-08-01. Created: 2023-08-01. Last updated: 2025-02-05. Bibliographically approved.