How Did We Miss This?: A Case Study on Unintended Biases in Robot Social Behavior
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0001-7130-0826
Uppsala University, Sweden.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0002-2212-4325
2023 (English). In: HRI 2023: Companion of the ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2023, pp. 11-20. Conference paper, published paper (refereed).
Abstract [en]

With societies growing increasingly conscious of the human social biases implicit in most of our interactions, the development of automated robot social behavior still treats these issues as little more than an afterthought. In the present work, we describe how we unintentionally implemented robot listener behavior that was biased with respect to participant gender, despite following typical design procedures in the field. In a post-hoc analysis of data collected in a between-subjects user study (n=60), we find that both a rule-based and a deep learning-based listener behavior model produced a higher number of backchannels (listener feedback through nodding or vocal utterances) when the participant identified as male. We investigate the cause of this bias in both models and discuss the implications of our findings. Further, we outline approaches for addressing algorithmic fairness, as well as preventative measures to avoid developing biased social robot behavior.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023, pp. 11-20
Keywords [en]
AI fairness, ethical HRI, gender bias, machine learning, non-verbal behaviors
National Category
Human Computer Interaction; Robotics and automation
Identifiers
URN: urn:nbn:se:kth:diva-333371
DOI: 10.1145/3568294.3580032
ISI: 001054975700002
Scopus ID: 2-s2.0-85150450065
OAI: oai:DiVA.org:kth-333371
DiVA id: diva2:1785060
Conference
18th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI 2023), Stockholm, Sweden, March 13-16, 2023
Note

Part of ISBN 9781450399708

QC 20230801

Available from: 2023-08-01. Created: 2023-08-01. Last updated: 2025-02-05. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Parreira, Maria Teresa; Gillet, Sarah; Leite, Iolanda
