KTH Publications (kth.se)
Parreira, Maria Teresa
Publications (5 of 5)
Parreira, M. T., Gillet, S., Winkle, K. & Leite, I. (2023). How Did We Miss This?: A Case Study on Unintended Biases in Robot Social Behavior. In: HRI 2023: Companion of the ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 18th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2023, Stockholm, Sweden, Mar 13 2023 - Mar 16 2023 (pp. 11-20). Association for Computing Machinery (ACM)
How Did We Miss This?: A Case Study on Unintended Biases in Robot Social Behavior
2023 (English) In: HRI 2023: Companion of the ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2023, p. 11-20. Conference paper, Published paper (Refereed)
Abstract [en]

As societies grow increasingly conscious of the human social biases implicit in most of our interactions, the development of automated robot social behavior continues to address these issues as little more than an afterthought. In the present work, we describe how we unintentionally implemented robot listener behavior that was biased toward the gender of the participants, while following typical design procedures in the field. In a post-hoc analysis of data collected in a between-subject user study (n=60), we find that both a rule-based and a deep-learning-based listener behavior model produced a higher number of backchannels (listener feedback, through nodding or vocal utterances) when the participant identified as male. We investigate the cause of this bias in both models and discuss the implications of our findings. Further, we propose approaches to address the issue of algorithmic fairness, as well as preventative measures to avoid the development of biased social robot behavior.
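The abstract reports that a rule-based listener model produced more backchannels for male-identifying participants. The Python sketch below is not the authors' implementation; it illustrates, with hypothetical feature names and thresholds, how a prosody-driven backchannel rule with a fixed pitch cutoff can produce exactly this kind of skew.

```python
# Minimal sketch (not the paper's model): a typical rule-based backchannel
# trigger driven by prosodic features. Feature names, thresholds, and API
# are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class ProsodyFrame:
    pause_ms: float   # silence since the speaker's last voiced frame
    pitch_hz: float   # estimated fundamental frequency of the last utterance

def should_backchannel(frame: ProsodyFrame,
                       pause_threshold_ms: float = 700.0,
                       pitch_threshold_hz: float = 180.0) -> bool:
    """Emit a nod/vocal backchannel after a long enough pause that follows
    a low-pitched utterance ending (a common heuristic for turn-yielding)."""
    return frame.pause_ms >= pause_threshold_ms and frame.pitch_hz <= pitch_threshold_hz

# A fixed pitch_threshold_hz such as the one above sits between typical
# male and female fundamental-frequency ranges, so the rule fires far more
# often for lower-pitched voices -- one way a seemingly neutral heuristic
# can yield gender-skewed backchannel counts.
```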

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
AI fairness, ethical HRI, gender bias, machine learning, non-verbal behaviors
National Category
Human Computer Interaction; Robotics and automation
Identifiers
urn:nbn:se:kth:diva-333371 (URN); 10.1145/3568294.3580032 (DOI); 001054975700002 (); 2-s2.0-85150450065 (Scopus ID)
Conference
18th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2023, Stockholm, Sweden, Mar 13 2023 - Mar 16 2023
Note

Part of ISBN 9781450399708

QC 20230801

Available from: 2023-08-01. Created: 2023-08-01. Last updated: 2025-02-05. Bibliographically approved.
Parreira, M. T., Gillet, S. & Leite, I. (2023). Robot Duck Debugging: Can Attentive Listening Improve Problem Solving? In: ICMI 2023: Proceedings of the 25th International Conference on Multimodal Interaction. Paper presented at 25th International Conference on Multimodal Interaction, ICMI 2023, Paris, France, Oct 9 2023 - Oct 13 2023 (pp. 527-536). Association for Computing Machinery (ACM)
Robot Duck Debugging: Can Attentive Listening Improve Problem Solving?
2023 (English) In: ICMI 2023: Proceedings of the 25th International Conference on Multimodal Interaction, Association for Computing Machinery (ACM), 2023, p. 527-536. Conference paper, Published paper (Refereed)
Abstract [en]

While thinking aloud has been reported to positively affect problem-solving, the effects of the presence of an embodied entity (e.g., a social robot) to whom words can be directed remain largely unexplored. In this work, we investigated the role of a robot in a "rubber duck debugging" setting by analyzing how a robot's listening behaviors could support a thinking-aloud problem-solving session. Participants completed two different tasks while speaking their thoughts aloud to either a robot or an inanimate object (a giant rubber duck). We implemented and tested two types of listener behavior in the robot: a rule-based heuristic and a deep-learning-based model. In a between-subject user study with 101 participants, we evaluated how the presence of a robot affected users' engagement in thinking aloud, behavior during the task, and self-reported user experience. In addition, we explored the impact of the two robot listening behaviors on those measures. In contrast to prior work, our results indicate that neither the rule-based nor the deep-learning-based robot condition improved performance or perception of the task compared to an inanimate object. We discuss potential explanations and shed light on the feasibility of designing social robots as assistive tools in thinking-aloud problem-solving tasks.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
listening model, non-verbal behaviors, social robot, think aloud
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-339689 (URN); 10.1145/3577190.3614160 (DOI); 001147764700062 (); 2-s2.0-85175806988 (Scopus ID)
Conference
25th International Conference on Multimodal Interaction, ICMI 2023, Paris, France, Oct 9 2023 - Oct 13 2023
Note

Part of ISBN 9798400700552

QC 20231116

Available from: 2023-11-16. Created: 2023-11-16. Last updated: 2025-02-09. Bibliographically approved.
Mohamed, Y., Ballardini, G., Parreira, M. T., Lemaignan, S. & Leite, I. (2022). Automatic Frustration Detection Using Thermal Imaging. In: Proceedings of the 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI '22). Paper presented at 17th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI), Mar 7-10, 2022, held online (pp. 451-460). Institute of Electrical and Electronics Engineers (IEEE)
Automatic Frustration Detection Using Thermal Imaging
2022 (English) In: Proceedings of the 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI '22), Institute of Electrical and Electronics Engineers (IEEE), 2022, p. 451-460. Conference paper, Published paper (Refereed)
Abstract [en]

To achieve seamless interactions, robots have to be capable of reliably detecting affective states in real time. One of the possible states that humans go through while interacting with robots is frustration. Detecting frustration from RGB images can be challenging in some real-world situations; thus, we investigate in this work whether thermal imaging can be used to create a model that is capable of detecting frustration induced by cognitive load and failure. To train our model, we collected a data set from 18 participants experiencing both types of frustration induced by a robot. The model was tested using features from several modalities: thermal, RGB, Electrodermal Activity (EDA), and all three combined. When data from both frustration cases were combined and used as training input, the model reached an accuracy of 89% with just RGB features, 87% using only thermal features, 84% using EDA, and 86% when using all modalities. Furthermore, the highest accuracy for the thermal data was reached using three facial regions of interest: nose, forehead and lower lip.
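The abstract reports classification accuracies from thermal, RGB, and EDA features, with the best thermal result using three facial regions of interest. The sketch below is a hypothetical illustration, not the paper's pipeline: it shows how per-region thermal features (e.g., mean temperature over the nose, forehead, and lower-lip regions) could be fed to an off-the-shelf classifier. The data here are random placeholders.

```python
# Minimal sketch under assumptions: classify frustration from mean thermal
# features over three facial regions of interest. Feature extraction,
# dataset layout, and model choice are illustrative, not the paper's method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# X: one row per labelled segment; columns = mean temperature (deg C) over
# the nose, forehead, and lower-lip ROIs. y: 1 = frustrated, 0 = not.
rng = np.random.default_rng(0)
X = rng.normal(loc=[33.5, 34.8, 34.1], scale=0.4, size=(180, 3))  # placeholder data
y = rng.integers(0, 2, size=180)                                  # placeholder labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.2f}")
```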

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
Series
ACM IEEE International Conference on Human-Robot Interaction, ISSN 2167-2121
Keywords
Human-robot interaction, Thermal imaging, Frustration, cognitive load, Action units
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-322478 (URN); 10.1109/HRI53351.2022.9889545 (DOI); 000869793600050 (); 2-s2.0-85140750883 (Scopus ID)
Conference
17th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI), Mar 7-10, 2022, held online
Note

Part of proceedings: ISBN 978-1-6654-0731-1

QC 20221216

Available from: 2022-12-16. Created: 2022-12-16. Last updated: 2025-08-25. Bibliographically approved.
Parreira, M. T., Gillet, S., Vazquez, M. & Leite, I. (2022). Design Implications for Effective Robot Gaze Behaviors in Multiparty Interactions. In: Proceedings of the 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI '22). Paper presented at 17th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI), Mar 7-10, 2022, held online (pp. 976-980). Institute of Electrical and Electronics Engineers (IEEE)
Design Implications for Effective Robot Gaze Behaviors in Multiparty Interactions
2022 (English) In: Proceedings of the 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI '22), Institute of Electrical and Electronics Engineers (IEEE), 2022, p. 976-980. Conference paper, Published paper (Refereed)
Abstract [en]

Human-robot non-verbal communication has been a growing focus of research, as we realize its importance for achieving interaction goals (e.g., modulating turn-taking) and managing human perception of the interaction. Consequently, the development of models for robot non-verbal behavior, such as gaze, should be informed by studies of how humans react to and perceive that behavior. Here, we look at data from two studies in which two humans interact with a robot by describing words to it. The robot tries to balance the participation of the two players through a combination of gaze aversion, looking at the listener, and looking at the speaker. We analyze how momentary gaze patterns are reflected in participants' turn length and perception of the robot, as well as in participation imbalance. Our findings may serve as recommendations for crafting robot gaze behaviors in multiparty interactions.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
Series
ACM IEEE International Conference on Human-Robot Interaction, ISSN 2167-2121
Keywords
multiparty interaction, gaze, non-verbal behavior, social robotics
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-322475 (URN); 10.1109/HRI53351.2022.9889481 (DOI); 000869793600142 (); 2-s2.0-85139511204 (Scopus ID)
Conference
17th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI), Mar 7-10, 2022, held online
Note

Part of proceedings: ISBN 978-1-6654-0731-1

QC 20221216

Available from: 2022-12-16. Created: 2022-12-16. Last updated: 2022-12-16. Bibliographically approved.
Gillet, S., Parreira, M. T., Vázquez, M. & Leite, I. (2022). Learning Gaze Behaviors for Balancing Participation in Group Human-Robot Interactions. In: HRI '22: Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 17th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI), Mar 7-10, 2022, held online (pp. 265-274). Institute of Electrical and Electronics Engineers (IEEE)
Learning Gaze Behaviors for Balancing Participation in Group Human-Robot Interactions
2022 (English) In: HRI '22: Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction, Institute of Electrical and Electronics Engineers (IEEE), 2022, p. 265-274. Conference paper, Published paper (Refereed)
Abstract [en]

Robots can affect group dynamics. In particular, prior work has shown that robots that use hand-crafted gaze heuristics can influence human participation in group interactions. However, hand-crafting robot behaviors can be difficult and might have unexpected results in groups. Thus, this work explores learning robot gaze behaviors that balance human participation in conversational interactions. More specifically, we examine two techniques for learning a gaze policy from data: imitation learning (IL) and batch reinforcement learning (RL). First, we formulate the problem of learning a gaze policy as a sequential decision-making task focused on human turn-taking. Second, we experimentally show that IL can be used to combine strategies from hand-crafted gaze behaviors, and we formulate a novel reward function to achieve a similar result using batch RL. Finally, we conduct an offline evaluation of IL and RL policies and compare them via a user study (N=50). The results from the study show that the learned behavior policies did not compromise the interaction. Interestingly, the proposed reward for the RL formulation enabled the robot to encourage participants to take more turns during group human-robot interactions than one of the gaze heuristic behaviors from prior work. Also, the imitation learning policy led to more active participation from human participants than another prior heuristic behavior. 
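The abstract describes a novel reward that lets batch RL encourage balanced turn-taking. The paper's actual reward formulation is not reproduced here; the snippet below is a minimal, hypothetical sketch of the underlying idea, penalizing a gaze policy in proportion to the gap between the most and least active speakers.

```python
# Minimal sketch of the general idea (not the paper's reward function):
# reward a gaze policy for keeping speaking turns balanced across group
# members. Turn bookkeeping and the imbalance measure are assumptions.
from collections import Counter

def participation_reward(turn_counts: Counter, num_participants: int) -> float:
    """Higher reward when turns are spread evenly across participants.
    The gap between the most and least active speakers acts as an
    imbalance penalty; zero penalty means perfectly balanced so far."""
    counts = [turn_counts.get(p, 0) for p in range(num_participants)]
    imbalance = max(counts) - min(counts)
    return -float(imbalance)

# Example: after turns by participants 0, 0, 1 the policy is penalised
# for the one-turn gap between the two speakers.
print(participation_reward(Counter([0, 0, 1]), num_participants=2))  # -> -1.0
```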

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
Series
ACM IEEE International Conference on Human-Robot Interaction, ISSN 2167-2121
Keywords
social robotics, nonverbal signals, learning
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-316516 (URN); 10.1109/HRI53351.2022.9889416 (DOI); 000869793600031 (); 2-s2.0-85140768966 (Scopus ID)
Conference
17th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI), Mar 7-10, 2022, held online
Note

Part of proceedings: ISBN 978-1-6654-0731-1

QC 20220905

Available from: 2022-08-19. Created: 2022-08-19. Last updated: 2024-07-19. Bibliographically approved.