kth.se Publications
1 - 50 of 83
  • 1.
    Almeida, João Tiago
    et al.
    KTH.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Yadollahi, Elmira
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Would you help me?: Linking robot's perspective-taking to human prosocial behavior (2023). In: HRI 2023: Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2023, p. 388-397. Conference paper (Refereed)
    Abstract [en]

    Despite the growing literature on human attitudes toward robots, particularly prosocial behavior, little is known about how robots' perspective-taking, the capacity to perceive and understand the world from other viewpoints, could influence such attitudes and perceptions of the robot. To make robots and AI more autonomous and self-aware, more researchers have focused on developing cognitive skills such as perspective-taking and theory of mind in robots and AI. The present study investigated whether a robot's perspective-taking choices could influence the occurrence and extent of exhibiting prosocial behavior toward the robot. We designed an interaction consisting of a perspective-taking task, where we manipulated how the robot instructs the human to find objects by changing its frame of reference, and measured the human's exhibition of prosocial behavior toward the robot. In a between-subject study (N=70), we compared the robot's egocentric and addressee-centric instructions against a control condition, where the robot's instructions were object-centric. Participants' prosocial behavior toward the robot was measured using a voluntary data collection session. Our results imply that the occurrence and extent of prosocial behavior toward the robot were significantly influenced by the robot's visuospatial perspective-taking behavior. Furthermore, we observed, through questionnaire responses, that the robot's choice of perspective-taking could potentially influence the humans' perspective choices, were they to reciprocate the instructions to the robot.

  • 2.
    Bartoli, Ermanno
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Dogan, Fethiye Irmak
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Contextualized Knowledge Graph Embeddings for Activity Prediction in Service Robotics2023Conference paper (Refereed)
  • 3.
    Beskow, Jonas
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Peters, Christopher
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Castellano, G.
    O'Sullivan, C.
    Leite, Iolanda
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Kopp, S.
    Preface (2017). In: 17th International Conference on Intelligent Virtual Agents, IVA 2017, Springer, 2017, Vol. 10498, p. V-VI. Conference paper (Refereed)
  • 4.
    Castellano, Ginevra
    et al.
    Uppsala University, Sweden.
    Riek, Laurel
    UC San Diego, United States.
    Cakmak, Maya
    University of Washington, United States.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Chairs' welcome (2023). In: Proceedings 18th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2023, IEEE Computer Society, 2023. Conference paper (Other academic)
  • 5.
    Castellano, Ginevra
    et al.
    Uppsala University, Sweden.
    Riek, Laurel
    Uc San Diego, United States.
    Cakmak, Maya
    University of Washington, United States.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Chairs' Welcome (2023). In: Proceedings HRI '23: ACM/IEEE International Conference on Human-Robot Interaction, ACM Press, 2023. Conference paper (Other academic)
  • 6.
    Correia, Filipa
    et al.
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal..
    Mascarenhas, Samuel F.
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal..
    Gomes, Samuel
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal..
    Arriaga, Patricia
    CIS IUL, Inst Univ Lisboa ISCTE IUL, Lisbon, Portugal..
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Prada, Rui
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal..
    Melo, Francisco S.
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal..
    Paiva, Ana
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal..
    Exploring Prosociality in Human-Robot Teams (2019). In: HRI '19: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction, IEEE, 2019, p. 143-151. Conference paper (Refereed)
    Abstract [en]

    This paper explores the role of prosocial behaviour when people team up with robots in a collaborative game that presents a social dilemma similar to a public goods game. An experiment was conducted with the proposed game in which each participant joined a team with a prosocial robot and a selfish robot. During 5 rounds of the game, each player chooses between contributing to the team goal (cooperate) or contributing to his individual goal (defect). The prosociality level of the robots only affects their strategies to play the game, as one always cooperates and the other always defects. We conducted a user study at the office of a large corporation with 70 participants where we manipulated the game result (winning or losing) in a between-subjects design. Results revealed two important considerations: (1) the prosocial robot was rated more positively in terms of its social attributes than the selfish robot, regardless of the game result; (2) the perception of competence, the responsibility attribution (blame/credit), and the preference for a future partner revealed significant differences only in the losing condition. These results yield important concerns for the creation of robotic partners, the understanding of group dynamics and, from a more general perspective, the promotion of a prosocial society.

  • 7.
    Dogan, Fethiye Irmak
    et al.
    KTH.
    Gillet, Sarah
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Carter, Elizabeth
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    The impact of adding perspective-taking to spatial referencing during human-robot interaction (2020). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 134, article id 103654. Article in journal (Refereed)
    Abstract [en]

    For effective verbal communication in collaborative tasks, robots need to account for the different perspectives of their human partners when referring to objects in a shared space. For example, when a robot helps its partner find correct pieces while assembling furniture, it needs to understand how its collaborator perceives the world and refer to objects accordingly. In this work, we propose a method to endow robots with perspective-taking abilities while spatially referring to objects. To examine the impact of our proposed method, we report the results of a user study showing that when the objects are spatially described from the users' perspectives, participants take less time to find the referred objects, find the correct objects more often and consider the task easier.

  • 8.
    Dogan, Fethiye Irmak
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Kalkan, S.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Learning to Generate Unambiguous Spatial Referring Expressions for Real-World Environments (2019). In: IEEE International Conference on Intelligent Robots and Systems, Institute of Electrical and Electronics Engineers (IEEE), 2019, p. 4992-4999. Conference paper (Refereed)
    Abstract [en]

    Referring to objects in a natural and unambiguous manner is crucial for effective human-robot interaction. Previous research on learning-based referring expressions has focused primarily on comprehension tasks, while generating referring expressions is still mostly limited to rule-based methods. In this work, we propose a two-stage approach that relies on deep learning for estimating spatial relations to describe an object naturally and unambiguously with a referring expression. We compare our method to the state-of-the-art algorithm in ambiguous environments (e.g., environments that include very similar objects with similar relationships). We show that our method generates referring expressions that people find to be more accurate (30% better) and would prefer to use (32% more often).

  • 9.
    Dogan, Fethiye Irmak
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Open Challenges on Generating Referring Expressions for Human-Robot Interaction (2020). Conference paper (Refereed)
    Abstract [en]

    Effective verbal communication is crucial in human-robot collaboration. When a robot helps its human partner to complete a task with verbal instructions, referring expressions are commonly employed during the interaction. Despite many studies on generating referring expressions, crucial open challenges still remain for effective interaction. In this work, we discuss some of these challenges (i.e., using contextual information, taking users’ perspectives, and handling misinterpretations in an autonomous manner).

  • 10.
    Dogan, Fethiye Irmak
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Liu, Weiyu
    Georgia Institute of Technology.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Chernova, Sonia
    Georgia Institute of Technology.
    Semantically-Driven Disambiguation for Human-Robot Interaction. Article in journal (Other academic)
  • 11.
    Dogan, Fethiye Irmak
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Melsión, Gaspar Isaac
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leveraging Explainability for Understanding Object Descriptions in Ambiguous 3D Environments (2023). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 9. Article in journal (Refereed)
  • 12.
    Dogan, Fethiye Irmak
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Torre, Ilaria
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Asking Follow-Up Clarifications to Resolve Ambiguities in Human-Robot Conversation (2022). In: ACM/IEEE International Conference on Human-Robot Interaction, IEEE Computer Society, 2022, p. 461-469. Conference paper (Refereed)
    Abstract [en]

    When a robot aims to comprehend its human partner's request by identifying the referenced objects in Human-Robot Conversation, ambiguities can occur because the environment might contain many similar objects or the objects described in the request might be unknown to the robot. In the case of ambiguities, most of the systems ask users to repeat their request, which assumes that the robot is familiar with all of the objects in the environment. This assumption might lead to task failure, especially in complex real-world environments. In this paper, we address this challenge by presenting an interactive system that asks for follow-up clarifications to disambiguate the described objects using the pieces of information that the robot could understand from the request and the objects in the environment that are known to the robot. To evaluate our system while disambiguating the referenced objects, we conducted a user study with 63 participants. We analyzed the interactions when the robot asked for clarifications and when it asked users to redescribe the same object. Our results show that generating follow-up clarification questions helped the robot correctly identify the described objects with fewer attempts (i.e., conversational turns). Also, when people were asked clarification questions, they perceived the task as easier, and they evaluated the task understanding and competence of the robot as higher. Our code and anonymized dataset are publicly available at https://github.com/IrmakDogan/Resolving-Ambiguities.
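    The system described in this entry is available in the linked repository. As a purely illustrative aid (not the authors' implementation), the sketch below shows one way such a follow-up clarification loop could be structured in Python; the candidate objects, their attributes, and the question-selection heuristic are all hypothetical.

```python
# Minimal illustrative sketch of a clarification loop for resolving an
# ambiguous object reference. This is NOT the authors' implementation
# (see https://github.com/IrmakDogan/Resolving-Ambiguities for that);
# the object attributes and the attribute-selection heuristic are
# hypothetical and only show the overall control flow.
from collections import Counter

candidates = [
    {"id": "mug_1", "color": "red", "location": "left table"},
    {"id": "mug_2", "color": "blue", "location": "left table"},
    {"id": "mug_3", "color": "red", "location": "right shelf"},
]

def most_discriminating_attribute(objects):
    """Pick the attribute whose values split the remaining candidates best."""
    best_attr, best_score = None, -1
    for attr in ("color", "location"):
        counts = Counter(obj[attr] for obj in objects)
        score = len(counts)  # more distinct values -> more informative question
        if score > best_score:
            best_attr, best_score = attr, score
    return best_attr

def clarify(objects):
    """Ask follow-up questions until a single candidate remains."""
    while len(objects) > 1:
        attr = most_discriminating_attribute(objects)
        answer = input(f"Which {attr} do you mean? "
                       f"({', '.join(sorted({o[attr] for o in objects}))}): ")
        filtered = [o for o in objects if o[attr] == answer.strip()]
        objects = filtered or objects  # ignore unrecognized answers
    return objects[0]

if __name__ == "__main__":
    target = clarify(candidates)
    print(f"Resolved reference to: {target['id']}")
```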

  • 13.
    Engelhardt, Sara
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Hansson, Emmeli
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Leite, Iolanda
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Better faulty than sorry: Investigating social recovery strategies to minimize the impact of failure in human-robot interaction (2017). In: WCIHAI 2017 Workshop on Conversational Interruptions in Human-Agent Interactions: Proceedings of the first Workshop on Conversational Interruptions in Human-Agent Interactions, co-located with the 17th International Conference on Intelligent Virtual Agents (IVA 2017), Stockholm, Sweden, August 27, 2017, CEUR-WS, 2017, Vol. 1943, p. 19-27. Conference paper (Refereed)
    Abstract [en]

    Failure happens in most social interactions, possibly even more so in interactions between a robot and a human. This paper investigates different failure recovery strategies that robots can employ to minimize the negative effect on people's perception of the robot. A between-subject Wizard-of-Oz experiment with 33 participants was conducted in a scenario where a robot and a human play a collaborative game. The interaction was mainly speech-based and controlled failures were introduced at specific moments. Three types of recovery strategies were investigated, one in each experimental condition: ignore (the robot ignores that a failure has occurred and moves on with the task), apology (the robot apologizes for failing and moves on) and problem-solving (the robot tries to solve the problem with the help of the human). Our results show that the apology-based strategy scored the lowest on measures such as likeability and perceived intelligence, and that the ignore strategy led to better perceptions of perceived intelligence and animacy than the employed recovery strategies.

  • 14.
    Fraune, M. R.
    et al.
    Karatas, N.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Workshop YOUR study design! Participatory critique and refinement of participants' studies (2021). In: ACM/IEEE International Conference on Human-Robot Interaction, IEEE Computer Society, 2021, p. 688-690. Conference paper (Refereed)
    Abstract [en]

    The purpose of this workshop is to help researchers develop methodological skills, especially in areas that are relatively new to them. With HRI researchers coming from diverse backgrounds in computer science, engineering, informatics, philosophy, psychology, and more disciplines, we can't be experts in everything. In this workshop, participants will be grouped with a mentor to enhance their study design and interdisciplinary work. Participants will submit 4-page papers with a small introduction and detailed method section for a project currently in the design process. In small groups led by a mentor in the area, they will discuss their method and obtain feedback. The workshop will include time to edit and improve the study. Workshop mentors include Drs. Cindy Bethel, Hung Hsuan Huang, Selma Sabanović, Brian Scassellati, Megan Strait, Komatsu Takanori, Leila Takayama, and Ewart de Visser, with expertise in areas of real-world study, empirical lab study, questionnaire design, interview, participatory design, and statistics.

  • 15.
    Fraune, Marlena R.
    et al.
    New Mexico State Univ, Intergrp Human Robot Interact iHRI Lab, Dept Psychol, Las Cruces, NM 88003 USA..
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Karatas, Nihan
    Nagoya Univ, Human Machine Interact HMI & Human Characterist R, Inst Innovat Future Soc, Nagoya, Japan..
    Amirova, Aida
    Nazarbayev Univ, Dept Robot & Mech, Sch Engn & Digital Sci, Nur Sultan, Kazakhstan..
    Legeleux, Amelie
    Univ South Brittany, Lab STICC, CNRS UMR 6285, Brest, France..
    Sandygulova, Anara
    Nazarbayev Univ, Dept Robot & Mech, Sch Engn & Digital Sci, Nur Sultan, Kazakhstan..
    Neerincx, Anouk
    Univ South Brittany, Lab STICC, CNRS UMR 6285, Brest, France..
    Dilip Tikas, Gaurav
    Inst Management Technol, Strategy Innovat & Entrepreneurship Area, Ghaziabad, India..
    Gunes, Hatice
    Univ Cambridge, Dept Comp Sci & Technol, Affect Intelligence & Robot Lab, Cambridge, England..
    Mohan, Mayumi
    Max Planck Inst Intelligent Syst, Hapt Intelligence Dept, Stuttgart, Germany..
    Abbasi, Nida Itrat
    Univ Cambridge, Dept Comp Sci & Technol, Affect Intelligence & Robot Lab, Cambridge, England..
    Shenoy, Sudhir
    Univ Virginia, Comp Engn Program, Human AI Technol Lab, Charlottesville, VA USA..
    Scassellati, Brian
    Yale Univ, Dept Comp Sci, Social Robot Lab, New Haven, CT USA..
    de Visser, Ewart J.
    US Air Force Acad, Warfighter Effectiveness Res Ctr, Colorado Springs, CO USA..
    Komatsu, Takanori
    Meiji Univ, Sch Interdisciplinary Math Sci, Dept Frontier Media Sci, Tokyo, Japan..
    Lessons Learned About Designing and Conducting Studies From HRI Experts (2022). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 8, article id 772141. Article in journal (Refereed)
    Abstract [en]

    The field of human-robot interaction (HRI) research is multidisciplinary and requires researchers to understand diverse fields including computer science, engineering, informatics, philosophy, psychology, and more disciplines. However, it is hard to be an expert in everything. To help HRI researchers develop methodological skills, especially in areas that are relatively new to them, we conducted a virtual workshop, Workshop Your Study Design (WYSD), at the 2021 International Conference on HRI. In this workshop, we grouped participants with mentors, who are experts in areas like real-world studies, empirical lab studies, questionnaire design, interview, participatory design, and statistics. During and after the workshop, participants discussed their proposed study methods, obtained feedback, and improved their work accordingly. In this paper, we present 1) Workshop attendees' feedback about the workshop and 2) Lessons that the participants learned during their discussions with mentors. Participants' responses about the workshop were positive, and future scholars who wish to run such a workshop can consider implementing their suggestions. The main contribution of this paper is the lessons learned section, where the workshop participants contributed to forming this section based on what participants discovered during the workshop. We organize lessons learned into themes of 1) Improving study design for HRI, 2) How to work with participants - especially children -, 3) Making the most of the study and robot's limitations, and 4) How to collaborate well across fields as they were the areas of the papers submitted to the workshop. These themes include practical tips and guidelines to assist researchers to learn about fields of HRI research with which they have limited experience. We include specific examples, and researchers can adapt the tips and guidelines to their own areas to avoid some common mistakes and pitfalls in their research.

  • 16.
    Galatolo, Alessio
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. Uppsala Univ, Dept Informat Technol, Uppsala, Sweden.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Winkle, Katie
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. Uppsala Univ, Dept Informat Technol, Uppsala, Sweden.
    Personality-Adapted Language Generation for Social Robots (2023). In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 1800-1807. Conference paper (Refereed)
    Abstract [en]

    Previous works in Human-Robot Interaction have demonstrated the positive potential benefit of designing social robots which express specific personalities. In this work, we focus specifically on the adaptation of language (as the choice of words, their order, etc.) following the extraversion trait. We look to investigate whether current language models could support more autonomous generations of such personality-expressive robot output. We examine the performance of two models with user studies evaluating (i) raw text output and (ii) text output when used within multi-modal speech from the Furhat robot. We find that the ability to successfully manipulate perceived extraversion sometimes varies across different dialogue topics. We were able to achieve correct manipulation of robot personality via our language adaptation, but our results suggest further work is necessary to improve the automation and generalisation abilities of these models.

  • 17.
    Galatolo, Alessio
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Melsión, Gaspar Isaac
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Winkle, Katie
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    The Right (Wo)Man for the Job?: Exploring the Role of Gender when Challenging Gender Stereotypes with a Social Robot (2022). In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805. Article in journal (Refereed)
    Abstract [en]

    Recent works have identified both risks and opportunities afforded by robot gendering. Specifically, robot gendering risks the propagation of harmful gender stereotypes, but may positively influence robot acceptance/impact, and/or actually offer a vehicle with which to educate about and challenge traditional gender stereotypes. Our work sits at the intersection of these ideas, to explore whether robot gendering might impact robot credibility and persuasiveness specifically when that robot is being used to try and dispel gender stereotypes and change interactant attitudes. Whilst we demonstrate no universal impact of robot gendering on first impressions of the robot, we demonstrate complex interactions between robot gendering, interactant gender and observer gender which emerge when the robot engages in challenging gender stereotypes. Combined with previous work, our results paint a mixed picture regarding how best to utilise robot gendering when challenging gender stereotypes in this way. Specifically, whilst we find some potential evidence in favour of utilising male-presenting robots for maximum impact in this context, we question whether this actually reflects the kind of gender biases we set out to challenge with this work.

  • 18.
    Gillet, Sarah
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Cumbal, Ronald
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Abelho Pereira, André Tiago
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Lopes, José
    Heriot-Watt University.
    Engwall, Olov
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Robot Gaze Can Mediate Participation Imbalance in Groups with Different Skill Levels (2021). In: Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery, 2021, p. 303-311. Conference paper (Refereed)
    Abstract [en]

    Many small group activities, like working teams or study groups, have a high dependency on the skill of each group member. Differences in skill level among participants can affect not only the performance of a team but also influence the social interaction of its members. In these circumstances, an active member could balance individual participation without exerting direct pressure on specific members by using indirect means of communication, such as gaze behaviors. Similarly, in this study, we evaluate whether a social robot can balance the level of participation in a language skill-dependent game, played by a native speaker and a second language learner. In a between-subjects study (N = 72), we compared an adaptive robot gaze behavior, that was targeted to increase the level of contribution of the least active player, with a non-adaptive gaze behavior. Our results imply that, while overall levels of speech participation were influenced predominantly by personal traits of the participants, the robot's adaptive gaze behavior could shape the interaction among participants, which led to more even participation during the game.

  • 19.
    Gillet, Sarah
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    A Robot Mediated Music Mixing Activity for Promoting Collaboration among Children (2020). In: Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, HRI 2020, Association for Computing Machinery (ACM), 2020, p. 212-214. Conference paper (Refereed)
    Abstract [en]

    Since children show favoritism of in-group members over out-group members from the age of five, children that newly arrive in a country or culture might have difficulties to be integrated into the already settled group. To address this problem, we developed a robot-mediated music mixing game for three players that aims to bring together children from the newly arrived and settled groups. We designed a game with the robot's goal in mind and allow the robot to observe the participation of the different players in real-time. With this information, the robot can encourage equal participation in the shared activity by prompting the least active child to act. Preliminary results show that the robot can potentially succeed in influencing participation behavior. These results encourage future work that not only studies the in-game effects but also effects on group dynamics.

  • 20.
    Gillet, Sarah
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Parreira, Maria Teresa
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Vázquez, Marynel
    Yale Univ, New Haven, CT USA..
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Learning Gaze Behaviors for Balancing Participation in Group Human-Robot Interactions (2022). In: HRI '22: Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction, Institute of Electrical and Electronics Engineers (IEEE), 2022, p. 265-274. Conference paper (Refereed)
    Abstract [en]

    Robots can affect group dynamics. In particular, prior work has shown that robots that use hand-crafted gaze heuristics can influence human participation in group interactions. However, hand-crafting robot behaviors can be difficult and might have unexpected results in groups. Thus, this work explores learning robot gaze behaviors that balance human participation in conversational interactions. More specifically, we examine two techniques for learning a gaze policy from data: imitation learning (IL) and batch reinforcement learning (RL). First, we formulate the problem of learning a gaze policy as a sequential decision-making task focused on human turn-taking. Second, we experimentally show that IL can be used to combine strategies from hand-crafted gaze behaviors, and we formulate a novel reward function to achieve a similar result using batch RL. Finally, we conduct an offline evaluation of IL and RL policies and compare them via a user study (N=50). The results from the study show that the learned behavior policies did not compromise the interaction. Interestingly, the proposed reward for the RL formulation enabled the robot to encourage participants to take more turns during group human-robot interactions than one of the gaze heuristic behaviors from prior work. Also, the imitation learning policy led to more active participation from human participants than another prior heuristic behavior. 

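    As a toy illustration of the kind of reward a batch-RL gaze policy could be trained against for balancing participation, the snippet below scores a group state by how evenly speaking time is distributed. The exact reward formulation in the paper differs; this version is an assumption made only to convey the idea.

```python
# Sketch of one possible reward for encouraging balanced human participation
# in a group conversation. This is not the reward used in the paper; the
# negative spread of speaking-time shares is an invented, simplified proxy.
import numpy as np

def participation_balance_reward(speaking_time):
    """Higher (closer to 0) when everyone has spoken a similar amount."""
    shares = np.asarray(speaking_time, dtype=float)
    total = shares.sum()
    if total > 0:
        shares = shares / total  # convert to per-person shares of talk time
    return -float(np.std(shares))

# Example: three participants; the second has dominated the conversation.
print(participation_balance_reward([10.0, 40.0, 12.0]))  # strongly negative
print(participation_balance_reward([20.0, 21.0, 19.0]))  # close to zero
```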
  • 21.
    Gillet, Sarah
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    van den Bos, Wouter
    Univ Amsterdam, Amsterdam, Netherlands..
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    A social robot mediator to foster collaboration and inclusion among children (2020). In: Robotics: Science and Systems XVI / [ed] Toussaint, M., Bicchi, A., Hermans, T., MIT Press, 2020. Conference paper (Refereed)
    Abstract [en]

    Formation of subgroups and thereby the problem of intergroup bias is well-studied in psychology. Already from the age of five, children can show ingroup preferences. We developed a social robot mediator to explore how a robot could help overcome these intergroup biases, especially for children newly arrived to a country. By utilizing an online evaluation of collaboration levels, we allow the robot to perceive and act upon the current group dynamics. We investigated the effectiveness of the robot’s mediating behavior in a between-subject study with 39 children, of whom 13 children had arrived in Sweden within the last 2 years. Results indicate that the robot could help the process of inclusion by mediating the activity. The robot succeeds in encouraging the newly arrived children to act more outgoing and in increasing collaboration among ingroup children. Further, children show a higher level of prosociality after interacting with the robot. In line with prior work, this study demonstrates the ability of social robotic technology to assist group processes.

  • 22.
    Gillet, Sarah
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Winkle, Katie
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Belgiovine, Giulia
    Univ Studi Genova, CONTACT Unit, Ist Italiano Tecnol, Genoa, Italy.;Univ Studi Genova, DIBRIS Dept, Genoa, Italy..
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Ice-Breakers, Turn-Takers and Fun-Makers: Exploring Robots for Groups with Teenagers (2022). In: 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), IEEE, 2022, p. 1474-1481. Conference paper (Refereed)
    Abstract [en]

    Successful, enjoyable group interactions are important in public and personal contexts, especially for teenagers whose peer groups are important for self-identity and self-esteem. Social robots seemingly have the potential to positively shape group interactions, but it seems difficult to effect such impact by designing robot behaviors solely based on related (human interaction) literature. In this article, we take a user-centered approach to explore how teenagers envisage a social robot "group assistant". We engaged 16 teenagers in focus groups, interviews, and robot testing to capture their views and reflections about robots for groups. Over the course of a two-week summer school, participants co-designed the action space for such a robot and experienced working with/wizarding it for 10+ hours. This experience further altered and deepened their insights into using robots as group assistants. We report results regarding teenagers' views on the applicability and use of a robot group assistant, how these expectations evolved throughout the study, and their repeat interactions with the robot. Our results indicate that each group moves on a spectrum of need for the robot, reflected in use of the robot more (or less) for ice-breaking, turn-taking, and fun-making as the situation demanded.

  • 23.
    Gross, James
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
    Törngren, Martin
    KTH, School of Industrial Engineering and Management (ITM), Engineering Design, Mechatronics and Embedded Control Systems.
    Dán, György
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Network and Systems Engineering.
    Broman, David
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS.
    Herzog, Erik
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Ramakrishna, Raksha
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Network and Systems Engineering.
    Stower, Rebecca
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Thompson, Haydn
    TECoSA – Trends, Drivers, and Strategic Directions for Trustworthy Edge Computing in Industrial Applications (2022). In: INSIGHT, ISSN 2156-485X, Vol. 25, no. 4, p. 29-34. Article in journal (Refereed)
    Abstract [en]

    TECoSA – a university-based research center in collaboration with industry – was established early in 2020, focusing on Trustworthy Edge Computing Systems and Applications. This article summarizes and assesses the current trends and drivers regarding edge computing. In our analysis, edge computing provided by mobile network operators will be the initial dominating form of this new computing paradigm for the coming decade. These insights form the basis for the research agenda of the TECoSA center, highlighting more advanced use cases, including AR/VR/Cognitive Assistance, cyber-physical systems, and distributed machine learning. The article further elaborates on the identified strategic directions given these trends, emphasizing testbeds and collaborative multidisciplinary research.

  • 24.
    Güneysu Özgür, Arzu
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. Chili Lab, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland.
    Majlesi, A. R.
    Taburet, V.
    Meijer, Sebastiaan
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Kuoppamäki, Sanna
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Health Informatics and Logistics.
    Designing Tangible Robot Mediated Co-located Games to Enhance Social Inclusion for Neurodivergent Children (2022). In: Proceedings of Interaction Design and Children, IDC 2022, Association for Computing Machinery, Inc., 2022, p. 536-543. Conference paper (Refereed)
    Abstract [en]

    Neurodivergent children with cognitive and communicative difficulties often experience a lower level of social integration in comparison to neurotypical children. Therefore it is crucial to understand social inclusion challenges and address exclusion. Since previous work shows that gamified robotic activities have a high potential to enable inclusive and collaborative environments, we propose using robot-mediated games for enhancing social inclusion. In this work, we present the design of a multiplayer tangible Pacman game with three different inter-player interaction modalities: semi-dependent collaborative, dependent collaborative, and competitive. The initial usability evaluation and the observations of the experiments show the benefits of the game for creating collaborative and cooperative practices for the players and thus also potential for social interaction and social inclusion. Importantly, we observe that inter-player interaction design affects the communication between the players and their physical interaction with the game.

  • 25.
    Iovino, Matteo
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ABB Corporate Research, Västerås.
    Dogan, Fethiye Irmak
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Interactive Disambiguation for Behavior Tree Execution (2022). In: 2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids), Institute of Electrical and Electronics Engineers (IEEE), 2022. Conference paper (Refereed)
    Abstract [en]

    In recent years, robots have been used in an increasing variety of tasks, especially by small- and medium-sized enterprises. These tasks are usually fast-changing, involve a collaborative scenario, and happen in unpredictable environments with possible ambiguities. It is important to have methods capable of generating robot programs easily, made as general as possible by handling uncertainties. We present a system that integrates a method to learn Behavior Trees (BTs) from demonstration for pick and place tasks with a framework that uses verbal interaction to ask follow-up clarification questions to resolve ambiguities. During the execution of a task, the system asks for user input when there is a need to disambiguate an object in the scene, i.e., when the targets of the task are objects of the same type that are present in multiple instances. The integrated system is demonstrated on different scenarios of a pick and place task, with increasing levels of ambiguity. The code used for this paper is publicly available at https://github.com/matiov/disambiguate-BT-execution.
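    The authors' code is in the repository linked above. The following is only a minimal, hand-rolled sketch (not that implementation) of where an interactive disambiguation step could sit inside a behavior-tree pick-and-place sequence; the node names, world model, and ambiguity check are invented for illustration.

```python
# Minimal hand-rolled behavior-tree sketch: a disambiguation leaf that asks
# the user which object is meant before the pick-and-place leaves run.
# Not the system from the paper (see the linked repository for that).
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Sequence:
    """Ticks children in order; fails as soon as one child fails."""
    def __init__(self, children):
        self.children = children
    def tick(self, world):
        for child in self.children:
            if child.tick(world) == FAILURE:
                return FAILURE
        return SUCCESS

class Disambiguate:
    """If several objects match the requested type, ask which one is meant."""
    def tick(self, world):
        matches = [o for o in world["objects"] if o["type"] == world["target_type"]]
        if len(matches) == 1:
            world["target"] = matches[0]
            return SUCCESS
        choice = input(f"I see {len(matches)} {world['target_type']}s "
                       f"({', '.join(o['id'] for o in matches)}). Which one? ")
        selected = next((o for o in matches if o["id"] == choice.strip()), None)
        if selected is None:
            return FAILURE
        world["target"] = selected
        return SUCCESS

class Pick:
    def tick(self, world):
        print(f"Picking {world['target']['id']}")
        return SUCCESS

class Place:
    def tick(self, world):
        print(f"Placing {world['target']['id']} at {world['goal']}")
        return SUCCESS

if __name__ == "__main__":
    world = {
        "objects": [{"id": "cube_a", "type": "cube"}, {"id": "cube_b", "type": "cube"}],
        "target_type": "cube",
        "goal": "bin",
    }
    tree = Sequence([Disambiguate(), Pick(), Place()])
    print("Task result:", tree.tick(world))
```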

  • 26.
    Irfan, Bahar
    et al.
    Univ Plymouth, Ctr Robot & Neural Syst, Plymouth, Devon, England..
    Ramachandran, Aditi
    Yale Univ, Social Robot Lab, New Haven, CT 06520 USA..
    Spaulding, Samuel
    MIT, Personal Robots Grp, Media Lab, Cambridge, MA 02139 USA..
    Glas, Dylan F.
    Huawei, Futurewei Technol, Santa Clara, CA USA..
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Koay, Kheng Lee
    Univ Hertfordshire, Adapt Syst Res Grp, Hatfield, Herts, England..
    Personalization in Long-Term Human-Robot Interaction (2019). In: HRI '19: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction, IEEE, 2019, p. 685-686. Conference paper (Refereed)
    Abstract [en]

    For practical reasons, most human-robot interaction (HRI) studies focus on short-term interactions between humans and robots. However, such studies do not capture the difficulty of sustaining engagement and interaction quality across long-term interactions. Many real-world robot applications will require repeated interactions and relationship-building over the long term, and personalization and adaptation to users will be necessary to maintain user engagement and to build rapport and trust between the user and the robot. This full-day workshop brings together perspectives from a variety of research areas, including companion robots, elderly care, and educational robots, in order to provide a forum for sharing and discussing innovations, experiences, works-in-progress, and best practices which address the challenges of personalization in long-term HRI.

  • 27.
    Iucci, Alessandro
    et al.
    Ericsson AB, Ericsson Res AI, Stockholm, Sweden..
    Hata, Alberto
    Ericsson Telecomunicacoes SA, Ericsson Res AI, Indaiatuba, Brazil..
    Terra, Ahmad
    Ericsson AB, Ericsson Res AI, Stockholm, Sweden..
    Inam, Rafia
    Ericsson AB, Ericsson Res AI, Stockholm, Sweden..
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Explainable Reinforcement Learning for Human-Robot Collaboration (2021). In: 2021 20th International Conference on Advanced Robotics (ICAR), Institute of Electrical and Electronics Engineers (IEEE), 2021, p. 927-934. Conference paper (Refereed)
    Abstract [en]

    Reinforcement learning (RL) is gaining popularity in robotics because of its ability to learn from dynamic environments. However, it is unable to provide explanations of why an output was generated. Explainability therefore becomes important in situations where humans interact with robots, such as in human-robot collaboration (HRC) scenarios. Attempts to address explainability in robotics are usually restricted to explaining a specific decision taken by the RL model, rather than to understanding the complete behavior of the robot. In addition, the explainability methods are restricted to domain experts, as queries and responses are not translated to natural language. This work overcomes these limitations by proposing an explainability solution for RL models applied to HRC. It is mainly formed by the adaptation of two methods: (i) Reward decomposition gives an insight into the factors that impacted the robot's choice by decomposing the reward function. It further provides sets of relevant reasons for each decision taken during the robot's operation; (ii) Autonomous policy explanation provides a global explanation of the robot's behavior by answering queries in the form of natural language, thus making it understandable to any human user. Experiments in simulated HRC scenarios revealed an increased understanding of the optimal choices made by the robots. Additionally, our solution proved to be a powerful debugging tool for finding weaknesses in the robot's policy and assisting in its improvement.
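    To make the reward-decomposition idea mentioned in the abstract concrete, the sketch below shows a toy version: per-component Q-values are summed to choose an action, and the per-component differences are reported as reasons. The component names and numbers are made up; this is not the solution evaluated in the paper.

```python
# Toy reward-decomposition explanation: keep the Q-value of each action as a
# sum of per-component Q-values, then justify the chosen action by the
# components that favour it over the alternative. All values are invented.
import numpy as np

components = ["task_progress", "human_safety", "energy_cost"]
actions = ["go_left", "go_right"]
# Rows: reward components, columns: actions (hypothetical Q-values).
q_components = np.array([
    [0.8, 0.6],    # task_progress
    [0.1, 0.5],    # human_safety
    [-0.3, -0.2],  # energy_cost
])

q_total = q_components.sum(axis=0)
best = int(np.argmax(q_total))
other = 1 - best

print(f"Chosen action: {actions[best]} (Q={q_total[best]:.2f} vs {q_total[other]:.2f})")
print("Reasons (per-component advantage over the alternative):")
for name, row in zip(components, q_components):
    delta = row[best] - row[other]
    verdict = "favours" if delta > 0 else "disfavours"
    print(f"  - {name}: {verdict} {actions[best]} by {delta:+.2f}")
```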

  • 28.
    Jonell, Patrik
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Deichler, Anna
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Torre, Ilaria
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Beskow, Jonas
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Mechanical Chameleons: Evaluating the effects of a social robot's non-verbal behavior on social influence (2021). In: Proceedings of SCRITA 2021, a workshop at IEEE RO-MAN 2021, 2021. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a pilot study which investigates how non-verbal behavior affects social influence in social robots. We also present a modular system which is capable of controlling the non-verbal behavior based on the interlocutor's facial gestures (head movements and facial expressions) in real time, and a study investigating whether three different strategies for facial gestures ("still", "natural movement", i.e. movements recorded from another conversation, and "copy", i.e. mimicking the user with a four second delay) have any effect on social influence and decision making in a "survival task". Our preliminary results show there was no significant difference between the three conditions, but this might be due to, among other things, the low number of study participants (12).

  • 29.
    Jonell, Patrik
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Mendelson, Joseph
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Storskog, Thomas
    Hagman, Göran
    Östberg, Per
    Leite, Iolanda
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Kucherenko, Taras
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Mikheeva, Olga
    Akenine, Ulrika
    Jelic, Vesna
    Solomon, Alina
    Beskow, Jonas
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Gustafson, Joakim
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Kivipelto, Miia
    Kjellström, Hedvig
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Machine Learning and Social Robotics for Detecting Early Signs of Dementia (2017). Other (Other academic)
  • 30.
    Karlsson, Jesper
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    van Waveren, Sanne
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Pek, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Torre, Ilaria
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Tumova, Jana
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Encoding Human Driving Styles in Motion Planning for Autonomous Vehicles (2021). In: 2021 IEEE International Conference on Robotics and Automation (ICRA), Institute of Electrical and Electronics Engineers (IEEE), 2021, p. 11262-11268. Conference paper (Refereed)
    Abstract [en]

    Driving styles play a major role in the acceptance and use of autonomous vehicles. Yet, existing motion planning techniques can often only incorporate simple driving styles that are modeled by the developers of the planner and not tailored to the passenger. We present a new approach to encode human driving styles through the use of signal temporal logic and its robustness metrics. Specifically, we use a penalty structure that can be used in many motion planning frameworks, and calibrate its parameters to model different automated driving styles. We combine this penalty structure with a set of signal temporal logic formulas, based on the Responsibility-Sensitive Safety model, to generate trajectories that we expected to correlate with three different driving styles: aggressive, neutral, and defensive. An online study showed that people perceived different parameterizations of the motion planner as unique driving styles, and that most people tend to prefer a more defensive automated driving style, which correlated to their self-reported driving style.

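    The sketch below is a toy illustration of the general idea of scoring a trajectory with signal temporal logic (STL) robustness and weighting the violations into a style-dependent penalty. The predicates, thresholds, and weights are invented; the paper's actual formulas are based on the Responsibility-Sensitive Safety model and a calibrated penalty structure.

```python
# Toy STL-style trajectory scoring: compute worst-case margins for a few
# "always" predicates over a trajectory, then combine the violations with
# style-dependent weights. All numbers here are made up for illustration.
import numpy as np

dt = 0.5
speed = np.array([8.0, 9.5, 11.0, 12.5, 12.0, 10.0])      # m/s over time
headway = np.array([25.0, 22.0, 18.0, 15.0, 14.0, 16.0])  # m to lead vehicle

def always(margin_signal):
    """Robustness of 'globally phi': the worst-case margin over time."""
    return float(np.min(margin_signal))

# Predicate margins (positive = satisfied, negative = violated).
rho_speed = always(13.0 - speed)       # always keep speed below 13 m/s
rho_headway = always(headway - 12.0)   # always keep at least 12 m headway
accel = np.diff(speed) / dt
rho_comfort = always(2.0 - np.abs(accel))  # keep acceleration bounded

# Style-dependent weights: a "defensive" style penalizes violations more.
styles = {"aggressive": (0.2, 0.5, 0.3),
          "neutral": (1.0, 1.0, 1.0),
          "defensive": (2.0, 3.0, 2.0)}

for style, (w_speed, w_headway, w_comfort) in styles.items():
    # Penalize only the violated part of each margin (hinge on -rho).
    penalty = (w_speed * max(0.0, -rho_speed)
               + w_headway * max(0.0, -rho_headway)
               + w_comfort * max(0.0, -rho_comfort))
    print(f"{style:>10}: speed {rho_speed:+.2f}, headway {rho_headway:+.2f}, "
          f"comfort {rho_comfort:+.2f}, penalty {penalty:.2f}")
```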
  • 31.
    Kennedy, J.
    et al.
    Leite, Iolanda
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. Disney Research, United States.
    Pereira, A.
    Sun, M.
    Li, B.
    Jain, R.
    Cheng, R.
    Pincus, E.
    Carter, E. J.
    Lehman, J. F.
    Learning and reusing dialog for repeated interactions with a situated social agent (2017). In: 17th International Conference on Intelligent Virtual Agents, IVA 2017, Springer, 2017, Vol. 10498, p. 192-204. Conference paper (Refereed)
    Abstract [en]

    Content authoring for conversations is a limiting factor in creating verbal interactions with intelligent virtual agents. Building on techniques utilizing semi-situated learning in an incremental crowdworking pipeline, this paper introduces an embodied agent that self-authors its own dialog for social chat. In particular, the autonomous use of crowdworkers is supplemented with a generalization method that borrows and assesses the validity of dialog across conversational states. We argue that the approach offers a community-focused tailoring of dialog responses that is not available in approaches that rely solely on statistical methods across big data. We demonstrate the advantages that this can bring to interactions through data collected from 486 conversations between a situated social agent and 22 users during a 3-week-long evaluation period.

  • 32.
    Khanna, Parag
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Yadollahi, Elmira
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Björkman, Mårten
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Effects of Explanation Strategies to Resolve Failures in Human-Robot Collaboration (2023). In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 1829-1836. Conference paper (Refereed)
    Abstract [en]

    Despite significant improvements in robot capabilities, they are likely to fail in human-robot collaborative tasks due to high unpredictability in human environments and varying human expectations. In this work, we explore the role of explanation of failures by a robot in a human-robot collaborative task. We present a user study incorporating common failures in collaborative tasks with human assistance to resolve the failure. In the study, a robot and a human work together to fill a shelf with objects. Upon encountering a failure, the robot explains the failure and the resolution to overcome the failure, either through handovers or humans completing the task. The study is conducted using different levels of robotic explanation based on the failure action, failure cause, and action history, and different strategies in providing the explanation over the course of repeated interaction. Our results show that the success in resolving the failures is not only a function of the level of explanation but also the type of failures. Furthermore, while novice users rate the robot higher overall in terms of their satisfaction with the explanation, their satisfaction is not only a function of the robot's explanation level at a certain round but also the prior information they received from the robot.

  • 33.
    Khanna, Parag
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Yadollahi, Elmira
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Björkman, Mårten
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    How do Humans take an Object from a Robot: Behavior changes observed in a User Study2023In: HAI 2023 - Proceedings of the 11th Conference on Human-Agent Interaction, Association for Computing Machinery (ACM) , 2023, p. 372-374Conference paper (Refereed)
    Abstract [en]

    To facilitate human-robot interaction and gain human trust, a robot should recognize and adapt to changes in human behavior. This work documents different human behaviors observed while taking objects from an interactive robot in an experimental study, categorized across two dimensions: pull force applied and handedness. We also present the changes observed in human behavior upon repeated interaction with the robot to take various objects.

  • 34.
    Kontogiorgos, Dimosthenis
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    van Waveren, Sanne
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Wallberg, Olle
    KTH.
    Abelho Pereira, André Tiago
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Gustafson, Joakim
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Embodiment Effects in Interactions with Failing Robots2020In: CHI '20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, ACM Digital Library, 2020Conference paper (Refereed)
    Abstract [en]

    The increasing use of robots in real-world applications will inevitably cause users to encounter more failures in interactions. While there is a longstanding effort to bring human-likeness to robots, how robot embodiment affects users' perception of failures remains largely unexplored. In this paper, we extend prior work on robot failures by assessing the impact that embodiment and failure severity have on people's behaviours and their perception of robots. Our findings show that, with a smart-speaker embodiment, failures negatively affect users' intention to interact frequently with the device, but not with a human-like robot embodiment. Additionally, users rate the human-like robot significantly higher in terms of perceived intelligence and social presence. Our results further suggest that in higher-severity situations, human-likeness is distracting and detrimental to the interaction. Drawing on quantitative findings, we discuss benefits and drawbacks of embodiment in robot failures that occur in guided tasks.

  • 35.
    Kucherenko, Taras
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Jonell, Patrik
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    van Waveren, Sanne
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Henter, Gustav Eje
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Alexanderson, Simon
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Kjellström, Hedvig
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Gesticulator: A framework for semantically-aware speech-driven gesture generation2020In: ICMI '20: Proceedings of the 2020 International Conference on Multimodal Interaction, Association for Computing Machinery (ACM) , 2020Conference paper (Refereed)
    Abstract [en]

    During speech, people spontaneously gesticulate, which plays a key role in conveying information. Similarly, realistic co-speech gestures are crucial to enable natural and smooth interactions with social agents. Current end-to-end co-speech gesture generation systems use a single modality for representing speech: either audio or text. These systems are therefore confined to producing either acoustically-linked beat gestures or semantically-linked gesticulation (e.g., raising a hand when saying "high"): they cannot appropriately learn to generate both gesture types. We present a model designed to produce arbitrary beat and semantic gestures together. Our deep-learning based model takes both acoustic and semantic representations of speech as input, and generates gestures as a sequence of joint angle rotations as output. The resulting gestures can be applied to both virtual agents and humanoid robots. Subjective and objective evaluations confirm the success of our approach. The code and video are available at the project page svito-zar.github.io/gesticula
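
    The input/output interface described above (acoustic plus semantic speech features in, a sequence of joint-angle rotations out) can be sketched as follows; the feature dimensionalities and the placeholder linear map are assumptions, not the Gesticulator architecture.

```python
# Toy sketch of the input/output interface of a speech-driven gesture model:
# per-frame acoustic features and word-embedding (semantic) features are fused
# and mapped to joint-angle rotations. The feature sizes, the fusion, and the
# random linear "model" below are placeholders, not the Gesticulator network.

import numpy as np

rng = np.random.default_rng(0)

T = 120          # frames (e.g. 2 s at 60 fps)
AUDIO_DIM = 26   # e.g. MFCC-like features per frame (assumed)
TEXT_DIM = 50    # e.g. word-embedding features aligned to frames (assumed)
N_JOINTS = 15    # upper-body joints
OUT_DIM = N_JOINTS * 3  # one rotation triplet per joint

audio_feats = rng.normal(size=(T, AUDIO_DIM))
text_feats = rng.normal(size=(T, TEXT_DIM))

# Placeholder "network": a single linear map over the fused features.
W = rng.normal(scale=0.1, size=(AUDIO_DIM + TEXT_DIM, OUT_DIM))

fused = np.concatenate([audio_feats, text_feats], axis=1)   # (T, 76)
joint_rotations = (fused @ W).reshape(T, N_JOINTS, 3)

print("gesture sequence shape:", joint_rotations.shape)  # (120, 15, 3)
```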

  • 36.
    Li, Rui
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    van Almkerk, Marc
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    van Waveren, Sanne
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Carter, Elizabeth
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Comparing Human-Robot Proxemics between Virtual Reality and the Real World2019In: HRI '19: 2019 14TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, IEEE , 2019, p. 431-439Conference paper (Refereed)
    Abstract [en]

    Virtual Reality (VR) can greatly benefit Human-Robot Interaction (HRI) as a tool to iterate effectively across robot designs. However, possible system limitations of VR could influence the results such that they do not fully reflect real-life encounters with robots. In order to better deploy VR in HRI, we need to establish a basic understanding of the differences between HRI studies in the real world and in VR. This paper investigates the differences between real life and VR with a focus on proxemic preferences, in combination with exploring the effects of visual familiarity and spatial sound within the VR experience. Results suggested that people prefer closer interaction distances with a real, physical robot than with a virtual robot in VR. Additionally, the virtual robot was perceived as more discomforting than the real robot, which could account for the differences in proxemics. Overall, these results indicate that the perception of the robot has to be evaluated before the interaction can be studied. However, the results also suggested that VR settings with different visual familiarities are consistent with each other in how they affect HRI proxemics and virtual robot perceptions, indicating the freedom to study HRI in various VR scenarios. The effect of spatial sound in VR drew a more complex picture and thus calls for more in-depth research to understand its influence on HRI in VR.

  • 37.
    Linard, Alexis
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Torre, Ilaria
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Bartoli, Ermanno
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Sleat, Alex
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Tumova, Jana
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Real-time RRT* with Signal Temporal Logic Preferences2023In: 2023 IEEE/RSJ international conference on intelligent robots and systems (IROS), IEEE, 2023Conference paper (Other academic)
    Abstract [en]

    Signal Temporal Logic (STL) is a rigorous specification language that allows one to express various spatio-temporal requirements and preferences. Its semantics (called robustness) allows quantifying to what extent the STL specifications are met. In this work, we focus on enabling STL constraints and preferences in the Real-Time Rapidly Exploring Random Tree (RT-RRT*) motion planning algorithm in an environment with dynamic obstacles. We propose a cost function that guides the algorithm towards the asymptotically most robust solution, i.e., a plan that maximally adheres to the STL specification. In experiments, we applied our method to a social navigation case, where the STL specification captures spatio-temporal preferences on how a mobile robot should avoid an incoming human in a shared space. Our results show that our approach leads to plans adhering to the STL specification, while ensuring efficient cost computation.

    Download full text (pdf)
    fulltext
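
    A rough sketch of how an STL robustness term (here, an assumed specification "always keep at least d_min from the human") can be folded into a planning cost of the kind described in the entry above; the specification, the weighting, and the toy trajectories are assumptions, not the paper's cost function.

```python
# Illustrative sketch: combining path length with the robustness of a simple
# STL-style requirement ("always stay at least d_min away from the human")
# into a single planning cost. The weighting and the specification itself are
# assumptions, not the cost function proposed in the paper.

import numpy as np

def robustness_always_min_dist(path, human_traj, d_min):
    """Robustness of G(dist >= d_min): the worst-case margin along the path.
    Positive = satisfied with margin, negative = violated."""
    dists = np.linalg.norm(path - human_traj, axis=1)
    return float(np.min(dists - d_min))

def path_cost(path, human_traj, d_min=1.0, weight=2.0):
    """Lower is better: path length minus a reward for STL robustness."""
    length = float(np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1)))
    rho = robustness_always_min_dist(path, human_traj, d_min)
    return length - weight * rho

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 20)[:, None]
    human = np.hstack([5.0 - 5.0 * t, np.zeros_like(t)])    # human walking in
    direct = np.hstack([5.0 * t, np.zeros_like(t)])          # straight path
    swerve = np.hstack([5.0 * t, 1.5 * np.sin(np.pi * t)])   # detour around human
    for name, p in [("direct", direct), ("swerve", swerve)]:
        print(name, round(path_cost(p, human), 3))
```
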
  • 38.
    Linard, Alexis
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Torre, Ilaria
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Tumova, Jana
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Inference of Multi-Class STL Specifications for Multi-Label Human-Robot Encounters2022In: 2022 IEEE/RSJ international conference on intelligent robots and systems (IROS), Institute of Electrical and Electronics Engineers (IEEE) , 2022, p. 1305-1311Conference paper (Refereed)
    Abstract [en]

    This paper is concerned with formalizing human trajectories in human-robot encounters. Inspired by robot navigation tasks in human-crowded environments, we consider the case where a human and a robot walk towards each other, and where the human has to avoid colliding with the incoming robot. Further, humans may exhibit different behaviors, ranging from being in a hurry/minimizing completion time to maximizing safety. We propose a decision-tree-based algorithm to extract STL formulae from multi-label data. Our inference algorithm learns STL specifications from data containing multiple classes, where instances can be labelled by one or many classes. We base our evaluation on a dataset of trajectories collected through an online study reproducing human-robot encounters.

    Download full text (pdf)
    multiclass_stl_learn
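
    One step of decision-tree-style STL inference can be sketched as follows: a threshold for a primitive formula ("eventually distance < c") is chosen to maximize information gain on multi-label trajectory data. The primitive, the candidate grid, and the toy data are assumptions, not the authors' algorithm.

```python
# Illustrative sketch of one split of a decision-tree-style STL learner: pick
# the threshold c for a primitive "eventually (distance < c)" that best
# separates labelled trajectories, scoring candidates by information gain
# summed over labels (instances may carry several labels).

import numpy as np

def eventually_below(traj, c):
    """Does the trajectory ever get closer than c, i.e. F(dist < c)?"""
    return bool(np.min(traj) < c)

def entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def split_gain(trajs, labels, c, label):
    """Information gain on one binary label when splitting by the primitive."""
    sat = np.array([eventually_below(t, c) for t in trajs])
    y = np.array([label in ls for ls in labels], dtype=float)
    def h(mask):
        return entropy(y[mask].mean()) if mask.any() else 0.0
    before = entropy(y.mean())
    after = sat.mean() * h(sat) + (1 - sat.mean()) * h(~sat)
    return before - after

# Toy data: distance-to-robot profiles of human trajectories, each with labels.
trajs = [np.array([3.0, 2.0, 0.4]), np.array([3.0, 2.5, 2.2]),
         np.array([2.8, 1.0, 0.6]), np.array([3.2, 2.9, 2.5])]
labels = [{"hurried"}, {"cautious"}, {"hurried", "risky"}, {"cautious"}]

candidates = np.linspace(0.5, 2.5, 9)
all_labels = {"hurried", "cautious", "risky"}
best = max(candidates,
           key=lambda c: sum(split_gain(trajs, labels, c, l) for l in all_labels))
print(f"best threshold for 'eventually dist < c': c = {best:.2f}")
```
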
  • 39.
    Linard, Alexis
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. KTH Royal Inst Technol, Div Robot Percept & Learning, SE-10044 Stockholm, Sweden.;KTH .
    Torre, Ilaria
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Steen, Anders
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Tumova, Jana
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Formalizing Trajectories in Human-Robot Encounters via Probabilistic STL Inference2021In: 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE) , 2021, p. 9857-9862Conference paper (Refereed)
    Abstract [en]

    In this paper, we are interested in formalizing human trajectories in human-robot encounters. We consider a particular case where a human and a robot walk towards each other. A question that arises is whether, when, and how humans will deviate from their trajectory to avoid a collision. These human trajectories can then be used to generate socially acceptable robot trajectories. To model these trajectories, we propose a data-driven algorithm to extract a formal specification expressed in Signal Temporal Logic with probabilistic predicates. We evaluated our method on trajectories collected through an online study where participants had to avoid colliding with a robot in a shared environment. Further, we demonstrate that probabilistic STL is a suitable formalism to depict human behavior, choices and preferences in specific scenarios of social navigation.

    Download full text (pdf)
    fulltext
  • 40.
    Marta, Daniel
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Holk, Simon
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Pek, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Tumova, Jana
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Aligning Human Preferences with Baseline Objectives in Reinforcement Learning2023In: 2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2023), Institute of Electrical and Electronics Engineers (IEEE) , 2023Conference paper (Refereed)
    Abstract [en]

    Practical implementations of deep reinforcement learning (deep RL) have been challenging due to a multitude of factors, such as designing reward functions that cover every possible interaction. To address the heavy burden of robot reward engineering, we aim to leverage subjective human preferences gathered in the context of human-robot interaction, while taking advantage of a baseline reward function when available. By considering baseline objectives designed beforehand, we are able to narrow down the policy space, requesting human attention only when their input matters the most. To allow control over the optimization of different objectives, our approach contemplates a multi-objective setting. We achieve human-compliant policies by sequentially training an optimal policy from a baseline specification and collecting queries on pairs of trajectories. These policies are obtained by training a reward estimator to generate Pareto-optimal policies that include human-preferred behaviours. Our approach ensures sample efficiency, and we conducted a user study to collect real human preferences, which we used to obtain a policy in a social navigation environment.

    Download full text (pdf)
    fulltext
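
    A minimal sketch of one ingredient of preference-based reward learning as described in the entry above: a Bradley-Terry style estimator fitted to pairwise trajectory preferences and blended with a baseline objective. The linear features, the gradient-descent fitting, and the fixed blending weight are assumptions, not the paper's method.

```python
# Illustrative sketch: fitting a reward estimator from pairwise trajectory
# preferences with a Bradley-Terry / logistic model, then combining it with a
# pre-designed baseline reward. Features and weighting are assumptions.

import numpy as np

def traj_features(traj):
    """Toy per-trajectory features: (path length, min distance to human)."""
    return np.array([traj["length"], traj["min_dist"]])

def fit_preference_reward(pairs, prefs, lr=0.1, steps=500):
    """pairs: list of (traj_a, traj_b); prefs[i] = 1 if a is preferred over b."""
    w = np.zeros(2)
    for _ in range(steps):
        grad = np.zeros_like(w)
        for (a, b), y in zip(pairs, prefs):
            d = traj_features(a) - traj_features(b)
            p = 1.0 / (1.0 + np.exp(-w @ d))    # P(a preferred over b)
            grad += (y - p) * d                  # gradient of the log-likelihood
        w += lr * grad / len(pairs)
    return w

def combined_reward(traj, w, baseline_weight=0.5):
    """Blend the learned preference reward with a baseline objective
    (here: shorter paths are better)."""
    learned = w @ traj_features(traj)
    baseline = -traj["length"]
    return baseline_weight * baseline + (1 - baseline_weight) * learned

if __name__ == "__main__":
    # Toy preferences: humans favour trajectories keeping larger distances.
    pairs = [({"length": 6.0, "min_dist": 1.5}, {"length": 5.0, "min_dist": 0.3}),
             ({"length": 7.0, "min_dist": 1.2}, {"length": 5.5, "min_dist": 0.4})]
    prefs = [1, 1]
    w = fit_preference_reward(pairs, prefs)
    print("learned weights (length, min_dist):", np.round(w, 3))
    cautious = {"length": 6.0, "min_dist": 1.5}
    rushed = {"length": 5.0, "min_dist": 0.3}
    print("combined reward, cautious vs rushed:",
          round(combined_reward(cautious, w), 3),
          round(combined_reward(rushed, w), 3))
```
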
  • 41.
    Marta, Daniel
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Holk, Simon
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Pek, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Tumova, Jana
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    VARIQuery: VAE Segment-based Active Learning for Query Selection in Preference-based Reinforcement Learning2023Conference paper (Refereed)
    Abstract [en]

    Human-in-the-loop reinforcement learning (RL) methods actively integrate human knowledge to create reward functions for various robotic tasks. Learning from preferences shows promise, as it alleviates the requirement for demonstrations by querying humans on state-action sequences. However, the limited granularity of sequence-based approaches complicates temporal credit assignment. The amount of human querying is contingent on query quality, as redundant queries result in excessive human involvement. This paper addresses the often-overlooked aspect of query selection, which is closely related to active learning (AL). We propose a novel query selection approach that leverages variational autoencoder (VAE) representations of state sequences. In this manner, we formulate queries that are diverse in nature while simultaneously taking reward model estimations into account. We compare our approach to the current state-of-the-art query selection methods in preference-based RL and find ours to be either on par or more sample-efficient, through extensive benchmarking on simulated environments relevant to robotics. Lastly, we conduct an online study to verify the effectiveness of our query selection approach with real human feedback and examine several metrics related to human effort.

    Download full text (pdf)
    fulltext
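
    The query-selection idea above can be sketched roughly as follows: segments are embedded in a latent space (a stand-in projection here; the paper uses a VAE) and a diverse batch is chosen greedily for human preference queries. The encoder stub, distance measure, and greedy rule are assumptions, not the exact procedure.

```python
# Illustrative sketch of VAE-style query selection: segments of experience are
# embedded (here by a stand-in "encoder") and a diverse batch of segments is
# picked greedily in latent space (k-center style) to be shown to the human as
# preference queries.

import numpy as np

rng = np.random.default_rng(2)

N_SEGMENTS, SEG_LEN, STATE_DIM, LATENT_DIM = 200, 10, 4, 8

# Stand-in for VAE encoding: flatten each segment and apply a fixed projection.
segments = rng.normal(size=(N_SEGMENTS, SEG_LEN, STATE_DIM))
proj = rng.normal(size=(SEG_LEN * STATE_DIM, LATENT_DIM))
latents = segments.reshape(N_SEGMENTS, -1) @ proj

def select_diverse(latents, k):
    """Greedy k-center: repeatedly pick the segment farthest from the
    already-selected set, starting from the one closest to the mean."""
    chosen = [int(np.argmin(np.linalg.norm(latents - latents.mean(0), axis=1)))]
    while len(chosen) < k:
        d_to_chosen = np.min(
            np.linalg.norm(latents[:, None, :] - latents[chosen][None, :, :],
                           axis=2),
            axis=1)
        d_to_chosen[chosen] = -np.inf     # never re-pick a selected segment
        chosen.append(int(np.argmax(d_to_chosen)))
    return chosen

query_batch = select_diverse(latents, k=5)
print("segments selected for human preference queries:", query_batch)
```
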
  • 42.
    Marta, Daniel
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. Digital Futures.
    Holk, Simon
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. Digital futures.
    Pek, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. Digital Futures.
    Tumova, Jana
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. Digital Futures.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. Digital Futures.
    VARIQuery: VAE Segment-Based Active Learning for Query Selection in Preference-Based Reinforcement Learning2023In: 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023, Institute of Electrical and Electronics Engineers (IEEE) , 2023, p. 7878-7885Conference paper (Refereed)
    Abstract [en]

    Human-in-the-loop reinforcement learning (RL) methods actively integrate human knowledge to create reward functions for various robotic tasks. Learning from preferences shows promise, as it alleviates the requirement for demonstrations by querying humans on state-action sequences. However, the limited granularity of sequence-based approaches complicates temporal credit assignment. The amount of human querying is contingent on query quality, as redundant queries result in excessive human involvement. This paper addresses the often-overlooked aspect of query selection, which is closely related to active learning (AL). We propose a novel query selection approach that leverages variational autoencoder (VAE) representations of state sequences. In this manner, we formulate queries that are diverse in nature while simultaneously taking reward model estimations into account. We compare our approach to the current state-of-the-art query selection methods in preference-based RL and find ours to be either on par or more sample-efficient, through extensive benchmarking on simulated environments relevant to robotics. Lastly, we conduct an online study to verify the effectiveness of our query selection approach with real human feedback and examine several metrics related to human effort.

  • 43. Marta, Daniel
    et al.
    Pek, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Melsion, Gaspar Isaac
    Tumova, Jana
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Human-Feedback Shield Synthesis for Perceived Safety in Deep Reinforcement Learning2022In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 7, no 1, p. 406-413Article in journal (Other academic)
    Abstract [en]

    Despite the successes of deep reinforcement learning (RL), it is still challenging to obtain safe policies. Formal verification approaches ensure safety at all times, but usually overly restrict the agent's behaviors, since they assume adversarial behavior of the environment. Instead of assuming adversarial behavior, we suggest focusing on perceived safety, i.e., policies that avoid undesired behaviors while having a desired level of conservativeness. To obtain policies that are perceived as safe, we propose a shield synthesis framework with two distinct loops: (1) an inner loop that trains policies with a set of actions constrained by shields whose conservativeness is parameterized, and (2) an outer loop that presents example rollouts of the policy to humans and collects their feedback to update the parameters of the shields in the inner loop. We demonstrate our approach on an RL benchmark (lunar landing) and on a scenario in which a mobile robot navigates around humans. For the latter, we conducted two user studies to obtain policies that were perceived as safe. Our results indicate that our framework converges to policies that are perceived as safe, is robust against noisy feedback, and can query feedback for multiple policies at the same time.

    Download full text (pdf)
    fulltext
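
    A minimal sketch of the two-loop structure described in the entry above, assuming a one-parameter shield: the inner loop filters actions through the shield, and the outer loop adjusts the conservativeness parameter from human feedback on rollouts. The shield form and the update rule are assumptions, not the paper's synthesis procedure.

```python
# Illustrative sketch: an inner loop where a shield with a conservativeness
# parameter filters the agent's actions, and an outer loop that adjusts that
# parameter from human feedback on example rollouts.

def shield(actions, predicted_min_dists, conservativeness):
    """Keep only actions whose predicted clearance to the human exceeds the
    conservativeness threshold (in metres)."""
    return [a for a, d in zip(actions, predicted_min_dists)
            if d >= conservativeness]

def update_conservativeness(theta, feedback, step=0.1):
    """Outer loop: humans label a rollout as 'unsafe' (increase caution) or
    'too_conservative' (decrease caution); 'ok' leaves theta unchanged."""
    if feedback == "unsafe":
        return theta + step
    if feedback == "too_conservative":
        return max(0.0, theta - step)
    return theta

if __name__ == "__main__":
    theta = 0.5
    actions = ["go_straight", "swerve_left", "slow_down", "stop"]
    clearances = [0.3, 0.9, 1.2, 2.0]   # predicted min distance per action

    for feedback in ["unsafe", "unsafe", "too_conservative"]:
        allowed = shield(actions, clearances, theta)
        print(f"theta={theta:.1f}  allowed={allowed}  feedback={feedback}")
        theta = update_conservativeness(theta, feedback)
```
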
  • 44.
    Melsión, Gaspar Isaac
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Stower, Rebecca
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Winkle, Katie
    Uppsala Univ, Dept Informat Technol, Uppsala, Sweden..
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    What's at Stake?: Robot explanations matter for high but not low-stake scenarios2023In: 2023 32nd IEEE international conference on robot and human interactive communication, RO-MAN, Institute of Electrical and Electronics Engineers (IEEE) , 2023, p. 2421-2426Conference paper (Refereed)
    Abstract [en]

    Although the field of Explainable Artificial Intelligence (XAI) in Human-Robot Interaction is gathering increasing attention, how well different explanations compare across HRI scenarios is still not well understood. We conducted an exploratory online study with 335 participants analysing the interaction between type of explanation (counterfactual, feature-based, and no explanation), the stake of the scenario (high, low) and the application scenario (healthcare, industry). Participants viewed one of 12 different vignettes depicting a combination of these three factors and rated their system understanding and trust in the robot. Compared to no explanation, both counterfactual and feature-based explanations improved system understanding and performance trust (but not moral trust). Additionally, when no explanation was present, high-stake scenarios led to significantly worse performance trust and system understanding. These findings suggest that explanations can be used to calibrate users' perceptions of the robot in high-stake scenarios.

  • 45.
    Melsión, Gaspar Isaac
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Torre, Ilaria
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Vidal, E.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Using Explainability to Help Children Understand Gender Bias in AI2021In: Proceedings of Interaction Design and Children, IDC 2021, Association for Computing Machinery (ACM) , 2021, p. 87-99Conference paper (Refereed)
    Abstract [en]

    Machine learning systems have become ubiquitous in our society. This has raised concerns about the potential discrimination that these systems might exert due to unconscious bias present in the data, for example regarding gender and race. While this issue has been proposed as an essential subject to be included in new AI curricula for schools, research has shown that it is a difficult topic for students to grasp. We propose an educational platform tailored to raise awareness of gender bias in supervised learning, with the novelty of using Grad-CAM as an explainability technique that enables the classifier to visually explain its own predictions. Our study demonstrates that preadolescents (N=78, age 10-14) significantly improve their understanding of the concept of bias in terms of gender discrimination, increasing their ability to recognize biased predictions when they interact with the interpretable model, highlighting its suitability for educational programs.
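
    Since the platform's explainability component is Grad-CAM, the standard computation is easy to show: channel weights are the spatially averaged gradients of the class score, and the heatmap is the ReLU of the weighted sum of activation maps. The random arrays below stand in for a real network's last convolutional layer.

```python
# Minimal Grad-CAM computation on pre-computed activations and gradients.

import numpy as np

rng = np.random.default_rng(3)

C, H, W = 64, 7, 7
activations = rng.normal(size=(C, H, W))   # A_k: feature maps
gradients = rng.normal(size=(C, H, W))     # dScore/dA_k for the target class

def grad_cam(activations, gradients):
    weights = gradients.mean(axis=(1, 2))          # alpha_k: GAP of gradients
    cam = np.einsum("c,chw->hw", weights, activations)
    cam = np.maximum(cam, 0.0)                     # ReLU keeps positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                      # normalise to [0, 1] for display
    return cam

heatmap = grad_cam(activations, gradients)
print("heatmap shape:", heatmap.shape, "max:", round(float(heatmap.max()), 2))
```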

  • 46.
    Mohamed, Youssef
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Ballardini, Giulia
    Univ Genoa, Genoa, Italy..
    Parreira, Maria Teresa
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Lemaignan, Severin
    PAL Robot, Barcelona, Spain..
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Automatic Frustration Detection Using Thermal Imaging2022In: PROCEEDINGS OF THE 2022 17TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI '22), Institute of Electrical and Electronics Engineers (IEEE) , 2022, p. 451-460Conference paper (Refereed)
    Abstract [en]

    To achieve seamless interactions, robots have to be capable of reliably detecting affective states in real time. One of the possible states that humans go through while interacting with robots is frustration. Detecting frustration from RGB images can be challenging in some real-world situations; thus, we investigate in this work whether thermal imaging can be used to create a model that is capable of detecting frustration induced by cognitive load and failure. To train our model, we collected a data set from 18 participants experiencing both types of frustration induced by a robot. The model was tested using features from several modalities: thermal, RGB, Electrodermal Activity (EDA), and all three combined. When data from both frustration cases were combined and used as training input, the model reached an accuracy of 89% with just RGB features, 87% using only thermal features, 84% using EDA, and 86% when using all modalities. Furthermore, the highest accuracy for the thermal data was reached using three facial regions of interest: nose, forehead and lower lip.
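
    The feature side of the pipeline described above can be sketched as follows: mean temperatures are extracted from the reported regions of interest (nose, forehead, lower lip) and passed to a classifier, here a trivial nearest-centroid stand-in. The ROI coordinates, centroids, and classifier are assumptions, not the paper's model.

```python
# Illustrative sketch: mean-temperature features from facial regions of
# interest in a thermal frame, fed to a toy nearest-centroid classifier.

import numpy as np

rng = np.random.default_rng(4)

# A fake 120x160 thermal frame in degrees Celsius.
frame = 34.0 + rng.normal(scale=0.5, size=(120, 160))

ROIS = {                       # (row_start, row_end, col_start, col_end), assumed
    "nose":      (60, 75, 70, 90),
    "forehead":  (10, 30, 50, 110),
    "lower_lip": (85, 95, 70, 90),
}

def roi_features(frame):
    """Mean temperature per region of interest, in ROI declaration order."""
    return np.array([frame[r0:r1, c0:c1].mean()
                     for r0, r1, c0, c1 in ROIS.values()])

# Toy class centroids (relaxed vs. frustrated) in ROI-feature space; assumed.
centroids = {"relaxed": np.array([34.0, 34.2, 34.0]),
             "frustrated": np.array([33.4, 34.5, 34.3])}

def classify(features):
    return min(centroids, key=lambda c: np.linalg.norm(features - centroids[c]))

feats = roi_features(frame)
print(dict(zip(ROIS, np.round(feats, 2))), "->", classify(feats))
```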

  • 47.
    Morillo-Mendez, Lucas
    et al.
    Örebro Univ, Ctr Appl Autonomous Sensor Syst, Örebro, Sweden..
    Stower, Rebecca
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Sleat, Alex
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Schreiter, Tim
    Örebro Univ, Ctr Appl Autonomous Sensor Syst, Örebro, Sweden..
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Mozos, Oscar Martinez
    Örebro Univ, Ctr Appl Autonomous Sensor Syst, Örebro, Sweden..
    Schrooten, Martien G. S.
    Örebro Univ, Sch Behav Social & Legal Sci, Örebro, Sweden..
    Can the robot "see" what I see?: Robot gaze drives attention depending on mental state attribution2023In: Frontiers in Psychology, E-ISSN 1664-1078, Vol. 14, article id 1215771Article in journal (Refereed)
    Abstract [en]

    Mentalizing, where humans infer the mental states of others, facilitates understanding and interaction in social situations. Humans also tend to adopt mentalizing strategies when interacting with robotic agents. There is an ongoing debate about how inferred mental states affect gaze following, a key component of joint attention. Although the gaze from a robot induces gaze following, the impact of mental state attribution on robotic gaze following remains unclear. To address this question, we asked forty-nine young adults to perform a gaze cueing task during which mental state attribution was manipulated as follows. Participants sat facing a robot that turned its head to the screen at its left or right. Their task was to respond to targets that appeared either at the screen the robot gazed at or at the other screen. At the baseline, the robot was positioned so that participants would perceive it as being able to see the screens. We expected faster response times to targets at the screen the robot gazed at than targets at the non-gazed screen (i.e., gaze cueing effect). In the experimental condition, the robot's line of sight was occluded by a physical barrier such that participants would perceive it as unable to see the screens. Our results revealed gaze cueing effects in both conditions although the effect was reduced in the occluded condition compared to the baseline. These results add to the expanding fields of social cognition and human-robot interaction by suggesting that mentalizing has an impact on robotic gaze following.

  • 48.
    Orthmann, Bastian
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Torre, Ilaria
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Sounding Robots: Design and Evaluation of Auditory Displays for Unintentional Human-robot Interaction2023In: ACM Transactions on Human-Robot Interaction, E-ISSN 2573-9522, Vol. 12, no 4, article id 49Article in journal (Refereed)
    Abstract [en]

    Non-verbal communication is important in HRI, particularly when humans and robots do not need to actively engage in a task together, but rather they co-exist in a shared space. Robots might still need to communicate states such as urgency or availability, and where they intend to go, to avoid collisions and disruptions. Sounds could be used to communicate such states and intentions in an intuitive and non-disruptive way. Here, we propose a multi-layer classification system for displaying various robot information simultaneously via sound. We first conceptualise which robot features could be displayed (robot size, speed, availability for interaction, urgency, and directionality); we then map them to a set of audio parameters. The designed sounds were then evaluated in five online studies, where people listened to the sounds and were asked to identify the associated robot features. The sounds were generally understood as intended by participants, especially when they were evaluated one feature at a time, and partially when they were evaluated two features simultaneously. The results of these evaluations suggest that sounds can be successfully used to communicate robot states and intended actions implicitly and intuitively.
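
    A minimal sketch of a state-to-sound mapping in the spirit of the article above: normalised robot features (size, speed, urgency, availability) are mapped to audio parameters that could be rendered simultaneously. The specific ranges and assignments are assumptions for illustration, not the evaluated mapping.

```python
# Illustrative sketch of a multi-layer robot-state-to-sound mapping.

def sound_parameters(size, speed, urgency, available):
    """All inputs in [0, 1] except `available` (bool); returns synth settings."""
    return {
        "pitch_hz":    220.0 + 440.0 * (1.0 - size),   # bigger robot -> lower pitch
        "tempo_bpm":   60.0 + 120.0 * speed,           # faster robot -> faster pulse
        "loudness_db": -30.0 + 24.0 * urgency,         # more urgent -> louder
        "timbre":      "harmonic" if available else "muted",
    }

if __name__ == "__main__":
    print(sound_parameters(size=0.8, speed=0.3, urgency=0.9, available=False))
    print(sound_parameters(size=0.2, speed=0.7, urgency=0.1, available=True))
```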

  • 49. Paiva, Ana
    et al.
    Leite, Iolanda
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Boukricha, Hana
    Wachsmuth, Ipke
    Empathy in Virtual Agents and Robots: A Survey2017In: ACM Transactions on Interactive Intelligent Systems, ISSN 2160-6455, E-ISSN 2160-6463, Vol. 7, no 3, article id 11Article in journal (Refereed)
    Abstract [en]

    This article surveys the area of computational empathy, analysing different ways by which artificial agents can simulate and trigger empathy in their interactions with humans. Empathic agents can be seen as agents that have the capacity to place themselves into the position of a user's or another agent's emotional situation and respond appropriately. We also survey artificial agents that, by their design and behaviour, can lead users to respond emotionally as if they were experiencing the agent's situation. In the course of this survey, we present the research conducted to date on empathic agents in light of the principles and mechanisms of empathy found in humans. We end by discussing some of the main challenges that this exciting area will be facing in the future.

  • 50.
    Panesar, Amrita
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Dogan, Fethiye Irmak
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Improving Visual Question Answering by Leveraging Depth and Adapting Explainability2022In: 2022 31St Ieee International Conference On Robot And Human Interactive Communication (Ieee Ro-Man 2022), Institute of Electrical and Electronics Engineers (IEEE) , 2022, p. 252-259Conference paper (Refereed)
    Abstract [en]

    During human-robot conversation, it is critical for robots to be able to answer users' questions accurately and provide a suitable explanation for why they arrive at the answer they provide. Depth is a crucial component in producing more intelligent robots that can respond correctly, as some questions might rely on spatial relations within the scene, for which 2D RGB data alone would be insufficient. Due to the lack of existing depth datasets for the task of VQA, we introduce a new dataset, VQA-SUNRGBD. When we compare our proposed model on this RGB-D dataset against the baseline VQN network on RGB data alone, we show that ours outperforms it, particularly on questions relating to depth, such as asking about the proximity of objects and the relative positions of objects to one another. We also provide Grad-CAM activations to gain insight into the predictions on depth-related questions and find that our method produces better visual explanations than Grad-CAM on RGB data. To our knowledge, this work is the first of its kind to leverage depth and an explainability module to produce an explainable Visual Question Answering (VQA) system.
