kth.se Publications
1 - 22 of 22
  • 1.
    Arriola-Rios, Veronica E.
    et al.
    Univ Nacl Autonoma Mexico, UNAM, Fac Sci, Dept Math, Mexico City, DF, Mexico.
    Guler, Puren
    Örebro Univ, Ctr Appl Autonomous Sensor Syst, Autonomous Mobile Manipulat Lab, Örebro, Sweden.
    Ficuciello, Fanny
    Univ Naples Federico II, PRISMA Lab, Dept Elect Engn & Informat Technol, Naples, Italy.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    Siciliano, Bruno
    Univ Naples Federico II, PRISMA Lab, Dept Elect Engn & Informat Technol, Naples, Italy.
    Wyatt, Jeremy L.
    Univ Birmingham, Sch Comp Sci, Birmingham, W Midlands, England.
    Modeling of Deformable Objects for Robotic Manipulation: A Tutorial and Review (2020). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 7, article id 82. Article, review/survey (Refereed)
    Abstract [en]

    Manipulation of deformable objects has given rise to an important set of open problems in the field of robotics. Application areas include robotic surgery, household robotics, manufacturing, logistics, and agriculture, to name a few. Related research problems span modeling and estimation of an object's shape, estimation of an object's material properties, such as elasticity and plasticity, object tracking and state estimation during manipulation, and manipulation planning and control. In this survey article, we start by providing a tutorial on foundational aspects of models of shape and shape dynamics. We then use this as the basis for a review of existing work on learning and estimation of these models and on motion planning and control to achieve desired deformations. We also discuss potential future lines of work.
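
    For readers new to the area, the foundational shape-dynamics models that such a tutorial covers can be illustrated with a toy example. Below is a minimal mass-spring chain, a purely illustrative sketch with arbitrary parameters, not code taken from the paper:

```python
# Minimal mass-spring chain: one of the foundational deformable-object
# models covered in tutorials of this kind. Illustrative only; all
# parameters are arbitrary assumptions, not taken from the paper.
import numpy as np

n = 10                                   # number of point masses along the chain
mass, k, c, dt = 0.1, 50.0, 0.5, 0.001   # mass, spring stiffness, damping, time step
rest = 0.05                              # rest length between neighbouring masses

x = np.arange(n) * rest                  # positions along one axis
v = np.zeros(n)                          # velocities

def step(x, v, f_ext):
    """Semi-implicit Euler step for a chain of masses joined by springs."""
    f = f_ext.copy()
    for i in range(n - 1):
        stretch = (x[i + 1] - x[i]) - rest   # deviation from rest length
        f[i] += k * stretch
        f[i + 1] -= k * stretch
    f -= c * v                               # viscous damping
    v = v + dt * f / mass
    v[0] = 0.0                               # first mass is clamped in place
    x = x + dt * v
    return x, v

poke = np.zeros(n)
poke[-1] = 2.0                           # constant external force on the free end
for _ in range(2000):
    x, v = step(x, v, poke)
print("tip displacement under load:", x[-1] - (n - 1) * rest)
```

    More expressive models discussed in this line of work (finite element methods, learned latent dynamics) keep the same state-update structure while refining how the internal forces are computed.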

  • 2.
    Bütepage, Judith
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Ghadirzadeh, Ali
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. Intelligent Robotics Research Group, Aalto University, Espoo, Finland.
    Öztimur Karadag, Özge
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. Department of Computer Engineering, Alanya Alaaddin Keykubat University, Antalya, Turkey.
    Björkman, Mårten
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Imitating by Generating: Deep Generative Models for Imitation of Interactive Tasks (2020). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 7, article id 47. Article in journal (Refereed)
    Abstract [en]

    To coordinate actions with an interaction partner requires a constant exchange of sensorimotor signals. Humans acquire these skills in infancy and early childhood mostly by imitation learning and active engagement with a skilled partner. They require the ability to predict and adapt to one's partner during an interaction. In this work we want to explore these ideas in a human-robot interaction setting in which a robot is required to learn interactive tasks from a combination of observational and kinesthetic learning. To this end, we propose a deep learning framework consisting of a number of components for (1) human and robot motion embedding, (2) motion prediction of the human partner, and (3) generation of robot joint trajectories matching the human motion. As long-term motion prediction methods often suffer from the problem of regression to the mean, our technical contribution here is a novel probabilistic latent variable model which does not predict in joint space but in latent space. To test the proposed method, we collect human-human interaction data and human-robot interaction data of four interactive tasks “hand-shake,” “hand-wave,” “parachute fist-bump,” and “rocket fist-bump.” We demonstrate experimentally the importance of predictive and adaptive components as well as low-level abstractions to successfully learn to imitate human behavior in interactive social tasks.
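
    The key technical idea, predicting in a learned latent space rather than directly in joint space, can be shown as a structural sketch. The dimensions and the random, untrained weights below are placeholders for exposition; the paper learns a deep probabilistic latent variable model from interaction data:

```python
# Structural sketch of "predict in latent space, not joint space": an encoder
# maps observed joint angles to a low-dimensional latent code, a simple latent
# dynamics model predicts the next code, and a decoder maps back to joint
# space. Weights are random placeholders, not a trained model.
import numpy as np

rng = np.random.default_rng(0)
joint_dim, latent_dim = 14, 3            # assumed dimensions, for illustration

W_enc = rng.normal(scale=0.1, size=(latent_dim, joint_dim))   # encoder
W_dyn = rng.normal(scale=0.1, size=(latent_dim, latent_dim))  # latent dynamics
W_dec = rng.normal(scale=0.1, size=(joint_dim, latent_dim))   # decoder

def predict_next_pose(q_t):
    """Predict the next joint configuration by stepping in latent space."""
    z_t = np.tanh(W_enc @ q_t)           # encode the current pose
    z_next = np.tanh(W_dyn @ z_t)        # one prediction step in latent space
    return W_dec @ z_next                # decode back to joint space

q = rng.normal(size=joint_dim)           # a dummy observed pose
print(predict_next_pose(q))
```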

  • 3.
    Calvo-Barajas, Natalia
    et al.
    Uppsala Univ, Dept Informat Technol, Uppsala Social Robot Lab, Uppsala, Sweden.
    Elgarf, Maha
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    Perugia, Giulia
    Uppsala Univ, Dept Informat Technol, Uppsala Social Robot Lab, Uppsala, Sweden.
    Paiva, Ana
    Univ Lisbon, Inst Super Tecn IST, Dept Comp Sci & Engn, Lisbon, Portugal.
    Peters, Christopher
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    Castellano, Ginevra
    Uppsala Univ, Dept Informat Technol, Uppsala Social Robot Lab, Uppsala, Sweden.
    Hurry Up, We Need to Find the Key! How Regulatory Focus Design Affects Children's Trust in a Social Robot (2021). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 8, article id 652035. Article in journal (Refereed)
    Abstract [en]

    In educational scenarios involving social robots, understanding the way robot behaviors affect children's motivation to achieve their learning goals is of vital importance. It is crucial for the formation of a trust relationship between the child and the robot so that the robot can effectively fulfill its role as a learning companion. In this study, we investigate the effect of a regulatory focus design scenario on the way children interact with a social robot. Regulatory focus theory is a type of self-regulation that involves specific strategies in pursuit of goals. It provides insights into how a person achieves a particular goal, either through a strategy focused on "promotion" that aims to achieve positive outcomes or through one focused on "prevention" that aims to avoid negative outcomes. In a user study, 69 children (7-9 years old) played a regulatory focus design goal-oriented collaborative game with the EMYS robot. We assessed children's perception of likability and competence and their trust in the robot, as well as their willingness to follow the robot's suggestions when pursuing a goal. Results showed that children perceived the prevention-focused robot as being more likable than the promotion-focused robot. We observed that a regulatory focus design did not directly affect trust. However, the perception of likability and competence was positively correlated with children's trust but negatively correlated with children's acceptance of the robot's suggestions.

  • 4.
    Chellapurath, Mrudul
    et al.
    KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Vehicle Engineering and Solid Mechanics. Max Planck Institute for Intelligent Systems, Stuttgart, Germany.
    Khandelwal, Pranav C.
    Max Planck Institute for Intelligent Systems, Stuttgart, Germany; Institute of Flight Mechanics and Controls, University of Stuttgart, Stuttgart, Germany.
    Schulz, Andrew K.
    Max Planck Institute for Intelligent Systems, Stuttgart, Germany.
    Bioinspired robots can foster nature conservation (2023). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 10, article id 1145798. Article in journal (Refereed)
    Abstract [en]

    We live in a time of unprecedented scientific and human progress while being increasingly aware of its negative impacts on our planet’s health. Aerial, terrestrial, and aquatic ecosystems have declined significantly, putting us on course for a sixth mass extinction event. Nonetheless, the advances made in science, engineering, and technology have given us the opportunity to reverse some of our ecosystem damage and preserve them through conservation efforts around the world. However, current conservation efforts are primarily human-led, with assistance from conventional robotic systems which limit their scope and effectiveness, along with negatively impacting the surroundings. In this perspective, we present the field of bioinspired robotics to develop versatile agents for future conservation efforts that can operate in the natural environment while minimizing the disturbance/impact to its inhabitants and the environment’s natural state. We provide an operational and environmental framework that should be considered while developing bioinspired robots for conservation. These considerations go beyond addressing the challenges of human-led conservation efforts and leverage the advancements in the field of materials, intelligence, and energy harvesting, to make bioinspired robots move and sense like animals. In doing so, it makes bioinspired robots an attractive, non-invasive, sustainable, and effective conservation tool for exploration, data collection, intervention, and maintenance tasks. Finally, we discuss the development of bioinspired robots in the context of collaboration, practicality, and applicability that would ensure their further development and widespread use to protect and preserve our natural world.

  • 5.
    Cumbal, Ronald
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Axelsson, Agnes
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Mehta, Shivam
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Engwall, Olov
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Stereotypical nationality representations in HRI: perspectives from international young adults (2023). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 10, article id 1264614. Article in journal (Refereed)
    Abstract [en]

    People often form immediate expectations about other people, or groups of people, based on visual appearance and characteristics of their voice and speech. These stereotypes, often inaccurate or overgeneralized, may translate to robots that carry human-like qualities. This study aims to explore if nationality-based preconceptions regarding appearance and accents can be found in people's perception of a virtual and a physical social robot. In an online survey with 80 subjects evaluating different first-language-influenced accents of English and nationality-influenced human-like faces for a virtual robot, we find that accents, in particular, lead to preconceptions on perceived competence and likeability that correspond to previous findings in social science research. In a physical interaction study with 74 participants, we then studied if the perception of competence and likeability is similar after interacting with a robot portraying one of four different nationality representations from the online survey. We find that preconceptions on national stereotypes that appeared in the online survey vanish or are overshadowed by factors related to general interaction quality. We do, however, find some effects of the robot's stereotypical alignment with the subject group, with Swedish subjects (the majority group in this study) rating the Swedish-accented robot as less competent than the international group, but, on the other hand, recalling more facts from the Swedish robot's presentation than the international group does. In an extension in which the physical robot was replaced by a virtual robot interacting in the same scenario online, we further found the same result: preconceptions are of less importance after actual interactions, hence demonstrating that the differences in the ratings of the robot between the online survey and the interaction are not due to the interaction medium. We hence conclude that attitudes towards stereotypical national representations in HRI have a weak effect, at least for the user group included in this study (primarily educated young students in an international setting).

  • 6.
    Deichler, Anna
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Wang, Siyang
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Alexanderson, Simon
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Beskow, Jonas
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Learning to generate pointing gestures in situated embodied conversational agents (2023). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 10, article id 1110534. Article in journal (Refereed)
    Abstract [en]

    One of the main goals of robotics and intelligent agent research is to enable them to communicate with humans in physically situated settings. Human communication consists of both verbal and non-verbal modes. Recent studies in enabling communication for intelligent agents have focused on verbal modes, i.e., language and speech. However, in a situated setting the non-verbal mode is crucial for an agent to adapt flexible communication strategies. In this work, we focus on learning to generate non-verbal communicative expressions in situated embodied interactive agents. Specifically, we show that an agent can learn pointing gestures in a physically simulated environment through a combination of imitation and reinforcement learning that achieves high motion naturalness and high referential accuracy. We compared our proposed system against several baselines in both subjective and objective evaluations. The subjective evaluation is done in a virtual reality setting where an embodied referential game is played between the user and the agent in a shared 3D space, a setup that fully assesses the communicative capabilities of the generated gestures. The evaluations show that our model achieves a higher level of referential accuracy and motion naturalness compared to a state-of-the-art supervised learning motion synthesis model, showing the promise of our proposed system that combines imitation and reinforcement learning for generating communicative gestures. Additionally, our system is robust in a physically simulated environment and thus has the potential to be applied to robots.
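
    Although the paper's reward design is not reproduced here, combining imitation and reinforcement learning for pointing typically amounts to optimizing a weighted sum of a referential-accuracy term and an imitation (naturalness) term. The sketch below illustrates that idea under assumed weights and distance measures, not the paper's actual values:

```python
# Hedged sketch of a combined objective for learned pointing gestures:
# one term rewards referential accuracy (the pointing ray passes close to
# the referent), the other rewards staying close to a reference pose
# (naturalness via imitation). Weights and scales are illustrative.
import numpy as np

def pointing_reward(hand_pos, hand_dir, target_pos, pose, ref_pose,
                    w_ref=1.0, w_imit=0.3):
    # Referential accuracy: distance from the target to the pointing ray.
    to_target = target_pos - hand_pos
    along = np.dot(to_target, hand_dir) * hand_dir
    ray_dist = np.linalg.norm(to_target - along)
    r_ref = np.exp(-5.0 * ray_dist)

    # Imitation term: penalise deviation from a reference (mocap) pose.
    r_imit = np.exp(-np.linalg.norm(pose - ref_pose) ** 2)

    return w_ref * r_ref + w_imit * r_imit

hand = np.array([0.0, 0.0, 1.0])
direction = np.array([1.0, 0.0, 0.0])    # unit pointing direction
target = np.array([2.0, 0.1, 1.0])
pose, ref = np.zeros(10), np.zeros(10)
print(pointing_reward(hand, direction, target, pose, ref))
```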

  • 7.
    Dogan, Fethiye Irmak
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Melsión, Gaspar Isaac
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leveraging Explainability for Understanding Object Descriptions in Ambiguous 3D Environments (2023). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 9. Article in journal (Refereed)
  • 8.
    Engwall, Olov
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Bandera Rubio, Juan Pedro
    Departamento de Tecnología Electrónica, University of Málaga, Málaga, Spain.
    Bensch, Suna
    Department of Computing Science, Umeå University, Umeå, Sweden.
    Haring, Kerstin Sophie
    Robots and Sensors for the Human Well-Being, Ritchie School of Engineering and Computer Science, University of Denver, Denver, United States.
    Kanda, Takayuki
    HRI Lab, Kyoto University, Kyoto, Japan.
    Núñez, Pedro
    Tecnología de los Computadores y las Comunicaciones Department, University of Extremadura, Badajoz, Spain.
    Rehm, Matthias
    The Technical Faculty of IT and Design, Aalborg University, Aalborg, Denmark.
    Sgorbissa, Antonio
    Dipartimento di Informatica, Bioingegneria, Robotica e Ingegneria dei Sistemi, University of Genoa, Genoa, Italy.
    Editorial: Socially, culturally and contextually aware robots (2023). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 10, article id 1232215. Article in journal (Other academic)
  • 9.
    Engwall, Olov
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Cumbal, Ronald
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Majlesi, Ali Reza
    Socio-cultural perception of robot backchannels (2023). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 10. Article in journal (Refereed)
    Abstract [en]

    Introduction: Backchannels, i.e., short interjections by an interlocutor to indicate attention, understanding or agreement regarding utterances by another conversation participant, are fundamental in human-human interaction. Lack of backchannels or if they have unexpected timing or formulation may influence the conversation negatively, as misinterpretations regarding attention, understanding or agreement may occur. However, several studies over the years have shown that there may be cultural differences in how backchannels are provided and perceived and that these differences may affect intercultural conversations. Culturally aware robots must hence be endowed with the capability to detect and adapt to the way these conversational markers are used across different cultures. Traditionally, culture has been defined in terms of nationality, but this is more and more considered to be a stereotypic simplification. We therefore investigate several socio-cultural factors, such as the participants’ gender, age, first language, extroversion and familiarity with robots, that may be relevant for the perception of backchannels.

    Methods: We first cover existing research on cultural influence on backchannel formulation and perception in human-human interaction and on backchannel implementation in Human-Robot Interaction. We then present an experiment on second language spoken practice, in which we investigate how backchannels from the social robot Furhat influence interaction (investigated through speaking time ratios and ethnomethodology and multimodal conversation analysis) and impression of the robot (measured by post-session ratings). The experiment, made in a triad word game setting, is focused on if activity-adaptive robot backchannels may redistribute the participants’ speaking time ratio, and/or if the participants’ assessment of the robot is influenced by the backchannel strategy. The goal is to explore how robot backchannels should be adapted to different language learners to encourage their participation while being perceived as socio-culturally appropriate.

    Results: We find that a strategy that displays more backchannels towards a less active speaker may substantially decrease the difference in speaking time between the two speakers, that different socio-cultural groups respond differently to the robot’s backchannel strategy and that they also perceive the robot differently after the session.

    Discussion: We conclude that the robot may need different backchanneling strategies towards speakers from different socio-cultural groups in order to encourage them to speak and have a positive perception of the robot.

     

  • 10.
    Fraune, Marlena R.
    et al.
    New Mexico State Univ, Intergrp Human Robot Interact iHRI Lab, Dept Psychol, Las Cruces, NM 88003 USA.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Karatas, Nihan
    Nagoya Univ, Human Machine Interact HMI & Human Characterist R, Inst Innovat Future Soc, Nagoya, Japan.
    Amirova, Aida
    Nazarbayev Univ, Dept Robot & Mech, Sch Engn & Digital Sci, Nur Sultan, Kazakhstan.
    Legeleux, Amelie
    Univ South Brittany, Lab STICC, CNRS UMR 6285, Brest, France.
    Sandygulova, Anara
    Nazarbayev Univ, Dept Robot & Mech, Sch Engn & Digital Sci, Nur Sultan, Kazakhstan.
    Neerincx, Anouk
    Univ South Brittany, Lab STICC, CNRS UMR 6285, Brest, France.
    Dilip Tikas, Gaurav
    Inst Management Technol, Strategy Innovat & Entrepreneurship Area, Ghaziabad, India.
    Gunes, Hatice
    Univ Cambridge, Dept Comp Sci & Technol, Affect Intelligence & Robot Lab, Cambridge, England.
    Mohan, Mayumi
    Max Planck Inst Intelligent Syst, Hapt Intelligence Dept, Stuttgart, Germany.
    Abbasi, Nida Itrat
    Univ Cambridge, Dept Comp Sci & Technol, Affect Intelligence & Robot Lab, Cambridge, England.
    Shenoy, Sudhir
    Univ Virginia, Comp Engn Program, Human AI Technol Lab, Charlottesville, VA USA.
    Scassellati, Brian
    Yale Univ, Dept Comp Sci, Social Robot Lab, New Haven, CT USA.
    de Visser, Ewart J.
    US Air Force Acad, Warfighter Effectiveness Res Ctr, Colorado Springs, CO USA.
    Komatsu, Takanori
    Meiji Univ, Sch Interdisciplinary Math Sci, Dept Frontier Media Sci, Tokyo, Japan.
    Lessons Learned About Designing and Conducting Studies From HRI Experts (2022). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 8, article id 772141. Article in journal (Refereed)
    Abstract [en]

    The field of human-robot interaction (HRI) research is multidisciplinary and requires researchers to understand diverse fields including computer science, engineering, informatics, philosophy, and psychology, among other disciplines. However, it is hard to be an expert in everything. To help HRI researchers develop methodological skills, especially in areas that are relatively new to them, we conducted a virtual workshop, Workshop Your Study Design (WYSD), at the 2021 International Conference on HRI. In this workshop, we grouped participants with mentors, who are experts in areas like real-world studies, empirical lab studies, questionnaire design, interview, participatory design, and statistics. During and after the workshop, participants discussed their proposed study methods, obtained feedback, and improved their work accordingly. In this paper, we present 1) Workshop attendees' feedback about the workshop and 2) Lessons that the participants learned during their discussions with mentors. Participants' responses about the workshop were positive, and future scholars who wish to run such a workshop can consider implementing their suggestions. The main contribution of this paper is the lessons learned section, which the workshop participants helped shape based on what they discovered during the workshop. We organize the lessons learned into themes of 1) Improving study design for HRI, 2) How to work with participants, especially children, 3) Making the most of the study and robot's limitations, and 4) How to collaborate well across fields, as these were the areas of the papers submitted to the workshop. These themes include practical tips and guidelines to assist researchers to learn about fields of HRI research with which they have limited experience. We include specific examples, and researchers can adapt the tips and guidelines to their own areas to avoid some common mistakes and pitfalls in their research.

  • 11. Förster, Frank
    et al.
    Romeo, Marta
    Holthaus, Patrick
    Wood, Luke J.
    Dondrup, Christian
    Fischer, Joel E.
    Liza, Farhana Ferdousi
    Kaszuba, Sara
    Hough, Julian
    Nesset, Birthe
    Hernández García, Daniel
    Kontogiorgos, Dimosthenis
    Williams, Jennifer
    Özkan, Elif Ecem
    Barnard, Pepita
    Berumen, Gustavo
    Price, Dominic
    Cobb, Sue
    Wiltschko, Martina
    Tisserand, Lucien
    Porcheron, Martin
    Giuliani, Manuel
    Skantze, Gabriel
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Healey, Patrick G. T.
    Papaioannou, Ioannis
    Gkatzia, Dimitra
    Albert, Saul
    Huang, Guanyu
    Maraev, Vladislav
    Kapetanios, Epaminondas
    Working with troubles and failures in conversation between humans and robots: workshop report (2023). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 10. Article in journal (Refereed)
    Abstract [en]

    This paper summarizes the structure and findings from the first Workshop on Troubles and Failures in Conversations between Humans and Robots. The workshop was organized to bring together a small, interdisciplinary group of researchers working on miscommunication from two complementary perspectives. One group of technology-oriented researchers was made up of roboticists, Human-Robot Interaction (HRI) researchers and dialogue system experts. The second group involved experts from conversation analysis, cognitive science, and linguistics. Uniting both groups of researchers is the belief that communication failures between humans and machines need to be taken seriously and that a systematic analysis of such failures may open fruitful avenues in research beyond current practices to improve such systems, including both speech-centric and multimodal interfaces. This workshop represents a starting point for this endeavour. The aim of the workshop was threefold: Firstly, to establish an interdisciplinary network of researchers that share a common interest in investigating communicative failures with a particular view towards robotic speech interfaces; secondly, to gain a partial overview of the “failure landscape” as experienced by roboticists and HRI researchers; and thirdly, to determine the potential for creating a robotic benchmark scenario for testing future speech interfaces with respect to the identified failures. The present article summarizes both the “failure landscape” surveyed during the workshop as well as the outcomes of the attempt to define a benchmark scenario.

  • 12.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Droukas, L.
    Papageorgiou, D.
    Doulgeri, Z.
    Robot control for task performance and enhanced safety under impact (2015). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 2, no. DEC. Article in journal (Refereed)
    Abstract [en]

    A control law combining motion performance quality and low stiffness reaction to unintended contacts is proposed in this work. It achieves prescribed performance evolution of the position error under disturbances up to a level related to model uncertainties and responds compliantly and with low stiffness to significant disturbances arising from impact forces. The controller employs a velocity reference signal in a model-based control law utilizing a non-linear time-dependent term, which embeds prescribed performance specifications and vanishes in case of significant disturbances. Simulation results with a three degrees of freedom (DOF) robot illustrate the motion performance and self-regulation of the output stiffness achieved by this controller under an external force, and highlight its advantages with respect to constant and switched impedance schemes. Experiments with a KUKA LWR 4+ demonstrate its performance under impact with a human while following a desired trajectory.
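
    One way to realize such a time-dependent term, shown here only as an illustrative assumption and not as the authors' exact formulation, is to normalize the tracking error by an exponentially shrinking envelope and drop the performance-shaping term once an impact pushes the error outside that envelope:

```python
# Illustrative sketch of a prescribed-performance-style velocity reference
# that "switches itself off" under large disturbances. While the error stays
# inside the shrinking envelope rho(t), a barrier-like term enforces the
# prescribed performance; once an impact pushes the error outside the
# envelope, the term is dropped so the compliant low-stiffness response
# dominates. One possible realization, not the authors' exact control law.
import numpy as np

def rho(t, rho0=0.2, rho_inf=0.01, decay=2.0):
    """Exponentially shrinking performance envelope on the position error."""
    return (rho0 - rho_inf) * np.exp(-decay * t) + rho_inf

def velocity_reference(e, t, k=2.0):
    """Velocity reference term embedding the prescribed performance bound."""
    xi = e / rho(t)                          # error normalised by the envelope
    if abs(xi) < 1.0:
        # Inside the envelope: barrier-like transformation of the error.
        return -k * np.log((1.0 + xi) / (1.0 - xi))
    # Outside the envelope (significant disturbance or impact): the term
    # vanishes and only the compliant behaviour remains.
    return 0.0

print(velocity_reference(0.05, t=0.5))       # small error: active shaping
print(velocity_reference(0.50, t=0.5))       # impact-sized error: term vanishes
```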

  • 13.
    Mishra, Chinmaya
    et al.
    Furhat Robot AB, Stockholm, Sweden.
    Offrede, Tom
    Humboldt Univ, Berlin, Germany.
    Fuchs, Susanne
    Leibniz Ctr Gen Linguist ZAS, Berlin, Germany.
    Mooshammer, Christine
    Humboldt Univ, Berlin, Germany.
    Skantze, Gabriel
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH. Furhat Robot AB, Stockholm, Sweden.
    Does a robot's gaze aversion affect human gaze aversion? (2023). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 10, article id 1127626. Article in journal (Refereed)
    Abstract [en]

    Gaze cues serve an important role in facilitating human conversations and are generally considered to be one of the most important non-verbal cues. Gaze cues are used to manage turn-taking, coordinate joint attention, regulate intimacy, and signal cognitive effort. In particular, it is well established that gaze aversion is used in conversations to avoid prolonged periods of mutual gaze. Given the numerous functions of gaze cues, there has been extensive work on modelling these cues in social robots. Researchers have also tried to identify the impact of robot gaze on human participants. However, the influence of robot gaze behavior on human gaze behavior has been less explored. We conducted a within-subjects user study (N = 33) to verify if a robot's gaze aversion influenced human gaze aversion behavior. Our results show that participants tend to avert their gaze more when the robot keeps staring at them as compared to when the robot exhibits well-timed gaze aversions. We interpret our findings in terms of intimacy regulation: humans try to compensate for the robot's lack of gaze aversion.

  • 14.
    Mishra, Chinmaya
    et al.
    Furhat Robotics AB, Stockholm, Sweden; Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands.
    Verdonschot, Rinus
    Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands.
    Hagoort, Peter
    Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands.
    Skantze, Gabriel
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH. Furhat Robotics AB, Stockholm, Sweden.
    Real-time emotion generation in human-robot dialogue using large language models (2023). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 10. Article in journal (Refereed)
    Abstract [en]

    Affective behaviors enable social robots to not only establish better connections with humans but also serve as a tool for the robots to express their internal states. It has been well established that emotions are important to signal understanding in Human-Robot Interaction (HRI). This work aims to harness the power of Large Language Models (LLM) and proposes an approach to control the affective behavior of robots. By interpreting emotion appraisal as an Emotion Recognition in Conversation (ERC) task, we used GPT-3.5 to predict the emotion of a robot’s turn in real-time, using the dialogue history of the ongoing conversation. The robot signaled the predicted emotion using facial expressions. The model was evaluated in a within-subjects user study (N = 47) where the model-driven emotion generation was compared against conditions where the robot did not display any emotions and where it displayed incongruent emotions. The participants interacted with the robot by playing a card sorting game that was specifically designed to evoke emotions. The results indicated that the emotions were reliably generated by the LLM and the participants were able to perceive the robot’s emotions. It was found that the robot expressing congruent model-driven facial expressions was perceived to be significantly more human-like and emotionally appropriate, and elicited a more positive impression. Participants also scored significantly better in the card sorting game when the robot displayed congruent facial expressions. From a technical perspective, the study shows that LLMs can be used to control the affective behavior of robots reliably in real-time. Additionally, our results could be used in devising novel human-robot interactions, making robots more effective in roles where emotional interaction is important, such as therapy, companionship, or customer service.
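
    A minimal sketch of this general recipe, framing emotion appraisal as classifying the robot's upcoming turn given the dialogue history, is shown below. The prompt, emotion label set, and model name are assumptions for illustration and not the study's actual pipeline:

```python
# Minimal sketch of LLM-based emotion appraisal for a robot's next turn,
# framed as Emotion Recognition in Conversation: the dialogue history plus
# the robot's planned utterance are sent to a chat model, which returns one
# label that can then be mapped to a facial expression. Prompt, labels and
# model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

EMOTIONS = ["neutral", "happy", "sad", "surprised", "angry"]

def appraise_robot_turn(dialogue_history, robot_utterance):
    """Return an emotion label for the robot's upcoming utterance."""
    prompt = (
        "Given the conversation below, answer with exactly one word from "
        f"{EMOTIONS} describing the emotion the robot should express for its "
        "next utterance.\n\n"
        + "\n".join(dialogue_history)
        + f"\nRobot (next turn): {robot_utterance}"
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    label = resp.choices[0].message.content.strip().lower()
    return label if label in EMOTIONS else "neutral"

history = ["User: I finally sorted all the cards!", "Robot: Well done!"]
print(appraise_robot_turn(history, "That was much faster than last round."))
# The returned label would then drive the robot's facial expression.
```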

  • 15. Oertel, C.
    et al.
    Castellano, G.
    Chetouani, M.
    Nasir, J.
    Obaid, M.
    Pelachaud, C.
    Peters, Christopher
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    Engagement in Human-Agent Interaction: An Overview (2020). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 7, article id 92. Article in journal (Refereed)
    Abstract [en]

    Engagement is a concept of the utmost importance in human-computer interaction, not only for informing the design and implementation of interfaces, but also for enabling more sophisticated interfaces capable of adapting to users. While the notion of engagement is actively being studied in a diverse set of domains, the term has been used to refer to a number of related, but different concepts. In fact it has been referred to across different disciplines under different names and with different connotations in mind. Therefore, it can be quite difficult to understand what the meaning of engagement is and how one study relates to another one accordingly. Engagement has been studied not only in human-human, but also in human-agent interactions i.e., interactions with physical robots and embodied virtual agents. In this overview article we focus on different factors involved in engagement studies, distinguishing especially between those studies that address task and social engagement, involve children and adults, are conducted in a lab or aimed for long term interaction. We also present models for detecting engagement and for generating multimodal behaviors to show engagement.

  • 16.
    Oertel, Catharine
    et al.
    Delft Univ Technol, Interact Intelligence, Dept Intelligent Syst, Delft, Netherlands.
    Jonell, Patrik
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Kontogiorgos, Dimosthenis
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Mora, Kenneth Funes
    Eyeware Tech SA, Martigny, Switzerland.
    Odobez, Jean-Marc
    Idiap Res Inst, Percept & Act Understanding, Martigny, Switzerland.
    Gustafsson, Joakim
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Towards an Engagement-Aware Attentive Artificial Listener for Multi-Party Interactions (2021). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 8, article id 555913. Article in journal (Refereed)
    Abstract [en]

    Listening to one another is essential to human-human interaction. In fact, we humans spend a substantial part of our day listening to other people, in private as well as in work settings. Attentive listening serves the function to gather information for oneself, but at the same time, it also signals to the speaker that he/she is being heard. To deduce whether our interlocutor is listening to us, we are relying on reading his/her nonverbal cues, very much like how we also use non-verbal cues to signal our attention. Such signaling becomes more complex when we move from dyadic to multi-party interactions. Understanding how humans use nonverbal cues in a multi-party listening context not only increases our understanding of human-human communication but also aids the development of successful human-robot interactions. This paper aims to bring together previous analyses of listener behavior in human-human multi-party interaction and provide novel insights into gaze patterns between the listeners in particular. We are investigating whether the gaze patterns and feedback behavior, as observed in the human-human dialogue, are also beneficial for the perception of a robot in multi-party human-robot interaction. To answer this question, we are implementing an attentive listening system that generates multi-modal listening behavior based on our human-human analysis. We are comparing our system to a baseline system that does not differentiate between different listener types in its behavior generation. We are evaluating it in terms of the participant's perception of the robot, their behavior, as well as the perception of third-party observers.

  • 17.
    Perugia, Giulia
    et al.
    Eindhoven Univ Technol, Human Technol Interact Grp, Eindhoven, Netherlands; Uppsala Univ, Dept Informat Technol, Uppsala Social Robot Lab, Uppsala, Sweden.
    Paetzel-Pruesmann, Maike
    Univ Potsdam, Dept Linguist, Computat Linguist, Potsdam, Germany.
    Hupont, Isabelle
    Univ Potsdam, Dept Linguist, Computat Linguist, Potsdam, Germany; European Commiss, Joint Res Ctr, Seville, Spain.
    Varni, Giovanna
    Inst Polytech Paris, LTCI, Telecom Paris, Paris, France.
    Chetouani, Mohamed
    Sorbonne Univ, CNRS, Inst Syst Intelligents & Robot, Paris, France.
    Peters, Christopher Edward
    KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST).
    Castellano, Ginevra
    Uppsala Univ, Dept Informat Technol, Uppsala Social Robot Lab, Uppsala, Sweden.
    Does the Goal Matter? Emotion Recognition Tasks Can Change the Social Value of Facial Mimicry Towards Artificial Agents (2021). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 8, article id 699090. Article in journal (Refereed)
    Abstract [en]

    In this paper, we present a study aimed at understanding whether the embodiment and humanlikeness of an artificial agent can affect people's spontaneous and instructed mimicry of its facial expressions. The study followed a mixed experimental design and revolved around an emotion recognition task. Participants were randomly assigned to one level of humanlikeness (between-subject variable: humanlike, characterlike, or morph facial texture of the artificial agents) and observed the facial expressions displayed by three artificial agents differing in embodiment (within-subject variable: video-recorded robot, physical robot, and virtual agent) and a human (control). To study both spontaneous and instructed facial mimicry, we divided the experimental sessions into two phases. In the first phase, we asked participants to observe and recognize the emotions displayed by the agents. In the second phase, we asked them to look at the agents' facial expressions, replicate their dynamics as closely as possible, and then identify the observed emotions. In both cases, we assessed participants' facial expressions with an automated Action Unit (AU) intensity detector. Contrary to our hypotheses, our results disclose that the agent that was perceived as the least uncanny, and most anthropomorphic, likable, and co-present, was the one spontaneously mimicked the least. Moreover, they show that instructed facial mimicry negatively predicts spontaneous facial mimicry. Further exploratory analyses revealed that spontaneous facial mimicry appeared when participants were less certain of the emotion they recognized. Hence, we postulate that an emotion recognition goal can flip the social value of facial mimicry as it transforms a likable artificial agent into a distractor. Further work is needed to corroborate this hypothesis. Nevertheless, our findings shed light on the functioning of human-agent and human-robot mimicry in emotion recognition tasks and help us to unravel the relationship between facial mimicry, liking, and rapport.

  • 18.
    Tuncer, Sylvaine
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Gillet, Sarah
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Robot-Mediated Inclusive Processes in Groups of Children: From Gaze Aversion to Mutual Smiling Gaze (2022). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 9, article id 729146. Article in journal (Refereed)
    Abstract [en]

    Our work is motivated by the idea that social robots can help inclusive processes in groups of children, focusing on the case of children who have newly arrived from a foreign country and their peers at school. Building on an initial study where we tested different robot behaviours and recorded children's interactions mediated by a robot in a game, we present in this paper the findings from a subsequent analysis of the same video data drawing from ethnomethodology and conversation analysis. We describe how this approach differs from predominantly quantitative video analysis in HRI; how mutual gaze appeared as a challenging interactional accomplishment between unacquainted children, and why we focused on this phenomenon. We identify two situations and trajectories in which children make eye contact: asking for or giving instructions, and sharing an emotional reaction. Based on detailed analyses of a selection of extracts in the empirical section, we describe patterns and discuss the links between the different situations and trajectories, and relationship building. Our findings inform HRI and robot design by identifying complex interactional accomplishments between two children, as well as group dynamics which support these interactions. We argue that social robots should be able to perceive such phenomena in order to better support inclusion of outgroup children. Lastly, by explaining how we combined approaches and showing how they build on each other, we also hope to demonstrate the value of interdisciplinary research, and encourage it.

  • 19.
    van den Berghe, Rianne
    et al.
    Univ Utrecht, Dept Dev Youth & Educ Diverse Soc, Utrecht, Netherlands; Windesheim Univ Appl Sci, Sect Leadership Educ & Dev, Almere, Netherlands.
    Oudgenoeg-Paz, Ora
    Univ Utrecht, Dept Dev Youth & Educ Diverse Soc, Utrecht, Netherlands.
    Verhagen, Josje
    Univ Amsterdam, Amsterdam Ctr Language & Commun, Amsterdam, Netherlands.
    Brouwer, Susanne
    Radboud Univ Nijmegen, Dept Modern Languages & Cultures, Nijmegen, Netherlands.
    de Haas, Mirjam
    Tilburg Univ, Dept Cognit Sci & Artificial Intelligence, Tilburg, Netherlands.
    de Wit, Jan
    Tilburg Univ, Dept Commun & Cognit, Tilburg, Netherlands.
    Willemsen, Bram
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
    Vogt, Paul
    Tilburg Univ, Dept Cognit Sci & Artificial Intelligence, Tilburg, Netherlands; Hanze Univ Appl Sci, Sch Commun Media & IT, Groningen, Netherlands.
    Krahmer, Emiel
    Tilburg Univ, Dept Commun & Cognit, Tilburg, Netherlands.
    Leseman, Paul
    Univ Utrecht, Dept Dev Youth & Educ Diverse Soc, Utrecht, Netherlands.
    Individual Differences in Children's (Language) Learning Skills Moderate Effects of Robot-Assisted Second Language Learning (2021). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 8. Article in journal (Refereed)
    Abstract [en]

    The current study investigated how individual differences among children affect the added value of social robots for teaching second language (L2) vocabulary to young children. Specifically, we investigated the moderating role of three individual child characteristics deemed relevant for language learning: first language (L1) vocabulary knowledge, phonological memory, and selective attention. We expected children low in these abilities to particularly benefit from being assisted by a robot in a vocabulary training. An L2 English vocabulary training intervention consisting of seven sessions was administered to 193 monolingual Dutch five-year-old children over a three- to four-week period. Children were randomly assigned to one of three experimental conditions: 1) a tablet only, 2) a tablet and a robot that used deictic (pointing) gestures (the no-iconic-gestures condition), or 3) a tablet and a robot that used both deictic and iconic gestures (i.e., gestures depicting the target word; the iconic-gestures condition). There also was a control condition in which children did not receive a vocabulary training, but played dancing games with the robot. L2 word knowledge was measured directly after the training and two to four weeks later. In these post-tests, children in the experimental conditions outperformed children in the control condition on word knowledge, but there were no differences between the three experimental conditions. Several moderation effects were found. The robot's presence particularly benefited children with larger L1 vocabularies or poorer phonological memory, while children with smaller L1 vocabularies or better phonological memory performed better in the tablet-only condition. Children with larger L1 vocabularies and better phonological memory performed better in the no-iconic-gestures condition than in the iconic-gestures condition, while children with better selective attention performed better in the iconic-gestures condition than the no-iconic-gestures condition. Together, the results showed that the effects of the robot and its gestures differ across children, which should be taken into account when designing and evaluating robot-assisted L2 teaching interventions.

  • 20.
    van Waveren, Sanne
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Carter, Elizabeth
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Örnberg, Oscar
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Exploring Non-Expert Robot Programming Through Crowdsourcing (2021). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 8, article id 646002. Article in journal (Refereed)
    Abstract [en]

    A longstanding barrier to deploying robots in the real world is the ongoing need to author robot behavior. Remote data collection, particularly crowdsourcing, is receiving increasing interest. In this paper, we make the argument to scale robot programming to the crowd and present an initial investigation of the feasibility of this proposed method. Using an off-the-shelf visual programming interface, non-experts created simple robot programs for two typical robot tasks (navigation and pick-and-place). Each task comprised four subtasks with an increasing number of programming statements (if statements, while loops, variables) required for successful completion of the programs. Initial findings of an online study (N = 279) indicate that non-experts, after minimal instruction, were able to create simple programs using an off-the-shelf visual programming interface. We discuss our findings and identify future avenues for this line of research.
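
    To make concrete the kind of programs involved, the sketch below is a textual analogue of what participants assembled with the visual programming blocks: a variable, an if statement, and a while loop controlling a pick-and-place routine. The robot API shown is hypothetical and exists only for this illustration:

```python
# Textual analogue of the simple programs non-experts assembled in the
# study's visual programming interface: a variable, an if statement and a
# while loop driving a pick-and-place routine. The Robot class and its
# methods are hypothetical stand-ins for the real robot API.

class Robot:
    """Hypothetical stand-in for the robot behind the visual blocks."""
    def __init__(self):
        self.position, self.goal, self.holding = 0, 5, False
    def at_goal(self):
        return self.position == self.goal
    def move_towards_goal(self):
        self.position += 1
    def pick(self, item):
        self.holding = True
        print(f"picked {item}")
    def place(self, item):
        self.holding = False
        print(f"placed {item} at position {self.position}")

robot = Robot()
item = "red block"           # variable

if not robot.holding:        # if statement
    robot.pick(item)

while not robot.at_goal():   # while loop
    robot.move_towards_goal()

robot.place(item)
```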

  • 21.
    Webb, Helena
    et al.
    Univ Oxford, Dept Comp Sci, Oxford, England.
    Dumitru, Morgan
    Univ Oxford, Dept Comp Sci, Oxford, England.
    van Maris, Anouk
    Univ West England, Bristol Robot Lab, Bristol, Avon, England.
    Winkle, Katie
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Jirotka, Marina
    Univ Oxford, Dept Comp Sci, Oxford, England.
    Winfield, Alan
    Univ West England, Bristol Robot Lab, Bristol, Avon, England.
    Role-Play as Responsible Robotics: The Virtual Witness Testimony Role-Play Interview for Investigating Hazardous Human-Robot Interactions (2021). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 8, article id 644336. Article in journal (Refereed)
    Abstract [en]

    The development of responsible robotics requires paying attention to responsibility within the research process in addition to responsibility as the outcome of research. This paper describes the preparation and application of a novel method to explore hazardous human-robot interactions. The Virtual Witness Testimony role-play interview is an approach that enables participants to engage with scenarios in which a human being comes to physical harm whilst a robot is present and may have had a malfunction. Participants decide what actions they would take in the scenario and are encouraged to provide their observations and speculations on what happened. Data collection takes place online, a format that provides convenience as well as a safe space for participants to role play a hazardous encounter with minimal risk of suffering discomfort or distress. We provide a detailed account of how our initial set of Virtual Witness Testimony role-play interviews were conducted and describe the ways in which it proved to be an efficient approach that generated useful findings, and upheld our project commitments to Responsible Research and Innovation. We argue that the Virtual Witness Testimony role-play interview is a flexible and fruitful method that can be adapted to benefit research in human robot interaction and advance responsibility in robotics.

  • 22.
    Winkle, Katie
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Senft, Emmanuel
    Univ Wisconsin, Dept Comp Sci, 1210 W Dayton St, Madison, WI 53706 USA.
    Lemaignan, Severin
    Univ West England, Bristol Robot Lab, Bristol, Avon, England.
    LEADOR: A Method for End-To-End Participatory Design of Autonomous Social Robots (2021). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 8, article id 704119. Article in journal (Refereed)
    Abstract [en]

    Participatory design (PD) has been used to good success in human-robot interaction (HRI) but typically remains limited to the early phases of development, with subsequent robot behaviours then being hardcoded by engineers or utilised in Wizard-of-Oz (WoZ) systems that rarely achieve autonomy. In this article, we present LEADOR (Led-by-Experts Automation and Design Of Robots), an end-to-end PD methodology for domain expert co-design, automation, and evaluation of social robot behaviour. This method starts with typical PD, working with the domain expert(s) to co-design the interaction specifications and state and action space of the robot. It then replaces the traditional offline programming or WoZ phase by an in situ and online teaching phase where the domain expert can live-program or teach the robot how to behave whilst being embedded in the interaction context. We point out that this live teaching phase can be best achieved by adding a learning component to a WoZ setup, which captures implicit knowledge of experts, as they intuitively respond to the dynamics of the situation. The robot then progressively learns an appropriate, expert-approved policy, ultimately leading to full autonomy, even in sensitive and/or ill-defined environments. However, LEADOR is agnostic to the exact technical approach used to facilitate this learning process. The extensive inclusion of the domain expert(s) in robot design represents established responsible innovation practice, lending credibility to the system both during the teaching phase and when operating autonomously. The combination of this expert inclusion with the focus on in situ development also means that LEADOR supports a mutual shaping approach to social robotics. We draw on two previously published, foundational works from which this (generalisable) methodology has been derived to demonstrate the feasibility and worth of this approach, provide concrete examples in its application, and identify limitations and opportunities when applying this framework in new environments.
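
    The core mechanism of the teaching phase, a learner attached to a Wizard-of-Oz setup that gradually takes over from the domain expert, can be sketched very roughly as below. The tabular majority-vote policy is a simplifying assumption for illustration, not the learning method used in the foundational studies the paper draws on:

```python
# Highly simplified sketch of attaching a learner to a Wizard-of-Oz setup:
# every wizard decision becomes a training example, the learner proposes an
# action once it is confident for a given interaction state, and the wizard
# approves or corrects until the policy can run autonomously. The tabular
# majority-vote "policy" is a stand-in, not the authors' implementation.
from collections import Counter, defaultdict

class TeachableWoZPolicy:
    def __init__(self, confidence_threshold=3):
        self.counts = defaultdict(Counter)   # interaction state -> action counts
        self.threshold = confidence_threshold

    def record_wizard_action(self, state, action):
        """Store the wizard's (domain expert's) choice for this state."""
        self.counts[state][action] += 1

    def propose(self, state):
        """Return an action if confident enough, otherwise defer to the wizard."""
        if not self.counts[state]:
            return None
        action, n = self.counts[state].most_common(1)[0]
        return action if n >= self.threshold else None

policy = TeachableWoZPolicy()
for _ in range(3):
    policy.record_wizard_action("child_quiet", "encourage")
print(policy.propose("child_quiet"))   # "encourage", learned from the wizard
print(policy.propose("child_upset"))   # None, still deferred to the wizard
```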
