kth.se Publications
1 - 36 of 36
  • 1.
    Clark, Leigh
    et al.
    Univ Coll Dublin, Dublin, Ireland..
    Cowan, Benjamin R.
    Univ Coll Dublin, Dublin, Ireland..
    Edwards, Justin
    Univ Coll Dublin, Dublin, Ireland..
    Munteanu, Cosmin
    Univ Toronto, Mississauga, ON, Canada.;Univ Toronto, Toronto, ON, Canada..
    Murad, Christine
    Univ Toronto, Mississauga, ON, Canada.;Univ Toronto, Toronto, ON, Canada..
    Aylett, Matthew
    CereProc Ltd, Edinburgh, Midlothian, Scotland..
    Moore, Roger K.
    Univ Sheffield, Sheffield, S Yorkshire, England..
    Edlund, Jens
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Székely, Éva
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Healey, Patrick
    Queen Mary Univ London, London, England..
    Harte, Naomi
    Trinity Coll Dublin, Dublin, Ireland..
    Torre, Ilaria
    Trinity Coll Dublin, Dublin, Ireland..
    Doyle, Philip
    Voysis Ltd, Dublin, Ireland..
Mapping Theoretical and Methodological Perspectives for Understanding Speech Interface Interactions. 2019. In: CHI EA '19: Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, 2019. Conference paper (Refereed)
    Abstract [en]

The use of speech as an interaction modality has grown considerably through the integration of Intelligent Personal Assistants (IPAs, e.g. Siri, Google Assistant) into smartphones and voice-based devices (e.g. Amazon Echo). However, there remain significant gaps in using theoretical frameworks to understand user behaviours and choices and how they may be applied to specific speech interface interactions. This part-day multidisciplinary workshop aims to critically map out and evaluate theoretical frameworks and methodological approaches across a number of disciplines and establish directions for new paradigms in understanding speech interface user behaviour. In doing so, we will bring together participants from HCI and other speech-related domains to establish a cohesive, diverse and collaborative community of researchers from academia and industry with an interest in exploring theoretical and methodological issues in the field.

  • 2.
    Dogan, Fethiye Irmak
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Torre, Ilaria
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
Asking Follow-Up Clarifications to Resolve Ambiguities in Human-Robot Conversation. 2022. In: ACM/IEEE International Conference on Human-Robot Interaction, IEEE Computer Society, 2022, p. 461-469. Conference paper (Refereed)
    Abstract [en]

When a robot aims to comprehend its human partner's request by identifying the referenced objects in Human-Robot Conversation, ambiguities can occur because the environment might contain many similar objects or the objects described in the request might be unknown to the robot. In the case of ambiguities, most systems ask users to repeat their request, which assumes that the robot is familiar with all of the objects in the environment. This assumption might lead to task failure, especially in complex real-world environments. In this paper, we address this challenge by presenting an interactive system that asks for follow-up clarifications to disambiguate the described objects using the pieces of information that the robot could understand from the request and the objects in the environment that are known to the robot. To evaluate our system while disambiguating the referenced objects, we conducted a user study with 63 participants. We analyzed the interactions when the robot asked for clarifications and when it asked users to redescribe the same object. Our results show that generating follow-up clarification questions helped the robot correctly identify the described objects with fewer attempts (i.e., conversational turns). Also, when people were asked clarification questions, they perceived the task as easier, and they evaluated the task understanding and competence of the robot as higher. Our code and anonymized dataset are publicly available at https://github.com/IrmakDogan/Resolving-Ambiguities.

  • 3.
    El Haddad, Kevin
    et al.
    University of Mons.
    Torre, Ilaria
    University of Plymouth.
    Gilmartin, Emer
    Trinity College Dublin.
    Çakmak, Hüseyin
    University of Mons.
    Dupont, Stéphane
    University of Mons.
    Dutoit, Thierry
    University of Mons.
    Campbell, Nick
    Trinity College Dublin.
Introducing AmuS: The Amused Speech Database. 2017. Conference paper (Refereed)
    Abstract [en]

In this paper we present AmuS, a database of about three hours of amused speech recorded from two male and one female subjects, containing data in two languages: French and English. We review previous work on smiled speech and speech-laughs. We describe acoustic analysis on part of our database, and a perception test comparing speech-laughs with smiled and neutral speech. We show the efficiency of the data in AmuS for synthesis of amused speech by training HMM-based models for neutral and smiled speech for each voice and comparing them using an on-line CMOS test.

  • 4.
    Jonell, Patrik
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Deichler, Anna
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Torre, Ilaria
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Beskow, Jonas
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
Mechanical Chameleons: Evaluating the effects of a social robot's non-verbal behavior on social influence. 2021. In: Proceedings of SCRITA 2021, a workshop at IEEE RO-MAN 2021, 2021. Conference paper (Refereed)
    Abstract [en]

In this paper we present a pilot study which investigates how non-verbal behavior affects social influence in social robots. We also present a modular system which is capable of controlling the non-verbal behavior based on the interlocutor's facial gestures (head movements and facial expressions) in real time, and a study investigating whether three different strategies for facial gestures ("still"; "natural movement", i.e. movements recorded from another conversation; and "copy", i.e. mimicking the user with a four second delay) have any effect on social influence and decision making in a "survival task". Our preliminary results show there was no significant difference between the three conditions, but this might be due to, among other things, the low number of study participants (12).

  • 5.
    Jonell, Patrik
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Kucherenko, Taras
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Torre, Ilaria
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Beskow, Jonas
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
Can we trust online crowdworkers? Comparing online and offline participants in a preference test of virtual agents. 2020. In: IVA '20: Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents, Association for Computing Machinery (ACM), 2020. Conference paper (Refereed)
    Abstract [en]

Conducting user studies is a crucial component in many scientific fields. While some studies require participants to be physically present, other studies can be conducted both physically (e.g. in-lab) and online (e.g. via crowdsourcing). Inviting participants to the lab can be a time-consuming and logistically difficult endeavor, not to mention that sometimes research groups might not be able to run in-lab experiments, because of, for example, a pandemic. Crowdsourcing platforms such as Amazon Mechanical Turk (AMT) or Prolific can therefore be a suitable alternative to run certain experiments, such as evaluating virtual agents. Although previous studies investigated the use of crowdsourcing platforms for running experiments, there is still uncertainty as to whether the results are reliable for perceptual studies. Here we replicate a previous experiment where participants evaluated a gesture generation model for virtual agents. The experiment is conducted across three participant pools (in-lab, Prolific, and AMT), with similar demographics across the in-lab participants and the Prolific platform. Our results show no difference between the three participant pools with regard to their evaluations of the gesture generation models and their reliability scores. The results indicate that online platforms can successfully be used for perceptual evaluations of this kind.

  • 6.
    Karlsson, Jesper
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    van Waveren, Sanne
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Pek, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Torre, Ilaria
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Tumova, Jana
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
Encoding Human Driving Styles in Motion Planning for Autonomous Vehicles. 2021. In: 2021 IEEE International Conference on Robotics and Automation (ICRA), Institute of Electrical and Electronics Engineers (IEEE), 2021, p. 11262-11268. Conference paper (Refereed)
    Abstract [en]

Driving styles play a major role in the acceptance and use of autonomous vehicles. Yet, existing motion planning techniques can often only incorporate simple driving styles that are modeled by the developers of the planner and not tailored to the passenger. We present a new approach to encode human driving styles through the use of signal temporal logic and its robustness metrics. Specifically, we use a penalty structure that can be used in many motion planning frameworks, and calibrate its parameters to model different automated driving styles. We combine this penalty structure with a set of signal temporal logic formulae, based on the Responsibility-Sensitive Safety model, to generate trajectories that we expected to correlate with three different driving styles: aggressive, neutral, and defensive. An online study showed that people perceived different parameterizations of the motion planner as unique driving styles, and that most people tend to prefer a more defensive automated driving style, which correlated with their self-reported driving style.

  • 7. Knight, S.
    et al.
    Lavan, N.
    Torre, Ilaria
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    McGettigan, C.
The influence of perceived vocal traits on trusting behaviours in an economic game. 2021. In: Quarterly Journal of Experimental Psychology, ISSN 1747-0218, E-ISSN 1747-0226, Vol. 74, no 10, p. 1747-1754. Article in journal (Refereed)
    Abstract [en]

    When presented with voices, we make rapid, automatic judgements of social traits such as trustworthiness—and such judgements are highly consistent across listeners. However, it remains unclear whether voice-based first impressions actually influence behaviour towards a voice’s owner, and—if they do—whether and how they interact over time with the voice owner’s observed actions to further influence the listener’s behaviour. This study used an investment game paradigm to investigate (1) whether voices judged to differ in relevant social traits accrued different levels of investment and/or (2) whether first impressions of the voices interacted with the behaviour of their apparent owners to influence investments over time. Results show that participants were responding to their partner’s behaviour. Crucially, however, there were no effects of voice. These findings suggest that, at least under some conditions, social traits perceived from the voice alone may not influence trusting behaviours in the context of a virtual interaction.

  • 8.
    Laban, Guy
    et al.
    Univ Glasgow, Glasgow, Lanark, Scotland..
    Le Maguer, Sebastien
    Trin Coll Dublin, ADAPTCtr, Dublin, Ireland..
    Lee, Minha
    Eindhoven Univ Technol, Eindhoven, Netherlands..
    Kontogiorgos, Dimosthenis
    Univ Potsdam, Potsdam, Germany..
    Reig, Samantha
    Carnegie Mellon Univ, Pennsauken, NJ USA..
    Torre, Ilaria
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Tejwani, Ravi
    MIT, Cambridge, MA 02139 USA..
    Dennis, Matthew J.
    Eindhoven Univ Technol, Eindhoven, Netherlands..
    Pereira, André
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
Robo-Identity: Exploring Artificial Identity and Emotion via Speech Interactions. 2022. In: Proceedings of the 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI '22), Institute of Electrical and Electronics Engineers (IEEE), 2022, p. 1265-1268. Conference paper (Refereed)
    Abstract [en]

Following the success of the first edition of Robo-Identity, the second edition will provide an opportunity to expand the discussion about artificial identity. This year, we are focusing on emotions that are expressed through speech and voice. Synthetic voices of robots can resemble, and are becoming indistinguishable from, expressive human voices. This can be an opportunity and a constraint in expressing emotional speech that can (falsely) convey a human-like identity, potentially misleading people and raising ethical issues. How should we envision an agent's artificial identity? In what ways should we have robots that maintain a machine-like stance, e.g., through robotic speech, and should emotional expressions that are increasingly human-like be seen as design opportunities? These are not mutually exclusive concerns. As this discussion needs to be conducted in a multidisciplinary manner, we welcome perspectives on challenges and opportunities from a variety of fields. For this year's edition, the special theme will be "speech, emotion and artificial identity".

  • 9. Lee, Minha
    et al.
    Kontogiorgos, Dimosthenis
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Torre, Ilaria
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Luria, Michal
    Tejwani, Ravi
    Dennis, Matthew
    Abelho Pereira, André Tiago
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
Robo-Identity: Exploring Artificial Identity and Multi-Embodiment. 2021. In: ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2021. Conference paper (Refereed)
    Abstract [en]

Interactive robots are becoming more commonplace and complex, but their identity has not yet been a key point of investigation. Identity is an overarching concept that combines traits like personality or a backstory (among other aspects) that people readily attribute to a robot to individuate it as a unique entity. Given people's tendency to anthropomorphize social robots, "who is a robot?" should be a guiding question above and beyond "what is a robot?" Hence, we open up a discussion on artificial identity through this workshop in a multi-disciplinary manner; we welcome perspectives on challenges and opportunities from fields of ethics, design, and engineering. For instance, dynamic embodiment, e.g., an agent that dynamically moves across one's smartwatch, smart speaker, and laptop, is a technical and theoretical problem, with ethical ramifications. Another consideration is whether multiple bodies may warrant multiple identities instead of an "all-in-one" identity. Who "lives" in which devices or bodies? Should their identity travel across different forms, and how can that be achieved in an ethically mindful manner? We bring together philosophical, ethical, technical, and designerly perspectives on exploring artificial identity.

  • 10.
    Linard, Alexis
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Torre, Ilaria
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Bartoli, Ermanno
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Sleat, Alex
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Tumova, Jana
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
Real-time RRT* with Signal Temporal Logic Preferences. 2023. In: 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2023. Conference paper (Other academic)
    Abstract [en]

Signal Temporal Logic (STL) is a rigorous specification language that allows one to express various spatio-temporal requirements and preferences. Its semantics (called robustness) allows quantifying to what extent the STL specifications are met. In this work, we focus on enabling STL constraints and preferences in the Real-Time Rapidly Exploring Random Tree (RT-RRT*) motion planning algorithm in an environment with dynamic obstacles. We propose a cost function that guides the algorithm towards the asymptotically most robust solution, i.e. a plan that maximally adheres to the STL specification. In experiments, we applied our method to a social navigation case, where the STL specification captures spatio-temporal preferences on how a mobile robot should avoid an incoming human in a shared space. Our results show that our approach leads to plans adhering to the STL specification, while ensuring efficient cost computation.
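Illustrative sketch (Python, not the authors' implementation): the core idea of using STL robustness as a planning cost can be shown on a toy requirement such as "always keep at least d_min distance from an obstacle", whose robustness over a finite trace is the minimum distance margin. The predicate, weighting, and trajectories below are assumptions made for illustration only.

```python
import numpy as np

def robustness_always_min_distance(traj, obstacle_traj, d_min):
    """Robustness of G (dist(robot, obstacle) >= d_min) over a finite trace.

    For the 'always' operator, robustness is the minimum over time of the
    predicate margin dist(t) - d_min; a positive value means the spec holds.
    """
    dists = np.linalg.norm(traj - obstacle_traj, axis=1)
    return float(np.min(dists - d_min))

def edge_cost(traj, obstacle_traj, d_min, path_length, w_stl=1.0):
    """Illustrative cost: path length penalised by negated STL robustness,
    so that more robust (safer) candidate plans get lower cost."""
    rho = robustness_always_min_distance(traj, obstacle_traj, d_min)
    return path_length - w_stl * rho

# Toy usage: a straight-line robot trajectory versus a static obstacle.
robot = np.linspace([0.0, 0.0], [4.0, 0.0], num=20)
obstacle = np.tile([2.0, 1.0], (20, 1))
print(edge_cost(robot, obstacle, d_min=0.5, path_length=4.0))
```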

  • 11.
    Linard, Alexis
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Torre, Ilaria
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Tumova, Jana
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
Inference of Multi-Class STL Specifications for Multi-Label Human-Robot Encounters. 2022. In: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2022, p. 1305-1311. Conference paper (Refereed)
    Abstract [en]

    This paper is interested in formalizing human trajectories in human-robot encounters. Inspired by robot navigation tasks in human-crowded environments, we consider the case where a human and a robot walk towards each other, and where humans have to avoid colliding with the incoming robot. Further, humans may describe different behaviors, ranging from being in a hurry/minimizing completion time to maximizing safety. We propose a decision tree-based algorithm to extract STL formulae from multi-label data. Our inference algorithm learns STL specifications from data containing multiple classes, where instances can be labelled by one or many classes. We base our evaluation on a dataset of trajectories collected through an online study reproducing human-robot encounters.

  • 12.
    Linard, Alexis
    et al.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. KTH Royal Inst Technol, Div Robot Percept & Learning, SE-10044 Stockholm, Sweden.
    Torre, Ilaria
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Steen, Anders
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Tumova, Jana
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
Formalizing Trajectories in Human-Robot Encounters via Probabilistic STL Inference. 2021. In: 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2021, p. 9857-9862. Conference paper (Refereed)
    Abstract [en]

    In this paper, we are interested in formalizing human trajectories in human-robot encounters. We consider a particular case where a human and a robot walk towards each other. A question that arises is whether, when, and how humans will deviate from their trajectory to avoid a collision. These human trajectories can then be used to generate socially acceptable robot trajectories. To model these trajectories, we propose a data-driven algorithm to extract a formal specification expressed in Signal Temporal Logic with probabilistic predicates. We evaluated our method on trajectories collected through an online study where participants had to avoid colliding with a robot in a shared environment. Further, we demonstrate that probabilistic STL is a suitable formalism to depict human behavior, choices and preferences in specific scenarios of social navigation.

  • 13.
    McGinn, Conor
    et al.
    Trinity College Dublin.
    Torre, Ilaria
    Trinity College Dublin.
Can you Tell the Robot by the Voice? An Exploratory Study on the Role of Voice in the Perception of Robots. 2019. In: 14th ACM/IEEE International Conference on Human-Robot Interaction, HRI 2019, Daegu, South Korea, March 11-14, 2019, 2019, p. 211-221. Conference paper (Refereed)
  • 14.
    Melsión, Gaspar Isaac
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Torre, Ilaria
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Vidal, E.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
Using Explainability to Help Children Understand Gender Bias in AI. 2021. In: Proceedings of Interaction Design and Children, IDC 2021, Association for Computing Machinery (ACM), 2021, p. 87-99. Conference paper (Refereed)
    Abstract [en]

Machine learning systems have become ubiquitous in our society. This has raised concerns about the potential discrimination that these systems might exert due to unconscious bias present in the data, for example regarding gender and race. Whilst this issue has been proposed as an essential subject to be included in the new AI curricula for schools, research has shown that it is a difficult topic for students to grasp. We propose an educational platform tailored to raise awareness of gender bias in supervised learning, with the novelty of using Grad-CAM as an explainability technique that enables the classifier to visually explain its own predictions. Our study demonstrates that preadolescents (N=78, age 10-14) significantly improve their understanding of the concept of bias in terms of gender discrimination, increasing their ability to recognize biased predictions when they interact with the interpretable model, highlighting its suitability for educational programs.
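For readers unfamiliar with Grad-CAM, the explainability technique named in the abstract, here is a minimal, generic PyTorch sketch; the backbone model, target layer, and input are placeholders and are not the educational platform described in the paper.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Generic classifier and its last convolutional block (placeholders).
model = models.resnet18(weights=None).eval()
target_layer = model.layer4

activations, gradients = {}, {}
target_layer.register_forward_hook(
    lambda m, i, o: activations.update(value=o))
target_layer.register_full_backward_hook(
    lambda m, gi, go: gradients.update(value=go[0]))

def grad_cam(image, class_idx):
    """Return a heatmap showing which image regions drove the prediction."""
    logits = model(image)                    # forward pass stores activations
    model.zero_grad()
    logits[0, class_idx].backward()          # backward pass stores gradients
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)  # GAP of grads
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze().detach()

heatmap = grad_cam(torch.randn(1, 3, 224, 224), class_idx=0)
```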

  • 15.
    Orthmann, Bastian
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Torre, Ilaria
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
Sounding Robots: Design and Evaluation of Auditory Displays for Unintentional Human-Robot Interaction. 2023. In: ACM Transactions on Human-Robot Interaction, E-ISSN 2573-9522, Vol. 12, no 4, article id 49. Article in journal (Refereed)
    Abstract [en]

    Non-verbal communication is important in HRI, particularly when humans and robots do not need to actively engage in a task together, but rather they co-exist in a shared space. Robots might still need to communicate states such as urgency or availability, and where they intend to go, to avoid collisions and disruptions. Sounds could be used to communicate such states and intentions in an intuitive and non-disruptive way. Here, we propose a multi-layer classification system for displaying various robot information simultaneously via sound. We first conceptualise which robot features could be displayed (robot size, speed, availability for interaction, urgency, and directionality); we then map them to a set of audio parameters. The designed sounds were then evaluated in five online studies, where people listened to the sounds and were asked to identify the associated robot features. The sounds were generally understood as intended by participants, especially when they were evaluated one feature at a time, and partially when they were evaluated two features simultaneously. The results of these evaluations suggest that sounds can be successfully used to communicate robot states and intended actions implicitly and intuitively.
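As a toy illustration of the kind of feature-to-sound mapping the abstract describes, the sketch below maps normalised robot features to a few audio parameters; every mapping direction and range is an invented assumption, not the design evaluated in the paper.

```python
def lerp(lo, hi, x):
    """Linear interpolation for a feature x clamped to [0, 1]."""
    return lo + (hi - lo) * max(0.0, min(1.0, x))

def robot_state_to_sound(size, speed, urgency):
    """Map normalised robot features to audio parameters.

    All directions (e.g. bigger robot -> lower pitch, higher urgency ->
    faster pulses) and ranges are illustrative assumptions only.
    """
    return {
        "pitch_hz": lerp(880.0, 220.0, size),      # larger robot, lower pitch
        "pulse_rate_hz": lerp(1.0, 8.0, urgency),  # more urgent, faster pulses
        "tempo_scale": lerp(0.8, 1.6, speed),      # faster robot, faster tempo
    }

print(robot_state_to_sound(size=0.7, speed=0.4, urgency=0.9))
```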

  • 16.
    Romeo, Marta
    et al.
    Heriot Watt Univ, Sch Math & Comp Sci, Edinburgh, Midlothian, Scotland.;Univ Manchester, Sch Comp Sci, Manchester, Lancs, England..
    Torre, Ilaria
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. Chalmers Univ Technol, Div Interact Design & Software Engn, Gothenburg, Sweden.
    Le Maguer, Sebastien
    Trinity Coll Dublin, ADAPT Ctr, Dublin, Ireland..
    Cangelosi, Angelo
    Univ Manchester, Sch Comp Sci, Manchester, Lancs, England..
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
Putting Robots in Context: Challenging the Influence of Voice and Empathic Behaviour on Trust. 2023. In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 2045-2050. Conference paper (Refereed)
    Abstract [en]

    Trust is essential for social interactions, including those between humans and social artificial agents, such as robots. Several robot-related factors can contribute to the formation of trust. However, previous work has often treated trust as an absolute concept, whereas it is highly context-dependent, and it is possible that some robot-related features will influence trust in some contexts, but not in others. In this paper, we present the results of two video-based online studies aimed at investigating the role of robot voice and empathic behaviour on trust formation in a general context as well as in a task-specific context. We found that voice influences trust in the specific context, with no effect of voice or empathic behaviour in the general context. Thus, context mediated whether robot-related features play a role in people's trust formation towards robots.

  • 17.
    Schuppe, Georg Friedrich
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Torre, Ilaria
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Tumova, Jana
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
Follow my Advice: Assume-Guarantee Approach to Task Planning with Human in the Loop. Manuscript (preprint) (Other academic)
    Abstract [en]

We focus on correct-by-design robot task planning from finite Linear Temporal Logic (LTLf) specifications with a human in the loop. Since provable guarantees are difficult to obtain unconditionally, we take an assume-guarantee perspective. Along with guarantees on the robot's task satisfaction, we compute the weakest sufficient assumptions on the human's behavior. We approach the problem via a stochastic game and leverage algorithmic synthesis of the weakest sufficient assumptions. We turn the assumptions into runtime advice to be communicated to the human. We conducted an online user study and showed that the robot is perceived as safer, more intelligent and more compliant with our approach than a robot giving more frequent advice corresponding to stronger assumptions. In addition, we show that our approach leads to fewer violations of the specification than not communicating with the participant at all.

  • 18.
    Székely, Éva
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Gustafsson, Joakim
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Torre, Ilaria
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
Prosody-controllable gender-ambiguous speech synthesis: a tool for investigating implicit bias in speech perception. 2023. In: Interspeech 2023, International Speech Communication Association, 2023, p. 1234-1238. Conference paper (Refereed)
    Abstract [en]

    This paper proposes a novel method to develop gender-ambiguous TTS, which can be used to investigate hidden gender bias in speech perception. Our aim is to provide a tool for researchers to conduct experiments on language use associated with specific genders. Ambiguous voices can also be beneficial for virtual assistants, to help reduce stereotypes and increase acceptance. Our approach uses a multi-speaker embedding in a neural TTS engine, combining two corpora recorded by a male and a female speaker to achieve a gender-ambiguous timbre. We also propose speaker-disentangled prosody control to ensure that the timbre is robust across a range of prosodies and enable more expressive speech. We optimised the output using an SSL-based network trained on hundreds of speakers. We conducted perceptual evaluations on the settings that were judged most ambiguous by the network, which showed that listeners perceived the speech samples as gender-ambiguous, also in prosody-controlled conditions.
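The general idea of obtaining a gender-ambiguous timbre from a multi-speaker model can be sketched as interpolation between two speaker embeddings; the embedding size, normalisation, and the synthesis call below are assumptions for illustration, not the authors' system.

```python
import numpy as np

def blend_speaker_embeddings(emb_a, emb_b, alpha=0.5):
    """Interpolate between two speaker embeddings.

    alpha = 0 reproduces speaker A's timbre, alpha = 1 speaker B's;
    intermediate values give a blended (potentially ambiguous) timbre.
    Renormalisation assumes the model uses unit-norm speaker embeddings.
    """
    mix = (1.0 - alpha) * emb_a + alpha * emb_b
    return mix / np.linalg.norm(mix)

# Placeholder 256-dimensional embeddings for a female and a male corpus speaker.
emb_female = np.random.randn(256); emb_female /= np.linalg.norm(emb_female)
emb_male = np.random.randn(256); emb_male /= np.linalg.norm(emb_male)

ambiguous = blend_speaker_embeddings(emb_female, emb_male, alpha=0.5)
# A multi-speaker TTS engine would then condition synthesis on `ambiguous`,
# e.g. tts.synthesize(text, speaker_embedding=ambiguous)  # hypothetical API
```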

  • 19.
    Torre, Ilaria
    University of York.
Production and perception of smiling voice. 2014. In: Proceedings of the First Postgraduate and Academic Researchers in Linguistics at York Conference (PARLAY 2013), 2014. Conference paper (Refereed)
  • 20.
    Torre, Ilaria
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. Trinity Coll Dublin, Dublin D02 PN40, Ireland..
    Carrigan, Emma
    Trinity Coll Dublin, Dublin D02 PN40, Ireland..
    Domijan, Katarina
    Maynooth Univ, Maynooth, Kildare, Ireland..
    McDonnell, Rachel
    Trinity Coll Dublin, ADAPT Res Ctr, Dublin 2, Ireland..
    Harte, Naomi
    Trinity Coll Dublin, ADAPT Res Ctr, Dublin 2, Ireland..
The Effect of Audio-Visual Smiles on Social Influence in a Cooperative Human-Agent Interaction Task. 2021. In: ACM Transactions on Computer-Human Interaction, ISSN 1073-0516, E-ISSN 1557-7325, Vol. 28, no 6, article id 44. Article in journal (Refereed)
    Abstract [en]

    Emotional expressivity is essential for human interactions, informing both perception and decision-making. Here, we examine whether creating an audio-visual emotional channel mismatch influences decision-making in a cooperative task with a virtual character. We created a virtual character that was either congruent in its emotional expression (smiling in the face and voice) or incongruent (smiling in only one channel). People (N = 98) evaluated the character in terms of valence and arousal in an online study; then, visitors in a museum played the "lunar survival task" with the character over three experiments (N = 597, 78, 101, respectively). Exploratory results suggest that multi-modal expressions are perceived, and reacted upon, differently than unimodal expressions, supporting previous theories of audio-visual integration.

  • 21.
    Torre, Ilaria
    et al.
    Trinity College Dublin.
    Carrigan, Emma
    Trinity College Dublin.
    McCabe, Killian
    ADAPT Research Centre.
    McDonnell, Rachel
    Trinity College Dublin.
    Harte, Naomi
    Trinity College Dublin.
Survival at the Museum: A Cooperation Experiment with Emotionally Expressive Virtual Characters. 2018. In: Proceedings of the 2018 on International Conference on Multimodal Interaction, 2018, p. 423-427. Conference paper (Refereed)
  • 22.
    Torre, Ilaria
    et al.
    Trinity College Dublin.
    Carrigan, Emma
    Trinity College Dublin.
    McDonnell, Rachel
    Trinity College Dublin.
    Domijan, Katarina
    Maynooth University.
    McCabe, Killian
    ADAPT Research Centre.
    Harte, Naomi
    Trinity College Dublin.
The Effect of Multimodal Emotional Expression and Agent Appearance on Trust in Human-Agent Interaction. 2019. In: Motion, Interaction and Games, Association for Computing Machinery, 2019. Conference paper (Refereed)
  • 23.
    Torre, Ilaria
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Deichler, Anna
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
    Nicholson, Matthew
    Trin Coll Dublin, ADAPT Res Ctr, Dublin, Ireland..
    McDonnell, Rachel
    Trin Coll Dublin, Sch Comp Sci & Stat, Dublin, Ireland..
    Harte, Naomi
    Trin Coll Dublin, Sch Elect Elect Engn, Dublin, Ireland..
To smile or not to smile: The effect of mismatched emotional expressions in a Human-Robot cooperative task. 2022. In: 2022 31st IEEE International Conference on Robot and Human Interactive Communication (IEEE RO-MAN 2022), Institute of Electrical and Electronics Engineers (IEEE), 2022, p. 8-13. Conference paper (Refereed)
    Abstract [en]

Emotional expressivity is essential for successful Human-Robot Interaction. However, robots often have different levels of expressivity in their face and voice. Here we ask whether this modality mismatch influences human behaviour and perception of the robot. Participants played a cooperative task with a robot that displayed matched and mismatched smiling expressions in the face and voice. Emotional expressivity did not influence acceptance of the robot's recommendations or subjective evaluations of the robot. However, we found that the robot had overall a higher social influence than a virtual character, and was evaluated more positively.

  • 24.
    Torre, Ilaria
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Dogan, Fethiye Irmak
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Kontogiorgos, Dimosthenis
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
Voice, Embodiment, and Autonomy as Identity Affordances. 2021. In: HRI '21 Companion: Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, 2021. Conference paper (Refereed)
    Abstract [en]

    Perceived robot identity has not been discussed thoroughly in Human-Robot Interaction. In particular, very few works have explored how humans tend to perceive robots that migrate through a variety of media and devices. In this paper, we discuss some of the open challenges for artificial robot identity stemming from the robotic features of voice, embodiment, and autonomy. How does a robot's voice affect perceived robot gender identity, and can we use this knowledge to fight injustice? And how do robot autonomy and decisions affect the mental image humans form of the robot? These, among others, are open questions we wish to bring researchers and designers’ attention to, in order to influence best practices on the timely topic of artificial agent identity.

  • 25.
    Torre, Ilaria
    et al.
    Trinity College Dublin.
    Goslin, Jeremy
    University of Plymouth.
    White, Laurence
    Newcastle University.
If your device could smile: People trust happy-sounding artificial agents more. 2020. In: Computers in Human Behavior, ISSN 0747-5632, E-ISSN 1873-7692, Vol. 105, article id 106215. Article in journal (Refereed)
    Abstract [en]

While it is clear that artificial agents that are able to express emotions increase trust in Human-Machine Interaction, most studies looking at this effect concentrated on the expression of emotions through the visual channel, e.g. facial expressions. However, emotions can be expressed in the vocal channel too, yet the relationship between trust and vocally expressive agents has not yet been investigated. We use a game theory paradigm to examine the influence of smiling in the voice on trusting behavior towards a virtual agent, who responds either trustworthily or untrustworthily in an investment game. We found that a smiling voice increases trust, and that this effect persists over time, despite the accumulation of clear evidence regarding the agent's level of trustworthiness in a negotiated interaction. Smiling voices maintain this benefit even in the face of behavioral evidence of untrustworthiness.
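For context, the investment (trust) game paradigm can be sketched as follows; the tripling multiplier is the standard textbook setup, and the return fractions for trustworthy/untrustworthy agents are illustrative assumptions rather than the study's actual parameters.

```python
import random

def play_round(invested, trustworthy, multiplier=3):
    """One round of the investment (trust) game.

    The participant invests some amount, which is multiplied before reaching
    the agent; a trustworthy agent returns more than was invested, an
    untrustworthy one returns less. The return fractions are illustrative.
    """
    pot = invested * multiplier
    share = random.uniform(0.5, 0.8) if trustworthy else random.uniform(0.0, 0.3)
    returned = pot * share
    return returned, returned - invested   # payback and participant's net gain

# Toy usage: ten rounds against a trustworthy, happy-sounding agent.
for _ in range(10):
    payback, net = play_round(invested=5, trustworthy=True)
    print(f"returned {payback:.1f} (net {net:+.1f})")
```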

  • 26.
    Torre, Ilaria
    et al.
    Trinity College Dublin.
    Goslin, Jeremy
    University of Plymouth.
    White, Laurence
    University of Plymouth.
    Zanatto, Debora
    University of Plymouth.
Trust in artificial voices: A "congruency effect" of first impressions and behavioural experience. 2018. In: Proceedings of APAScience '18, 2018. Conference paper (Refereed)
    Abstract [en]

Societies rely on trustworthy communication in order to function, and the need for trust clearly extends to human-machine communication. Therefore, it is essential to design machines to elicit trust, so as to make interactions with them acceptable and successful. However, while there is a substantial literature on first impressions of trustworthiness based on various characteristics, including voice, not much is known about the trust development process. Are first impressions maintained over time? Or are they influenced by the experience of an agent's behaviour? We addressed these questions in three experiments using the "iterated investment game", a methodology derived from game theory that allows implicit measures of trust to be collected over time. Participants played the game with various agents having different voices: in the first experiment, participants played with a computer agent that had either a Standard Southern British English accent or a Liverpool accent; in the second experiment, they played with a computer agent that had either an SSBE or a Birmingham accent; in the third experiment, they played with a robot that had either a natural or a synthetic voice. All these agents behaved either trustworthily or untrustworthily. In all three experiments, participants trusted the agent with one voice more when it was trustworthy, and the agent with the other voice more when it was untrustworthy. This suggests that participants might change their trusting behaviour based on the congruency of the agent's behaviour with the participant's first impression. Implications for human-machine interaction design are discussed.

  • 27.
    Torre, Ilaria
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Holk, Simon
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Carrigan, Emma
    Trinity College Dublin.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    McDonnell, Rachel
    Trinity College Dublin.
    Harte, Naomi
    Trinity College Dublin.
Dimensional perception of a 'smiling McGurk effect'. 2021. In: 2021 9th International Conference on Affective Computing and Intelligent Interaction (ACII), Institute of Electrical and Electronics Engineers (IEEE), 2021. Conference paper (Refereed)
    Abstract [en]

Multisensory integration influences emotional perception, as the McGurk effect demonstrates for the communication between humans. Human physiology implicitly links the production of visual features with other modes like the audio channel: face muscles responsible for a smiling face also stretch the vocal cords, which results in a characteristic smiling voice. For artificial agents capable of multimodal expression, this linkage is modeled explicitly. In our study, we observe the influence of the visual and audio channels on the perception of the agent's emotional state. We created two virtual characters to control for anthropomorphic appearance. We recorded videos of these agents either with matching or mismatching emotional expression in the audio and visual channel. In an online study we measured the agent's perceived valence and arousal. Our results show that a matched smiling voice and smiling face increase both dimensions of the Circumplex model of emotions: ratings of valence and arousal grow. When the channels present conflicting information, any type of smiling results in higher arousal ratings, but only the visual channel increases the perceived valence. When engineers are constrained in their design choices, we suggest they should give precedence to conveying the artificial agent's emotional state through the visual channel.

  • 28.
    Torre, Ilaria
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Holk, Simon
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Yadollahi, Elmira
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    McDonnell, R.
    Trinity College Dublin, Dublin, Ireland.
    Harte, N.
    Trinity College Dublin, Dublin, Ireland.
Smiling in the Face and Voice of Avatars and Robots: Evidence for a 'smiling McGurk Effect'. 2022. In: IEEE Transactions on Affective Computing, E-ISSN 1949-3045, p. 1-12. Article in journal (Refereed)
    Abstract [en]

    Multisensory integration influences emotional perception, as the McGurk effect demonstrates for the communication between humans. Human physiology implicitly links the production of visual features with other modes like the audio channel: Face muscles responsible for a smiling face also stretch the vocal cords that result in a characteristic smiling voice. For artificial agents capable of multimodal expression, this linkage is modeled explicitly. In our studies, we observe the influence of visual and audio channels on the perception of the agents' emotional expression. We created videos of virtual characters and social robots either with matching or mismatching emotional expressions in the audio and visual channels. In two online studies, we measured the agents' perceived valence and arousal. Our results consistently lend support to the ‘emotional McGurk effect' hypothesis, according to which face transmits valence information, and voice transmits arousal. When dealing with dynamic virtual characters, visual information is enough to convey both valence and arousal, and thus audio expressivity need not be congruent. When dealing with robots with fixed facial expressions, however, both visual and audio information need to be present to convey the intended expression.

  • 29.
    Torre, Ilaria
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. Chalmers Univ Technol, Dept Comp Sci & Engn, Gothenburg, Sweden.
    Lagerstedt, Erik
    Univ Skövde, Sch Informat, Skövde, Sweden..
    Dennler, Nathaniel
    Univ Southern Calif, Dept Comp Sci, Los Angeles, CA 90007 USA..
    Seaborn, Katie
    Tokyo Inst Technol, Dept Ind Engn & Econ, Tokyo, Japan..
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Székely, Éva
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Speech, Music and Hearing, TMH.
Can a gender-ambiguous voice reduce gender stereotypes in human-robot interactions? 2023. In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 106-112. Conference paper (Refereed)
    Abstract [en]

When a robot is deployed, its physical characteristics, role, and tasks are often fixed. Such factors can also be associated with gender stereotypes among humans, which then transfer to the robots. One factor that can induce gendering but is comparatively easy to change is the robot's voice. Designing the voice in a way that interferes with fixed factors might therefore be a way to reduce gender stereotypes in human-robot interaction contexts. To this end, we conducted a video-based online study to investigate how factors that might inspire gendering of a robot interact. In particular, we investigated how giving the robot a gender-ambiguous voice can affect perception of the robot. We compared assessments (n=111) of videos in which a robot's body presentation and occupation mis/matched with human gender stereotypes. We found evidence that a gender-ambiguous voice can reduce gendering of a robot endowed with stereotypically feminine or masculine attributes. The results can inform more just robot design while opening new questions regarding the phenomenon of robot gendering.

  • 30.
    Torre, Ilaria
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Latupeirissa, Adrian Benigno
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    McGinn, Conor
    Trinity College Dublin.
How context shapes the appropriateness of a robot's voice. 2020. In: 29th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2020, Institute of Electrical and Electronics Engineers (IEEE), 2020, p. 215-222, article id 9223449. Conference paper (Refereed)
    Abstract [en]

    Social robots have a recognizable physical appearance, a distinct voice, and interact with users in specific contexts. Previous research has suggested a 'matching hypothesis', which seeks to rationalise how people judge a robot's appropriateness for a task by its appearance. Other research has extended this to cover combinations of robot voices and appearances. In this paper, we examine the missing connection between robot voice, robot appearance, and deployment context. In so doing, we asked participants to match a robot image to a voice within a defined interaction context. We selected widely available social robots, identified task contexts they are used in, and manipulated the voices in terms of gender, naturalness, and accent. We found that the task context mediates the 'matching hypothesis'. People consistently selected a robot based on a vocal feature for a certain context, and a different robot based on the same vocal feature for another context. We suggest that robot voice design should take advantage of current technology that enables the creation and tuning of custom voices. They are a flexible tool to increase perception of appropriateness, which has a positive influence on Human-Robot Interaction. 

  • 31.
    Torre, Ilaria
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Le Maguer, Sébastien
    Trinity College Dublin.
Should robots have accents? 2020. In: 29th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2020, Institute of Electrical and Electronics Engineers (IEEE), 2020, p. 208-214, article id 9223599. Conference paper (Refereed)
    Abstract [en]

Accents are vocal features that immediately tell a listener whether a speaker comes from their same place, i.e. whether they share a social group. This in-groupness is important, as people tend to prefer interacting with others who belong to their same groups. Accents also evoke attitudinal responses based on their supposed prestigious status. These accent-based perceptions might affect interactions between humans and robots. Yet, very few studies so far have investigated the effect of accented robot speakers on users' perceptions and behaviour, and none have collected users' explicit preferences on robot accents. In this paper we present results from a survey of over 500 British speakers, who indicated what accent they would like a robot to have. The biggest proportion of participants wanted a robot to have a Standard Southern British English (SSBE) accent, followed by an Irish accent. Crucially, very few people wanted a robot with their same accent, or with a machine-like voice. These explicit preferences might not turn out to predict more successful interactions, also because of the unrealistic expectations that such human-like vocal features might generate in a user. Nonetheless, it seems that people have an idea of how their artificial companions should sound, and this preference should be considered when designing them.

  • 32.
    Torre, Ilaria
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Linard, Alexis
    Steen, Anders
    Tumová, Jana
    Leite, Iolanda
Should Robots Chicken? How Anthropomorphism and Perceived Autonomy Influence Trajectories in a Game-Theoretic Problem. 2021. In: Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery, 2021, p. 370-379. Conference paper (Refereed)
    Abstract [en]

Two people walking towards each other on a collision course is an everyday problem of human-human interaction. In spite of the different environmental and individual factors that might jeopardise successful human trajectories, people are generally skilled at avoiding crashing into each other. However, it is not clear if the same strategies will apply when a human is on a collision course with a robot, nor which (if any) robot-related factors will influence the human's decision to swerve or not. In this work, we present the results of an online study where participants walked towards a virtual robot that differed in terms of anthropomorphism and perceived autonomy, and had to decide whether to swerve or continue straight. The experiment was inspired by the game-theoretic game of chicken. We found that people performed more swerving actions when they believed the robot to be teleoperated by another participant. When they swerved, they also swerved closer to the robot with high levels of human-likeness, and farther away from the robot with a low anthropomorphism score, suggesting a higher uncertainty about the mechanical-looking robot's intentions. These results are discussed in the context of socially-aware robot navigation, and will be used to design novel algorithms for robot trajectories that take robot-related differences into account.
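For reference, the game of chicken that inspired the study can be written as a 2x2 payoff matrix; the payoff values below are conventional illustrative numbers, not parameters from the experiment.

```python
# Actions: 0 = swerve, 1 = go straight. payoffs[(row, col)] = (row payoff, col payoff).
# Mutual "straight" (a crash) is worst for both; swerving alone costs only a little.
payoffs = {
    (0, 0): (0, 0),      # both swerve
    (0, 1): (-1, 1),     # row swerves, column goes straight
    (1, 0): (1, -1),
    (1, 1): (-10, -10),  # collision
}

def best_response(opponent_action, player):
    """Best action for `player` (0 = row, 1 = column) given the other's action."""
    def payoff(a):
        pair = (a, opponent_action) if player == 0 else (opponent_action, a)
        return payoffs[pair][player]
    return max([0, 1], key=payoff)

# Pure-strategy equilibria: one agent swerves while the other goes straight.
print(best_response(opponent_action=1, player=0))  # -> 0 (swerve)
print(best_response(opponent_action=0, player=0))  # -> 1 (go straight)
```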

  • 33.
    Torre, Ilaria
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Tuncer, Sylvaine
    Kings Coll London, London, England..
    McDuff, Daniel
    Microsoft Res, Redmond, WA USA..
    Czerwinski, Mary
    Microsoft Res, Redmond, WA USA..
Exploring the Effects of Virtual Agents' Smiles on Human-Agent Interaction: A Mixed-Methods Study. 2021. In: 2021 9th International Conference on Affective Computing and Intelligent Interaction (ACII), Institute of Electrical and Electronics Engineers (IEEE), 2021. Conference paper (Refereed)
    Abstract [en]

Artificial agents' smiling behaviour is likely to influence their likeability and the quality of user experience. While studies of human interaction highlight the importance of smile dynamics, this feature is often lacking in artificial agents, presenting a design opportunity. We developed a virtual motivational therapist with four smiling behaviours, varying in terms of quality and dynamism. We video-recorded experimental sessions with participants who posed as patients in a therapy session. The data were analysed combining quantitative and qualitative methods, focusing on participants' own facial expressions during the interaction. Results suggest that the condition driven by data from a real therapist, where smiles are dynamic and occur at specific moments, is the most effective. We further discuss the particular importance of smile as a multipurpose emotional display in human-machine interaction.

  • 34.
    Torre, Ilaria
    et al.
    University of Plymouth.
    White, Laurence
    University of Plymouth.
    Goslin, Jeremy
    University of Plymouth.
Behavioural mediation of prosodic cues to implicit judgements of trustworthiness. 2016. In: Proceedings of the Eighth International Conference on Speech Prosody 2016, ISCA, 2016. Conference paper (Refereed)
  • 35.
    Winkle, Katie
    et al.
Uppsala Universitet, Lägerhyddsvägen 1, 752 37 Uppsala, Uppland, Sweden.
    Lagerstedt, Erik
University of Skövde, Högskolevägen 1, 541 46 Skövde, Västra Götaland, Sweden.
    Torre, Ilaria
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Offenwanger, Anna
Université Paris-Saclay, CNRS, Inria, LISN, Rue du Belvédère, 91400 Orsay, Île-de-France, France.
15 Years of (Who)man Robot Interaction: Reviewing the H in Human-Robot Interaction. 2023. In: ACM Transactions on Human-Robot Interaction, E-ISSN 2573-9522, Vol. 12, no 3, article id 3571718. Article in journal (Refereed)
    Abstract [en]

Recent work identified a concerning trend of disproportional gender representation in research participants in Human-Computer Interaction (HCI). Motivated by the fact that Human-Robot Interaction (HRI) shares many participant practices with HCI, we explored whether this trend is mirrored in our field. By producing a dataset covering participant gender representation in all 684 full papers published at the HRI conference from 2006-2021, we identify current trends in HRI research participation. We find an over-representation of men in research participants to date, as well as inconsistent and/or incomplete gender reporting, which typically engages in a binary treatment of gender at odds with published best practice guidelines. We further examine if and how participant gender has been considered in user studies to date, in line with current discourse surrounding the importance and/or potential risks of gender-based analyses. Finally, we complement this with a survey of HRI researchers to examine correlations between who is doing the research and who is taking part, to further reflect on factors which seemingly influence gender bias in research participation across different sub-fields of HRI. Through our analysis, we identify areas for improvement, but also reason for optimism, and derive some practical suggestions for HRI researchers going forward.
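A hypothetical sketch of the kind of tabulation such a dataset enables; the column names, categories, and numbers below are invented for illustration and are not the paper's data.

```python
import pandas as pd

# Hypothetical per-paper participant counts; column names are invented.
papers = pd.DataFrame({
    "year":  [2006, 2013, 2021],
    "men":   [12, 30, 48],
    "women": [8, 22, 52],
    "gender_not_reported": [0, 10, 4],
})

totals = papers[["men", "women", "gender_not_reported"]].sum()
print((totals / totals.sum()).round(2))   # overall share of each category
print(papers.assign(                      # per-paper proportion of men among reported
    prop_men=papers.men / (papers.men + papers.women)))
```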

  • 36.
    Zhang, Brian J.
    et al.
    Oregon State Univ, Collaborat Robot & Intelligent Syst Inst, Corvallis, OR 97331 USA..
    Orthmann, Bastian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Torre, Ilaria
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Fick, Jason
    Oregon State Univ, Mus Dept, Corvallis, OR 97331 USA..
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Fitter, Naomi T.
    Oregon State Univ, Collaborat Robot & Intelligent Syst Inst, Corvallis, OR 97331 USA..
Hearing it Out: Guiding Robot Sound Design through Design Thinking. 2023. In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 2064-2071. Conference paper (Refereed)
    Abstract [en]

    Sound can benefit human-robot interaction, but little work has explored questions on the design of nonverbal sound for robots. The unique confluence of sound design and robotics expertise complicates these questions, as most roboticists do not have sound design expertise, necessitating collaborations with sound designers. We sought to understand how roboticists and sound designers approach the problem of robot sound design through two qualitative studies. The first study followed discussions by robotics researchers in focus groups, where these experts described motivations to add robot sound for various purposes. The second study guided music technology students through a generative activity for robot sound design; these sound designers in-training demonstrated high variability in design intent, processes, and inspiration. To unify the two perspectives, we structured recommendations through the design thinking framework, a popular design process. The insights provided in this work may aid roboticists in implementing helpful sounds in their robots, encourage sound designers to enter into collaborations on robot sound, and give key tips and warnings to both.
