Leite, Iolanda
Publications (7 of 7)
Li, R., van Almkerk, M., van Waveren, S., Carter, E. & Leite, I. (2019). Comparing Human-Robot Proxemics between Virtual Reality and the Real World. In: HRI '19: 2019 14TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION. Paper presented at 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), MAR 11-14, 2019, Daegu, SOUTH KOREA (pp. 431-439). IEEE
Comparing Human-Robot Proxemics between Virtual Reality and the Real World
2019 (English) In: HRI '19: 2019 14TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, IEEE, 2019, p. 431-439. Conference paper, Published paper (Refereed)
Abstract [en]

Virtual Reality (VR) can greatly benefit Human-Robot Interaction (HRI) as a tool to effectively iterate across robot designs. However, possible system limitations of VR could influence the results such that they do not fully reflect real-life encounters with robots. In order to better deploy VR in HRI, we need to establish a basic understanding of what the differences are between HRI studies in the real world and in VR. This paper investigates the differences between the real life and VR with a focus on proxemic preferences, in combination with exploring the effects of visual familiarity and spatial sound within the VR experience. Results suggested that people prefer closer interaction distances with a real, physical robot than with a virtual robot in VR. Additionally, the virtual robot was perceived as more discomforting than the real robot, which could result in the differences in proxemics. Overall, these results indicate that the perception of the robot has to be evaluated before the interaction can be studied. However, the results also suggested that VR settings with different visual familiarities are consistent with each other in how they affect HRI proxemics and virtual robot perceptions, indicating the freedom to study HRI in various scenarios in VR. The effect of spatial sound in VR drew a more complex picture and thus calls for more in-depth research to understand its influence on HRI in VR.

Place, publisher, year, edition, pages
IEEE, 2019
Series
ACM IEEE International Conference on Human-Robot Interaction, ISSN 2167-2121
Keywords
Virtual Reality, Human-Robot Interaction, Proxemics
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-252424 (URN), 10.1109/HRI.2019.8673116 (DOI), 000467295400062 (), 2-s2.0-85063987676 (Scopus ID), 978-1-5386-8555-6 (ISBN)
Conference
14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), MAR 11-14, 2019, Daegu, SOUTH KOREA
Note

QC 20190614

Available from: 2019-06-14 Created: 2019-06-14 Last updated: 2019-06-14. Bibliographically approved
Correia, F., Mascarenhas, S. F., Gomes, S., Arriaga, P., Leite, I., Prada, R., . . . Paiva, A. (2019). Exploring Prosociality in Human-Robot Teams. In: HRI '19: 2019 14TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION. Paper presented at 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), MAR 11-14, 2019, Daegu, SOUTH KOREA (pp. 143-151). IEEE
Exploring Prosociality in Human-Robot Teams
2019 (English) In: HRI '19: 2019 14TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, IEEE, 2019, p. 143-151. Conference paper, Published paper (Refereed)
Abstract [en]

This paper explores the role of prosocial behaviour when people team up with robots in a collaborative game that presents a social dilemma similar to a public goods game. An experiment was conducted with the proposed game in which each participant joined a team with a prosocial robot and a selfish robot. During 5 rounds of the game, each player chooses between contributing to the team goal (cooperate) and contributing to their individual goal (defect). The prosociality level of the robots only affects their strategies to play the game, as one always cooperates and the other always defects. We conducted a user study at the office of a large corporation with 70 participants where we manipulated the game result (winning or losing) in a between-subjects design. Results revealed two important considerations: (1) the prosocial robot was rated more positively in terms of its social attributes than the selfish robot, regardless of the game result; (2) the perception of competence, the responsibility attribution (blame/credit), and the preference for a future partner revealed significant differences only in the losing condition. These results raise important considerations for the creation of robotic partners, the understanding of group dynamics and, from a more general perspective, the promotion of a prosocial society.

Place, publisher, year, edition, pages
IEEE, 2019
Series
ACM IEEE International Conference on Human-Robot Interaction, ISSN 2167-2121
Keywords
Groups, Social Dilemma, Public Goods Game, Prosocial, Selfish
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-252423 (URN), 10.1109/HRI.2019.8673299 (DOI), 000467295400020 (), 2-s2.0-85063984815 (Scopus ID), 978-1-5386-8555-6 (ISBN)
Conference
14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), MAR 11-14, 2019, Daegu, SOUTH KOREA
Note

QC 20190716

Available from: 2019-07-16 Created: 2019-07-16 Last updated: 2019-07-16. Bibliographically approved
Sibirtseva, E., Ghadirzadeh, A., Leite, I., Björkman, M. & Kragic, D. (2019). Exploring Temporal Dependencies in Multimodal Referring Expressions with Mixed Reality. In: Virtual, Augmented and Mixed Reality. Multimodal Interaction 11th International Conference, VAMR 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, July 26–31, 2019, Proceedings. Paper presented at 11th International Conference on Virtual, Augmented and Mixed Reality, VAMR 2019, held as part of the 21st International Conference on Human-Computer Interaction, HCI International 2019; Orlando; United States; 26 July 2019 through 31 July 2019 (pp. 108-123). Springer Verlag
Exploring Temporal Dependencies in Multimodal Referring Expressions with Mixed Reality
2019 (English) In: Virtual, Augmented and Mixed Reality. Multimodal Interaction 11th International Conference, VAMR 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, July 26–31, 2019, Proceedings, Springer Verlag, 2019, p. 108-123. Conference paper, Published paper (Refereed)
Abstract [en]

In collaborative tasks, people rely on verbal and non-verbal cues simultaneously to communicate with each other. For human-robot interaction to run smoothly and naturally, a robot should be equipped with the ability to robustly disambiguate referring expressions. In this work, we propose a model that can disambiguate multimodal fetching requests using modalities such as head movements, hand gestures, and speech. We analysed the acquired data from mixed reality experiments and formulated a hypothesis that modelling temporal dependencies of events in these three modalities increases the model’s predictive power. We evaluated our model within a Bayesian framework for interpreting referring expressions, with and without exploiting the temporal prior.
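The last sentence describes Bayesian fusion of multimodal evidence with and without a temporal prior. The following is only a rough, hypothetical Python sketch of that idea under a naive-Bayes combination assumption, not the authors' model; all names and example numbers are invented for illustration.

import numpy as np

def fuse_modalities(likelihoods, temporal_prior=None):
    # likelihoods: dict mapping a modality name ("head", "gesture", "speech")
    #   to a 1-D array with one likelihood per candidate object.
    # temporal_prior: optional prior P(object | event history); uniform if omitted.
    n = len(next(iter(likelihoods.values())))
    prior = np.ones(n) / n if temporal_prior is None else np.asarray(temporal_prior, dtype=float)
    posterior = prior.copy()
    for obs in likelihoods.values():      # naive-Bayes style evidence combination
        posterior *= np.asarray(obs, dtype=float)
    return posterior / posterior.sum()    # normalise to a distribution

# Three candidate objects; evidence from head movement, hand gesture, and speech.
evidence = {
    "head":    np.array([0.5, 0.3, 0.2]),
    "gesture": np.array([0.6, 0.3, 0.1]),
    "speech":  np.array([0.4, 0.4, 0.2]),
}
print(fuse_modalities(evidence))                                   # uniform prior
print(fuse_modalities(evidence, temporal_prior=[0.2, 0.7, 0.1]))   # with a temporal prior

Comparing the two calls mirrors the evaluation described above: the same multimodal evidence is interpreted with and without a temporal prior.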

Place, publisher, year, edition, pages
Springer Verlag, 2019
Series
Lecture Notes in Artificial Intelligence, ISSN 0302-9743 ; 11575
Keywords
Human-robot interaction, Mixed reality, Multimodal interaction, Referring expressions, Human computer interaction, Human robot interaction, Bayesian frameworks, Collaborative tasks, Hand gesture, Head movements, Multi-modal, Multi-Modal Interactions, Predictive power
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-262467 (URN), 10.1007/978-3-030-21565-1_8 (DOI), 2-s2.0-85069730416 (Scopus ID), 9783030215644 (ISBN)
Conference
11th International Conference on Virtual, Augmented and Mixed Reality, VAMR 2019, held as part of the 21st International Conference on Human-Computer Interaction, HCI International 2019; Orlando; United States; 26 July 2019 through 31 July 2019
Note

QC 20191017

Available from: 2019-10-17 Created: 2019-10-17 Last updated: 2019-10-17. Bibliographically approved
Irfan, B., Ramachandran, A., Spaulding, S., Glas, D. F., Leite, I. & Koay, K. L. (2019). Personalization in Long-Term Human-Robot Interaction. In: HRI '19: 2019 14TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION. Paper presented at 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), MAR 11-14, 2019, Daegu, SOUTH KOREA (pp. 685-686). IEEE
Personalization in Long-Term Human-Robot Interaction
2019 (English) In: HRI '19: 2019 14TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, IEEE, 2019, p. 685-686. Conference paper, Published paper (Refereed)
Abstract [en]

For practical reasons, most human-robot interaction (HRI) studies focus on short-term interactions between humans and robots. However, such studies do not capture the difficulty of sustaining engagement and interaction quality across long-term interactions. Many real-world robot applications will require repeated interactions and relationship-building over the long term, and personalization and adaptation to users will be necessary to maintain user engagement and to build rapport and trust between the user and the robot. This full-day workshop brings together perspectives from a variety of research areas, including companion robots, elderly care, and educational robots, in order to provide a forum for sharing and discussing innovations, experiences, works-in-progress, and best practices which address the challenges of personalization in long-term HRI.

Place, publisher, year, edition, pages
IEEE, 2019
Series
ACM IEEE International Conference on Human-Robot Interaction, ISSN 2167-2121
Keywords
Personalization, Long-Term Interaction, Human-Robot Interaction, Adaptation, Long-Term Memory, User Modeling, User Recognition
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-252425 (URN), 10.1109/HRI.2019.8673076 (DOI), 000467295400156 (), 2-s2.0-85064003811 (Scopus ID), 978-1-5386-8555-6 (ISBN)
Conference
14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), MAR 11-14, 2019, Daegu, SOUTH KOREA
Note

QC 20190715

Available from: 2019-07-15 Created: 2019-07-15 Last updated: 2019-07-15. Bibliographically approved
van Waveren, S., Carter, E. J. & Leite, I. (2019). Take one for the team: The effects of error severity in collaborative tasks with social robots. In: IVA 2019 - Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents. Paper presented at 19th ACM International Conference on Intelligent Virtual Agents, IVA 2019; Paris; France; 2 July 2019 through 5 July 2019 (pp. 151-158). Association for Computing Machinery (ACM)
Take one for the team: The effects of error severity in collaborative tasks with social robots
2019 (English) In: IVA 2019 - Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents, Association for Computing Machinery (ACM), 2019, p. 151-158. Conference paper, Published paper (Refereed)
Abstract [en]

We explore the effects of robot failure severity (no failure vs. low-impact vs. high-impact) on people's subjective ratings of the robot. We designed an escape room scenario in which one participant teams up with a remotely-controlled Pepper robot. We manipulated the robot's performance at the end of the game: The robot would either correctly follow the participant's instructions (control condition), the robot would fail but people could still complete the task of escaping the room (low-impact condition), or the robot's failure would cause the game to be lost (high-impact condition). Results showed no difference across conditions for people's ratings of the robot in terms of warmth, competence, and discomfort. However, people in the low-impact condition had significantly less faith in the robot's robustness in future escape room scenarios. Open-ended questions revealed interesting trends that are worth pursuing in the future: people may view task performance as a team effort and may blame their team or themselves more for the robot failure in the case of a high-impact failure than a low-impact failure.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2019
Keywords
Failure, Human-robot interaction, Socially collaborative robots
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-262609 (URN), 10.1145/3308532.3329475 (DOI), 2-s2.0-85069747331 (Scopus ID), 9781450366724 (ISBN)
Conference
19th ACM International Conference on Intelligent Virtual Agents, IVA 2019; Paris; France; 2 July 2019 through 5 July 2019
Note

QC 20191022

Available from: 2019-10-22 Created: 2019-10-22 Last updated: 2019-10-22. Bibliographically approved
Sibirtseva, E., Kontogiorgos, D., Nykvist, O., Karaoguz, H., Leite, I., Gustafson, J. & Kragic, D. (2018). A Comparison of Visualisation Methods for Disambiguating Verbal Requests in Human-Robot Interaction. In: 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). Paper presented at ROMAN 2018.
A Comparison of Visualisation Methods for Disambiguating Verbal Requests in Human-Robot Interaction
2018 (English) In: 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2018. Conference paper, Published paper (Refereed)
Abstract [en]

Picking up objects requested by a human user is a common task in human-robot interaction. When multiple objects match the user's verbal description, the robot needs to clarify which object the user is referring to before executing the action. Previous research has focused on perceiving the user's multimodal behaviour to complement verbal commands, or on minimising the number of follow-up questions to reduce task time. In this paper, we propose a system for reference disambiguation based on visualisation and compare three methods to disambiguate natural language instructions. In a controlled experiment with a YuMi robot, we investigated real-time augmentations of the workspace in three conditions - head-mounted display, projector, and a monitor as the baseline - using objective measures such as time and accuracy, and subjective measures like engagement, immersion, and display interference. Significant differences were found in accuracy and engagement between the conditions, but no differences were found in task time. Despite the higher error rates in the head-mounted display condition, participants found that modality more engaging than the other two, but overall showed a preference for the projector condition over the monitor and head-mounted display conditions.

National Category
Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-235548 (URN), 10.1109/ROMAN.2018.8525554 (DOI), 978-1-5386-7981-4 (ISBN)
Conference
ROMAN 2018
Note

QC 20181207

Available from: 2018-09-29 Created: 2018-09-29 Last updated: 2018-12-07. Bibliographically approved
Vijayan, A. E., Alexanderson, S., Beskow, J. & Leite, I. (2018). Using Constrained Optimization for Real-Time Synchronization of Verbal and Nonverbal Robot Behavior. In: 2018 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA). Paper presented at IEEE International Conference on Robotics and Automation (ICRA), MAY 21-25, 2018, Brisbane, AUSTRALIA (pp. 1955-1961). IEEE Computer Society
Using Constrained Optimization for Real-Time Synchronization of Verbal and Nonverbal Robot Behavior
2018 (English) In: 2018 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), IEEE Computer Society, 2018, p. 1955-1961. Conference paper, Published paper (Refereed)
Abstract [en]

Most motion re-targeting techniques are grounded in virtual character animation research, which means that they typically assume that the target embodiment has unconstrained joint angular velocities. However, because robots often do have such constraints, traditional re-targeting approaches can introduce irregular delays in the robot's motion. With the goal of ensuring synchronization between verbal and nonverbal behavior, this paper proposes an optimization framework for processing re-targeted motion sequences that addresses constraints such as joint angle and angular velocity limits. The proposed framework was evaluated on a humanoid robot using both objective and subjective metrics. While the analysis of the joint motion trajectories provides evidence that our framework successfully performs the desired modifications to ensure verbal and nonverbal behavior synchronization, results from a perceptual study showed that participants found the robot motion generated by our method more natural, elegant and lifelike than a control condition.
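To make the constraint-handling idea concrete, here is a minimal Python sketch, not the paper's actual framework: it uses SciPy's SLSQP solver on a single joint, and all names, limits, and trajectory values are invented for illustration. It finds the trajectory closest to a re-targeted one while respecting joint-angle bounds and a per-step angular-velocity limit.

import numpy as np
from scipy.optimize import minimize

def constrain_trajectory(q_target, q_min, q_max, v_max, dt):
    # Least-squares projection of a re-targeted single-joint trajectory onto the
    # set of trajectories satisfying angle bounds and |dq/dt| <= v_max.
    T = len(q_target)

    def objective(q):
        return np.sum((q - q_target) ** 2)   # stay close to the original motion

    # |q[t+1] - q[t]| <= v_max * dt, written as two inequality constraints (>= 0)
    cons = (
        [{"type": "ineq", "fun": lambda q, t=t: v_max * dt - (q[t + 1] - q[t])} for t in range(T - 1)]
        + [{"type": "ineq", "fun": lambda q, t=t: v_max * dt + (q[t + 1] - q[t])} for t in range(T - 1)]
    )
    bounds = [(q_min, q_max)] * T
    x0 = np.clip(q_target, q_min, q_max)     # start from a bounds-respecting guess
    res = minimize(objective, x0, method="SLSQP", bounds=bounds, constraints=cons)
    return res.x

# Example: a re-targeted wrist swing that exceeds both the angle and velocity limits.
raw = np.array([0.0, 0.9, 1.8, 0.9, 0.0])
print(constrain_trajectory(raw, q_min=-1.5, q_max=1.5, v_max=2.0, dt=0.1))

In this toy version, optimizing over the whole sequence at once avoids the abrupt frame-by-frame clipping that would otherwise distort the motion's timing.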

Place, publisher, year, edition, pages
IEEE Computer Society, 2018
Series
IEEE International Conference on Robotics and Automation ICRA, ISSN 1050-4729
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-237162 (URN), 000446394501077 (), 2-s2.0-85063159854 (Scopus ID), 978-1-5386-3081-5 (ISBN)
Conference
IEEE International Conference on Robotics and Automation (ICRA), MAY 21-25, 2018, Brisbane, AUSTRALIA
Note

QC 20181024

Available from: 2018-10-24 Created: 2018-10-24 Last updated: 2019-08-20. Bibliographically approved