Publications (10 of 13)
Kontogiorgos, D., van Waveren, S., Wallberg, O., Abelho Pereira, A. T., Leite, I. & Gustafson, J. (2020). Embodiment Effects in Interactions with Failing Robots. Paper presented at the SIGCHI Conference on Human Factors in Computing Systems, CHI ’20, April 25–30, 2020, Honolulu, HI, USA. ACM Digital Library
Embodiment Effects in Interactions with Failing Robots
2020 (English) Conference paper, Published paper (Refereed)
Abstract [en]

The increasing use of robots in real-world applications will inevitably cause users to encounter more failures in interactions. While there has been a longstanding effort to bring human-likeness to robots, how robot embodiment affects users’ perception of failures remains largely unexplored. In this paper, we extend prior work on robot failures by assessing the impact that embodiment and failure severity have on people’s behaviours and their perception of robots. Our findings show that when using a smart-speaker embodiment, failures negatively affect users’ intention to interact frequently with the device, but not when using a human-like robot embodiment. Additionally, users rate the human-like robot significantly higher in terms of perceived intelligence and social presence. Our results further suggest that in higher-severity situations, human-likeness is distracting and detrimental to the interaction. Drawing on quantitative findings, we discuss benefits and drawbacks of embodiment in robot failures that occur in guided tasks.

Place, publisher, year, edition, pages
ACM Digital Library, 2020
National Category
Interaction Technologies
Identifiers
urn:nbn:se:kth:diva-267232 (URN) 10.1145/3313831.3376372 (DOI) 978-1-4503-6708-0 (ISBN)
Conference
SIGCHI Conference on Human Factors in Computing Systems, CHI ’20, April 25–30, 2020, Honolulu, HI, USA
Note

QC 20200214

Available from: 2020-02-04 Created: 2020-02-04 Last updated: 2020-02-14. Bibliographically approved
Vinuesa, R., Azizpour, H., Leite, I., Balaam, M., Dignum, V., Domisch, S., . . . Nerini, F. F. (2020). The role of artificial intelligence in achieving the Sustainable Development Goals. Nature Communications, 11(1), Article ID 233.
The role of artificial intelligence in achieving the Sustainable Development Goals
2020 (English) In: Nature Communications, ISSN 2041-1723, E-ISSN 2041-1723, Vol. 11, no. 1, article id 233. Article, review/survey (Refereed) Published
Abstract [en]

The emergence of artificial intelligence (AI) and its progressively wider impact on many sectors requires an assessment of its effect on the achievement of the Sustainable Development Goals. Using a consensus-based expert elicitation process, we find that AI can enable the accomplishment of 134 targets across all the goals, but it may also inhibit 59 targets. However, current research foci overlook important aspects. The fast development of AI needs to be supported by the necessary regulatory insight and oversight for AI-based technologies to enable sustainable development. Failure to do so could result in gaps in transparency, safety, and ethical standards.

Place, publisher, year, edition, pages
Nature Research, 2020
National Category
Earth and Related Environmental Sciences
Identifiers
urn:nbn:se:kth:diva-267774 (URN) 10.1038/s41467-019-14108-y (DOI) 000511916800011 (ISI) 31932590 (PubMedID) 2-s2.0-85077785900 (Scopus ID)
Note

QC 20200302

Available from: 2020-03-02 Created: 2020-03-02 Last updated: 2020-03-04
Li, R., van Almkerk, M., van Waveren, S., Carter, E. & Leite, I. (2019). Comparing Human-Robot Proxemics between Virtual Reality and the Real World. In: HRI '19: 2019 14TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION. Paper presented at 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), MAR 11-14, 2019, Daegu, SOUTH KOREA (pp. 431-439). IEEE
Comparing Human-Robot Proxemics between Virtual Reality and the Real World
2019 (English) In: HRI '19: 2019 14TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, IEEE, 2019, p. 431-439. Conference paper, Published paper (Refereed)
Abstract [en]

Virtual Reality (VR) can greatly benefit Human-Robot Interaction (HRI) as a tool to iterate effectively across robot designs. However, possible system limitations of VR could influence the results such that they do not fully reflect real-life encounters with robots. In order to better deploy VR in HRI, we need to establish a basic understanding of the differences between HRI studies in the real world and in VR. This paper investigates the differences between real life and VR with a focus on proxemic preferences, in combination with exploring the effects of visual familiarity and spatial sound within the VR experience. Results suggested that people prefer closer interaction distances with a real, physical robot than with a virtual robot in VR. Additionally, the virtual robot was perceived as more discomforting than the real robot, which could explain the differences in proxemics. Overall, these results indicate that the perception of the robot has to be evaluated before the interaction can be studied. However, the results also suggested that VR settings with different visual familiarities are consistent with each other in how they affect HRI proxemics and virtual robot perceptions, indicating the freedom to study HRI in various scenarios in VR. The effect of spatial sound in VR drew a more complex picture and thus calls for more in-depth research to understand its influence on HRI in VR.

Place, publisher, year, edition, pages
IEEE, 2019
Series
ACM IEEE International Conference on Human-Robot Interaction, ISSN 2167-2121
Keywords
Virtual Reality, Human-Robot Interaction, Proxemics
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-252424 (URN) 10.1109/HRI.2019.8673116 (DOI) 000467295400062 (ISI) 2-s2.0-85063987676 (Scopus ID) 978-1-5386-8555-6 (ISBN)
Conference
14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), MAR 11-14, 2019, Daegu, SOUTH KOREA
Note

QC 20190614

Available from: 2019-06-14 Created: 2019-06-14 Last updated: 2019-06-14. Bibliographically approved
Correia, F., Mascarenhas, S. F., Gomes, S., Arriaga, P., Leite, I., Prada, R., . . . Paiva, A. (2019). Exploring Prosociality in Human-Robot Teams. In: HRI '19: 2019 14TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION. Paper presented at 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), MAR 11-14, 2019, Daegu, SOUTH KOREA (pp. 143-151). IEEE
Exploring Prosociality in Human-Robot Teams
2019 (English) In: HRI '19: 2019 14TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, IEEE, 2019, p. 143-151. Conference paper, Published paper (Refereed)
Abstract [en]

This paper explores the role of prosocial behaviour when people team up with robots in a collaborative game that presents a social dilemma similar to a public goods game. An experiment was conducted with the proposed game, in which each participant joined a team with a prosocial robot and a selfish robot. During five rounds of the game, each player chooses between contributing to the team goal (cooperating) or contributing to their individual goal (defecting). The prosociality level of the robots only affects their strategies for playing the game: one always cooperates and the other always defects. We conducted a user study at the office of a large corporation with 70 participants, where we manipulated the game result (winning or losing) in a between-subjects design. Results revealed two important findings: (1) the prosocial robot was rated more positively in terms of its social attributes than the selfish robot, regardless of the game result; (2) the perception of competence, the responsibility attribution (blame/credit), and the preference for a future partner revealed significant differences only in the losing condition. These results raise important considerations for the creation of robotic partners, the understanding of group dynamics and, from a more general perspective, the promotion of a prosocial society. A minimal sketch of the payoff structure of such a dilemma follows.

Place, publisher, year, edition, pages
IEEE, 2019
Series
ACM IEEE International Conference on Human-Robot Interaction, ISSN 2167-2121
Keywords
Groups, Social Dilemma, Public Goods Game, Prosocial, Selfish
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-252423 (URN) 10.1109/HRI.2019.8673299 (DOI) 000467295400020 (ISI) 2-s2.0-85063984815 (Scopus ID) 978-1-5386-8555-6 (ISBN)
Conference
14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), MAR 11-14, 2019, Daegu, SOUTH KOREA
Note

QC 20190716

Available from: 2019-07-16 Created: 2019-07-16 Last updated: 2019-07-16. Bibliographically approved
Sibirtseva, E., Ghadirzadeh, A., Leite, I., Björkman, M. & Kragic, D. (2019). Exploring Temporal Dependencies in Multimodal Referring Expressions with Mixed Reality. In: Virtual, Augmented and Mixed Reality. Multimodal Interaction: 11th International Conference, VAMR 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, July 26–31, 2019, Proceedings. Paper presented at the 11th International Conference on Virtual, Augmented and Mixed Reality, VAMR 2019, held as part of the 21st International Conference on Human-Computer Interaction, HCI International 2019, Orlando, United States, 26 July 2019 through 31 July 2019 (pp. 108-123). Springer Verlag
Exploring Temporal Dependencies in Multimodal Referring Expressions with Mixed Reality
2019 (English) In: Virtual, Augmented and Mixed Reality. Multimodal Interaction: 11th International Conference, VAMR 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, July 26–31, 2019, Proceedings, Springer Verlag, 2019, p. 108-123. Conference paper, Published paper (Refereed)
Abstract [en]

In collaborative tasks, people rely on both verbal and non-verbal cues simultaneously to communicate with each other. For human-robot interaction to run smoothly and naturally, a robot should be equipped with the ability to robustly disambiguate referring expressions. In this work, we propose a model that can disambiguate multimodal fetching requests using modalities such as head movements, hand gestures, and speech. We analysed data acquired from mixed-reality experiments and formulated the hypothesis that modelling temporal dependencies of events in these three modalities increases the model’s predictive power. We evaluated our model within a Bayesian framework for interpreting referring expressions, with and without exploiting the temporal prior. A sketch of this kind of fusion follows.

Place, publisher, year, edition, pages
Springer Verlag, 2019
Series
Lecture Notes in Artificial Intelligence, ISSN 0302-9743; 11575
Keywords
Human-robot interaction, Mixed reality, Multimodal interaction, Referring expressions, Human-computer interaction, Bayesian frameworks, Collaborative tasks, Hand gestures, Head movements, Predictive power
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-262467 (URN) 10.1007/978-3-030-21565-1_8 (DOI) 2-s2.0-85069730416 (Scopus ID) 9783030215644 (ISBN)
Conference
11th International Conference on Virtual, Augmented and Mixed Reality, VAMR 2019, held as part of the 21st International Conference on Human-Computer Interaction, HCI International 2019; Orlando; United States; 26 July 2019 through 31 July 2019
Note

QC 20191017

Available from: 2019-10-17 Created: 2019-10-17 Last updated: 2020-01-15. Bibliographically approved
van Waveren, S., Björklund, L., Carter, E. & Leite, I. (2019). Knock on Wood: The Effects of Material Choice on the Perception of Social Robots. In: Lecture Notes in Artificial Intelligence (LNAI). Paper presented at the 11th International Conference on Social Robotics (ICSR 2019).
Knock on Wood: The Effects of Material Choice on the Perception of Social Robots
2019 (English) In: Lecture Notes in Artificial Intelligence (LNAI), 2019. Conference paper, Published paper (Refereed)
Abstract [en]

Many people who interact with robots in the near future will not have prior experience, and they are likely to intuitively form their first impressions of the robot based on its appearance. This paper explores the effects of component material on people’s perception of the robots in terms of social attributes and willingness to interact. Participants watched videos of three robots with different outer materials: wood, synthetic fur, and plastic. The results showed that people rated the perceived warmth of a plastic robot lower than a wooden or furry robot. Ratings of perceived competence and discomfort did not differ between the three robots.

National Category
Engineering and Technology
Identifiers
urn:nbn:se:kth:diva-263923 (URN) 10.1007/978-3-030-35888-4_20 (DOI) 2-s2.0-85076515288 (Scopus ID)
Conference
11th International Conference on Social Robotics (ICSR 2019)
Funder
Swedish Research Council, 2017-05189
Note

QC 20191122

Available from: 2019-11-19 Created: 2019-11-19 Last updated: 2020-05-11. Bibliographically approved
Irfan, B., Ramachandran, A., Spaulding, S., Glas, D. F., Leite, I. & Koay, K. L. (2019). Personalization in Long-Term Human-Robot Interaction. In: HRI '19: 2019 14TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION. Paper presented at 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), MAR 11-14, 2019, Daegu, SOUTH KOREA (pp. 685-686). IEEE
Personalization in Long-Term Human-Robot Interaction
2019 (English) In: HRI '19: 2019 14TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, IEEE, 2019, p. 685-686. Conference paper, Published paper (Refereed)
Abstract [en]

For practical reasons, most human-robot interaction (HRI) studies focus on short-term interactions between humans and robots. However, such studies do not capture the difficulty of sustaining engagement and interaction quality across long-term interactions. Many real-world robot applications will require repeated interactions and relationship-building over the long term, and personalization and adaptation to users will be necessary to maintain user engagement and to build rapport and trust between the user and the robot. This full-day workshop brings together perspectives from a variety of research areas, including companion robots, elderly care, and educational robots, in order to provide a forum for sharing and discussing innovations, experiences, works-in-progress, and best practices which address the challenges of personalization in long-term HRI.

Place, publisher, year, edition, pages
IEEE, 2019
Series
ACM IEEE International Conference on Human-Robot Interaction, ISSN 2167-2121
Keywords
Personalization, Long-Term Interaction, Human-Robot Interaction, Adaptation, Long-Term Memory, User Modeling, User Recognition
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-252425 (URN) 10.1109/HRI.2019.8673076 (DOI) 000467295400156 (ISI) 2-s2.0-85064003811 (Scopus ID) 978-1-5386-8555-6 (ISBN)
Conference
14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), MAR 11-14, 2019, Daegu, SOUTH KOREA
Note

QC 20190715

Available from: 2019-07-15 Created: 2019-07-15 Last updated: 2019-07-15. Bibliographically approved
van Waveren, S., Carter, E. J. & Leite, I. (2019). Take one for the team: The effects of error severity in collaborative tasks with social robots. In: IVA 2019 - Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents. Paper presented at the 19th ACM International Conference on Intelligent Virtual Agents, IVA 2019, Paris, France, 2 July 2019 through 5 July 2019 (pp. 151-158). Association for Computing Machinery (ACM)
Take one for the team: The effects of error severity in collaborative tasks with social robots
2019 (English) In: IVA 2019 - Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents, Association for Computing Machinery (ACM), 2019, p. 151-158. Conference paper, Published paper (Refereed)
Abstract [en]

We explore the effects of robot failure severity (no failure vs. low-impact vs. high-impact) on people's subjective ratings of the robot. We designed an escape room scenario in which one participant teams up with a remotely-controlled Pepper robot. We manipulated the robot's performance at the end of the game: the robot would either correctly follow the participant's instructions (control condition), fail in a way that still allowed people to complete the task of escaping the room (low-impact condition), or fail in a way that caused the game to be lost (high-impact condition). Results showed no difference across conditions in people's ratings of the robot in terms of warmth, competence, and discomfort. However, people in the low-impact condition had significantly less faith in the robot's robustness in future escape room scenarios. Open-ended questions revealed interesting trends that are worth pursuing in the future: people may view task performance as a team effort and may blame their team or themselves more for the robot's failure in the case of a high-impact failure than a low-impact one.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2019
Keywords
Failure, Human-robot interaction, Socially collaborative robots
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-262609 (URN) 10.1145/3308532.3329475 (DOI) 2-s2.0-85069747331 (Scopus ID) 9781450366724 (ISBN)
Conference
19th ACM International Conference on Intelligent Virtual Agents, IVA 2019; Paris; France; 2 July 2019 through 5 July 2019
Note

QC 20191022

Available from: 2019-10-22 Created: 2019-10-22 Last updated: 2019-10-22. Bibliographically approved
Sibirtseva, E., Kontogiorgos, D., Nykvist, O., Karaoǧuz, H., Leite, I., Gustafson, J. & Kragic, D. (2018). A Comparison of Visualisation Methods for Disambiguating Verbal Requests in Human-Robot Interaction. In: Proceedings 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) 2018: . Paper presented at 27th IEEE International Symposium on Robot and Human Interactive Communication (IEEE RO-MAN), Nanjing, PEOPLES R CHINA, AUG 27-31, 2018. IEEE
A Comparison of Visualisation Methods for Disambiguating Verbal Requests in Human-Robot Interaction
2018 (English) In: Proceedings 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) 2018, IEEE, 2018. Conference paper, Published paper (Refereed)
Abstract [en]

Picking up objects requested by a human user is a common task in human-robot interaction. When multiple objects match the user's verbal description, the robot needs to clarify which object the user is referring to before executing the action. Previous research has focused on perceiving the user's multimodal behaviour to complement verbal commands, or on minimising the number of follow-up questions to reduce task time. In this paper, we propose a system for reference disambiguation based on visualisation and compare three methods to disambiguate natural language instructions. In a controlled experiment with a YuMi robot, we investigated real-time augmentations of the workspace in three conditions (head-mounted display, projector, and a monitor as the baseline) using objective measures such as time and accuracy, and subjective measures like engagement, immersion, and display interference. Significant differences were found in accuracy and engagement between the conditions, but no differences were found in task time. Despite the higher error rates in the head-mounted display condition, participants found that modality more engaging than the other two, but overall showed a preference for the projector condition over the monitor and head-mounted display conditions.

Place, publisher, year, edition, pages
IEEE, 2018
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-235548 (URN) 10.1109/ROMAN.2018.8525554 (DOI) 000494315600008 (ISI) 2-s2.0-85050581339 (Scopus ID) 978-1-5386-7981-4 (ISBN)
Conference
27th IEEE International Symposium on Robot and Human Interactive Communication (IEEE RO-MAN), Nanjing, PEOPLES R CHINA, AUG 27-31, 2018
Note

QC 20181207. QC 20191219. QC 20200108

Available from: 2018-09-29 Created: 2018-09-29 Last updated: 2020-01-08. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-2212-4325
