KTH Publications
Publications (10 of 16)
van Waveren, S., Pek, C., Leite, I., Tumova, J. & Kragic, D. (2023). Generating Scenarios from High-Level Specifications for Object Rearrangement Tasks. In: 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023. Paper presented at 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023, Detroit, United States of America, Oct 1 2023 - Oct 5 2023 (pp. 11420-11427). Institute of Electrical and Electronics Engineers (IEEE)
Generating Scenarios from High-Level Specifications for Object Rearrangement Tasks
2023 (English). In: 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023, Institute of Electrical and Electronics Engineers (IEEE), 2023, pp. 11420-11427. Conference paper, Published paper (Refereed).
Abstract [en]

Rearranging objects is an essential skill for robots. To quickly teach robots new rearrangement tasks, we would like to generate training scenarios from high-level specifications that define the relative placement of objects for the task at hand. Ideally, to guide the robot's learning, we also want to be able to rank these scenarios according to their difficulty. Prior work has shown that generating diverse scenarios from specifications and providing the robot with easy-to-difficult samples can improve learning. Yet, existing scenario generation methods typically cannot generate diverse scenarios while controlling their difficulty. We address this challenge by conditioning generative models on spatial logic specifications to generate spatially structured scenarios that meet the specification and desired difficulty level. Our experiments showed that generative models are more effective and data-efficient than rejection sampling, and that the spatially structured scenarios can improve training on downstream tasks by orders of magnitude.
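
The abstract contrasts conditioning generative models on spatial logic with a rejection-sampling baseline. As a point of reference, here is a minimal sketch of such a baseline for a toy specification ("mug left of plate, within 0.3 m on a 1 m tabletop"); the predicate, object names, and thresholds are illustrative assumptions, not the paper's actual specification language.

    # Hypothetical rejection-sampling baseline: sample object placements
    # uniformly until one satisfies the spatial specification. The scenario
    # representation below is a toy stand-in for the paper's setup.
    import random

    def sample_placement():
        # Uniform placements on a 1 m x 1 m tabletop.
        return {obj: (random.uniform(0, 1), random.uniform(0, 1))
                for obj in ("mug", "plate")}

    def satisfies_spec(scene, max_dist=0.3):
        (mx, my), (px, py) = scene["mug"], scene["plate"]
        left_of = mx < px
        close = ((mx - px) ** 2 + (my - py) ** 2) ** 0.5 <= max_dist
        return left_of and close

    def rejection_sample(max_tries=10_000):
        # Wasteful when the specification is restrictive: most samples are
        # discarded, which is what a conditioned generative model avoids.
        for _ in range(max_tries):
            scene = sample_placement()
            if satisfies_spec(scene):
                return scene
        raise RuntimeError("no satisfying scenario found")

    print(rejection_sample())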

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-342642 (URN)
10.1109/IROS55552.2023.10341369 (DOI)
001136907804123 (ISI)
2-s2.0-85182525633 (Scopus ID)
Conference
2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023, Detroit, United States of America, Oct 1 2023 - Oct 5 2023
Note

Part of ISBN 9781665491907

QC 20240125

Available from: 2024-01-25. Created: 2024-01-25. Last updated: 2025-02-09. Bibliographically approved.
van Waveren, S., Rudling, R., Leite, I., Jensfelt, P. & Pek, C. (2023). Increasing perceived safety in motion planning for human-drone interaction. In: HRI 2023: Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 18th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2023, Stockholm, Sweden, Mar 13 2023 - Mar 16 2023 (pp. 446-455). Association for Computing Machinery (ACM)
Increasing perceived safety in motion planning for human-drone interaction
2023 (English). In: HRI 2023: Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2023, pp. 446-455. Conference paper, Published paper (Refereed).
Abstract [en]

Safety is crucial for autonomous drones to operate close to humans. Besides avoiding unwanted or harmful contact, people should also perceive the drone as safe. Existing safe motion planning approaches for autonomous robots, such as drones, have primarily focused on ensuring physical safety, e.g., by imposing constraints on motion planners. However, studies indicate that ensuring physical safety does not necessarily lead to perceived safety. Prior work in Human-Drone Interaction (HDI) shows that factors such as the drone's speed and distance to the human are important for perceived safety. Building on these works, we propose a parameterized control barrier function (CBF) that constrains the drone's maximum deceleration and minimum distance to the human, and we update its parameters based on people's ratings of perceived safety. We describe an implementation and evaluation of our approach. Results of a within-subject user study (N = 15) show that we can improve the perceived safety of a drone by adjusting to people individually.
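
As a reading aid, the sketch below shows the standard form a distance-based control barrier function can take in discrete time: h(x) >= 0 encodes "the drone keeps at least d_min from the human", and a candidate command is admissible only if h decays no faster than a rate alpha allows. The parameters d_min and alpha stand in for the quantities the paper tunes from perceived-safety ratings; the paper's actual parameterization (including the deceleration bound) may differ.

    # Minimal discrete-time CBF admissibility check, assuming a simple
    # single-integrator drone model; all names and numbers are illustrative.
    import math

    def h(drone_pos, human_pos, d_min):
        # Barrier value: positive iff the drone is farther than d_min away.
        return math.dist(drone_pos, human_pos) - d_min

    def cbf_allows(drone_pos, human_pos, velocity, dt, d_min, alpha):
        # Discrete-time CBF condition: h(x_next) >= (1 - alpha) * h(x).
        next_pos = [p + v * dt for p, v in zip(drone_pos, velocity)]
        return (h(next_pos, human_pos, d_min)
                >= (1 - alpha) * h(drone_pos, human_pos, d_min))

    # A drone 2 m from the human may approach slowly, but not dash at them.
    print(cbf_allows([0, 0], [2, 0], [0.2, 0], dt=0.1, d_min=1.0, alpha=0.2))   # True
    print(cbf_allows([0, 0], [2, 0], [12.0, 0], dt=0.1, d_min=1.0, alpha=0.2))  # False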

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
control barrier functions, human-drone interaction, motion planning, perceived safety
National Category
Computer Systems
Identifiers
urn:nbn:se:kth:diva-333381 (URN)
10.1145/3568162.3576966 (DOI)
001504959700049 (ISI)
2-s2.0-85150349732 (Scopus ID)
Conference
18th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2023, Stockholm, Sweden, Mar 13 2023 - Mar 16 2023
Note

Part of ISBN 9781450399647

QC 20230801

Available from: 2023-08-01. Created: 2023-08-01. Last updated: 2025-12-08. Bibliographically approved.
Harrison, K., Perugia, G., Correia, F., Somasundaram, K., van Waveren, S., Paiva, A. & Loutfi, A. (2023). The Imperfectly Relatable Robot: An Interdisciplinary Workshop on the Role of Failure in HRI. In: HRI 2023: Companion of the ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at 18th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2023, Stockholm, Sweden, Mar 13 2023 - Mar 16 2023 (pp. 917-919). Association for Computing Machinery (ACM)
The Imperfectly Relatable Robot: An Interdisciplinary Workshop on the Role of Failure in HRI
2023 (English). In: HRI 2023: Companion of the ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), 2023, pp. 917-919. Conference paper, Published paper (Refereed).
Abstract [en]

Focusing on failure to improve human-robot interactions represents a novel approach that calls into question human expectations of robots and poses ethical and methodological challenges to researchers. Fictional representations of robots (still, for many non-expert users, the primary source of expectations and assumptions about robots) often emphasize the ways in which robots surpass or perfect humans, rather than portraying them as fallible. Thus, encountering robots that come too close, drop items, or stop suddenly starts to close the gap between fiction and reality. These kinds of failures, if mitigated by explanation or recovery procedures, have the potential to make the robot a little more relatable and human-like. However, studying failures in human-robot interaction requires producing potentially difficult or uncomfortable interactions, in which having robots fail to behave as expected may seem counterintuitive and unethical. In this space, interdisciplinary conversations are key to untangling the multiple challenges and bringing themes of power and context into view. In this workshop, we invite researchers from across the disciplines to an interactive, interdisciplinary discussion of failure in social robotics. Topics for discussion include (but are not limited to) methodological and ethical challenges around studying failure in HRI, epistemological gaps in defining and understanding failure in HRI, sociocultural expectations around failure, and users' responses.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
Ethics, Failure, Faulty robots, Human-Robot Interaction, Interdisciplinary, Methodology, Social Robotics
National Category
Human Computer Interaction, Ethics
Identifiers
urn:nbn:se:kth:diva-333373 (URN)
10.1145/3568294.3579952 (DOI)
001054975700203 (ISI)
2-s2.0-85150440939 (Scopus ID)
Conference
18th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2023, Stockholm, Sweden, Mar 13 2023 - Mar 16 2023
Note

Part of ISBN 9781450399708

QC 20230801

Available from: 2023-08-01. Created: 2023-08-01. Last updated: 2023-10-17. Bibliographically approved.
van Waveren, S., Pek, C., Tumova, J. & Leite, I. (2022). Correct Me If I'm Wrong: Using Non-Experts to Repair Reinforcement Learning Policies. In: Proceedings of the 17th ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at the 17th ACM/IEEE International Conference on Human-Robot Interaction, March 7-10, 2022 (pp. 493-501). Institute of Electrical and Electronics Engineers (IEEE)
Correct Me If I'm Wrong: Using Non-Experts to Repair Reinforcement Learning Policies
2022 (English). In: Proceedings of the 17th ACM/IEEE International Conference on Human-Robot Interaction, Institute of Electrical and Electronics Engineers (IEEE), 2022, pp. 493-501. Conference paper, Published paper (Refereed).
Abstract [en]

Reinforcement learning has shown great potential for learning sequential decision-making tasks. Yet, it is difficult to anticipate all possible real-world scenarios during training, causing robots to inevitably fail in the long run. Many of these failures are due to variations in the robot's environment. Usually, experts are called in to correct the robot's behavior; however, some of these failures do not necessarily require an expert to solve them. In this work, we query non-experts online for help and explore 1) if and how non-experts can provide feedback to the robot after a failure and 2) how the robot can use this feedback to avoid such failures in the future by generating shields that restrict or correct its high-level actions. We demonstrate our approach on common daily scenarios of a simulated kitchen robot. The results indicate that non-experts can indeed understand and repair robot failures. Our generated shields accelerate learning and improve data-efficiency during retraining.
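
To make the shield idea concrete, here is a minimal sketch of an action shield applied to a high-level policy: before an action is executed, the shield overrides it if it matches a failure condition contributed through non-expert feedback. The rule format and kitchen actions are illustrative assumptions, not the paper's exact repair representation.

    # Hypothetical action shield: repair rules map a (state, action) failure
    # condition to a corrected high-level action.
    def shield(state, action, repair_rules):
        for rule in repair_rules:
            if rule["condition"](state, action):
                return rule["corrected_action"]
        return action  # no rule fired; the policy's action passes through

    # Example rule distilled from non-expert feedback: don't place items on
    # an occupied stove; use the counter instead.
    repair_rules = [{
        "condition": lambda s, a: a == "place_on_stove" and s["stove_occupied"],
        "corrected_action": "place_on_counter",
    }]

    state = {"stove_occupied": True}
    assert shield(state, "place_on_stove", repair_rules) == "place_on_counter"
    assert shield(state, "pick_up_pan", repair_rules) == "pick_up_pan"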

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
Series
ACM IEEE International Conference on Human-Robot Interaction, ISSN 2167-2121
Keywords
robot failure, policy repair, non-experts, shielded reinforcement learning
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-308441 (URN)
10.1109/HRI53351.2022.9889604 (DOI)
000869793600054 (ISI)
2-s2.0-85140707989 (Scopus ID)
Conference
17th ACM/IEEE International Conference on Human-Robot Interaction, March 7-10, 2022
Note

Part of proceedings: ISBN 978-1-6654-0731-1

QC 20220215

Available from: 2022-02-07. Created: 2022-02-07. Last updated: 2025-02-09. Bibliographically approved.
van Waveren, S. (2022). Leveraging Non-Experts and Formal Methods to Automatically Correct Robot Failures. In: Proceedings of the 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI '22). Paper presented at the 17th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI), March 7-10, 2022, held online (pp. 1182-1184). Institute of Electrical and Electronics Engineers (IEEE)
Leveraging Non-Experts and Formal Methods to Automatically Correct Robot Failures
2022 (English). In: Proceedings of the 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI '22), Institute of Electrical and Electronics Engineers (IEEE), 2022, pp. 1182-1184. Conference paper, Published paper (Refereed).
Abstract [en]

State-of-the-art robots are not yet fully equipped to automatically correct their policy when they encounter new situations during deployment. We argue that in common everyday robot tasks, failures may be resolved by knowledge that non-experts could provide. Our research aims to integrate elements of formal synthesis approaches into computational human-robot interaction to develop verifiable robots that can automatically correct their policy using non-expert feedback on the fly. Preliminary results from two online studies show that non-experts can indeed correct failures and that robots can use the feedback to automatically synthesize correction mechanisms to avoid failures.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
Series
ACM IEEE International Conference on Human-Robot Interaction, ISSN 2167-2121
Keywords
robot failure, policy repair, non-experts, shielded reinforcement learning
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-322470 (URN)
10.1109/HRI53351.2022.9889361 (DOI)
000869793600188 (ISI)
2-s2.0-85140752171 (Scopus ID)
Conference
17th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI), March 7-10, 2022, held online
Note

Part of proceedings: ISBN 978-1-6654-0731-1

QC 20221216

Available from: 2022-12-16. Created: 2022-12-16. Last updated: 2025-02-09. Bibliographically approved.
van Waveren, S. (2022). Towards Automatically Correcting Robot Behavior Using Non-Expert Feedback. (Doctoral dissertation). Stockholm: KTH Royal Institute of Technology
Towards Automatically Correcting Robot Behavior Using Non-Expert Feedback
2022 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Robots that operate in human environments need the capability to adapt their behavior to new situations. Most robots so far rely on pre-programmed behavior or machine learning algorithms trained offline with selected data. Due to the large number of possible situations robots might encounter, it becomes impractical to define or learn all behaviors before deployment, causing them to inevitably fail at some point in time. As a result of this inability to adapt to new situations, the robot might fail to successfully complete its task, or achieve a goal in a way that defies people's expectations or preferences. Ideally, robots need the ability to autonomously collect additional behaviors and constraints that enable them to correct their behaviors. The topic of this dissertation is robot behavior correction using feedback from non-experts, people who are not necessarily programmers or roboticists. We explore how non-experts can help robots recover when their plan or policy fails. Furthermore, working with and around humans, robots need to adapt to user preferences. For instance, users might prefer their autonomous vehicle to adopt a defensive driving style over an aggressive one, or someone might prefer their coffee mug to be placed on the coffee table left of their chair. In many everyday situations, robots will require additional rules that do not require technical knowledge. For instance, a rule that the robot should not place the coffee mug too close to the edge of the table, or that the robot might need to open the door of a cabinet first before it can place something in it. We propose an approach that leverages knowledge from non-experts to provide input to correct robot behaviors. We identify two main types of input: what the robot should do (task goals and constraints) and how the robot should achieve its task (preferences and decision-making). This dissertation explores this approach by drawing on human-robot interaction research on robot failures, crowdsourcing and machine learning for large-scale data collection and generation, and techniques from formal methods to ensure the safety and correctness of the robot. The work described in this dissertation is a step towards better understanding how we can design robots that can automatically correct their behavior using non-expert feedback, and what the challenges of non-expert robot behavior correction are.

Abstract [sv]

Robots that act in environments with humans must have the ability to adapt to new situations. Most robots have so far relied on pre-programmed behavior or behavior from machine learning trained offline. Because of the large number of possible situations a robot may find itself in, it is impractical to define or learn all behaviors before deployment, which means that the robot inevitably fails at some point. The result of the robot's inability to handle new situations is that it may fail at its task or fulfill its goal in a way other than what is expected or preferred. Ideally, the robot has the ability to autonomously collect additional behaviors and constraints that enable correct behavior. The topic of this dissertation is the correction of robot behavior using feedback from people who are not trained in programming or robotics, i.e., non-experts. We explore how non-experts can help robots recover when their plan or policy fails. Furthermore, robots that work with and around humans must be able to take users' preferences into account. For example, users may prefer defensive driving styles over aggressive driving styles in autonomous cars, or a user may prefer that their coffee mug be placed on the coffee table to the left of their chair. In many everyday situations, robots will need additional rules that do not require technical knowledge. For example, a rule stating that a coffee mug must not be placed too close to the edge of a table, or that the robot must open the door of a cabinet before something can be placed in it. We propose an approach that leverages knowledge from non-experts to provide a robot with input for correct behaviors. We identify two main types of input: what a robot should do (tasks and constraints), and how a robot should fulfill its task (preferences and decision-making). This dissertation explores this approach by drawing on research in human-robot interaction concerning failure, crowdsourcing, and machine learning for large-scale data collection and generation, as well as techniques from formal methods to guarantee safety and correctness. The work described in this dissertation is a step toward a better understanding of how we can design robots that correct their behavior automatically using feedback from non-experts, as well as of the challenges of non-expert correction of robot behavior.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2022. p. vii, 40
Series
TRITA-EECS-AVL ; 2022:73
Keywords
Non-expert robot correction, robot failure, human-robot interaction, robotics
National Category
Robotics and automation
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-321237 (URN)
978-91-8040-412-9 (ISBN)
Public defence
2022-12-05, Zoom: https://kth-se.zoom.us/j/61095601099, Kollegiesalen, Brinellvägen 6, Stockholm, 14:00 (English)
Opponent
Supervisors
Note

QC 20221109

Available from: 2022-11-09. Created: 2022-11-09. Last updated: 2025-10-30. Bibliographically approved.
Karlsson, J., van Waveren, S., Pek, C., Torre, I., Leite, I. & Tumova, J. (2021). Encoding Human Driving Styles in Motion Planning for Autonomous Vehicles. In: 2021 IEEE International Conference on Robotics and Automation (ICRA). Paper presented at 2021 IEEE International Conference on Robotics and Automation, ICRA 2021, 30 May 2021 through 5 June 2021, Xian, China (pp. 11262-11268). Institute of Electrical and Electronics Engineers (IEEE)
Encoding Human Driving Styles in Motion Planning for Autonomous Vehicles
2021 (English). In: 2021 IEEE International Conference on Robotics and Automation (ICRA), Institute of Electrical and Electronics Engineers (IEEE), 2021, pp. 11262-11268. Conference paper, Published paper (Refereed).
Abstract [en]

Driving styles play a major role in the acceptance and use of autonomous vehicles. Yet, existing motion planning techniques can often only incorporate simple driving styles that are modeled by the developers of the planner and not tailored to the passenger. We present a new approach to encoding human driving styles through the use of signal temporal logic and its robustness metrics. Specifically, we use a penalty structure that can be used in many motion planning frameworks, and calibrate its parameters to model different automated driving styles. We combine this penalty structure with a set of signal temporal logic formulas, based on the Responsibility-Sensitive Safety model, to generate trajectories that we expected to correlate with three different driving styles: aggressive, neutral, and defensive. An online study showed that people perceived different parameterizations of the motion planner as unique driving styles, and that most people tend to prefer a more defensive automated driving style, which correlated with their self-reported driving style.
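
As an illustration of the penalty idea, the sketch below computes the robustness of a single signal temporal logic formula ("always keep speed below v_max") over a trajectory and turns violations into a style-weighted cost. The formula, the weights, and their link to the Responsibility-Sensitive Safety model are simplified assumptions, not the paper's calibrated parameters.

    # STL robustness of G(v <= v_max) over a sampled speed trajectory is the
    # worst-case margin min_t (v_max - v(t)); positive means satisfied.
    def robustness_always_below(speeds, v_max):
        return min(v_max - v for v in speeds)

    def style_penalty(speeds, v_max, weight):
        # Only violations are penalized; the weight encodes driving style.
        rho = robustness_always_below(speeds, v_max)
        return 0.0 if rho >= 0 else weight * (-rho)

    trajectory = [8.0, 10.5, 9.0]  # speeds in m/s; briefly exceeds v_max
    for style, w in {"defensive": 10.0, "neutral": 3.0, "aggressive": 1.0}.items():
        print(style, style_penalty(trajectory, v_max=10.0, weight=w))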

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2021
Series
Proceedings - IEEE International Conference on Robotics and Automation, ISSN 1050-4729
Keywords
Autonomous vehicle navigation, Formal methods in robotics and automation, Human factors, Human-in-the-loop
National Category
Robotics and automation, Control Engineering, Computer Sciences
Identifiers
urn:nbn:se:kth:diva-310389 (URN)
10.1109/ICRA48506.2021.9561777 (DOI)
000765738801034 (ISI)
2-s2.0-85109997697 (Scopus ID)
Conference
2021 IEEE International Conference on Robotics and Automation, ICRA 2021, 30 May 2021 through 5 June 2021, Xian, China
Note

QC 20220502

Part of proceedings: ISBN 978-1-7281-9077-8

Available from: 2022-04-04. Created: 2022-04-04. Last updated: 2025-02-05. Bibliographically approved.
van Waveren, S., Carter, E., Örnberg, O. & Leite, I. (2021). Exploring Non-Expert Robot Programming Through Crowdsourcing. Frontiers in Robotics and AI, 8, Article ID 646002.
Exploring Non-Expert Robot Programming Through Crowdsourcing
2021 (English). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 8, article id 646002. Article in journal (Refereed), Published.
Abstract [en]

A longstanding barrier to deploying robots in the real world is the ongoing need to author robot behavior. Remote data collection, particularly crowdsourcing, is increasingly receiving interest. In this paper, we make the argument for scaling robot programming to the crowd and present an initial investigation of the feasibility of this proposed method. Using an off-the-shelf visual programming interface, non-experts created simple robot programs for two typical robot tasks (navigation and pick-and-place). Each task consisted of four subtasks requiring an increasing number of programming statements (if statements, while loops, variables) for successful completion of the programs. Initial findings of an online study (N = 279) indicate that non-experts, after minimal instruction, were able to create simple programs using an off-the-shelf visual programming interface. We discuss our findings and identify future avenues for this line of research.
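
For a sense of scale, here is what one of the harder subtasks might look like when a non-expert's block program is rendered as plain code: a loop, a conditional, and a counter variable driving a pick-and-place routine. The robot primitives are hypothetical stand-ins; the study itself used an off-the-shelf visual programming interface.

    # Minimal stand-in robot so the sketch runs; a real interface would
    # drive an actual robot or simulator.
    class Robot:
        def __init__(self):
            self.holding = None
        def pick(self, item):
            self.holding = item
        def is_holding(self, item):
            return self.holding == item
        def place(self, item, location):
            self.holding = None

    def pick_and_place(robot, items):
        count = 0                       # variable
        while items:                    # while loop
            item = items.pop()
            robot.pick(item)
            if robot.is_holding(item):  # if statement
                robot.place(item, "bin")
                count += 1
        return count

    print(pick_and_place(Robot(), ["cup", "block"]))  # -> 2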

Place, publisher, year, edition, pages
Frontiers Media SA, 2021
Keywords
human-robot interaction, non-expert robot programming, crowdsourcing, block-based programming, robots
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-300864 (URN)
10.3389/frobt.2021.646002 (DOI)
000685497100001 (ISI)
34395535 (PubMedID)
2-s2.0-85112482189 (Scopus ID)
Note

QC 20210923

Available from: 2021-09-23. Created: 2021-09-23. Last updated: 2025-02-09. Bibliographically approved.
Kontogiorgos, D., Abelho Pereira, A. T., Sahindal, B., van Waveren, S. & Gustafson, J. (2020). Behavioural Responses to Robot Conversational Failures. In: HRI '20: Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction. Paper presented at the International Conference on Human Robot Interaction (HRI), HRI '20, March 23–26, 2020, Cambridge, United Kingdom. ACM Digital Library
Behavioural Responses to Robot Conversational Failures
2020 (English). In: HRI '20: Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, ACM Digital Library, 2020. Conference paper, Published paper (Refereed).
Abstract [en]

Humans and robots will increasingly collaborate in domestic environments, which will cause users to encounter more failures in interactions. Robots should be able to infer conversational failures by detecting human users' behavioural and social signals. In this paper, we study and analyse these behavioural cues in response to robot conversational failures. Using a guided task corpus, where robot embodiment and time pressure are manipulated, we ask human annotators to estimate whether user affective states differ during various types of robot failures. We also train a random forest classifier to detect whether a robot failure has occurred and compare the results to human annotator benchmarks. Our findings show that human-like robots augment users' reactions to failures, as shown in users' visual attention, in comparison to non-human-like smart-speaker embodiments. The results further suggest that speech behaviours are utilised more in responses to failures when non-human-like designs are present. This is particularly important for robot failure detection mechanisms that may need to consider the robot's physical design in their failure detection models.
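
The classification step maps naturally onto a standard supervised pipeline. Below is a minimal sketch of training a random forest on per-window behavioural features to predict whether a failure occurred; the feature names and toy data are assumptions for illustration, not the corpus features used in the paper.

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Rows: [gaze_shift_rate, speech_onset_delay_s, smile_intensity]
    X = [
        [0.9, 2.1, 0.1], [0.8, 1.7, 0.2], [0.7, 2.4, 0.3],  # failure windows
        [0.2, 0.4, 0.6], [0.1, 0.5, 0.7], [0.3, 0.3, 0.5],  # smooth windows
    ]
    y = [1, 1, 1, 0, 0, 0]  # 1 = a robot failure occurred in this window

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.33, random_state=0, stratify=y)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))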

Place, publisher, year, edition, pages
ACM Digital Library, 2020
National Category
Other Engineering and Technologies
Identifiers
urn:nbn:se:kth:diva-267231 (URN)
10.1145/3319502.3374782 (DOI)
000570011000007 (ISI)
2-s2.0-85082009759 (Scopus ID)
Conference
International Conference on Human Robot Interaction (HRI), HRI ’20, March 23–26, 2020, Cambridge, United Kingdom
Note

QC 20200214

Available from: 2020-02-04. Created: 2020-02-04. Last updated: 2025-02-18. Bibliographically approved.
Kontogiorgos, D., van Waveren, S., Wallberg, O., Abelho Pereira, A. T., Leite, I. & Gustafson, J. (2020). Embodiment Effects in Interactions with Failing Robots. In: CHI '20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. Paper presented at the SIGCHI Conference on Human Factors in Computing Systems, CHI '20, April 25–30, 2020, Honolulu, HI, USA. ACM Digital Library
Embodiment Effects in Interactions with Failing Robots
2020 (English). In: CHI '20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, ACM Digital Library, 2020. Conference paper, Published paper (Refereed).
Abstract [en]

The increasing use of robots in real-world applications will inevitably cause users to encounter more failures in interactions. While there is a longstanding effort to bring human-likeness to robots, how robot embodiment affects users' perception of failures remains largely unexplored. In this paper, we extend prior work on robot failures by assessing the impact that embodiment and failure severity have on people's behaviours and their perception of robots. Our findings show that when using a smart-speaker embodiment, failures negatively affect users' intention to frequently interact with the device, but not when using a human-like robot embodiment. Additionally, users rate the human-like robot significantly higher in terms of perceived intelligence and social presence. Our results further suggest that in higher-severity situations, human-likeness is distracting and detrimental to the interaction. Drawing on the quantitative findings, we discuss the benefits and drawbacks of embodiment in robot failures that occur in guided tasks.

Place, publisher, year, edition, pages
ACM Digital Library, 2020
National Category
Other Engineering and Technologies
Identifiers
urn:nbn:se:kth:diva-267232 (URN)
10.1145/3313831.3376372 (DOI)
000695438100045 (ISI)
2-s2.0-85081988472 (Scopus ID)
Conference
SIGCHI Conference on Human Factors in Computing Systems, CHI ’20, April 25–30, 2020, Honolulu, HI, USA
Note

QC 20211011

Available from: 2020-02-04. Created: 2020-02-04. Last updated: 2025-02-18. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-3729-157X