Towards Automatically Correcting Robot Behavior Using Non-Expert Feedback
van Waveren, Sanne
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning (RPL). ORCID iD: 0000-0003-3729-157X
2022 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Robots that operate in human environments need the capability to adapt their behavior to new situations. Most robots so far rely on pre-programmed behavior or machine learning algorithms trained offline with selected data. Due to the large number of possible situations robots might encounter, it is impractical to define or learn all behaviors before deployment, so robots inevitably fail at some point. As a result of this inability to adapt to new situations, the robot might fail to complete its task, or it might achieve its goal in a way that defies people's expectations or preferences. Ideally, robots should be able to autonomously collect additional behaviors and constraints that enable them to correct their behavior.

The topic of this dissertation is robot behavior correction using feedback from non-experts: people who are not necessarily programmers or roboticists. We explore how non-experts can help robots recover when their plan or policy fails. Furthermore, when working with and around humans, robots need to adapt to user preferences. For instance, users might prefer their autonomous vehicle to adopt a defensive driving style over an aggressive one, or someone might prefer their coffee mug to be placed on the coffee table left of their chair. In many everyday situations, robots will also need additional rules that can be stated without technical knowledge, for instance, that the robot should not place the coffee mug too close to the edge of the table, or that it must open the cabinet door before it can place something inside.

We propose an approach that leverages knowledge from non-experts to provide input for correcting robot behaviors. We identify two main types of input: what the robot should do (task goals and constraints) and how the robot should achieve its task (preferences and decision-making). This dissertation explores this approach by drawing on human-robot interaction research on robot failures, on crowdsourcing and machine learning for large-scale data collection and generation, and on techniques from formal methods to ensure the safety and correctness of the robot. The work described in this dissertation is a step towards better understanding how we can design robots that automatically correct their behavior using non-expert feedback, and what the challenges of non-expert robot behavior correction are.
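To make the "what" versus "how" distinction concrete, the following is a minimal sketch, not taken from the thesis, of how a non-expert rule such as "do not place the mug too close to the table edge" could act as a hard constraint while a preference such as "place it to the left" ranks the remaining candidates. All names, dimensions, and thresholds are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Placement:
    x: float  # position across the table surface, in metres (0 = left edge)
    y: float  # position into the table surface, in metres (0 = front edge)

TABLE_WIDTH = 1.0   # assumed table dimensions, in metres
TABLE_DEPTH = 0.6
EDGE_MARGIN = 0.05  # assumed non-expert rule: stay at least 5 cm from any edge

def far_from_edge(p: Placement) -> bool:
    """'What' input: a hard constraint contributed by a non-expert."""
    return (EDGE_MARGIN <= p.x <= TABLE_WIDTH - EDGE_MARGIN
            and EDGE_MARGIN <= p.y <= TABLE_DEPTH - EDGE_MARGIN)

def prefer_left(p: Placement) -> float:
    """'How' input: a soft preference; smaller x means further left, so lower cost."""
    return p.x

candidates = [Placement(0.02, 0.30), Placement(0.20, 0.30), Placement(0.80, 0.30)]
feasible = [p for p in candidates if far_from_edge(p)]   # enforce constraints first
best = min(feasible, key=prefer_left)                    # then rank by preference
print(best)                                              # Placement(x=0.2, y=0.3)
```

Filtering by constraints before ranking by preference mirrors the separation between the two types of non-expert input described above.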

Abstract [sv] (English translation)

Robots that act in environments with humans must have the ability to adapt to new situations. Most robots so far have relied on pre-programmed behavior or behavior from machine learning trained offline. Because of the large number of possible situations a robot may find itself in, it is impractical to define or learn all behaviors before deployment, which means the robot will inevitably fail at some point. The result of the robot's inability to handle new situations is that it may fail at its task, or fail to fulfill its goal in the way that is expected or preferred. Ideally, the robot has the ability to autonomously collect additional behaviors and constraints that enable correct behavior. The topic of this dissertation is correction of robot behavior using feedback from people who are not trained in programming or robotics, i.e., non-experts. We explore how non-experts can help robots recover when their plan or policy fails. Furthermore, robots that work with and around humans must be able to take users' preferences into account. For example, users may prefer defensive driving styles over aggressive driving styles in autonomous cars, or a user may prefer that their coffee mug is placed on the coffee table to the left of their chair. In many everyday situations, robots will need additional rules that do not require technical knowledge, for example a rule stating that a coffee mug must not be placed too close to the edge of a table, or that the robot must open the door of a cabinet before something can be placed in it. We propose an approach that leverages knowledge from non-experts to provide a robot with input for correct behaviors. We identify two main types of input: what a robot should do (tasks and constraints), and how a robot should fulfill its task (preferences and decision-making). This dissertation explores this approach by drawing on research in human-robot interaction on failures, on crowdsourcing and machine learning for large-scale data collection and generation, and on techniques from formal methods to guarantee safety and correctness. The work described in this dissertation is a step towards a better understanding of how we can design robots that automatically correct their behavior using feedback from non-experts, and of the challenges of non-expert robot behavior correction.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2022, p. vii, 40
Series
TRITA-EECS-AVL ; 2022:73
Keywords [en]
Non-expert robot correction, robot failure, human-robot interaction, robotics
National Category
Robotics
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-321237
ISBN: 978-91-8040-412-9 (print)
OAI: oai:DiVA.org:kth-321237
DiVA, id: diva2:1709776
Public defence
2022-12-05, Zoom: https://kth-se.zoom.us/j/61095601099, Kollegiesalen, Brinellvägen 6, Stockholm, 14:00 (English)
Note

QC 20221109

Available from: 2022-11-09. Created: 2022-11-09. Last updated: 2022-11-16. Bibliographically approved.
List of papers
1. Take one for the team: The effects of error severity in collaborative tasks with social robots
2019 (English) In: IVA 2019 - Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents, Association for Computing Machinery (ACM), 2019, p. 151-158. Conference paper, Published paper (Refereed)
Abstract [en]

We explore the effects of robot failure severity (no failure vs. low-impact vs. high-impact) on people's subjective ratings of the robot. We designed an escape room scenario in which one participant teams up with a remotely controlled Pepper robot. We manipulated the robot's performance at the end of the game: the robot would either correctly follow the participant's instructions (control condition), fail in a way that still allowed people to complete the task of escaping the room (low-impact condition), or fail in a way that caused the game to be lost (high-impact condition). Results showed no difference across conditions in people's ratings of the robot in terms of warmth, competence, and discomfort. However, people in the low-impact condition had significantly less faith in the robot's robustness in future escape room scenarios. Open-ended questions revealed trends worth pursuing in the future: people may view task performance as a team effort and may blame their team or themselves more for the robot's failure in the case of a high-impact failure than a low-impact failure.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2019
Keywords
Failure, Human-robot interaction, Socially collaborative robots
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-262609 (URN), 10.1145/3308532.3329475 (DOI), 000556671900033, 2-s2.0-85069747331 (Scopus ID)
Conference
19th ACM International Conference on Intelligent Virtual Agents, IVA 2019; Paris; France; 2 July 2019 through 5 July 2019
Note

QC 20191022

Part of proceedings: ISBN 9781450366724

Available from: 2019-10-22. Created: 2019-10-22. Last updated: 2024-10-18. Bibliographically approved.
2. Exploring Non-Expert Robot Programming Through Crowdsourcing
2021 (English) In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 8, article id 646002. Article in journal (Refereed), Published
Abstract [en]

A longstanding barrier to deploying robots in the real world is the ongoing need to author robot behavior. Remote data collection, particularly crowdsourcing, is receiving increasing interest. In this paper, we argue for scaling robot programming to the crowd and present an initial investigation of the feasibility of this approach. Using an off-the-shelf visual programming interface, non-experts created simple robot programs for two typical robot tasks (navigation and pick-and-place). Each task consisted of four subtasks that required an increasing number of programming constructs (if statements, while loops, variables) to complete successfully. Initial findings of an online study (N = 279) indicate that non-experts, after minimal instruction, were able to create simple programs using the off-the-shelf visual programming interface. We discuss our findings and identify future avenues for this line of research.
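As an illustration of the scale of program involved, here is a hypothetical Python rendering of the kind of pick-and-place program participants could assemble using an if statement, a while loop, and variables. The SimulatedRobot class and its methods are invented for this sketch and are not the interface used in the study.

```python
class SimulatedRobot:
    """Toy stand-in for the robot; its methods are invented for this sketch."""

    def __init__(self):
        self.location = "start"
        self.holding = None

    def move_to(self, target):
        self.location = target

    def pick(self, item):
        self.holding = item

    def place(self):
        self.holding = None

robot = SimulatedRobot()
items_to_move = ["red cup", "blue cup"]   # variable holding the task's objects
moved = 0                                  # variable counting completed placements

while moved < len(items_to_move):          # while loop over the remaining items
    item = items_to_move[moved]
    robot.move_to("shelf")
    robot.pick(item)
    if robot.holding == item:              # if statement guarding the placement
        robot.move_to("table")
        robot.place()
        moved += 1

print("placed", moved, "items")
```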

Place, publisher, year, edition, pages
Frontiers Media SA, 2021
Keywords
human-robot interaction, non-expert robot programming, crowdsourcing, block-based programming, robots
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-300864 (URN), 10.3389/frobt.2021.646002 (DOI), 000685497100001, 34395535 (PubMedID), 2-s2.0-85112482189 (Scopus ID)
Note

QC 20210923

Available from: 2021-09-23. Created: 2021-09-23. Last updated: 2022-11-09. Bibliographically approved.
3. Correct Me If I'm Wrong: Using Non-Experts to Repair Reinforcement Learning Policies
2022 (English) In: Proceedings of the 17th ACM/IEEE International Conference on Human-Robot Interaction, Institute of Electrical and Electronics Engineers (IEEE), 2022, p. 493-501. Conference paper, Published paper (Refereed)
Abstract [en]

Reinforcement learning has shown great potential for learning sequential decision-making tasks. Yet, it is difficult to anticipate all possible real-world scenarios during training, causing robots to inevitably fail in the long run. Many of these failures are due to variations in the robot's environment. Usually, experts are called in to correct the robot's behavior; however, some of these failures do not necessarily require an expert to solve. In this work, we query non-experts online for help and explore 1) if and how non-experts can provide feedback to the robot after a failure and 2) how the robot can use this feedback to avoid such failures in the future by generating shields that restrict or correct its high-level actions. We demonstrate our approach on common daily scenarios of a simulated kitchen robot. The results indicate that non-experts can indeed understand and repair robot failures. Our generated shields accelerate learning and improve data efficiency during retraining.
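A minimal sketch, under assumed interfaces rather than the paper's implementation, of how a shield distilled from non-expert feedback might restrict high-level actions: state-action pairs that non-experts reported as failures are masked out before the agent commits to an action. The state names, action set, and stand-in Q-function below are illustrative only.

```python
import random

# Feedback collected after failures: high-level (state, action) pairs that
# non-experts marked as the cause of a failure. All names are invented here.
forbidden = {
    ("cup_on_counter", "place_on_edge"),
    ("cabinet_closed", "place_inside_cabinet"),
}

ACTIONS = ["place_on_edge", "place_on_center", "open_cabinet", "place_inside_cabinet"]

def q_value(state, action):
    """Stand-in for a learned action-value function."""
    return random.random()

def shielded_action(state):
    """Return the highest-valued action that the shield does not forbid."""
    ranked = sorted(ACTIONS, key=lambda a: q_value(state, a), reverse=True)
    for action in ranked:
        if (state, action) not in forbidden:
            return action
    raise RuntimeError(f"no permitted action in state {state!r}")

print(shielded_action("cup_on_counter"))   # never returns 'place_on_edge'
```

Because forbidden actions are simply never executed, the agent avoids repeating the reported failures while retraining on the remaining actions.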

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
Series
ACM/IEEE International Conference on Human-Robot Interaction, ISSN 2167-2121
Keywords
robot failure, policy repair, non-experts, shielded reinforcement learning
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-308441 (URN), 10.1109/HRI53351.2022.9889604 (DOI), 000869793600054, 2-s2.0-85140707989 (Scopus ID)
Conference
17th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2022), March 7-10, 2022
Note

Part of proceedings: ISBN 978-1-6654-0731-1

QC 20220215

Available from: 2022-02-07. Created: 2022-02-07. Last updated: 2022-12-16. Bibliographically approved.
4. Encoding Human Driving Styles in Motion Planning for Autonomous Vehicles
2021 (English) In: 2021 IEEE International Conference on Robotics and Automation (ICRA), Institute of Electrical and Electronics Engineers (IEEE), 2021, p. 11262-11268. Conference paper, Published paper (Refereed)
Abstract [en]

Driving styles play a major role in the acceptance and use of autonomous vehicles. Yet, existing motion planning techniques can often only incorporate simple driving styles that are modeled by the developers of the planner and not tailored to the passenger. We present a new approach to encoding human driving styles through the use of signal temporal logic and its robustness metrics. Specifically, we introduce a penalty structure that can be incorporated into many motion planning frameworks, and we calibrate its parameters to model different automated driving styles. We combine this penalty structure with a set of signal temporal logic formulas, based on the Responsibility-Sensitive Safety model, to generate trajectories that we expected to correlate with three different driving styles: aggressive, neutral, and defensive. An online study showed that people perceived different parameterizations of the motion planner as distinct driving styles, and that most people tend to prefer a more defensive automated driving style, which correlated with their self-reported driving style.
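For intuition on how signal temporal logic robustness can feed a style-dependent penalty, here is a small sketch: the robustness of an "always keep headway above a minimum" requirement over a finite trajectory is its worst-case margin, and a style-specific weight turns that margin into a planning cost term. The headway values, threshold, and weights are illustrative assumptions, not the paper's calibrated parameters.

```python
def always_robustness(signal, threshold):
    """Robustness of G(signal >= threshold) over a finite trace:
    the worst-case margin by which the requirement is met (negative if violated)."""
    return min(value - threshold for value in signal)

def style_cost(robustness, weight):
    """Style-dependent penalty term: a larger weight punishes low robustness more,
    pushing the planner towards more defensive trajectories."""
    return -weight * robustness

headway = [12.0, 9.5, 8.2, 7.8]   # metres to the lead vehicle along a candidate trajectory
rho = always_robustness(headway, threshold=5.0)   # worst-case headway margin = 2.8 m

for style, weight in [("aggressive", 0.1), ("neutral", 0.5), ("defensive", 2.0)]:
    print(f"{style:>10}: cost contribution {style_cost(rho, weight):+.2f}")
```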

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2021
Series
Proceedings - IEEE International Conference on Robotics and Automation, ISSN 1050-4729
Keywords
Autonomous vehicle navigation, Formal methods in robotics and automation, Human factors, Human-in-the-loop
National Category
Robotics; Control Engineering; Computer Sciences
Identifiers
urn:nbn:se:kth:diva-310389 (URN), 10.1109/ICRA48506.2021.9561777 (DOI), 000765738801034, 2-s2.0-85109997697 (Scopus ID)
Conference
2021 IEEE International Conference on Robotics and Automation (ICRA 2021), 30 May - 5 June 2021, Xi'an, China
Note

QC 20220502

Part of proceedings: ISBN 978-1-7281-9077-8

Available from: 2022-04-04. Created: 2022-04-04. Last updated: 2022-11-09. Bibliographically approved.
5. Large-Scale Scenario Generation for Robotic Manipulation via Conditioned Generative Models
(English) Manuscript (preprint) (Other academic)
Abstract [en]

Data-driven robotic manipulation has been gaining traction. However, creating synthetic large-scale datasets for training, validation and benchmarks often relies on random sampling or perturbations, and the resulting scenarios do not necessarily reflect the desired task goals or spatial constraints on the manipulated objects, i.e., they are not spatially structured. We leverage spatial logics and generative models to automatically create spatially-structured manipulation scenarios from high-level specifications. We condition the models on such specifications to impose diverse spatial object relations on the data, e.g., the mug should be left of the plate. This approach enables users to define custom specifications and generate millions of scenarios within minutes, which specifically satisfy or violate the specifications to a desired extent.
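To illustrate what a spatial specification such as "the mug should be left of the plate" can look like as a check over object poses, the sketch below uses naive rejection sampling over random scenes; the paper instead conditions generative models on such specifications, which is what makes million-scale generation fast. The scene representation, names, and margin are assumptions made for this example.

```python
import random

def sample_scene():
    """Sample x-positions (in metres) for two objects on a 1 m wide table."""
    return {"mug": random.uniform(0.0, 1.0), "plate": random.uniform(0.0, 1.0)}

def left_of(scene, a, b, margin=0.05):
    """Specification: object a lies at least `margin` metres to the left of object b."""
    return scene[a] <= scene[b] - margin

scenes = [sample_scene() for _ in range(10_000)]
satisfying = [s for s in scenes if left_of(s, "mug", "plate")]
print(f"{len(satisfying)} of {len(scenes)} random scenes satisfy 'mug left of plate'")
```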

National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-321075 (URN)
Note

QC 20221109

Available from: 2022-11-04. Created: 2022-11-04. Last updated: 2022-11-09. Bibliographically approved.
6. Increasing Perceived Safety in Motion Planning for Human-Drone Interaction
(English) Manuscript (preprint) (Other academic)
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-321074 (URN)
Note

QC 20221109

Available from: 2022-11-04. Created: 2022-11-04. Last updated: 2022-11-09. Bibliographically approved.

Open Access in DiVA

Kappa thesis Sanne van Waveren (5914 kB), 312 downloads
File information
File name: SUMMARY01.pdf. File size: 5914 kB. Checksum: SHA-512
6d09501e30a51d30cf36ad49448c53988821738fd8c80fce0444f06fa4b9406fe0211a1781acd1690f1304b6a4cab74cd6bdc0de17d8dd7c45a4f3a5a29d7539
Type: summary. Mimetype: application/pdf

Authority records

van Waveren, Sanne

The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.
