Combining Context Awareness and Planning to Learn Behavior Trees from Demonstration
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL; ABB Corporate Research, Västerås, Sweden. ORCID iD: 0000-0002-6119-6399
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL; ABB Robotics, Västerås, Sweden. ORCID iD: 0000-0003-0312-8811
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0003-2078-8854
2022 (English). In: 2022 31st IEEE International Conference on Robot and Human Interactive Communication (IEEE RO-MAN 2022), Institute of Electrical and Electronics Engineers Inc., 2022, p. 1153-1160. Conference paper, Published paper (Refereed).
Abstract [en]

Fast-changing tasks in unpredictable, collaborative environments are typical for small and medium-sized companies, where robotised applications are increasing. Robot programs should therefore be generated quickly and with little effort, and the robot should be able to react dynamically to its environment. To address this, we propose a method that combines context awareness and planning to learn Behavior Trees (BTs), a reactive policy representation that is becoming increasingly popular in robotics and has been used successfully in many collaborative scenarios. Context awareness allows the frames in which actions are executed to be inferred from the demonstration and captures relevant aspects of the task, while a planner automatically generates the BT from the demonstrated sequence of actions. The learned BT is shown to solve non-trivial manipulation tasks in which learning the context is fundamental to achieving the goal. Moreover, we collected non-expert demonstrations to study the performance of the algorithm in industrial scenarios.
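As context for the abstract above: a minimal, illustrative sketch of BT tick semantics, and of the naive idea of guarding each demonstrated action with its postcondition, is given below. This is not the authors' implementation; the names Status, Action, Sequence, Fallback, and bt_from_demo are hypothetical.

```python
# Minimal Behavior Tree sketch (illustrative only, not the authors' code).
from enum import Enum

class Status(Enum):
    SUCCESS = 0
    FAILURE = 1
    RUNNING = 2

class Action:
    """Leaf node: an executable action guarded by its postcondition."""
    def __init__(self, name, done, execute):
        self.name = name
        self.done = done        # callable(world) -> bool: does the postcondition already hold?
        self.execute = execute  # callable(world) -> Status: run one control step

    def tick(self, world):
        # Skipping the action once its effect holds is what makes the tree reactive:
        # re-ticking from the root never repeats work that is already done.
        return Status.SUCCESS if self.done(world) else self.execute(world)

class Sequence:
    """Ticks children left to right; returns FAILURE/RUNNING as soon as a child does."""
    def __init__(self, children):
        self.children = children

    def tick(self, world):
        for child in self.children:
            status = child.tick(world)
            if status is not Status.SUCCESS:
                return status
        return Status.SUCCESS

class Fallback:
    """Ticks children left to right; returns SUCCESS/RUNNING as soon as a child does."""
    def __init__(self, children):
        self.children = children

    def tick(self, world):
        for child in self.children:
            status = child.tick(world)
            if status is not Status.FAILURE:
                return status
        return Status.FAILURE

def bt_from_demo(actions):
    """Naive conversion of a demonstrated action sequence into a BT:
    a Sequence of postcondition-guarded actions, ticked repeatedly from the root."""
    return Sequence(list(actions))
```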

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2022, p. 1153-1160
Keywords [en]
Behavior Trees, Learning from Demonstration, Manipulation, Collaborative Robotics
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:kth:diva-322437
DOI: 10.1109/RO-MAN53752.2022.9900603
ISI: 000885903300165
Scopus ID: 2-s2.0-85138283933
OAI: oai:DiVA.org:kth-322437
DiVA id: diva2:1719356
Conference
31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) - Social, Asocial, and Antisocial Robots, August 29 - September 2, 2022, Naples, Italy
Note

Part of proceedings: ISBN 978-1-7281-8859-1

QC 20221215

Available from: 2022-12-15. Created: 2022-12-15. Last updated: 2023-05-22. Bibliographically approved.
In thesis
1. Learning Behavior Trees for Collaborative Robotics
2023 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

This thesis addresses the challenge of generating task plans for robots in industry-relevant scenarios. With the increase in small-batch production, companies require robots to be reprogrammed frequently for new tasks, yet maintaining a team of operators with specific programming skills is only cost-efficient for large-scale production. The growth of automation increasingly targets companies where humans share their working environment with robots, expanding the scope of manufacturing applications. To achieve this, robots need to be controlled by task plans, which sequence and optimize the execution of actions. This thesis focuses on generating task plans that are reactive, transparent and explainable, modular, and automatically synthesized. Such task plans improve the robot's autonomy, fault tolerance, and robustness. They also facilitate collaboration with humans by enabling intuitive representations of the plan and allowing humans to issue instructions at run-time to modify the robot's behavior. Lastly, automatic generation reduces the programming skills required of the operator and optimizes the task plan. The thesis discusses the use of Behavior Trees (BTs) as policy representations for robotic task plans. It compares the modularity of BTs and Finite State Machines (FSMs) and concludes that BTs are more effective for industrial scenarios. It also explores the automatic and intuitive generation of BTs using Genetic Programming and Learning from Demonstration methods, respectively. The proposed methods aim to evolve BTs time-efficiently for mobile manipulation tasks and to allow non-expert users to intuitively teach robots manipulation tasks. The thesis highlights the importance of user experience in task solving and how it can benefit evolutionary algorithms. Finally, it proposes using BTs previously learned from demonstration to intervene in the unsupervised learning process.
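The abstract above mentions Genetic Programming as one route to synthesizing BTs automatically. As a hedged sketch only, and not the thesis method, a generic evolutionary loop over candidate trees could look like the following; evolve is a hypothetical name, and fitness, crossover, and mutate are placeholder callables supplied by the user (population size of at least two is assumed).

```python
# Generic Genetic Programming loop over candidate Behavior Trees
# (illustrative sketch; not the thesis implementation).
import random

def evolve(population, fitness, crossover, mutate,
           generations=100, elites=2, seed=0):
    """Evolve candidate trees by elitism, truncation selection, crossover, and mutation.

    population: list of candidate trees (any representation)
    fitness:    callable(tree) -> float, higher is better
    crossover:  callable(tree, tree) -> tree
    mutate:     callable(tree) -> tree
    """
    rng = random.Random(seed)
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        next_gen = ranked[:elites]                    # keep the best trees unchanged
        parents = ranked[: max(2, len(ranked) // 2)]  # truncation selection: top half
        while len(next_gen) < len(population):
            mother, father = rng.sample(parents, 2)
            next_gen.append(mutate(crossover(mother, father)))
        population = next_gen
    return max(population, key=fitness)
```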

Abstract [sv]

This thesis aims to address the challenge of generating task plans for robots in industrial scenarios. With the increase in small-scale production, companies require robots to be reprogrammed frequently for new tasks. However, maintaining a group of operators with specific programming skills is only cost-effective for large-scale production. The increase in automation targets companies where humans share their working environment with robots, expanding the scope of manufacturing applications. To achieve this, robots must be controlled by task plans that sequence and optimize the execution of actions. This thesis focuses on generating task plans that are reactive, transparent and explainable, modular, and automatically synthesized. These task plans improve the robot's autonomy, fault tolerance, and robustness. In addition, such task plans facilitate collaboration with humans by enabling intuitive representations of the plan and the possibility for humans to give instructions at run-time to modify the robot's behavior. Finally, autonomous generation reduces the programming skills required for the operator to program a robot and optimizes the task plan. This thesis discusses the use of Behavior Trees (BTs) as policy representations for robotic task plans. It compares the modularity of BTs and Finite State Machines (FSMs) and concludes that BTs are more effective for industrial scenarios. The thesis also explores the automatic and intuitive generation of BTs using Genetic Programming and Learning from Demonstration methods, respectively. The proposed methods aim to time-efficiently evolve BTs for mobile manipulation tasks and to allow non-experts to intuitively teach robots manipulation tasks. The thesis highlights the importance of user experience in task solving and how it can benefit evolutionary algorithms. Finally, it proposes using BTs previously learned from demonstration to intervene in the unsupervised learning process.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2023. p. xi, 127
Series
TRITA-EECS-AVL ; 2023:46
Keywords
Collaborative Robotics, Behavior Trees
National Category
Robotics
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-327210
ISBN: 978-91-8040-594-2
Public defence
2023-06-12, https://kth-se.zoom.us/j/64592198901, Kollegiesalen, Brinellvägen 8, Stockholm, 10:00 (English)
Opponent
Supervisors
Note

QC 20230523

Available from: 2023-05-23. Created: 2023-05-22. Last updated: 2023-06-27. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | Scopus

Authority records

Gustavsson, Oscar; Iovino, Matteo; Styrud, Jonathan; Smith, Christian
