Publications (10 of 291)
Garcia-Camacho, I., Lippi, M., Welle, M. C., Yin, H., Antonova, R., Varava, A., . . . Kragic, D. (2020). Benchmarking Bimanual Cloth Manipulation. IEEE Robotics and Automation Letters, 5(2), 1111-1118
Benchmarking Bimanual Cloth Manipulation
2020 (English). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 5, no. 2, p. 1111-1118. Article in journal (Refereed). Published.
Abstract [en]

Cloth manipulation is a challenging task that, despite its importance, has received relatively little attention compared to rigid object manipulation. In this letter, we provide three benchmarks for evaluation and comparison of different approaches towards three basic tasks in cloth manipulation: spreading a tablecloth over a table, folding a towel, and dressing. The tasks can be executed on any bimanual robotic platform and the objects involved in the tasks are standardized and easy to acquire. We provide several complexity levels for each task, and describe the quality measures to evaluate task execution. Furthermore, we provide baseline solutions for all the tasks and evaluate them according to the proposed metrics.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2020
Keywords
Cooperating robots, performance evaluation and benchmarking
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-269022 (URN), 10.1109/LRA.2020.2965891 (DOI), 000511836600009 (ISI), 2-s2.0-85079233626 (Scopus ID)
Note

QC 20200313

Available from: 2020-03-13. Created: 2020-03-13. Last updated: 2020-03-13. Bibliographically approved.
Cruciani, S., Sundaralingam, B., Hang, K., Kumar, V., Hermans, T. & Kragic, D. (2020). Benchmarking In-Hand Manipulation. IEEE Robotics and Automation Letters, 5(2), 588-595
Benchmarking In-Hand Manipulation
2020 (English). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 5, no. 2, p. 588-595. Article in journal (Refereed). Published.
Abstract [en]

The purpose of this benchmark is to evaluate the planning and control aspects of robotic in-hand manipulation systems. The goal is to assess the system's ability to change the pose of a hand-held object by using the fingers, the environment, or a combination of both. Given an object surface mesh from the YCB dataset, we provide examples of initial and goal states (i.e., static object poses and fingertip locations) for various in-hand manipulation tasks. We further propose metrics that measure the error in reaching the goal state from a specific initial state, which, when aggregated across all tasks, also serves as a measure of the system's in-hand manipulation capability. We provide supporting software, task examples, and evaluation results associated with the benchmark.
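
The benchmark's exact error metric is not reproduced here, but a natural instantiation combines the translational and rotational distance between the reached and goal object poses. A minimal sketch under that assumption, with poses given as a position plus a unit quaternion (names and conventions are illustrative):

```python
import numpy as np

def pose_error(pos_goal, quat_goal, pos_reached, quat_reached):
    """Translational (m) and rotational (rad) error between two object poses.

    Quaternions are unit quaternions in (x, y, z, w) order. This is an
    illustrative metric, not necessarily the one defined by the benchmark.
    """
    t_err = np.linalg.norm(np.asarray(pos_goal) - np.asarray(pos_reached))
    # Relative rotation angle: 2*acos(|<q1, q2>|) is robust to the double cover.
    dot = abs(np.clip(np.dot(quat_goal, quat_reached), -1.0, 1.0))
    r_err = 2.0 * np.arccos(dot)
    return t_err, r_err

# Example: the reached pose is 5 mm and 10 degrees away from the goal.
angle = np.deg2rad(5)  # half-angle of a 10-degree rotation about z
t, r = pose_error([0, 0, 0], [0, 0, 0, 1],
                  [0.005, 0, 0], [0, 0, np.sin(angle), np.cos(angle)])
```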

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2020
Keywords
Performance evaluation and benchmarking, dexterous manipulation
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-267732 (URN), 10.1109/LRA.2020.2964160 (DOI), 000509509300002 (ISI), 2-s2.0-85078545547 (Scopus ID)
Note

QC 20200217

Available from: 2020-02-17. Created: 2020-02-17. Last updated: 2020-02-17. Bibliographically approved.
Mitsioni, I., Karayiannidis, Y., Stork, J. A. & Kragic, D. (2019). Data-Driven Model Predictive Control for the Contact-Rich Task of Food Cutting. Paper presented at the 2019 IEEE-RAS International Conference on Humanoid Robots, Toronto, Canada, October 15-17, 2019.
Data-Driven Model Predictive Control for the Contact-Rich Task of Food Cutting
2019 (English). Conference paper, Published paper (Refereed).
Abstract [en]

Modelling contact-rich tasks is challenging and cannot be entirely solved using classical control approaches, due to the difficulty of constructing an analytic description of the contact dynamics. Additionally, in a manipulation task like food cutting, purely learning-based methods such as Reinforcement Learning require either a vast amount of data that is expensive to collect on a real robot, or a highly realistic simulation environment, which is currently not available. This paper presents a data-driven control approach that employs a recurrent neural network to model the dynamics for a Model Predictive Controller. We build upon earlier work limited to torque-controlled robots and redefine it for velocity-controlled ones. We incorporate force/torque sensor measurements, and reformulate and further extend the control problem formulation. We evaluate the performance on objects used for training, as well as on unknown objects, by means of the cutting rates achieved, and demonstrate that the method can efficiently treat different cases with only one dynamic model. Finally, we investigate the behavior of the system during force-critical instances of cutting and illustrate its adaptive behavior in difficult cases.
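
The pairing described above (a learned dynamics model queried inside a Model Predictive Controller) can be illustrated with a random-shooting sketch. The `model.predict` and `cost` interfaces below are assumptions for illustration, not the paper's API:

```python
import numpy as np

def mpc_step(model, cost, state, horizon=10, n_samples=256, action_dim=3):
    """Random-shooting MPC with a learned forward model.

    Samples candidate action sequences, rolls each one out through the
    learned dynamics, and returns the first action of the cheapest rollout.
    """
    candidates = np.random.uniform(-1.0, 1.0, (n_samples, horizon, action_dim))
    best_cost, best_action = np.inf, None
    for seq in candidates:
        s, total = state, 0.0
        for a in seq:
            s = model.predict(s, a)  # learned (e.g. recurrent) dynamics
            total += cost(s, a)
        if total < best_cost:
            best_cost, best_action = total, seq[0]
    return best_action  # apply this action, observe, then re-plan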

National Category
Engineering and Technology; Robotics
Identifiers
urn:nbn:se:kth:diva-258796 (URN)
Conference
The 2019 IEEE-RAS International Conference on Humanoid Robots, Toronto, Canada, October 15-17, 2019.
Note

QC 20191021

Available from: 2019-09-16. Created: 2019-09-16. Last updated: 2020-01-31. Bibliographically approved.
Cruciani, S., Hang, K., Smith, C. & Kragic, D. (2019). Dual-Arm In-Hand Manipulation Using Visual Feedback. Paper presented at the IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids), October 15-17, 2019, Toronto, Canada (pp. 411-418).
Dual-Arm In-Hand Manipulation Using Visual Feedback
2019 (English). Conference paper, Published paper (Refereed).
Abstract [en]

In this work, we address the problem of executing in-hand manipulation based on visual input. Given an initial grasp, the robot has to change its grasp configuration without releasing the object. We propose a method for in-hand manipulation planning and execution based on information about the object's shape, using a dual-arm robot. From the available information on the object, which can be a complete point cloud or only partial data, our method plans a sequence of rotations and translations to reconfigure the object's pose. This sequence is executed using non-prehensile pushes, defined as relative motions between the two robot arms.
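
As a loose illustration of the last step: a planned in-hand rotation or translation, expressed in the holding hand's frame, can be turned into a target for the pushing arm by composing it with the holding arm's pose. The sketch below uses planar (SE(2)) transforms and illustrative names; it is not the paper's implementation:

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous planar rigid transform: rotation theta, translation (x, y)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

def pusher_targets(T_world_holder, object_motions):
    """World-frame poses for the pushing arm such that the relative motion
    between the two arms realises each planned in-hand object motion
    (each motion given in the holding hand's frame)."""
    return [T_world_holder @ m for m in object_motions]

# Example plan: rotate the object 15 degrees in-hand, then slide it 1 cm.
plan = [se2(0.0, 0.0, np.deg2rad(15.0)), se2(0.01, 0.0, 0.0)]
targets = pusher_targets(se2(0.4, 0.0, 0.0), plan)
```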

National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-262881 (URN)
Conference
IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids), October 15-17, 2019, Toronto, Canada
Note

QC 20191129

Available from: 2019-10-22. Created: 2019-10-22. Last updated: 2019-11-29. Bibliographically approved.
Yuan, W., Hang, K., Kragic, D., Wang, M. Y. & Stork, J. A. (2019). End-to-end nonprehensile rearrangement with deep reinforcement learning and simulation-to-reality transfer. Robotics and Autonomous Systems, 119, 119-134
End-to-end nonprehensile rearrangement with deep reinforcement learning and simulation-to-reality transfer
2019 (English). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 119, p. 119-134. Article in journal (Refereed). Published.
Abstract [en]

Nonprehensile rearrangement is the problem of controlling a robot to interact with objects through pushing actions in order to reconfigure the objects into a predefined goal pose. In this work, we rearrange one object at a time in an environment with obstacles, using an end-to-end policy that maps raw pixels as visual input to control actions without any form of engineered feature extraction. To reduce the amount of training data that needs to be collected using a real robot, we propose a simulation-to-reality transfer approach. In the first step, we model the nonprehensile rearrangement task in simulation and use deep reinforcement learning to learn a suitable rearrangement policy, which requires on the order of hundreds of thousands of example actions for training. Thereafter, we collect a small dataset of only 70 episodes of real-world actions as supervised examples for adapting the learned rearrangement policy to real-world input data. In this process, we make use of newly proposed strategies for improving the reinforcement learning process, such as heuristic exploration and the curation of a balanced set of experiences. We evaluate our method in both simulation and a real setting, using a Baxter robot, to show that the proposed approach can effectively improve the training process in simulation as well as efficiently adapt the learned policy to the real-world application, even when the camera pose differs from simulation. Additionally, we show that the learned system not only provides adaptive behavior to handle unforeseen events during execution, such as distracting objects, sudden changes in the positions of the objects, and obstacles, but can also deal with obstacle shapes that were not present in the training process.
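
The two-stage recipe (reinforcement learning in simulation, then supervised adaptation on a handful of real episodes) can be sketched as follows. The network architecture, loss, and data source below are illustrative placeholders, not the paper's actual setup:

```python
import torch
import torch.nn as nn

# Illustrative pixel-to-action policy; the paper's architecture differs.
policy = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(8),  # 8 hypothetical discrete push actions
)

# Stage 1 (not shown): train `policy` with deep reinforcement learning in
# simulation, which takes on the order of hundreds of thousands of actions.

# Stage 2: adapt the simulation-trained policy to real camera images with
# a small supervised dataset; the paper collects only ~70 real episodes.
real_episodes = []  # placeholder for (image_batch, expert_action_batch) pairs
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
for images, expert_actions in real_episodes:
    loss = nn.functional.cross_entropy(policy(images), expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```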

Place, publisher, year, edition, pages
Elsevier, 2019
Keywords
Nonprehensile rearrangement, Deep reinforcement learning, Transfer learning
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-259430 (URN), 10.1016/j.robot.2019.06.007 (DOI), 000482250400009 (ISI), 2-s2.0-85068467713 (Scopus ID)
Note

QC 20190924

Available from: 2019-09-24. Created: 2019-09-24. Last updated: 2019-09-24. Bibliographically approved.
Sibirtseva, E., Ghadirzadeh, A., Leite, I., Björkman, M. & Kragic, D. (2019). Exploring Temporal Dependencies in Multimodal Referring Expressions with Mixed Reality. In: Virtual, Augmented and Mixed Reality. Multimodal Interaction. 11th International Conference, VAMR 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, July 26–31, 2019, Proceedings. Paper presented at the 11th International Conference on Virtual, Augmented and Mixed Reality, VAMR 2019, held as part of the 21st International Conference on Human-Computer Interaction, HCI International 2019; Orlando; United States; 26 July 2019 through 31 July 2019 (pp. 108-123). Springer Verlag
Exploring Temporal Dependencies in Multimodal Referring Expressions with Mixed Reality
2019 (English). In: Virtual, Augmented and Mixed Reality. Multimodal Interaction. 11th International Conference, VAMR 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, July 26–31, 2019, Proceedings, Springer Verlag, 2019, p. 108-123. Conference paper, Published paper (Refereed).
Abstract [en]

In collaborative tasks, people rely on both verbal and non-verbal cues simultaneously to communicate with each other. For human-robot interaction to run smoothly and naturally, a robot should be equipped with the ability to robustly disambiguate referring expressions. In this work, we propose a model that can disambiguate multimodal fetching requests using modalities such as head movements, hand gestures, and speech. We analysed the data acquired from mixed reality experiments and formulated the hypothesis that modelling temporal dependencies of events in these three modalities increases the model's predictive power. We evaluated our model within a Bayesian framework for interpreting referring expressions, with and without exploiting the temporal prior.
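
The fusion step described above can be read as a Bayesian update: each modality contributes a likelihood over the candidate objects, and a prior (for instance one encoding temporal structure) weights the result. A minimal sketch with made-up numbers, not the paper's model:

```python
import numpy as np

def fuse(likelihoods, prior=None):
    """Posterior over candidate referents given per-modality likelihoods.

    `likelihoods` holds one array per modality (e.g. gaze, gesture, speech);
    `prior` can encode temporal structure such as which modality fired most
    recently. All numbers here are made up for illustration.
    """
    post = np.ones_like(likelihoods[0]) if prior is None else np.asarray(prior, float)
    for lik in likelihoods:
        post = post * lik
    return post / post.sum()

gaze    = np.array([0.6, 0.3, 0.1])  # three candidate objects on the table
gesture = np.array([0.5, 0.4, 0.1])
speech  = np.array([0.2, 0.7, 0.1])
print(fuse([gaze, gesture, speech]))  # the second object wins once speech is fused
```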

Place, publisher, year, edition, pages
Springer Verlag, 2019
Series
Lecture Notes in Artificial Intelligence, ISSN 0302-9743; 11575
Keywords
Human-robot interaction, Mixed reality, Multimodal interaction, Referring expressions, Human computer interaction, Human robot interaction, Bayesian frameworks, Collaborative tasks, Hand gesture, Head movements, Multi-modal, Multi-Modal Interactions, Predictive power
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-262467 (URN), 10.1007/978-3-030-21565-1_8 (DOI), 2-s2.0-85069730416 (Scopus ID), 9783030215644 (ISBN)
Conference
11th International Conference on Virtual, Augmented and Mixed Reality, VAMR 2019, held as part of the 21st International Conference on Human-Computer Interaction, HCI International 2019; Orlando; United States; 26 July 2019 through 31 July 2019
Note

QC 20191017

Available from: 2019-10-17. Created: 2019-10-17. Last updated: 2020-01-15. Bibliographically approved.
Pinto Basto de Carvalho, J. F., Vejdemo-Johansson, M., Pokorny, F. T. & Kragic, D. (2019). Long-term Prediction of Motion Trajectories Using Path Homology Clusters. In: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems, 3-8 Nov. 2019, Macau, China. Institute of Electrical and Electronics Engineers (IEEE)
Long-term Prediction of Motion Trajectories Using Path Homology Clusters
2019 (English). In: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2019. Conference paper, Published paper (Refereed).
Abstract [en]

In order for robots to share their workspace with people, they need to reason about human motion efficiently. In this work we leverage large datasets of paths in order to infer local models that are able to perform long-term predictions of human motion. Further, since our method is based on simple dynamics, it is conceptually simple to understand and allows one to interpret the predictions produced, as well as to extract a cost function that can be used for planning. The main difference between our method and similar systems is that we employ a map of the space and translate the motion of groups of paths into vector fields on that map. We test our method on synthetic data and show its performance on the Edinburgh forum pedestrian long-term tracking dataset [1], where we outperform a Gaussian Mixture Model tasked with extracting dynamics from the paths.
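
To make the prediction mechanism concrete: once a cluster of paths has been turned into a vector field over a gridded map, a long-term prediction amounts to rolling a query position forward along that field. A minimal sketch under that assumption (the grid layout and field construction are illustrative, not the paper's):

```python
import numpy as np

def predict(field, start, n_steps=50, dt=1.0):
    """Roll a 2D position forward along a per-cell velocity field.

    `field` has shape (H, W, 2): one mean velocity per grid cell, e.g.
    estimated from a cluster of observed paths. Illustrative sketch only.
    """
    p = np.asarray(start, dtype=float)
    traj = [p.copy()]
    for _ in range(n_steps):
        i, j = np.clip(p.astype(int), 0, np.array(field.shape[:2]) - 1)
        p = p + dt * field[i, j]  # follow the local mean motion
        traj.append(p.copy())
    return np.array(traj)

# Toy field on a 10x10 map: everyone drifts along the second axis.
field = np.zeros((10, 10, 2))
field[..., 1] = 1.0
path = predict(field, start=(5.0, 0.0), n_steps=8)
```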

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2019
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-266956 (URN), 10.1109/IROS40897.2019.8968125 (DOI), 978-1-7281-4004-9 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems, 3-8 Nov. 2019, Macau, China
Funder
Knut and Alice Wallenberg Foundation
Note

QC 20200203

Available from: 2020-01-27. Created: 2020-01-27. Last updated: 2020-02-18. Bibliographically approved.
Haustein, J. A., Hang, K., Stork, J. A. & Kragic, D. (2019). Object Placement Planning and Optimization for Robot Manipulators. In: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019). Paper presented at the International Conference on Intelligent Robots and Systems (IROS), Macau, China, November 4-8, 2019.
Object Placement Planning and Optimization for Robot Manipulators
2019 (English). In: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019), 2019. Conference paper, Published paper (Refereed).
Abstract [en]

We address the problem of planning the placement of a rigid object with a dual-arm robot in a cluttered environment. In this task, we need to locate a collision-free pose for the object that a) facilitates the stable placement of the object, b) is reachable by the robot, and c) optimizes a user-given placement objective. In addition, we need to select which robot arm to perform the placement with. To solve this task, we propose an anytime algorithm that integrates sampling-based motion planning with a novel hierarchical search for suitable placement poses. Our algorithm incrementally produces approach motions to stable placement poses, reaching placements with better objective values as runtime progresses. We evaluate our approach for two different placement objectives, and observe its effectiveness even in challenging scenarios.
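
The anytime property can be illustrated with a simple loop that keeps sampling candidate placement poses within a time budget and only ever replaces the incumbent with a better-scoring feasible one. The sampler, feasibility predicates, and objective below are assumed interfaces for illustration, not the paper's:

```python
import time

def anytime_placement(sample_pose, is_stable, is_reachable, objective, budget_s=5.0):
    """Anytime search for a placement pose: keep sampling within a time
    budget and return the best feasible pose found so far."""
    best_pose, best_score = None, float("-inf")
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        pose = sample_pose()  # e.g. drawn from a hierarchical pose search
        if not (is_stable(pose) and is_reachable(pose)):
            continue  # reject unstable, colliding, or unreachable poses
        score = objective(pose)  # the user-given placement preference
        if score > best_score:
            best_pose, best_score = pose, score  # the incumbent only improves
    return best_pose
```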

Keywords
Motion planning, Object placing
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-264015 (URN)
Conference
International Conference on Intelligent Robots and Systems (IROS), Macau, China, November 4-8, 2019
Funder
Swedish Foundation for Strategic Research; Knut and Alice Wallenberg Foundation
Note

QC 20191210

Available from: 2019-11-20. Created: 2019-11-20. Last updated: 2020-01-31. Bibliographically approved.
Hang, K., Lyu, X., Song, H., Stork, J. A., Dollar, A. M., Kragic, D. & Zhang, F. (2019). Perching and resting: A paradigm for UAV maneuvering with modularized landing gears. Science Robotics, 4(28), Article ID eaau6637.
Perching and resting: A paradigm for UAV maneuvering with modularized landing gears
2019 (English). In: Science Robotics, ISSN 2470-9476, Vol. 4, no. 28, article id eaau6637. Article in journal (Refereed). Published.
Abstract [en]

Perching helps small unmanned aerial vehicles (UAVs) extend their time of operation by saving battery power. However, most strategies for UAV perching require complex maneuvering and rely on specific structures, such as rough walls for attaching or tree branches for grasping. Many perching strategies also neglect the UAV's mission, so that saving battery power interrupts it. We suggest enabling UAVs with the capability of making and stabilizing contacts with the environment, which allows the UAV to consume less energy while retaining its altitude, in addition to the perching capability that has been proposed before. We term this new capability "resting." To this end, we propose a modularized and actuated landing gear framework that allows stabilizing the UAV on a wide range of different structures by perching and resting. Modularization allows our framework to adapt to specific structures for resting through rapid prototyping with additive manufacturing. Actuation allows switching between different modes of perching and resting during flight, and additionally enables perching by grasping. Our results show that this framework can be used to perform UAV perching and resting on a set of common structures, such as street lights and edges or corners of buildings. We show that the design is effective in reducing power consumption, promotes increased pose stability, and preserves large vision ranges while perching or resting at heights. In addition, we discuss the potential applications facilitated by our design, as well as the potential issues to be addressed for deployment in practice.

Place, publisher, year, edition, pages
American Association for the Advancement of Science, 2019
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-251220 (URN), 10.1126/scirobotics.aau6637 (DOI), 000464024300001 (ISI), 2-s2.0-85063677452 (Scopus ID)
Note

QC 20190523

Available from: 2019-05-23. Created: 2019-05-23. Last updated: 2020-01-31. Bibliographically approved.
Haustein, J. A., Cruciani, S., Asif, R., Hang, K. & Kragic, D. (2019). Placing Objects with prior In-Hand Manipulation using Dexterous Manipulation Graphs. Paper presented at the IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids), Toronto, Canada, October 15-17, 2019 (pp. 477-484).
Placing Objects with prior In-Hand Manipulation using Dexterous Manipulation Graphs
2019 (English). Conference paper, Published paper (Refereed).
Abstract [en]

We address the problem of planning the placement of a grasped object with a robot manipulator. More specifically, the robot is tasked to place the grasped object such that a placement preference function is maximized. For this, we present an approach that uses in-hand manipulation to adjust the robot's initial grasp and thereby extend the set of reachable placements. Given an initial grasp, the algorithm computes a set of grasps that can be reached by pushing and rotating the object in-hand. With this set of reachable grasps, it then searches for a stable placement that maximizes the preference function. If successful, it returns a sequence of in-hand pushes to adjust the initial grasp to a more advantageous grasp, together with a transport motion that carries the object to the placement. We evaluate our algorithm's performance in various placing scenarios, and observe its effectiveness even in challenging scenes containing many obstacles. Our experiments demonstrate that re-grasping with in-hand manipulation increases the quality of placements the robot can reach. In particular, it enables the algorithm to find solutions in situations where safe placing with the initial grasp would not be possible.
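
The search described above (first enumerate the grasps reachable by in-hand pushes, then look for the placement that maximizes the preference function) can be sketched as a nested loop. All names below are illustrative assumptions about the interfaces, not the paper's code:

```python
def plan_place_with_regrasp(initial_grasp, reachable_grasps, placements, preference):
    """Pick the (grasp, placement) pair with the highest preference score.

    `reachable_grasps(g)` is assumed to map each grasp reachable from `g`
    by in-hand pushes to the push sequence producing it; `placements(g)`
    is assumed to yield stable, reachable placement poses for grasp `g`.
    """
    best = None  # (score, push_sequence, placement)
    for grasp, pushes in reachable_grasps(initial_grasp).items():
        for placement in placements(grasp):
            score = preference(placement)
            if best is None or score > best[0]:
                best = (score, pushes, placement)
    if best is None:
        return None  # no safe placement found for any reachable grasp
    _, pushes, placement = best
    return pushes, placement  # execute pushes in-hand, then transport and place
```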

National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-262882 (URN)
Conference
IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids), Toronto, Canada, October 15-17, 2019.
Note

QC 20191115

Available from: 2019-10-22. Created: 2019-10-22. Last updated: 2020-01-22. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-2965-2953
