Publications (10 of 273)
Hang, K., Lyu, X., Song, H., Stork, J. A., Dollar, A. M., Kragic, D. & Zhang, F. (2019). Perching and resting: a paradigm for UAV maneuvering with modularized landing gears. Science Robotics, 4(28), Article ID eaau6637.
2019 (English). In: Science Robotics, ISSN 2470-9476, Vol. 4, no. 28, article id eaau6637. Article in journal (Refereed). Published.
Abstract [en]

Perching helps small unmanned aerial vehicles (UAVs) extend their time of operation by saving battery power. However, most strategies for UAV perching require complex maneuvering and rely on specific structures, such as rough walls for attaching or tree branches for grasping. Many perching strategies also neglect the UAV's mission, so that saving battery power interrupts the mission. We suggest enabling UAVs with the capability of making and stabilizing contacts with the environment, which allows the UAV to consume less energy while retaining its altitude, in addition to the perching capability that has been proposed before. We term this new capability "resting." For this, we propose a modularized and actuated landing gear framework that allows stabilizing the UAV on a wide range of different structures by perching and resting. Modularization allows our framework to adapt to specific structures for resting through rapid prototyping with additive manufacturing. Actuation allows switching between different modes of perching and resting during flight and additionally enables perching by grasping. Our results show that this framework can be used to perform UAV perching and resting on a set of common structures, such as street lights and edges or corners of buildings. We show that the design is effective in reducing power consumption, promotes increased pose stability, and preserves large vision ranges while perching or resting at heights. In addition, we discuss the potential applications facilitated by our design, as well as the potential issues to be addressed for deployment in practice.

Place, publisher, year, edition, pages
American Association for the Advancement of Science, 2019
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-251220 (URN), 10.1126/scirobotics.aau6637 (DOI), 000464024300001 (), 2-s2.0-85063677452 (Scopus ID)
Note

QC 20190523

Available from: 2019-05-23 Created: 2019-05-23 Last updated: 2019-05-23. Bibliographically approved
Billard, A. & Kragic, D. (2019). Trends and challenges in robot manipulation. Science, 364(6446), 1149-+
2019 (English). In: Science, ISSN 0036-8075, E-ISSN 1095-9203, Vol. 364, no. 6446, p. 1149-+. Article, review/survey (Refereed). Published.
Abstract [en]

Dexterous manipulation is one of the primary goals in robotics. Robots with this capability could sort and package objects, chop vegetables, and fold clothes. As robots come to work side by side with humans, they must also become human-aware. Over the past decade, research has made strides toward these goals. Progress has come from advances in visual and haptic perception and in mechanics in the form of soft actuators that offer a natural compliance. Most notably, immense progress in machine learning has been leveraged to encapsulate models of uncertainty and to support improvements in adaptive and robust control. Open questions remain in terms of how to enable robots to deal with the most unpredictable agent of all, the human.

Place, publisher, year, edition, pages
American Association for the Advancement of Science, 2019
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-255314 (URN), 10.1126/science.aat8414 (DOI), 000472175100030 (), 31221831 (PubMedID), 2-s2.0-85068153256 (Scopus ID)
Note

QC 20190807

Available from: 2019-08-07 Created: 2019-08-07 Last updated: 2019-08-07. Bibliographically approved
Sibirtseva, E., Kontogiorgos, D., Nykvist, O., Karaoguz, H., Leite, I., Gustafson, J. & Kragic, D. (2018). A Comparison of Visualisation Methods for Disambiguating Verbal Requests in Human-Robot Interaction. In: 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). Paper presented at RO-MAN 2018.
2018 (English). In: 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2018. Conference paper, Published paper (Refereed).
Abstract [en]

Picking up objects requested by a human user is a common task in human-robot interaction. When multiple objects match the user's verbal description, the robot needs to clarify which object the user is referring to before executing the action. Previous research has focused on perceiving the user's multimodal behaviour to complement verbal commands, or on minimising the number of follow-up questions to reduce task time. In this paper, we propose a system for reference disambiguation based on visualisation and compare three methods to disambiguate natural language instructions. In a controlled experiment with a YuMi robot, we investigated real-time augmentations of the workspace in three conditions - head-mounted display, projector, and a monitor as the baseline - using objective measures such as time and accuracy, and subjective measures like engagement, immersion, and display interference. Significant differences were found in accuracy and engagement between the conditions, but not in task time. Despite the higher error rates in the head-mounted display condition, participants found that modality more engaging than the other two, but overall preferred the projector condition over the monitor and head-mounted display conditions.

National Category
Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-235548 (URN), 10.1109/ROMAN.2018.8525554 (DOI), 978-1-5386-7981-4 (ISBN)
Conference
RO-MAN 2018
Note

QC 20181207

Available from: 2018-09-29 Created: 2018-09-29 Last updated: 2018-12-07. Bibliographically approved
Carvalho, J. F., Vejdemo-Johansson, M., Kragic, D. & Pokorny, F. T. (2018). An algorithm for calculating top-dimensional bounding chains. PeerJ Computer Science, Article ID e153.
2018 (English). In: PeerJ Computer Science, ISSN 2376-5992, article id e153. Article in journal (Refereed). Published.
Abstract [en]

We describe the Coefficient-Flow algorithm for calculating the bounding chain of an (n-1)-boundary on an n-manifold-like simplicial complex S. We prove its correctness and show that it has a computational time complexity of O(|S^(n-1)|) (where S^(n-1) is the set of (n-1)-faces of S). We estimate the big-O coefficient, which depends on the dimension of S and the implementation. We present an implementation, experimentally evaluate the complexity of our algorithm, and compare its performance with that of solving the underlying linear system.
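The abstract benchmarks Coefficient-Flow against solving the underlying linear system. A minimal sketch of that baseline only (not the Coefficient-Flow algorithm itself): a bounding chain of b is any exact solution x of the boundary equation d·x = b, here found by Gaussian elimination over the rationals. The toy complex and all names are illustrative.

```python
from fractions import Fraction

def solve_bounding_chain(boundary_matrix, b):
    """Find x with boundary_matrix . x == b by Gaussian elimination
    over the rationals; returns None if b is not a boundary."""
    m, n = len(boundary_matrix), len(boundary_matrix[0])
    # Augmented matrix [d | b] with exact rational arithmetic.
    A = [[Fraction(v) for v in row] + [Fraction(b[i])]
         for i, row in enumerate(boundary_matrix)]
    pivots, r = [], 0
    for c in range(n):
        p = next((i for i in range(r, m) if A[i][c] != 0), None)
        if p is None:
            continue
        A[r], A[p] = A[p], A[r]
        piv = A[r][c]
        A[r] = [v / piv for v in A[r]]
        for i in range(m):
            if i != r and A[i][c] != 0:
                f = A[i][c]
                A[i] = [vi - f * vr for vi, vr in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
        if r == m:
            break
    if any(A[i][n] != 0 for i in range(r, m)):
        return None  # inconsistent system: b bounds no chain
    x = [Fraction(0)] * n
    for row, c in enumerate(pivots):
        x[c] = A[row][n]
    return x

# Toy complex: an interval subdivided into edges e1 = (v0, v1) and
# e2 = (v1, v2).  Rows index vertices, columns index edges.
d = [[-1, 0],
     [1, -1],
     [0, 1]]
b = [-1, 0, 1]                                # the 0-boundary v2 - v0
assert solve_bounding_chain(d, b) == [1, 1]   # bounding chain e1 + e2
```

Elimination costs roughly cubic time in the size of the complex, which is exactly what motivates a linear-time alternative such as the one the abstract claims.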

Place, publisher, year, edition, pages
PeerJ Inc., 2018
Keywords
Homology, Computational algebraic topology
National Category
Computational Mathematics
Identifiers
urn:nbn:se:kth:diva-232420 (URN), 10.7717/peerj-cs.153 (DOI), 000437236300001 ()
Funder
Knut and Alice Wallenberg Foundation; Swedish Research Council
Note

QC 20180725

Available from: 2018-07-25 Created: 2018-07-25 Last updated: 2019-04-12. Bibliographically approved
Bütepage, J., Kjellström, H. & Kragic, D. (2018). Anticipating many futures: Online human motion prediction and generation for human-robot interaction. In: 2018 IEEE International Conference on Robotics and Automation (ICRA). Paper presented at IEEE International Conference on Robotics and Automation (ICRA), May 21-25, 2018, Brisbane, Australia (pp. 4563-4570). IEEE Computer Society
2018 (English). In: 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2018, p. 4563-4570. Conference paper, Published paper (Refereed).
Abstract [en]

Fluent and safe interactions of humans and robots require both partners to anticipate each other's actions. The bottleneck of most methods is the lack of an accurate model of natural human motion. In this work, we present a conditional variational autoencoder that is trained to predict a window of future human motion given a window of past frames. Using skeletal data obtained from RGB-D images, we show how this unsupervised approach can be used for online motion prediction for up to 1660 ms. Additionally, we demonstrate online target prediction within the first 300-500 ms after motion onset without the use of target-specific training data. The advantage of our probabilistic approach is the possibility to draw samples of possible future motion patterns. Finally, we investigate how movements and kinematic cues are represented on the learned low-dimensional manifold.
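The sampling mechanism described above, drawing many plausible future windows from a conditional variational autoencoder, can be sketched with untrained placeholder weights. Everything below (dimensions, weight matrices, function names) is illustrative and is not the paper's architecture: encode the past window to a latent Gaussian, then decode repeated reparameterized samples conditioned on the past.

```python
import math
import random

random.seed(0)

def linear(x, W, b):
    # Plain affine map: one output per row of W.
    return [sum(wi * xi for wi, xi in zip(row, x)) + bj
            for row, bj in zip(W, b)]

# Toy sizes: past window of P frames, future window of F frames,
# each frame a D-dim pose vector, latent size Z.  Random weights
# stand in for a trained conditional VAE.
P, F, D, Z = 5, 5, 3, 2

def rand_mat(rows, cols):
    return [[random.gauss(0, 0.1) for _ in range(cols)] for _ in range(rows)]

W_mu, b_mu = rand_mat(Z, P * D), [0.0] * Z            # encoder mean head
W_lv, b_lv = rand_mat(Z, P * D), [0.0] * Z            # encoder log-variance head
W_dec, b_dec = rand_mat(F * D, Z + P * D), [0.0] * (F * D)  # decoder

def predict_futures(past_window, n_samples):
    """Encode the past once, then decode several reparameterized latent
    samples to get distinct candidate future windows."""
    x = [v for frame in past_window for v in frame]   # flatten past window
    mu = linear(x, W_mu, b_mu)
    logvar = linear(x, W_lv, b_lv)
    futures = []
    for _ in range(n_samples):
        z = [m + math.exp(0.5 * lv) * random.gauss(0, 1)
             for m, lv in zip(mu, logvar)]            # reparameterization
        y = linear(z + x, W_dec, b_dec)               # condition on the past
        futures.append([y[i * D:(i + 1) * D] for i in range(F)])
    return futures

past = [[0.1 * t, 0.0, 0.0] for t in range(P)]
samples = predict_futures(past, n_samples=10)         # 10 candidate futures
```

The point of the sketch is only the probabilistic "many futures" step: one encoding, many decoded samples, each a full future window.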

Place, publisher, year, edition, pages
IEEE Computer Society, 2018
Series
IEEE International Conference on Robotics and Automation ICRA, ISSN 1050-4729
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-237164 (URN), 000446394503071 (), 978-1-5386-3081-5 (ISBN)
Conference
IEEE International Conference on Robotics and Automation (ICRA), May 21-25, 2018, Brisbane, Australia
Funder
Swedish Foundation for Strategic Research
Note

QC 20181024

Available from: 2018-10-24 Created: 2018-10-24 Last updated: 2019-08-20. Bibliographically approved
Cruciani, S., Smith, C., Kragic, D. & Hang, K. (2018). Dexterous Manipulation Graphs. In: Maciejewski, A. A., et al. (Eds.), 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at 25th IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct 1-5, 2018, Madrid, Spain (pp. 2040-2047). IEEE
2018 (English). In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) / [ed] Maciejewski, A. A., et al., IEEE, 2018, p. 2040-2047. Conference paper, Published paper (Refereed).
Abstract [en]

We propose the Dexterous Manipulation Graph as a tool to address in-hand manipulation and reposition an object inside a robot's end-effector. This graph is used to plan a sequence of manipulation primitives so as to bring the object to the desired end pose. This sequence of primitives is translated into motions of the robot that move the object held by the end-effector. We use a dual-arm robot with parallel grippers to test our method on a real system and show successful planning and execution of in-hand manipulation.
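The planning step described above, finding a sequence of primitives through a graph of in-hand states, reduces to shortest-path search. A toy sketch with assumed state and primitive names (the graph construction in the paper is far richer than this hand-written dictionary):

```python
from collections import deque

# Illustrative in-hand states and manipulation primitives; each edge
# (primitive, next_state) stands in for a node transition in the
# Dexterous Manipulation Graph.
graph = {
    "grasp_top":    [("slide", "grasp_edge"), ("pivot", "grasp_side")],
    "grasp_edge":   [("rotate", "grasp_bottom")],
    "grasp_side":   [("slide", "grasp_bottom")],
    "grasp_bottom": [],
}

def plan_primitives(start, goal):
    """Breadth-first search over the graph: returns a shortest sequence
    of primitives bringing the object from start to the goal pose."""
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        state, plan = queue.popleft()
        if state == goal:
            return plan
        for primitive, nxt in graph[state]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, plan + [primitive]))
    return None  # goal unreachable from start

print(plan_primitives("grasp_top", "grasp_bottom"))  # → ['slide', 'rotate']
```

Each primitive in the returned plan would then be translated into robot motion, as the abstract describes.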

Place, publisher, year, edition, pages
IEEE, 2018
Series
IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-246311 (URN), 10.1109/IROS.2018.8594303 (DOI), 000458872702017 (), 978-1-5386-8094-0 (ISBN)
Conference
25th IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct 1-5, 2018, Madrid, Spain
Note

QC 20190319

Available from: 2019-03-19 Created: 2019-03-19 Last updated: 2019-03-19. Bibliographically approved
Krug, R., Bekiroglu, Y., Kragic, D. & Roa, M. A. (2018). Evaluating the Quality of Non-Prehensile Balancing Grasps. In: 2018 IEEE International Conference on Robotics and Automation (ICRA). Paper presented at IEEE International Conference on Robotics and Automation (ICRA), May 21-25, 2018, Brisbane, Australia (pp. 4215-4220). IEEE Computer Society
2018 (English). In: 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2018, p. 4215-4220. Conference paper, Published paper (Refereed).
Abstract [en]

Assessing grasp quality and, subsequently, predicting grasp success is useful for avoiding failures in many autonomous robotic applications. In addition, interest in non-prehensile grasping and manipulation has been growing, as it offers the potential for a large increase in dexterity. However, while force-closure grasping has been the subject of intense study for many years, few existing works have considered quality metrics for non-prehensile grasps, and no studies exist to validate them in practice. In this work, we use a real-world data set of non-prehensile balancing grasps to experimentally validate a wrench-based quality metric by means of its grasp success prediction capability. The overall accuracy of up to 84% is encouraging and in line with existing results for force-closure grasps.
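The validation described above, predicting grasp success from a scalar wrench-based quality metric, amounts to thresholding the metric and scoring the predictions against observed outcomes. A toy sketch with made-up scores and labels (not the paper's data set or its metric):

```python
# Made-up quality scores and success labels for eight grasp trials;
# these stand in for the real-world data set described above.
scores = [0.9, 0.7, 0.2, 0.8, 0.1, 0.4, 0.6, 0.3]
succeeded = [True, True, False, True, False, True, False, False]

def prediction_accuracy(scores, labels, threshold):
    """Predict success whenever the quality metric clears the threshold,
    then score the predictions against the observed outcomes."""
    predictions = [s >= threshold for s in scores]
    return sum(p == l for p, l in zip(predictions, labels)) / len(labels)

# Sweep thresholds to find the best operating point on this toy data.
best_accuracy, best_threshold = max(
    (prediction_accuracy(scores, succeeded, t / 10), t / 10)
    for t in range(11))
print(best_accuracy)  # → 0.875
```

The paper's reported figure of up to 84% is an accuracy of exactly this kind, computed on real balancing-grasp trials rather than toy numbers.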

Place, publisher, year, edition, pages
IEEE Computer Society, 2018
Series
IEEE International Conference on Robotics and Automation ICRA, ISSN 1050-4729
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-237163 (URN), 000446394503032 (), 2-s2.0-85063137634 (Scopus ID), 978-1-5386-3081-5 (ISBN)
Conference
IEEE International Conference on Robotics and Automation (ICRA), May 21-25, 2018, Brisbane, Australia
Funder
Swedish Foundation for Strategic Research
Note

QC 20181024

Available from: 2018-10-24 Created: 2018-10-24 Last updated: 2019-08-20. Bibliographically approved
Kragic, D. (2018). From active perception to deep learning. Science Robotics, 3(23), Article ID eaav1778.
2018 (English). In: Science Robotics, ISSN 2470-9476, Vol. 3, no. 23, article id eaav1778. Article in journal, Editorial material (Other academic). Published.
Place, publisher, year, edition, pages
American Association for the Advancement of Science, 2018
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-239087 (URN), 10.1126/scirobotics.aav1778 (DOI), 000448624000004 (), 2-s2.0-85056580508 (Scopus ID)
Note

QC 20181121

Available from: 2018-11-21 Created: 2018-11-21 Last updated: 2019-08-20. Bibliographically approved
Kokic, M., Antonova, R., Stork, J. A. & Kragic, D. (2018). Global Search with Bernoulli Alternation Kernel for Task-oriented Grasping Informed by Simulation. In: Proceedings of the 2nd Conference on Robot Learning, PMLR 87. Paper presented at 2nd Conference on Robot Learning, October 29-31, 2018, Zürich, Switzerland (pp. 641-650).
2018 (English). In: Proceedings of the 2nd Conference on Robot Learning, PMLR 87, 2018, p. 641-650. Conference paper, Oral presentation with published abstract (Refereed).
Abstract [en]

We develop an approach that benefits from large simulated datasets and takes full advantage of the limited online data that is most relevant. We propose a variant of Bayesian optimization that alternates between using informed and uninformed kernels. With this Bernoulli Alternation Kernel, we ensure that discrepancies between simulation and reality do not hinder adapting robot control policies online. The proposed approach is applied to a challenging real-world problem of task-oriented grasping with novel objects. Our further contribution is a neural network architecture and training pipeline that use experience from grasping objects in simulation to learn grasp stability scores. We learn task scores from a labeled dataset with a convolutional network, which is used to construct an informed kernel for our variant of Bayesian optimization. Experiments on an ABB YuMi robot with real sensor data demonstrate the success of our approach, despite the challenge of fulfilling task requirements and high uncertainty over physical properties of objects.
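The core idea, flipping a Bernoulli coin at each Bayesian-optimization iteration to decide whether the surrogate uses the simulation-informed kernel or an uninformed one, can be sketched as follows. The two suggestion functions are crude stand-ins for acquisition optimization under each kernel, and all names, the 1-D objective, and the parameters are illustrative:

```python
import random

random.seed(1)

def objective(x):
    # Toy stand-in for the outcome of a real grasp trial (lower is better).
    return (x - 0.5) ** 2

def suggest_informed(history):
    # Placeholder for acquisition under the simulation-informed kernel:
    # exploit locally around the best configuration found so far.
    best_x, _ = min(history, key=lambda h: h[1])
    return best_x + random.gauss(0, 0.1)

def suggest_uninformed(history):
    # Placeholder for acquisition under the uninformed kernel:
    # explore broadly, trusting only real-world evaluations.
    return random.uniform(-2, 2)

def bernoulli_alternation_bo(n_iters, p_informed=0.5):
    """Each iteration draws a Bernoulli variable to pick which kernel's
    suggestion to follow, so a misleading simulator cannot dominate."""
    history = [(0.0, objective(0.0))]
    for _ in range(n_iters):
        suggest = (suggest_informed if random.random() < p_informed
                   else suggest_uninformed)
        x = suggest(history)
        history.append((x, objective(x)))
    return min(history, key=lambda h: h[1])

best_x, best_y = bernoulli_alternation_bo(50)
assert best_y <= objective(0.0)  # never worse than the starting point
```

When the informed suggestions are good, they accelerate the search; when they are misled by the sim-to-real gap, the uninformed draws still guarantee broad coverage of the search space.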

National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-248396 (URN)
Conference
2nd Conference on Robot Learning, October 29th-31st, 2018, Zürich, Switzerland.
Note

QC 20190507

Available from: 2019-04-07 Created: 2019-04-07 Last updated: 2019-05-07. Bibliographically approved
Antonova, R., Kokic, M., Stork, J. A. & Kragic, D. (2018). Global Search with Bernoulli Alternation Kernel for Task-oriented Grasping Informed by Simulation. Paper presented at 2nd Conference on Robot Learning, Zürich, Switzerland, Oct. 29-31, 2018.
2018 (English). Conference paper, Published paper (Refereed).
Abstract [en]

We develop an approach that benefits from large simulated datasets and takes full advantage of the limited online data that is most relevant. We propose a variant of Bayesian optimization that alternates between using informed and uninformed kernels. With this Bernoulli Alternation Kernel, we ensure that discrepancies between simulation and reality do not hinder adapting robot control policies online. The proposed approach is applied to a challenging real-world problem of task-oriented grasping with novel objects. Our further contribution is a neural network architecture and training pipeline that use experience from grasping objects in simulation to learn grasp stability scores. We learn task scores from a labeled dataset with a convolutional network, which is used to construct an informed kernel for our variant of Bayesian optimization. Experiments on an ABB YuMi robot with real sensor data demonstrate the success of our approach, despite the challenge of fulfilling task requirements and high uncertainty over physical properties of objects.

National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-249696 (URN)
Conference
2nd Conference on Robot Learning, Zürich, Switzerland, Oct. 29-31 2018
Note

Contribution/Authorship note: Rika Antonova and Mia Kokic contributed equally

QC 20190520

Available from: 2019-04-17 Created: 2019-04-17 Last updated: 2019-05-20. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0003-2965-2953
