Stork, Johannes A.
Publications (8 of 8)
Hang, K., Lyu, X., Song, H., Stork, J. A., Dollar, A. M., Kragic, D. & Zhang, F. (2019). Perching and resting - a paradigm for UAV maneuvering with modularized landing gears. Science Robotics, 4(28), Article ID eaau6637.
Perching and resting - a paradigm for UAV maneuvering with modularized landing gears
2019 (English). In: Science Robotics, ISSN 2470-9476, Vol. 4, no. 28, article id eaau6637. Article in journal (Refereed). Published.
Abstract [en]

Perching helps small unmanned aerial vehicles (UAVs) extend their time of operation by saving battery power. However, most strategies for UAV perching require complex maneuvering and rely on specific structures, such as rough walls for attaching or tree branches for grasping. Many perching strategies also neglect the UAV's mission, such that saving battery power interrupts the mission. We suggest enabling UAVs with the capability of making and stabilizing contacts with the environment, which will allow the UAV to consume less energy while retaining its altitude, in addition to the perching capability that has been proposed before. We term this new capability "resting." For this, we propose a modularized and actuated landing gear framework that allows stabilizing the UAV on a wide range of different structures by perching and resting. Modularization allows our framework to adapt to specific structures for resting through rapid prototyping with additive manufacturing. Actuation allows switching between different modes of perching and resting during flight and additionally enables perching by grasping. Our results show that this framework can be used to perform UAV perching and resting on a set of common structures, such as street lights and edges or corners of buildings. We show that the design is effective in reducing power consumption, improves pose stability, and preserves large vision ranges while perching or resting at heights. In addition, we discuss the potential applications facilitated by our design, as well as the potential issues to be addressed for deployment in practice.

Place, publisher, year, edition, pages
American Association for the Advancement of Science, 2019
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-251220 (URN), 10.1126/scirobotics.aau6637 (DOI), 000464024300001 (), 2-s2.0-85063677452 (Scopus ID)
Note

QC 20190523

Available from: 2019-05-23. Created: 2019-05-23. Last updated: 2019-05-23. Bibliographically approved.
Arnekvist, I., Kragic, D. & Stork, J. A. (2019). VPE: Variational policy embedding for transfer reinforcement learning. Paper presented at the International Conference on Robotics and Automation.
VPE: Variational policy embedding for transfer reinforcement learning
2019 (English). Conference paper, Published paper (Refereed).
National Category
Computer Vision and Robotics (Autonomous Systems); Computer Sciences
Identifiers
urn:nbn:se:kth:diva-258072 (URN)
Conference
International Conference on Robotics and Automation
Projects
Factories of the Future (FACT)
Note

QC 20190916

Available from: 2019-09-09. Created: 2019-09-09. Last updated: 2019-09-16. Bibliographically approved.
Kokic, M., Antonova, R., Stork, J. A. & Kragic, D. (2018). Global Search with Bernoulli Alternation Kernel for Task-oriented Grasping Informed by Simulation. In: Proceedings of the 2nd Conference on Robot Learning, PMLR 87, pp. 641-650. Paper presented at the 2nd Conference on Robot Learning, October 29th-31st, 2018, Zürich, Switzerland.
Global Search with Bernoulli Alternation Kernel for Task-oriented Grasping Informed by Simulation
2018 (English). In: Proceedings of the 2nd Conference on Robot Learning, PMLR 87, 2018, p. 641-650. Conference paper, Oral presentation with published abstract (Refereed).
Abstract [en]

We develop an approach that benefits from large simulated datasets and takes full advantage of the limited online data that is most relevant. We propose a variant of Bayesian optimization that alternates between using informed and uninformed kernels. With this Bernoulli Alternation Kernel we ensure that discrepancies between simulation and reality do not hinder adapting robot control policies online. The proposed approach is applied to a challenging real-world problem of task-oriented grasping with novel objects. Our further contribution is a neural network architecture and training pipeline that use experience from grasping objects in simulation to learn grasp stability scores. We learn task scores from a labeled dataset with a convolutional network, which is used to construct an informed kernel for our variant of Bayesian optimization. Experiments on an ABB Yumi robot with real sensor data demonstrate success of our approach, despite the challenge of fulfilling task requirements and high uncertainty over physical properties of objects.
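The alternation idea described above can be illustrated with a minimal sketch. Here `sim_score` is a hypothetical stand-in for the simulation-derived model, `p_informed` is an assumed mixing probability, and the Bernoulli draw is made per kernel query; the paper's actual kernel construction and alternation schedule are not reproduced.

```python
import math
import random

def uninformed_kernel(x, y, lengthscale=1.0):
    # Standard RBF kernel: makes no assumptions about the task.
    return math.exp(-((x - y) ** 2) / (2.0 * lengthscale ** 2))

def informed_kernel(x, y, sim_score, lengthscale=1.0):
    # Hypothetical informed kernel: measures similarity in the space of
    # simulation-predicted scores, so points the simulator rates alike
    # are treated as close even if their raw inputs differ.
    return math.exp(-((sim_score(x) - sim_score(y)) ** 2) / (2.0 * lengthscale ** 2))

def bernoulli_alternation_kernel(x, y, sim_score, p_informed=0.5, rng=random):
    # Draw a Bernoulli variable to decide which kernel to trust for this
    # query, so sim-to-real discrepancies in the informed kernel cannot
    # dominate the surrogate model.
    if rng.random() < p_informed:
        return informed_kernel(x, y, sim_score)
    return uninformed_kernel(x, y)
```

Either branch returns 1.0 for identical inputs and decays with distance, so the mixture remains a valid similarity measure for the Bayesian-optimization surrogate.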

National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-248396 (URN)
Conference
2nd Conference on Robot Learning, October 29th-31st, 2018, Zürich, Switzerland.
Note

QC 20190507

Available from: 2019-04-07. Created: 2019-04-07. Last updated: 2019-05-07. Bibliographically approved.
Antonova, R., Kokic, M., Stork, J. A. & Kragic, D. (2018). Global Search with Bernoulli Alternation Kernel for Task-oriented Grasping Informed by Simulation. Paper presented at the 2nd Conference on Robot Learning, Zürich, Switzerland, Oct. 29-31, 2018.
Global Search with Bernoulli Alternation Kernel for Task-oriented Grasping Informed by Simulation
2018 (English). Conference paper, Published paper (Refereed).
Abstract [en]

We develop an approach that benefits from large simulated datasets and takes full advantage of the limited online data that is most relevant. We propose a variant of Bayesian optimization that alternates between using informed and uninformed kernels. With this Bernoulli Alternation Kernel we ensure that discrepancies between simulation and reality do not hinder adapting robot control policies online. The proposed approach is applied to a challenging real-world problem of task-oriented grasping with novel objects. Our further contribution is a neural network architecture and training pipeline that use experience from grasping objects in simulation to learn grasp stability scores. We learn task scores from a labeled dataset with a convolutional network, which is used to construct an informed kernel for our variant of Bayesian optimization. Experiments on an ABB Yumi robot with real sensor data demonstrate success of our approach, despite the challenge of fulfilling task requirements and high uncertainty over physical properties of objects.

National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-249696 (URN)
Conference
2nd Conference on Robot Learning, Zürich, Switzerland, Oct. 29-31 2018
Note

Contribution/Authorship note: Rika Antonova and Mia Kokic contributed equally

QC 20190520

Available from: 2019-04-17. Created: 2019-04-17. Last updated: 2019-05-20. Bibliographically approved.
Yuan, W., Stork, J. A., Kragic, D., Wang, M. Y. & Hang, K. (2018). Rearrangement with Nonprehensile Manipulation Using Deep Reinforcement Learning. In: 2018 IEEE International Conference on Robotics and Automation (ICRA). Paper presented at the IEEE International Conference on Robotics and Automation (ICRA), May 21-25, 2018, Brisbane, Australia (pp. 270-277). IEEE Computer Society.
Rearrangement with Nonprehensile Manipulation Using Deep Reinforcement Learning
2018 (English). In: 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2018, p. 270-277. Conference paper, Published paper (Refereed).
Abstract [en]

Rearranging objects on a tabletop surface by means of nonprehensile manipulation is a task which requires skillful interaction with the physical world. Usually, this is achieved by precisely modeling the physical properties of the objects, robot, and environment for explicit planning. In contrast, as explicitly modeling the physical environment is not always feasible and involves various uncertainties, we learn a nonprehensile rearrangement strategy with deep reinforcement learning based only on visual feedback. For this, we model the task with rewards and train a deep Q-network. Our potential field-based heuristic exploration strategy reduces the number of collisions that lead to suboptimal outcomes, and we actively balance the training set to avoid bias towards poor examples. Our training process leads to quicker learning and better performance on the task compared to uniform exploration and standard experience replay. We demonstrate empirical evidence from simulation that our method achieves a success rate of 85%, show that our system can cope with sudden changes in the environment, and compare its performance with human-level performance.
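A minimal sketch of potential field-guided exploration inside epsilon-greedy action selection, assuming 2D positions, a small set of displacement actions, and hypothetical goal/obstacle coordinates; the paper's state representation, reward design, and network are not shown.

```python
import math
import random

def potential_field_action(agent, goal, obstacles, actions):
    # Score each candidate motion by an attractive term toward the goal
    # plus repulsive terms near obstacles, and pick the lowest-potential
    # successor. Exploratory moves thus avoid collision-prone states.
    def potential(pos):
        attract = math.dist(pos, goal)
        repel = sum(1.0 / max(math.dist(pos, ob), 1e-3) for ob in obstacles)
        return attract + repel

    def step(pos, a):
        return (pos[0] + a[0], pos[1] + a[1])

    return min(actions, key=lambda a: potential(step(agent, a)))

def select_action(q_values, agent, goal, obstacles, actions, epsilon, rng=random):
    # Epsilon-greedy selection where the exploratory branch follows the
    # potential-field heuristic instead of a uniform random draw.
    if rng.random() < epsilon:
        return potential_field_action(agent, goal, obstacles, actions)
    return max(actions, key=lambda a: q_values[a])
```

With `epsilon=0` this reduces to ordinary greedy Q-value selection; raising `epsilon` mixes in heuristic exploration rather than uniform noise.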

Place, publisher, year, edition, pages
IEEE Computer Society, 2018
Series
IEEE International Conference on Robotics and Automation (ICRA), ISSN 1050-4729
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-237158 (URN), 000446394500028 (), 2-s2.0-85063133829 (Scopus ID), 978-1-5386-3081-5 (ISBN)
Conference
IEEE International Conference on Robotics and Automation (ICRA), May 21-25, 2018, Brisbane, Australia
Funder
Knut and Alice Wallenberg Foundation
Note

QC 20181024

Available from: 2018-10-24. Created: 2018-10-24. Last updated: 2019-08-20. Bibliographically approved.
Hang, K., Stork, J. A., Pollard, N. S. & Kragic, D. (2017). A Framework for Optimal Grasp Contact Planning. IEEE Robotics and Automation Letters, 2(2), 704-711
A Framework for Optimal Grasp Contact Planning
2017 (English). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 2, no. 2, p. 704-711. Article in journal (Refereed). Published.
Abstract [en]

We consider the problem of finding grasp contacts that are optimal under a given grasp quality function on arbitrary objects. Our approach formulates contact-level grasping as a path-finding problem in the space of supercontact grasps. The initial supercontact grasp contains all grasps, and in each step along a path grasps are removed. For this, we introduce and formally characterize the search space structure and cost functions under which minimal-cost paths correspond to optimal grasps. Our formulation avoids expensive exhaustive search and reduces computational cost by several orders of magnitude. We present admissible heuristic functions and exploit approximate heuristic search to further reduce the computational cost while maintaining bounded suboptimality for the resulting grasps. We exemplify our formulation with point-contact grasping, for which we define domain-specific heuristics and demonstrate optimality and bounded suboptimality by comparing against exhaustive and uniform-cost search on example objects. Furthermore, we explain how to restrict the search graph to satisfy grasp constraints for modeling hand kinematics. We also analyze our algorithm empirically in terms of created and visited search states and the resulting effective branching factor.
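The search-space idea can be sketched as a best-first search over contact subsets: the start state contains all candidate contacts, each expansion removes one contact, and the search stops at a set of the desired size. Here `quality` is a hypothetical set-valued score to maximize; the paper's framework additionally defines cost functions and admissible heuristics that guarantee optimality or bounded suboptimality, which this greedy sketch omits.

```python
import heapq
import itertools

def best_grasp(contacts, quality, k):
    # Best-first search in the space of "supercontact" grasps. States are
    # frozensets of contact indices; successors remove one contact each.
    # Guided search typically avoids enumerating all C(n, k) subsets.
    start = frozenset(contacts)
    counter = itertools.count()  # tie-breaker so the heap never compares sets
    frontier = [(-quality(start), next(counter), start)]
    seen = {start}
    while frontier:
        neg_q, _, state = heapq.heappop(frontier)
        if len(state) == k:
            return set(state), -neg_q  # goal: a grasp with exactly k contacts
        for c in state:
            child = state - {c}
            if len(child) >= k and child not in seen:
                seen.add(child)
                heapq.heappush(frontier, (-quality(child), next(counter), child))
    return None
```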

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2017
Keywords
Grasping, dexterous manipulation, multifingered hands, contact modeling
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-217455 (URN), 10.1109/LRA.2017.2651381 (DOI), 000413736600043 ()
Note

QC 20171117

Available from: 2017-11-17. Created: 2017-11-17. Last updated: 2017-11-17. Bibliographically approved.
Kokic, M., Stork, J. A., Haustein, J. A. & Kragic, D. (2017). Affordance Detection for Task-Specific Grasping Using Deep Learning. In: 2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids) (pp. 91-98). Institute of Electrical and Electronics Engineers (IEEE).
Affordance Detection for Task-Specific Grasping Using Deep Learning
2017 (English). In: 2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 91-98. Conference paper, Published paper (Refereed).
Abstract [en]

In this paper, we utilize the notion of affordances to model the relations between a task, an object, and a grasp, in order to address the problem of task-specific robotic grasping. We use convolutional neural networks for encoding and detecting object affordances, class, and orientation, which we utilize to formulate grasp constraints. Our approach applies to previously unseen objects from a fixed set of classes and facilitates reasoning about which tasks an object affords and how to grasp it for that task. We evaluate affordance detection on full-view and partial-view synthetic data and compute task-specific grasps for objects that belong to ten different classes and afford five different tasks. We demonstrate the feasibility of our approach by employing an optimization-based grasp planner to compute task-specific grasps.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2017
Series
IEEE-RAS International Conference on Humanoid Robots, ISSN 2164-0572
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-225257 (URN), 10.1109/HUMANOIDS.2017.8239542 (DOI), 000427350100013 (), 2-s2.0-85044473077 (Scopus ID), 9781538646786 (ISBN)
Conference
2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids)
Funder
Wallenberg Foundations; Swedish Foundation for Strategic Research; Swedish Research Council
Note

QC 20180403

Available from: 2018-04-03. Created: 2018-04-03. Last updated: 2018-04-06. Bibliographically approved.
Thippur, A., Stork, J. A. & Jensfelt, P. (2017). Non-Parametric Spatial Context Structure Learning for Autonomous Understanding of Human Environments. In: Howard, A., Suzuki, K. & Zollo, L. (Eds.), 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). Paper presented at the 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Aug. 28 - Sep. 1, 2017, Lisbon, Portugal (pp. 1317-1324). IEEE.
Non-Parametric Spatial Context Structure Learning for Autonomous Understanding of Human Environments
2017 (English). In: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) / [ed] Howard, A., Suzuki, K. & Zollo, L., IEEE, 2017, p. 1317-1324. Conference paper, Published paper (Refereed).
Abstract [en]

Autonomous scene understanding through object classification today crucially depends on the accuracy of appearance-based robotic perception. However, this is prone to difficulties in object detection arising from unfavourable lighting conditions and vision-unfriendly object properties. In our work, we propose a spatial context based system which infers object classes using solely structural information captured from the scenes to aid traditional perception systems. Our system operates on novel spatial features (IFRC) that are robust to noisy object detections; it also supports on-the-fly modification of learned knowledge, improving performance with practice. IFRC features are aligned with how humans express 3D space, thereby facilitating easy human-robot interaction (HRI) and hence simpler supervised learning. We tested our spatial context based system and conclude that it can capture spatio-structural information for joint object classification, not only acting as a vision aid but sometimes even performing on par with appearance-based robotic vision.
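The "lazy learner" aspect mentioned in the keywords can be illustrated with a minimal k-nearest-neighbour sketch over spatial feature vectors. The feature design (IFRC) and the actual classification pipeline are not reproduced; the `examples` format of (feature vector, label) pairs is an assumption for illustration.

```python
import math
from collections import Counter

def knn_classify(query, examples, k=3):
    # Lazy spatial-context classifier sketch: store labeled spatial feature
    # vectors and classify a query object by majority vote among its k
    # nearest neighbors. Knowledge is modified on the fly simply by
    # appending new (features, label) pairs, with no retraining step.
    neighbors = sorted(examples, key=lambda ex: math.dist(ex[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]
```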

Place, publisher, year, edition, pages
IEEE, 2017
Series
IEEE RO-MAN, ISSN 1944-9445
Keywords
structure learning, spatial relationships, lazy learners, autonomous scene understanding
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-225236 (URN), 000427262400205 (), 2-s2.0-85045741190 (Scopus ID), 978-1-5386-3518-6 (ISBN)
Conference
26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Aug. 28 - Sep. 1, 2017, Lisbon, Portugal
Funder
EU, FP7, Seventh Framework Programme, 600623; Swedish Research Council, C0475401
Note

QC 20180403

Available from: 2018-04-03. Created: 2018-04-03. Last updated: 2018-04-11. Bibliographically approved.