Publications (10 of 253)
Pokorny, F. T., Bekiroglu, Y., Pauwels, K., Bütepage, J., Scherer, C. & Kragic, D. (2017). A database for reproducible manipulation research: CapriDB – Capture, Print, Innovate. Data in Brief, 11, 491-498.
A database for reproducible manipulation research: CapriDB – Capture, Print, Innovate
2017 (English) In: Data in Brief, ISSN 2352-3409, Vol. 11, pp. 491-498. Article in journal (Refereed). Published.
Abstract [en]

We present a novel approach and database which combines the inexpensive generation of 3D object models via monocular or RGB-D camera images with 3D printing and a state-of-the-art object tracking algorithm. Unlike recent efforts towards the creation of 3D object databases for robotics, our approach does not require expensive and controlled 3D scanning setups and aims to enable anyone with a camera to scan, print and track complex objects for manipulation research. The proposed approach results in detailed textured mesh models whose 3D-printed replicas provide close approximations of the originals. A key motivation for utilizing 3D-printed objects is the ability to precisely control and vary object properties such as size, material properties and mass distribution in the 3D printing process, in order to obtain reproducible conditions for robotic manipulation research. We present CapriDB – an extensible database resulting from this approach, initially containing 40 textured and 3D-printable mesh models together with tracking features, to facilitate the adoption of the proposed approach.

Place, publisher, year, edition, pages
Elsevier, 2017
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-210103 (URN); 10.1016/j.dib.2017.02.015 (DOI); 2-s2.0-85014438696 (Scopus ID)
Note

QC 20170630

Available from: 2017-06-30. Created: 2017-06-30. Last updated: 2018-01-13. Bibliographically approved.
Hang, K., Stork, J. A., Pollard, N. S. & Kragic, D. (2017). A Framework for Optimal Grasp Contact Planning. IEEE Robotics and Automation Letters, 2(2), 704-711.
A Framework for Optimal Grasp Contact Planning
2017 (English) In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 2, no. 2, pp. 704-711. Article in journal (Refereed). Published.
Abstract [en]

We consider the problem of finding grasp contacts that are optimal under a given grasp quality function on arbitrary objects. Our approach formulates a framework for contact-level grasping as a path-finding problem in the space of supercontact grasps. The initial supercontact grasp contains all grasps, and in each step along a path grasps are removed. For this, we introduce and formally characterize search space structure and cost functions under which minimal cost paths correspond to optimal grasps. Our formulation avoids expensive exhaustive search and reduces computational cost by several orders of magnitude. We present admissible heuristic functions and exploit approximate heuristic search to further reduce the computational cost while maintaining bounded suboptimality for resulting grasps. We exemplify our formulation with point-contact grasping, for which we define domain-specific heuristics and demonstrate optimality and bounded suboptimality by comparing against exhaustive and uniform cost search on example objects. Furthermore, we explain how to restrict the search graph to satisfy grasp constraints for modeling hand kinematics. We also analyze our algorithm empirically in terms of created and visited search states and the resultant effective branching factor.
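
The search structure described in this abstract can be illustrated compactly. Below is a minimal, hypothetical A*-style search over contact subsets: the start state contains all candidate contacts and each expansion removes one, mirroring the supercontact idea. The step_cost and heuristic arguments are placeholder stand-ins for the paper's quality-derived costs and admissible heuristics; this is a sketch of the search skeleton, not the authors' implementation.

```python
import heapq
from itertools import count

def astar_contact_search(contacts, k, step_cost, heuristic):
    """Best-first search over subsets of candidate contacts.

    Starts from the full candidate set and removes one contact per step
    until only k contacts remain. step_cost(state, c) and heuristic(state)
    are user-supplied; the heuristic must be admissible for the returned
    grasp to be optimal.
    """
    start = frozenset(range(len(contacts)))
    tie = count()  # tie-breaker so heapq never compares frozensets
    frontier = [(heuristic(start), next(tie), 0.0, start)]
    best_g = {start: 0.0}
    while frontier:
        f, _, g, state = heapq.heappop(frontier)
        if len(state) == k:            # goal: a k-contact grasp
            return state, g
        for c in state:                # expand: remove one contact
            child = state - {c}
            g_child = g + step_cost(state, c)
            if g_child < best_g.get(child, float("inf")):
                best_g[child] = g_child
                heapq.heappush(
                    frontier,
                    (g_child + heuristic(child), next(tie), g_child, child))
    return None, float("inf")

if __name__ == "__main__":
    # Hypothetical demo: 6 candidate contacts, keep 3; removing contact c
    # incurs a made-up penalty, with a trivially admissible zero heuristic
    # (which reduces the search to uniform-cost search).
    penalties = [0.3, 0.1, 0.5, 0.2, 0.4, 0.6]
    best, cost = astar_contact_search(
        contacts=list(range(6)), k=3,
        step_cost=lambda s, c: penalties[c],
        heuristic=lambda s: 0.0)
    print(sorted(best), cost)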

Place, publisher, year, edition, pages
IEEE, 2017
Keyword
Grasping, dexterous manipulation, multifingered hands, contact modeling
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-217455 (URN); 10.1109/LRA.2017.2651381 (DOI); 000413736600043 (ISI)
Note

QC 20171117

Available from: 2017-11-17. Created: 2017-11-17. Last updated: 2017-11-17. Bibliographically approved.
Bohg, J., Hausman, K., Sankaran, B., Brock, O., Kragic, D., Schaal, S. & Sukhatme, G. S. (2017). Interactive Perception: Leveraging Action in Perception and Perception in Action. IEEE Transactions on Robotics, 33(6), 1273-1291.
Interactive Perception: Leveraging Action in Perception and Perception in Action
2017 (English) In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 33, no. 6, pp. 1273-1291. Article in journal (Refereed). Published.
Abstract [en]

Recent approaches in robot perception follow the insight that perception is facilitated by interaction with the environment. These approaches are subsumed under the term Interactive Perception (IP). This view of perception provides the following benefits. First, interaction with the environment creates a rich sensory signal that would otherwise not be present. Second, knowledge of the regularity in the combined space of sensory data and action parameters facilitates the prediction and interpretation of the sensory signal. In this survey, we postulate this as a principle for robot perception and collect evidence in its support by analyzing and categorizing existing work in this area. We also provide an overview of the most important applications of IP. We close this survey by discussing remaining open questions. With this survey, we hope to help define the field of Interactive Perception and to provide a valuable resource for future research.

Place, publisher, year, edition, pages
IEEE, 2017
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-220617 (URN); 10.1109/TRO.2017.2721939 (DOI); 000417841500001 (ISI)
Note

QC 20180112

Available from: 2018-01-12. Created: 2018-01-12. Last updated: 2018-01-12. Bibliographically approved.
Seita, D., Pokorny, F. T., Mahler, J., Kragic, D., Franklin, M., Canny, J. & Goldberg, K. (2017). Large-scale supervised learning of the grasp robustness of surface patch pairs. In: 2016 IEEE International Conference on Simulation, Modeling, and Programming for Autonomous Robots, SIMPAR 2016. Paper presented at 2016 IEEE International Conference on Simulation, Modeling, and Programming for Autonomous Robots, SIMPAR 2016, 13 December 2016 through 16 December 2016 (pp. 216-223). Institute of Electrical and Electronics Engineers Inc.
Large-scale supervised learning of the grasp robustness of surface patch pairs
2017 (English) In: 2016 IEEE International Conference on Simulation, Modeling, and Programming for Autonomous Robots, SIMPAR 2016, Institute of Electrical and Electronics Engineers Inc., 2017, pp. 216-223. Conference paper, Published paper (Refereed).
Abstract [en]

The robustness of a parallel-jaw grasp can be estimated by Monte Carlo sampling of perturbations in pose and friction, but this is not computationally efficient. As an alternative, we consider fast methods using large-scale supervised learning, where the input is a description of a local surface patch at each of two contact points. We train and test with disjoint subsets of a corpus of 1.66 million grasps, where robustness is estimated by Monte Carlo sampling using Dex-Net 1.0. We use the BIDMach machine learning toolkit to compare the performance of two supervised learning methods: Random Forests and Deep Learning. We find that both of these methods learn to estimate grasp robustness fairly reliably in terms of Mean Absolute Error (MAE) and ROC Area Under Curve (AUC) on a held-out test set. Speedups over Monte Carlo sampling are approximately 7500x for Random Forests and 1500x for Deep Learning.
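
As a rough illustration of this learning setup (not the paper's pipeline, which uses the BIDMach toolkit on Dex-Net 1.0 data), the sketch below trains a Random Forest regressor on patch-pair features with scikit-learn. The feature dimension and the robustness labels are synthetic, invented purely for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

# Synthetic stand-in for the grasp corpus: each row concatenates
# descriptors of two local surface patches; the target is a robustness
# estimate in [0, 1] (here random, purely for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 32))    # assumed feature dimension
y = rng.uniform(size=10_000)         # assumed robustness labels
X_train, X_test = X[:8_000], X[8_000:]
y_train, y_test = y[:8_000], y[8_000:]

model = RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0)
model.fit(X_train, y_train)

# Held-out MAE, mirroring one of the paper's reported metrics.
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```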

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2017
Keyword
Decision trees, Deep learning, Learning systems, Robot programming, Robots, Supervised learning, Computationally efficient, Disjoint subsets, Local surfaces, Mean absolute error, Monte Carlo sampling, Random forests, Supervised learning methods, Surface patches, Monte Carlo methods
National Category
Probability Theory and Statistics
Identifiers
urn:nbn:se:kth:diva-207997 (URN); 10.1109/SIMPAR.2016.7862399 (DOI); 000405933700032 (ISI); 2-s2.0-85015928918 (Scopus ID); 9781509046164 (ISBN)
Conference
2016 IEEE International Conference on Simulation, Modeling, and Programming for Autonomous Robots, SIMPAR 2016, 13 December 2016 through 16 December 2016
Note

QC 20170608

Available from: 2017-06-08. Created: 2017-06-08. Last updated: 2017-11-10. Bibliographically approved.
Ek, C. H. & Kragic, D. (2017). The importance of structure. In: 15th International Symposium on Robotics Research, 2011. Paper presented at the 15th International Symposium on Robotics Research (ISRR 2011), 9 December 2011 through 12 December 2011 (pp. 111-127). Springer.
The importance of structure
2017 (English) In: 15th International Symposium on Robotics Research, 2011, Springer, 2017, pp. 111-127. Conference paper, Published paper (Refereed).
Abstract [en]

Many tasks in robotics and computer vision are concerned with inferring a continuous or discrete state variable from observations and measurements from the environment. Due to the high-dimensional nature of the input data, the inference is often cast as a two-stage process: first, a low-dimensional feature representation is extracted, to which a learning algorithm is then applied. Due to the significant progress that has been achieved within the field of machine learning over the last decade, focus has been placed on the second stage of the inference process, improving results by exploiting more advanced learning techniques applied to the same (or more of the same) data. We believe that for many scenarios, significant strides in performance could instead be achieved by focusing on the representation, rather than trying to compensate for inconclusive and/or redundant information with more advanced inference methods. This stems from the notion that, given the "correct" representation, the inference problem becomes easier to solve. In this paper we argue that an important mode of information for many application scenarios is not the actual variation in the data but rather the higher-order statistics, i.e. the structure of the variations. We exemplify this through a set of applications and show different ways of representing the structure of data.

Place, publisher, year, edition, pages
Springer, 2017
Keyword
Artificial intelligence, Computer vision, Higher order statistics, Inference engines, Learning systems, Robotics, Advanced learning, Application scenario, Feature representation, Inference methods, Inference problem, Inference process, Observations and measurements, Two-stage process, Learning algorithms
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-195120 (URN); 10.1007/978-3-319-29363-9_7 (DOI); 2-s2.0-84984823812 (Scopus ID); 9783319293622 (ISBN)
Conference
15th International Symposium on Robotics Research (ISRR 2011), 9 December 2011 through 12 December 2011
Note

Correspondence Address: Ek, C.H.; University of Bristol, United Kingdom; email: carlhenrik.ek@bristol.ac.uk. QC 20161121

Available from: 2016-11-21. Created: 2016-11-02. Last updated: 2018-01-13. Bibliographically approved.
Högman, V., Björkman, M., Maki, A. & Kragic, D. (2016). A sensorimotor learning framework for object categorization. IEEE Transactions on Cognitive and Developmental Systems, 8(1), 15-25.
A sensorimotor learning framework for object categorization
2016 (English) In: IEEE Transactions on Cognitive and Developmental Systems, ISSN 2379-8920, Vol. 8, no. 1, pp. 15-25. Article in journal (Refereed). Published.
Abstract [en]

This paper presents a framework that enables a robot to discover various object categories through interaction. The categories are described using action-effect relations, i.e. sensorimotor contingencies, rather than more static shape or appearance representations. The framework provides functionality to classify objects into the resulting categories, associating each class with a specific module. We demonstrate the performance of the framework by studying a pushing behavior in robots, encoding the sensorimotor contingencies and their predictability with Gaussian Processes. We show how entropy-based action selection can improve object classification and how functional categories emerge from the similarities of effects observed among the objects. We also show how a multidimensional action space can be realized by parameterizing pushing using both position and velocity.
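
A toy sketch of the entropy-based action selection idea, under strong simplifying assumptions: a one-dimensional push parameter, a single scalar effect, and scikit-learn's GP regressor standing in for the paper's contingency models. The selected action is simply the one whose predicted effect has the highest Gaussian predictive entropy.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy action-effect data: a 1-D push parameter mapped to a scalar effect.
rng = np.random.default_rng(1)
actions = rng.uniform(0, 1, size=(15, 1))
effects = np.sin(4 * actions[:, 0]) + 0.05 * rng.normal(size=15)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-3)
gp.fit(actions, effects)

# Entropy of a Gaussian predictive distribution: H = 0.5 * ln(2*pi*e*sigma^2),
# so maximizing entropy means probing where the model is least certain.
candidates = np.linspace(0, 1, 200).reshape(-1, 1)
_, sigma = gp.predict(candidates, return_std=True)
entropy = 0.5 * np.log(2 * np.pi * np.e * sigma**2)
next_action = candidates[np.argmax(entropy)]
print("most informative push parameter:", next_action[0])
```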

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2016
Keyword
sensorimotor learning, object classification, categorization, cognitive robotics, active perception, learning and adaptive system, embodiment, developmental robotics
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-172143 (URN); 10.1109/TAMD.2015.2463728 (DOI); 000388682400003 (ISI)
Funder
Swedish Research Council; EU, European Research Council, H2020-FETPROACT-2014 641321
Note

QC 20160422

Available from: 2016-04-21. Created: 2015-08-13. Last updated: 2017-01-04. Bibliographically approved.
Ghadirzadeh, A., Bütepage, J., Maki, A., Kragic, D. & Björkman, M. (2016). A sensorimotor reinforcement learning framework for physical human-robot interaction. In: IEEE International Conference on Intelligent Robots and Systems. Paper presented at 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2016, 9 October 2016 through 14 October 2016 (pp. 2682-2688). IEEE.
A sensorimotor reinforcement learning framework for physical human-robot interaction
2016 (English) In: IEEE International Conference on Intelligent Robots and Systems, IEEE, 2016, pp. 2682-2688. Conference paper, Published paper (Refereed).
Abstract [en]

Modeling physical human-robot collaboration is generally a challenging problem due to the unpredictable nature of human behavior. To address this issue, we present a data-efficient reinforcement learning framework which enables a robot to learn how to collaborate with a human partner. The robot learns the task from its own sensorimotor experiences in an unsupervised manner. The uncertainty in the interaction is modeled using Gaussian processes (GP) to implement a forward model and an action-value function. Optimal action selection given the uncertain GP model is ensured by Bayesian optimization. We apply the framework to a scenario in which a human and a PR2 robot jointly control the ball position on a plank based on vision and force/torque data. Our experimental results show the suitability of the proposed method in terms of fast and data-efficient model learning, optimal action selection under uncertainty and equal role sharing between the partners.
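
The action-selection loop can be caricatured as standard Bayesian optimization. The sketch below models a scalar return over a one-dimensional action with a GP and picks actions by a UCB acquisition; the rollout function is a synthetic placeholder for the physical ball-on-plank interaction, and the paper's actual framework additionally learns a GP forward model rather than the return alone.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(2)

def rollout(a):
    # Placeholder return for taking action a; in the paper this comes from
    # the physical interaction, not a closed-form function.
    return -(a - 0.3) ** 2 + 0.01 * rng.normal()

A = rng.uniform(0, 1, size=(5, 1))           # initial random actions
R = np.array([rollout(a[0]) for a in A])     # observed returns

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-4)
for _ in range(20):                          # Bayesian-optimization loop
    gp.fit(A, R)
    cand = np.linspace(0, 1, 500).reshape(-1, 1)
    mu, sigma = gp.predict(cand, return_std=True)
    a_next = cand[np.argmax(mu + 2.0 * sigma)]   # UCB acquisition
    A = np.vstack([A, a_next])
    R = np.append(R, rollout(a_next[0]))

print("best action found:", A[np.argmax(R)][0])
```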

Place, publisher, year, edition, pages
IEEE, 2016
Keyword
Behavioral research, Intelligent robots, Reinforcement learning, Robots, Bayesian optimization, Forward modeling, Gaussian process, Human behaviors, Human-robot collaboration, Model learning, Optimal actions, Physical human-robot interactions, Human robot interaction
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-202121 (URN); 10.1109/IROS.2016.7759417 (DOI); 000391921702127 (ISI); 2-s2.0-85006367922 (Scopus ID); 9781509037629 (ISBN)
Conference
2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2016, 9 October 2016 through 14 October 2016
Note

QC 20170228

Available from: 2017-02-28. Created: 2017-02-28. Last updated: 2017-03-06. Bibliographically approved.
Caccamo, S., Bekiroglu, Y., Ek, C. H. & Kragic, D. (2016). Active Exploration Using Gaussian Random Fields and Gaussian Process Implicit Surfaces. In: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016). Paper presented at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 9-14, 2016, Daejeon, South Korea (pp. 582-589). Institute of Electrical and Electronics Engineers (IEEE).
Active Exploration Using Gaussian Random Fields and Gaussian Process Implicit Surfaces
2016 (English) In: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016), Institute of Electrical and Electronics Engineers (IEEE), 2016, pp. 582-589. Conference paper, Published paper (Refereed).
Abstract [en]

In this work we study the problem of exploring surfaces and building compact 3D representations of the environment surrounding a robot through active perception. We propose an online probabilistic framework that merges visual and tactile measurements using Gaussian Random Fields and Gaussian Process Implicit Surfaces. The system investigates incomplete point clouds in order to find a small set of regions of interest, which are then physically explored with a robotic arm equipped with tactile sensors. We show experimental results obtained using a PrimeSense camera, a Kinova Jaco2 robotic arm and Optoforce sensors in different scenarios. We then demonstrate how to use the online framework for object detection and terrain classification.
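
For intuition, a Gaussian Process Implicit Surface can be sketched in two dimensions: the surface estimate is the zero level set of a GP fitted to points labeled 0 on the surface, negative inside, and positive outside, and the next tactile target can be chosen as the near-surface point of highest predictive variance. This is a toy construction with invented data, not the paper's combined GRF/GPIS system.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Observations: 0 on a unit circle (the "surface"), -1 inside, +1 outside.
surface_pts = np.array(
    [[np.cos(t), np.sin(t)]
     for t in np.linspace(0, 2 * np.pi, 12, endpoint=False)])
X = np.vstack([surface_pts, [[0.0, 0.0]], [[2.0, 2.0]]])
y = np.concatenate([np.zeros(len(surface_pts)), [-1.0], [1.0]])

gpis = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6)
gpis.fit(X, y)

# Query a grid: the estimated surface is the zero level set of the mean;
# high predictive variance flags regions worth touching next.
grid = np.array([[a, b]
                 for a in np.linspace(-1.5, 1.5, 40)
                 for b in np.linspace(-1.5, 1.5, 40)])
mean, std = gpis.predict(grid, return_std=True)
near_surface = np.abs(mean) < 0.1
roi = grid[near_surface][np.argmax(std[near_surface])]
print("next tactile target:", roi)
```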

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2016
Keyword
Active perception, Surface reconstruction, Gaussian process, Implicit surface, Random field, Tactile exploration
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-202672 (URN); 10.1109/IROS.2016.7759112 (DOI); 000391921700086 (ISI); 2-s2.0-85006371409 (Scopus ID); 978-1-5090-3762-9 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 9-14, 2016, Daejeon, South Korea
Note

QC 20170306

Available from: 2017-03-06. Created: 2017-03-06. Last updated: 2017-03-06. Bibliographically approved.
Caccamo, S., Güler, P., Kjellström, H. & Kragic, D. (2016). Active perception and modeling of deformable surfaces using Gaussian processes and position-based dynamics. In: IEEE-RAS International Conference on Humanoid Robots. Paper presented at 16th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2016, 15 November 2016 through 17 November 2016 (pp. 530-537). IEEE.
Active perception and modeling of deformable surfaces using Gaussian processes and position-based dynamics
2016 (English) In: IEEE-RAS International Conference on Humanoid Robots, IEEE, 2016, pp. 530-537. Conference paper, Published paper (Refereed).
Abstract [en]

Exploring and modeling heterogeneous elastic surfaces requires multiple interactions with the environment and a complex selection of physical material parameters. The most common approaches model deformable properties from sets of offline observations using computationally expensive force-based simulators. In this work we present an online probabilistic framework for autonomous estimation of a deformability distribution map of heterogeneous elastic surfaces from a few physical interactions. The method takes advantage of Gaussian Processes for constructing a model of the environment geometry surrounding a robot. A fast Position-based Dynamics simulator uses focused environmental observations in order to model the elastic behavior of portions of the environment. Gaussian Process Regression maps the local deformability over the whole environment in order to generate a deformability distribution map. We show experimental results using a PrimeSense camera, a Kinova Jaco2 robotic arm and an Optoforce sensor on different deformable surfaces.
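
Position-based dynamics itself is compact enough to sketch. The function below performs one schematic PBD step with distance constraints only (in the style of Müller et al.): predict positions, iteratively project constraints, then recover velocities. The per-constraint stiffness is the knob that a deformability estimate such as the paper's would tune; this is illustrative, not the simulator used in the paper.

```python
import numpy as np

def pbd_step(x, v, masses, constraints, dt=0.016, iters=10,
             gravity=(0.0, -9.8, 0.0)):
    """One position-based-dynamics step with distance constraints.

    x: (n, 3) positions; v: (n, 3) velocities; masses: (n,) masses;
    constraints: list of (i, j, rest_length, stiffness) tuples.
    """
    x, v = x.copy(), v.copy()
    v += dt * np.asarray(gravity)       # external force (gravity only here)
    p = x + dt * v                      # predicted positions
    w = 1.0 / masses                    # inverse masses
    for _ in range(iters):              # Gauss-Seidel constraint projection
        for (i, j, rest, stiffness) in constraints:
            d = p[i] - p[j]
            dist = np.linalg.norm(d)
            if dist < 1e-9:
                continue
            corr = (dist - rest) * d / dist / (w[i] + w[j])
            p[i] -= stiffness * w[i] * corr
            p[j] += stiffness * w[j] * corr
    v = (p - x) / dt                    # velocity from position change
    return p, v

# Hypothetical two-particle demo: one distance constraint, first particle
# pinned by a near-infinite mass; stiffness 0.5 models a soft coupling.
x = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
v = np.zeros_like(x)
m = np.array([1e9, 1.0])
x, v = pbd_step(x, v, m, [(0, 1, 1.0, 0.5)])
print(x)
```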

Place, publisher, year, edition, pages
IEEE, 2016
Keyword
Active perception, Deformability modeling, Gaussian process, Position-based dynamics, Tactile exploration, Anthropomorphic robots, Deformation, Dynamics, Gaussian noise (electronic), Probability distributions, Robots, Active perceptions, Environmental observation, Gaussian process regression, Gaussian Processes, Multiple interactions, Physical interactions, Probabilistic framework, Gaussian distribution
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-202842 (URN); 10.1109/HUMANOIDS.2016.7803326 (DOI); 000403009300081 (ISI); 2-s2.0-85010190205 (Scopus ID); 9781509047185 (ISBN)
Conference
16th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2016, 15 November 2016 through 17 November 2016
Note

QC 20170317

Available from: 2017-03-17. Created: 2017-03-17. Last updated: 2018-01-13. Bibliographically approved.
Viña Barrientos, F., Karayiannidis, Y., Smith, C. & Kragic, D. (2016). Adaptive Control for Pivoting with Visual and Tactile Feedback. Paper presented at the IEEE International Conference on Robotics and Automation, Stockholm, Sweden, 16-21 May 2016. Institute of Electrical and Electronics Engineers (IEEE).
Adaptive Control for Pivoting with Visual and Tactile Feedback
2016 (English). Conference paper, Published paper (Refereed).
Abstract [en]

In this work we present an adaptive control approach for pivoting, which is an in-hand manipulation maneuver that consists of rotating a grasped object to a desired orientation relative to the robot's hand. We perform pivoting by means of gravity, allowing the object to rotate between the fingers of a one-degree-of-freedom gripper and controlling the gripping force to ensure that the object follows a reference trajectory and arrives at the desired angular position. We use a visual pose estimation system to track the pose of the object and force measurements from tactile sensors to control the gripping force. The adaptive controller employs an update law that compensates for errors in the friction coefficient, which is one of the most common sources of uncertainty in manipulation. Our experiments confirm that the proposed adaptive controller successfully pivots a grasped object in the presence of uncertainty in the object's friction parameters.
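
Since the paper's exact control and adaptation laws are not reproduced here, the sketch below shows only the generic shape of such a scheme: a certainty-equivalence gripping-force command built from the current friction estimate, plus a gradient-style update of that estimate driven by the tracking error. All gains, constants, and the torque model are hypothetical placeholders, not the authors' formulation.

```python
import numpy as np

def adaptive_pivot_step(theta, theta_dot, theta_ref, theta_ref_dot,
                        mu_hat, params, dt):
    """One step of a generic adaptive pivoting controller (illustrative).

    theta: current object angle; mu_hat: current friction estimate;
    params: dict of gains and assumed object constants.
    """
    e = theta - theta_ref
    s = theta_dot - theta_ref_dot + params["lam"] * e  # sliding-style error
    # Certainty-equivalence gripping force: friction torque ~ mu * f_grip * r,
    # chosen to counteract gravity torque plus a stabilizing feedback term.
    tau_des = params["m"] * 9.81 * params["l"] * np.sin(theta) \
        - params["kd"] * s
    f_grip = tau_des / max(mu_hat * params["r"], 1e-6)
    # Gradient-style adaptation of the friction estimate.
    mu_hat = mu_hat - params["gamma"] * s * f_grip * params["r"] * dt
    return f_grip, mu_hat
```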

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2016
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-187483 (URN); 000389516200050 (ISI); 2-s2.0-84977472497 (Scopus ID)
Conference
IEEE International Conference on Robotics and Automation, Stockholm, Sweden, 16-21 May 2016
Note

QC 20160524

Available from: 2016-05-24. Created: 2016-05-24. Last updated: 2018-01-10. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-2965-2953