Large-scale supervised learning of the grasp robustness of surface patch pairs
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. ORCID iD: 0000-0003-1114-6040
KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. ORCID iD: 0000-0003-2965-2953
2017 (English). In: 2016 IEEE International Conference on Simulation, Modeling, and Programming for Autonomous Robots, SIMPAR 2016, Institute of Electrical and Electronics Engineers Inc., 2017, p. 216-223. Conference paper, Published paper (Refereed).
Abstract [en]

The robustness of a parallel-jaw grasp can be estimated by Monte Carlo sampling of perturbations in pose and friction, but this is not computationally efficient. As an alternative, we consider fast methods using large-scale supervised learning, where the input is a description of a local surface patch at each of two contact points. We train and test with disjoint subsets of a corpus of 1.66 million grasps whose robustness is estimated by Monte Carlo sampling using Dex-Net 1.0. We use the BIDMach machine learning toolkit to compare the performance of two supervised learning methods: Random Forests and Deep Learning. We find that both methods learn to estimate grasp robustness fairly reliably in terms of Mean Absolute Error (MAE) and ROC Area Under Curve (AUC) on a held-out test set. Speedups over Monte Carlo sampling are approximately 7500x for Random Forests and 1500x for Deep Learning.
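For illustration only (this is not code from the paper), the Monte Carlo baseline described in the abstract can be sketched as follows: robustness is the fraction of sampled pose and friction perturbations under which the grasp still succeeds. The grasp_succeeds predicate (for example a force-closure test), the noise model, and the perturbation scales are assumptions of this sketch, not details taken from the paper.

import numpy as np

def monte_carlo_robustness(grasp_pose, mu, grasp_succeeds, n_samples=1000,
                           pose_sigma=0.005, mu_sigma=0.1, rng=None):
    # Fraction of perturbed grasps that still succeed; grasp_succeeds is a
    # user-supplied (hypothetical) predicate, e.g. a force-closure check.
    rng = rng or np.random.default_rng(0)
    successes = 0
    for _ in range(n_samples):
        noisy_pose = grasp_pose + rng.normal(scale=pose_sigma, size=np.shape(grasp_pose))
        noisy_mu = max(0.0, mu + rng.normal(scale=mu_sigma))
        successes += bool(grasp_succeeds(noisy_pose, noisy_mu))
    return successes / n_samples

The supervised-learning alternative can likewise be sketched with scikit-learn (the paper itself uses the BIDMach toolkit): regress the Monte Carlo robustness labels from concatenated surface-patch-pair descriptors, then report MAE and ROC AUC on a held-out split. The feature dimensionality, the synthetic stand-in data, and the 0.5 threshold used to binarize robustness for the AUC are illustrative assumptions.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for the Dex-Net-style corpus: each row concatenates the descriptors
# of the two contact-point surface patches; the target is the Monte Carlo
# estimate of grasp robustness. The real corpus has ~1.66 million grasps.
n_grasps, patch_dim = 10_000, 32
X = rng.normal(size=(n_grasps, 2 * patch_dim))
y = rng.uniform(size=n_grasps)  # placeholder robustness labels

# Disjoint train/test subsets, as in the paper.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)

model = RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)

mae = mean_absolute_error(y_test, pred)
# Binarize the robustness labels at an assumed 0.5 threshold for ROC AUC.
auc = roc_auc_score((y_test > 0.5).astype(int), pred)
print(f"MAE = {mae:.3f}, ROC AUC = {auc:.3f}")

A deep-learning regressor could be swapped in on the same features; the paper reports query-time speedups over Monte Carlo sampling of roughly 7500x for Random Forests and about 1500x for Deep Learning.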

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2017, p. 216-223.
Keywords [en]
Decision trees, Deep learning, Learning systems, Robot programming, Robots, Supervised learning, Computationally efficient, Disjoint subsets, Local surfaces, Mean absolute error, Monte Carlo sampling, Random forests, Supervised learning methods, Surface patches, Monte Carlo methods
National Category
Probability Theory and Statistics
Identifiers
URN: urn:nbn:se:kth:diva-207997
DOI: 10.1109/SIMPAR.2016.7862399
ISI: 000405933700032
Scopus ID: 2-s2.0-85015928918
ISBN: 9781509046164 (print)
OAI: oai:DiVA.org:kth-207997
DiVA id: diva2:1106852
Conference
2016 IEEE International Conference on Simulation, Modeling, and Programming for Autonomous Robots, SIMPAR 2016, 13-16 December 2016
Note

QC 20170608

Available from: 2017-06-08. Created: 2017-06-08. Last updated: 2022-06-27. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus
Conference website: http://simpar2016.org/

Authority records

Pokorny, Florian T.; Kragic, Danica

