  • 51.
    Carvalho, Joao Frederico
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Vejdemo-Johansson, Mikael
    CUNY, Math Dept, Coll Staten Isl, New York, NY 10021 USA.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Pokorny, Florian T.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    An algorithm for calculating top-dimensional bounding chains (2018). In: PeerJ Computer Science, ISSN 2376-5992, article id e153. Journal article (peer-reviewed)
    Abstract [en]

    We describe the Coefficient-Flow algorithm for calculating the bounding chain of an $(n-1)$-boundary on an $n$-manifold-like simplicial complex $S$. We prove its correctness and show that it has a computational time complexity of $O(|S_{(n-1)}|)$ (where $S_{(n-1)}$ is the set of $(n-1)$-faces of $S$). We estimate the big-$O$ coefficient, which depends on the dimension of $S$ and the implementation. We present an implementation, experimentally evaluate the complexity of our algorithm, and compare its performance with that of solving the underlying linear system.
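
A hedged aside for context: the "underlying linear system" baseline that the abstract compares against can be sketched directly. The toy two-triangle complex, the edge ordering and all names below are illustrative assumptions, not the paper's Coefficient-Flow algorithm.

```python
# Baseline bounding-chain computation by solving the boundary linear system
# (the reference method the abstract compares against), on a toy complex of
# two triangles glued along the edge (1, 2).
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
triangles = [(0, 1, 2), (1, 2, 3)]
edge_index = {e: i for i, e in enumerate(edges)}

# Boundary matrix d2: rows = edges, columns = triangles; for a sorted
# triangle [u, v, w] the boundary is [v,w] - [u,w] + [u,v].
d2 = np.zeros((len(edges), len(triangles)))
for j, (u, v, w) in enumerate(triangles):
    d2[edge_index[(v, w)], j] = 1.0
    d2[edge_index[(u, w)], j] = -1.0
    d2[edge_index[(u, v)], j] = 1.0

# A 1-cycle along the outer rim of the strip (the shared edge (1, 2) has
# coefficient 0), expressed in the edge order above.
b = np.array([1.0, -1.0, 0.0, 1.0, -1.0])

# Solving d2 @ x = b yields a bounding 2-chain for b.
x, *_ = np.linalg.lstsq(d2, b, rcond=None)
print(np.round(x, 6))          # [ 1. -1.]: the two triangles, oppositely oriented
print(np.allclose(d2 @ x, b))  # True: x bounds the cycle b
```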

  • 52.
    Christensen, Henrik I.
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Sandberg, F
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Computational Vision for Interaction with People and Robots. Manuscript (preprint) (other academic)
    Abstract [en]

    Facilities for sensing and modification of the environment are crucial to the delivery of robotic facilities that can interact with humans and objects in the environment. Both for recognition of objects and interpretation of human activities (for instruction and avoidance), by far the most versatile sensory modality is computational vision. Use of vision for interpretation of human gestures and for manipulation of objects is outlined in this paper. It is described how combination of multiple visual cues can be used to achieve robustness, and the tradeoff between models and cue integration is illustrated. The described vision competences are demonstrated in the context of an intelligent service robot that operates in a regular domestic setting.

  • 53. Christensen, Henrik I
    et al.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Sandberg, F
    Vision for Interaction (2000). In: Dagstuhl Seminars, 2000, pp. 51-73. Chapter in book, part of anthology (peer-reviewed)
  • 54. Comport, Andrew I.
    et al.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Numerisk Analys och Datalogi, NADA. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Marchand, E.
    Chaumette, F.
    Robust Real-Time Visual Tracking: Comparison, Theoretical Analysis and Performance Evaluation (2005). In: 2005 IEEE International Conference on Robotics and Automation (ICRA), Vols 1-4, 2005, pp. 2841-2846. Conference paper (peer-reviewed)
    Abstract [en]

    In this paper, two real-time pose tracking algorithms for rigid objects are compared. Both methods are 3D-model based and are capable of calculating the pose between the camera and an object with a monocular vision system. Here, special consideration has been put into defining and evaluating different performance criteria such as computational efficiency, accuracy and robustness. Both methods are described and a unifying framework is derived. The main advantage of both algorithms lies in their real-time capabilities (on standard hardware) whilst being robust to mis-tracking, occlusion and changes in illumination.

  • 55.
    Cornelius, Hugo
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Eklundh, Jan-Olof
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Object and pose recognition using contour and shape information (2005). In: 2005 12th International Conference on Advanced Robotics, New York, NY: IEEE, 2005, pp. 613-620. Conference paper (peer-reviewed)
    Abstract [en]

    Object recognition and pose estimation are of significant importance for robotic visual servoing, manipulation and grasping tasks. Traditionally, contour and shape based methods have been considered most adequate for estimating stable and feasible grasps [1]. More recently, a new research direction has been advocated in visual servoing where image moments are used to define a suitable error function to be minimized. Compared to appearance based methods, contour and shape based approaches are also suitable for use with range sensors such as, for example, lasers. In this paper, we evaluate a contour based object recognition system building on the method in [2], suitable for objects of uniform color properties such as cups, cutlery, fruits etc. This system is one of the building blocks of a more complex object recognition system based on both stereo and appearance cues [3]. The system has significant potential both in terms of service robot and programming by demonstration tasks. Experimental evaluation shows promising results in terms of robustness to occlusion and noise.
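
Since the abstract contrasts contour/shape methods with moment-based visual servoing, a minimal sketch of image-moment features on a binary object mask may help; it assumes nothing about the paper's actual recognition pipeline, and the rectangle mask is invented.

```python
# Illustrative only: image-moment features of a binary object mask, of the
# kind used to build error functions in moment-based visual servoing.
import numpy as np

def moment_features(mask: np.ndarray):
    """mask: 2D boolean/0-1 array. Returns area, centroid and orientation."""
    ys, xs = np.nonzero(mask)
    m00 = float(len(xs))                      # area (zeroth moment)
    cx, cy = xs.mean(), ys.mean()             # centroid (first moments / m00)
    mu20 = ((xs - cx) ** 2).mean()            # normalized central moments
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # principal axis angle
    return m00, (cx, cy), theta

mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 10:50] = True                     # a wide rectangle
area, centroid, theta = moment_features(mask)
print(area, centroid, round(float(theta), 3)) # 800 pixels, centred, ~0 rad
```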

  • 56.
    Cruciani, Silvia
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Almeida, Diogo
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL. KTH.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Karayiannidis, Yiannis
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Discrete Bimanual Manipulation for Wrench Balancing. Manuscript (preprint) (other academic)
    Abstract [en]

    Dual-arm robots can overcome grasping force and payload limitations of a single arm by jointly grasping an object. However, if the distribution of mass of the grasped object is not even, each arm will experience different wrenches that can exceed its payload limits. In this work, we consider the problem of balancing the wrenches experienced by a dual-arm robot grasping a rigid tray. The distribution of wrenches among the robot arms changes due to objects being placed on the tray. We present an approach to reduce the wrench imbalance among arms through discrete bimanual manipulation. Our approach is based on sequential sliding motions of the grasp points on the surface of the object, to attain a more balanced configuration. This is achieved in a discrete manner, one arm at a time, to minimize the potential for undesirable object motion during execution. We validate our modeling approach and system design through a set of robot experiments.
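
As a toy illustration of the statics behind wrench balancing (not the paper's model or controller), a one-dimensional force and torque balance shows how sliding a grasp point redistributes the load between the two arms; all positions and the weight are made up.

```python
# Two grasp points at x1 and x2 support a rigid tray carrying weight W at xw.
def grasp_forces(x1: float, x2: float, xw: float, W: float):
    """Load split from torque balance about x1 and force balance."""
    f2 = W * (xw - x1) / (x2 - x1)   # torque balance about x1
    f1 = W - f2                      # force balance
    return f1, f2

# An off-centre load leaves arm 1 carrying most of the weight ...
print(grasp_forces(0.0, 1.0, 0.2, 10.0))   # (8.0, 2.0)
# ... sliding grasp point 2 until the load sits midway equalizes the wrenches.
print(grasp_forces(0.0, 0.4, 0.2, 10.0))   # (5.0, 5.0)
```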

  • 57.
    Cruciani, Silvia
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Hang, Kaiyu
    Yale University.
    Smith, Christian
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Dual-Arm In-Hand Manipulation Using Visual Feedback (2019). Conference paper (peer-reviewed)
    Abstract [en]

    In this work, we address the problem of executing in-hand manipulation based on visual input. Given an initial grasp, the robot has to change its grasp configuration without releasing the object. We propose a method for in-hand manipulation planning and execution based on information on the object’s shape using a dual-arm robot. From the available information on the object, which can be a complete point cloud but also partial data, our method plans a sequence of rotations and translations to reconfigure the object’s pose. This sequence is executed using non-prehensile pushes defined as relative motions between the two robot arms.

  • 58.
    Cruciani, Silvia
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Hang, Yin
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    In-Hand Manipulation of Objects with Unknown Shapes. Manuscript (preprint) (other academic)
    Abstract [en]

    This work addresses the problem of changing grasp configurations on objects with an unknown shape through in-hand manipulation. Our approach leverages shape priors, learned as deep generative models, to infer novel object shapes from partial visual sensing. The Dexterous Manipulation Graph method is extended to build upon incremental data and account for estimation uncertainty in searching a sequence of manipulation actions. We show that our approach successfully solves in-hand manipulation tasks with unknown objects, and demonstrate the validity of these solutions with robot experiments.

  • 59.
    Cruciani, Silvia
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Smith, Christian
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Hang, Kaiyu
    Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Hong Kong, Peoples R China; Hong Kong Univ Sci & Technol, Inst Adv Study, Hong Kong, Peoples R China.
    Dexterous Manipulation Graphs (2018). In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), [eds.] A. A. Maciejewski et al., IEEE, 2018, pp. 2040-2047. Conference paper (peer-reviewed)
    Abstract [en]

    We propose the Dexterous Manipulation Graph as a tool to address in-hand manipulation and reposition an object inside a robot's end-effector. This graph is used to plan a sequence of manipulation primitives so as to bring the object to the desired end pose. This sequence of primitives is translated into motions of the robot to move the object held by the end-effector. We use a dual arm robot with parallel grippers to test our method on a real system and show successful planning and execution of in-hand manipulation.
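
A schematic sketch of the graph idea described above: nodes stand for discrete gripper placements on the object, edges for feasible sliding or rotation primitives, and a shortest path is a primitive sequence from the start grasp to the goal grasp. The placements and primitives are hypothetical.

```python
# Toy Dexterous-Manipulation-Graph-style planner for a box-like object.
from collections import deque

# Hypothetical gripper placements and in-hand primitives.
graph = {
    "top":    [("slide", "side_l"), ("slide", "side_r")],
    "side_l": [("rotate", "bottom"), ("slide", "top")],
    "side_r": [("rotate", "bottom"), ("slide", "top")],
    "bottom": [("slide", "side_l")],
}

def plan(start: str, goal: str):
    """Breadth-first search for the shortest primitive sequence."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for primitive, nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(primitive, nxt)]))
    return None

print(plan("top", "bottom"))  # [('slide', 'side_l'), ('rotate', 'bottom')]
```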

  • 60.
    Cruciani, Silvia
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL. KTH Royal Inst Technol, Div Robot Percept & Learning, EECS, S-11428 Stockholm, Sweden.
    Sundaralingam, Balakumar
    Univ Utah, Robot Ctr, Salt Lake City, UT 84112 USA; Univ Utah, Sch Comp, Salt Lake City, UT 84112 USA.
    Hang, Kaiyu
    Yale Univ, Dept Mech Engn & Mat Sci, New Haven, CT 06520 USA.
    Kumar, Vikash
    Google AI, San Francisco, CA 94110 USA.
    Hermans, Tucker
    Univ Utah, Robot Ctr, Salt Lake City, UT 84112 USA; Univ Utah, Sch Comp, Salt Lake City, UT 84112 USA; NVIDIA Res, Santa Clara, CA USA.
    Kragic, Danica
    KTH, Tidigare Institutioner (före 2005), Numerisk analys och datalogi, NADA. KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL. KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma system, CAS. KTH Royal Inst Technol, Div Robot Percept & Learning, EECS, S-11428 Stockholm, Sweden.
    Benchmarking In-Hand Manipulation (2020). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 5, no. 2, pp. 588-595. Journal article (peer-reviewed)
    Abstract [en]

    The purpose of this benchmark is to evaluate the planning and control aspects of robotic in-hand manipulation systems. The goal is to assess the system's ability to change the pose of a hand-held object by using either the fingers, the environment or a combination of both. Given an object surface mesh from the YCB data-set, we provide examples of initial and goal states (i.e. static object poses and fingertip locations) for various in-hand manipulation tasks. We further propose metrics that measure the error in reaching the goal state from a specific initial state, which, when aggregated across all tasks, also serve as a measure of the system's in-hand manipulation capability. We provide supporting software, task examples, and evaluation results associated with the benchmark.
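
The benchmark's exact metrics are specified in the paper and its supporting software; an illustrative pose-error measure of the same flavour combines translational distance with the geodesic angle between unit quaternions. The (x, y, z, w) quaternion convention and the example poses are assumptions.

```python
# Illustrative pose-error metric of the kind such benchmarks aggregate.
import numpy as np

def pose_error(p_goal, q_goal, p_reached, q_reached):
    """Position error [m] and rotation error [rad] between two object poses.
    Quaternions are unit-norm in (x, y, z, w) order (assumed convention)."""
    e_pos = np.linalg.norm(np.asarray(p_goal) - np.asarray(p_reached))
    dot = abs(float(np.dot(q_goal, q_reached)))  # |.|: q and -q are one rotation
    e_rot = 2.0 * np.arccos(np.clip(dot, -1.0, 1.0))
    return e_pos, e_rot

p_err, r_err = pose_error([0, 0, 0.1], [0, 0, 0, 1],
                          [0, 0.02, 0.1],
                          [0, 0, np.sin(np.pi / 8), np.cos(np.pi / 8)])
print(round(p_err, 3), round(np.degrees(r_err), 1))  # 0.02 m, 45.0 deg
```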

  • 61.
    Cruciani, Silvia
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Yin, Hang
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    In-Hand Manipulation of Objects with Unknown Shapes. Manuscript (preprint) (other academic)
    Abstract [en]

    This work addresses the problem of changing grasp configurations on objects with an unknown shape through in-hand manipulation. Our approach leverages shape priors, learned as deep generative models, to infer novel object shapes from partial visual sensing. The Dexterous Manipulation Graph method is extended to build upon incremental data and account for estimation uncertainty in searching a sequence of manipulation actions. We show that our approach successfully solves in-hand manipulation tasks with unknown objects, and demonstrate the validity of these solutions with robot experiments.

  • 62.
    Detry, Renaud
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Madry, Marianna
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Learning a dictionary of prototypical grasp-predicting parts from grasping experience (2013). In: 2013 IEEE International Conference on Robotics and Automation (ICRA), New York: IEEE, 2013, pp. 601-608. Conference paper (peer-reviewed)
    Abstract [en]

    We present a real-world robotic agent that is capable of transferring grasping strategies across objects that share similar parts. The agent transfers grasps across objects by identifying, from examples provided by a teacher, parts by which objects are often grasped in a similar fashion. It then uses these parts to identify grasping points onto novel objects. We focus our report on the definition of a similarity measure that reflects whether the shapes of two parts resemble each other, and whether their associated grasps are applied near one another. We present an experiment in which our agent extracts five prototypical parts from thirty-two real-world grasp examples, and we demonstrate the applicability of the prototypical parts for grasping novel objects.

  • 63.
    Detry, Renaud
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Madry, Marianna
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Piater, Justus
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Generalizing grasps across partly similar objects (2012). In: 2012 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2012, pp. 3791-3797. Conference paper (peer-reviewed)
    Abstract [en]

    The paper starts by reviewing the challenges associated with grasp planning, and previous work on robot grasping. Our review emphasizes the importance of agents that generalize grasping strategies across objects, and that are able to transfer these strategies to novel objects. In the rest of the paper, we then devise a novel approach to the grasp transfer problem, where generalization is achieved by learning, from a set of grasp examples, a dictionary of object parts by which objects are often grasped. We detail the application of dimensionality reduction and unsupervised clustering algorithms with the aim of identifying the size and shape of parts that often predict the application of a grasp. The learned dictionary allows our agent to grasp novel objects which share a part with previously seen objects, by matching the learned parts to the current view of the new object, and selecting the grasp associated with the best-fitting part. We present and discuss a proof-of-concept experiment in which a dictionary is learned from a set of synthetic grasp examples. While prior work in this area focused primarily on shape analysis (parts identified, e.g., through visual clustering, or salient structure analysis), the key aspect of this work is the emergence of parts from both object shape and grasp examples. As a result, parts intrinsically encode the intention of executing a grasp.
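
A schematic sketch of the pipeline shape described above (dimensionality reduction followed by unsupervised clustering to obtain a part dictionary), run on synthetic stand-in descriptors rather than the papers' grasp data; all sizes are arbitrary.

```python
# Dimensionality reduction + clustering to extract a small dictionary of
# prototypical grasp-predicting parts (synthetic data, illustrative only).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 32 grasp examples, each a 60-D part descriptor (shape + grasp features).
descriptors = rng.normal(size=(32, 60))

embedded = PCA(n_components=5).fit_transform(descriptors)   # compress
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(embedded)

dictionary = kmeans.cluster_centers_   # 5 prototypical parts
labels = kmeans.labels_                # which prototype each example uses
print(dictionary.shape, np.bincount(labels))
```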

  • 64. Do, Martin
    et al.
    Romero, Javier
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kjellström, Hedvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Azad, Pedram
    Asfour, Tamim
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Dillmann, Rüdiger
    Grasp recognition and mapping on humanoid robots (2009). In: 9th IEEE-RAS International Conference on Humanoid Robots, HUMANOIDS09, 2009, pp. 465-471. Conference paper (peer-reviewed)
  • 65.
    Drimus, Alin
    et al.
    Mads Clausen Institute for Product Innovation, University of Southern Denmark.
    Kootstra, Gert
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bilberg, A.
    Mads Clausen Institute for Product Innovation, University of Southern Denmark.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Classification of Rigid and Deformable Objects Using a Novel Tactile Sensor (2011). In: Proceedings of the 15th International Conference on Advanced Robotics (ICAR), IEEE, 2011, pp. 427-434. Conference paper (peer-reviewed)
    Abstract [en]

    In this paper, we present a novel tactile-array sensor for use in robotic grippers based on flexible piezoresistive rubber. We start by describing the physical principles of piezoresistive materials, and continue by outlining how to build a flexible tactile-sensor array using conductive thread electrodes. A real-time acquisition system scans the data from the array which is then further processed. We validate the properties of the sensor in an application that classifies a number of household objects while performing a palpation procedure with a robotic gripper. Based on the haptic feedback, we classify various rigid and deformable objects. We represent the array of tactile information as a time series of features and use this as the input for a k-nearest neighbors classifier. Dynamic time warping is used to calculate the distances between different time series. The results from our novel tactile sensor are compared to results obtained from an experimental setup using a Weiss Robotics tactile sensor with similar characteristics. We conclude by exemplifying how the results of the classification can be used in different robotic applications.
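
A minimal sketch of the classification scheme described above: a hand-rolled dynamic time warping (DTW) distance plugged into a nearest-neighbour classifier. The toy pressure traces are invented, not the paper's tactile data.

```python
# DTW distance between tactile time series, used in a 1-nearest-neighbour
# classifier, as the abstract describes.
import numpy as np

def dtw(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def nearest_neighbour(query, train_series, train_labels):
    dists = [dtw(query, s) for s in train_series]
    return train_labels[int(np.argmin(dists))]

# Toy traces (mean cell pressure per frame): rigid objects give a sharp
# step, deformable ones a slow rise.
rigid = np.array([0, 0, 5, 5, 5, 5], dtype=float)
soft  = np.array([0, 1, 2, 3, 4, 5], dtype=float)
query = np.array([0, 0, 4, 5, 5], dtype=float)
print(nearest_neighbour(query, [rigid, soft], ["rigid", "deformable"]))  # rigid
```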

  • 66. Drimus, Alin
    et al.
    Kootstra, Gert
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bilberg, Arne
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Design of a flexible tactile sensor for classification of rigid and deformable objects (2014). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 62, no. 1, pp. 3-15. Journal article (peer-reviewed)
    Abstract [en]

    For both humans and robots, tactile sensing is important for interaction with the environment: it is the core sensing used for exploration and manipulation of objects. In this paper, we present a novel tactile-array sensor based on flexible piezoresistive rubber. We describe the design of the sensor and data acquisition system. We evaluate the sensitivity and robustness of the sensor, and show that it is consistent over time with little relaxation. Furthermore, the sensor has the benefit of being flexible and high-resolution, and it is easy to mount and simple to manufacture. We demonstrate the use of the sensor in an active object-classification system. A robotic gripper with two sensors mounted on its fingers performs a palpation procedure on a set of objects. By squeezing an object, the robot actively explores the material properties, and the system acquires tactile information corresponding to the resulting pressure. Based on a k nearest neighbor classifier and using dynamic time warping to calculate the distance between different time series, the system is able to successfully classify objects. Our sensor demonstrates similar classification performance to the Weiss Robotics tactile sensor, while having additional benefits.

  • 67. Ek, C. H.
    et al.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    The importance of structure (2017). In: 15th International Symposium of Robotics Research, 2011, Springer, 2017, pp. 111-127. Conference paper (peer-reviewed)
    Abstract [en]

    Many tasks in robotics and computer vision are concerned with inferring a continuous or discrete state variable from observations and measurements from the environment. Due to the high-dimensional nature of the input data, the inference is often cast as a two-stage process: first a low-dimensional feature representation is extracted, on which secondly a learning algorithm is applied. Due to the significant progress that has been achieved within the field of machine learning over the last decade, focus has been placed on the second stage of the inference process, improving it by exploiting more advanced learning techniques applied to the same (or more of the same) data. We believe that for many scenarios significant strides in performance could be achieved by focusing on representation rather than aiming to alleviate inconclusive and/or redundant information by exploiting more advanced inference methods. This stems from the notion that, given the "correct" representation, the inference problem becomes easier to solve. In this paper we argue that one important mode of information for many application scenarios is not the actual variation in the data but rather higher-order statistics such as the structure of variations. We exemplify this through a set of applications and show different ways of representing the structure of data. © Springer International Publishing Switzerland 2017.

  • 68.
    Ek, Carl Henrik
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    The importance of structure (2011). Conference paper (peer-reviewed)
  • 69.
    Ek, Carl Henrik
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Song, Dan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Huebner, Kai
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Exploring affordances in robot grasping through latent structure representation (2010). In: The 11th European Conference on Computer Vision (ECCV 2010), 2010. Conference paper (peer-reviewed)
  • 70.
    Ek, Carl Henrik
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Song, Dan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Huebner, Kai
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Task Modeling in Imitation Learning using Latent Variable Models (2010). In: 2010 10th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2010, 2010, pp. 458-553. Conference paper (peer-reviewed)
    Abstract [en]

    An important challenge in robotic research is learning and reasoning about different manipulation tasks from scene observations. In this paper we present a probabilistic model capable of modeling several different types of input sources within the same model. Our model is capable of inferring the task using only partial observations. Further, our framework allows the robot, given partial knowledge of the scene, to reason about which information streams to acquire in order to disambiguate the state-space the most. We present results for task classification and also reason about the discriminative power of different features for different classes of tasks.

  • 71.
    Ek, Carl Henrik
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Song, Dan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Learning Conditional Structures in Graphical Models from a Large Set of Observation Streams through Efficient Discretisation (2011). In: IEEE International Conference on Robotics and Automation, Workshop on Manipulation under Uncertainty, 2011. Conference paper (peer-reviewed)
  • 72. Ekvall, S.
    et al.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Hoffmann, F.
    Object recognition and pose estimation using color cooccurrence histograms and geometric modeling (2005). In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 23, no. 11, pp. 943-955. Journal article (peer-reviewed)
    Abstract [en]

    Robust techniques for object recognition and pose estimation are essential for robotic manipulation and object grasping. In this paper, a novel approach for object recognition and pose estimation based on color cooccurrence histograms and geometric modelling is presented. The particular problems addressed are: (i) robust recognition of objects in natural scenes, (ii) estimation of partial pose using an appearance based approach, and (iii) complete 6DOF model based pose estimation and tracking using geometric models. Our recognition scheme is based on the color cooccurrence histograms embedded in a classical learning framework that facilitates a 'winner-takes-all' strategy across different views and scales. The hypotheses generated in the recognition stage provide the basis for estimating the orientation of the object around the vertical axis. This prior, incomplete pose information is subsequently made precise by a technique that facilitates a geometric model of the object to estimate and continuously track the complete 6DOF pose of the object. Major contributions of the proposed system are the ability to automatically initiate an object tracking process, its robustness and invariance towards scaling and translations as well as the computational efficiency since both recognition and pose estimation rely on the same representation of the object. The performance of the system is evaluated in a domestic environment with changing lighting and background conditions on a set of everyday objects.
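
A simplified sketch of the color cooccurrence histogram representation the abstract builds on: quantize colors into per-channel bins, then count how often pairs of color bins co-occur within a pixel radius. Bin count, radius and the random test image are arbitrary choices, not the paper's settings.

```python
# Colour co-occurrence histogram (CCH): quantise colours, then count pairs
# of colour bins co-occurring within a pixel radius.
import numpy as np

def cch(image: np.ndarray, bins: int = 8, radius: int = 2) -> np.ndarray:
    """image: HxWx3 uint8. Returns a (bins**3, bins**3) normalised histogram."""
    q = image.astype(int) // (256 // bins)                 # per-channel bin
    labels = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    h = np.zeros((bins ** 3, bins ** 3))
    H, W = labels.shape
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            # Overlapping crops: a[i, j] and b[i, j] are (dy, dx) apart.
            a = labels[max(0, dy):H + min(0, dy), max(0, dx):W + min(0, dx)]
            b = labels[max(0, -dy):H + min(0, -dy), max(0, -dx):W + min(0, -dx)]
            np.add.at(h, (a.ravel(), b.ravel()), 1)
    return h / h.sum()

img = np.random.default_rng(1).integers(0, 256, (32, 32, 3)).astype(np.uint8)
hist = cch(img)
print(hist.shape, hist.sum())   # (512, 512) 1.0
```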

  • 73.
    Ekvall, Staffan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Aarno, Daniel
    KTH, Skolan för datavetenskap och kommunikation (CSC), Numerisk Analys och Datalogi, NADA.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Online task recognition and real-time adaptive assistance for computer-aided machine control (2006). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 22, no. 5, pp. 1029-1033. Journal article (peer-reviewed)
    Abstract [en]

    Segmentation and recognition of operator-generated motions are commonly facilitated to provide appropriate assistance during task execution in teleoperative and human-machine collaborative settings. The assistance is usually provided in a virtual fixture framework where the level of compliance can be altered online, thus improving the performance in terms of execution time and overall precision. However, the fixtures are typically inflexible, resulting in degraded performance in cases of unexpected obstacles or incorrect fixture models. In this paper, we present a method for online task tracking and propose the use of adaptive virtual fixtures that can cope with the above problems. Here, rather than executing a predefined plan, the operator has the ability to avoid unforeseen obstacles and deviate from the model. To allow this, the probability of following a certain trajectory (subtask) is estimated and used to automatically adjust the compliance, thus providing an online decision of how to fixture the movement.
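
A schematic of the adaptive-fixture idea above: the estimated probability that the operator is following a recognised subtask scales how strongly the commanded motion is pulled toward the fixture direction. The gain law, vectors and probabilities are illustrative assumptions.

```python
# Confidence-scaled virtual fixture: high subtask probability -> stiff
# fixture; low probability -> nearly unconstrained operator motion.
import numpy as np

def fixtured_velocity(v_user, fixture_dir, p_subtask, k_max=0.9):
    """Blend the operator's commanded velocity with the fixture direction.

    v_user:      operator's commanded velocity (3-vector)
    fixture_dir: unit direction of the recognised subtask trajectory
    p_subtask:   estimated probability of following this subtask
    """
    k = k_max * p_subtask                       # compliance from confidence
    v_along = np.dot(v_user, fixture_dir) * np.asarray(fixture_dir)
    return k * v_along + (1.0 - k) * np.asarray(v_user)

v = np.array([0.08, 0.02, 0.0])                 # mostly along x, some drift
d = np.array([1.0, 0.0, 0.0])
print(fixtured_velocity(v, d, p_subtask=0.95))  # drift strongly attenuated
print(fixtured_velocity(v, d, p_subtask=0.20))  # nearly unconstrained
```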

  • 74. Ekvall, Staffan
    et al.
    Aarno, Daniel
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Task Learning Using Graphical Programming and Human Demonstrations (2006). In: Proceedings - IEEE International Workshop on Robot and Human Interactive Communication, 2006, pp. 398-403. Conference paper (peer-reviewed)
    Abstract [en]

    The next generation of robots will have to learn new tasks or refine existing ones through direct interaction with the environment or through a teaching/coaching process in programming by demonstration (PbD) and learning by instruction frameworks. In this paper, we propose to extend the classical PbD approach with a graphical language that makes robot coaching easier. The main idea is based on graphical programming, where the user designs complex robot tasks by using a set of low-level action primitives. In contrast to other systems, our action primitives are made general and flexible so that the user can train them online and therefore easily design high-level tasks.

  • 75.
    Ekvall, Staffan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Integrating active mobile robot object recognition and SLAM in natural environments (2006). In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-12, New York: IEEE, 2006, pp. 5792-5797. Conference paper (peer-reviewed)
    Abstract [en]

    Linking semantic and spatial information has become an important research area in robotics since, for robots interacting with humans and performing tasks in natural environments, it is of foremost importance to be able to reason beyond simple geometrical and spatial levels. In this paper, we consider this problem in a service robot scenario where a mobile robot autonomously navigates in a domestic environment, builds a map as it moves along, localizes its position in it, recognizes objects on its way and puts them in the map. The experimental evaluation is performed in a realistic setting where the main focus is on the synergy of object recognition and Simultaneous Localization and Mapping systems.

  • 76.
    Ekvall, Staffan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Grasp recognition for programming by demonstration (2005). In: 2005 IEEE International Conference on Robotics and Automation (ICRA), Vols 1-4, New York, NY: IEEE, 2005, pp. 748-753. Conference paper (peer-reviewed)
    Abstract [en]

    The demand for flexible and re-programmable robots has increased the need for programming by demonstration systems. In this paper, grasp recognition is considered in a programming by demonstration framework. Three methods for grasp recognition are presented and evaluated. The first method uses Hidden Markov Models to model the hand posture sequence during the grasp sequence, while the second method relies on the hand trajectory and hand rotation. The third method is a hybrid method, in which both the first two methods are active in parallel. The particular contribution is that all methods rely on the grasp sequence and not just the final posture of the hand. This facilitates grasp recognition before the grasp is completed. Also, by analyzing the entire sequence and not just the final grasp, the decision is based on more information and increased robustness of the overall system is achieved. The experimental results show that both arm trajectory and final hand posture provide important information for grasp classification. By combining them, the recognition rate of the overall system is increased.
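
A minimal sketch of the HMM ingredient of the first method: each grasp type gets a discrete-observation HMM over quantised hand postures, and a sequence is classified by the model with the highest forward log-likelihood. All parameters below are toy values, not the paper's trained models.

```python
# Scaled forward algorithm for discrete HMMs, used to score a hand-posture
# sequence under competing grasp-type models.
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of an observation sequence under a discrete HMM."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()          # running normalisation for stability
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return loglik

# Two 2-state grasp models over 3 quantised postures (open, half, closed).
pi = np.array([1.0, 0.0])
A = np.array([[0.7, 0.3],
              [0.0, 1.0]])
B_power     = np.array([[0.8, 0.2, 0.0],    # state 0 emits mostly "open"
                        [0.0, 0.2, 0.8]])   # state 1 emits mostly "closed"
B_precision = np.array([[0.8, 0.2, 0.0],
                        [0.1, 0.8, 0.1]])   # closes only halfway

seq = [0, 0, 1, 2, 2]                       # open -> half -> fully closed
scores = {name: forward_loglik(seq, pi, A, B)
          for name, B in [("power", B_power), ("precision", B_precision)]}
print(max(scores, key=scores.get), scores)  # "power" wins for this sequence
```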

  • 77.
    Ekvall, Staffan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Numerisk Analys och Datalogi, NADA. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Numerisk Analys och Datalogi, NADA. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Integrating object and grasp recognition for dynamic scene interpretation (2005). In: 2005 12th International Conference on Advanced Robotics, New York, NY: IEEE, 2005, pp. 331-336. Conference paper (peer-reviewed)
    Abstract [en]

    Understanding and interpreting dynamic scenes and activities is a very challenging problem. In this paper we present a system capable of learning robot tasks from demonstration. Classical robot task programming requires an experienced programmer and a lot of tedious work. In contrast, Programming by Demonstration is a flexible framework that reduces the complexity of programming robot tasks, and allows end-users to demonstrate the tasks instead of writing code. We present our recent steps towards this goal. A system for learning pick-and-place tasks by manually demonstrating them is presented. Each demonstrated task is described by an abstract model involving a set of simple tasks such as what object is moved, where it is moved, and which grasp type was used to move it.

  • 78. Ekvall, Staffan
    et al.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Integrating Object and Grasp Recognition for Dynamic Scene Interpretation (2005). In: Advanced Robotics, ISSN 0169-1864, E-ISSN 1568-5535. Journal article (peer-reviewed)
    Abstract [en]

    Understanding and interpreting dynamic scenes and activities is a very challenging problem. In this paper, we present a system capable of learning robot tasks from demonstration. Classical robot task programming requires an experienced programmer and a lot of tedious work. In contrast, programming by demonstration is a flexible framework that reduces the complexity of programming robot tasks, and allows end-users to demonstrate the tasks instead of writing code. We present our recent steps towards this goal. A system for learning pick-and-place tasks by manually demonstrating them is presented. Each demonstrated task is described by an abstract model involving a set of simple tasks such as what object is moved, where it is moved, and which grasp type was used to move it.

  • 79.
    Ekvall, Staffan
    et al.
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Kragic, Danica
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Interactive grasp learning based on human demonstration (2004). In: 2004 IEEE International Conference on Robotics and Automation, Vols 1-5, Proceedings, 2004, pp. 3519-3524. Conference paper (peer-reviewed)
    Abstract [en]

    We describe our effort in the development of an artificial cognitive system, capable of performing complex manipulation tasks in a teleoperated or collaborative manner. Some of the work is motivated by human control strategies that, in general, involve comparison between sensory feedback and a priori known internal models. According to recent neuroscientific findings, predictions help to reduce the delays in obtaining the sensory information and to perform more complex tasks. This paper deals with the issue of robotic manipulation and grasping in particular. The two main contributions of the paper are: i) evaluation, recognition and modeling of human grasps during the arm transportation sequence, and ii) learning and representation of grasp strategies for different robotic hands.

  • 80.
    Ekvall, Staffan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Learning and evaluation of the approach vector for automatic grasp generation and planning (2007). In: Proceedings - IEEE International Conference on Robotics and Automation, Vols 1-10, 2007, pp. 4715-4720. Conference paper (peer-reviewed)
    Abstract [en]

    In this paper, we address the problem of automatic grasp generation for robotic hands where experience and shape primitives are used in synergy so as to provide a basis not only for grasp generation but also for a grasp evaluation process when the exact pose of the object is not available. One of the main challenges in automatic grasping is the choice of the object approach vector, which depends both on the object shape and pose as well as the grasp type. Using the proposed method, the approach vector is chosen not only based on the sensory input but also on experience that some approach vectors will provide useful tactile information that finally results in stable grasps. A methodology for developing and evaluating grasp controllers is presented where the focus lies on obtaining stable grasps under imperfect vision. The method is used in a teleoperation or a programming by demonstration setting where a human demonstrates to a robot how to grasp an object. The system first recognizes the object and grasp type, which can then be used by the robot to perform the same action using a mapped version of the human grasping posture.

  • 81.
    Ekvall, Staffan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Receptive field cooccurrence histograms for object detection (2005). In: 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-4, 2005, pp. 3969-3974. Conference paper (peer-reviewed)
    Abstract [en]

    Object recognition is one of the major research topics in the field of computer vision. In robotics, there is often a need for a system that can locate certain objects in the environment - the capability which we denote as 'object detection'. In this paper, we present a new method for object detection. The method is especially suitable for detecting objects in natural scenes, as it is able to cope with problems such as complex background, varying illumination and object occlusion. The proposed method uses the receptive field representation where each pixel in the image is represented by a combination of its color and response to different filters. Thus, the cooccurrence of certain filter responses within a specific radius in the image serves as information basis for building the representation of the object. The specific goal in this work is the development of an on-line learning scheme that is effective after just one training example but still has the ability to improve its performance with more time and new examples. We describe the details behind the algorithm and demonstrate its strength with an extensive experimental evaluation.

  • 82. Ekvall, Staffan
    et al.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Robot Learning from Demonstration: A Task-level Planning Approach (2008). Journal article (peer-reviewed)
    Abstract [en]

    In this paper, we deal with the problem of learning by demonstration, task level learning and planning for robotic applications that involve object manipulation. Preprogramming robots for execution of complex domestic tasks such as setting a dinner table is of little use, since the same order of subtasks may not be conceivable at run time due to the changed state of the world. In our approach, we aim to learn the goal of the task and use a task planner to reach the goal given different initial states of the world. For some tasks, there are underlying constraints that must be fulfilled, and knowing just the final goal is not sufficient. We propose two techniques for constraint identification. In the first case, the teacher can directly instruct the system about the underlying constraints. In the second case, the constraints are identified by the robot itself based on multiple observations. The constraints are then considered in the planning phase, allowing the task to be executed without violating any of them. We evaluate our work on a real robot performing pick-and-place tasks.

  • 83.
    Ekvall, Staffan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Object detection and mapping for service robot tasks (2007). In: Robotica (Cambridge. Print), ISSN 0263-5747, E-ISSN 1469-8668, Vol. 25, pp. 175-187. Journal article (peer-reviewed)
    Abstract [en]

    The problem studied in this paper is a mobile robot that autonomously navigates in a domestic environment, builds a map as it moves along and localizes its position in it. In addition, the robot detects predefined objects, estimates their position in the environment and integrates this with the localization module to automatically put the objects in the generated map. Thus, we demonstrate one of the possible strategies for the integration of spatial and semantic knowledge in a service robot scenario where a simultaneous localization and mapping (SLAM) and object detection/recognition system work in synergy to provide a richer representation of the environment than would be possible with either of the methods alone. Most SLAM systems build maps that are only used for localizing the robot. Such maps are typically based on grids or different types of features such as points and lines. The novelty is the augmentation of this process with an object-recognition system that detects objects in the environment and puts them in the map generated by the SLAM system. The metric map is also split into topological entities corresponding to rooms. In this way, the user can command the robot to retrieve a certain object from a certain room. We present the results of map building and an extensive evaluation of the object detection algorithm performed in an indoor setting.
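
One step of the described synergy can be sketched as a coordinate transform: a detected object, expressed in the robot's frame, is placed into the map using the SLAM pose estimate. This 2D toy with invented poses only illustrates the bookkeeping, not the paper's system.

```python
# Placing a detected object into the map frame using the SLAM pose estimate.
import numpy as np

def object_in_map(robot_pose, obj_in_robot):
    """robot_pose: (x, y, theta) in map frame; obj_in_robot: (x, y)."""
    x, y, th = robot_pose
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    return np.array([x, y]) + R @ np.asarray(obj_in_robot)

semantic_map = {}
pose = (2.0, 1.0, np.pi / 2)                           # robot at (2,1), facing +y
semantic_map["cup"] = object_in_map(pose, (1.5, 0.0))  # seen 1.5 m straight ahead
print(semantic_map)                                    # cup at ~(2.0, 2.5) in map
```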

  • 84. Ekvall, Staffan
    et al.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Learning Task Models from Multiple Human Demonstrations (2006). In: The 15th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2006), 6-8 Sept. 2006, pp. 358-363. Conference paper (peer-reviewed)
    Abstract [en]

    In this paper, we present a novel method for learning robot tasks from multiple demonstrations. Each demonstrated task is decomposed into subtasks that allow for segmentation and classification of the input data. The demonstrated tasks are then merged into a flexible task model, describing the task goal and its constraints. The two main contributions of the paper are the state generation and constraint identification methods. We also present a task-level planner that is used to assemble a task plan at run-time, allowing the robot to choose the best strategy depending on the current world state.

  • 85. Feix, Thomas
    et al.
    Romero, Javier
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Schmiedmayer, Heinz-Bodo
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    A Metric for Comparing the Anthropomorphic Motion Capability of Artificial Hands (2013). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 29, no. 1, pp. 82-93. Journal article (peer-reviewed)
    Abstract [en]

    We propose a metric for comparing the anthropomorphic motion capability of robotic and prosthetic hands. The metric is based on the evaluation of how many different postures or configurations a hand can perform by studying the reachable set of fingertip poses. To define a benchmark for comparison, we first generate data with human subjects based on an extensive grasp taxonomy. We then develop a methodology for comparison using generative, nonlinear dimensionality reduction techniques. We assess the performance of different hands with respect to the human hand and with respect to each other. The method can be used to compare other types of kinematic structures.

  • 86. Feix, Thomas
    et al.
    Romero, Javier
    Schmiedmayer, Heinz-Bodo
    Dollar, Aaron M.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    The GRASP Taxonomy of Human Grasp Types (2016). In: IEEE Transactions on Human-Machine Systems, ISSN 2168-2291, E-ISSN 2168-2305, Vol. 46, no. 1, pp. 66-77. Journal article (peer-reviewed)
    Abstract [en]

    In this paper, we analyze and compare existing human grasp taxonomies and synthesize them into a single new taxonomy (dubbed "The GRASP Taxonomy" after the GRASP project funded by the European Commission). We consider only static and stable grasps performed by one hand. The goal is to extract the largest set of different grasps that were referenced in the literature and arrange them in a systematic way. The taxonomy provides a common terminology to define human hand configurations and is important in many domains such as human-computer interaction and tangible user interfaces, where an understanding of the human is the basis for a proper interface. Overall, 33 different grasp types are found and arranged into the GRASP taxonomy. Within the taxonomy, grasps are arranged according to 1) opposition type, 2) the virtual finger assignments, 3) type in terms of power, precision, or intermediate grasp, and 4) the position of the thumb. The resulting taxonomy incorporates all grasps found in the reviewed taxonomies that complied with the grasp definition. We also show that due to the nature of the classification, the 33 grasp types might be reduced to a set of 17 more general grasps if only the hand configuration is considered without the object shape/size.

  • 87. Fiorini, Paolo
    et al.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Education by competition (2006). In: IEEE Robotics & Automation Magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 13, no. 3, p. 6. Journal article (other academic)
  • 88. Geidenstam, Sebastian
    et al.
    Huebner, K
    Banksell, Daniel
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Learning of 2D grasping strategies from box-based 3D object approximations (2010). In: Robotics: Science and Systems, MIT Press, 2010, pp. 9-16. Conference paper (peer-reviewed)
    Abstract [en]

    In this paper, we bridge and extend the approaches of 3D shape approximation and 2D grasping strategies. We begin by applying a shape decomposition to an object, i.e. its extracted 3D point data, using a flexible hierarchy of minimum volume bounding boxes. From this representation, we use the projections of points onto each of the valid faces as a basis for finding planar grasps. These grasp hypotheses are evaluated using a set of 2D and 3D heuristic quality measures. Finally on this set of quality measures, we use a neural network to learn good grasps and the relevance of each quality measure for a good grasp. We test and evaluate the algorithm in the GraspIt! simulator.

  • 89.
    Ghadirzadeh, Ali
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Bütepage, Judith
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Self-learning and adaptation in a sensorimotor framework (2016). In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2016, pp. 551-558. Conference paper (peer-reviewed)
    Abstract [en]

    We present a general framework to autonomously achieve the task of finding a sequence of actions that result in a desired state. Autonomy is acquired by learning sensorimotor patterns of a robot while it is interacting with its environment. Gaussian processes (GP) with automatic relevance determination are used to learn the sensorimotor mapping. In this way, relevant sensory and motor components can be systematically found in high-dimensional sensory and motor spaces. We propose an incremental GP learning strategy, which discerns between situations when an update or an adaptation must be implemented. The Rapidly-exploring Random Tree (RRT*) algorithm is exploited to enable long-term planning and generate a sequence of states that lead to a given goal, while a gradient-based search finds the optimum action to steer to a neighbouring state in a single time step. Our experimental results demonstrate the suitability of the proposed framework for learning a joint space controller with high data dimensions (10×15). They show a short training phase (less than 12 seconds), real-time performance and rapid adaptation capabilities.
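
A sketch of the automatic relevance determination (ARD) ingredient using scikit-learn rather than the paper's incremental GP implementation: an anisotropic RBF kernel learns one length scale per input dimension, so irrelevant channels end up with long length scales. The synthetic data and bounds are assumptions.

```python
# GP regression with an ARD (anisotropic) RBF kernel on synthetic data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 4))      # 4 candidate input channels
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=200)   # only channel 0 matters

kernel = RBF(length_scale=np.ones(4), length_scale_bounds=(1e-2, 1e3))
gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-2).fit(X, y)

# Learned per-dimension length scales: short for the relevant channel,
# pushed toward the upper bound for the three irrelevant ones.
print(np.round(gp.kernel_.length_scale, 2))
```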

  • 90.
    Ghadirzadeh, Ali
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bütepage, Judith
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Maki, Atsuto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    A sensorimotor reinforcement learning framework for physical human-robot interaction (2016). In: IEEE International Conference on Intelligent Robots and Systems, IEEE, 2016, pp. 2682-2688. Conference paper (Refereed)
    Abstract [en]

    Modeling of physical human-robot collaborations is generally a challenging problem due to the unpredictable nature of human behavior. To address this issue, we present a data-efficient reinforcement learning framework which enables a robot to learn how to collaborate with a human partner. The robot learns the task from its own sensorimotor experiences in an unsupervised manner. The uncertainty in the interaction is modeled using Gaussian processes (GP) to implement a forward model and an action-value function. Optimal action selection given the uncertain GP model is ensured by Bayesian optimization. We apply the framework to a scenario in which a human and a PR2 robot jointly control the position of a ball on a plank based on vision and force/torque data. Our experimental results show the suitability of the proposed method in terms of fast and data-efficient model learning, optimal action selection under uncertainty and equal role sharing between the partners.
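
    A hedged sketch of the action-selection idea: fit a GP to observed (state, action) -> value samples and pick the action that maximises an upper confidence bound, so that model uncertainty drives exploration. The dimensions, kernel and exploration weight are illustrative assumptions, not the paper's settings:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # (state, action) -> value samples collected during interaction.
    rng = np.random.default_rng(1)
    SA = rng.uniform(-1, 1, size=(100, 2))                    # 1-D state, 1-D action
    Q = -np.abs(SA[:, 0] + SA[:, 1]) + 0.05 * rng.normal(size=100)

    gp_q = GaussianProcessRegressor(kernel=RBF(0.5), alpha=1e-3).fit(SA, Q)

    def select_action(state, beta=2.0, n_cand=101):
        """Upper-confidence-bound action selection under GP uncertainty."""
        actions = np.linspace(-1, 1, n_cand)
        X = np.column_stack([np.full(n_cand, state), actions])
        mu, std = gp_q.predict(X, return_std=True)
        return actions[np.argmax(mu + beta * std)]

    print(select_action(0.3))
    ```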

  • 91.
    Ghadirzadeh, Ali
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Maki, Atsuto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Deep predictive policy training using reinforcement learning (2017). In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2017, Institute of Electrical and Electronics Engineers (IEEE), 2017, pp. 2351-2358, article id 8206046. Conference paper (Refereed)
    Abstract [en]

    Skilled robot task learning is best implemented by predictive action policies due to the inherent latency of sensorimotor processes. However, training such predictive policies is challenging as it involves finding a trajectory of motor activations for the full duration of the action. We propose a data-efficient deep predictive policy training (DPPT) framework with a deep neural network policy architecture which maps an image observation to a sequence of motor activations. The architecture consists of three sub-networks referred to as the perception, policy and behavior super-layers. The perception and behavior super-layers force an abstraction of visual and motor data, trained with synthetic and simulated training samples, respectively. The policy super-layer is a small sub-network with fewer parameters that maps data between the abstracted manifolds. It is trained for each task using policy-search reinforcement learning. We demonstrate the suitability of the proposed architecture and learning framework by training predictive policies for skilled object grasping and ball throwing on a PR2 robot. The effectiveness of the method is illustrated by the fact that these tasks are trained using only about 180 real robot attempts with qualitative terminal rewards.
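
    The three-part architecture can be mocked up in a few lines; the layer sizes, trajectory length and joint count below are illustrative assumptions, not the trained DPPT networks:

    ```python
    import torch
    import torch.nn as nn

    # Hypothetical mock-up of the three "super-layers": a perception encoder
    # maps an image to a low-dimensional state, a small policy sub-network
    # maps state to a motor-primitive code, and a behavior decoder expands
    # the code into a motor-activation trajectory.
    perception = nn.Sequential(               # image -> 8-D abstract state
        nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
        nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
        nn.Flatten(), nn.LazyLinear(8),
    )
    policy = nn.Sequential(                   # the only part trained per task
        nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4),
    )
    behavior = nn.Sequential(                 # 4-D code -> 7 joints x 20 steps
        nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 7 * 20),
    )

    image = torch.randn(1, 3, 64, 64)
    trajectory = behavior(policy(perception(image))).view(1, 20, 7)
    ```

    The design point mirrored here is that only the small middle sub-network needs task-specific reinforcement learning, which is what keeps the number of real robot attempts low.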

  • 92.
    Gratal, Xavi
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bohg, Jeannette
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Scene Representation and Object Grasping Using Active Vision (2010). Conference paper (Refereed)
  • 93.
    Gratal, Xavi
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Romero, Javier
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bohg, Jeannette
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Visual servoing on unknown objects (2012). In: Mechatronics (Oxford), ISSN 0957-4158, E-ISSN 1873-4006, Vol. 22, no. 4, pp. 423-435. Journal article (Refereed)
    Abstract [en]

    We study visual servoing in a framework of detection and grasping of unknown objects. Classically, visual servoing has been used for applications where the object to be servoed on is known to the robot prior to the task execution. In addition, most of the methods concentrate on aligning the robot hand with the object without grasping it. In our work, visual servoing techniques are used as building blocks in a system capable of detecting and grasping unknown objects in natural scenes. We show how different visual servoing techniques facilitate a complete grasping cycle.
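
    The building blocks referred to here rest on the classical image-based visual-servoing law v = -λ L⁺ e; a minimal sketch with two point features, where the interaction-matrix rows are the standard ones for a normalised image point at depth Z (not this paper's specific pipeline):

    ```python
    import numpy as np

    def ibvs_velocity(features, desired, L, gain=0.5):
        """Camera velocity screw (6-D) driving image-feature error to zero."""
        e = features - desired
        return -gain * np.linalg.pinv(L) @ e

    def point_interaction(x, y, Z=1.0):
        """Standard interaction matrix of a normalised image point at depth Z."""
        return np.array([[-1/Z, 0, x/Z, x*y, -(1 + x**2), y],
                         [0, -1/Z, y/Z, 1 + y**2, -x*y, -x]])

    # Two point features observed at unit depth.
    L = np.vstack([point_interaction(0.1, 0.0), point_interaction(-0.1, 0.0)])
    v = ibvs_velocity(np.array([0.10, 0.00, -0.10, 0.00]),
                      np.array([0.12, 0.02, -0.08, 0.02]), L)
    ```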

  • 94.
    Gratal, Xavi
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Romero, Javier
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Virtual Visual Servoing for Real-Time Robot Pose Estimation (2011). Conference paper (Refereed)
    Abstract [en]

    We propose a system for markerless pose estimation and tracking of a robot manipulator. By tracking the manipulator, we can obtain an accurate estimate of its position and orientation, which is necessary in many object grasping and manipulation tasks. Tracking the manipulator also allows for better collision avoidance. The method is based on the notion of virtual visual servoing. We also propose the use of the distance transform in the control loop, which makes the performance independent of the feature search window.
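
    A minimal sketch of why a distance transform removes the need for a feature search window: precompute, for every pixel, the distance to the nearest image edge, then score projected model edges by direct lookup. The toy edge map below is an assumption for illustration:

    ```python
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    edges = np.zeros((120, 160), dtype=bool)
    edges[60, :] = True                      # toy edge map from the camera image
    dist = distance_transform_edt(~edges)    # per-pixel distance to nearest edge

    def edge_alignment_error(model_edge_pixels):
        """Sum of distances from projected model edge pixels to image edges."""
        rows, cols = model_edge_pixels[:, 0], model_edge_pixels[:, 1]
        return dist[rows, cols].sum()

    proj = np.array([[58, 40], [59, 80], [61, 120]])   # projected model points
    print(edge_alignment_error(proj))
    ```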

  • 95.
    Gratal, Xavi
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Smith, Christian
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Integrating 3D features and virtual visual servoing for hand-eye and humanoid robot pose estimation (2015). In: IEEE-RAS International Conference on Humanoid Robots, IEEE Computer Society, 2015, pp. 240-245. Conference paper (Refereed)
    Abstract [en]

    In this paper, we propose an approach for vision-based pose estimation of a robot hand or its full body. The method is based on virtual visual servoing using a CAD model of the robot, and it combines 2-D image features with depth features. The method can be applied to estimate either the pose of a robot hand or the pose of the whole body, given that its joint configuration is known. We present experimental results that show the performance of the approach on both a mobile humanoid robot and a stationary manipulator.
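
    A hedged illustration of the fusion step implied here: stack the 2-D image-feature residuals and the depth residuals into one error vector before the servoing update. The weights and shapes are assumptions for illustration, not the paper's values:

    ```python
    import numpy as np

    e_2d = np.array([1.5, -0.8, 0.3, 0.1])      # pixel-space feature errors
    e_depth = np.array([0.02, -0.01])           # metres, from the depth sensor
    w_2d, w_depth = 1.0, 100.0                  # bring both to a comparable scale
    e = np.concatenate([w_2d * e_2d, w_depth * e_depth])
    # e then feeds the same pseudo-inverse servoing update as in the 2-D case.
    ```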

  • 96.
    Güler, Püren
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Bekiroglu, Yasemin
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Gratal, Xavi
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Pauwels, Karl
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    What's in the Container? Classifying Object Contents from Vision and Touch (2014). In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014), IEEE, 2014, pp. 3961-3968. Conference paper (Refereed)
    Abstract [en]

    Robots operating in household environments need to interact with food containers of different types. Whether a container is filled with milk, juice, yogurt or coffee may affect the way robots grasp and manipulate the container. In this paper, we concentrate on the problem of identifying what kind of content is in a container based on tactile and/or visual feedback in combination with grasping. In particular, we investigate the benefits of using unimodal (visual or tactile) or bimodal (visual-tactile) sensory data for this purpose. We direct our study toward cardboard containers that are filled with liquid or solid content, or empty. The motivation for using grasping rather than shaking is that we want to investigate the content prior to applying manipulation actions to a container. Our results show that we achieve comparable classification rates with unimodal data and that the visual and tactile data are complementary.
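
    The unimodal-versus-bimodal comparison reduces to feeding a classifier either one feature set or their concatenation. A minimal sketch with synthetic stand-in features (real features would come from vision and from tactile readings during the grasp); the classifier choice is an assumption, not necessarily the paper's:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    n = 300
    visual = rng.normal(size=(n, 10))     # stand-in visual deformation features
    tactile = rng.normal(size=(n, 8))     # stand-in tactile/pressure features
    labels = rng.integers(0, 4, size=n)   # e.g. empty / liquid / solid / ...

    for name, X in [("visual", visual), ("tactile", tactile),
                    ("bimodal", np.hstack([visual, tactile]))]:
        acc = cross_val_score(RandomForestClassifier(random_state=0),
                              X, labels, cv=5).mean()
        print(name, round(acc, 3))
    ```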

  • 97.
    Güler, Püren
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Pieropan, A.
    Ishikawa, M.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Estimating deformability of objects using meshless shape matching (2017). In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2017, Institute of Electrical and Electronics Engineers (IEEE), 2017, pp. 5941-5948, article id 8206489. Conference paper (Refereed)
    Abstract [en]

    Humans interact with deformable objects on a daily basis, but such objects still represent a challenge for robots. To enable manipulation of and interaction with deformable objects, robots need to be able to extract and learn the deformability of objects both prior to and during the interaction. Physics-based models are commonly used to predict the physical properties of deformable objects and simulate their deformation accurately. The most popular simulation techniques are force-based models that need force measurements. In this paper, we explore the applicability of a geometry-based simulation method called meshless shape matching (MSM) for estimating the deformability of objects. The main advantages of MSM are its controllability and computational efficiency, which make it popular in computer graphics for simulating complex interactions of multiple objects at the same time. Additionally, a useful feature of MSM that differentiates it from other physics-based simulation methods is that it is independent of force measurements, which may not be available to a robotic framework lacking force/torque sensors. In this work, we design a method to estimate deformability based on certain properties, such as volume conservation. Using the finite element method (FEM), we create ground-truth deformability for various settings to evaluate our method. The experimental evaluation shows that our approach is able to accurately identify the deformability of test objects, supporting the value of MSM for robotic applications.
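
    The heart of meshless shape matching is a best-fit rigid transform of the rest shape, toward which current particle positions are pulled with a stiffness parameter; estimating deformability then amounts to recovering that parameter from observations. A minimal sketch of one MSM step in the style of Müller et al., with the stiffness knob alpha as the quantity of interest:

    ```python
    import numpy as np

    def shape_matching_goal(x0, x, alpha=0.5):
        """One meshless-shape-matching step.

        x0: (n,3) rest positions, x: (n,3) current positions.
        Pulls current positions toward the best-fit rigid transform of
        the rest shape; alpha in [0,1] acts as a stiffness/deformability knob.
        """
        c0, c = x0.mean(axis=0), x.mean(axis=0)
        q, p = x0 - c0, x - c
        Apq = p.T @ q                    # covariance of current vs rest shape
        U, _, Vt = np.linalg.svd(Apq)
        R = U @ Vt                       # optimal rotation (polar decomposition)
        if np.linalg.det(R) < 0:         # guard against reflections
            U[:, -1] *= -1
            R = U @ Vt
        goal = (R @ q.T).T + c
        return x + alpha * (goal - x)

    rest = np.random.rand(20, 3)
    deformed = rest + 0.05 * np.random.rand(20, 3)
    pulled = shape_matching_goal(rest, deformed, alpha=0.3)
    ```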

  • 98.
    Güler, Rezan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för bioteknologi (BIO), Proteinteknologi.
    Pauwels, Karl
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Pieropan, Alessandro
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kjellström, Hedvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Estimating the Deformability of Elastic Materials using Optical Flow and Position-based Dynamics (2015). In: Humanoid Robots (Humanoids), 2015 IEEE-RAS 15th International Conference on, IEEE Conference Proceedings, 2015, pp. 965-971. Conference paper (Refereed)
    Abstract [en]

    Knowledge of the physical properties of objects is essential in a wide range of robotic manipulation scenarios. A robot may not always be aware of such properties prior to interaction. If an object is incorrectly assumed to be rigid, it may exhibit unpredictable behavior when grasped. In this paper, we use vision-based observation of the behavior of an object the robot is interacting with as the basis for estimating its elastic deformability. The deformability is estimated in a local region around the interaction point using a physics simulator. We use optical flow to estimate the parameters of a position-based dynamics simulation based on meshless shape matching (MSM). MSM has been widely used in computer graphics due to its computational efficiency, which is also important for closed-loop control in robotics. In a controlled experiment, we demonstrate that our method can qualitatively estimate the physical properties of objects with different degrees of deformability.
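
    A hedged sketch of the outer estimation loop: measure dense optical flow between consecutive frames, simulate candidate stiffness values with the position-based-dynamics model, and keep the candidate whose simulated displacement field best matches the observed flow. Here `simulate_displacements` is a hypothetical stand-in for the MSM simulation step (it must return a displacement field of the same shape as the flow):

    ```python
    import cv2
    import numpy as np

    def estimate_stiffness(frame0, frame1, simulate_displacements,
                           candidates=np.linspace(0.1, 1.0, 10)):
        """Grid-search the stiffness that best explains the observed flow."""
        gray0 = cv2.cvtColor(frame0, cv2.COLOR_BGR2GRAY)
        gray1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(gray0, gray1, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        errors = [np.linalg.norm(flow - simulate_displacements(a))
                  for a in candidates]
        return candidates[int(np.argmin(errors))]
    ```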

  • 99.
    Hang, Kaiyu
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Haustein, Joshua
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Li, Miao
    EPFL.
    Billard, Aude
    Smith, Christian
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    On the Evolution of Fingertip Grasping Manifolds (2016). In: IEEE International Conference on Robotics and Automation, IEEE Robotics and Automation Society, 2016, pp. 2022-2029, article id 7487349. Conference paper (Refereed)
    Abstract [en]

    Efficient and accurate planning of fingertip grasps is essential for dexterous in-hand manipulation. In this work, we present a system for fingertip grasp planning that incrementally learns a heuristic for hand reachability and multi-fingered inverse kinematics. The system consists of an online execution module and an offline optimization module. During execution, the system plans and executes fingertip grasps using Canny's grasp quality metric and a learned random-forest-based hand reachability heuristic. In the offline module, this heuristic is improved based on a grasping manifold that is incrementally learned from the experiences collected during execution. The system is evaluated both in simulation and on a SchunkSDH dexterous hand mounted on a KUKA-KR5 arm. We show that, as the grasping manifold is adapted to the system's experiences, the heuristic becomes more accurate, which results in improved performance of the execution module. The improvement is observed not only for experienced objects but also for previously unknown objects of similar sizes.
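
    The learned reachability heuristic can be sketched as a random-forest classifier over encoded fingertip-contact candidates, used to prune hypotheses before a more expensive inverse-kinematics check. The encoding, labels and threshold below are illustrative assumptions, not the paper's data:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(3)
    # Each sample: a candidate fingertip-contact triplet encoded as a fixed-size
    # feature vector; label: whether the hand's inverse kinematics reached it.
    X = rng.uniform(size=(1000, 9))                        # 3 contacts x 3-D positions
    y = (X[:, 0] + X[:, 3] + X[:, 6] < 1.5).astype(int)    # stand-in labels

    reachability = RandomForestClassifier(n_estimators=100, random_state=0)
    reachability.fit(X, y)

    def filter_candidates(contact_sets, threshold=0.7):
        """Keep only fingertip placements the heuristic deems reachable."""
        p = reachability.predict_proba(contact_sets)[:, 1]
        return contact_sets[p >= threshold]

    kept = filter_candidates(rng.uniform(size=(50, 9)))
    ```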

  • 100.
    Hang, Kaiyu
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Li, Miao
    Stork, Johannes A.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bekiroglu, Yasemin
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Billard, Aude
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Hierarchical Fingertip Space for Synthesizing Adaptable Fingertip Grasps (2014). Conference paper (Refereed)