201 - 250 of 305
  • 201.
    Kyrki, Ville
    et al.
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Kragic, Danica
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Christensen, Henrik I.
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Measurement errors in visual servoing (2004). In: 2004 IEEE International Conference on Robotics and Automation, Vols 1-5, Proceedings, 2004, pp. 1861-1867. Conference paper (Refereed)
    Abstract [en]

    In recent years, a number of hybrid visual servoing control algorithms have been proposed and evaluated. For some time now, it has been clear that classical control approaches - image and position based - have some inherent problems. Hybrid approaches try to combine them to overcome these problems. However, most of the proposed approaches concentrate on the design of the control law, neglecting the issue of errors resulting from the sensory system. This paper addresses the issue of measurement errors in visual servoing. The particular contribution is the analysis of the propagation of image error through the pose estimation and the visual servoing control law. We have chosen to investigate the properties of the vision system and their effect on the performance of the control system. Two approaches are evaluated: i) position-based and ii) 2 1/2 D visual servoing. We believe that our evaluation offers a tool to build and analyze hybrid control systems based on, for example, switching [1] or partitioning [2].

  • 202.
    Kyrki, Ville
    et al.
    Lappeenranta University of Technology, Finland.
    Serrano Vicente, Isabel
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Eklundh, Jan-Olof
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Action Recognition and Understanding using Motor Primitives (2007). In: 2007 RO-MAN: 16th IEEE International Symposium on Robot and Human Interactive Communication, 2007, pp. 1113-1118. Conference paper (Refereed)
    Abstract [en]

    We investigate modeling and recognition of arm manipulation actions at different levels of complexity. To model the process, we use a combination of discriminative support vector machines and generative hidden Markov models. The experimental evaluation, performed with 10 people, investigates both the definition and structure of primitive motions, as well as the validity of the modeling approach taken.

  • 203. Laaksonen, J.
    et al.
    Kyrki, V.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Evaluation of feature representation and machine learning methods in grasp stability learning (2010). In: 2010 10th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2010, 2010, pp. 112-117. Conference paper (Refereed)
    Abstract [en]

    This paper addresses the problem of sensor-based grasping under uncertainty, specifically, the on-line estimation of grasp stability. We show that machine learning approaches can to some extent detect grasp stability from haptic pressure and finger joint information. Using data from both simulations and two real robotic hands, the paper compares different feature representations and machine learning methods to evaluate their performance in determining the grasp stability. A boosting classifier was found to perform the best of the methods tested.
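The boosting classifier found to perform best in the study above can be illustrated with a minimal AdaBoost over one-feature threshold stumps. This is a generic sketch, not the authors' implementation: the feature vectors and labels are hypothetical stand-ins for haptic pressure and finger-joint features, with +1 standing for a stable grasp.

```python
import math

def adaboost_train(X, y, rounds=5):
    """Toy AdaBoost: boost one-feature threshold stumps on weighted examples."""
    n, d = len(X), len(X[0])
    w = [1.0 / n] * n  # example weights, updated each round
    learners = []
    for _ in range(rounds):
        best = None
        for f in range(d):
            for thresh in sorted({x[f] for x in X}):
                for pol in (1, -1):
                    # Weighted error of stump: predict pol where x[f] >= thresh
                    err = sum(wi for wi, xi, yi in zip(w, X, y)
                              if (pol if xi[f] >= thresh else -pol) != yi)
                    if best is None or err < best[0]:
                        best = (err, f, thresh, pol)
        err, f, thresh, pol = best
        err = min(max(err, 1e-10), 1 - 1e-10)  # avoid log(0) on perfect stumps
        alpha = 0.5 * math.log((1 - err) / err)  # vote weight of this stump
        learners.append((alpha, f, thresh, pol))
        # Up-weight misclassified examples, then renormalize.
        for i in range(n):
            pred = pol if X[i][f] >= thresh else -pol
            w[i] *= math.exp(-alpha * y[i] * pred)
        total = sum(w)
        w = [wi / total for wi in w]
    return learners

def adaboost_predict(learners, x):
    """Sign of the weighted stump votes."""
    score = sum(a * (p if x[f] >= t else -p) for a, f, t, p in learners)
    return 1 if score >= 0 else -1
```

In practice the stumps would run over many tactile and joint features per grasp; the two-feature vectors here only demonstrate the mechanics of the weighted reweighting loop.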

  • 204. Lacroix, Joyca
    et al.
    Hommel, Bernhard
    Piater, Justus
    Hübner, Kai
    Kragic, Danica
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Asfour, Tamim
    Welke, Kai
    Krüger, Norbert
    Kraft, Dirk
    Title of the deliverable: The Integration of Objects and Action Plans (2004). Conference paper (Other academic)
  • 205. Laskey, M.
    et al.
    Mahler, J.
    McCarthy, Z.
    Pokorny, F. T.
    Patil, S.
    Van Den Berg, J.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Abbeel, P.
    Goldberg, K.
    Multi-armed bandit models for 2D grasp planning with uncertainty (2015). In: IEEE International Conference on Automation Science and Engineering, IEEE conference proceedings, 2015, pp. 572-579. Conference paper (Refereed)
    Abstract [en]

    For applications such as warehouse order fulfillment, robot grasps must be robust to uncertainty arising from sensing, mechanics, and control. One way to achieve robustness is to evaluate the performance of candidate grasps by sampling perturbations in shape, pose, and gripper approach and to compute the probability of force closure for each candidate to identify a grasp with the highest expected quality. Since evaluating the quality of each grasp is computationally demanding, prior work has turned to cloud computing. To improve computational efficiency and to extend this work, we consider how Multi-Armed Bandit (MAB) models for optimizing decisions can be applied in this context. We formulate robust grasp planning as a MAB problem and evaluate convergence times towards an optimal grasp candidate using 100 object shapes from the Brown Vision 2D Lab Dataset with 1000 grasp candidates per object. We consider the case where shape uncertainty is represented as a Gaussian process implicit surface (GPIS) with Gaussian uncertainty in pose, gripper approach angle, and coefficient of friction. We find that Thompson Sampling and the Gittins index MAB methods converged to within 3% of the optimal grasp up to 10x faster than uniform allocation and 5x faster than iterative pruning.
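The Thompson Sampling strategy compared above can be sketched as a Bernoulli bandit over grasp candidates. This is a simplified illustration under assumed Beta priors, not the paper's GPIS-based setup; the success probabilities below are made-up stand-ins for per-candidate probability of force closure.

```python
import random

def thompson_sampling(success_prob, n_rounds, seed=0):
    """Allocate grasp-quality evaluations with Thompson Sampling.

    Each candidate grasp is a Bernoulli arm; we keep a Beta(alpha, beta)
    posterior over its probability of force closure."""
    rng = random.Random(seed)
    n = len(success_prob)
    alpha = [1] * n  # prior successes + 1
    beta = [1] * n   # prior failures + 1
    for _ in range(n_rounds):
        # Sample a plausible quality for each candidate; evaluate the best one.
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(n)]
        arm = max(range(n), key=lambda i: samples[i])
        if rng.random() < success_prob[arm]:  # simulated force-closure check
            alpha[arm] += 1
        else:
            beta[arm] += 1
    # Report the candidate with the highest posterior mean quality.
    return max(range(n), key=lambda i: alpha[i] / (alpha[i] + beta[i]))
```

Because sampling concentrates evaluations on promising arms, far fewer rounds are spent on clearly poor candidates than under uniform allocation, which is the speed-up the abstract reports.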

  • 206. Li, Miao
    et al.
    Hang, Kaiyu
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Billard, Aude
    Dexterous grasping under shape uncertainty (2016). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 75, pp. 352-364. Journal article (Refereed)
    Abstract [en]

    An important challenge in robotics is to achieve robust performance in object grasping and manipulation, dealing with noise and uncertainty. This paper presents an approach for addressing the performance of dexterous grasping under shape uncertainty. In our approach, the uncertainty in object shape is parametrized and incorporated as a constraint into grasp planning. The proposed approach is used to plan feasible hand configurations for realizing planned contacts using different robotic hands. A compliant finger closing scheme is devised by exploiting both the object shape uncertainty and tactile sensing at fingertips. Experimental evaluation demonstrates that our method improves the performance of dexterous grasping under shape uncertainty.

  • 207. Lindeberg, Patrik
    et al.
    Kragic, Danica
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    2D1420 Datorseende gk (Period 3; VT 2004) (2004). Journal article (Other academic)
  • 208. Lopez-Nicolas, G.
    et al.
    Sagues, C.
    Guerrero, J. J.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Switching visual control based on epipoles for mobile robots (2008). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 56, no. 7, pp. 592-603. Journal article (Refereed)
    Abstract [en]

    In this paper, we present a visual control approach consisting of a switching control scheme based on epipolar geometry. The method facilitates a classical teach-by-showing approach where a reference image is used to control the robot to the desired pose (position and orientation). As a result of our proposal, a mobile robot carries out a smooth trajectory towards the target, and the epipolar geometry model is used throughout the whole motion. The control scheme developed considers the motion constraints of the mobile platform in a framework based on epipolar geometry that does not rely on artificial markers or specific models of the environment. The proposed method is designed to cope with the degenerate estimation case of the epipolar geometry at short baseline. Experimental evaluation has been performed in realistic indoor and outdoor settings.

  • 209.
    Luo, Guoliang
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Bergström, Niklas
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Representing actions with Kernels (2011). In: IEEE/RSJ International Conference on Intelligent Robots and Systems, 2011, pp. 2028-2035. Conference paper (Refereed)
    Abstract [en]

    A long-standing research goal is to create robots capable of interacting with humans in dynamic environments. To realise this, a robot needs to understand and interpret the underlying meaning and intentions of a human action through a model of its sensory data. The visual domain provides a rich description of the environment, and data is readily available in most systems through inexpensive cameras. However, such data is very high-dimensional and extremely redundant, making modeling challenging. Recently there has been significant interest in semantic modeling from visual stimuli. Even though results are encouraging, available methods are unable to perform robustly in real-world scenarios. In this work we present a system for action modeling from visual data by proposing a new and principled interpretation for representing semantic information. The representation is integrated with real-time segmentation. The method is robust and flexible, making it applicable to modeling in a realistic interaction scenario that demands handling noisy observations and requires real-time performance. We provide extensive evaluation and show significant improvements compared to the state-of-the-art.

  • 210. López-Nicolás, G
    et al.
    Sagüés, C.
    Guerrero, J.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Nonholonomic epipolar visual servoing (2006). In: 2006 IEEE International Conference on Robotics and Automation (ICRA), New York: IEEE, 2006, pp. 2378-2384. Conference paper (Refereed)
    Abstract [en]

    A significant amount of work has been reported in the area of visual servoing during the last decade. However, most of the contributions apply to holonomic robots. More recently, the use of visual feedback for control of nonholonomic vehicles has been reported. Some of the examples are docking and parallel parking maneuvers of cars, or vision-based stabilization of a mobile manipulator to a desired pose with respect to a target of interest. Still, many of the approaches are mostly interested in the control part of the visual servoing loop, considering very simple vision algorithms based on artificial markers. In this paper, we present an approach for nonholonomic visual servoing based on epipolar geometry. The method facilitates a classical teach-by-showing approach where a reference image is used to define the desired pose (position and orientation) of the robot. The major contribution of the paper is the design of a control law that considers the nonholonomic constraints of the robot, as well as a robust feature detection and matching process based on scale- and rotation-invariant image features. An extensive experimental evaluation has been performed in a realistic indoor setting and the results are summarized in the paper.

  • 211.
    Madry, Marianna
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Bo, Liefeng
    Amazon Inc, Seattle, WA USA.;Intel Sci & Technol Ctr Pervas Comp, Seattle, WA USA..
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Fox, Dieter
    Univ Washington, Dept Comp Sci & Engn, Seattle, WA 98195 USA..
    ST-HMP: Unsupervised Spatio-Temporal Feature Learning for Tactile Data (2014). In: 2014 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2014, pp. 2262-2269. Conference paper (Refereed)
    Abstract [en]

    Tactile sensing plays an important role in robot grasping and object recognition. In this work, we propose a new descriptor named Spatio-Temporal Hierarchical Matching Pursuit (ST-HMP) that captures properties of a time series of tactile sensor measurements. It is based on the concept of unsupervised hierarchical feature learning realized using sparse coding. The ST-HMP extracts rich spatio-temporal structures from raw tactile data without the need to predefine discriminative data characteristics. We apply it to two different applications: (1) grasp stability assessment and (2) object instance recognition, presenting its universal properties. An extensive evaluation on several synthetic and real datasets collected using the Schunk Dexterous, Schunk Parallel and iCub hands shows that our approach outperforms previously published results by a large margin.

  • 212.
    Madry, Marianna
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Detry, Renaud
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Hang, Kaiyu
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Improving Generalization for 3D Object Categorization with Global Structure Histograms (2012). In: Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, IEEE conference proceedings, 2012, pp. 1379-1386. Conference paper (Refereed)
    Abstract [en]

    We propose a new object descriptor for three-dimensional data named the Global Structure Histogram (GSH). The GSH encodes the structure of a local feature response on a coarse global scale, providing a beneficial trade-off between generalization and discrimination. Encoding the structural characteristics of an object allows us to retain low local variations while keeping the benefit of global representativeness. In an extensive experimental evaluation, we applied the framework to category-based object classification in realistic scenarios. We show results obtained by combining the GSH with several different local shape representations, and we demonstrate significant improvements over other state-of-the-art global descriptors.

  • 213.
    Madry, Marianna
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Maboudi Afkham, Heydar
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Carlsson, Stefan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Extracting essential local object characteristics for 3D object categorization (2013). In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE conference proceedings, 2013, pp. 2240-2247. Conference paper (Refereed)
    Abstract [en]

    Most object classes share a considerable amount of local appearance and often only a small number of features are discriminative. The traditional approach to represent an object is based on a summarization of the local characteristics by counting the number of feature occurrences. In this paper we propose the use of a recently developed technique for summarizations that, rather than looking into the quantity of features, encodes their quality to learn a description of an object. Our approach is based on extracting and aggregating only the essential characteristics of an object class for a task. We show how the proposed method significantly improves on previous work in 3D object categorization. We discuss the benefits of the method in other scenarios such as robot grasping. We provide extensive quantitative and qualitative experiments comparing our approach to the state of the art to justify the described approach.

  • 214.
    Madry, Marianna
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Song, Dan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    "Robot, bring me something to drink from": object representation for transferring task specific grasps (2013). In: IEEE International Conference on Robotics and Automation (ICRA 2012), Workshop on Semantic Perception, Mapping and Exploration (SPME), St. Paul, MN, USA, May 13, 2012. Conference paper (Refereed)
    Abstract [en]

    In this paper, we present an approach for task-specific object representation which facilitates transfer of grasp knowledge from a known object to a novel one. Our representation encompasses: (a) several visual object properties, (b) object functionality and (c) task constraints, in order to provide a suitable goal-directed grasp. We compare various features describing complementary object attributes to evaluate the balance between the discrimination and generalization properties of the representation. The experimental setup is a scene containing multiple objects. Individual object hypotheses are first detected, categorized and then used as the input to a grasp reasoning system that encodes the task information. Our approach not only allows us to find objects in a real-world scene that afford a desired task, but also to generate and successfully transfer task-based grasps within and across object categories.

  • 215.
    Madry, Marianna
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Song, Dan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    From object categories to grasp transfer using probabilistic reasoning (2012). In: 2012 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2012, pp. 1716-1723. Conference paper (Refereed)
    Abstract [en]

    In this paper we address the problem of grasp generation and grasp transfer between objects using categorical knowledge. The system is built upon i) an active scene segmentation module, capable of generating object hypotheses and segmenting them from the background in real time, ii) an object categorization system using integration of 2D and 3D cues, and iii) a probabilistic grasp reasoning system. Individual object hypotheses are first generated, categorized and then used as the input to a grasp generation and transfer system that encodes task, object and action properties. The experimental evaluation compares individual 2D and 3D categorization approaches with the integrated system, and it demonstrates the usefulness of the categorization in task-based grasping and grasp transfer.

  • 216.
    Markdahl, Johan
    et al.
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Optimeringslära och systemteori.
    Hu, Xiaoming
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Optimeringslära och systemteori.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Karayiannidis, Yiannis
    A Hybrid Control Approach to Task-Priority Based Mobile Manipulation. In: IEEE Transactions on Automatic Control, ISSN 0018-9286, E-ISSN 1558-2523. Journal article (Refereed)
  • 217.
    Markdahl, Johan
    et al.
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Optimeringslära och systemteori.
    Karayiannidis, Yiannis
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Hu, Xiaoming
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Optimeringslära och systemteori.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Distributed Cooperative Object Attitude Manipulation (2012). In: 2012 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2012, pp. 2960-2965. Conference paper (Refereed)
    Abstract [en]

    This paper proposes a local information based control law in order to solve the planar manipulation problem of rotating a grasped rigid object to a desired orientation using multiple mobile manipulators. We adopt a multi-agent systems theory approach and assume that: (i) the manipulators (agents) are capable of sensing the relative position to their neighbors at discrete time instances, (ii) neighboring agents may exchange information at discrete time instances, and (iii) the communication topology is connected. Control of the manipulators is carried out at a kinematic level in continuous time and utilizes inverse kinematics. The mobile platforms are assigned trajectory tracking tasks that adjust the positions of the manipulator bases in order to avoid singular arm configurations. Our main result concerns the stability of the proposed control law.

  • 218.
    Martinez, David
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Modeling and recognition of actions through motor primitives (2008). In: 2008 IEEE International Conference on Robotics and Automation: Vols 1-9, 2008, pp. 1704-1709. Conference paper (Refereed)
    Abstract [en]

    We investigate modeling and recognition of object manipulation actions for the purpose of imitation based learning in robotics. To model the process, we are using a combination of discriminative (support vector machines, conditional random fields) and generative approaches (hidden Markov models). We examine the hypothesis that complex actions can be represented as a sequence of motion or action primitives. The experimental evaluation, performed with five object manipulation actions and 10 people, investigates the modeling approach of the primitive action structure and compares the performance of the considered generative and discriminative models.

  • 219.
    Marzinotto, Alejandro
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Stork, Johannes A.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Dimarogonas, Dino V.
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik.
    Kragic Jensfelt, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Cooperative grasping through topological object representation (2015). In: IEEE-RAS International Conference on Humanoid Robots, IEEE Computer Society, 2015, pp. 685-692. Conference paper (Refereed)
    Abstract [en]

    We present a cooperative grasping approach based on a topological representation of objects. Using point cloud data, we extract loops on objects suitable for generating entanglement. We use the Gauss Linking Integral to derive controllers for multi-agent systems that generate hooking grasps on such loops while minimizing the entanglement between robots. The approach copes well with noisy point cloud data and is computationally simple and robust. We demonstrate the method for performing object grasping and transportation, through a hooking maneuver, with two coordinated NAO robots.
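The Gauss Linking Integral used above can be approximated numerically for two closed polygonal curves. The following is a generic midpoint-quadrature sketch of the classical double contour integral, not the controllers derived in the paper; curve discretization and tolerances are illustrative choices.

```python
import math

def gauss_linking_integral(curve_a, curve_b):
    """Discrete Gauss linking integral between two closed polygonal 3D curves.

    Approximates Lk = 1/(4*pi) * double integral of
    (r1 - r2) . (dr1 x dr2) / |r1 - r2|^3
    with midpoint quadrature over all segment pairs. For two linked loops
    the value approaches +/-1; for well-separated unlinked loops, 0."""
    total = 0.0
    na, nb = len(curve_a), len(curve_b)
    for i in range(na):
        p0, p1 = curve_a[i], curve_a[(i + 1) % na]
        da = [p1[k] - p0[k] for k in range(3)]        # segment tangent * length
        ma = [(p0[k] + p1[k]) / 2 for k in range(3)]  # segment midpoint
        for j in range(nb):
            q0, q1 = curve_b[j], curve_b[(j + 1) % nb]
            db = [q1[k] - q0[k] for k in range(3)]
            mb = [(q0[k] + q1[k]) / 2 for k in range(3)]
            r = [ma[k] - mb[k] for k in range(3)]
            dist = math.sqrt(sum(c * c for c in r))
            cross = [da[1] * db[2] - da[2] * db[1],   # dr1 x dr2
                     da[2] * db[0] - da[0] * db[2],
                     da[0] * db[1] - da[1] * db[0]]
            total += sum(cross[k] * r[k] for k in range(3)) / dist ** 3
    return total / (4 * math.pi)
```

Two unit circles arranged as a Hopf link (one in the xy-plane at the origin, one in the xz-plane centered at (1, 0, 0)) give a value close to ±1, which is how a loop extracted from a point cloud can be tested for entanglement with a planned gripper trajectory.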

  • 220. Miao, Li
    et al.
    Bekiroglu, Yasemin
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Billard, Aude
    Learning of Grasp Adaptation through Experience and Tactile Sensing (2014). Conference paper (Refereed)
    Abstract [en]

    To perform robust grasping, a multi-fingered robotic hand should be able to adapt its grasping configuration, i.e., how the object is grasped, to maintain the stability of the grasp. Such a change of grasp configuration is called grasp adaptation and it depends on the controller, the employed sensory feedback and the type of uncertainties inherent to the problem. This paper proposes a grasp adaptation strategy to deal with uncertainties about physical properties of objects, such as the object weight and the friction at the contact points. Based on an object-level impedance controller, a grasp stability estimator is first learned in the object frame. Once a grasp is predicted to be unstable by the stability estimator, a grasp adaptation strategy is triggered according to the similarity between the new grasp and the training examples. Experimental results demonstrate that our method improves the grasping performance on novel objects with different physical properties from those used for training.

  • 221.
    Mitsioni, Ioanna
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL. KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma system, CAS.
    Karayiannidis, Yiannis
    Division of Systems and Control, Dept. of Electrical Engineering, Chalmers University of Technology, Gothenburg, Sweden.
    Stork, Johannes A.
    Center for Applied Autonomous Sensor Systems (AASS), Örebro University, Örebro, Sweden.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL. KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma system, CAS.
    Data-Driven Model Predictive Control for the Contact-Rich Task of Food Cutting (2019). Conference paper (Refereed)
    Abstract [en]

    Modelling of contact-rich tasks is challenging and cannot be entirely solved using classical control approaches due to the difficulty of constructing an analytic description of the contact dynamics. Additionally, in a manipulation task like food-cutting, purely learning-based methods such as Reinforcement Learning, require either a vast amount of data that is expensive to collect on a real robot, or a highly realistic simulation environment, which is currently not available. This paper presents a data-driven control approach that employs a recurrent neural network to model the dynamics for a Model Predictive Controller. We build upon earlier work limited to torque-controlled robots and redefine it for velocity controlled ones. We incorporate force/torque sensor measurements, reformulate and further extend the control problem formulation. We evaluate the performance on objects used for training, as well as on unknown objects, by means of the cutting rates achieved and demonstrate that the method can efficiently treat different cases with only one dynamic model. Finally we investigate the behavior of the system during force-critical instances of cutting and illustrate its adaptive behavior in difficult cases.

  • 222. Nalpantidis, L.
    et al.
    Kragic Jensfelt, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kostavelis, I.
    Gasteratos, A.
    Theta-disparity: An efficient representation of the 3D scene structure (2015). In: 13th International Conference on Intelligent Autonomous Systems, IAS 2014, Springer, 2015, Vol. 302, pp. 795-806. Conference paper (Refereed)
    Abstract [en]

    We propose a new representation of 3D scene structure, named theta-disparity. The proposed representation is a 2D angular depth histogram that is calculated from a disparity map. It models the structure of the prominent objects in the scene and reveals their radial distribution relative to a point of interest. The proposed representation is analyzed and used as a basic attention mechanism to autonomously resolve two different robotic scenarios. The method is efficient due to its low computational complexity. We show that the method can be successfully used for the planning of different tasks in the industrial and service robotics domains, e.g., object grasping, manipulation, plane extraction, path detection, and obstacle avoidance.
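    The angular depth histogram idea can be sketched as follows; the bin counts, ranges, and the synthetic disparity map are illustrative choices, not the paper's parameters.

```python
import numpy as np

def theta_disparity(disparity, center, n_theta=36, n_disp=16, max_disp=64):
    """2D histogram over (angle about `center`, disparity value).

    Each valid pixel votes into an (angle, disparity) bin, so compact
    objects show up as peaks at their direction and depth.
    """
    h, w = disparity.shape
    ys, xs = np.mgrid[0:h, 0:w]
    theta = np.arctan2(ys - center[1], xs - center[0])  # [-pi, pi]
    valid = (disparity > 0) & (disparity < max_disp)
    hist, _, _ = np.histogram2d(
        theta[valid], disparity[valid],
        bins=(n_theta, n_disp),
        range=((-np.pi, np.pi), (0, max_disp)),
    )
    return hist

# Synthetic scene: one 8x12-pixel object at disparity 30, right of center.
d = np.zeros((64, 64))
d[28:36, 48:60] = 30.0
hist = theta_disparity(d, center=(32, 32))
```

    The peak of `hist` then indicates the direction and depth of the most prominent object, which is what an attention mechanism would act on.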

  • 223.
    Nalpantidis, Lazaros
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    YES - YEt another object Segmentation: exploiting camera movement (2012). In: Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, IEEE, 2012, pp. 2116-2121. Conference paper (Refereed)
    Abstract [en]

    We address the problem of object segmentation in image sequences where no a priori knowledge of objects is assumed. We take advantage of robots' ability to move, gathering multiple images of the scene. Our approach starts by extracting edges, uses a polar domain representation, and performs integration over time based on a simple dilation operation. The proposed system can be used for providing reliable initial segmentation of unknown objects in scenes of varying complexity, allowing for recognition, categorization, or physical interaction with the objects. The experimental evaluation on both a self-captured and a publicly available dataset shows the efficiency and stability of the proposed method.

  • 224.
    Nazem, Ali
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kootstra, Geert
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Djurfeldt, Mikael
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Parallelldatorcentrum, PDC. KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Interfacing a parallel simulation of a neuronal network to robotic hardware using MUSIC, with application to real-time figure-ground segregation (2011). In: BMC Neuroscience (Online), ISSN 1471-2202, E-ISSN 1471-2202, Vol. 12, no. Suppl 1, pp. 78-78. Article in journal (Refereed)
  • 225.
    Nazem, Ali
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kootstra, Geert
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Djurfeldt, Mikael
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Parallelldatorcentrum, PDC. KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsbiologi, CB.
    Parallel implementation of a biologically inspired model of figure-ground segregation: Application to real-time data using MUSIC (2011). In: Frontiers in Neuroinformatics, ISSN 1662-5196, E-ISSN 1662-5196. Article in journal (Refereed)
    Abstract [en]

    MUSIC, the multi-simulation coordinator, supports communication between neuronal-network simulators, or other (parallel) applications, running on a cluster supercomputer. Here, we have developed a class library that interfaces between MUSIC-enabled software and applications running on computers outside of the cluster. Specifically, we have used this component to interface the cameras of a robotic head to a neuronal-network simulation running on a Blue Gene/L supercomputer. Additionally, we have developed a parallel implementation of a model for figure-ground segregation based on neuronal activity in the macaque visual cortex. The interface enables the figure-ground segregation application to receive real-world images in real time from the robot. Moreover, it enables the robot to be controlled by the neuronal network.

  • 226. Patel, M.
    et al.
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kyriazis, N.
    Argyros, A.
    Miro, J. V.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Language for learning complex human-object interactions (2013). In: 2013 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2013, pp. 4997-5002. Conference paper (Refereed)
    Abstract [en]

    In this paper we use a Hierarchical Hidden Markov Model (HHMM) to represent and learn complex activities/tasks performed by humans/robots in everyday life. Action primitives are used as a grammar to represent complex human behaviour and to learn the interactions and behaviour of humans/robots with different objects. The main contribution is the use of a probabilistic model capable of representing behaviours at multiple levels of abstraction to support the proposed hypothesis. The hierarchical nature of the model allows decomposition of the complex task into simple action primitives. The framework is evaluated with data collected for tasks of everyday importance performed by a human user.

  • 227. Patel, Mitesh
    et al.
    Miro, Jaime Valls
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Dissanayake, Gamini
    Learning object, grasping and manipulation activities using hierarchical HMMs (2014). In: Autonomous Robots, ISSN 0929-5593, E-ISSN 1573-7527, Vol. 37, no. 3, pp. 317-331. Article in journal (Refereed)
    Abstract [en]

    This article presents a probabilistic algorithm for representing and learning complex manipulation activities performed by humans in everyday life. The work builds on the multi-level Hierarchical Hidden Markov Model (HHMM) framework which allows decomposition of longer-term complex manipulation activities into layers of abstraction whereby the building blocks can be represented by simpler action modules called action primitives. This way, human task knowledge can be synthesised in a compact, effective representation suitable, for instance, to be subsequently transferred to a robot for imitation. The main contribution is the use of a robust framework capable of dealing with the uncertainty or incomplete data inherent to these activities, and the ability to represent behaviours at multiple levels of abstraction for enhanced task generalisation. Activity data from 3D video sequencing of human manipulation of different objects handled in everyday life is used for evaluation. A comparison with a mixed generative-discriminative hybrid model HHMM/SVM (support vector machine) is also presented to add rigour in highlighting the benefit of the proposed approach against comparable state of the art techniques.
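    At its bottom level, the hierarchical model rests on ordinary HMM machinery. As a minimal, self-contained sketch (the phase/primitive model below is invented for illustration, not taken from the paper), a scaled forward recursion scores how plausible a sequence of action primitives is under the model:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of an observation sequence under a discrete HMM,
    computed with the scaled forward recursion.

    pi: initial state probabilities, A: state transitions, B: emissions.
    """
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return ll

# Two hidden phases emitting three primitives: 0 = reach, 1 = grasp, 2 = move.
pi = np.array([0.9, 0.1])
A = np.array([[0.8, 0.2],
              [0.1, 0.9]])
B = np.array([[0.6, 0.3, 0.1],   # phase 0 mostly reaches/grasps
              [0.1, 0.2, 0.7]])  # phase 1 mostly moves
plausible = forward_loglik([0, 1, 2], pi, A, B)    # reach, grasp, move
implausible = forward_loglik([2, 2, 0], pi, A, B)  # move, move, reach
```

    The hierarchical extension would call such a recursion per level, with upper levels emitting the parameters of lower-level models.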

  • 228.
    Pauwels, Karl
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Scaling Up Real-time Object Pose Tracking to Multiple Objects and Active Cameras (2015). In: IEEE International Conference on Robotics and Automation: Workshop on Scaling Up Active Perception, 2015. Conference paper (Refereed)
    Abstract [en]

    We present an overview of our recent work on real-time model-based object pose estimation. We have developed an approach that can simultaneously track the pose of a large number of objects using multiple active cameras. It combines dense motion and depth cues with proprioceptive information to maintain a 3D simulated model of the objects in the scene and the robot operating on them. A constrained optimization method allows for an efficient fusion of the multiple dense cues obtained from each camera into this scene representation. This work is publicly available as a ROS software module for real-time object pose estimation called SimTrack.

  • 229.
    Pauwels, Karl
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    SimTrack: A Simulation-based Framework for Scalable Real-time Object Pose Detection and Tracking (2015). In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2015, pp. 1300-1307. Conference paper (Refereed)
    Abstract [en]

    We propose a novel approach for real-time object pose detection and tracking that is highly scalable in terms of the number of objects tracked and the number of cameras observing the scene. Key to this scalability is a high degree of parallelism in the algorithms employed. The method maintains a single 3D simulated model of the scene consisting of multiple objects together with a robot operating on them. This allows for rapid synthesis of appearance, depth, and occlusion information from each camera viewpoint. This information is used both for updating the pose estimates and for extracting the low-level visual cues. The visual cues obtained from each camera are efficiently fused back into the single consistent scene representation using a constrained optimization method. The centralized scene representation, together with the reliability measures it enables, simplify the interaction between pose tracking and pose detection across multiple cameras. We demonstrate the robustness of our approach in a realistic manipulation scenario. We publicly release this work as a part of a general ROS software framework for real-time pose estimation, SimTrack, that can be integrated easily for different robotic applications.

  • 230. Pauwels, Karl
    et al.
    Kragic Jensfelt, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Integrated On-line Robot-camera Calibration and Object Pose Estimation (2016). Conference paper (Refereed)
    Abstract [en]

    We present a novel on-line approach for extrinsic robot-camera calibration, a process often referred to as hand-eye calibration, that uses object pose estimates from a real-time model-based tracking approach. While off-line calibration has seen much progress recently due to the incorporation of bundle adjustment techniques, on-line calibration still remains a largely open problem. Since we update the calibration in each frame, the improvements can be incorporated immediately in the pose estimation itself to facilitate object tracking. Our method does not require the camera to observe the robot or to have markers at certain fixed locations on the robot. To comply with a limited computational budget, it maintains a fixed size configuration set of samples. This set is updated each frame in order to maximize an observability criterion. We show that a set of size 20 is sufficient in real-world scenarios with static and actuated cameras. With this set size, only 100 microseconds are required to update the calibration in each frame, and we typically achieve accurate robot-camera calibration in 10 to 20 seconds. Together, these characteristics enable the incorporation of calibration in normal task execution.

  • 231. Petersson, Lars
    et al.
    Austin, David
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Christensen, Henrik
    Towards an Intelligent Service Robot System (2000). Article in journal (Refereed)
    Abstract [en]

    A theoretical and software framework is presented to facilitate the implementation of complex robotic tasks. Essential features of the framework are discussed, along with the actual implementation. To demonstrate the use of the framework, a controller for visually-guided door opening is implemented. This controller shows how a modular system can easily be designed and implemented using our framework. A discussion is also given, comparing this framework with other similar proposals.

  • 232.
    Petersson, Lars
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Tell, Dennis
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Strandberg, Morten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Christensen, H.I.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Systems integration for real-world manipulation tasks (2002). In: 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS I-IV, PROCEEDINGS, 2002, pp. 2500-2505. Conference paper (Refereed)
    Abstract [en]

     A system developed to demonstrate integration of a number of key research areas such as localization, recognition, visual tracking, visual servoing and grasping is presented together with the underlying methodology adopted to facilitate the integration. Through sequencing of basic skills, provided by the above mentioned competencies, the system has the potential to carry out flexible grasping for fetch and carry in realistic environments. Through careful fusion of reactive and deliberative control and use of multiple sensory modalities a significant flexibility is achieved. Experimental verification of the integrated system is presented.

  • 233. Petterson, Lars
    et al.
    Austin, David
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    High-level Control of a Mobile Manipulator for Door Opening (2000). Conference paper (Refereed)
    Abstract [en]

    In this paper, off-the-shelf algorithms for force/torque control are used in the context of mobile manipulation; in particular, the task of opening a door is studied. To make the solution robust, as few assumptions as possible are made. By using relaxation of forces as the basic level of control, more complex information can be derived from the resulting motion. In our system, the radius and centre of rotation of the door are estimated online. This enables the complete system to have a higher degree of autonomy in an unknown environment. In addition, the redundancy of the robot is exploited in such a way as to drive the system towards a desired configuration. The framework of hybrid dynamic systems is used to implement the algorithm, which gives a theoretically sound framework for analysing the system with respect to safety and functionality. The integration of the above approaches results in a system which can robustly locate and grasp the handle and then open the door.
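    Online estimation of a door's radius and centre of rotation can be illustrated with a least-squares circle fit to the measured handle trajectory. This Kasa-style fit is a generic stand-in, not necessarily the estimator used in the paper, and the hinge position and arc below are made up:

```python
import numpy as np

def fit_circle(points):
    """Least-squares (Kasa) circle fit to 2D points.

    Linearises x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)
    and solves for (cx, cy) and the constant term.
    """
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return (cx, cy), r

# Handle positions sampled along a quarter arc of a hypothetical door
# hinged at (3, -2) with a 0.8 m hinge-to-handle distance.
t = np.linspace(0.0, np.pi / 2, 30)
handle = np.column_stack([3.0 + 0.8 * np.cos(t), -2.0 + 0.8 * np.sin(t)])
(cx, cy), radius = fit_circle(handle)
```

    Refitting as new handle positions arrive gives the online estimate that the abstract describes.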

  • 234.
    Piccolo, Giacomo
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Karasalo, Maja
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Optimeringslära och systemteori. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Hu, Xiaoming
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Optimeringslära och systemteori.
    Contour reconstruction using recursive smoothing splines: experimental validation (2007). In: IEEE International Conference on Intelligent Robots and Systems: Vols 1-9, 2007, pp. 2077-2082. Conference paper (Refereed)
    Abstract [en]

    In this paper, a recursive smoothing spline approach for contour reconstruction is studied and evaluated. Periodic smoothing splines are used by a robot to approximate the contour of encountered obstacles in the environment. The splines are generated through minimizing a cost function subject to constraints imposed by a linear control system and accuracy is improved iteratively using a recursive spline algorithm. The filtering effect of the smoothing splines allows for usage of noisy sensor data and the method is robust to odometry drift. Experimental evaluation is performed for contour reconstruction of three objects using a SICK laser scanner mounted on a PowerBot from ActivMedia Robotics.
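    A minimal stand-in for the periodic smoothing idea (not the paper's control-theoretic spline formulation) is to low-pass filter a closed contour by truncating its Fourier series; both act as smoothing operators on a periodic curve, averaging out sensor noise. The circle data and cutoff below are illustrative:

```python
import numpy as np

def smooth_contour(points, keep=5):
    """Periodic smoothing of a closed contour via Fourier truncation.

    Treats the contour as a complex periodic signal and keeps only the
    `keep` lowest positive/negative frequencies (plus the mean).
    """
    z = points[:, 0] + 1j * points[:, 1]
    Z = np.fft.fft(z)
    Z[keep + 1 : len(Z) - keep] = 0  # drop high-frequency (noise) terms
    zs = np.fft.ifft(Z)
    return np.column_stack([zs.real, zs.imag])

# Noisy range-sensor-like samples of a circular obstacle of radius 2.
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
noisy = np.column_stack([2 * np.cos(t), 2 * np.sin(t)])
noisy = noisy + 0.1 * rng.normal(size=(200, 2))
smooth = smooth_contour(noisy)
```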

  • 235.
    Pieropan, Alessandro
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Bergström, N.
    Ishikawa, M.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kjellström, Hedvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Robust tracking of unknown objects through adaptive size estimation and appearance learning (2016). In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2016, pp. 559-566. Conference paper (Refereed)
    Abstract [en]

    This work employs an adaptive learning mechanism to perform tracking of an unknown object through RGBD cameras. We extend our previous framework to robustly track a wider range of arbitrarily shaped objects by adapting the model to the measured object size. The size is estimated as the object undergoes motion, which is done by fitting an inscribed cuboid to the measurements. The region spanned by this cuboid is used during tracking, to determine whether or not new measurements should be added to the object model. In our experiments we test our tracker with a set of objects of arbitrary shape and we show the benefit of the proposed model due to its ability to adapt to the object shape which leads to more robust tracking results.
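    The cuboid-based gating the abstract describes can be approximated generically as robust per-axis bounds on the measured points, then used as a membership test for new measurements. The trim fraction, margin, and synthetic data below are illustrative, not the paper's parameters:

```python
import numpy as np

def fit_cuboid(points, trim=0.02):
    """Axis-aligned cuboid from robust per-axis bounds.

    `trim` discards the extreme tails so isolated outliers do not
    inflate the estimated object size.
    """
    lo = np.quantile(points, trim, axis=0)
    hi = np.quantile(points, 1 - trim, axis=0)
    return lo, hi

def inside(points, lo, hi, margin=0.0):
    """Mask of measurements inside the (slightly enlarged) cuboid, i.e.
    the region whose points may be added to the object model."""
    return np.all((points >= lo - margin) & (points <= hi + margin), axis=1)

# Synthetic RGB-D points: a small box-shaped object plus one clutter point.
rng = np.random.default_rng(0)
obj = rng.uniform([-0.05, -0.03, 0.0], [0.05, 0.03, 0.08], size=(500, 3))
clutter = np.array([[0.5, 0.5, 0.5]])
lo, hi = fit_cuboid(obj)
mask = inside(np.vstack([obj, clutter]), lo, hi, margin=0.01)
```

    As the object moves and new measurements accumulate, refitting the cuboid lets the gate adapt to the object's true extent.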

  • 236.
    Pinto Basto de Carvalho, Joao Frederico
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL. KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma system, CAS.
    Vejdemo-Johansson, Mikael
    CUNY College of Staten Island,Mathematics Department,New York,USA.
    Pokorny, Florian T.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL. KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma system, CAS.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL. KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma system, CAS.
    Long-term Prediction of Motion Trajectories Using Path Homology Clusters (2019). In: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2019. Conference paper (Refereed)
    Abstract [en]

    In order for robots to share their workspace with people, they need to reason about human motion efficiently. In this work we leverage large datasets of paths in order to infer local models that are able to perform long-term predictions of human motion. Further, since our method is based on simple dynamics, it is conceptually simple to understand and allows one to interpret the predictions produced, as well as to extract a cost function that can be used for planning. The main difference between our method and similar systems is that we employ a map of the space and translate the motion of groups of paths into vector fields on that map. We test our method on synthetic data and show its performance on the Edinburgh forum pedestrian long-term tracking dataset [1], where we were able to outperform a Gaussian Mixture Model tasked with extracting dynamics from the paths.
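    The step of translating groups of paths into vector fields on a map can be sketched as a per-cell average of step directions; the grid size and the two toy paths below are made up for illustration:

```python
import numpy as np

def path_vector_field(paths, grid_size=10, extent=1.0):
    """Average unit step direction of the paths in each grid cell.

    Long-term prediction then amounts to integrating a particle
    along this field from the person's current position.
    """
    field = np.zeros((grid_size, grid_size, 2))
    count = np.zeros((grid_size, grid_size))
    for p in paths:
        steps = np.diff(p, axis=0)
        norms = np.linalg.norm(steps, axis=1, keepdims=True)
        dirs = steps / np.maximum(norms, 1e-12)
        cells = np.clip((p[:-1] / extent * grid_size).astype(int),
                        0, grid_size - 1)
        for (i, j), d in zip(cells, dirs):
            field[i, j] += d
            count[i, j] += 1
    nz = count > 0
    field[nz] = field[nz] / count[nz][:, None]
    return field

# Two straight rightward paths through the unit square.
p1 = np.column_stack([np.linspace(0.1, 0.9, 20), np.full(20, 0.25)])
p2 = np.column_stack([np.linspace(0.1, 0.9, 20), np.full(20, 0.75)])
field = path_vector_field([p1, p2])
```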

  • 237. Pokorny, F. T.
    et al.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kavraki, L. E.
    Goldberg, K.
    High-dimensional Winding-Augmented Motion Planning with 2D topological task projections and persistent homology (2016). In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2016, pp. 24-31. Conference paper (Refereed)
    Abstract [en]

    Recent progress in motion planning has made it possible to determine homotopy inequivalent trajectories between an initial and terminal configuration in a robot configuration space. Current approaches, however, have either assumed knowledge of differential one-forms related to a skeletonization of the collision space, or have relied on a simplicial representation of the free space. Neither of these approaches is yet practical for higher-dimensional configuration spaces. We propose 2D topological task projections (TTPs): mappings from the configuration space to 2-dimensional spaces where simplicial complex filtrations and persistent homology can identify topological properties of the high-dimensional free configuration space. Our approach only requires the availability of collision-free samples to identify winding centers that can be used to determine homotopy inequivalent trajectories. We propose the Winding-Augmented RRT and RRT∗ (WA-RRT/RRT∗) algorithms, with which homotopy inequivalent trajectories can be found. We evaluate our approach in experiments with configuration spaces of planar linkages with 2-10 degrees of freedom. Results indicate that our approach can reliably identify suitable topological task projections, and our proposed WA-RRT and WA-RRT∗ algorithms were able to identify a collection of homotopy inequivalent trajectories in each considered configuration space dimension.
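    The winding centers mentioned above rely on the winding number, the classical invariant that separates homotopy classes of planar trajectories: two paths with different winding about an obstacle cannot be continuously deformed into each other. A minimal computation of it (independent of the paper's code; the test loop is a synthetic example):

```python
import numpy as np

def winding_number(path, center):
    """Total signed angle (in turns) that `path` sweeps around `center`.

    Angle increments are wrapped to (-pi, pi], so the sum is the
    continuous total rotation as long as steps are smaller than pi.
    """
    rel = path - center
    ang = np.arctan2(rel[:, 1], rel[:, 0])
    dwrap = np.angle(np.exp(1j * np.diff(ang)))  # wrap each increment
    return dwrap.sum() / (2 * np.pi)

# One counter-clockwise loop around the origin.
t = np.linspace(0, 2 * np.pi, 100)
loop = np.column_stack([np.cos(t), np.sin(t)])
```

    For a closed loop the result is an integer: 1 about an enclosed center, 0 about a center outside the loop.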

  • 238.
    Pokorny, Florian T.
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Bekiroglu, Y.
    Pauwels, Karl
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Butepage, Judith
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Scherer, Clara
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    A database for reproducible manipulation research: CapriDB – Capture, Print, Innovate (2017). In: Data in Brief, ISSN 2352-3409, Vol. 11, pp. 491-498. Article in journal (Refereed)
    Abstract [en]

    We present a novel approach and database which combines the inexpensive generation of 3D object models via monocular or RGB-D camera images with 3D printing and a state of the art object tracking algorithm. Unlike recent efforts towards the creation of 3D object databases for robotics, our approach does not require expensive and controlled 3D scanning setups and aims to enable anyone with a camera to scan, print and track complex objects for manipulation research. The proposed approach results in detailed textured mesh models whose 3D printed replicas provide close approximations of the originals. A key motivation for utilizing 3D printed objects is the ability to precisely control and vary object properties such as the size, material properties and mass distribution in the 3D printing process to obtain reproducible conditions for robotic manipulation research. We present CapriDB – an extensible database resulting from this approach containing initially 40 textured and 3D printable mesh models together with tracking features to facilitate the adoption of the proposed approach.

  • 239.
    Pokorny, Florian T.
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bekiroglu, Yasemin
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Exner, Johannes
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Grasp Moduli Spaces, Gaussian Processes and Multimodal Sensor Data (2014). Conference paper (Refereed)
  • 240.
    Pokorny, Florian T.
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bekiroglu, Yasemin
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Grasp Moduli Spaces and Spherical Harmonics (2014). In: 2014 IEEE International Conference on Robotics and Automation (ICRA), 2014, pp. 389-396. Conference paper (Refereed)
  • 241.
    Pokorny, Florian T.
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kjellström, Hedvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Persistent Homology for Learning Densities with Bounded Support (2012). In: Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012 / [ed] P. Bartlett, F.C.N. Pereira, C.J.C. Burges, L. Bottou and K.Q. Weinberger, Curran Associates, Inc., 2012, pp. 1817-1825. Conference paper (Refereed)
    Abstract [en]

    We present a novel method for learning densities with bounded support which enables us to incorporate 'hard' topological constraints. In particular, we show how emerging techniques from computational algebraic topology and the notion of persistent homology can be combined with kernel-based methods from machine learning for the purpose of density estimation. The proposed formalism facilitates learning of models with bounded support in a principled way, and - by incorporating persistent homology techniques in our approach - we are able to encode algebraic-topological constraints which are not addressed in current state of the art probabilistic models. We study the behaviour of our method on two synthetic examples for various sample sizes and exemplify the benefits of the proposed approach on a real-world dataset by learning a motion model for a race car. We show how to learn a model which respects the underlying topological structure of the racetrack, constraining the trajectories of the car.

  • 242.
    Pokorny, Florian T.
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kjellström, Hedvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Topological Constraints and Kernel-Based Density Estimation (2012). Conference paper (Refereed)
    Abstract [en]

    This extended abstract explores the question of how to estimate a probability distribution from a finite number of samples when information about the topology of the support region of an underlying density is known. This workshop contribution is a continuation of our recent work [1], combining persistent homology and kernel-based density estimation for the first time, in which we explored an approach capable of incorporating topological constraints in bandwidth selection. We report on recent experiments with high-dimensional motion capture data which show that our method is applicable even in high dimensions, and we develop our ideas for potential future applications of this framework.

  • 243.
    Pokorny, Florian T.
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Hang, Kaiyu
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Grasp Moduli Spaces (2013). In: Proceedings of Robotics: Science and Systems (RSS 2013), 2013. Conference paper (Refereed)
    Abstract [en]

    We present a new approach for modelling grasping using an integrated space of grasps and shapes. In particular, we introduce an infinite dimensional space, the Grasp Moduli Space, which represents shapes and grasps in a continuous manner. We define a metric on this space allowing us to formalize ‘nearby’ grasp/shape configurations and we discuss continuous deformations of such configurations. We work in particular with surfaces with cylindrical coordinates and analyse the stability of a popular L^1 grasp quality measure Q_l under continuous deformations of shapes and grasps. We experimentally determine bounds on the maximal change of Q_l in a small neighbourhood around stable grasps with grasp quality above a threshold. In the case of surfaces of revolution, we determine stable grasps which correspond to grasps used by humans and develop an efficient algorithm for generating those grasps in the case of three contact points. We show that sufficiently stable grasps stay stable under small deformations. For larger deformations, we develop a gradient-based method that can transfer stable grasps between different surfaces. Additionally, we show in experiments that our gradient method can be used to find stable grasps on arbitrary surfaces with cylindrical coordinates by deforming such surfaces towards a corresponding ‘canonical’ surface of revolution.

  • 244.
    Pokorny, Florian T.
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Classical Grasp Quality Evaluation: New Theory and Algorithms (2013). In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2013, pp. 3493-3500. Conference paper (Refereed)
    Abstract [en]

    This paper investigates theoretical properties of a well-known L^1 grasp quality measure Q whose approximation Q_l^- is commonly used for the evaluation of grasps, and where the precision of Q_l^- depends on an approximation of a cone by a convex polyhedral cone with l edges. We prove the Lipschitz continuity of Q and provide an explicit Lipschitz bound that can be used to infer the stability of grasps lying in a neighbourhood of a known grasp. We think of Q_l^- as a lower bound estimate of Q and describe an algorithm for computing an upper bound Q^+. We provide worst-case error bounds relating Q and Q_l^-. Furthermore, we develop a novel grasp hypothesis rejection algorithm which can exclude unstable grasps much faster than current implementations. Our algorithm is based on a formulation of the grasp quality evaluation problem as an optimization problem, and we show how our algorithm can be used to improve the efficiency of sampling-based grasp hypothesis generation methods.
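The ball-based quality measure discussed above can be illustrated in a planar toy setting. The sketch below is our simplification, not the paper's algorithm: it computes the radius of the largest disc centred at the origin inside the convex hull of 2-D contact force vectors, whereas the full measure works with 6-D wrenches and polyhedral friction-cone approximations.

```python
import numpy as np

def grasp_quality_2d(forces):
    """Radius of the largest disc centred at the origin inside the convex
    hull of 2-D contact forces (a planar stand-in for the wrench-space
    measure). Assumes the given points are the hull vertices and the
    origin lies strictly inside the hull."""
    # Order vertices by angle so consecutive pairs form hull edges.
    pts = forces[np.argsort(np.arctan2(forces[:, 1], forces[:, 0]))]
    radius = np.inf
    for a, b in zip(pts, np.roll(pts, -1, axis=0)):
        e = b - a
        # Distance from the origin to the line through vertices a and b.
        d = abs(a[0] * e[1] - a[1] * e[0]) / np.linalg.norm(e)
        radius = min(radius, d)
    return radius

# Four unit forces along the axes: the hull is a square around the origin.
forces = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]], float)
print(grasp_quality_2d(forces))  # ~0.7071, the origin-to-edge distance
```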

  • 245.
    Pokorny, Florian T.
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Data-driven topological motion planning with persistent cohomology (2015). In: Robotics: Science and Systems / [ed] Buchli J., Hsu D., Kavraki L.E., MIT Press, 2015, Vol. 11. Conference paper (Refereed)
    Abstract [en]

    In this work, we present an approach to topological motion planning which is fully data-driven in nature and which relies solely on the knowledge of samples in the free configuration space. For this purpose, we discuss the use of persistent cohomology with coefficients in a finite field to compute a basis which allows us to efficiently solve the path planning problem. The proposed approach can be used both in the case where a part of a configuration space is well-approximated by samples and, more generally, with arbitrary filtrations arising from real-world data sets. Furthermore, our approach can generate motions in a subset of the configuration space specified by the sub- or superlevel set of a filtration function such as a cost function or probability distribution. Our experiments show that our approach is highly scalable in low dimensions and we present results on simulated PR2 arm motions as well as GPS trace and motion capture data.

  • 246.
    Pokorny, Florian T.
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Stork, Johannes A.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Grasping Objects with Holes: A Topological Approach (2013). In: 2013 IEEE International Conference on Robotics and Automation (ICRA), New York: IEEE, 2013, pp. 1100-1107. Conference paper (Refereed)
    Abstract [en]

    This work proposes a topologically inspired approach for generating robot grasps on objects with `holes'. Starting from a noisy point-cloud, we generate a simplicial representation of an object of interest and use a recently developed method for approximating shortest homology generators to identify graspable loops. To control the movement of the robot hand, a topologically motivated coordinate system is used in order to wrap the hand around such loops. Finally, another concept from topology -- namely the Gauss linking integral -- is adapted to serve as evidence for secure caging grasps after a grasp has been executed. We evaluate our approach in simulation on a Barrett hand using several target objects of different sizes and shapes and present an initial experiment with real sensor data.
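The Gauss linking integral mentioned above has a direct numerical form. The following sketch (our toy example, not the paper's implementation) approximates it with a midpoint rule for two polygonal loops; the value is near ±1 when one loop threads through the other, e.g. a finger loop passing through an object's hole, which is the evidence for a caging grasp.

```python
import numpy as np

def linking_number(loop1, loop2):
    """Midpoint-rule approximation of the Gauss linking integral for two
    closed polygonal curves given as (n, 3) arrays of vertices."""
    d1 = np.roll(loop1, -1, axis=0) - loop1        # edge vectors of loop 1
    d2 = np.roll(loop2, -1, axis=0) - loop2
    m1 = loop1 + 0.5 * d1                          # edge midpoints
    m2 = loop2 + 0.5 * d2
    r = m1[:, None, :] - m2[None, :, :]            # pairwise midpoint offsets
    cross = np.cross(d1[:, None, :], d2[None, :, :])
    integrand = np.einsum('ijk,ijk->ij', r, cross) / np.linalg.norm(r, axis=2) ** 3
    return integrand.sum() / (4 * np.pi)

t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
hole = np.c_[np.cos(t), np.sin(t), np.zeros_like(t)]        # loop around the hole
finger = np.c_[1 + np.cos(t), np.zeros_like(t), np.sin(t)]  # loop threading it
print(abs(linking_number(hole, finger)))  # ~1 for this linked configuration
```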

  • 247. Popovic, Mila
    et al.
    Kraft, Dirk
    Bodenhagen, Leon
    Baseski, Emre
    Pugeault, Nicolas
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Asfour, Tamim
    Kruger, Norbert
    A strategy for grasping unknown objects based on co-planarity and colour information (2010). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 58, no. 5, pp. 551-565. Journal article (Refereed)
    Abstract [en]

    In this work, we describe and evaluate a grasping mechanism that does not make use of any specific prior object knowledge. The mechanism uses second-order relations between visually extracted multi-modal 3D features provided by an early cognitive vision system. More specifically, the algorithm is based on two relations covering geometric information in terms of a co-planarity constraint as well as appearance-based information in terms of co-occurrence of colour properties. We show that our algorithm, although making use of such rather simple constraints, is able to grasp objects with a reasonable success rate in rather complex environments (i.e., cluttered scenes with multiple objects). Moreover, we have embedded the algorithm within a cognitive system that allows for autonomous exploration and learning in different contexts. First, the system is able to perform long action sequences and, although the grasping attempts are not always successful, it can recover from mistakes and, more importantly, evaluate the success of the grasps autonomously by haptic feedback (i.e., by a force-torque sensor at the wrist and proprioceptive information about the distance of the gripper after a grasping attempt). Such labelled data is then used to improve the initially hard-wired algorithm by learning. Moreover, the grasping behaviour has been used in a cognitive system to trigger higher-level processes such as object learning and learning of object-specific grasping.

  • 248.
    Popović, Mila
    et al.
    The Maersk Mc-Kinney Möller Institute, University of Southern Denmark.
    Kootstra, Gert
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Jørgensen, Jimmy Alison
    The Maersk Mc-Kinney Möller Institute, University of Southern Denmark.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Krüger, Norbert
    The Maersk Mc-Kinney Möller Institute, University of Southern Denmark.
    Grasping Unknown Objects using an Early Cognitive Vision System for General Scene Understanding (2011). In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2011, pp. 987-994. Conference paper (Refereed)
    Abstract [en]

    Grasping unknown objects based on real-world visual input is a challenging problem. In this paper, we present an Early Cognitive Vision system that builds a hierarchical representation based on edge and texture information, which is a sparse but powerful description of the scene. Based on this representation we generate edge-based and surface-based grasps. The results show that the method generates successful grasps, that the edge and surface information are complementary, and that the method can deal with more complex scenes. We furthermore present a benchmark for vision-based grasping.

  • 249. Preisig, Peter
    et al.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Robust Statistics for 3D Object Tracking (2006). In: 2006 IEEE International Conference on Robotics and Automation (ICRA), Vols 1-10, 2006, pp. 2403-2408. Conference paper (Refereed)
    Abstract [en]

    This paper focuses on methods that enhance the performance of a model-based 3D object tracking system. Three statistical methods and an improved edge detector are discussed and compared. The evaluation is performed on a number of characteristic sequences incorporating shift, rotation, texture, weak illumination and occlusion. Considering the deviations of the pose parameters from ground truth, it is shown that improving the accuracy of measurements in the detection step yields better results than correcting contaminated measurements by statistical means.
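As background on the statistical side, a standard robust alternative to plain least squares is M-estimation via iteratively reweighted least squares. The sketch below is a generic textbook technique, not necessarily one of the three methods compared in the paper: Huber weights down-weight large residuals so that a single contaminated measurement does not dominate a line fit.

```python
import numpy as np

def huber_irls(A, b, k=1.345, iters=20):
    """Robust least squares via iteratively reweighted least squares with
    Huber weights: residuals beyond k robust standard deviations are
    down-weighted instead of dominating the fit."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]       # ordinary LS start
    for _ in range(iters):
        r = b - A @ x
        # Robust scale estimate from the median absolute deviation (MAD).
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12
        u = np.abs(r) / s
        w = np.where(u <= k, 1.0, k / u)           # Huber weight function
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x

# Line y = 2t + 1 with one gross outlier at index 5.
t = np.arange(10.0)
y = 2 * t + 1
y[5] += 50                                         # contaminated measurement
A = np.c_[t, np.ones_like(t)]
print(huber_irls(A, y))                            # close to [2, 1]
```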

  • 250.
    Rasolzadeh, Babak
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Hübner, Kai
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    An Active Vision System for Detecting, Fixating and Manipulating Objects in the Real World (2010). In: The International Journal of Robotics Research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 29, no. 2-3, pp. 133-154. Journal article (Refereed)
    Abstract [en]

    The ability to autonomously acquire new knowledge through interaction with the environment is an important research topic in the field of robotics. The knowledge can only be acquired if suitable perception-action capabilities are present: a robotic system has to be able to detect, attend to and manipulate objects in its surroundings. In this paper, we present the results of our long-term work in the area of vision-based sensing and control, studying how to find, attend to, recognize and manipulate objects in domestic environments. We present a stereo-based vision system framework in which aspects of top-down, bottom-up and foveated attention are put into focus, and demonstrate how the system can be utilized for robotic object grasping.
