Results 151 - 200 of 305
  • 151.
    Kootstra, Gert
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bergström, Niklas
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Using Symmetry to Select Fixation Points for Segmentation, 2010. In: Proceedings of the 20th International Conference on Pattern Recognition, IEEE, 2010, pp. 3894-3897. Conference paper (Peer-reviewed)
    Abstract [en]

    For the interpretation of a visual scene, it is important for a robotic system to pay attention to the objects in the scene and segment them from their background. We focus on the segmentation of previously unseen objects in unknown scenes. The attention model therefore needs to be bottom-up and context-free. In this paper, we propose the use of symmetry, one of the Gestalt principles for figure-ground segregation, to guide the robot's attention. We show that our symmetry-saliency model outperforms a previously proposed contrast-saliency model: the symmetry model performs better in finding the objects of interest and selects a fixation point closer to the center of the object. Moreover, the objects are better segmented from the background when the initial points are selected on the basis of symmetry.
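
    As a rough illustration of the idea (not the authors' actual saliency model; patch size and smoothing are arbitrary assumptions), a local symmetry map can be built by correlating each image patch with its own mirror image and fixating on the strongest peak:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def symmetry_saliency(gray, radius=8):
            # Crude local-symmetry map: normalized cross-correlation of each
            # patch with its horizontal mirror; high values = symmetric regions.
            h, w = gray.shape
            g = gray.astype(float)
            sal = np.zeros((h, w))
            for y in range(radius, h - radius):
                for x in range(radius, w - radius):
                    patch = g[y - radius:y + radius + 1, x - radius:x + radius + 1]
                    mirrored = patch[:, ::-1]
                    p = patch - patch.mean()
                    m = mirrored - mirrored.mean()
                    denom = np.sqrt((p * p).sum() * (m * m).sum()) + 1e-9
                    sal[y, x] = (p * m).sum() / denom
            return gaussian_filter(sal, sigma=2.0)  # smooth before peak picking

        def select_fixation(gray):
            sal = symmetry_saliency(gray)
            return np.unravel_index(np.argmax(sal), sal.shape)  # (row, col)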

  • 152.
    Kootstra, Gert
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Fast and Bottom-Up Object Detection and Segmentation using Gestalt Principles, 2011. In: Proceedings of the International Conference on Robotics and Automation (ICRA), IEEE, 2011, pp. 3423-3428. Conference paper (Peer-reviewed)
    Abstract [en]

    In many scenarios, a domestic robot will regularly encounter unknown objects. In such cases, top-down knowledge about the object for detection, recognition, and classification cannot be used. To learn about the object, or to be able to grasp it, bottom-up object segmentation is an important competence for the robot. Also when there is top-down knowledge, prior segmentation of the object can improve recognition and classification. In this paper, we focus on the problem of bottom-up detection and segmentation of unknown objects. Gestalt psychology studies the same phenomenon in human vision. We propose the utilization of a number of Gestalt principles. Our method starts by generating a set of hypotheses about the location of objects using symmetry. These hypotheses are then used to initialize the segmentation process. The main focus of the paper is on the evaluation of the resulting object segments using Gestalt principles to select segments with high figural goodness. The results show that the Gestalt principles can be successfully used for detection and segmentation of unknown objects. The results furthermore indicate that the Gestalt measures for the goodness of a segment correspond well with the objective quality of the segment. We exploit this to improve the overall segmentation performance.
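
    A toy version of scoring a candidate segment for figural goodness might look as follows; the two measures (compactness and mirror symmetry) and their equal weights are illustrative assumptions, not the paper's exact Gestalt measures:

        import numpy as np

        def figural_goodness(mask):
            # Toy goodness score for a boolean segment mask, combining two
            # Gestalt-inspired cues: compactness and left-right symmetry.
            mask = mask.astype(bool)
            area = mask.sum()
            if area == 0:
                return 0.0
            pad = np.pad(mask, 1)
            interior = (pad[:-2, 1:-1] & pad[2:, 1:-1] &
                        pad[1:-1, :-2] & pad[1:-1, 2:])
            perimeter = (mask & ~interior).sum()
            compactness = 4 * np.pi * area / max(perimeter, 1) ** 2  # ~1 for a disc
            ys, xs = np.nonzero(mask)
            crop = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
            mirrored = crop[:, ::-1]
            symmetry = (crop & mirrored).sum() / (crop | mirrored).sum()
            return 0.5 * compactness + 0.5 * symmetry  # arbitrary equal weights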

  • 153.
    Kootstra, Gert
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Popovic, Mila
    Jorgensen, Jimmy Alison
    Kuklinski, Kamil
    Miatliuk, Konstantsin
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Krueger, Norbert
    Enabling grasping of unknown objects through a synergistic use of edge and surface information, 2012. In: The International Journal of Robotics Research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 31, no. 10, pp. 1190-1213. Journal article (Peer-reviewed)
    Abstract [en]

    Grasping unknown objects based on visual input, where no a priori knowledge about the objects is used, is a challenging problem. In this paper, we present an Early Cognitive Vision system that builds a hierarchical representation based on edge and texture information which provides a sparse but powerful description of the scene. Based on this representation, we generate contour-based and surface-based grasps. We test our method in two real-world scenarios, as well as on a vision-based grasping benchmark providing a hybrid scenario using real-world stereo images as input and a simulator for extensive and repetitive evaluation of the grasps. The results show that the proposed method is able to generate successful grasps, and in particular that the contour and surface information are complementary for the task of grasping unknown objects. This allows for dealing with rather complex scenes.

  • 154. Kraft, Dirk
    et al.
    Pugeault, Nicolas
    Baseski, Emre
    Popovic, Mila
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kalkan, Sinan
    Woergoetter, Florentin
    Krueger, Norbert
    Birth Of The Object: Detection Of Objectness And Extraction Of Object Shape Through Object-Action Complexes, 2008. In: International Journal of Humanoid Robotics, ISSN 0219-8436, Vol. 5, no. 2, pp. 247-265. Journal article (Peer-reviewed)
    Abstract [en]

    We describe a process in which the segmentation of objects as well as the extraction of the object shape is realized through active exploration of a robot vision system. In the exploration process, two behavioral modules that link robot actions to the visual and haptic perception of objects interact. First, by making use of an object independent grasping mechanism, physical control over potential objects can be gained. Once the initial grasping mechanism has been evaluated as successful, a second behavior extracts the object shape by making use of prediction based on the motion induced by the robot. This also leads to the concept of an "object" as a set of features that change predictably over different frames. The system is equipped with a certain degree of generic prior knowledge about the world in terms of a sophisticated visual feature extraction process in an early cognitive vision system, knowledge about its own embodiment, as well as knowledge about geometric relationships such as rigid body motion. This prior knowledge allows the extraction of representations that are semantically richer compared to many other approaches.

  • 155. Kraft, Dirk
    et al.
    Pugeault, Nicolas
    Baseski, Emre
    Popovic, Mila
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kalkan, Sinan
    Woergoetter, Florentin
    Krueger, Norbert
    Birth Of The Object: Detection Of Objectness And Extraction Of Object Shape Through Object-Action Complexes (vol. 5, p. 247, 2008), 2009. In: International Journal of Humanoid Robotics, ISSN 0219-8436, Vol. 6, no. 3, pp. 561-561. Journal article (Peer-reviewed)
  • 156.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Confluence of parameters in model based tracking, 2003. Conference paper (Peer-reviewed)
    Abstract [en]

    During the last decade, model based tracking of objects and its necessity in visual servoing and manipulation has been advocated in a number of systems [4], [7], [9], [12], [13], [14]. Most of these systems demonstrate robust performance for cases where either the background or the object is relatively uniform in color. In terms of manipulation, our basic interest is the handling of everyday objects in domestic environments such as a home or an office. In this paper, we consider a number of different parameters that affect the performance of a model-based tracking system. Parameters such as color channels, feature detection, validation gates, outlier rejection and feature selection are considered here, and their effect on the overall system performance is discussed. Experimental evaluation shows how some of these parameters can successfully be evaluated (learned) on-line and consequently improve the performance of the system.
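
    As one concrete example of such a parameter, a validation gate can be realized as a generic Mahalanobis chi-square test (the threshold and shapes below are textbook assumptions, not the paper's settings):

        import numpy as np

        def validation_gate(z_pred, z_meas, S, gate=9.21):
            # Accept a measurement only if its squared Mahalanobis distance to
            # the predicted feature falls inside a chi-square gate
            # (9.21 ~ 99% confidence for a 2-D measurement).
            innovation = z_meas - z_pred
            d2 = innovation @ np.linalg.inv(S) @ innovation
            return d2 <= gate

        # e.g. keep only edge measurements that pass the gate:
        # inliers = [z for z in candidates if validation_gate(z_hat, z, S)]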

  • 157.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL. KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma system, CAS.
    From active perception to deep learning, 2018. In: Science Robotics, ISSN 2470-9476, Vol. 3, no. 23, article id eaav1778. Journal article (Other academic)
  • 158.
    Kragic, Danica
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Modelling, Specification and Robustness Issues for Robotic Manipulation Tasks, 2004. In: International Journal of Advanced Robotic Systems, ISSN 1729-8806, Vol. 1, no. 2, pp. 75-86. Journal article (Peer-reviewed)
    Abstract [en]

    In this paper, a system for modeling of service robot tasks is presented. Our work is motivated by the idea that a robotic task may be represented as a set of tractable modules, each responsible for a certain part of the task. For general fetch-and-carry robotic applications, there will be varying demands for precision and degrees of freedom involved, depending on the complexity of the individual module. The particular research problem considered here is the development of a system that supports simple design of complex tasks from a set of basic primitives. The three system levels considered are: i) task graph generation, which allows the user to easily design or model a task, ii) task graph execution, which executes the task graph, and iii) at the lowest level, the specification and development of primitives required for general fetch-and-carry robotic applications. In terms of robustness, we believe that one way of increasing the robustness of the whole system is by increasing the robustness of individual modules. In particular, we consider a number of different parameters that affect the performance of a model-based tracking system. Parameters such as color channels, feature detection, validation gates, outlier rejection and feature selection are considered here, and their effect on the overall system performance is discussed. Experimental evaluation shows how some of these parameters can successfully be evaluated (learned) on-line and consequently improve the performance of the system.

  • 159.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Robot Visions, Robot Vision, 2013. In: Twelfth Scandinavian Conference on Artificial Intelligence, 2013, pp. 11-11. Conference paper (Peer-reviewed)
  • 160.
    Kragic, Danica
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Visual Servoing for Manipulation: Robustness and Integration Issues, 2001. Doctoral thesis, monograph (Other academic)
  • 161.
    Kragic, Danica
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Visual Servoing for Manipulation: Robustness and Integration Issues, 2001. In: IEEE Transactions on Robotics and Automation, ISSN 1042-296X, pp. 230-. Journal article (Peer-reviewed)
    Abstract [en]

    Service robots are gradually extended to operation in everyday environments. To be truly useful, a mobile robot should include facilities for interaction with the environment, in particular methods for manipulation of objects. One of the most flexible sensory modalities to enable this is computational vision. In this thesis the issue of visual servoing and grasping to facilitate such interaction is investigated. A notorious problem for use of vision in natural environments is robustness with respect to variations in the environment. It is also well-known that no single technique is suitable for the different tasks a robot is supposed to perform. Robustness is here investigated using several different approaches. The issues of variability are formulated with respect to visual features, the number of cameras used, and task constraints. It is argued that integration of methods facilitates construction of more robust visual servoing systems for realistic tasks. Traditionally, fusion of visual information has been based on explicit models for uncertainty and integration. The most dominating technique has been the use of Bayesian statistics, where strong models are employed. Where a large number of visual features are available, it is suggested that it might be possible to perform tasks such as visual tracking using weak models for integration. In particular, integration using voting based methods is analyzed. If the object to be manipulated is known or has been recognized, it is possible to use explicit geometric models to facilitate the estimation of its pose. Consequently, a methodology for tracking of objects using wire-frame models has been developed and evaluated in the context of grasping. Visual servoing can be carried out in the image domain and/or using 3D information. In this context, the tradeoff between explicit models and use of multiple cameras strongly influences the performance of a visual servoing system. The relation between visual features, the number of cameras and their placement has been studied to provide guidelines for the design of such a system. An integration of a multi-ocular vision system, suitable visual techniques and task constraints facilitates flexible manipulation of everyday objects. To demonstrate this, the developed techniques have been evaluated in the context of manipulation for opening/closing of doors in an everyday setting. In addition, it is demonstrated how the techniques, together with model based information, may be used for grasping and grasp monitoring in the context of a well-known set of objects. In summary, a toolkit for interaction with everyday objects has been investigated and evaluated for real-world tasks. The developed methods provide a rich basis for real-world manipulation of objects in everyday settings.

  • 162.
    Kragic, Danica
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Strategies for object manipulation using foveal and peripheral vision, 2006. In: International Conference on Computer Vision Systems (ICVS), New York, USA, IEEE Computer Society, 2006, pp. 50-. Conference paper (Peer-reviewed)
    Abstract [en]

    Computer vision is gaining significant importance as a cheap, passive, and information-rich sensor in research areas such as unmanned vehicles, medical robotics, human-machine interaction, autonomous navigation, robotic manipulation and grasping. However, a current trend is to build computer vision systems that are used to perform a specific task, which makes it hard to reuse the ideas across different disciplines. In this paper, we concentrate on vision strategies for robotic manipulation tasks in a domestic environment. This work is an extension of our ongoing work on the development of a general vision system for robotic applications. In particular, given fetch-and-carry type of tasks, the issues related to the whole detect-approach-grasp loop are considered.

  • 163.
    Kragic, Danica
    et al.
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Björkman, Mårten
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Christensen, Henrik I.
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Eklundh, Jan-Olof
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Issues and Strategies for Robotic Object Manipulation in Domestic Settings, 2004. Conference paper (Other academic)
    Abstract [en]

    Many robotic tasks such as autonomous navigation, human-machine collaboration, object manipulation and grasping rely on visual information. Some of the major research and system design issues in terms of visual systems are robustness and flexibility. In this paper, we present a number of visual strategies for robotic object manipulation tasks in natural, domestic environments. Given a complex fetch-and-carry type of task, the issues related to the whole detect-approach-grasp loop are considered. Our vision system integrates a number of algorithms using monocular and binocular cues to achieve robustness in realistic settings. The cues are considered and used in connection with both foveal and peripheral vision to provide depth information, segment the object(s) of interest in the scene, and support object recognition, tracking and pose estimation. One important property of the system is that the step from object recognition to pose estimation is completely automatic, combining both appearance and geometric models. Rather than concentrating on the integration issues, our primary goal is to investigate the importance and effect of camera configuration, their number and type, on the choice and design of the underlying visual algorithms. Experimental evaluation is performed in a realistic indoor environment with occlusions, clutter, changing lighting and background conditions.

  • 164.
    Kragic, Danica
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Christensen, Henrik I.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Eklundh, Jan-Olof
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Vision for robotic object manipulation in domestic settings, 2005. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 52, no. 1, pp. 85-100. Journal article (Peer-reviewed)
    Abstract [en]

    In this paper, we present a vision system for robotic object manipulation tasks in natural, domestic environments. Given complex fetch-and-carry robot tasks, the issues related to the whole detect-approach-grasp loop are considered. Our vision system integrates a number of algorithms using monocular and binocular cues to achieve robustness in realistic settings. The cues are considered and used in connection with both foveal and peripheral vision to provide depth information, segmentation of the object(s) of interest, object recognition, tracking and pose estimation. One important property of the system is that the step from object recognition to pose estimation is completely automatic, combining both appearance and geometric models. Experimental evaluation is performed in a realistic indoor environment with occlusions, clutter, changing lighting and background conditions.

  • 165.
    Kragic, Danica
    et al.
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Christensen, H. I.
    Cue integration for visual servoing, 2001. In: IEEE Transactions on Robotics and Automation, ISSN 1042-296X, Vol. 17, no. 1, pp. 18-27. Journal article (Peer-reviewed)
    Abstract [en]

    The robustness and reliability of vision algorithms is, nowadays, the key issue in robotic research and industrial applications. To control a robot in a closed-loop fashion, different tracking systems have been reported in the literature. A common approach to increased robustness of a tracking system is the use of different models (CAD model of the object, motion model) known a priori. Our hypothesis is that fusion of multiple features facilitates robust detection and tracking of objects in scenes of realistic complexity. A particular application is the estimation of a robot's end-effector position in a sequence of images. The research investigates the following two different approaches to cue integration: 1) voting and 2) fuzzy logic-based fusion. The two approaches have been tested in association with scenes of varying complexity. Experimental results clearly demonstrate that fusion of cues results in a tracking system with robust performance. The robustness is particularly evident for scenes with multiple moving objects and partial occlusion of the tracked object.
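
    A minimal sketch of voting-based cue fusion, assuming each cue produces a response map over the image (the weights and map names are illustrative; the fuzzy logic-based variant studied in the paper is not shown):

        import numpy as np

        def fuse_by_voting(cue_maps, weights=None):
            # Weighted plurality voting over per-cue response maps (HxW each);
            # the fused peak is taken as the tracked target position.
            stack = np.stack(cue_maps)
            w = np.ones(len(cue_maps)) if weights is None else np.asarray(weights, float)
            votes = np.tensordot(w / w.sum(), stack, axes=1)
            return np.unravel_index(np.argmax(votes), votes.shape)

        # target_rc = fuse_by_voting([color_map, motion_map, edge_map], [1, 2, 1])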

  • 166.
    Kragic, Danica
    et al.
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Christensen, H. I.
    Robust visual servoing, 2003. In: The International Journal of Robotics Research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 22, no. 10-11, pp. 923-939. Journal article (Peer-reviewed)
    Abstract [en]

    For service robots operating in domestic environments, it is not enough to consider only control level robustness; it is equally important to consider how image information that serves as input to the control process can be used so as to achieve robust and efficient control. In this paper we present an effort towards the development of robust visual techniques used to guide robots in various tasks. Given a task at hand, we argue that different levels of complexity should be considered; this also defines the choice of the visual technique used to provide the necessary feedback information. We concentrate on visual feedback estimation where we investigate both two- and three-dimensional techniques. In the former case, we are interested in providing coarse information about the object position/velocity in the image plane. In particular, a set of simple visual features (cues) is employed in an integrated framework where voting is used for fusing the responses from individual cues. The experimental evaluation shows the system performance for three different cases of camera-robot configurations most common for robotic systems. For cases where the robot is supposed to grasp the object, a two-dimensional position estimate is often not enough. Complete pose (position and orientation) of the object may be required. Therefore, we present a model-based system where a wire-frame model of the object is used to estimate its pose. Since a number of similar systems have been proposed in the literature, we concentrate on the particular part of the system usually neglected: automatic pose initialization. Finally, we show how a number of existing approaches can successfully be integrated in a system that is able to recognize and grasp fairly textured, everyday objects. One of the examples presented in the experimental section shows a mobile robot performing tasks in a real-world environment: a living room.

  • 167.
    Kragic, Danica
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Christensen, Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Robust Visual Servoing, 2014. In: Household Service Robotics, Elsevier, 2014, pp. 397-427. Book chapter (Other academic)
    Abstract [en]

    For service robots operating in domestic environments, it is not enough to consider only control level robustness; it is equally important to consider how image information that serves as input to the control process can be used so as to achieve robust and efficient control. In this chapter we present an effort toward the development of robust visual techniques used to guide robots in various tasks. Given a task at hand, we argue that different levels of complexity should be considered; this also defines the choice of the visual technique used to provide the necessary feedback information. We concentrate on visual feedback estimation where we investigate both two- and three-dimensional techniques. In the former case, we are interested in providing coarse information about the object position/velocity in the image plane. In particular, a set of simple visual features (cues) is employed in an integrated framework where voting is used for fusing the responses from individual cues. The experimental evaluation shows the system performance for three different cases of camera-robot configurations most common for robotic systems. For cases where the robot is supposed to grasp the object, a two-dimensional position estimate is often not enough. Complete pose (position and orientation) of the object may be required. Therefore, we present a model-based system where a wire-frame model of the object is used to estimate its pose. Since a number of similar systems have been proposed in the literature, we concentrate on the particular part of the system usually neglected: automatic pose initialization. Finally, we show how a number of existing approaches can successfully be integrated in a system that is able to recognize and grasp fairly textured, everyday objects. One of the examples presented in the experimental section shows a mobile robot performing tasks in a real-world environment: a living room.

  • 168.
    Kragic, Danica
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Christensen, Henrik I
    A Framework for Visual Servoing, 2003. In: International Conference on Computer Vision Systems, Springer-Verlag Berlin, 2003, pp. 345-354. Book chapter (Peer-reviewed)
    Abstract [en]

    A general framework for visual servoing tasks is proposed. The objective of the paper is twofold: a) how a complicated servoing task might be composed from a multitude of simple ones, and b) how the integration of basic and simple visual algorithms can be used in order to provide a robust input estimate to a control loop for a mobile platform or a robot manipulator. For that purpose, voting schemes and consensus theory approaches are investigated together with some initial vision-based algorithms. Voting is known as a model-free approach to integration and is therefore interesting for applications in real-world environments which are difficult to model. It is experimentally shown how servoing tasks like pick-and-place, opening doors and fetching mail can be robustly performed using the proposed approach.

  • 169.
    Kragic, Danica
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Christensen, Henrik I.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Advances in robot vision, 2005. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 52, no. 1, pp. 1-3. Journal article (Other academic)
  • 170.
    Kragic, Danica
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Christensen, Henrik I
    Integration of visual cues for active tracking of an end-effector, 1999. Journal article (Peer-reviewed)
    Abstract [en]

    We describe and test how information from multiple sources can be combined into a robust visual servoing system. The main objective is integration of visual cues to provide smooth pursuit in a cluttered environment using minimal or no calibration. For that purpose, voting schemes and fuzzy logic command fusion are investigated. It is shown that the integration permits detection and rejection of measurement outliers.

  • 171.
    Kragic, Danica
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Christensen, Henrik I
    Model Based Techniques for Robotic Servoing and Grasping, 2002. Conference paper (Peer-reviewed)
    Abstract [en]

    Robotic manipulation of objects typically involves object detection/recognition, servoing to the object, alignment and grasping. To perform fine alignment and finally grasping, it is usually necessary to estimate the position and orientation (pose) of the object. In this paper we present a model based tracking system used to estimate and continuously update the pose of the object to be manipulated. Here, a wire-frame model is used to find and track features in consecutive images. One of the important parts of the system is the ability to automatically initiate the tracking process. The strength of the system is the ability to operate in a domestic environment (living room) with changing lighting and background conditions.

  • 172.
    Kragic, Danica
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Christensen, Henrik I
    Survey on Visual Servoing for Manipulation, 2002. Conference paper (Peer-reviewed)
    Abstract [en]

    Vision guided robotics has been one of the major research issues for more than three decades. More recent technological developments have facilitated advances in the area, resulting in a number of successful and even commercial systems using off-the-shelf hardware. The applications of visually guided systems are many: from intelligent homes to the automotive industry. However, one of the open and commonly stated problems in the area is the need for exchange of experiences and research ideas. In our opinion, a good starting point for this is to advertise the successes and propose a common terminology in the form of a survey paper. The paper concentrates on different types of visual servoing: image based, position based and 2 1/2D visual servoing. Different issues concerning both the hardware and software requirements are considered and the most prominent contributions are reviewed. The proposed terminology is used to introduce young researchers to, and lead experts through, the three-decade-long history of vision guided robotics. We also include a number of real-world examples from our own research, providing not only a conceptual framework but also illustrating most of the issues covered in the paper.

  • 173.
    Kragic, Danica
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Christensen, Henrik I
    Tracking Techniques for Visual Servoing Tasks, 2000. Conference paper (Peer-reviewed)
    Abstract [en]

    Many of today's visual servoing systems rely on the use of markers on the object to provide features for control. There is thus a need for a visual system that provides control features regardless of the appearance of the object. Region based tracking is a natural approach since it does not require any special type of features. In this paper we present two different approaches to region based tracking: 1) a multi-resolution gradient based approach (using optical flow); and 2) a discrete feature based search approach. We present experiments conducted with both techniques for different types of image motions. Finally, the performance, drawbacks and limitations of the techniques used are discussed.

  • 174.
    Kragic, Danica
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Christensen, Henrik I
    Using a redundant coarsely calibrated vision system for 3D grasping, 1999. Conference paper (Peer-reviewed)
    Abstract [en]

    The influence of a redundant camera system for estimation of 3D object position and orientation in a manipulator's workspace is analysed. The paper analyses the accuracy that can be achieved using a trinocular stereo system that has only been qualitatively calibrated. By using stereo combined with a third camera, a significant improvement in accuracy is achieved. An experimental system, which exploits colour and the Hough transform for object pose estimation, is used for empirical assessment of accuracy in the context of object grasping.

  • 175.
    Kragic, Danica
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Christensen, Henrik I
    Weak Models and Cue Integration for Real-Time Tracking, 2002. Conference paper (Peer-reviewed)
    Abstract [en]

    Traditionally, fusion of visual information for tracking has been based on explicit models for uncertainty and integration. Most of the approaches use some form of Bayesian statistics where strong models are employed. We argue that for cases where a large number of visual features are available, weak models for integration may be employed. We analyze integration by voting, where two methods are proposed and evaluated: i) response and ii) action fusion. The methods differ in the choice of voting space: the former integrates visual information in image space and the latter in velocity space. We also evaluate four weighting techniques for integration.
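
    To illustrate the distinction, a sketch of action fusion, where cues vote in velocity space rather than image space (response fusion in image space is sketched under entry 165; the bin ranges and names here are assumptions):

        import numpy as np

        def action_fusion(cue_velocities, lo=-20.0, hi=20.0, nbins=41):
            # Each cue proposes a 2-D image velocity; histogram the proposals
            # and return the center of the winning velocity cell.
            edges = np.linspace(lo, hi, nbins)
            vx = np.array([v[0] for v in cue_velocities])
            vy = np.array([v[1] for v in cue_velocities])
            hist, xe, ye = np.histogram2d(vx, vy, bins=(edges, edges))
            ix, iy = np.unravel_index(np.argmax(hist), hist.shape)
            return (xe[ix] + xe[ix + 1]) / 2, (ye[iy] + ye[iy + 1]) / 2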

  • 176.
    Kragic, Danica
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Christensen, H.I.
    Biologically Motivated Visual Servoing and Grasping for Real World Tasks, 2003. Conference paper (Peer-reviewed)
    Abstract [en]

    Hand-eye coordination involves four tasks: i) identification of the object to be manipulated, ii) ballistic arm motion to the vicinity of the object, iii) preshaping and alignment of the hand, and finally iv) manipulation or grasping of the object. Motivated by the operation of biological systems and utilizing some constraints for each of the above mentioned tasks, we aim at the design of a robust robotic hand-eye coordination system. The hand-eye coordination tasks we consider here are of the basic fetch-and-carry type useful for service robots operating in everyday environments. Objects to be manipulated are, for example, food items that are simple in shape (polyhedral, cylindrical) but with complex surface texture. To achieve the required robustness and flexibility, we integrate both geometric and appearance based information to solve the task at hand. We show how research on the human visuo-motor system can be leveraged to design a fully operational, visually guided object manipulation system.

  • 177.
    Kragic, Danica
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Crinier, S
    Brunn, D
    Christensen, Henrik I
    Vision and tactile sensing for real world tasks, 2003. Conference paper (Peer-reviewed)
    Abstract [en]

    Robotic fetch-and-carry tasks are commonly used to demonstrate a number of research directions such as navigation, mobile manipulation, systems integration, etc. As a part of an integrated system in terms of a service robot framework, this paper describes a set of methods for real-world object manipulation tasks. We concentrate here on two particular parts of a manipulation sequence: i) robust visual servoing, and ii) grasping strategies. In terms of visual servoing we discuss the handling of singularities during a manipulation sequence. For grasping, we present a biologically motivated strategy using tactile feedback.

  • 178.
    Kragic, Danica
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Daniilidis, Kostas
    University of Pennsylvania, Department of Computer and Information Science, 3330 Walnut Street, Philadelphia, PA 19104, United States.
    3-D vision for navigation and grasping, 2016. In: Springer Handbook of Robotics, Springer International Publishing, 2016, pp. 811-824. Book chapter (Other academic)
    Abstract [en]

    In this chapter, we describe algorithms for three-dimensional (3-D) vision that help robots accomplish navigation and grasping. To model cameras, we start with the basics of perspective projection and distortion due to lenses. This projection from a 3-D world to a two-dimensional (2-D) image can be inverted only by using information from the world or multiple 2-D views. If we know the 3-D model of an object or the location of 3-D landmarks, we can solve the pose estimation problem from one view. When two views are available, we can compute the 3-D motion and triangulate to reconstruct the world up to a scale factor. When multiple views are given, either as sparse viewpoints or a continuous incoming video, the robot path can be computed and point tracks can yield a sparse 3-D representation of the world. In order to grasp objects, we can estimate the 3-D pose of the end effector or the 3-D coordinates of the graspable points on the object.
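
    For the two-view case mentioned above, the standard linear (DLT) triangulation step looks roughly like this (a textbook sketch, not specific to the chapter):

        import numpy as np

        def triangulate(P1, P2, x1, x2):
            # Linear triangulation of one 3-D point from two views.
            # P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates.
            A = np.vstack([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            _, _, Vt = np.linalg.svd(A)
            X = Vt[-1]
            return X[:3] / X[3]  # reconstruction is valid up to global scale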

  • 179.
    Kragic, Danica
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ekvall, Staffan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Aarno, Daniel
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Sensor Integration and Task Planning for Mobile Manipulation, 2004. Conference paper (Peer-reviewed)
    Abstract [en]

    Robotic mobile manipulation in unstructured environments requires integration of a number of key research areas such as localization, navigation, object recognition, visual tracking/servoing, grasping and object manipulation. It has been demonstrated that, given the above, and through simple sequencing of basic skills, a robust system can be designed [19]. In order to provide the robustness and flexibility required of the overall robotic system in unstructured and dynamic everyday environments, it is important to consider a wide range of individual skills using different sensory modalities. In this work, we consider a combination of deliberative and reactive control together with the use of multiple sensory modalities for modeling and execution of manipulation tasks. Special consideration is given to the design of a vision system necessary for object recognition and scene segmentation as well as learning principles in terms of grasping.

  • 180.
    Kragic, Danica
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Gustafson, Joakim
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Tal, musik och hörsel, TMH.
    Karaoǧuz, Hakan
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Jensfelt, Patric
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Krug, Robert
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Interactive, collaborative robots: Challenges and opportunities, 2018. In: IJCAI International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence, 2018, pp. 18-25. Conference paper (Peer-reviewed)
    Abstract [en]

    Robotic technology has transformed the manufacturing industry ever since the first industrial robot was put in use in the early 1960s. The challenge of developing flexible solutions, where production lines can be quickly re-planned, adapted and structured for new or slightly changed products, is still an important open problem. Industrial robots today are still largely preprogrammed for their tasks, not able to detect errors in their own performance or to robustly interact with a complex environment and a human worker. The challenges are even more serious when it comes to various types of service robots. Full robot autonomy, including natural interaction, learning from and with humans, and safe and flexible performance for challenging tasks in unstructured environments, will remain out of reach for the foreseeable future. In the envisioned future factory setups, home and office environments, humans and robots will share the same workspace and perform different object manipulation tasks in a collaborative manner. We discuss some of the major challenges of developing such systems and provide examples of the current state of the art.

  • 181.
    Kragic, Danica
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Hager, G.D.
    Task Modeling and Specification for Modular Sensory Based Human-Machine Cooperative Systems, 2003. Conference paper (Peer-reviewed)
    Abstract [en]

    This paper is directed towards developing human-machine cooperative systems (HMCS) for augmented surgical manipulation tasks. These tasks are commonly repetitive, sequential, and consist of simple steps. The transitions between these steps can be driven either by the surgeon's input or by sensory information. Consequently, complex tasks can be effectively modeled using a set of basic primitives, where each primitive defines some basic type of motion (e.g. translational motion along a line, rotation about an axis, etc.). These steps can be "open-loop" (simply complying with the user's demands) or "closed-loop", in which case external sensing is used to define a nominal reference trajectory. The particular research problem considered here is the development of a system that supports simple design of complex surgical procedures from a set of basic control primitives. The three system levels considered are: i) task graph generation, which allows the user to easily design or model a task, ii) task graph execution, which executes the task graph, and iii) at the lowest level, the specification of primitives, which allows the user to easily specify new types of primitive motions. The system has been developed and validated using the JHU Steady Hand Robot as an experimental platform.
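
    A miniature of the task-graph idea could look like this (hypothetical names and API; the actual system described above uses an XML task graph and the JHU Steady Hand Robot):

        from typing import Callable, Dict, Tuple

        class TaskGraph:
            # Primitives are steps; user or sensor events select transitions.
            def __init__(self):
                self.primitives: Dict[str, Callable[[], str]] = {}
                self.edges: Dict[Tuple[str, str], str] = {}

            def add_primitive(self, name: str, action: Callable[[], str]):
                self.primitives[name] = action  # action returns an event label

            def add_transition(self, state: str, event: str, nxt: str):
                self.edges[(state, event)] = nxt

            def run(self, start: str, goal: str) -> str:
                state = start
                while state != goal:
                    event = self.primitives[state]()      # execute the step
                    state = self.edges[(state, event)]    # event-driven switch
                return state

        # g.add_primitive("align", lambda: "done")
        # g.add_transition("align", "done", "insert"); g.run("align", "insert")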

  • 182.
    Kragic, Danica
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Hager, Gregory D.
    Special Issue on Robotic Vision, 2012. In: The International Journal of Robotics Research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 31, no. 4, pp. 379-380. Journal article (Peer-reviewed)
  • 183.
    Kragic, Danica
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kyrki, Ville
    Initialization and System Modeling in 3D Pose Tracking, 2006. In: 18th International Conference on Pattern Recognition, Vol. 4, Proceedings / [ed] Tang, YY; Wang, SP; Lorette, G; Yeung, DS; Yan, H, IEEE Computer Society, 2006, pp. 643-646. Conference paper (Peer-reviewed)
    Abstract [en]

    Initialization and choice of adequate motion models are two important but seldom discussed problems in 3D model-based pose (position and orientation) tracking. In this paper, we propose an automatic initialization approach suitable for textured objects. In addition, we define, study and experimentally evaluate three motion models commonly used in visual servoing and augmented reality.

  • 184.
    Kragic, Danica
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kyrki, Ville
    Lappeenranta University of Technology.
    Unifying Perspectives in Computational and Robot Vision, 2008. Conference proceedings (Peer-reviewed)
    Abstract [en]

    The proceedings contain 12 papers. The topics discussed include: recent trends in computational and robot vision; extracting planar kinematic models using interactive perception; people detection using multiple sensors on a mobile robot; perceiving objects and movements to generate actions on a humanoid robot; pose estimation and feature tracking for robot assisted surgery with medical imaging; a sliding window filter for incremental slam; topological and metric robot localization through computer vision techniques; more vision for slam; maps, objects and contexts for robots; vision-based navigation strategies; and image-based visual servoing with extra task related constraints in a general framework for sensor-based robot systems.

  • 185.
    Kragic, Danica
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Marayong, P.
    Li, M.
    Okamura, Allison M.
    Hager, G. A.
    Human-machine collaborative systems for microsurgical applications, 2005. In: The International Journal of Robotics Research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 24, no. 9, pp. 731-741. Journal article (Peer-reviewed)
    Abstract [en]

    Human-machine collaborative systems (HMCSs) are systems that amplify or assist human capabilities during the performance of tasks that require both human judgment and robotic precision. We examine the design and performance of HMCSs in the context of microsurgical procedures such as vitreo-retinal eye surgery. Three specific problems considered are: (1) development of systems tools for describing and implementing HMCSs, (2) segmentation of complex tasks into logical components given sensor traces of human task execution, and (3) measurement and evaluation of HMCS performance. These components can be integrated into a complete workstation with the ability to automatically parse traces of user activities into task models, which are loaded into an execution environment to provide the user with assistance using on-line recognition of task states. The major contributions of this work include an XML task graph modeling framework and execution engine, an algorithm for real-time segmentation of user actions using continuous hidden Markov models, and validation techniques for analyzing the performance of HMCSs.
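
    A generic stand-in for the on-line recognition step is a streaming HMM forward filter (the transition matrix and observation model below are assumptions, not the workstation's trained models):

        import numpy as np

        class OnlineHMM:
            # Streaming forward-algorithm filter: maintains a belief over task
            # states and updates it with each new sensor observation.
            def __init__(self, A, obs_lik):
                self.A = np.asarray(A)          # (S, S) state transition matrix
                self.obs_lik = obs_lik          # obs_lik(z) -> (S,) likelihoods
                self.belief = np.full(len(A), 1.0 / len(A))

            def update(self, z):
                self.belief = self.obs_lik(z) * (self.A.T @ self.belief)
                self.belief /= self.belief.sum()
                return int(np.argmax(self.belief))  # most likely current state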

  • 186.
    Kragic, Danica
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Marayong, Panadda
    Li, Ming
    Okamura, Allison M.
    Hager, Gregory D.
    Human-Machine Collaborative Systems for Microsurgical Applications, 2005. In: Robotics Research, Springer Berlin/Heidelberg, 2005, pp. 162-171. Conference paper (Peer-reviewed)
    Abstract [en]

    We describe our current progress in developing Human-Machine Collaborative Systems (HMCSs) for microsurgical applications such as vitreo-retinal eye surgery. Three specific problems considered here are (1) the development of systems tools for describing and implementing an HMCS, (2) segmentation of complex tasks into logical components given sensor traces of a human performing the task, and (3) measuring HMCS performance. Our goal is to integrate these into a full microsurgical workstation with the ability to automatically "parse" traces of user execution into a task model, which is then loaded into the execution environment, providing the user with assistance using online recognition of task state. The major contributions of our work to date include an XML task graph modeling framework and execution engine, an algorithm for real-time segmentation of user actions using continuous hidden Markov models, and validation techniques for analyzing the performance of HMCSs.

  • 187.
    Kragic, Danica
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Miller, Andrew T
    Allen, Peter K
    Real-time Tracking Meets Online Grasp Planning, 2001. Conference paper (Peer-reviewed)
    Abstract [en]

    We describe a synergistic integration of a grasping simulator and a real-time visual tracking system that work in concert to (1) find an object's pose, (2) plan grasps and movement trajectories, and (3) visually monitor task execution. Starting with a CAD model of an object to be grasped, the system can find the object's pose through vision, which then synchronizes the state of the robot workcell with an online, model-based grasp planning and visualization system we have developed called GraspIt. GraspIt can then plan a stable grasp for the object and direct the robotic hand system to perform the grasp. It can also generate trajectories for the movement of the grasped object, which are used by the visual control system to monitor the task and compare the actual grasp and trajectory with the planned ones. We present experimental results using typical grasping tasks.

  • 188.
    Kragic, Danica
    et al.
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Petersson, L.
    Christensen, H. I.
    Visually guided manipulation tasks, 2002. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 40, no. 2-3, pp. 193-203. Journal article (Peer-reviewed)
    Abstract [en]

    In this paper, we present a framework for a robotic system with the ability to perform real-world manipulation tasks. The complexity of such tasks determines the precision and freedoms controlled, which also affects the robustness and flexibility of the system. The emphasis is on the development of the visual system, and visual tracking techniques in particular. Since precise tracking and control of the full pose of the object to be manipulated is usually less robust and computationally expensive, we integrate the vision and control systems, where the objective is to provide the discrete state information required to switch between control modes of different complexity. For this purpose, an integration of simple visual algorithms is used to provide a robust input to the control loop. Consensus theory is investigated as the integration strategy. In addition, a general purpose framework for integration of processes is used to implement the system on a real robot. The proposed approach results in a system which can robustly locate and grasp a door handle and then open the door.

  • 189.
    Kragic, Danica
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Vincze, Markus
    Vision for Robotics, 2010. In: Foundations and Trends in Robotics, ISSN 1935-8253, Vol. 1, no. 1, pp. 1-78. Journal article (Peer-reviewed)
    Abstract [en]

    Robot vision refers to the capability of a robot to visually perceive the environment and use this information for execution of various tasks. Visual feedback has been used extensively for robot navigation and obstacle avoidance. In recent years, there have also been examples that include interaction with people and manipulation of objects. In this paper, we review some of the work that goes beyond using artificial landmarks and fiducial markers for the purpose of implementing vision-based control in robots. We discuss different application areas, both from the systems perspective and in terms of individual problems such as object tracking and recognition.

  • 190. Krueger, Volker
    et al.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ude, Ales
    Geib, Christopher
    The meaning of action: a review on action recognition and mapping, 2007. In: Advanced Robotics, ISSN 0169-1864, E-ISSN 1568-5535, Vol. 21, no. 13, pp. 1473-1501. Review article (Peer-reviewed)
    Abstract [en]

    In this paper, we analyze the different approaches taken to date within the computer vision, robotics and artificial intelligence communities for the representation, recognition, synthesis and understanding of action. We deal with action at different levels of complexity and provide the reader with the necessary related literature references. We put the literature references further into context and outline a possible interpretation of action by taking into account the different aspects of action recognition, action synthesis and task-level planning.

  • 191. Krug, R.
    et al.
    Lilienthal, A. J.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Bekiroglu, Y.
    Analytic grasp success prediction with tactile feedback, 2016. In: Proceedings - IEEE International Conference on Robotics and Automation, Institute of Electrical and Electronics Engineers (IEEE), 2016, pp. 165-171. Conference paper (Peer-reviewed)
    Abstract [en]

    Predicting grasp success is useful for avoiding failures in many robotic applications. Based on reasoning in wrench space, we address the question of how well analytic grasp success prediction works if tactile feedback is incorporated. Tactile information can alleviate contact placement uncertainties and facilitates contact modeling. We introduce a wrench-based classifier and evaluate it on a large set of real grasps. The key finding of this work is that exploiting tactile information allows wrench-based reasoning to perform on a level with existing methods based on learning or simulation. Different from these methods, the suggested approach has no need for training data, requires little modeling effort and is computationally efficient. Furthermore, our method affords task generalization by considering the capabilities of the grasping device and expected disturbance forces/moments in a physically meaningful way.
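
    For context, the classical wrench-space computation that such reasoning builds on can be sketched as the standard epsilon metric over the convex hull of contact wrenches (this is not the paper's exact tactile-conditioned classifier):

        import numpy as np
        from scipy.spatial import ConvexHull

        def epsilon_quality(wrenches):
            # Radius of the largest origin-centered ball inside the convex hull
            # of the 6-D contact wrenches; > 0 means force closure, larger is
            # better. Needs at least 7 affinely independent wrenches in 6-D.
            hull = ConvexHull(np.asarray(wrenches))
            # Hull facets satisfy n.x + b <= 0 inside, so the origin-to-facet
            # distance is -b / |n|; the minimum over facets is epsilon.
            normals, offsets = hull.equations[:, :-1], hull.equations[:, -1]
            return float(np.min(-offsets / np.linalg.norm(normals, axis=1)))

        # success_predicted = epsilon_quality(tactile_contact_wrenches) > 0.0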

  • 192.
    Krug, Robert
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Bekiroglu, Yasemin
    Vicarious AI, San Francisco, CA, USA.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Roa, Maximo A.
    German Aerospace Center (DLR), Institute of Robotics and Mechatronics, D-82234 Wessling, Germany.
    Evaluating the Quality of Non-Prehensile Balancing Grasps, 2018. In: 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2018, pp. 4215-4220. Conference paper (Peer-reviewed)
    Abstract [en]

    Assessing grasp quality and, subsequently, predicting grasp success is useful for avoiding failures in many autonomous robotic applications. In addition, interest in non-prehensile grasping and manipulation has been growing, as it offers the potential for a large increase in dexterity. However, while force-closure grasping has been the subject of intense study for many years, few existing works have considered quality metrics for non-prehensile grasps, and no studies exist that validate them in practice. In this work we take a real-world data set of non-prehensile balancing grasps and use it to experimentally validate a wrench-based quality metric by means of its grasp success prediction capability. The overall accuracy of up to 84% is encouraging and in line with existing results for force-closure grasps.

  • 193. Kruger, Norbert
    et al.
    Piater, Justus
    Worgotter, Florentin
    Geib, Christopher
    Petrick, Ron
    Steedman, Mark
    Asfour, Tamim
    Kraft, Dirk
    Hommel, Bernhard
    Agostini, Alejandro
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Eklundh, Jan-Olof
    Kruger, Volker
    Torras, Carme
    Dillmann, Rudiger
    A Formal Definition of Object-Action Complexes and Examples at Different Levels of the Processing Hierarchy (2009). In: Computer and Information Science, ISSN 1913-8989, E-ISSN 1913-8997, pp. 1-39. Journal article (Refereed)
    Abstract [en]

    In this report the authors define and describe the concept of Object-Action Complexes (OACs) and give some examples. OACs combine the concept of affordance with the computational efficiency of STRIPS. Affordance is the relation between a situation and the action that it allows. OACs are proposed as a framework for representing actions, objects and the learning process that constructs such representations at all levels. Formally, an OAC is defined as a triplet composed of a unique ID, a prediction function that encodes the system's belief about how the world (defined as a kind of global attribute space) will change after applying the OAC, and a statistical measure representing the success of the OAC. The prediction function is thereby a mapping within the global attribute space. The measure captures the accuracy of this prediction function and describes the reliability of the OAC. It can therefore be used for optimal decision making, prediction of the outcome of a certain action, and learning.
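    Because the abstract spells out the OAC triplet explicitly, it can be rendered as a small data structure. The sketch below is a hypothetical Python reading of that definition, not code from the paper; the attribute-space encoding and all names are assumptions made for illustration.

    from dataclasses import dataclass
    from typing import Any, Callable, Dict

    Attributes = Dict[str, Any]  # simplified stand-in for the global attribute space

    @dataclass
    class OAC:
        """Object-Action Complex as the triplet described above: a unique
        ID, a prediction function over the attribute space, and a running
        statistic of how reliable that prediction has been."""
        oac_id: str
        predict: Callable[[Attributes], Attributes]  # expected world change
        successes: int = 0
        trials: int = 0

        @property
        def reliability(self) -> float:
            """Statistical measure of the OAC's prediction accuracy."""
            return self.successes / self.trials if self.trials else 0.0

        def record_outcome(self, predicted: Attributes, observed: Attributes) -> None:
            """Compare a prediction with the observed outcome and update
            the reliability statistic, which the abstract describes as the
            basis for decision making and learning."""
            self.trials += 1
            if predicted == observed:
                self.successes += 1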

  • 194. Kruger, Volker
    et al.
    Herzog, Dennis L.
    Baby, Sanmohan
    Ude, Ales
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Learning Actions from Observations: Primitive-Based Modeling and Grammar (2010). In: IEEE Robotics & Automation Magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 17, no. 2, pp. 30-43. Journal article (Refereed)
  • 195. Kyrki, V.
    et al.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Recent trends in computational and robot vision (2008). In: Unifying perspectives in computational and robot vision / [ed] Danica Kragic, Ville Kyrki, New York: Springer Science+Business Media B.V., 2008, pp. 1-10. Book chapter, part of anthology (Refereed)
    Abstract [en]

    Computer vision research and vision research in robotics have many characteristics in common. For example, the Structure-and-Motion problem in vision has its analog in SLAM (Simultaneous Localization and Mapping) in robotics, visual SLAM being one of the current hot topics. Tracking is another area of great interest in both communities, in its many variations, such as 2-D and 3-D tracking, single- and multi-object tracking, and rigid and deformable object tracking. Other topics of interest for both communities are object and action recognition. Despite these common interests, however, "pure" computer vision has seen significant theoretical and methodological advances during the last decade of which many robotics researchers are not fully aware. On the other hand, the manipulation and control capabilities of robots, as well as their range of application areas, have developed greatly. In robotics, vision cannot be considered an isolated component; it is instead part of a system that results in an action. Thus, vision research in robotics should include consideration of the control of the system, in other words, the entire perception-action loop. A holistic system approach would then be useful and could provide significant advances in this application domain.

  • 196. Kyrki, V.
    et al.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Christensen, Henrik I.
    Measurement errors in visual servoing (2006). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 54, no. 10, pp. 815-827. Journal article (Refereed)
    Abstract [en]

    This paper addresses the issue of measurement errors in visual servoing. The error characteristics of the vision-based state estimation and the associated uncertainty of the control are investigated. The major contribution is the analysis of the propagation of image errors through the pose estimation and the visual servoing control law. Using this analysis, two classical visual servoing methods are evaluated: position-based and 2.5-D visual servoing. The evaluation offers a tool to build and analyze hybrid control systems, such as switching or partitioned control.
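    The propagation analysis summarized above is, at its core, first-order uncertainty propagation; the following generic illustration uses standard visual servoing notation and is not the paper's exact derivation. If the pose estimate $x = g(s)$ is computed from image measurements $s$ with covariance $\Sigma_s$, and the control law is $u = -\lambda \hat{L}^{+} e(x)$ with error function $e$, gain $\lambda$ and estimated interaction matrix $\hat{L}$, then to first order

    $$\Sigma_x \approx J_g \,\Sigma_s\, J_g^{\top}, \qquad J_g = \frac{\partial g}{\partial s}, \qquad \Sigma_u \approx \lambda^2\, \hat{L}^{+} J_e \,\Sigma_x\, J_e^{\top} \big(\hat{L}^{+}\big)^{\top},$$

    so image noise maps into control uncertainty through the Jacobians of the pose estimator and of the error function. Comparing these covariances for position-based and 2.5-D error functions is one way to read the evaluation described in the abstract.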

  • 197. Kyrki, V.
    et al.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Christensen, Henrik Iskov
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    New shortest-path approaches to visual servoing (2004). Conference paper (Refereed)
    Abstract [en]

    In recent years, a number of visual servo control algorithms have been proposed. Most approaches try to solve the inherent problems of image-based and position-based servoing by partitioning the control between the image and Cartesian spaces. However, partitioning the control often causes the Cartesian path to become more complex, which might result in operation close to the joint limits. One solution is a shortest-path approach, which avoids the joint limits in most cases. In this paper, two new shortest-path approaches to visual servoing are presented. First, a position-based approach is proposed that guarantees both the shortest Cartesian trajectory and object visibility. Then, a variant is presented that avoids the use of a 3D model of the target object by means of homography-based partial pose estimation.
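    To make the shortest-path property concrete: in standard position-based visual servoing notation (a textbook formulation assumed here, not necessarily the paper's exact law), with translation error $t_e$ and rotation error expressed as an axis-angle pair $\theta u$, the proportional law

    $$v_c = -\lambda\, t_e, \qquad \omega_c = -\lambda\, \theta u$$

    drives the camera along a straight line in translation and a geodesic in rotation, i.e., the shortest Cartesian path; the contribution summarized above is to retain this property while also guaranteeing object visibility.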

  • 198.
    Kyrki, Ville
    et al.
    Lappeenranta University of Technology.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Integration of Model-based and Model-free Cues for Visual Object Tracking in 3D (2005). In: 2005 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2005, pp. 1554-1560. Conference paper (Refereed)
    Abstract [en]

    Vision is one of the most powerful sensory modalities in robotics, allowing operation in dynamic environments. One of our long-term research interests is mobile manipulation, where a precise location of the target object is commonly required during task execution. Recently, a number of approaches have been proposed for real-time 3D tracking, and most of them utilize an edge (wireframe) model of the target. However, the use of an edge model has significant problems in complex scenes due to occlusions and multiple responses, especially in terms of initialization. In this paper, we propose a new tracking method based on the integration of model-based cues with automatically generated model-free cues, in order to improve tracking accuracy and avoid the weaknesses of edge-based tracking. The integration is performed in a Kalman filter framework that operates in real time. Experimental evaluation shows that the inclusion of model-free cues offers superior performance.
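    The cue-integration step can be sketched as a standard Kalman measurement update over stacked measurements. The Python code below is a generic, hypothetical illustration of that idea rather than the paper's implementation; the linear measurement models and all names are assumptions.

    import numpy as np

    def kalman_update(x, P, z, H, R):
        """One standard (linear) Kalman measurement update.

        x, P : state estimate and covariance
        z    : stacked measurement vector from all cues
        H    : stacked measurement matrix
        R    : block-diagonal measurement noise of the individual cues
        """
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ (z - H @ x)             # fuse all cues at once
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P

    def fuse_cues(x, P, z_edge, H_edge, R_edge, z_feat, H_feat, R_feat):
        """Stack a model-based (edge) cue and a model-free (feature) cue
        so the filter weights each by its own noise level -- one simple
        reading of 'integration in a Kalman filter framework'."""
        z = np.concatenate([z_edge, z_feat])
        H = np.vstack([H_edge, H_feat])
        zeros_12 = np.zeros((R_edge.shape[0], R_feat.shape[1]))
        zeros_21 = np.zeros((R_feat.shape[0], R_edge.shape[1]))
        R = np.block([[R_edge, zeros_12], [zeros_21, R_feat]])
        return kalman_update(x, P, z, H, R)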

  • 199. Kyrki, Ville
    et al.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Tracking rigid objects using integration of model-based and model-free cues (2011). In: Machine Vision and Applications, ISSN 0932-8092, E-ISSN 1432-1769, Vol. 22, no. 2, pp. 323-335. Journal article (Refereed)
    Abstract [en]

    Model-based 3-D object tracking has gained significant importance in areas such as augmented reality, surveillance, visual servoing, and robotic object manipulation and grasping. Key obstacles to robust and precise object tracking are outliers caused by occlusion, self-occlusion, cluttered background, reflections and complex appearance properties of the object. Two of the most common solutions to these problems have been the use of robust estimators and the integration of visual cues. The tracking system presented in this paper achieves robustness by integrating model-based and model-free cues together with robust estimators. As a model-based cue, a wireframe edge model is used. As model-free cues, automatically generated surface texture features are used. The particular contribution of this work is an integration framework in which not only polyhedral objects are considered: we also deal with spherical, cylindrical and conical objects, for which the complete pose cannot be estimated using wireframe models alone. Using the integration with the model-free features, we show how a full pose estimate can be obtained. Experimental evaluation demonstrates robust system performance in realistic settings with highly textured objects and natural backgrounds.

  • 200.
    Kyrki, Ville
    et al.
    Lappeenranta University of Technology.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Tracking Unobservable Rotations by Cue Integration (2006). In: 2006 IEEE International Conference on Robotics and Automation (ICRA), 2006, pp. 2744-2750. Conference paper (Refereed)
    Abstract [en]

    Model-based object tracking has gained significant importance in areas such as augmented reality, surveillance, visual servoing, and robotic object manipulation and grasping. Although it is an active research area, there are still few systems that perform robustly in realistic settings. The key problems for robust and precise object tracking are outliers caused by occlusion, self-occlusion, cluttered background, and reflections. The two most common solutions to these problems have been the use of robust estimators and the integration of visual cues. The tracking system considered in this paper achieves robustness by integrating model-based and model-free cues. As the model-based cue, we consider a CAD model of the object known a priori; as model-free cues, automatically generated corner features are used. The main idea is to account for the relative object motion between consecutive frames using the integration of the two cues. The particular contribution of this work is an integration framework in which not only polyhedral objects are considered: we also deal with spherical, cylindrical and conical objects, for which the complete pose cannot be estimated using CAD-like models alone. Using the integration with the model-free features, we show how a full pose estimate can be obtained. Experimental evaluation demonstrates robust system performance in realistic settings with highly textured objects.
