1 - 22 of 22
  • 1.
    Aydemir, Alper
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Gobelbecker, Moritz
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Active Visual Object Search in Unknown Environments Using Uncertain Semantics. 2013. In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 29, no. 4, p. 986-1002. Article in journal (Refereed)
    Abstract [en]

    In this paper, we study the problem of active visual search (AVS) in large, unknown, or partially known environments. We argue that by making use of uncertain semantics of the environment, a robot tasked with finding an object can devise efficient search strategies that can locate everyday objects at the scale of an entire building floor, which is previously unknown to the robot. To realize this, we present a probabilistic model of the search environment, which allows for prioritizing the search effort to those parts of the environment that are most promising for a specific object type. Further, we describe a method for reasoning about the unexplored part of the environment for goal-directed exploration with the purpose of object search. We demonstrate the validity of our approach by comparing it with two other search systems in terms of search trajectory length and time. First, we implement a greedy coverage-based search strategy that is found in previous work. Second, we let human participants search for objects as an alternative comparison for our method. Our results show that AVS strategies that exploit uncertain semantics of the environment are a very promising idea, and our method pushes the state-of-the-art forward in AVS.
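The prioritization idea in the abstract above can be sketched in a few lines (hypothetical room priors and travel costs; the paper's actual model is a full probabilistic map of the environment, not a lookup table):

```python
# Sketch (not the paper's full model): rank candidate locations by the
# prior probability that they contain the target object type, divided by
# the cost of travelling there, then search best-first.

def search_order(locations, target):
    """Order locations by P(target | location) / travel_cost, best first.

    `locations` maps a location name to (object_priors, travel_cost),
    where object_priors maps object types to probabilities.
    """
    def utility(item):
        name, (priors, cost) = item
        return priors.get(target, 0.0) / cost
    ranked = sorted(locations.items(), key=utility, reverse=True)
    return [name for name, _ in ranked]

# Hypothetical priors: mugs are likely in kitchens, rare in corridors.
rooms = {
    "kitchen":  ({"mug": 0.8, "stapler": 0.1}, 2.0),
    "office":   ({"mug": 0.3, "stapler": 0.7}, 1.0),
    "corridor": ({"mug": 0.05, "stapler": 0.05}, 0.5),
}
print(search_order(rooms, "mug"))  # kitchen (0.4) ranks above office (0.3)
```

The same ranking reverses for a different object type, which is the point of conditioning the search on uncertain semantics rather than using fixed coverage.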

  • 2.
    Bechlioulis, Charalampos P.
    et al.
    Natl Tech Univ Athens, Sch Mech Engn, Control Syst Lab, Zografos 15780, Greece..
    Heshmati-alamdari, Shahab
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Karras, George C.
    Natl Tech Univ Athens, Sch Mech Engn, Control Syst Lab, Zografos 15780, Greece..
    Kyriakopoulos, Kostas J.
    Natl Tech Univ Athens, Sch Mech Engn, Control Syst Lab, Zografos 15780, Greece..
    Robust Image-Based Visual Servoing With Prescribed Performance Under Field of View Constraints. 2019. In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 35, no. 4, p. 1063-1070. Article in journal (Refereed)
    Abstract [en]

    In this paper, we propose a visual servoing scheme that imposes predefined performance specifications on the image feature coordinate errors and satisfies the visibility constraints that inherently arise owing to the camera's limited field of view, despite the inevitable calibration and depth measurement errors. Its efficiency is demonstrated via comparative experimental and simulation studies.
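The "prescribed performance" machinery referenced above is typically built from a decaying envelope and an error transformation (a generic sketch with illustrative parameters, not the authors' exact design): the raw error must stay strictly inside the envelope, and the log-ratio transformation below is finite exactly when it does.

```python
import math

# Sketch of the standard prescribed-performance envelope (illustrative
# parameters): rho(t) decays exponentially from rho0 to rho_inf, and the
# raw error e(t) must satisfy -rho(t) < e(t) < rho(t) at all times.

def rho(t, rho0=1.0, rho_inf=0.05, decay=2.0):
    """Performance envelope: exponential decay to a steady-state bound."""
    return (rho0 - rho_inf) * math.exp(-decay * t) + rho_inf

def transformed_error(e, t):
    """Log-ratio transformation: finite iff e is strictly inside the funnel."""
    r = rho(t)
    return 0.5 * math.log((r + e) / (r - e))

# An error trajectory that decays faster than the funnel stays admissible.
for t in [0.0, 0.5, 1.0, 2.0]:
    e = 0.5 * math.exp(-3.0 * t)
    assert abs(e) < rho(t)
    transformed_error(e, t)  # finite; no math domain error
```

A controller that keeps the transformed error bounded thereby enforces both the transient bound and the steady-state bound on the raw error.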

  • 3.
    Bekiroglu, Yasemin
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Laaksonen, Janne
    Department of Information Technology, Lappeenranta University of Technology, Finland.
    Jorgensen, Jimmy Alison
    The Maersk Mc-Kinney Moller Institute University of Southern Denmark, Denmark.
    Kyrki, Ville
    the Department of Information Technology, Lappeenranta University of Technology, Finland.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Assessing Grasp Stability Based on Learning and Haptic Data. 2011. In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 27, no. 3, p. 616-629. Article in journal (Refereed)
    Abstract [en]

    An important ability of a robot that interacts with the environment and manipulates objects is to deal with the uncertainty in sensory data. Sensory information is necessary to, for example, perform online assessment of grasp stability. We present methods to assess grasp stability based on haptic data and machine-learning methods, including AdaBoost, support vector machines (SVMs), and hidden Markov models (HMMs). In particular, we study the effect of different sensory streams on grasp stability. This includes object information such as shape; grasp information such as approach vector; tactile measurements from fingertips; and joint configuration of the hand. Sensory knowledge affects the success of the grasping process both in the planning stage (before a grasp is executed) and during the execution of the grasp (closed-loop online control). In this paper, we study both of these aspects. We propose a probabilistic learning framework to assess grasp stability and demonstrate that knowledge about grasp stability can be inferred using information from tactile sensors. Experiments on both simulated and real data are shown. The results indicate that the learning approach is applicable in realistic scenarios, which opens a number of interesting avenues for future research.
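A minimal stand-in for the classification step described above (a nearest-centroid rule on synthetic tactile feature vectors; the paper uses AdaBoost, SVMs, and HMMs, and real sensor data):

```python
# Stand-in classifier: label a grasp stable/unstable by the nearest class
# centroid in a hand-picked tactile feature space. Feature vectors and
# training data below are hypothetical.

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train(stable, unstable):
    return {"stable": centroid(stable), "unstable": centroid(unstable)}

def classify(model, features):
    return min(model, key=lambda label: sq_dist(model[label], features))

# Hypothetical features: [mean fingertip pressure, contact count].
stable_grasps = [[0.9, 3.0], [0.8, 3.0], [1.0, 4.0]]
unstable_grasps = [[0.2, 1.0], [0.3, 2.0], [0.1, 1.0]]
model = train(stable_grasps, unstable_grasps)
print(classify(model, [0.85, 3.0]))   # stable
print(classify(model, [0.15, 1.0]))   # unstable
```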

  • 4. Bohg, Jeannette
    et al.
    Hausman, Karol
    Sankaran, Bharath
    Brock, Oliver
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Schaal, Stefan
    Sukhatme, Gaurav S.
    Interactive Perception: Leveraging Action in Perception and Perception in Action. 2017. In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 33, no. 6, p. 1273-1291. Article in journal (Refereed)
    Abstract [en]

    Recent approaches in robot perception follow the insight that perception is facilitated by interaction with the environment. These approaches are subsumed under the term Interactive Perception (IP). This view of perception provides the following benefits. First, interaction with the environment creates a rich sensory signal that would otherwise not be present. Second, knowledge of the regularity in the combined space of sensory data and action parameters facilitates the prediction and interpretation of the sensory signal. In this survey, we postulate this as a principle for robot perception and collect evidence in its support by analyzing and categorizing existing work in this area. We also provide an overview of the most important applications of IP. We close this survey by discussing remaining open questions. With this survey, we hope to help define the field of Interactive Perception and to provide a valuable resource for future research.

  • 5. Bohg, Jeannette
    et al.
    Morales, Antonio
    Asfour, Tamim
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Data-Driven Grasp Synthesis - A Survey. 2014. In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 30, no. 2, p. 289-309. Article in journal (Refereed)
    Abstract [en]

    We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar, or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis technique. In the case of known objects, we concentrate on the approaches that are based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of a similarity matching to a set of previously encountered objects. Finally, for the approaches dealing with unknown objects, the core part is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations.

  • 6.
    Bore, Nils
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Ekekrantz, Johan
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Jensfelt, Patric
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Folkesson, John
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Detection and Tracking of General Movable Objects in Large Three-Dimensional Maps. 2019. In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 35, no. 1, p. 231-247. Article in journal (Refereed)
    Abstract [en]

    This paper studies the problem of detection and tracking of general objects with semistatic dynamics observed by a mobile robot moving in a large environment. A key problem is that due to the environment scale, the robot can only observe a subset of the objects at any given time. Since some time passes between observations of objects in different places, the objects might be moved when the robot is not there. We propose a model for this movement in which the objects typically only move locally, but with some small probability they jump longer distances through what we call global motion. For filtering, we decompose the posterior over local and global movements into two linked processes. The posterior over the global movements and measurement associations is sampled, while we track the local movement analytically using Kalman filters. This novel filter is evaluated on point cloud data gathered autonomously by a mobile robot over an extended period of time. We show that tracking jumping objects is feasible, and that the proposed probabilistic treatment outperforms previous methods when applied to real world data. The key to efficient probabilistic tracking in this scenario is focused sampling of the object posteriors.
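The "local movement" half of the filter described above is tracked analytically with Kalman filters; a minimal 1-D constant-position version (illustrative noise parameters, not the paper's) looks like:

```python
# Minimal 1-D Kalman filter: constant-position object with process noise q
# (the object may drift a little) and measurement noise r.

def kalman_step(x, p, z, q=0.01, r=0.25):
    """One predict + update cycle; returns the new mean and variance."""
    p_pred = p + q                  # predict: position unchanged, variance grows
    k = p_pred / (p_pred + r)       # Kalman gain
    x_new = x + k * (z - x)         # correct with measurement z
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0                     # initial belief about object position
for z in [1.2, 0.9, 1.1, 1.0]:      # noisy observations near 1.0
    x, p = kalman_step(x, p, z)
print(round(x, 2), round(p, 3))     # estimate moves toward 1.0, variance shrinks
```

In the paper this analytic tracking is combined with sampling over the discrete "global motion" jumps and measurement associations, which a plain Kalman filter cannot represent.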

  • 7.
    Colledanchise, Michele
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    How Behavior Trees Modularize Hybrid Control Systems and Generalize Sequential Behavior Compositions, the Subsumption Architecture, and Decision Trees. 2017. In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 33, no. 2, p. 372-389. Article in journal (Refereed)
  • 8.
    Dimarogonas, Dimos V.
    et al.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Kyriakopoulos, K. J.
    Connectedness Preserving Distributed Swarm Aggregation for Multiple Kinematic Robots. 2008. In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 24, no. 5, p. 1213-1223. Article in journal (Refereed)
    Abstract [en]

    A distributed swarm aggregation algorithm is developed for a team of multiple kinematic agents. Specifically, each agent is assigned a control law, which is the sum of two elements: a repulsive potential field, which is responsible for the collision avoidance objective, and an attractive potential field, which forces the agents to converge to a configuration where they are close to each other. Furthermore, the attractive potential field forces the agents that are initially located within the sensing radius of an agent to remain within this area for all time. In this way, the connectivity properties of the initially formed communication graph are rendered invariant for the trajectories of the closed-loop system. It is shown that under the proposed control law, agents converge to a configuration where each agent is located at a bounded distance from each of its neighbors. The results are also extended to the case of nonholonomic kinematic unicycle-type agents and to the case of dynamic edge addition. In the latter case, we derive a smaller bound on the swarm size than in the static case.
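The attraction-plus-repulsion structure of the control law can be illustrated with two agents on a line (illustrative gains and potential shapes, not the paper's exact fields):

```python
# Each agent's input is an attractive term pulling it toward its neighbor
# plus a short-range repulsive term preventing collision. The pair settles
# at the distance where the two terms balance: k_att*d = k_rep/d^2.

def control(xi, xj, k_att=1.0, k_rep=0.1):
    d = xi - xj
    return -k_att * d + k_rep * d / abs(d) ** 3   # attraction + repulsion

x1, x2 = 0.0, 5.0
dt = 0.05
for _ in range(2000):                  # forward-Euler integration
    u1, u2 = control(x1, x2), control(x2, x1)
    x1, x2 = x1 + dt * u1, x2 + dt * u2

# Predicted equilibrium spacing: d = (k_rep / k_att) ** (1/3)
print(round(abs(x1 - x2), 3))
```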

  • 9.
    Ekvall, Staffan
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Aarno, Daniel
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Online task recognition and real-time adaptive assistance for computer-aided machine control. 2006. In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 22, no. 5, p. 1029-1033. Article in journal (Refereed)
    Abstract [en]

    Segmentation and recognition of operator-generated motions are commonly used to provide appropriate assistance during task execution in teleoperative and human-machine collaborative settings. The assistance is usually provided in a virtual fixture framework where the level of compliance can be altered online, thus improving the performance in terms of execution time and overall precision. However, the fixtures are typically inflexible, resulting in degraded performance in cases of unexpected obstacles or incorrect fixture models. In this paper, we present a method for online task tracking and propose the use of adaptive virtual fixtures that can cope with these problems. Here, rather than executing a predefined plan, the operator has the ability to avoid unforeseen obstacles and deviate from the model. To allow this, the probability of following a certain trajectory (subtask) is estimated and used to automatically adjust the compliance, thus deciding online how to fixture the movement.
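The adaptation rule can be sketched as follows (an illustrative softmin over trajectory-matching distances; the paper estimates subtask probabilities from the operator's motion, not from these made-up numbers):

```python
import math

# The probability of each subtask is taken from how well recent motion
# matches its trajectory, and the fixture stiffness is scaled by the
# confidence in the most likely subtask.

def subtask_probs(distances, sigma=1.0):
    """Turn trajectory-matching distances into probabilities (softmin)."""
    weights = [math.exp(-d / sigma) for d in distances]
    total = sum(weights)
    return [w / total for w in weights]

def fixture_stiffness(probs, k_max=1.0):
    """High confidence -> stiff fixture; ambiguity -> compliant fixture."""
    return k_max * max(probs)

# Motion clearly matches subtask 0: the fixture stays stiff.
confident = subtask_probs([0.1, 3.0, 4.0])
# Motion is ambiguous between subtasks: the fixture softens, so the
# operator can deviate from the model.
ambiguous = subtask_probs([1.0, 1.1, 1.2])
print(round(fixture_stiffness(confident), 2))
print(round(fixture_stiffness(ambiguous), 2))
```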

  • 10. Feix, Thomas
    et al.
    Romero, Javier
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Schmiedmayer, Heinz-Bodo
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    A Metric for Comparing the Anthropomorphic Motion Capability of Artificial Hands. 2013. In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 29, no. 1, p. 82-93. Article in journal (Refereed)
    Abstract [en]

    We propose a metric for comparing the anthropomorphic motion capability of robotic and prosthetic hands. The metric is based on the evaluation of how many different postures or configurations a hand can perform by studying the reachable set of fingertip poses. To define a benchmark for comparison, we first generate data with human subjects based on an extensive grasp taxonomy. We then develop a methodology for comparison using generative, nonlinear dimensionality reduction techniques. We assess the performance of different hands with respect to the human hand and with respect to each other. The method can be used to compare other types of kinematic structures.

  • 11.
    Folkesson, John
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik
    Closing the Loop With Graphical SLAM. 2007. In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 23, no. 4, p. 731-741. Article in journal (Refereed)
    Abstract [en]

    The problem of simultaneous localization and mapping (SLAM) is addressed using a graphical method. The main contributions are a computational complexity that scales well with the size of the environment, the elimination of most of the linearization inaccuracies, and a more flexible and robust data association. We also present a detection criterion for closing loops. We show how multiple topological constraints can be imposed on the graphical solution by a process of coarse fitting followed by fine tuning. The coarse fitting is performed using an approximate system. This approximate system can be shown to possess all the local symmetries. Observations made during the SLAM process often contain symmetries, that is to say, directions of change to the state space that do not affect the observed quantities. It is important that these directions do not shift as we approximate the system by, for example, linearization. The approximate system is both linear and block diagonal. This makes it a very simple system to work with, especially when imposing global topological constraints on the solution. These global constraints are nonlinear. We show how these constraints can be discovered automatically. We develop a method of testing multiple hypotheses for data matching using the graph. This method is derived from statistical theory and only requires simple counting of observations. The central insight is to examine the probability of not observing the same features on a return to a region. We present results with data from an outdoor scenario using a SICK laser scanner.

  • 12.
    Folkesson, John
    et al.
    Massachusetts Institute of Technology, Cambridge, MA.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Christensen, Henrik I.
    Georgia Institute of Technology, Atlanta, GA.
    The M-space feature representation for SLAM. 2007. In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 23, no. 5, p. 1024-1035. Article in journal (Refereed)
    Abstract [en]

    In this paper, a new feature representation for simultaneous localization and mapping (SLAM) is discussed. The representation addresses feature symmetries and constraints explicitly to make the basic model numerically robust. In previous SLAM work, complete initialization of features is typically performed prior to introduction of a new feature into the map. This results in delayed use of new data. To allow early use of sensory data, the new feature representation addresses the use of features that initially have been partially observed. This is achieved by explicitly modelling the subspace of a feature that has been observed. In addition to accounting for the special properties of each feature type, the commonalities can be exploited in the new representation to create a feature framework that allows for interchanging of SLAM algorithms, sensors, and features. Experimental results are presented using a low-cost Web-cam, a laser range scanner, and combinations thereof.

  • 13. Gustavi, Tove
    et al.
    Hu, Xiaoming
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Observer-Based Leader-Following Formation Control Using Onboard Sensor Information. 2008. In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 24, no. 6, p. 1457-1462. Article in journal (Refereed)
    Abstract [en]

    In this paper, leader-following formation control for mobile multiagent systems with limited sensor information is studied. The control algorithms developed require information available from onboard sensors only; in particular, the measurement of the leader (neighbor) speed is not needed. Instead, an observer is designed to estimate this speed. With the proposed control algorithms as building blocks, many complex formations can be obtained.
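The speed-estimation idea can be illustrated with a second-order observer that sees only the leader's position (illustrative gains and a constant leader speed; not the authors' observer, which works from onboard relative measurements):

```python
# The observer's internal speed state converges to the true leader speed,
# which is never measured directly; only the position error drives it.

def observe(x_meas, x_hat, v_hat, dt, l1=2.0, l2=1.0):
    e = x_meas - x_hat
    x_hat += dt * (v_hat + l1 * e)   # position estimate, corrected by e
    v_hat += dt * (l2 * e)           # speed estimate, driven only by e
    return x_hat, v_hat

dt, v_leader = 0.01, 1.0
x_leader, x_hat, v_hat = 0.0, 0.0, 0.0
for _ in range(3000):                # 30 s of simulated time
    x_leader += dt * v_leader        # leader moves at an unknown speed
    x_hat, v_hat = observe(x_leader, x_hat, v_hat, dt)
print(round(v_hat, 3))               # close to the true speed 1.0
```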

  • 14.
    Hang, Kaiyu
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Li, Miao
    EPFL.
    Stork, Johannes A.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bekiroglu, Yasemin
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Pokorny, Florian T.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Billard, Aude
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hierarchical Fingertip Space: A Unified Framework for Grasp Planning and In-Hand Grasp Adaptation. 2016. In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 32, no. 4, p. 960-972, article id 7530865. Article in journal (Refereed)
    Abstract [en]

    We present a unified framework for grasp planning and in-hand grasp adaptation using visual, tactile and proprioceptive feedback. The main objective of the proposed framework is to enable fingertip grasping by addressing problems of changed weight of the object, slippage and external disturbances. For this purpose, we introduce the Hierarchical Fingertip Space (HFTS) as a representation enabling optimization for both efficient grasp synthesis and online finger gaiting. Grasp synthesis is followed by a grasp adaptation step that consists of both grasp force adaptation through impedance control and regrasping/finger gaiting when the former is not sufficient. Experimental evaluation is conducted on an Allegro hand mounted on a Kuka LWR arm.

  • 15.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. Chalmers, Sweden.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Barrientos, Francisco Eli Vina
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    An Adaptive Control Approach for Opening Doors and Drawers Under Uncertainties. 2016. In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 32, no. 1, p. 161-175. Article in journal (Refereed)
    Abstract [en]

    We study the problem of robot interaction with mechanisms that afford one degree of freedom motion, e.g., doors and drawers. We propose a methodology for simultaneous compliant interaction and estimation of constraints imposed by the joint. Our method requires no prior knowledge of the mechanisms' kinematics, including the type of joint, prismatic or revolute. The method consists of a velocity controller that relies on force/torque measurements and estimation of the motion direction, the distance, and the orientation of the rotational axis. It is suitable for velocity controlled manipulators with force/torque sensor capabilities at the end-effector. Forces and torques are regulated within given constraints, while the velocity controller ensures that the end-effector of the robot moves with a task-related desired velocity. We prove that the estimates converge to the true values under valid assumptions on the grasp, and we derive error bounds for setups with inaccuracies in control, measurements, or modeling. The method is evaluated in different scenarios involving opening a representative set of door and drawer mechanisms found in household environments.

  • 16.
    Meng, Ziyang
    et al.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Dimarogonas, Dimos V.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Johansson, Karl Henrik
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Leader-Follower Coordinated Tracking of Multiple Heterogeneous Lagrange Systems Using Continuous Control. 2014. In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 30, no. 3, p. 739-745. Article in journal (Refereed)
    Abstract [en]

    In this paper, we study the coordinated tracking problem of multiple heterogeneous Lagrange systems with a dynamic leader. Only nominal parameters of the Lagrange dynamics are assumed to be available. Under local interaction constraints, i.e., the followers only have access to their neighbors' information and the leader is a neighbor of only a subset of the followers, continuous coordinated tracking algorithms with adaptive coupling gains are proposed. Besides achieving chattering-free control, the proposed algorithm does not require the neighbors' generalized coordinate derivatives. Global asymptotic coordinated tracking is guaranteed, and the tracking errors between the followers and the leader are shown to converge to zero. Examples are given to validate the effectiveness of the proposed algorithms.

  • 17. Montijano, Eduardo
    et al.
    Thunberg, Johan
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Hu, Xiaoming
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Sagüés, Carlos
    Epipolar Visual Servoing for Multirobot Distributed Consensus. 2013. In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 29, no. 5, p. 1212-1225. Article in journal (Refereed)
    Abstract [en]

    In this paper, we give a distributed solution to the problem of making a team of nonholonomic robots reach consensus about their orientations using monocular cameras. We consider a scheme where the motions of the robots are decided using nearest neighbor rules. Each robot is equipped with a camera and can only exchange visual information with a subset of the other robots. The main contribution of this paper is a new controller that uses the epipoles that are computed from the images provided by neighboring robots, eventually reaching consensus in their orientations without the necessity of directly observing each other. In addition, the controller only requires a partial knowledge of the calibration of the cameras in order to achieve the desired configuration. We also demonstrate that the controller is robust to changes in the topology of the network and we use this robustness to propose strategies to reduce the computational load of the robots. Finally, we test our controller in simulations using a virtual environment and with real robots moving in indoor and outdoor scenarios.

  • 18. Romero, Javier
    et al.
    Feix, Thomas
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kjellström, Hedvig
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Extracting Postural Synergies for Robotic Grasping. 2013. In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 29, no. 6, p. 1342-1352. Article in journal (Refereed)
    Abstract [en]

    We address the problem of representing and encoding human hand motion data using nonlinear dimensionality reduction methods. We build our work on the notion of postural synergies being typically based on a linear embedding of the data. In addition to addressing the encoding of postural synergies using nonlinear methods, we relate our work to control strategies of combined reaching and grasping movements. We show the drawbacks of the (commonly made) causality assumption and propose methods that model the data as being generated from an inferred latent manifold to cope with the problem. Another important contribution is a thorough analysis of the parameters used in the employed dimensionality reduction techniques. Finally, we provide an experimental evaluation that shows how the proposed methods outperform the standard techniques, both in terms of recognition and generation of motion patterns.

  • 19.
    Song, Dan
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Hübner, Kai
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Task-Based Robot Grasp Planning Using Probabilistic Inference. 2015. In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 31, no. 3, p. 546-561. Article in journal (Refereed)
    Abstract [en]

    Grasping and manipulating everyday objects in a goal-directed manner is an important ability of a service robot. The robot needs to reason about task requirements and ground these in the sensorimotor information. Grasping and interaction with objects are challenging in real-world scenarios, where sensorimotor uncertainty is prevalent. This paper presents a probabilistic framework for the representation and modeling of robot-grasping tasks. The framework consists of Gaussian mixture models for generic data discretization, and discrete Bayesian networks for encoding the probabilistic relations among various task-relevant variables, including object and action features as well as task constraints. We evaluate the framework using a grasp database generated in a simulated environment including a human and two robot hand models. The generative modeling approach allows the prediction of grasping tasks given uncertain sensory data, as well as object and grasp selection in a task-oriented manner. Furthermore, the graphical model framework provides insights into dependencies between variables and features relevant for object grasping.
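The inference direction described above can be shown in miniature (hypothetical numbers and a single discrete feature; the paper uses Gaussian mixture discretization and a full Bayesian network over many task-relevant variables):

```python
# Toy Bayes' rule: infer the task from one discrete grasp feature.

priors = {"pour": 0.5, "hand-over": 0.5}
# P(grasp position | task): pouring favours grasping the side of the object.
likelihood = {
    "pour":      {"side": 0.8, "top": 0.2},
    "hand-over": {"side": 0.3, "top": 0.7},
}

def posterior(feature):
    """P(task | feature) via Bayes' rule with normalization."""
    unnorm = {t: priors[t] * likelihood[t][feature] for t in priors}
    z = sum(unnorm.values())
    return {t: p / z for t, p in unnorm.items()}

post = posterior("side")
print({t: round(p, 3) for t, p in post.items()})  # "pour" is more likely
```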

  • 20.
    Strandberg, Morten
    et al.
    KTH, School of Electrical Engineering (EES), Automatic Control.
    Wahlberg, Bo
    KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Electrical Engineering (EES), Automatic Control.
    A method for grasp evaluation based on disturbance force rejection (2006). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 22, no. 3, p. 461-469. Article in journal (Refereed)
    Abstract [en]

    This paper presents a method for grasp evaluation. It is based on the ability of the grasp to reject disturbance forces. The procedure takes the geometry of the object into account, and it is also possible to incorporate task-oriented information. The evaluation criterion is formulated as a min-max optimization problem, for which an efficient algorithm is proposed and analyzed. The result of this algorithm is independent of scale and choice of reference frame, and can easily be visualized as a surface in the force space. The method is illustrated with several examples.
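The min-max idea can be illustrated in a heavily simplified setting: a planar, frictionless grasp with unit total contact force, where the quality is the worst case, over all disturbance directions, of the best opposing force the contacts can supply. This is only a sketch of the criterion's structure; the paper's formulation additionally handles object geometry and task information:

```python
import numpy as np

def rejection_quality(forces, n_dirs=720):
    """Worst-case disturbance rejection of a planar, frictionless grasp.

    `forces` is an (n_contacts, 2) array of unit contact force
    directions.  With total contact force bounded by 1, the force the
    grasp can oppose along a unit disturbance d is max_i (-d . f_i);
    the quality is the minimum of this over all directions.  A positive
    quality means every disturbance direction can be resisted."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False)
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    # resistance[j] = best opposing force along disturbance dirs[j]
    resistance = (-dirs @ forces.T).max(axis=1)
    return resistance.min()

# Three inward contact normals spaced 120 degrees apart.
ang = np.deg2rad([0.0, 120.0, 240.0])
forces = np.stack([np.cos(ang), np.sin(ang)], axis=1)
print(round(rejection_quality(forces), 3))  # → 0.5
```

The symmetric three-contact grasp resists any disturbance with at least half the available contact force, which matches the analytic worst case cos(60°) = 0.5 at the bisector directions.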

  • 21.
    Varava, Anastasiia
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Pokorny, Florian T.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Caging Grasps of Rigid and Partially Deformable 3-D Objects With Double Fork and Neck Features (2016). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 32, no. 6, p. 1479-1497. Article in journal (Refereed)
    Abstract [en]

    Caging provides an alternative to point-contact-based rigid grasping, relying on reasoning about the global free configuration space of the object under consideration. While substantial progress has been made toward the analysis, verification, and synthesis of cages of polygonal objects in the plane, the use of caging as a tool for manipulating general complex objects in 3-D remains challenging. In this work, we introduce the problem of caging rigid and partially deformable 3-D objects that exhibit geometric features we call double forks and necks. Our approach is based on the linking number, a classical topological invariant, which allows us to determine sufficient conditions for caging objects with these features, even when the object under consideration is partially deformable under a set of neck- or double-fork-preserving deformations. We present synthesis and verification algorithms, along with demonstrations of applying these algorithms to cage 3-D meshes.
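The topological invariant at the heart of this line of work is easy to compute numerically. The sketch below discretizes the Gauss linking integral, Lk = (1/4π) ∮∮ (r₁−r₂)·(dr₁×dr₂)/|r₁−r₂|³, with a midpoint rule over two closed polygonal curves; a nonzero value certifies that the loops cannot be separated. This particular quadrature is an illustration, not the paper's algorithm, which uses the linking number as part of sufficient caging conditions:

```python
import numpy as np

def linking_number(curve_a, curve_b):
    """Approximate the Gauss linking integral of two closed polygonal
    curves, each given as an (n, 3) array of vertices (midpoint-rule
    quadrature over all segment pairs)."""
    sa = np.roll(curve_a, -1, axis=0) - curve_a          # segment vectors
    sb = np.roll(curve_b, -1, axis=0) - curve_b
    ma = curve_a + 0.5 * sa                              # segment midpoints
    mb = curve_b + 0.5 * sb
    r = ma[:, None, :] - mb[None, :, :]                  # pairwise offsets
    cross = np.cross(sa[:, None, :], sb[None, :, :])     # dr1 x dr2
    dist3 = np.linalg.norm(r, axis=2) ** 3
    return np.sum(np.einsum("ijk,ijk->ij", r, cross) / dist3) / (4.0 * np.pi)

# Two circles forming a Hopf link (linking number +/- 1).
t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
a = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
b = np.stack([1.0 + np.cos(t), np.zeros_like(t), np.sin(t)], axis=1)
print(round(abs(linking_number(a, b))))  # → 1
```

Intuitively, if a loop formed by the manipulator links a loop through a double fork or around a neck of the object, the object cannot escape without a deformation that destroys that feature, which is exactly the class of deformations the paper excludes.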

  • 22.
    Ögren, Petter
    et al.
    Mech. & Aerosp. Eng. Dept., Princeton Univ., NJ, USA.
    Leonard, Naomi Ehrich
    Mech. & Aerosp. Eng. Dept., Princeton Univ., NJ, USA.
    A Convergent Dynamic Window Approach to Obstacle Avoidance (2005). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 21, no. 2, p. 188-195. Article in journal (Refereed)
    Abstract [en]

    The dynamic window approach (DWA) is a well-known navigation scheme developed by Fox et al. and extended by Brock and Khatib. It is safe by construction, and has been shown to perform very efficiently in experimental setups. However, one can construct examples where the proposed scheme fails to attain the goal configuration. What has been lacking is a theoretical treatment of the algorithm's convergence properties. Here we present such a treatment by merging the ideas of the DWA with the convergent, but less performance-oriented, scheme suggested by Rimon and Koditschek. Viewing the DWA as a model predictive control (MPC) method and using the control Lyapunov function (CLF) framework of Rimon and Koditschek, we draw inspiration from an MPC/CLF framework put forth by Primbs to propose a version of the DWA that is tractable and convergent.
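For context, the baseline scheme that this paper makes convergent works as follows: at each control cycle, sample velocity pairs reachable under the acceleration limits within one time step (the "dynamic window"), forward-simulate each candidate over a short horizon, discard those that collide, and score the rest. The sketch below is a plain DWA step with illustrative gains and a simplified scoring function, not the paper's MPC/CLF-based variant:

```python
import numpy as np

def dwa_step(pose, v, w, goal, obstacles, dt=0.1, horizon=1.0,
             v_max=1.0, w_max=2.0, a_v=0.5, a_w=1.0, robot_radius=0.2):
    """One step of a plain dynamic window approach for a unicycle robot.
    pose = (x, y, heading); obstacles = list of (x, y) points.
    Gains and the scoring terms are illustrative."""
    best, best_score = (0.0, 0.0), -np.inf
    # Dynamic window: velocities reachable within one time step.
    for v_c in np.linspace(max(0.0, v - a_v * dt), min(v_max, v + a_v * dt), 7):
        for w_c in np.linspace(max(-w_max, w - a_w * dt),
                               min(w_max, w + a_w * dt), 11):
            x, y, th = pose
            ok = True
            for _ in range(int(horizon / dt)):      # forward simulation
                x += v_c * np.cos(th) * dt
                y += v_c * np.sin(th) * dt
                th += w_c * dt
                if any(np.hypot(x - ox, y - oy) < robot_radius
                       for ox, oy in obstacles):
                    ok = False                      # candidate collides
                    break
            if not ok:
                continue
            progress = -np.hypot(goal[0] - x, goal[1] - y)
            score = progress + 0.1 * v_c            # prefer fast progress
            if score > best_score:
                best, best_score = (v_c, w_c), score
    return best

v, w = dwa_step((0.0, 0.0, 0.0), 0.0, 0.0, goal=(2.0, 0.0),
                obstacles=[(1.0, 0.5)], dt=0.1)
print(v > 0.0)  # the chosen command moves toward the goal
```

Greedy per-step scoring like this is exactly what can fail to reach the goal in the counterexamples the paper mentions; replacing the score with a control Lyapunov function constrained as in the MPC/CLF framework is what yields the convergence guarantee.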
