1 - 30 of 30
  • 1.
    Aarno, Daniel
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Motion intention recognition in robot assisted applications2008In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 56, no 8, p. 692-705Article in journal (Refereed)
    Abstract [en]

    Acquiring, representing and modelling human skills is one of the key research areas in teleoperation, programming-by-demonstration and human-machine collaborative settings. The problems are challenging mainly because of the lack of a general mathematical model to describe human skills. One of the common approaches is to divide the task that the operator is executing into several subtasks or low-level subsystems in order to provide manageable modelling. In this paper we consider the use of a Layered Hidden Markov Model (LHMM) to model human skills. We evaluate a gesteme classifier that classifies motions into basic action-primitives, or gestemes. The gesteme classifiers are then used in a LHMM to model a teleoperated task. The proposed methodology uses three different HMM models at the gesteme level: one-dimensional HMM, multi-dimensional HMM and multidimensional HMM with Fourier transform. The online and off-line classification performance of these three models is evaluated with respect to the number of gestemes, the influence of the number of training samples, the effect of noise and the effect of the number of observation symbols. We also apply the LHMM to data recorded during the execution of a trajectory tracking task in 2D and 3D with a mobile manipulator in order to provide qualitative as well as quantitative results for the proposed approach. The results indicate that the LHMM is suitable for modelling teleoperative trajectory-tracking tasks and that the difference in classification performance between one and multidimensional HMMs for gesteme classification is small. It can also be seen that the LHMM is robust with respect to misclassifications in the underlying gesteme classifiers.
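
The layered structure bottoms out in per-gesteme HMM classifiers. As a rough sketch of that lowest layer (the model names, states, and probabilities below are invented for illustration, not taken from the paper), a discrete HMM forward pass scores an observation sequence under each candidate gesteme model and picks the most likely one:

```python
# Minimal discrete-HMM forward algorithm: score an observation sequence
# under each candidate gesteme model, pick the most likely model.
# All model parameters below are illustrative, not from the paper.

def forward_likelihood(obs, pi, A, B):
    """P(obs | model) via the forward algorithm.
    pi: initial state probs, A: transition matrix, B: emission matrix."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[t] * A[t][s] for t in range(n)) * B[s][o]
                 for s in range(n)]
    return sum(alpha)

def classify_gesteme(obs, models):
    """Return the name of the model with the highest sequence likelihood."""
    return max(models, key=lambda m: forward_likelihood(obs, *models[m]))

# Two toy 2-state models over a binary observation alphabet: one with
# peaked emissions ("smooth") and one with uniform emissions ("jitter").
models = {
    "smooth": ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [[0.9, 0.1], [0.1, 0.9]]),
    "jitter": ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [[0.5, 0.5], [0.5, 0.5]]),
}
print(classify_gesteme([0, 0, 0, 0], models))
```

In the paper's layered setup, the winning gesteme labels would in turn form the observation sequence of the task-level HMM.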

  • 2.
    Asif, Rizwan
    et al.
    KTH. SMME, RISE Lab, NUST Main Campus, Sect H-12, Islamabad, Pakistan. Natl Univ Sci & Technol, Sch Mech & Mfg Engn, RISE Lab, Islamabad, Pakistan.
    Athar, Ali
    Tech Univ Munich, Arcisstr 21, D-80333 Munich, Germany.
    Mehmood, Faisal
    SMME, RISE Lab, NUST Main Campus, Sect H-12, Islamabad, Pakistan.
    Islam, Fahad
    Robot Inst, 5000 Forbes Ave, Pittsburgh, PA, USA.
    Ayaz, Yasar
    SMME, RISE Lab, NUST Main Campus, Sect H-12, Islamabad, Pakistan. Natl Univ Sci & Technol, Sch Mech & Mfg Engn, RISE Lab, Islamabad, Pakistan.
    Whole-body motion and footstep planning for humanoid robots with multi-heuristic search2019In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 116, p. 51-63Article in journal (Refereed)
    Abstract [en]

    In this paper, we present a motion planning framework for humanoid robots that combines whole-body motions as well as footsteps under a quasi-static flat ground plane assumption. Traditionally, these two have been treated as separate research domains. One of the major challenges behind whole-body motion planning is the high DoF (Degrees of Freedom) nature of the problem, in addition to strict constraints on obstacle avoidance and stability. On the other hand, footstep planning on its own is a comparatively simpler problem due to the low DoF search space, but coalescing it into a larger framework that includes whole-body motion planning adds further complexity in reaching a solution within a suitable time frame that satisfies all the constraints. In this work, we treat motion planning as a graph search problem, and employ Shared Multi-heuristic A* (SMHA*) to generate efficient, stable and collision-free motion plans given only the starting state of the robot and the desired end-effector pose.

  • 3.
    Bohg, Jeannette
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Learning grasping points with shape context2010In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 58, no 4, p. 362-377Article in journal (Refereed)
    Abstract [en]

    This paper presents work on vision based robotic grasping. The proposed method adopts a learning framework where prototypical grasping points are learnt from several examples and then used on novel objects. For representation purposes, we apply the concept of shape context and for learning we use a supervised learning approach in which the classifier is trained with labelled synthetic images. We evaluate and compare the performance of linear and non-linear classifiers. Our results show that a combination of a descriptor based on shape context with a non-linear classification algorithm leads to a stable detection of grasping points for a variety of objects.
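
The shape context idea can be sketched as a log-polar histogram of where the remaining contour points lie relative to a reference point. The binning parameters below are illustrative, not the paper's configuration:

```python
import math

def shape_context(points, index, n_r=3, n_theta=4, r_max=2.0):
    """Log-polar histogram of where the other points lie relative to
    points[index]. Returns the bin counts as a flat list of length
    n_r * n_theta. Parameters here are toy values for illustration."""
    cx, cy = points[index]
    hist = [0] * (n_r * n_theta)
    for j, (x, y) in enumerate(points):
        if j == index:
            continue
        dx, dy = x - cx, y - cy
        r = math.hypot(dx, dy)
        theta = math.atan2(dy, dx) % (2 * math.pi)
        # log-spaced radial bins up to r_max; farther points are clamped
        r_bin = min(int(math.log1p(r) / math.log1p(r_max) * n_r), n_r - 1)
        t_bin = int(theta / (2 * math.pi) * n_theta) % n_theta
        hist[r_bin * n_theta + t_bin] += 1
    return hist

# Descriptor of the bottom-left corner of a unit square:
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(shape_context(square, 0))
```

In the learning framework above, such histograms computed at candidate points would be the inputs to the trained classifier.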

  • 4.
    Bore, Nils
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ambrus, Rares
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Efficient retrieval of arbitrary objects from long-term robot observations2017In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 91, p. 139-150Article in journal (Refereed)
    Abstract [en]

    We present a novel method for efficient querying and retrieval of arbitrarily shaped objects from large amounts of unstructured 3D point cloud data. Our approach first performs a convex segmentation of the data after which local features are extracted and stored in a feature dictionary. We show that the representation allows efficient and reliable querying of the data. To handle arbitrarily shaped objects, we propose a scheme which allows incremental matching of segments based on similarity to the query object. Further, we adjust the feature metric based on the quality of the query results to improve results in a second round of querying. We perform extensive qualitative and quantitative experiments on two datasets for both segmentation and retrieval, validating the results using ground truth data. Comparison with other state of the art methods further supports the validity of the proposed method. Finally, we also investigate how the density and distribution of the local features within the point clouds influence the quality of the results.
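
The querying step can be illustrated as a bag-of-local-features lookup: each stored segment is summarized by a histogram over a quantized feature dictionary and ranked by similarity to the query histogram. The segment names and histograms below are made up; the paper's actual features and metric differ in detail:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature histograms."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_segments(query_hist, dictionary):
    """Rank stored segments by histogram similarity to the query object."""
    return sorted(dictionary, key=lambda name: -cosine(query_hist, dictionary[name]))

# Toy "feature dictionary": histograms over 4 quantized local features.
dictionary = {
    "chair_seg": [9, 1, 0, 2],
    "mug_seg":   [0, 8, 7, 1],
    "lamp_seg":  [2, 2, 2, 2],
}
query = [10, 0, 1, 2]  # resembles the chair segment's histogram
print(rank_segments(query, dictionary))
```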

  • 5.
    Doulgeri, Zoe
    et al.
    Department of Electrical and Computer Eng., Aristotle University of Thessaloniki.
    Karayiannidis, Yiannis
    Department of Electrical and Computer Eng., Aristotle University of Thessaloniki.
    Force position control for a robot finger with a soft tip and kinematic uncertainties2007In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 55, no 4, p. 328-336Article in journal (Refereed)
    Abstract [en]

    We consider the problem of force and position regulation for a robot finger with a soft tip in contact with a surface with unknown geometrical characteristics. An adaptive controller is proposed, and the asymptotic convergence of the applied force error and the estimated position error of the tip to zero is shown for the spatial case. Simulation results demonstrate the controller performance.

  • 6. Drimus, Alin
    et al.
    Kootstra, Gert
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bilberg, Arne
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Design of a flexible tactile sensor for classification of rigid and deformable objects2014In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 62, no 1, p. 3-15Article in journal (Refereed)
    Abstract [en]

    For both humans and robots, tactile sensing is important for interaction with the environment: it is the core sensing used for exploration and manipulation of objects. In this paper, we present a novel tactile-array sensor based on flexible piezoresistive rubber. We describe the design of the sensor and the data acquisition system. We evaluate the sensitivity and robustness of the sensor, and show that it is consistent over time with little relaxation. Furthermore, the sensor has the benefit of being flexible, having a high resolution, being easy to mount, and being simple to manufacture. We demonstrate the use of the sensor in an active object-classification system. A robotic gripper with two sensors mounted on its fingers performs a palpation procedure on a set of objects. By squeezing an object, the robot actively explores the material properties, and the system acquires tactile information corresponding to the resulting pressure. Based on a k-nearest-neighbor classifier, using dynamic time warping to calculate the distance between different time series, the system is able to successfully classify objects. Our sensor demonstrates classification performance similar to that of the Weiss Robotics tactile sensor, while having additional benefits.
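
The classification stage described above (k-nearest-neighbor with dynamic time warping) can be sketched in a few lines; the pressure series and labels below are invented toy data:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D pressure series."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def knn_classify(query, training, k=1):
    """Label the query series by its k nearest training series under DTW."""
    ranked = sorted(training, key=lambda s: dtw_distance(query, s[0]))
    labels = [label for _, label in ranked[:k]]
    return max(set(labels), key=labels.count)

# Toy pressure-over-time profiles recorded while squeezing:
training = [
    ([0, 1, 2, 3, 3, 3], "rigid"),       # pressure rises fast, saturates
    ([0, 0, 1, 1, 2, 2], "deformable"),  # pressure rises slowly
]
print(knn_classify([0, 1, 2, 3, 3], training))
```

DTW is what makes this work on palpation data: it absorbs differences in squeezing speed that a pointwise distance would penalize.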

  • 7. Fletcher, L.
    et al.
    Loy, Gareth
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Barnes, N.
    Zelinsky, A.
    Correlating driver gaze with the road scene for driver assistance systems2005In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 52, no 1, p. 71-84Article in journal (Refereed)
    Abstract [en]

    A driver assistance system (DAS) should support the driver by monitoring road and vehicle events and presenting relevant and timely information to the driver. It is impossible to know what a driver is thinking, but we can monitor the driver's gaze direction and compare it with the position of information in the driver's field of view to make inferences. In this way, not only do we monitor the driver's actions, we monitor the driver's observations as well. In this paper we present the automated detection and recognition of road signs, combined with the monitoring of the driver's response. We present a complete system that reads speed signs in real time, compares the sign positions with the driver's gaze, and provides immediate feedback if it appears that the driver has missed a sign.

  • 8.
    Huebner, Kai
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    BADGr-A toolbox for box-based approximation, decomposition and GRasping2012In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 60, no 3, p. 367-376Article in journal (Refereed)
    Abstract [en]

    In this paper, we conclude our work on shape approximation by box primitives for the goal of simple and efficient grasping. As a main product of our research, we present the BADGr toolbox for Box-based Approximation, Decomposition and Grasping of objects. The contributions of the work presented here are twofold: in terms of shape approximation, we provide an algorithm for creating a 3D box primitive representation to identify object parts from 3D point clouds. We motivate and evaluate this choice particularly towards the task of grasping. As a contribution in the field of grasping, we further provide a grasp hypothesis generation framework that utilizes the chosen box representation in a flexible manner.

  • 9.
    Karasalo, Maja
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hu, Xiaoming
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Gustavi, Tove
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Robust Formation Control using Switching Range Sensors2010In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 58, no 8, p. 1003-1016Article in journal (Refereed)
    Abstract [en]

    In this paper, control algorithms are presented for formation keeping and path following for non-holonomic platforms. The controls are based on feedback from onboard directional range sensors, and a switching Kalman filter is introduced for active sensing. Stability is analyzed theoretically and robustness is demonstrated in experiments and simulations.

  • 10.
    Karasalo, Maja
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Piccolo, Giacomo
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hu, Xiaoming
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Contour Reconstruction using Recursive Smoothing Splines - Algorithms and Experimental Validation2009In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 57, no 6-7, p. 617-628Article in journal (Refereed)
    Abstract [en]

    In this paper, a recursive smoothing spline approach for contour reconstruction is studied and evaluated. Periodic smoothing splines are used by a robot to approximate the contour of encountered obstacles in the environment. The splines are generated through minimizing a cost function subject to constraints imposed by a linear control system, and accuracy is improved iteratively using a recursive spline algorithm. The filtering effect of the smoothing splines allows for usage of noisy sensor data, and the method is robust with respect to odometry drift. The algorithm is extensively evaluated in simulations for various contours and in experiments using a SICK laser scanner mounted on a PowerBot from ActivMedia Robotics.

  • 11.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Doulgeri, Zoe
    Aristotle University of Thessaloniki, Greece.
    Model-free robot joint position regulation and tracking with prescribed performance guarantees2012In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 60, no 2, p. 214-226Article in journal (Refereed)
    Abstract [en]

    The problem of robot joint position control with prescribed performance guarantees is considered; the control objective is error evolution within prescribed performance bounds in both the regulation and tracking problems. The proposed controllers do not utilize either the robot dynamic model or any approximation structures and are composed of simple PID or PD controllers enhanced by a proportional term of a transformed error through a transformation-related gain. Under a sufficient condition on the damping gain, the proposed controllers are able to guarantee (i) a predefined minimum speed of convergence, maximum steady-state error and overshoot concerning the position error, and (ii) uniform ultimate boundedness (UUB) of the velocity error. The use of the integral term reduces residual errors, allowing a proof of asymptotic convergence of both velocity and position errors to zero for the regulation problem under constant disturbances. Performance is a priori guaranteed irrespective of the selection of the control gain values. Simulation results for a three-dof spatial robotic manipulator and experimental results for a one-dof manipulator are given to confirm the theoretical findings.
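
The transformed-error idea can be illustrated with a common construction from the prescribed-performance literature: an exponentially decaying envelope rho(t) and a logarithmic error transformation. This is a sketch of the general mechanism (with made-up parameter values), not the paper's exact control law:

```python
import math

def perf_bound(t, rho0=1.0, rho_inf=0.05, decay=2.0):
    """Exponentially decaying performance envelope rho(t): starts at
    rho0, converges to the steady-state bound rho_inf."""
    return (rho0 - rho_inf) * math.exp(-decay * t) + rho_inf

def transformed_error(e, rho):
    """Map the normalized error e/rho in (-1, 1) onto the whole real
    line; the transformed value blows up as the error nears the bound."""
    x = e / rho
    assert -1 < x < 1, "error outside prescribed bound"
    return math.log((1 + x) / (1 - x))

# A proportional term on the transformed error pushes harder the closer
# the raw error gets to the envelope, keeping it inside the bound.
rho = perf_bound(1.0)
print(transformed_error(0.9 * rho, rho))
```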

  • 12.
    Karayiannidis, Yiannis
    et al.
    Department of Electrical and Computer Eng., Aristotle University of Thessaloniki.
    Doulgeri, Zoe
    Department of Electrical and Computer Eng., Aristotle University of Thessaloniki.
    Robot contact tasks in the presence of control target distortions2010In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 58, no 5, p. 596-606Article in journal (Refereed)
    Abstract [en]

    This work refers to the problem of controlling robot motion and force in frictional contacts under environmental errors and particularly orientation errors that distort the desired control targets and control subspaces. The proposed method uses online estimates of the surface normal (tangent) direction to dynamically modify the control target and control space decomposition. It is proved that these estimates converge to the actual value even though the elasticity and friction parameters are unknown. The proposed control solution is demonstrated through simulation examples in three-dimensional robot motion tasks contacting both planar and curved surfaces.

  • 13.
    Kootstra, Gert
    et al.
    Faculty of Mathematics and Natural Sciences, University of Groningen, The Netherlands.
    de Boer, Bart
    Univesity of Amsterdam, The Netherlands.
    Tackling the Premature Convergence Problem in Monte-Carlo Localization2009In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 57, no 11, p. 1107-1118Article in journal (Refereed)
    Abstract [en]

    Monte-Carlo localization uses particle filtering to estimate the position of the robot. The method is known to suffer from the loss of potential positions when there is ambiguity present in the environment. Since many indoor environments are highly symmetric, this problem of premature convergence is problematic for indoor robot navigation. It is, however, rarely studied in particle filters. We introduce a number of so-called niching methods used in genetic algorithms, and implement them on a particle filter for Monte-Carlo localization. The experiments show a significant improvement in the diversity maintaining performance of the particle filter.
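
One family of niching methods, fitness sharing, can be grafted onto a particle filter by discounting weights in crowded regions before resampling. The sketch below, with made-up 1-D particles, illustrates the mechanism; the paper's niching variants differ in detail:

```python
import random

def shared_weights(particles, weights, radius=0.5):
    """Fitness sharing, borrowed from genetic algorithms: divide each
    particle's weight by the number of nearby particles, so crowded
    clusters stop starving isolated pose hypotheses of samples."""
    shared = []
    for i, p in enumerate(particles):
        crowd = sum(1 for q in particles if abs(p - q) < radius)
        shared.append(weights[i] / crowd)
    total = sum(shared)
    return [w / total for w in shared]

def systematic_resample(particles, weights):
    """Standard low-variance (systematic) resampling."""
    n = len(particles)
    cumulative, out, j = weights[0], [], 0
    for i in range(n):
        u = (random.random() + i) / n
        while u > cumulative and j < n - 1:
            j += 1
            cumulative += weights[j]
        out.append(particles[j])
    return out

# A symmetric-corridor ambiguity: three particles in one cluster, one
# lone hypothesis. Sharing equalizes the clusters' total weight.
particles = [1.0, 1.1, 1.2, 9.0]
weights = [0.3, 0.3, 0.3, 0.1]
print(shared_weights(particles, weights))
```

Without sharing, the lone hypothesis at 9.0 carries only 10% of the weight and tends to die out early; with sharing, each of the four particles is resampled with equal probability.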

  • 14. Kostavelis, I.
    et al.
    Nalpantidis, Lazaros
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Gasteratos, A.
    Collision risk assessment for autonomous robots by offline traversability learning2012In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 60, no 11, p. 1367-1376Article in journal (Refereed)
    Abstract [en]

    Autonomous robots should be able to move freely in unknown environments and avoid impacts with obstacles. The overall traversability estimation of the terrain and the subsequent selection of an obstacle-free route are prerequisites of a successful autonomous operation. This work proposes a computationally efficient technique for the traversability estimation of the terrain, based on a machine learning classification method. Additionally, a new method for collision risk assessment is introduced. The proposed system uses stereo vision as a first step in order to obtain information about the depth of the scene. Then, a v-disparity image calculation processing step extracts information-rich features about the characteristics of the scene, which are used to train a support vector machine (SVM) separating the traversable and non-traversable scenes. The ones classified as traversable are further processed exploiting the polar transformation of the depth map. The result is a distribution of obstacle existence likelihoods for each direction, parametrized by the robot's embodiment.
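
The v-disparity computation itself is simple: for each image row, histogram the disparity values in that row. A minimal sketch on a made-up 4x4 disparity map:

```python
def v_disparity(disparity_map, max_d):
    """Build the v-disparity image: one histogram of disparity values
    per image row. A flat ground plane maps to a slanted line, an
    upright obstacle to a vertical segment, which is the structure the
    downstream classifier features summarize."""
    image = []
    for row in disparity_map:
        hist = [0] * (max_d + 1)
        for d in row:
            hist[d] += 1
        image.append(hist)
    return image

# Toy 4x4 disparity map: far ground (d=1,2) with an obstacle column (d=3).
dmap = [
    [1, 1, 1, 3],
    [1, 1, 1, 3],
    [2, 2, 2, 3],
    [2, 2, 2, 3],
]
print(v_disparity(dmap, 3))
```

In the output, the constant column of counts at d=3 is the vertical signature of the obstacle.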

  • 15.
    Kragic, Danica
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Eklundh, Jan-Olof
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Vision for robotic object manipulation in domestic settings2005In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 52, no 1, p. 85-100Article in journal (Refereed)
    Abstract [en]

    In this paper, we present a vision system for robotic object manipulation tasks in natural, domestic environments. Given complex fetch-and-carry robot tasks, the issues related to the whole detect-approach-grasp loop are considered. Our vision system integrates a number of algorithms using monocular and binocular cues to achieve robustness in realistic settings. The cues are considered and used in connection to both foveal and peripheral vision to provide depth information, segmentation of the object(s) of interest, object recognition, tracking and pose estimation. One important property of the system is that the step from object recognition to pose estimation is completely automatic combining both appearance and geometric models. Experimental evaluation is performed in a realistic indoor environment with occlusions, clutter, changing lighting and background conditions.

  • 16.
    Kragic, Danica
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Advances in robot vision2005In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 52, no 1, p. 1-3Article in journal (Other academic)
  • 17.
    Kragic, Danica
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Petersson, L.
    Christensen, H. I.
    Visually guided manipulation tasks2002In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 40, no 2-3, p. 193-203Article in journal (Refereed)
    Abstract [en]

    In this paper, we present a framework for a robotic system with the ability to perform real-world manipulation tasks. The complexity of such tasks determines the precision and the number of freedoms controlled, which in turn affects the robustness and flexibility of the system. The emphasis is on the development of the vision system, and of visual tracking techniques in particular. Since precise tracking and control of the full pose of the object to be manipulated is usually less robust and computationally expensive, we integrate the vision and control systems, where the objective is to provide the discrete state information required to switch between control modes of different complexity. For this purpose, an integration of simple visual algorithms is used to provide a robust input to the control loop. Consensus theory is investigated as the integration strategy. In addition, a general-purpose framework for integration of processes is used to implement the system on a real robot. The proposed approach results in a system which can robustly locate and grasp a door handle and then open the door.

  • 18. Kyrki, V.
    et al.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    Measurement errors in visual servoing2006In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 54, no 10, p. 815-827Article in journal (Refereed)
    Abstract [en]

    This paper addresses the issue of measurement errors in visual servoing. The error characteristics of the vision based state estimation and the associated uncertainty of the control are investigated. The major contribution is the analysis of the propagation of image error through pose estimation and visual servoing control law. Using the analysis, two classical visual servoing methods are evaluated: position-based and 2.5D visual servoing. The evaluation offers a tool to build and analyze hybrid control systems such as switching or partitioning control.
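
The kind of error-propagation analysis described above typically rests on first-order covariance propagation through the estimator's Jacobian. A minimal numeric sketch (the Jacobian and pixel noise values below are made up, not from the paper):

```python
def mat_mul(A, B):
    """Plain nested-list matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def propagate_covariance(J, cov_in):
    """First-order propagation of measurement covariance through a
    locally linearized estimator: cov_out = J cov_in J^T."""
    return mat_mul(mat_mul(J, cov_in), transpose(J))

# Illustrative 2x2 Jacobian of a pose estimate w.r.t. image features,
# with isotropic pixel noise of variance 0.25.
J = [[2.0, 0.0],
     [1.0, 1.0]]
pixel_cov = [[0.25, 0.0],
             [0.0, 0.25]]
print(propagate_covariance(J, pixel_cov))
```

The off-diagonal terms in the result show how independent image errors become correlated pose errors, which is what such an analysis of a servoing control law exposes.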

  • 19. Li, Miao
    et al.
    Hang, Kaiyu
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Billard, Aude
    Dexterous grasping under shape uncertainty2016In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 75, p. 352-364Article in journal (Refereed)
    Abstract [en]

    An important challenge in robotics is to achieve robust performance in object grasping and manipulation, dealing with noise and uncertainty. This paper presents an approach for addressing the performance of dexterous grasping under shape uncertainty. In our approach, the uncertainty in object shape is parametrized and incorporated as a constraint into grasp planning. The proposed approach is used to plan feasible hand configurations for realizing planned contacts using different robotic hands. A compliant finger closing scheme is devised by exploiting both the object shape uncertainty and tactile sensing at fingertips. Experimental evaluation demonstrates that our method improves the performance of dexterous grasping under shape uncertainty.

  • 20. Lopez-Nicolas, G.
    et al.
    Sagues, C.
    Guerrero, J. J.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Switching visual control based on epipoles for mobile robots2008In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 56, no 7, p. 592-603Article in journal (Refereed)
    Abstract [en]

    In this paper, we present a visual control approach consisting of a switching control scheme based on the epipolar geometry. The method facilitates a classical teach-by-showing approach, where a reference image is used to control the robot to the desired pose (position and orientation). As a result of our proposal, a mobile robot carries out a smooth trajectory towards the target, and the epipolar geometry model is used throughout the whole motion. The control scheme developed considers the motion constraints of the mobile platform in a framework based on the epipolar geometry that does not rely on artificial markers or specific models of the environment. The proposed method is designed to cope with the degenerate estimation case of the epipolar geometry with short baseline. Experimental evaluation has been performed in realistic indoor and outdoor settings.

  • 21.
    Mateus, Andre
    et al.
    Inst Super Tecn, Inst Sistemas & Robot LARSyS, Lisbon, Portugal..
    Ribeiro, David
    Inst Super Tecn, Inst Sistemas & Robot LARSyS, Lisbon, Portugal..
    Dos Santos Miraldo, Pedro
    KTH, School of Electrical Engineering and Computer Science (EECS), Automatic Control.
    Nascimento, Jacinto C.
    Inst Super Tecn, Inst Sistemas & Robot LARSyS, Lisbon, Portugal..
    Efficient and robust Pedestrian Detection using Deep Learning for Human-Aware Navigation2019In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 113, p. 23-37Article in journal (Refereed)
    Abstract [en]

    This paper addresses the problem of Human-Aware Navigation (HAN), using multi-camera sensors to implement a vision-based person tracking system. The main contributions of this paper are as follows: a novel and efficient Deep Learning person detection method and a standardization of human-aware constraints. In the first stage of the approach, we propose to cascade the Aggregate Channel Features (ACF) detector with a deep Convolutional Neural Network (CNN) to achieve fast and accurate Pedestrian Detection (PD). Regarding the human awareness (which can be defined as constraints associated with the robot's motion), we use a mixture of asymmetric Gaussian functions to define the cost functions associated with each constraint. Both methods proposed herein are evaluated individually to measure the impact of each of the components. The final solution (including both the proposed pedestrian detection and the human-aware constraints) is tested in a typical domestic indoor scenario, in four distinct experiments. The results show that the robot is able to cope with human-aware constraints derived from common proxemics and social rules.

  • 22. Mozos, O.M.
    et al.
    Triebel, R.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Rottmann, A.
    Burgard, W.
    Supervised semantic labeling of places using information extracted from sensor data2007In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 55, no 5, p. 391-402Article in journal (Refereed)
    Abstract [en]

    Indoor environments can typically be divided into places with different functionalities like corridors, rooms or doorways. The ability to learn such semantic categories from sensor data enables a mobile robot to extend the representation of the environment, facilitating the interaction with humans. As an example, natural language terms like "corridor" or "room" can be used to communicate the position of the robot in a map in a more intuitive way. In this work, we first propose an approach based on supervised learning to classify the pose of a mobile robot into semantic classes. Our method uses AdaBoost to boost simple features extracted from sensor range data into a strong classifier. We present two main applications of this approach. Firstly, we show how our approach can be utilized by a moving robot for an online classification of the poses traversed along its path using a hidden Markov model. In this case we additionally use objects extracted from images as features. Secondly, we introduce an approach to learn topological maps from geometric maps by applying our semantic classification procedure in combination with a probabilistic relaxation method. Alternatively, we apply associative Markov networks to classify geometric maps and compare the results with the relaxation approach. Experimental results obtained in simulation and with real robots demonstrate the effectiveness of our approach in various indoor environments.
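
AdaBoost over simple scalar features can be sketched with decision stumps; the "range features", thresholds and labels below are toy values, not the paper's feature set:

```python
import math

def stump_predict(x, feature, threshold, sign):
    """Weak learner: threshold test on one feature, output +/-1."""
    return sign if x[feature] > threshold else -sign

def train_adaboost(data, labels, features, thresholds, rounds=5):
    """AdaBoost with decision stumps over scalar features (e.g. mean
    range of a scan); exhaustively picks the lowest-weighted-error stump
    each round and reweights the training examples."""
    n = len(data)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        best = None
        for f in features:
            for t in thresholds:
                for sign in (1, -1):
                    err = sum(w[i] for i in range(n)
                              if stump_predict(data[i], f, t, sign) != labels[i])
                    if best is None or err < best[0]:
                        best = (err, f, t, sign)
        err, f, t, sign = best
        err = max(err, 1e-9)  # avoid log(0) on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, f, t, sign))
        w = [wi * math.exp(-alpha * labels[i] * stump_predict(data[i], f, t, sign))
             for i, wi in enumerate(w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(x, f, t, s) for a, f, t, s in ensemble)
    return 1 if score > 0 else -1

# Feature 0: mean range of the scan (corridors are long and narrow,
# so their mean range tends to be large). Labels: +1 corridor, -1 room.
data = [[5.0], [6.0], [1.5], [2.0]]
labels = [1, 1, -1, -1]
clf = train_adaboost(data, labels, features=[0], thresholds=[3.0])
print(predict(clf, [5.5]))
```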

  • 23.
    Nalpantidis, Lazaros
    et al.
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
    Gasteratos, Antonios
    Production and Management Engineering Dept., Democritus University of Thrace, Greece.
    Biologically and psychophysically inspired adaptive support weights algorithm for stereo correspondence2010In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 58, no 5, p. 457-464Article in journal (Refereed)
    Abstract [en]

    In this paper a novel stereo correspondence algorithm is presented. It incorporates many biologically and psychophysically inspired features into an adaptive weighted sum of absolute differences (SAD) framework in order to determine the correct depth of a scene. In addition to ideas already exploited, such as the use of colour information and the gestalt laws of proximity and similarity, new ones have been adopted. The presented algorithm introduces the use of circular support regions, the gestalt law of continuity, as well as the psychophysically based logarithmic response law. All the aforementioned perceptual tools act complementarily inside a straightforward computational algorithm applicable to robotic applications. The results of the algorithm have been evaluated and compared against those of similar algorithms.
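The adaptive-support-weight idea behind such a framework can be sketched as follows. This is a generic illustration of the technique, not the paper's algorithm; the constants `gamma_c` and `gamma_p` are illustrative. A support pixel contributes more to the matching cost when it resembles the window centre in colour (gestalt law of similarity) and lies close to it (law of proximity):

```python
import math

def log_response(intensity):
    """Psychophysically inspired logarithmic (Weber-Fechner-like)
    compression of raw intensity values."""
    return math.log(1.0 + intensity)

def support_weight(center_rgb, pixel_rgb, dist, gamma_c=10.0, gamma_p=7.0):
    """Adaptive weight of a support pixel: decays with colour distance
    from the window centre and with spatial distance `dist`."""
    dc = math.sqrt(sum((a - b) ** 2 for a, b in zip(center_rgb, pixel_rgb)))
    return math.exp(-dc / gamma_c - dist / gamma_p)

def weighted_sad(left_win, right_win, weights):
    """Weighted sum of absolute differences over a support window;
    windows are flat lists of scalar (e.g. log-compressed) intensities."""
    num = sum(w * abs(l - r) for w, l, r in zip(weights, left_win, right_win))
    return num / sum(weights)
```

For each candidate disparity, the disparity minimizing `weighted_sad` over the support window is selected; the adaptive weights keep pixels from other surfaces from corrupting the cost near depth discontinuities.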

  • 24. Popovic, Mila
    et al.
    Kraft, Dirk
    Bodenhagen, Leon
    Baseski, Emre
    Pugeault, Nicolas
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Asfour, Tamim
    Kruger, Norbert
    A strategy for grasping unknown objects based on co-planarity and colour information2010In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 58, no 5, p. 551-565Article in journal (Refereed)
    Abstract [en]

    In this work, we describe and evaluate a grasping mechanism that does not make use of any specific prior object knowledge. The mechanism makes use of second-order relations between visually extracted multi-modal 3D features provided by an early cognitive vision system. More specifically, the algorithm is based on two relations covering geometric information in terms of a co-planarity constraint, as well as appearance-based information in terms of the co-occurrence of colour properties. We show that our algorithm, although making use of such rather simple constraints, is able to grasp objects with a reasonable success rate in rather complex environments (i.e., cluttered scenes with multiple objects). Moreover, we have embedded the algorithm within a cognitive system that allows for autonomous exploration and learning in different contexts. First, the system is able to perform long action sequences which, although the grasping attempts are not always successful, can recover from mistakes; more importantly, it is able to evaluate the success of the grasps autonomously by haptic feedback (i.e., by a force-torque sensor at the wrist and proprioceptive information about the distance of the gripper after a grasping attempt). Such labelled data is then used to improve the initially hard-wired algorithm by learning. Moreover, the grasping behaviour has been used in a cognitive system to trigger higher-level processes such as object learning and learning of object-specific grasping.

  • 25.
    Pronobis, Andrzej
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Caputo, B
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Christensen, H. I.
    A realistic benchmark for visual indoor place recognition2010In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 58, no 1, p. 81-96Article in journal (Refereed)
    Abstract [en]

    An important competence for a mobile robot system is the ability to localize and perform context interpretation. This is required to perform basic navigation and to facilitate specific local services. Recent advances in vision have made this modality a viable alternative to traditional range sensors, and visual place recognition algorithms have emerged as a useful and widely applied tool for obtaining information about a robot's position. Several place recognition methods have been proposed using vision alone or combined with sonar and/or laser. This research calls for standard benchmark datasets for the development, evaluation and comparison of solutions. To this end, this paper presents two carefully designed and annotated image databases augmented with an experimental procedure and an extensive baseline evaluation. The databases were gathered in an uncontrolled indoor office environment using two mobile robots and a standard camera. The acquisition spanned several months and different illumination and weather conditions. Thus, the databases are very well suited for evaluating the robustness of algorithms with respect to a broad range of variations that often occur in real-world settings. We thoroughly assessed the databases with a purely appearance-based place recognition method based on support vector machines and two types of rich visual features (global and local).

  • 26.
    Severinson Eklundh, Kerstin
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Green, A.
    Hüttenrauch, Helge
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Social and collaborative aspects of interaction with a service robot2003In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 42, no 3-4, p. 223-234Article in journal (Refereed)
    Abstract [en]

    To an increasing extent, robots are being designed to become a part of the lives of ordinary people. This calls for new models of the interaction between humans and robots, taking advantage of human social and communicative skills. Furthermore, human-robot relationships must be understood in the context of use of robots, and based on empirical studies of humans and robots in real settings. This paper discusses social aspects of interaction with a service robot, departing from our experiences of designing a fetch-and-carry robot for motion-impaired users in an office environment. We present the motivations behind the design of the Cero robot, especially its communication paradigm. Finally, we discuss experiences from a recent usage study, and research issues emerging from this work. A conclusion is that addressing only the primary user in service robotics is unsatisfactory, and that the focus should be on the setting, activities and social interactions of the group of people where the robot is to be used.

  • 27.
    Sjöö, Kristoffer
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Topological spatial relations for active visual search2012In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 60, no 9, p. 1093-1107Article in journal (Refereed)
    Abstract [en]

    If robots are to assume their long anticipated place by humanity's side and be of help to us in our partially structured environments, we believe that adopting human-like cognitive patterns will be valuable. Such environments are the products of human preferences, activity and thought; they are imbued with semantic meaning. In this paper we investigate qualitative spatial relations with the aim of both perceiving those semantics, and of using semantics to perceive. More specifically, in this paper we introduce general perceptual measures for two common topological spatial relations, "on" and "in", that allow a robot to evaluate object configurations, possible or actual, in terms of those relations. We also show how these spatial relations can be used as a way of guiding visual object search. We do this by providing a principled approach for indirect search in which the robot can make use of known or assumed spatial relations between objects, significantly increasing the efficiency of search by first looking for an intermediate object that is easier to find. We explain our design, implementation and experimental setup and provide extensive experimental results to back up our thesis.
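The efficiency argument for indirect search can be made concrete with a toy expected-cost comparison. This is an illustrative model, not the paper's formulation: `p_on` is an assumed probability that the known spatial relation (e.g. the cup is "on" the table) actually holds, and the fallback on failure is a full direct search:

```python
def expected_indirect_cost(t_intermediate, t_local, p_on, t_fallback):
    """Expected cost of indirect search: find the easy-to-spot
    intermediate object, search locally around it, and fall back to a
    direct search when the assumed relation does not hold."""
    return t_intermediate + t_local + (1.0 - p_on) * t_fallback

def should_search_indirectly(t_direct, t_intermediate, t_local, p_on):
    """Prefer indirect search when its expected cost beats going
    directly for the (hard to find) target object."""
    return expected_indirect_cost(t_intermediate, t_local, p_on, t_direct) < t_direct
```

With a large, easily detected intermediate object (small `t_intermediate`) and a reliable relation (high `p_on`), indirect search wins by a wide margin; with an unreliable relation it degenerates to direct search plus wasted effort.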

  • 28.
    Smith, Christian
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Karayiannidis, Ioannis
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Nalpantidis, Lazaros
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Gratal, Javier
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Qi, Peng
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Dimarogonas, Dimos
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Dual arm manipulation-A survey2012In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 60, no 10, p. 1340-1353Article, review/survey (Refereed)
    Abstract [en]

    Recent advances in both anthropomorphic robots and bimanual industrial manipulators have led to an increased interest in the specific problems pertaining to dual arm manipulation. For the future, we foresee robots performing human-like tasks in both domestic and industrial settings. It is therefore natural to study the specifics of dual arm manipulation in humans and methods for using the resulting knowledge in robot control. The related scientific problems range from low-level control to high-level task planning and execution. This review aims to summarize the current state of the art across the heterogeneous range of fields that study the different aspects of these problems, specifically in dual arm manipulation.

  • 29.
    Tardioli, Danilo
    et al.
    University of Zaragoza.
    Parasuraman, Ramviyas
    University of Georgia.
    Ögren, Petter
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Pound: A multi-master ROS node for reducing delay and jitter in wireless multi-robot networks2019In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 111, p. 73-87Article in journal (Refereed)
    Abstract [en]

    The Robot Operating System (ROS) is a popular and widely used software framework for building robotics systems. With the growth of its popularity, it has started to be used in multi-robot systems as well. However, the TCP connections that the platform relies on for connecting the so-called ROS nodes present several issues regarding limited bandwidth, delays, and jitter when used in wireless multi-hop networks. In this paper, we present a thorough analysis of the problem and propose a new ROS node called Pound to improve wireless communication performance by reducing delay and jitter in data exchanges, especially in multi-hop networks. Pound allows the use of multiple ROS masters (roscores), features data compression, and, importantly, introduces a priority scheme that allows favoring more important flows over less important ones. We compare Pound to state-of-the-art solutions through extensive experiments and show that it performs equally well, or better, in all test cases, including a control-over-network example.
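The flow-priority idea can be sketched with a toy scheduler. This is illustrative only; Pound's actual mechanism, configuration, and API differ. The point is simply that when the wireless link is the bottleneck, a small control message (e.g. a velocity command) should never queue behind bulky sensor data:

```python
import heapq

class FlowScheduler:
    """Toy priority scheduler for outgoing per-topic message flows:
    higher-priority flows are transmitted first; within one priority
    level, messages keep FIFO order via a monotonic sequence number."""

    def __init__(self):
        self._queue = []
        self._seq = 0  # FIFO tie-break within a priority level

    def publish(self, topic, payload, priority):
        # heapq is a min-heap, so negate priority to pop highest first.
        heapq.heappush(self._queue, (-priority, self._seq, topic, payload))
        self._seq += 1

    def next_message(self):
        """Pop the next message to put on the wireless link, or None."""
        if not self._queue:
            return None
        _, _, topic, payload = heapq.heappop(self._queue)
        return topic, payload
```

A usage sketch: a low-rate `/cmd_vel` flow published with a high priority preempts a queued camera frame even though the frame arrived first.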

  • 30. Zender, H.
    et al.
    Mozos, O. Martinez
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kruijff, G. J. M.
    Burgard, W.
    Conceptual spatial representations for indoor mobile robots2008In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 56, no 6, p. 493-502Article in journal (Refereed)
    Abstract [en]

    We present an approach for creating conceptual representations of human-made indoor environments using mobile robots. The concepts refer to spatial and functional properties of typical indoor environments. Following different findings in spatial cognition, our model is composed of layers representing maps at different levels of abstraction. The complete system is integrated in a mobile robot endowed with laser and vision sensors for place and object recognition. The system also incorporates a linguistic framework that actively supports the map acquisition process, and which is used for situated dialogue. Finally, we discuss the capabilities of the integrated system.
