1 - 11 of 11
  • 1.
    Ambrus, Rares
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Claici, Sebastian
    Wendt, Axel
    Automatic Room Segmentation From Unstructured 3-D Data of Indoor Environments (2017). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 2, no. 2, p. 749-756. Article in journal (Refereed)
    Abstract [en]

    We present an automatic approach for the task of reconstructing a 2-D floor plan from unstructured point clouds of building interiors. Our approach emphasizes accurate and robust detection of building structural elements and, unlike previous approaches, does not require prior knowledge of scanning device poses. The reconstruction task is formulated as a multiclass labeling problem that we approach using energy minimization. We use intuitive priors to define the costs for the energy minimization problem and rely on accurate wall and opening detection algorithms to ensure robustness. We provide detailed experimental evaluation results, both qualitative and quantitative, against state-of-the-art methods and labeled ground-truth data.
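
    As a rough illustration of the energy-minimization formulation described above, the sketch below labels a small grid with made-up unary costs and a Potts smoothness prior, minimized with simple iterated conditional modes (ICM) rather than the authors' solver; grid size, costs, and the smoothness weight are all hypothetical.

```python
# Minimal sketch of multiclass labeling by energy minimization. The paper
# uses intuitive priors and accurate wall/opening detection; here the unary
# costs are random stand-ins and ICM is used purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
H, W, K = 8, 8, 3              # grid of cells, K candidate labels
unary = rng.random((H, W, K))  # hypothetical data costs per cell/label
LAMBDA = 0.5                   # smoothness weight (Potts prior)

def energy(labels):
    e = unary[np.arange(H)[:, None], np.arange(W)[None, :], labels].sum()
    e += LAMBDA * (labels[1:, :] != labels[:-1, :]).sum()   # vertical pairs
    e += LAMBDA * (labels[:, 1:] != labels[:, :-1]).sum()   # horizontal pairs
    return e

labels = unary.argmin(axis=2)  # init: best unary label per cell
for _ in range(10):            # ICM sweeps: greedy per-cell updates
    for i in range(H):
        for j in range(W):
            costs = unary[i, j].copy()
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W:
                    costs += LAMBDA * (np.arange(K) != labels[ni, nj])
            labels[i, j] = costs.argmin()

print("final energy:", energy(labels))
```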

  • 2.
    Barbosa, Fernando S.
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Duberg, Daniel
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Jensfelt, Patric
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Tumova, Jana
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Guiding Autonomous Exploration with Signal Temporal Logic (2019). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no. 4, p. 3332-3339. Article in journal (Refereed)
    Abstract [en]

    Algorithms for autonomous robotic exploration usually focus on optimizing time and coverage, often in a greedy fashion. However, obstacle inflation is conservative and might limit mapping capabilities and even prevent the robot from moving through narrow, important places. This letter proposes a method to influence the manner in which the robot moves in the environment by taking into consideration a user-defined spatial preference formulated in a fragment of signal temporal logic (STL). We propose to guide the motion planning toward minimizing the violation of such a preference through a cost function that integrates the quantitative semantics, i.e., the robustness, of STL. To demonstrate the effectiveness of the proposed approach, we integrate it into the autonomous exploration planner (AEP). Results from simulations and real-world experiments are presented, highlighting the benefits of our approach.
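
    A minimal sketch of the quantitative semantics the abstract refers to: the robustness of a simple STL fragment over a discrete trajectory, and a cost term that penalizes only its violation. The predicate, region, and trajectory below are invented; the paper's STL fragment and planner integration are richer than this.

```python
# Robustness (quantitative semantics) of two basic STL operators over a
# discrete trajectory, plus a violation-only cost of the kind a planner
# could minimize. All numbers here are made up for illustration.
import math

def dist_to_region(p, center=(2.0, 0.0)):
    return math.hypot(p[0] - center[0], p[1] - center[1])

def robustness_always_near(traj, d=1.5):
    # rho( G (dist <= d) ) = min over time of (d - dist)
    return min(d - dist_to_region(p) for p in traj)

def robustness_eventually_near(traj, d=1.5):
    # rho( F (dist <= d) ) = max over time of (d - dist)
    return max(d - dist_to_region(p) for p in traj)

traj = [(0.0, 0.0), (1.0, 0.2), (2.0, 0.1), (3.0, 0.0)]
rho = robustness_always_near(traj)
cost = max(0.0, -rho)   # penalize only violation, as a planner cost term
print(f"robustness={rho:.2f}, violation cost={cost:.2f}")
```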

  • 3.
    Faeulhammer, Thomas
    et al.
    Ambrus, Rares
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Burbridge, Christopher
    Zillich, Michael
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hawes, Nick
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Vincze, Markus
    Autonomous Learning of Object Models on a Mobile Robot (2017). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 2, no. 1, p. 26-33, article id 7393491. Article in journal (Refereed)
    Abstract [en]

    In this article, we present and evaluate a system which allows a mobile robot to autonomously detect, model, and re-recognize objects in everyday environments. Whilst other systems have demonstrated one of these elements, to our knowledge we present the first system that is capable of doing all of these things, all without human interaction, in normal indoor scenes. Our system detects objects to learn by modelling the static part of the environment and extracting dynamic elements. It then creates and executes a view plan around a dynamic element to gather additional views for learning. Finally, these views are fused to create an object model. The performance of the system is evaluated on publicly available datasets as well as on data collected by the robot in both controlled and uncontrolled scenarios.

  • 4.
    Hang, Kaiyu
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Stork, Johannes A.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Pollard, Nancy S.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    A Framework for Optimal Grasp Contact Planning (2017). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 2, no. 2, p. 704-711. Article in journal (Refereed)
    Abstract [en]

    We consider the problem of finding grasp contacts that are optimal under a given grasp quality function on arbitrary objects. Our approach formulates a framework for contact-level grasping as a path-finding problem in the space of supercontact grasps. The initial supercontact grasp contains all grasps, and in each step along a path grasps are removed. For this, we introduce and formally characterize search space structure and cost functions under which minimal cost paths correspond to optimal grasps. Our formulation avoids expensive exhaustive search and reduces computational cost by several orders of magnitude. We present admissible heuristic functions and exploit approximate heuristic search to further reduce the computational cost while maintaining bounded suboptimality for the resulting grasps. We exemplify our formulation with point-contact grasping, for which we define domain-specific heuristics and demonstrate optimality and bounded suboptimality by comparing against exhaustive and uniform cost search on example objects. Furthermore, we explain how to restrict the search graph to satisfy grasp constraints for modeling hand kinematics. We also analyze our algorithm empirically in terms of created and visited search states and the resultant effective branching factor.
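
    The search-based formulation above boils down to minimal-cost path finding with an admissible heuristic. The sketch below shows plain A* on a tiny hypothetical graph; in the paper the nodes would be supercontact grasps and the edges would remove contacts, with costs derived from the grasp quality function.

```python
# Generic A* with an admissible (here also consistent) heuristic on a toy
# graph. Nodes, edge costs, and heuristic values are all hypothetical.
import heapq

graph = {  # node -> list of (neighbor, edge cost)
    "S": [("A", 1.0), ("B", 4.0)],
    "A": [("G", 5.0)],
    "B": [("G", 1.0)],
    "G": [],
}
h = {"S": 2.0, "A": 4.0, "B": 1.0, "G": 0.0}  # admissible lower bounds

def astar(start, goal):
    frontier = [(h[start], 0.0, start, [start])]  # (f, g, node, path)
    best = {}                                     # cheapest g seen per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if best.get(node, float("inf")) <= g:
            continue
        best[node] = g
        for nbr, w in graph[node]:
            heapq.heappush(frontier, (g + w + h[nbr], g + w, nbr, path + [nbr]))
    return None

print(astar("S", "G"))  # -> (5.0, ['S', 'B', 'G'])
```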

  • 5.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. Department of Signals and Systems, Chalmers University of Technology, Gothenburg, Sweden.
    Papageorgiou, D.
    Doulgeri, Z.
    A Model-Free Controller for Guaranteed Prescribed Performance Tracking of Both Robot Joint Positions and Velocities (2016). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 1, no. 1, p. 267-273, article id 7377028. Article in journal (Refereed)
    Abstract [en]

    The problem of robot joint position and velocity tracking with prescribed performance guarantees is considered. The proposed controller is able to guarantee a prescribed transient and steady state behavior for the position and the velocity tracking errors without utilizing either the robot dynamic model or any approximation structures. Its performance is demonstrated and assessed via experiments with a KUKA LWR4+ arm. 
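
    A small numeric sketch of the prescribed-performance idea, assuming the common exponential performance funnel and logarithmic error transformation; the parameter values are arbitrary and not taken from the paper.

```python
# The tracking error must evolve inside a shrinking funnel
#   rho(t) = (rho0 - rho_inf) * exp(-l * t) + rho_inf,
# and the controller acts on a transformed error that blows up near the
# funnel boundary, which is what enforces the prescribed bounds.
import math

rho0, rho_inf, l = 1.0, 0.05, 2.0   # initial width, steady-state width, decay

def rho(t):
    return (rho0 - rho_inf) * math.exp(-l * t) + rho_inf

def transformed_error(e, t):
    x = e / rho(t)                  # normalized error, must stay in (-1, 1)
    assert abs(x) < 1.0, "error left the prescribed funnel"
    return math.log((1 + x) / (1 - x))  # grows unbounded near the boundary

for t, e in [(0.0, 0.5), (0.5, 0.2), (2.0, 0.04)]:
    print(f"t={t:.1f}  funnel=±{rho(t):.3f}  eps={transformed_error(e, t):+.3f}")
```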

  • 6.
    Mohanty, Sumit
    et al.
    KTH, School of Information and Communication Technology (ICT).
    Hong, Ayoung
    Alcantara, Carlos
    Petruska, Andrew J.
    Nelson, Bradley J.
    Stereo Holographic Diffraction Based Tracking of Microrobots (2018). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 3, no. 1, p. 567-572. Article in journal (Refereed)
    Abstract [en]

    Three-dimensional (3-D) tracking of microrobots is demonstrated using stereo holographic projections. The method detects the lateral position of a microrobot in two orthogonal in-line holography images and triangulates to obtain the 3-D position in an observable volume of 1 cm³. The algorithm is capable of processing holograms at 25 Hz on a desktop computer and has an accuracy of 24.7 μm and 15.2 μm in the two independent directions and 7.3 μm in the shared direction of the two imaging planes. This is the first use of stereo holograms to track an object in real time, and the method does not rely on the computationally expensive process of holographic reconstruction.
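
    The triangulation step can be sketched as follows, assuming view A images the x-z plane and view B the y-z plane with z as the shared direction; the detections and units below are invented.

```python
# Combining two orthogonal 2-D lateral detections into one 3-D position.
# Each view contributes one independent axis; the shared axis is averaged.
import numpy as np

det_a = np.array([1.20, 3.05])   # (x, z) from hologram A, in mm (made up)
det_b = np.array([-0.40, 2.95])  # (y, z) from hologram B, in mm (made up)

x = det_a[0]
y = det_b[0]
z = 0.5 * (det_a[1] + det_b[1])  # average the shared direction

print(f"3-D position: ({x:.2f}, {y:.2f}, {z:.2f}) mm")
```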

  • 7.
    Palmieri, Luigi
    et al.
    Robert Bosch GmbH, Corp Res, D-70049 Stuttgart, Germany.
    Bruns, Leonard
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. RWTH Aachen University, Germany.
    Meurer, Michael
    Rhein Westfal TH Aachen, German Aerosp Ctr DLR, D-82234 Wessling, Germany.
    Arras, Kai O.
    Robert Bosch GmbH, Corp Res, D-70049 Stuttgart, Germany.
    Dispertio: Optimal Sampling For Safe Deterministic Motion Planning (2020). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 5, no. 2, p. 362-368. Article in journal (Refereed)
    Abstract [en]

    A key challenge in robotics is the efficient generation of optimal robot motion with safety guarantees in cluttered environments. Recently, deterministic optimal sampling-based motion planners have been shown to achieve good performance towards this end, in particular in terms of planning efficiency, final solution cost, quality guarantees as well as non-probabilistic completeness. Yet their application is still limited to relatively simple systems (i.e., linear, holonomic, Euclidean state spaces). In this work, we extend this technique to the class of symmetric and optimal driftless systems by presenting Dispertio, an offline dispersion optimization technique for computing sampling sets, aware of differential constraints, for sampling-based robot motion planning. We prove that the approach, when combined with PRM*, is deterministically complete and retains asymptotic optimality. Furthermore, in our experiments we show that the proposed deterministic sampling technique outperforms several baselines and alternative methods in terms of planning efficiency and solution cost.
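
    To make "dispersion" concrete, the sketch below estimates the dispersion of a sample set (the worst-case distance from any state to its nearest sample) and grows the set greedily by farthest-point selection. It uses a plain Euclidean metric on the unit square; the paper's contribution is optimizing dispersion under differential constraints, which this toy deliberately ignores.

```python
# Dispersion estimate and greedy farthest-point sampling on the unit
# square. Candidate and probe counts are arbitrary illustration values.
import numpy as np

rng = np.random.default_rng(1)
candidates = rng.random((500, 2))   # dense candidate states
probes = rng.random((2000, 2))      # states used to estimate dispersion

def dispersion(samples):
    d = np.linalg.norm(probes[:, None, :] - samples[None, :, :], axis=2)
    return d.min(axis=1).max()      # farthest probe from its nearest sample

samples = [candidates[0]]
for _ in range(15):                 # greedily add the farthest candidate
    d = np.linalg.norm(candidates[:, None, :] - np.array(samples)[None, :, :], axis=2)
    samples.append(candidates[d.min(axis=1).argmax()])

print(f"dispersion of {len(samples)} samples: {dispersion(np.array(samples)):.3f}")
```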

  • 8.
    Selin, Magnus
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS. Linkoping Univ, Dept Comp & Informat Sci, S-58183 Linkoping, Sweden.
    Tiger, Mattias
    Linkoping Univ, Dept Comp & Informat Sci, S-58183 Linkoping, Sweden.
    Duberg, Daniel
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    Heintz, Fredrik
    Linkoping Univ, Dept Comp & Informat Sci, S-58183 Linkoping, Sweden.
    Jensfelt, Patric
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    Efficient Autonomous Exploration Planning of Large-Scale 3-D Environments (2019). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no. 2, p. 1699-1706. Article in journal (Refereed)
    Abstract [en]

    Exploration is an important aspect of robotics, whether it is for mapping, rescue missions, or path planning in an unknown environment. Frontier Exploration planning (FEP) and Receding Horizon Next-Best-View planning (RH-NBVP) are two different approaches with different strengths and weaknesses. FEP explores a large environment consisting of separate regions with ease, but is slow at reaching full exploration due to moving back and forth between regions. RH-NBVP shows great potential and efficiently explores individual regions, but has the disadvantage that it can get stuck in large environments not exploring all regions. In this letter, we present a method that combines both approaches, with FEP as a global exploration planner and RH-NBVP for local exploration. We also present techniques to estimate potential information gain faster, to cache previously estimated gains and to exploit these to efficiently estimate new queries.
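
    One of the listed techniques, caching previously estimated gains, can be sketched as a memoized gain lookup keyed by a discretized view pose, with invalidation when the map changes nearby. The gain function, cell size, and invalidation rule below are stand-ins, not the paper's.

```python
# Memoized information-gain estimation for candidate views. The real gain
# computation would ray-cast over unknown space; here it is a cheap stand-in.
import math

gain_cache = {}   # discretized view pose -> cached gain estimate

def estimate_gain(pose):
    # stand-in for expensive ray-casting over unknown space
    x, y = pose
    return max(0.0, 10.0 - math.hypot(x - 5.0, y - 5.0))

def cached_gain(pose, cell=0.5):
    key = (round(pose[0] / cell), round(pose[1] / cell))
    if key not in gain_cache:
        gain_cache[key] = estimate_gain(pose)
    return gain_cache[key]

def invalidate_near(point, radius=1.0, cell=0.5):
    # drop cached gains close to newly mapped space
    for key in [k for k in gain_cache
                if math.hypot(k[0] * cell - point[0], k[1] * cell - point[1]) <= radius]:
        del gain_cache[key]

print(cached_gain((4.9, 5.1)))   # computed
print(cached_gain((5.0, 5.0)))   # served from cache (same cell)
invalidate_near((5.0, 5.0))      # map changed here: recompute next time
```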

  • 9.
    Tang, Jiexiong
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    Ericson, Ludvig
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    GCNv2: Efficient Correspondence Prediction for Real-Time SLAM (2019). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no. 4, p. 3505-3512. Article in journal (Refereed)
    Abstract [en]

    In this letter, we present a deep learning-based network, GCNv2, for the generation of keypoints and descriptors. GCNv2 is built on our previous method, GCN, a network trained for 3D projective geometry. GCNv2 is designed with a binary descriptor vector in the same format as the ORB feature so that it can easily replace ORB in systems such as ORB-SLAM2. GCNv2 significantly improves the computational efficiency over GCN, which was only able to run on desktop hardware. We show how a modified version of ORB-SLAM2 using GCNv2 features runs on a Jetson TX2, an embedded low-power platform. Experimental results show that GCNv2 retains accuracy comparable to GCN and that it is robust enough to use for control of a flying drone. Source code is available at: https://github.com/jiexiong2016/GCNv2_SLAM.
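
    The practical point of the binary descriptor is that matching reduces to Hamming distance, which is what ORB-based pipelines expect. A minimal sketch with random stand-in descriptors, not actual network output:

```python
# Brute-force Hamming matching of binary descriptors, the operation an
# ORB-style pipeline performs. Descriptors are random 0/1 stand-ins.
import numpy as np

rng = np.random.default_rng(2)
desc_a = rng.integers(0, 2, (100, 256), dtype=np.uint8)  # frame A keypoints
desc_b = rng.integers(0, 2, (120, 256), dtype=np.uint8)  # frame B keypoints

# Hamming distance = number of differing bits (XOR, then count ones)
dists = (desc_a[:, None, :] ^ desc_b[None, :, :]).sum(axis=2)
matches = dists.argmin(axis=1)          # nearest neighbor in frame B
print("first five matches:", matches[:5])
```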

  • 10.
    Tang, Jiexiong
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Geometric Correspondence Network for Camera Motion Estimation (2018). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 3, no. 2, p. 1010-1017. Article in journal (Refereed)
    Abstract [en]

    In this paper, we propose a new learning scheme for generating geometric correspondences to be used for visual odometry. A convolutional neural network (CNN) and a recurrent neural network (RNN) are trained together to detect the location of keypoints as well as to generate corresponding descriptors in one unified structure. The network is optimized by warping points from the source frame to the reference frame with a rigid-body transform: essentially, learning from warping. The overall training is focused on movements of the camera rather than movements within the image, which leads to better consistency in the matching and, ultimately, better motion estimation. Experimental results show that the proposed method achieves better results than both related deep learning and hand-crafted methods. Furthermore, as a demonstration of the promise of our method, we use a naive SLAM implementation based on these keypoints and get a performance on par with ORB-SLAM.
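
    The warping supervision can be sketched for a single pixel: back-project with the source depth, apply the rigid-body transform, and re-project into the reference frame. Intrinsics, pose, and depth below are made-up numbers, not values from the paper.

```python
# Warp one pixel from a source frame into a reference frame given depth
# and a rigid-body transform: p_ref = K (R K^{-1} p_src * d + t).
import numpy as np

K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])          # hypothetical pinhole intrinsics
R = np.eye(3)                            # source-to-reference rotation
t = np.array([0.10, 0.0, 0.0])           # source-to-reference translation (m)

def warp(u, v, depth):
    p = depth * np.linalg.inv(K) @ np.array([u, v, 1.0])  # back-project
    q = K @ (R @ p + t)                                   # transform, project
    return q[:2] / q[2]

print(warp(300.0, 200.0, 2.0))  # pixel (300, 200) at 2 m in the source frame
```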

  • 11.
    Tang, Jiexiong
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Folkesson, John
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Jensfelt, Patric
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Sparse2Dense: From Direct Sparse Odometry to Dense 3-D Reconstruction (2019). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no. 2, p. 530-537. Article in journal (Refereed)
    Abstract [en]

    In this letter, we propose a new deep learning-based dense monocular simultaneous localization and mapping (SLAM) method. Compared to existing methods, the proposed framework constructs a dense three-dimensional (3-D) model via a sparse-to-dense mapping using learned surface normals. With single-view learned depth estimation as a prior for monocular visual odometry, we obtain both accurate positioning and high-quality depth reconstruction. The depth and normal are predicted by a single network trained in a tightly coupled manner. Experimental results show that our method significantly improves the performance of visual tracking and depth prediction in comparison to the state of the art in deep monocular dense SLAM.
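
    One way to picture the sparse-to-dense step: a sparse depth plus the learned surface normal defines a local plane, and depths at neighboring pixels follow from ray-plane intersection. The sketch below assumes this plane-propagation reading of the abstract; intrinsics, the sample, and the normal are invented.

```python
# Propagate one sparse depth to nearby pixels via the plane defined by
# that depth and its surface normal: d(u, v) = (n . P0) / (n . ray(u, v)).
import numpy as np

K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])          # hypothetical pinhole intrinsics
K_inv = np.linalg.inv(K)

u0, v0, d0 = 320.0, 240.0, 2.0           # sparse depth sample (pixel, meters)
n = np.array([0.0, 0.0, -1.0])           # normal of a fronto-parallel surface

P0 = d0 * (K_inv @ np.array([u0, v0, 1.0]))  # 3-D point of the sample

def depth_at(u, v):
    ray = K_inv @ np.array([u, v, 1.0])      # viewing ray of the neighbor
    return float(n @ P0) / float(n @ ray)    # ray/plane intersection depth

print(depth_at(330.0, 240.0))  # ≈ 2.0 for a fronto-parallel plane
```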
