  • 1.
    Alberti, Marina
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Chemical Science and Engineering (CHE).
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Relational approaches for joint object classification and scene similarity measurement in indoor environments (2014). In: Proc. of 2014 AAAI Spring Symposium Qualitative Representations for Robots 2014, Palo Alto, California: The AAAI Press, 2014. Conference paper (Refereed)
    Abstract [en]

    The qualitative structure of objects and their spatial distribution, to a large extent, define an indoor human environment scene. This paper presents an approach for indoor scene similarity measurement based on the spatial characteristics and arrangement of the objects in the scene. For this purpose, two main sets of spatial features are computed, from single objects and object pairs. A Gaussian Mixture Model is applied both on the single object features and the object pair features, to learn object class models and relationships of the object pairs, respectively. Given an unknown scene, the object classes are predicted using the probabilistic framework on the learned object class models. From the predicted object classes, object pair features are extracted. A final scene similarity score is obtained using the learned probabilistic models of object pair relationships. Our method is tested on a real world 3D database of desk scenes, using a leave-one-out cross-validation framework. To evaluate the effect of varying conditions on the scene similarity score, we apply our method on mock scenes, generated by removing objects of different categories in the test scenes.
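
The abstract above learns one Gaussian Mixture Model per object class over spatial features. As an illustration only (not the authors' code; the feature values, class names and component count below are invented), fitting per-class GMMs and predicting the most likely class for an unseen feature vector could be sketched like this:

```python
# Illustrative sketch: one GMM per object class over spatial features.
# The feature vectors and class labels are synthetic placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train = {
    "monitor": rng.normal([0.45, 0.30], 0.05, size=(50, 2)),  # e.g. [height, footprint]
    "mug":     rng.normal([0.10, 0.02], 0.02, size=(50, 2)),
}

# Fit one mixture model per class.
models = {c: GaussianMixture(n_components=2, random_state=0).fit(X)
          for c, X in train.items()}

def predict_class(x):
    """Return the class whose GMM assigns the feature vector the highest log-likelihood."""
    scores = {c: m.score_samples(x.reshape(1, -1))[0] for c, m in models.items()}
    return max(scores, key=scores.get)

print(predict_class(np.array([0.11, 0.025])))  # -> "mug" under these synthetic features
```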

  • 2.
    Ambrus, Rares
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bore, Nils
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Autonomous meshing, texturing and recognition of object models with a mobile robot (2017). In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) / [ed] Bicchi, A., Okamura, A., IEEE, 2017, p. 5071-5078. Conference paper (Refereed)
    Abstract [en]

    We present a system for creating object models from RGB-D views acquired autonomously by a mobile robot. We create high-quality textured meshes of the objects by approximating the underlying geometry with a Poisson surface. Our system employs two optimization steps, first registering the views spatially based on image features, and second aligning the RGB images to maximize photometric consistency with respect to the reconstructed mesh. We show that the resulting models can be used robustly for recognition by training a Convolutional Neural Network (CNN) on images rendered from the reconstructed meshes. We perform experiments on data collected autonomously by a mobile robot both in controlled and uncontrolled scenarios. We compare quantitatively and qualitatively to previous work to validate our approach.
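
The meshing step described above approximates the underlying object geometry with a Poisson surface. As a hedged illustration (the Open3D library, the parameter values and the file names are assumptions, and the registration and CNN-recognition stages are not shown), Poisson surface reconstruction from fused RGB-D views can be sketched as:

```python
# Minimal Poisson-meshing sketch with Open3D; file names and depth are placeholders.
import open3d as o3d

pcd = o3d.io.read_point_cloud("object_views.pcd")   # fused RGB-D views of one object
pcd.estimate_normals()                               # Poisson reconstruction needs normals
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
mesh = mesh.filter_smooth_simple(number_of_iterations=1)
o3d.io.write_triangle_mesh("object_mesh.ply", mesh)
```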

  • 3.
    Ambrus, Rares
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bore, Nils
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Autonomous meshing, texturing and recognition of object models with a mobile robot (2017). Conference paper (Refereed)
    Abstract [en]

    We present a system for creating object models from RGB-D views acquired autonomously by a mobile robot. We create high-quality textured meshes of the objects by approximating the underlying geometry with a Poisson surface. Our system employs two optimization steps, first registering the views spatially based on image features, and second aligning the RGB images to maximize photometric consistency with respect to the reconstructed mesh. We show that the resulting models can be used robustly for recognition by training a Convolutional Neural Network (CNN) on images rendered from the reconstructed meshes. We perform experiments on data collected autonomously by a mobile robot both in controlled and uncontrolled scenarios. We compare quantitatively and qualitatively to previous work to validate our approach.

  • 4.
    Ambrus, Rares
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bore, Nils
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Meta-rooms: Building and Maintaining Long Term Spatial Models in a Dynamic World (2014). In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014), IEEE conference proceedings, 2014, p. 1854-1861. Conference paper (Refereed)
    Abstract [en]

    We present a novel method for re-creating the static structure of cluttered office environments - which we define as the "meta-room" - from multiple observations collected by an autonomous robot equipped with an RGB-D depth camera over extended periods of time. Our method works directly with point clusters by identifying what has changed from one observation to the next, removing the dynamic elements and at the same time adding previously occluded objects to reconstruct the underlying static structure as accurately as possible. The process of constructing the meta-rooms is iterative and it is designed to incorporate new data as it becomes available, as well as to be robust to environment changes. The latest estimate of the meta-room is used to differentiate and extract clusters of dynamic objects from observations. In addition, we present a method for re-identifying the extracted dynamic objects across observations, thus mapping their spatial behaviour over extended periods of time.
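
The meta-room update above rests on detecting what changed between repeated point cloud observations of the same room. A minimal change-detection sketch (nearest-neighbour differencing with an assumed 5 cm threshold and synthetic data; not the paper's full pipeline, which also handles occlusions and registration) could be:

```python
# Illustrative scene differencing: points of the new observation with no close
# neighbour in the reference cloud are flagged as dynamic/changed.
import numpy as np
from scipy.spatial import cKDTree

def changed_points(reference, observation, threshold=0.05):
    """Return the subset of `observation` (N x 3) farther than `threshold` metres
    from every point in `reference` (M x 3)."""
    tree = cKDTree(reference)
    dist, _ = tree.query(observation, k=1)
    return observation[dist > threshold]

reference = np.random.rand(1000, 3)                                   # placeholder static structure
observation = np.vstack([reference[:900], np.random.rand(50, 3) + 2.0])  # plus a new object
print(changed_points(reference, observation).shape)                   # roughly (50, 3) new points
```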

  • 5.
    Ambrus, Rares
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ekekrantz, Johan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Unsupervised learning of spatial-temporal models of objects in a long-term autonomy scenario (2015). In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2015, p. 5678-5685. Conference paper (Refereed)
    Abstract [en]

    We present a novel method for clustering segmented dynamic parts of indoor RGB-D scenes across repeated observations by performing an analysis of their spatial-temporal distributions. We segment areas of interest in the scene using scene differencing for change detection. We extend the Meta-Room method and evaluate the performance on a complex dataset acquired autonomously by a mobile robot over a period of 30 days. We use an initial clustering method to group the segmented parts based on appearance and shape, and we further combine the clusters we obtain by analyzing their spatial-temporal behaviors. We show that using the spatial-temporal information further increases the matching accuracy.

  • 6.
    Ambrus, Rares
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Unsupervised object segmentation through change detection in a long term autonomy scenario (2016). In: IEEE-RAS International Conference on Humanoid Robots, IEEE, 2016, p. 1181-1187. Conference paper (Refereed)
    Abstract [en]

    In this work we address the problem of dynamic object segmentation in office environments. We make no prior assumptions on what is dynamic and static, and our reasoning is based on change detection between sparse and non-uniform observations of the scene. We model the static part of the environment, and we focus on improving the accuracy and quality of the segmented dynamic objects over long periods of time. We address the issue of adapting the static structure over time and incorporating new elements, for which we train and use a classifier whose output gives an indication of the dynamic nature of the segmented elements. We show that the proposed algorithms improve the accuracy and the rate of detection of dynamic objects by comparing with a labelled dataset.

  • 7.
    Axelsson, Unnar
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Underwater feature extraction and pillar mapping (2015). Report (Other academic)
    Abstract [en]

    A mechanically scanned imaging sonar, MSIS, produces a 2D image of the range and bearing of return intensities. The pattern produced in this image depends on the environmental feature that caused it. These features are very useful for underwater navigation, but the inverse mapping of sonar image pattern to environmental feature can be ambiguous. We investigate problems associated with using MSIS for navigation. In particular, we show that support vector machines can be used to classify the existence and types of features in a sonar image. We develop a sonar processing pipeline that can be used for navigation. This is tested on two sonar datasets collected from ROVs.
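
The report above classifies the existence and type of features in MSIS images with support vector machines. A hedged sketch of such a classifier (the intensity-profile features, class labels and kernel settings are invented; scikit-learn is assumed):

```python
# Illustrative SVM feature classifier; the descriptors are synthetic stand-ins
# for features extracted from sonar image regions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.2, 0.05, (40, 8)),   # "no feature" patches
               rng.normal(0.6, 0.05, (40, 8)),   # "wall" returns
               rng.normal(0.9, 0.05, (40, 8))])  # "pillar" returns
y = np.repeat(["none", "wall", "pillar"], 40)

clf = SVC(kernel="rbf", C=10.0, gamma="scale")
print(cross_val_score(clf, X, y, cv=5).mean())   # rough accuracy estimate on the toy data
```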

  • 8.
    Aydemir, Alper
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    What can we learn from 38,000 rooms? Reasoning about unexplored space in indoor environments (2012). In: Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, IEEE, 2012, p. 4675-4682. Conference paper (Refereed)
    Abstract [en]

    Many robotics tasks require the robot to predict what lies in the unexplored part of the environment. Although much work focuses on building autonomous robots that operate indoors, indoor environments are neither well understood nor analyzed enough in the literature. In this paper, we propose and compare two methods for predicting both the topology and the categories of rooms given a partial map. The methods are motivated by the analysis of two large annotated floor plan data sets corresponding to the buildings of the MIT and KTH campuses. In particular, utilizing graph theory, we discover that local complexity remains unchanged for growing global complexity in real-world indoor environments, a property which we exploit. In total, we analyze 197 buildings, 940 floors and over 38,000 real-world rooms. Such a large set of indoor places has not been investigated in previous work. We provide extensive experimental results and show the degree of transferability of spatial knowledge between two geographically distinct locations. We also contribute the KTH data set and the software tools to work with it.
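
The analysis above treats a floor plan as a graph of rooms and finds that local complexity stays roughly constant as buildings grow. A toy sketch (invented adjacency data, networkx assumed) of measuring local complexity as the mean room degree:

```python
# Toy room-connectivity graph; an edge means "these rooms share a doorway".
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("corridor", "office1"), ("corridor", "office2"),
    ("corridor", "kitchen"), ("kitchen", "storage"),
])

degrees = [d for _, d in G.degree()]
print("rooms:", G.number_of_nodes())
print("mean local complexity (average degree):", sum(degrees) / len(degrees))
```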

  • 9.
    Aydemir, Alper
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Search in the real world: Active visual object search based on spatial relations (2011). In: IEEE International Conference on Robotics and Automation (ICRA), 2011, IEEE, 2011, p. 2818-2824. Conference paper (Refereed)
    Abstract [en]

    Objects are integral to a robot's understanding of space. Various tasks such as semantic mapping, pick-and-carry missions or manipulation involve interaction with objects. Previous work in the field largely builds on the assumption that the object in question starts out within the ready sensory reach of the robot. In this work we aim to relax this assumption by providing the means to perform robust and large-scale active visual object search. Presenting spatial relations that describe topological relationships between objects, we then show how to use these to create potential search actions. We introduce a method for efficiently selecting search strategies given probabilities for those relations. Finally we perform experiments to verify the feasibility of our approach.
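
The work above selects search actions given probabilities that spatial relations hold. One simple, generic decision rule, shown here only as an illustration and not as the paper's planner, is to greedily pick the candidate action with the best success-probability-to-cost ratio:

```python
# Greedy choice of the next visual search action: highest expected success per unit cost.
# The candidate actions, probabilities and costs are invented for illustration.
candidates = [
    {"action": "search_table_in_kitchen",  "p_found": 0.45, "cost": 20.0},
    {"action": "search_shelf_in_office",   "p_found": 0.30, "cost": 8.0},
    {"action": "search_floor_in_corridor", "p_found": 0.10, "cost": 5.0},
]

best = max(candidates, key=lambda a: a["p_found"] / a["cost"])
print(best["action"])   # -> search_shelf_in_office under these numbers
```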

  • 10.
    Bore, Nils
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ambrus, Rares
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Efficient retrieval of arbitrary objects from long-term robot observations (2017). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 91, p. 139-150. Article in journal (Refereed)
    Abstract [en]

    We present a novel method for efficient querying and retrieval of arbitrarily shaped objects from large amounts of unstructured 3D point cloud data. Our approach first performs a convex segmentation of the data after which local features are extracted and stored in a feature dictionary. We show that the representation allows efficient and reliable querying of the data. To handle arbitrarily shaped objects, we propose a scheme which allows incremental matching of segments based on similarity to the query object. Further, we adjust the feature metric based on the quality of the query results to improve results in a second round of querying. We perform extensive qualitative and quantitative experiments on two datasets for both segmentation and retrieval, validating the results using ground truth data. Comparison with other state of the art methods further enforces the validity of the proposed method. Finally, we also investigate how the density and distribution of the local features within the point clouds influence the quality of the results.

  • 11.
    Bore, Nils
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Ekekrantz, Johan
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Jensfelt, Patric
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Folkesson, John
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Detection and Tracking of General Movable Objects in Large Three-Dimensional Maps (2019). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 35, no 1, p. 231-247. Article in journal (Refereed)
    Abstract [en]

    This paper studies the problem of detection and tracking of general objects with semistatic dynamics observed by a mobile robot moving in a large environment. A key problem is that due to the environment scale, the robot can only observe a subset of the objects at any given time. Since some time passes between observations of objects in different places, the objects might be moved when the robot is not there. We propose a model for this movement in which the objects typically only move locally, but with some small probability they jump longer distances through what we call global motion. For filtering, we decompose the posterior over local and global movements into two linked processes. The posterior over the global movements and measurement associations is sampled, while we track the local movement analytically using Kalman filters. This novel filter is evaluated on point cloud data gathered autonomously by a mobile robot over an extended period of time. We show that tracking jumping objects is feasible, and that the proposed probabilistic treatment outperforms previous methods when applied to real world data. The key to efficient probabilistic tracking in this scenario is focused sampling of the object posteriors.
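
In the filter above, local object movement is tracked analytically with Kalman filters while the rarer global jumps are sampled. A minimal constant-position Kalman update for one object's 2D location (the process and measurement noise values are assumptions for illustration, and the global-motion sampling is not shown):

```python
# Minimal 2D Kalman filter step for a semi-static object: random-walk motion model,
# direct position measurements.
import numpy as np

def kalman_step(mu, P, z, q=0.01, r=0.05):
    """One predict+update cycle. mu: 2-vector state, P: 2x2 covariance,
    z: 2-vector position measurement, q/r: process/measurement noise variances."""
    # Predict: the object is assumed (nearly) static, so the mean is unchanged.
    P = P + q * np.eye(2)
    # Update with the observed position (measurement matrix is the identity).
    S = P + r * np.eye(2)
    K = P @ np.linalg.inv(S)
    mu = mu + K @ (z - mu)
    P = (np.eye(2) - K) @ P
    return mu, P

mu, P = np.zeros(2), np.eye(2)
mu, P = kalman_step(mu, P, z=np.array([0.1, -0.05]))
print(mu, np.diag(P))
```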

  • 12.
    Bore, Nils
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Querying 3D Data by Adjacency Graphs (2015). In: Computer Vision Systems / [ed] Nalpantidis, Lazaros and Krüger, Volker and Eklundh, Jan-Olof and Gasteratos, Antonios, Springer Publishing Company, 2015, p. 243-252. Chapter in book (Refereed)
    Abstract [en]

    The need for robots to search the 3D data they have saved is becoming more apparent. We present an approach for finding structures in 3D models such as those built by robots of their environment. The method extracts geometric primitives from point cloud data. An attributed graph over these primitives forms our representation of the surface structures. Recurring substructures are found with frequent graph mining techniques. We investigate if a model invariant to changes in size and reflection using only the geometric information of and between primitives can be discriminative enough for practical use. Experiments confirm that it can be used to support queries of 3D models.

  • 13.
    Bore, Nils
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Retrieval of Arbitrary 3D Objects From Robot Observations (2015). In: Retrieval of Arbitrary 3D Objects From Robot Observations, Lincoln: IEEE Robotics and Automation Society, 2015, p. 1-8. Conference paper (Refereed)
    Abstract [en]

    We have studied the problem of retrieval of arbitrary object instances from a large point cloud data set. The context is autonomous robots operating for long periods of time, weeks up to months, and regularly saving point cloud data. The ever growing collection of data is stored in a way that allows ranking candidate examples of any query object, given in the form of a single view point cloud, without the need to access the original data. The top ranked ones can then be compared in a second phase using the point clouds themselves. Our method does not assume that the point clouds are segmented or that the objects to be queried are known ahead of time. This means that we are able to represent the entire environment but it also poses problems for retrieval. To overcome this our approach learns from each actual query to improve search results in terms of the ranking. This learning is automatic and based only on the queries. We demonstrate our system on data collected autonomously by a robot operating over 13 days in our building. Comparisons with other techniques and several variations of our method are shown.

  • 14.
    Bore, Nils
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Torroba, Ignacio
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Folkesson, John
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Sparse Gaussian Process SLAM, Storage and Filtering for AUV Multibeam Bathymetry (2018). In: 2018 IEEE OES Autonomous Underwater Vehicle Symposium, 2018. Conference paper (Refereed)
    Abstract [en]

    With dead-reckoning from velocity sensors, AUVs may construct short-term, local bathymetry maps of the sea floor using multibeam sensors. However, the position estimate from dead-reckoning will include some drift that grows with time. In this work, we focus on long-term onboard storage of these local bathymetry maps, and the alignment of maps with respect to each other. We propose using Sparse Gaussian Processes for this purpose, and show that the representation has several advantages, including an intuitive alignment optimization, data compression, and sensor noise filtering. We demonstrate these three key capabilities on two real-world datasets.
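
The representation above models the sea floor with sparse Gaussian Processes, which compress the raw multibeam soundings and filter sensor noise. The sketch below uses a plain (non-sparse) GP from scikit-learn on synthetic soundings purely to illustrate depth interpolation and noise filtering; it is not the paper's sparse inducing-point model:

```python
# Plain GP regression over synthetic multibeam soundings: depth = f(easting, northing).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
XY = rng.uniform(0, 100, size=(300, 2))                  # sounding positions (m)
depth = 20 + 0.05 * XY[:, 0] + rng.normal(0, 0.3, 300)   # noisy depths (m), gentle slope

gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0) + WhiteKernel(0.1),
                              normalize_y=True)
gp.fit(XY, depth)

query = np.array([[50.0, 50.0]])
mean, std = gp.predict(query, return_std=True)
print(f"predicted depth {mean[0]:.2f} m +/- {std[0]:.2f} m")
```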

  • 15.
    Chen, Xi
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Ghadirzadeh, Ali
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Folkesson, John
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Björkman, Mårten
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Jensfelt, Patric
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Deep Reinforcement Learning to Acquire Navigation Skills for Wheel-Legged Robots in Complex Environments (2018). In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018. Conference paper (Refereed)
    Abstract [en]

    Mobile robot navigation in complex and dynamic environments is a challenging but important problem. Reinforcement learning approaches fail to solve these tasks efficiently due to reward sparsities, temporal complexities and high-dimensionality of sensorimotor spaces which are inherent in such problems. We present a novel approach to train action policies to acquire navigation skills for wheel-legged robots using deep reinforcement learning. The policy maps height-map image observations to motor commands to navigate to a target position while avoiding obstacles. We propose to acquire the multifaceted navigation skill by learning and exploiting a number of manageable navigation behaviors. We also introduce a domain randomization technique to improve the versatility of the training samples. We demonstrate experimentally a significant improvement in terms of data-efficiency, success rate, robustness against irrelevant sensory data, and also the quality of the maneuver skills.
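
The policy described above maps height-map image observations to motor commands. A minimal PyTorch sketch of such a policy network (the layer sizes, action dimension and 64x64 height-map resolution are assumptions, and no reinforcement learning training loop is shown):

```python
# Tiny convolutional policy: height-map image -> continuous motor command.
import torch
import torch.nn as nn

class HeightMapPolicy(nn.Module):
    def __init__(self, n_actions=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(32 * 14 * 14, 128), nn.ReLU(),
            nn.Linear(128, n_actions), nn.Tanh(),   # commands scaled to [-1, 1]
        )

    def forward(self, heightmap):
        return self.head(self.encoder(heightmap))

policy = HeightMapPolicy()
obs = torch.zeros(1, 1, 64, 64)      # one 64x64 height-map observation
print(policy(obs).shape)             # -> torch.Size([1, 4])
```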

  • 16. Christensen, H.
    et al.
    Folkesson, John
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Hedström, A.
    Lundberg, C.
    UGV technology for urban intervention (2004). Conference paper (Refereed)
  • 17.
    Christensen, Henrik
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Folkesson, John
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Hedström, Andreas
    UGV technology for urban navigation (2004). In: Unmanned Ground Vehicle Technology VI / [ed] Gerhart, G.R.; Shoemaker, C.M.; Gage, D.W., Bellingham: SPIE - The International Society for Optical Engineering, 2004, Vol. 5422, p. 191-197. Conference paper (Refereed)
    Abstract [en]

    Deployment of humans in an urban setting for search and rescue type missions poses a major risk to the personnel. In rescue missions the risk can stem from debris, gas, etc., and in a strategic setting the risk can stem from snipers, mines, gas, etc. There is consequently a natural interest in studies of how UGV technology can be deployed for tasks such as reconnaissance and retrieval of objects (bombs, injured people, etc.). Today most vehicles used by the military and bomb squads are tele-operated and without any autonomy. This implies that operation of the vehicles is a stressful and demanding task. Part of this stress can be removed through introduction of autonomous functionality. Autonomy implicitly requires use of map information to allow the system to localize and traverse a particular area; in addition, autonomous mapping of an area is a valuable functionality as part of reconnaissance missions to provide an initial inventory of a new area. A host of different sensory modalities can be used for mapping. In general no single modality is, however, sufficient for robust and efficient mapping. In the present study GPS, inertial cues, laser ranging and odometry are used for simultaneous mapping and localisation in urban environments. The mapping is carried out autonomously using a coverage strategy to ensure full mapping of a particular area. In relation to mapping, another important issue is the design of an efficient user interface that allows a regular rescue worker, or a soldier, to operate the vehicle without detailed knowledge about robotics. A number of different designs for user interfaces will be presented and results from studies with a range of end-users (soldiers) will also be reported. The complete system has been tested in an urban warfare facility outside of Stockholm. Detailed results will be reported from two different test facilities.

  • 18.
    Ekekrantz, Johan
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Adaptive Iterative Closest Keypoint (2013). In: 2013 European Conference on Mobile Robots, ECMR 2013 - Conference Proceedings, New York: IEEE, 2013, p. 80-87. Conference paper (Refereed)
    Abstract [en]

    Finding accurate correspondences between overlapping 3D views is crucial for many robotic applications, from multi-view 3D object recognition to SLAM. This step, often referred to as view registration, plays a key role in determining the overall system performance. In this paper, we propose a fast and simple method for registering RGB-D data, building on the principle of the Iterative Closest Point (ICP) algorithm. In contrast to ICP, our method exploits both point position and visual appearance and is able to smoothly transition the weighting between them with an adaptive metric. This results in robust initial registration based on appearance and accurate final registration using 3D points. Using keypoint clustering we are able to utilize a non exhaustive search strategy, reducing runtime of the algorithm significantly. We show through an evaluation on an established benchmark that the method significantly outperforms current methods in both robustness and precision.
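
The registration method above builds on the ICP principle, blending point positions and visual appearance with an adaptive metric. For reference only, one iteration of plain point-to-point ICP (closest-point association plus a least-squares rigid fit; no appearance term, so this is not the adaptive algorithm itself) looks like:

```python
# One iteration of vanilla point-to-point ICP between two 3D point sets.
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """Associate each source point with its nearest target point and return the
    rigid transform (R, t) that best aligns the pairs in the least-squares sense."""
    matches = target[cKDTree(target).query(source, k=1)[1]]
    src_c, tgt_c = source.mean(axis=0), matches.mean(axis=0)
    H = (source - src_c).T @ (matches - tgt_c)      # cross-covariance of the pairs
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return R, t

target = np.random.rand(200, 3)
source = (target + np.array([0.1, 0.0, 0.0]))[:150]   # translated partial copy
R, t = icp_step(source, target)
print(np.round(t, 2))                                  # close to [-0.1, 0, 0]
```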

  • 19.
    Ekekrantz, Johan
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Thippur, Akshaya
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC).
    Probabilistic Primitive Refinement algorithm for colored point cloud data (2015). In: 2015 European Conference on Mobile Robots (ECMR), Lincoln: IEEE conference proceedings, 2015. Conference paper (Refereed)
    Abstract [en]

    In this work we present the Probabilistic Primitive Refinement (PPR) algorithm, an iterative method for accurately determining the inliers of an estimated primitive (such as a plane or sphere) parametrization in an unorganized, noisy point cloud. The measurement noise of the points belonging to the proposed primitive surface is modelled using a Gaussian distribution, and the measurements of points extraneous to the proposed surface are modelled as a histogram. Given these models, the probability that a measurement originated from the proposed surface model can be computed. Our novel technique to model the noisy surface from the measurement data does not require a priori given parameters for the sensor noise model. The absence of sensitive parameter selection is a strength of our method. Using the geometric information obtained from such an estimate, the algorithm then builds a color-based model for the surface, further boosting the accuracy of the segmentation. If used iteratively, the PPR algorithm can be seen as a variation of the popular mean-shift algorithm with an adaptive stochastic kernel function.
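
At the core of the refinement above is the probability that a measured point originated from the proposed primitive, computed from a Gaussian model for surface points and a broader model for extraneous points. A small sketch of that posterior (with a uniform stand-in for the histogram outlier model and invented parameter values):

```python
# Posterior probability that a point belongs to the estimated primitive, given its
# distance residual. The outlier model here is uniform instead of a learned histogram.
import numpy as np
from scipy.stats import norm

def inlier_probability(residual, sigma=0.01, outlier_density=1.0, prior_inlier=0.5):
    """residual: distance from the point to the primitive surface (metres)."""
    p_in = prior_inlier * norm.pdf(residual, scale=sigma)
    p_out = (1.0 - prior_inlier) * outlier_density
    return p_in / (p_in + p_out)

for r in [0.0, 0.01, 0.05]:
    print(f"residual {r:.2f} m -> P(inlier) = {inlier_probability(r):.3f}")
```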

  • 20. Evestedt, Niclas
    et al.
    Ward, Erik
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Axehill, Daniel
    Interaction aware trajectory planning for merge scenarios in congested traffic situations (2016). In: 2016 IEEE 19th International Conference on Intelligent Transportation Systems, IEEE, 2016, p. 465-472. Conference paper (Refereed)
    Abstract [en]

    In many traffic situations there are times where interaction with other drivers is necessary and unavoidable in order to safely progress towards an intended destination. This is especially true for merge manoeuvres into dense traffic, where drivers sometimes must be somewhat aggressive and show the intention of merging in order to interact with the other driver and make the driver open the gap needed to execute the manoeuvre safely. Many motion planning frameworks for autonomous vehicles adopt a reactive approach where simple models of other traffic participants are used and therefore need to adhere to large margins in order to behave safely. However, the large margins needed can sometimes get the system stuck in congested traffic where time gaps between vehicles are too small. In other situations, such as a highway merge, it can be significantly more dangerous to stop on the entrance ramp if the gaps are found to be too small than to make a slightly more aggressive manoeuvre and let the driver behind open the gap needed. To remedy this problem, this work uses the Intelligent Driver Model (IDM) to explicitly model the interaction of other drivers and evaluates the risk by their required deceleration, in a similar manner as the Minimum Overall Braking Induced by Lane change (MOBIL) model that has been used in large-scale traffic simulations before. This allows the algorithm to evaluate the effect of our own trajectory plans on other drivers by simulating the nearby traffic situation. Finding a globally optimal solution is often intractable in these situations, so instead a large set of candidate trajectories is generated and evaluated against the traffic scene by forward simulations of other traffic participants. By discretization, and using an efficient trajectory generator together with efficient modelling of the traffic scene, real-time demands can be met.
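
The forward simulations above drive surrounding vehicles with the Intelligent Driver Model (IDM). The IDM acceleration law itself is standard; a compact implementation with typical (assumed, not paper-specific) parameter values:

```python
# Standard Intelligent Driver Model acceleration; parameter values are common defaults.
import math

def idm_acceleration(v, gap, dv, v0=30.0, T=1.5, a_max=1.0, b=1.5, s0=2.0):
    """v: ego speed (m/s), gap: bumper-to-bumper distance to the leader (m),
    dv: approach rate v - v_leader (m/s)."""
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a_max * b)))
    return a_max * (1.0 - (v / v0) ** 4 - (s_star / gap) ** 2)

print(idm_acceleration(v=20.0, gap=35.0, dv=0.0))   # slight braking in this situation
```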

  • 21. Faeulhammer, Thomas
    et al.
    Ambrus, Rares
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Burbridge, Christopher
    Zillich, Micheal
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hawes, Nick
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Vincze, Marcus
    Autonomous Learning of Object Models on a Mobile Robot (2017). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 2, no 1, p. 26-33, article id 7393491. Article in journal (Refereed)
    Abstract [en]

    In this article we present and evaluate a system which allows a mobile robot to autonomously detect, model and re-recognize objects in everyday environments. Whilst other systems have demonstrated one of these elements, to our knowledge we present the first system which is capable of doing all of these things, all without human interaction, in normal indoor scenes. Our system detects objects to learn by modelling the static part of the environment and extracting dynamic elements. It then creates and executes a view plan around a dynamic element to gather additional views for learning. Finally these views are fused to create an object model. The performance of the system is evaluated on publicly available datasets as well as on data collected by the robot in both controlled and uncontrolled scenarios.

  • 22. Fallon, Maurice F.
    et al.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    McClelland, Hunter
    Leonard, John J.
    Relocating Underwater Features Autonomously Using Sonar-Based SLAM (2013). In: IEEE Journal of Oceanic Engineering, ISSN 0364-9059, E-ISSN 1558-1691, Vol. 38, no 3, p. 500-513. Article in journal (Refereed)
    Abstract [en]

    This paper describes a system for reacquiring features of interest in a shallow-water ocean environment, using autonomous underwater vehicles (AUVs) equipped with low-cost sonar and navigation sensors. In performing mine countermeasures, it is critical to enable AUVs to navigate accurately to previously mapped objects of interest in the water column or on the seabed, for further assessment or remediation. An important aspect of the overall system design is to keep the size and cost of the reacquisition vehicle as low as possible, as it may potentially be destroyed in the reacquisition mission. This low-cost requirement prevents the use of sophisticated AUV navigation sensors, such as a Doppler velocity log (DVL) or an inertial navigation system (INS). Our system instead uses the Proviewer 900-kHz imaging sonar from Blueview Technologies, which produces forward-looking sonar (FLS) images at ranges up to 40 m at approximately 4 Hz. In large volumes, it is hoped that this sensor can be manufactured at low cost. Our approach uses a novel simultaneous localization and mapping (SLAM) algorithm that detects and tracks features in the FLS images to renavigate to a previously mapped target. This feature-based navigation (FBN) system incorporates a number of recent advances in pose graph optimization algorithms for SLAM. The system has undergone extensive field testing over a period of more than four years, demonstrating the potential for the use of this new approach for feature reacquisition. In this report, we review the methodologies and components of the FBN system, describe the system's technological features, review the performance of the system in a series of extensive in-water field tests, and highlight issues for future research.

  • 23.
    Fallon, Maurice F.
    et al.
    MIT.
    Johannsson, Hordur
    MIT.
    Kaess, Michael
    MIT.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    McClelland, Hunter
    MIT.
    Englot, Brendan J.
    MIT.
    Hover, Franz S.
    MIT.
    Leonard, John J.
    MIT.
    Simultaneous Localization and Mapping in Marine Environments (2013). In: Marine Robot Autonomy, New York: Springer, 2013, p. 329-372. Chapter in book (Refereed)
    Abstract [en]

    Accurate navigation is a fundamental requirement for robotic systems—marine and terrestrial. For an intelligent autonomous system to interact effectively and safely with its environment, it needs to accurately perceive its surroundings. While traditional dead-reckoning filtering can achieve extremely low drift rates, the localization accuracy decays monotonically with distance traveled. Other approaches (such as external beacons) can help; nonetheless, the typical prerogative is to remain at a safe distance and to avoid engaging with the environment. In this chapter we discuss alternative approaches which utilize onboard sensors so that the robot can estimate the location of sensed objects and use these observations to improve its own navigation as well as its perception of the environment. This approach allows for meaningful interaction and autonomy. Three motivating autonomous underwater vehicle (AUV) applications are outlined herein. The first fuses external range sensing with relative sonar measurements. The second application localizes relative to a prior map so as to revisit a specific feature, while the third builds an accurate model of an underwater structure which is consistent and complete. In particular we demonstrate that each approach can be abstracted to a core problem of incremental estimation within a sparse graph of the AUV’s trajectory and the locations of features of interest which can be updated and optimized in real time on board the AUV.

  • 24.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Robustness of the Quadratic Antiparticle Filter for Robot Localization (2011). In: European Conference on Mobile Robots / [ed] Achim J. Lilienthal and Tom Duckett, 2011, p. 297-302. Conference paper (Refereed)
    Abstract [en]

    Robot localization using odometry and feature measurements is a nonlinear estimation problem. An efficient solution is found using the extended Kalman filter, EKF. The EKF however suffers from divergence and inconsistency when the nonlinearities are significant. We recently developed a new type of filter based on an auxiliary variable Gaussian distribution, which we call the antiparticle filter, AF, as an alternative nonlinear estimation filter that has improved consistency and stability. The AF reduces to the iterative EKF, IEKF, when the posterior distribution is well represented by a simple Gaussian. It transitions to a more complex representation as required. We have implemented an example of the AF which uses a parameterization of the mean as a quadratic function of the auxiliary variables, which we call the quadratic antiparticle filter, QAF. We present simulation of robot feature based localization in which we examine the robustness to bias and disturbances, with comparison to the EKF.

  • 25.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Simultaneous localization and mapping with robots (2005). Doctoral thesis, monograph (Other scientific)
    Abstract [en]

    A fundamental competence of any mobile robot system is the ability to remain localized while operating in an environment. For unknown/partially known environments there is a need to combine localization with automatic mapping to facilitate the localization process. The process of Simultaneous Localization and Mapping (SLAM) is the topic of this thesis.

    SLAM is a topic that has been studied for more than two decades using a variety of different methodologies, yet its deployment has been hampered by problems in terms of computational complexity, consistent integration of partially observable features, divergence due to linearization of the process, introduction of topological constraints into the estimation process, and efficient handling of ambiguities in the data-association process. The present study is an attempt to address and overcome these limitations.

    Initially a new model for features, inspired by the SP-map model, is derived for consistent handling of a variety of sensor features such as points, lines and planes. The new feature model enables incremental initialization of the estimation process and efficient integration of sensory data for partially observable features. At the same time, the new feature model allows for consistent handling of all features within a unified framework.

    To address the problems associated with data-association, computational complexity and topological constraints, a graphical estimation method is derived. The estimation of features and pose is based on energy optimization. Through graph based optimization it is possible to design a feature model where the key non-linearities are identified and handled in a consistent manner so as to avoid earlier discovered divergence problems. At the same time any-time data-association can be handled in an efficient manner. Loop closing in the new representation is easily facilitated and the resulting maps show superior consistency even for large scale mapping problems.

    The developed methods have been empirically evaluated for SLAM using laser and video data. Experimental results are provided both for indoor and outdoor environments.

    The methods presented in this study provide new solutions to the linearization problem, feature observability, any-time data association, and integration of topological constraints.

  • 26.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    The Antiparticle Filter: an Adaptive Nonlinear Estimator (2011). In: International Symposium of Robotics Research, 2011. Conference paper (Refereed)
    Abstract [en]

    We introduce the antiparticle filter, AF, a new type of recursive Bayesian estimator that is unlike either the extended Kalman Filter, EKF, unscented Kalman Filter, UKF, or the particle filter, PF. We show that for a classic problem of robot localization the AF can substantially outperform these other filters in some situations. The AF estimates the posterior distribution as an auxiliary variable Gaussian which gives an analytic formula using no random samples. It adaptively changes the complexity of the posterior distribution as the uncertainty changes. It is equivalent to the EKF when the uncertainty is low while being able to represent non-Gaussian distributions as the uncertainty increases. The computation time can be much faster than a particle filter for the same accuracy. We have simulated comparisons of two types of AF to the EKF, the iterative EKF, the UKF, an iterative UKF, and the PF, demonstrating that AF can reduce the error to a consistent accurate value.

  • 27.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    The antiparticle filter - an adaptive nonlinear estimator (2017). In: 15th International Symposium of Robotics Research, 2011, Springer, 2017, p. 219-234. Conference paper (Refereed)
    Abstract [en]

    We introduce the antiparticle filter, AF, a new type of recursive Bayesian estimator that is unlike either the extended Kalman Filter, EKF, unscented Kalman Filter, UKF or the particle filter PF. We show that for a classic problem of robot localization the AF can substantially outperform these other filters in some situations. The AF estimates the posterior distribution as an auxiliary variable Gaussian which gives an analytic formula using no random samples. It adaptively changes the complexity of the posterior distribution as the uncertainty changes. It is equivalent to the EKF when the uncertainty is low while being able to represent non-Gaussian distributions as the uncertainty increases. The computation time can be much faster than a particle filter for the same accuracy. We have simulated comparisons of two types of AF to the EKF, the iterative EKF, the UKF, an iterative UKF, and the PF demonstrating that AF can reduce the error to a consistent accurate value.

  • 28.
    Folkesson, John
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik
    Closing the Loop With Graphical SLAM (2007). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 23, no 4, p. 731-741. Article in journal (Refereed)
    Abstract [en]

    The problem of simultaneous localization and mapping (SLAM) is addressed using a graphical method. The main contributions are a computational complexity that scales well with the size of the environment, the elimination of most of the linearization inaccuracies, and a more flexible and robust data association. We also present a detection criterion for closing loops. We show how multiple topological constraints can be imposed on the graphical solution by a process of coarse fitting followed by fine tuning. The coarse fitting is performed using an approximate system. This approximate system can be shown to possess all the local symmetries. Observations made during the SLAM process often contain symmetries, that is to say, directions of change to the state space that do not affect the observed quantities. It is important that these directions do not shift as we approximate the system by, for example, linearization. The approximate system is both linear and block diagonal. This makes it a very simple system to work with, especially when imposing global topological constraints on the solution. These global constraints are nonlinear. We show how these constraints can be discovered automatically. We develop a method of testing multiple hypotheses for data matching using the graph. This method is derived from statistical theory and only requires simple counting of observations. The central insight is to examine the probability of not observing the same features on a return to a region. We present results with data from an outdoor scenario using a SICK laser scanner.
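
The graphical SLAM formulation above optimizes a graph of poses and features with loop-closing constraints imposed on the solution. A toy 1D pose-graph least-squares example (odometry edges plus one loop-closure edge, solved with scipy; purely illustrative, not the paper's energy formulation or data-association scheme):

```python
# Toy 1D pose graph: four poses chained by noisy odometry, plus a loop closure
# asserting that pose 3 should coincide with pose 0. Solved by nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

odometry = [1.1, 0.9, -2.1]          # measured relative motions x[i+1] - x[i]
loop = (3, 0, 0.0)                   # measured offset between pose 3 and pose 0

def residuals(x):
    res = [x[0]]                                           # anchor the first pose at 0
    res += [x[i + 1] - x[i] - m for i, m in enumerate(odometry)]
    i, j, m = loop
    res.append(x[i] - x[j] - m)                            # loop-closure constraint
    return res

x0 = np.cumsum([0.0] + odometry)                           # dead-reckoned initial guess
sol = least_squares(residuals, x0)
print(np.round(sol.x, 3))            # loop closure pulls pose 3 back toward pose 0
```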

  • 29.
    Folkesson, John
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Graphical SLAM: a self-correcting map (2004). In: 2004 IEEE International Conference on Robotics and Automation, Proceedings, 2004, p. 383-390. Conference paper (Refereed)
    Abstract [en]

    We describe an approach to simultaneous localization and mapping, SLAM. This approach has the highly desirable property of robustness to data association errors. Another important advantage of our algorithm is that non-linearities are computed exactly, so that global constraints can be imposed even if they result in large shifts to the map. We represent the map as a graph and use the graph to find an efficient map update algorithm. We also show how topological consistency can be imposed on the map, such as, closing a loop. The algorithm has been implemented on an outdoor robot and we have experimental validation of our ideas. We also explain how the graph can be simplified leading to linear approximations of sections of the map. This reduction gives us a natural way to connect local map patches into a much larger global map.

  • 30.
    Folkesson, John
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Outdoor exploration and SLAM using a compressed filter (2003). In: Proceedings - IEEE International Conference on Robotics and Automation, 2003, p. 419-427. Conference paper (Refereed)
    Abstract [en]

    In this paper we describe the use of automatic exploration for autonomous mapping of outdoor scenes. We describe a real-time SLAM implementation along with an autonomous exploration algorithm. We have implemented SLAM with a compressed extended Kalman filter (CEKF) on an outdoor robot. Our implementation uses walls of buildings as features. The state predictions are made by using a combination of odometry and inertial data. The system was tested on a 200 x 200 m site with 18 buildings on variable terrain. The paper helps explain some of the implementation details of the compressed filter, such as how to organize the map, as well as more general issues like how to include the effects of pitch and roll and efficient feature detection.

  • 31.
    Folkesson, John
    et al.
    Massachusetts Institute of Technology, Cambridge, MA.
    Christensen, Henrik
    Georgia Institute of Technology, Atlanta, GA.
    SIFT Based Graphical SLAM on a Packbot (2008). In: Springer Tracts in Advanced Robotics, ISSN 1610-7438, E-ISSN 1610-742X, Vol. 42, p. 317-328. Article in journal (Refereed)
    Abstract [en]

    We present an implementation of Simultaneous Localization and Mapping (SLAM) that uses infrared (IR) camera images collected at 10 Hz from a Packbot robot. The Packbot has a number of challenging characteristics with regard to vision based SLAM. The robot travels on tracks which causes the odometry to be poor especially while turning. The IMU is of relatively low quality as well, making the drift in the motion prediction greater than on conventional robots. In addition, the very low placement of the camera and its fixed orientation looking forward is not ideal for estimating motion from the images. Several novel ideas are tested here. Harris corners are extracted from every 5th frame and used as image features for our SLAM. Scale Invariant Feature Transform, SIFT, descriptors are formed from each of these. These are used to match image features over these 5 frame intervals. Lucas-Kanade tracking is done to find corresponding pixels in the frames between the SIFT frames. This allows a substantial computational savings over doing SIFT matching every frame. The epipolar constraints between all these matches that are implied by the dead-reckoning are used to further test the matches and eliminate poor features. Finally, the features are initialized on the map at once using an inverse depth parameterization which eliminates the delay in initialization of the 3D point features.
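
The vision front end above extracts Harris corners with SIFT descriptors on keyframes and tracks the points in between with Lucas-Kanade. A condensed OpenCV sketch of that detect-then-track pattern (the file names and parameter values are placeholders, and the SIFT matching and epipolar checks are not shown):

```python
# Detect corners on a keyframe, then track them into the next frame with LK optical flow.
import cv2
import numpy as np

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Harris-scored corners on the keyframe (SIFT descriptors could be computed at these points).
pts = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01,
                              minDistance=7, useHarrisDetector=True)

# Lucas-Kanade tracking of those corners into the next frame.
next_pts, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None,
                                                 winSize=(21, 21), maxLevel=3)
tracked = next_pts[status.ravel() == 1]
print(f"tracked {len(tracked)} of {len(pts)} corners")
```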

  • 32.
    Folkesson, John
    et al.
    Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Graphical SLAM for Outdoor Applications2007In: Journal of Field Robotics, ISSN 1556-4959, Vol. 24, no 1-2, p. 51-70Article in journal (Refereed)
    Abstract [en]

    Application of SLAM outdoors is challenged by complexity, handling of non-linearities and flexible integration of a diverse set of features. A graphical approach to SLAM is introduced that enables flexible data association. The method allows for handling of non-linearities. The method also enables easy introduction of global constraints. Computational issues can be addressed as a graph reduction problem. A complete framework for graphical based SLAM is presented. The framework is demonstrated for a number of outdoor experiments using an ATRV robot equipped with a SICK laser scanner and a CrossBow Inertial Unit. The experiments include handling of large outdoor environments with loop closing. The presented system operates at 5 Hz on an 800 MHz computer.

  • 33.
    Folkesson, John
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Christensen, Henrik I.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Robust SLAM2004In: IAV-2004, 2004Conference paper (Refereed)
  • 34.
    Folkesson, John
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Christensen, Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Graphical SLAM using vision and the measurement subspace2005In: 2005 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, IEEE conference proceedings, 2005, p. 325-330Conference paper (Refereed)
    Abstract [en]

    In this paper we combine a graphical approach for simultaneous localization and mapping, SLAM, with a feature representation that addresses symmetries and constraints in the feature coordinates, the measurement subspace, M-space. The graphical method has the advantages of delayed linearizations and soft commitment to feature measurement matching. It also allows large maps to be built up as a network of small local patches, star nodes. This local map net is then easier to work with. The formation of the star nodes is explicitly stable and invariant with all the symmetries of the original measurements. All linearization errors are kept small by using a local frame. The construction of this invariant star is made clearer by the M-space feature representation. The M-space allows the symmetries and constraints of the measurements to be explicitly represented. We present results using both vision and laser sensors.

  • 35.
    Folkesson, John
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Vision SLAM in the Measurement Subspace2005In: 2005 IEEE International Conference on Robotics and Automation (ICRA), Vols 1-4  Book Series, 2005, p. 30-35Conference paper (Refereed)
    Abstract [en]

    In this paper we describe an approach to feature representation for simultaneous localization and mapping, SLAM. It is a general representation for features that addresses symmetries and constraints in the feature coordinates. Furthermore, the representation allows for the features to be added to the map with partial initialization. This is an important property when using oriented vision features where angle information can be used before their full pose is known. The number of the dimensions for a feature can grow with time as more information is acquired. At the same time as the special properties of each type of feature are accounted for, the commonalities of all map features are also exploited to allow SLAM algorithms to be interchanged as well as choice of sensors and features. In other words the SLAM implementation need not be changed at all when changing sensors and features and vice versa. Experimental results both with vision and range data and combinations thereof are presented.

  • 36.
    Folkesson, John
    et al.
    Massachusetts Institute of Technology, Cambridge, MA.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Christensen, Henrik I.
    Georgia Institute of Technology, Atlanta, GA.
    The m-space feature representation for slam2007In: IEEE Transactions on robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 23, no 5, p. 1024-1035Article in journal (Refereed)
    Abstract [en]

    In this paper, a new feature representation for simultaneous localization and mapping (SLAM) is discussed. The representation addresses feature symmetries and constraints explicitly to make the basic model numerically robust. In previous SLAM work, complete initialization of features is typically performed prior to introduction of a new feature into the map. This results in delayed use of new data. To allow early use of sensory data, the new feature representation addresses the use of features that initially have been partially observed. This is achieved by explicitly modelling the subspace of a feature that has been observed. In addition to accounting for the special properties of each feature type, the commonalities can be exploited in the new representation to create a feature framework that allows for interchanging of SLAM algorithms, sensors and features. Experimental results are presented using a low-cost Web-cam, a laser range scanner, and combinations thereof.
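
    A loose, hypothetical illustration (not the paper's M-space formulation) of the partial-initialization idea: a feature keeps a full coordinate vector, but only its observed subspace enters the estimator, and that subspace can grow as more information arrives. All class and variable names are invented.

        import numpy as np

        class PartialFeature:
            # Toy feature whose estimated subspace grows as it becomes observable,
            # loosely mimicking the partial-initialization idea described above.

            def __init__(self, dim):
                self.x = np.zeros(dim)                 # full feature coordinates
                self.observed = np.zeros(dim, bool)    # which dimensions are initialized

            def initialize_dims(self, idx, values):
                # Add newly observable dimensions (e.g. depth once parallax is sufficient).
                self.x[idx] = values
                self.observed[idx] = True

            def projection(self):
                # Projection onto the observed subspace; only these rows enter the filter.
                return np.eye(len(self.x))[self.observed]

        # A wall segment: orientation observed first, remaining geometry filled in later.
        wall = PartialFeature(dim=4)
        wall.initialize_dims([0], [0.3])                    # bearing-only information
        wall.initialize_dims([1, 2, 3], [2.0, 5.1, 7.4])    # later: full geometry
        print(wall.projection().shape)                      # (4, 4) once fully observed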

  • 37.
    Folkesson, John
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Leederkerken, Jacques
    MIT.
    Williams, Rob
    MIT.
    Patrikalakis, Andrew
    MIT.
    Leonard, John
    MIT.
    A Feature Based Navigation System for an Autonomous Underwater Robot2008In: Field And Service Robotics: Results Of The 6th International Conference / [ed] Laugier, C; Siegwart, R, Springer Berlin/Heidelberg, 2008, Vol. 42, p. 105-114Conference paper (Refereed)
    Abstract [en]

    We present a system for autonomous underwater navigation as implemented on a Nekton Ranger autonomous underwater vehicle, AUV. This is one of the first implementations of a practical application of simultaneous localization and mapping on an AUV. Besides being an application of real-time SLAM, the implementation demonstrates a novel data fusion solution where data from 7 sources are fused at different time scales in 5 separate estimators. By modularizing the data fusion problem in this way, each estimator can be tuned separately to provide output useful to the end goal of localizing the AUV on an a priori map. The Ranger AUV is equipped with a BlueView blazed array sonar which is used to detect features in the underwater environment. Underwater testing results are presented. The features in these tests are deployed radar reflectors.

  • 38.
    Folkesson, John
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Leonard, John
    MIT.
    Autonomy through SLAM for an Underwater Robot2011In: Robotics Research The 14th International Symposium ISRR, Springer Berlin/Heidelberg, 2011, Vol. 70, p. 55-70Conference paper (Refereed)
    Abstract [en]

    We present an autonomous underwater vehicle (AUV) that integrates state-of-the-art simultaneous localization and mapping (SLAM) into its decision processes. This autonomy is used to carry out undersea target reacquisition missions that would otherwise be impossible with a low-cost platform. The AUV requires only simple sensors and operates without navigation equipment such as a Doppler velocity log, inertial navigation or acoustic beacons. Demonstrations of the capability show that the vehicle can carry out the task in an ocean environment. The system includes a forward looking sonar and a set of simple vehicle sensors. The functionality includes feature tracking using a graphical square root smoothing SLAM algorithm, global localization using multiple EKF estimators, and knowledge adaptive mission execution. The global localization incorporates a unique robust matching criterion which utilizes both positive and negative information. Separate match hypotheses are maintained by each EKF estimator, allowing all matching decisions to be reversible.

  • 39.
    Folkesson, John
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Leonard, John
    MIT.
    Leederkerken, Jacques
    MIT.
    Williams, Rob
    MIT.
    Feature tracking for underwater navigation using sonar2007In: Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems San Diego, CA, USA, Oct 29 - Nov 2, 2007: Vols 1-9, IEEE conference proceedings, 2007, p. 3678-3684Conference paper (Refereed)
    Abstract [en]

    Tracking sonar features in real time on an underwater robot is a challenging task. One reason is the low observability of the sonar in some directions. For example, using a blazed array sonar one observes range and the angle to the array axis with fair precision. The angle around the axis is poorly constrained. This situation is problematic for tracking features in world frame Cartesian coordinates as the error surfaces will not be ellipsoids. Thus Gaussian tracking of the features will not work properly. The situation is similar to the problem of tracking features in camera images. There the unconstrained direction is depth and its errors are highly non-Gaussian. We propose a solution to the sonar problem that is analogous to the successful inverse depth feature parameterization for vision tracking, introduced by [1]. We parameterize the features by the robot pose where it was first seen and the range/bearing from that pose. Thus the 3D features have 9 parameters that specify their world coordinates. We use a nonlinear transformation on the poorly observed bearing angle to give a more accurate Gaussian approximation to the uncertainty. These features are tracked in a SLAM framework until there is enough information to initialize world frame Cartesian coordinates for them. The more compact representation can then be used for global SLAM or localization purposes. We present results for a system running real-time underwater SLAM/localization. These results show that the parameterization leads to greater consistency in the feature location estimates.
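
    A hedged sketch of the parameterization described above: a feature is anchored to the robot pose at first observation plus a range and two angles, and converted to world-frame Cartesian coordinates only when enough information has accumulated. The axis and angle conventions below are assumptions made for the example, not the paper's exact definitions.

        import numpy as np
        from scipy.spatial.transform import Rotation as R

        def sonar_feature_to_world(anchor_pose, dist, bearing_h, bearing_v):
            # Convert a (range, bearing) detection anchored at the robot pose where it was
            # first seen into world-frame Cartesian coordinates.
            #   anchor_pose: (x, y, z, roll, pitch, yaw) at first observation
            #   dist:        measured range [m]
            #   bearing_h:   bearing relative to the array axis [rad] (well observed)
            #   bearing_v:   angle around the axis [rad] (poorly observed)
            d = np.array([np.cos(bearing_v) * np.cos(bearing_h),
                          np.cos(bearing_v) * np.sin(bearing_h),
                          np.sin(bearing_v)])                     # direction in sensor frame
            rot = R.from_euler("xyz", anchor_pose[3:])            # roll, pitch, yaw
            return np.asarray(anchor_pose[:3]) + dist * rot.apply(d)

        # Example: feature 12 m ahead and slightly to port of the first-observation pose.
        p = sonar_feature_to_world((10.0, -3.0, -8.0, 0.0, 0.0, 1.2), 12.0, 0.15, 0.0)
        print(p)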

  • 40. Hawes, N
    et al.
    Ambrus, Rares
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Bore, Nils
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Hanheide, Marc
    et al.,
    The STRANDS Project Long-Term Autonomy in Everyday Environments2017In: IEEE robotics & automation magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 24, no 3, p. 146-156Article in journal (Refereed)
  • 41.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Exploiting distinguishable image features in robotic mapping and localization2006In: European Robotics Symposium 2006 / [ed] Christensen, HI, 2006, Vol. 22, p. 143-157Conference paper (Refereed)
    Abstract [en]

    Simultaneous localization and mapping (SLAM) is an important research area in robotics. Lately, systems that use a single bearing-only sensor have received significant attention and the use of visual sensors has been strongly advocated. In this paper, we present a framework for 3D bearing only SLAM using a single camera. We concentrate on image feature selection in order to achieve precise localization and thus good reconstruction in 3D. In addition, we demonstrate how these features can be managed to provide real-time performance and fast matching, to detect loop-closing situations. The proposed vision system has been combined with an extended Kalman Filter (EKF) based SLAM method. A number of experiments have been performed in indoor environments which demonstrate the validity and effectiveness of the approach. We also show how the SLAM generated map can be used for robot localization. The use of vision features which are distinguishable allows a straightforward solution to the "kidnapped-robot" scenario.

  • 42.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    A framework for vision based bearing only 3D SLAM2006In: Proceedings of the 2006 IEEE International Conference on Robotics and Automation, Orlando, Florida - May 2006: Vols 1-10, IEEE , 2006, p. 1944-1950Conference paper (Refereed)
    Abstract [en]

    This paper presents a framework for 3D vision based bearing only SLAM using a single camera, an interesting setup for many real applications due to its low cost. The focus is on the management of the features to achieve real-time performance in extraction, matching and loop detection. For matching image features to map landmarks, a modified, rotationally variant SIFT descriptor is used in combination with a Harris-Laplace detector. To reduce the complexity in the map estimation while maintaining matching performance, only a few high quality image features are used for map landmarks. The rest of the features are used for matching. The framework has been combined with an EKF implementation for SLAM. Experiments performed in indoor environments are presented. These experiments demonstrate the validity and effectiveness of the approach. In particular they show how the robot is able to successfully match current image features to the map when revisiting an area.

  • 43.
    Karaoguz, Hakan
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bore, Nils
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Human-Centric Partitioning of the Environment2017In: 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), IEEE, 2017, p. 844-850Conference paper (Refereed)
    Abstract [en]

    In this paper, we present an object based approach for human-centric partitioning of the environment. Our approach for determining the human-centric regions is to detect objects that are commonly associated with frequent human presence. In order to detect these objects, we employ state of the art perception techniques. The detected objects are stored with their spatio-temporal information in the robot's memory to be later used for generating the regions. The advantages of our method are that it is autonomous, requires only a small set of perceptual data, and does not even require people to be present while generating the regions. The generated regions are validated using a 1-month dataset collected in an indoor office environment. The experimental results show that although a small set of perceptual data is used, the regions are generated at densely occupied locations.
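
    One plausible way to realize the region-generation step, shown only as a hypothetical sketch (the paper does not necessarily use this clustering method): cluster the stored object positions and treat each cluster as a human-centric region. The coordinates and parameters are invented.

        import numpy as np
        from sklearn.cluster import DBSCAN

        # Stored (x, y) positions of objects associated with human presence
        # (e.g. monitors, mugs, keyboards); values are made up for illustration.
        detections = np.array([[1.0, 2.1], [1.2, 2.0], [0.9, 1.9],    # desk A
                               [6.5, 0.3], [6.7, 0.2], [6.4, 0.5]])   # desk B

        labels = DBSCAN(eps=0.8, min_samples=2).fit_predict(detections)
        for region in set(labels) - {-1}:
            centroid = detections[labels == region].mean(axis=0)
            print(f"region {region}: centroid {centroid}, {np.sum(labels == region)} objects")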

  • 44.
    Kunze, Lars
    et al.
    University of Birmingham.
    Burbridge, Christopher
    University of Birmingham.
    Alberti, Marina
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Thippur, Akshaya
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Chemical Science and Engineering (CHE).
    Hawes, Nick
    University of Birmingham.
    Combining Top-down Spatial Reasoning and Bottom-up Object Class Recognition for Scene Understanding2014In: Proc. of 2014 IEEE/RSJ International Conference on IntelligentRobots and Systems 2014, IEEE conference proceedings, 2014, p. 2910-2915Conference paper (Refereed)
    Abstract [en]

    Many robot perception systems are built to consider only intrinsic object features when recognising the class of an object. By integrating both top-down spatial relational reasoning and bottom-up object class recognition, the overall performance of a perception system can be improved. In this paper we present a unified framework that combines a 3D object class recognition system with learned, spatial models of object relations. In robot experiments we show that our combined approach improves the classification results on real world office desks compared to pure bottom-up perception. Hence, by using spatial knowledge during object class recognition, perception becomes more efficient and robust, and robots can understand scenes more effectively.
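
    A toy numerical illustration of the combination idea (invented numbers, not from the paper): bottom-up class scores are reweighted by a top-down spatial-relation prior, which can flip the most likely class.

        import numpy as np

        classes = ["monitor", "keyboard", "mug"]
        bottom_up = np.array([0.40, 0.35, 0.25])      # appearance-based class scores
        spatial_prior = np.array([0.15, 0.70, 0.15])  # e.g. "object in front of a monitor"

        posterior = bottom_up * spatial_prior
        posterior /= posterior.sum()
        print(dict(zip(classes, posterior.round(3))))  # keyboard becomes the top hypothesis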

  • 45.
    Lundberg, Carl
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Barck-Holst, Carl
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Folkesson, John
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Christensen, Henrik I.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    PDA interface for a field robot2003In: Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS03), 2003, p. 2882-2888Conference paper (Refereed)
    Abstract [en]

    Operating robots in an outdoor setting poses interesting problems in terms of interaction. To interact with the robot there is a need for a flexible computer interface. In this paper a PDA-based (personal digital assistant, i.e. a handheld computer) approach to robot interaction is presented. The system is designed to allow non-expert users to utilise the robot for operation in an urban exploration setup. The basic design is outlined and a first set of experiments is reported.

  • 46.
    Mänttäri, Joonatan
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Folkesson, John
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Incorporating Uncertainty in Predicting Vehicle Maneuvers at Intersections With Complex Interactions2019In: 2019 IEEE Intelligent Vehicles Symposium (IV), IEEE, 2019Conference paper (Refereed)
    Abstract [en]

    Highly automated driving systems are required to make robust decisions in many complex driving environments, such as urban intersections with high traffic. In order to make as informed and safe decisions as possible, it is necessary for the system to be able to predict the future maneuvers and positions of other traffic agents, as well as to provide information about the uncertainty in the prediction to the decision making module. While Bayesian approaches are a natural way of modeling uncertainty, recently deep learning-based methods have emerged to address this need as well. However, balancing the computational and system complexity, while also taking into account agent interactions and uncertainties, remains a difficult task. The work presented in this paper proposes a method of producing predictions of other traffic agents' trajectories in intersections with a single Deep Learning module, while incorporating uncertainty and the interactions between traffic participants. The accuracy of the generated predictions is tested on a simulated intersection with a high level of interaction between agents, and different methods of incorporating uncertainty are compared. Preliminary results show that the CVAE-based method produces qualitatively and quantitatively better measurements of uncertainty and manages to assign probability more accurately to the future occupied space of traffic agents.

  • 47.
    Mänttäri, Joonatan
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Folkesson, John
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Ward, Erik
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Learning to Predict Lane Changes in Highway Scenarios Using Dynamic Filters on a Generic Traffic Representation2018In: IEEE Intelligent Vehicles Symposium, Proceedings, Institute of Electrical and Electronics Engineers Inc. , 2018, p. 1385-1392Conference paper (Refereed)
    Abstract [en]

    In highway driving scenarios it is important for highly automated driving systems to be able to recognize and predict the intended maneuvers of other drivers in order to make robust and informed decisions. Many methods utilize the current kinematics of vehicles to make these predictions, but it is possible to examine the relations between vehicles as well to gain more information about the traffic scene and make more accurate predictions. The work presented in this paper proposes a novel method of predicting lane change maneuvers in highway scenarios using deep learning and a generic visual representation of the traffic scene. Experimental results suggest that by operating on the visual representation, the spatial relations between arbitrary vehicles can be captured by our method and used for more informed predictions without the need for explicit dynamic or driver interaction models. The proposed method is evaluated on highway driving scenarios using the Interstate-80 dataset and compared to a kinematics based prediction model, with results showing that the proposed method produces more robust predictions across the prediction horizon than the comparison model.

  • 48. Peng, Dongdong
    et al.
    Folkesson, John
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Xu, Chao
    Robust Particle Filter Based on Huber Function for Underwater Terrain Aided Navigation2019In: IET radar, sonar & navigation, ISSN 1751-8784, E-ISSN 1751-8792Article in journal (Refereed)
    Abstract [en]

    Terrain aided navigation is a promising technique to determine the location of an underwater vehicle by matching terrain measurements against a known map. The particle filter is a natural choice for terrain aided navigation because of its ability to handle nonlinear, multimodal problems. However, the terrain measurements are vulnerable to outliers, which will cause the particle filter to degrade or even diverge. Modification of the Gaussian likelihood function by using robust cost functions is a way to reduce the effect of outliers on an estimate. We propose to use the Huber function to modify the measurement model used to set importance weights in a particle filter. We verify our method in simulations of multi-beam sonar on a real underwater digital map. The results demonstrate that the proposed method is more robust to outliers than the standard particle filter.
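
    The central modification can be sketched as follows (a simplified illustration, not the paper's exact likelihood): the squared residual in the Gaussian log-weight is replaced by a Huber cost, so outlier beams contribute linearly rather than quadratically. Noise levels and the toy data are invented.

        import numpy as np

        def huber(r, delta):
            # Huber cost: quadratic near zero, linear for |r| > delta.
            a = np.abs(r)
            return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

        def importance_weights(depth_pred, depth_meas, sigma=0.5, delta=1.0, robust=True):
            # Weights for N particles whose predicted terrain depths are depth_pred (N x M)
            # against M beam measurements. With robust=False this is the usual Gaussian weight.
            r = (depth_pred - depth_meas) / sigma
            cost = huber(r, delta) if robust else 0.5 * r**2
            logw = -cost.sum(axis=1)
            w = np.exp(logw - logw.max())          # subtract max for numerical stability
            return w / w.sum()

        # Toy example: 3 particles, 4 beams, one outlier beam in the measurement.
        pred = np.array([[10.0, 11.0, 9.5, 10.2]] * 3) + np.array([[0.0], [0.3], [2.0]])
        meas = np.array([10.1, 11.0, 9.4, 25.0])       # last beam is an outlier
        print(importance_weights(pred, meas))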

  • 49.
    Rixon Fuchs, Louise
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Gällström, Andreas
    Folkesson, John
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Object Recognition in Forward Looking Sonar Images using Transfer Learning2018Conference paper (Refereed)
    Abstract [en]

    Forward Looking Sonars (FLS) are a typical choice of sonar for autonomous underwater vehicles. They are most often the main sensor for obstacle avoidance and can be used for monitoring, homing, following and docking as well. Those tasks require discrimination between noise and various classes of objects in the sonar images. Robust recognition of sonar data still remains a problem, but if solved it would enable more autonomy for underwater vehicles, providing more reliable information about the surroundings to aid decision making. Recent advances in image recognition using Deep Learning methods have been rapid. While image recognition with Deep Learning is known to require large amounts of labeled data, there are data-efficient learning methods using generic features learned by a network pre-trained on data from a different domain. This enables us to work with much smaller domain-specific datasets, making the method interesting to explore for sonar object recognition with limited amounts of training data. We have developed a Convolutional Neural Network (CNN) based classifier for FLS images and compared its performance to classification using classical methods and hand-crafted features.
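
    A minimal transfer-learning sketch along the lines described above (assumed backbone, class count and hyperparameters; not the authors' network): an ImageNet-pretrained model supplies the generic features, and only a new classification head is trained on the small sonar dataset.

        import torch
        import torch.nn as nn
        from torchvision import models

        NUM_CLASSES = 4   # number of sonar object classes; value assumed for the example

        # Pretrained backbone provides generic features; freeze it and train a new head.
        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        for p in model.parameters():
            p.requires_grad = False
        model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

        optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
        criterion = nn.CrossEntropyLoss()

        # One (dummy) training step on a batch of sonar image crops resized to 224 x 224.
        images = torch.randn(8, 3, 224, 224)
        labels = torch.randint(0, NUM_CLASSES, (8,))
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()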

  • 50.
    Schilling, Fabian
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Chen, Xi
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Geometric and visual terrain classification for autonomous mobile navigation2017In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2017, article id 8206092Conference paper (Refereed)
    Abstract [en]

    In this paper, we present a multi-sensory terrain classification algorithm with a generalized terrain representation using semantic and geometric features. We compute geometric features from lidar point clouds and extract pixel-wise semantic labels from a fully convolutional network that is trained using a dataset with a strong focus on urban navigation. We use data augmentation to overcome the biases of the original dataset and apply transfer learning to adapt the model to new semantic labels in off-road environments. Finally, we fuse the visual and geometric features using a random forest to classify the terrain traversability into three classes: safe, risky and obstacle. We implement the algorithm on our four-wheeled robot and test it in novel environments, including both urban and off-road scenes which are distinct from the training environments, under summer and winter conditions. We provide experimental results to show that our algorithm can perform accurate and fast prediction of terrain traversability in a mixture of environments with a small set of training data.
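
    A hedged sketch of the fusion step (feature layout, class encoding and data are assumptions made for illustration): per-cell geometric features and semantic class probabilities are concatenated and fed to a random forest over the three traversability classes.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # Per-cell feature vector: geometric statistics (e.g. slope, roughness, height
        # variance from lidar) concatenated with semantic class probabilities from the
        # segmentation network. Random data stands in for real features here.
        rng = np.random.default_rng(0)
        X = np.hstack([rng.random((200, 3)),                 # geometric features
                       rng.dirichlet(np.ones(5), 200)])      # semantic probabilities
        y = rng.integers(0, 3, 200)                          # 0=safe, 1=risky, 2=obstacle

        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
        print(clf.predict(X[:5]))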
