  • 1.
    Aydemir, Alper
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Bishop, Adrian N.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Simultaneous Object Class and Pose Estimation for Mobile Robotic Applications with Minimalistic Recognition. 2010. In: 2010 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA) / [ed] Rakotondrabe M; Ivan IA, 2010, p. 2020-2027. Conference paper (Refereed)
    Abstract [en]

    In this paper we address the problem of simultaneous object class and pose estimation using nothing more than object class label measurements from a generic object classifier. We detail a method for designing a likelihood function over the robot configuration space. This function provides a likelihood measure of an object being of a certain class given that the robot (from some position) sees and recognizes an object as being of some (possibly different) class. Using this likelihood function in a recursive Bayesian framework allows us to achieve a kind of spatial averaging and determine the object pose (up to certain ambiguities to be made precise). We show how inter-class confusion from certain robot viewpoints can actually increase the ability to determine the object pose. Our approach is motivated by the idea of minimalistic sensing since we use only class label measurements, even though we attempt to estimate the object pose in addition to the class.
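
    Illustrative aside (not from the paper): the recursive Bayesian update described above can be sketched with a discrete pose grid and a viewpoint-dependent label likelihood; the class names, grid, and confusion model below are invented stand-ins for the likelihood function the paper designs.

# Minimal sketch of recursive Bayesian class-and-pose estimation from class
# label measurements only. The confusion model and discretization are
# illustrative assumptions, not the likelihood design from the paper.
import numpy as np

classes = ["mug", "bowl"]                  # hypothetical object classes
poses = np.deg2rad(np.arange(0, 360, 45))  # candidate object orientations

def label_likelihood(reported, true_class, pose, view_angle):
    """P(classifier reports `reported` | true class, object pose, viewpoint).
    Assumption: recognition degrades when the object is seen edge-on."""
    visibility = 0.5 + 0.5 * abs(np.cos(pose - view_angle))
    p_correct = 0.5 + 0.45 * visibility    # between roughly 0.5 and 0.95
    return p_correct if reported == true_class else 1.0 - p_correct

# Uniform prior over the joint (class, pose) hypothesis space.
belief = np.full((len(classes), len(poses)), 1.0 / (len(classes) * len(poses)))

# Simulated class label measurements taken from different robot viewpoints.
observations = [("mug", 0.0), ("mug", np.pi / 2), ("bowl", np.pi)]

for reported, view_angle in observations:
    for ci, c in enumerate(classes):
        for pi, p in enumerate(poses):
            belief[ci, pi] *= label_likelihood(reported, c, p, view_angle)
    belief /= belief.sum()                 # recursive Bayesian normalization

ci, pi = np.unravel_index(np.argmax(belief), belief.shape)
print("MAP class:", classes[ci], " MAP pose (deg):", np.degrees(poses[pi]))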

  • 2.
    Basiri, Meysam
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bishop, Adrian N.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Distributed control of triangular formations with angle-only constraints. 2010. In: Systems & control letters (Print), ISSN 0167-6911, E-ISSN 1872-7956, Vol. 59, no 2, p. 147-154. Article in journal (Refereed)
    Abstract [en]

    This paper considers the coupled, bearing-only formation control of three mobile agents moving in the plane. Each agent has only local inter-agent bearing knowledge and is required to maintain a specified angular separation relative to both neighbor agents. Assuming that the desired angular separation of each agent relative to the group is feasible, a triangle is generated. The control law is distributed and accordingly each agent can determine its own control law using only the locally measured bearings. A convergence result is established in this paper which guarantees global asymptotic convergence of the formation to the desired formation shape.
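
    Illustrative aside (not from the paper): a plausible angle-error-driven law of this flavour steers each agent along the bisector of its two measured bearings, scaled by the gap between the measured and desired subtended angle. The gains, initial positions, and the specific law below are assumptions for illustration, not the control law whose global convergence the paper proves.

# Bearing-only triangular formation sketch: each agent moves along the
# bisector of its two locally measured bearings, scaled by the error between
# its measured and desired interior angle. Illustrative law, not the paper's.
import numpy as np

desired = np.deg2rad([60.0, 60.0, 60.0])               # desired interior angles
pos = np.array([[0.0, 0.0], [4.0, 0.5], [1.0, 3.0]])   # initial agent positions
gain, dt = 0.5, 0.05

def bearings(pos, i):
    """Unit bearing vectors from agent i to its two neighbours."""
    j, k = (i + 1) % 3, (i + 2) % 3
    b_j = (pos[j] - pos[i]) / np.linalg.norm(pos[j] - pos[i])
    b_k = (pos[k] - pos[i]) / np.linalg.norm(pos[k] - pos[i])
    return b_j, b_k

for _ in range(3000):
    vel = np.zeros_like(pos)
    for i in range(3):
        b_j, b_k = bearings(pos, i)
        angle = np.arccos(np.clip(b_j @ b_k, -1.0, 1.0))   # subtended angle
        bisector = (b_j + b_k) / np.linalg.norm(b_j + b_k)
        # Moving toward the neighbours grows the angle, moving away shrinks it.
        vel[i] = gain * (desired[i] - angle) * bisector
    pos = pos + dt * vel

angles = [np.degrees(np.arccos(np.clip(np.dot(*bearings(pos, i)), -1, 1)))
          for i in range(3)]
print("final interior angles (deg):", np.round(angles, 1))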

  • 3.
    Basiri, Meysam
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Bishop, Adrian N.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Distributed Control of Triangular Sensor Formations with Angle-Only Constraints. 2009. In: 2009 INTERNATIONAL CONFERENCE ON INTELLIGENT SENSORS, SENSOR NETWORKS AND INFORMATION PROCESSING (ISSNIP 2009), NEW YORK: IEEE, 2009, p. 121-126. Conference paper (Refereed)
    Abstract [en]

    This paper considers the coupled formation control of three mobile agents moving in the plane. Each agent has only local inter-agent bearing knowledge and is required to maintain a specified angular separation relative to its neighbors. The problem considered in this paper differs from similar problems in the literature since no inter-agent distance measurements are employed and the desired formation is specified entirely by the internal triangle angles. Each agent's control law is distributed and based only on its locally measured bearings. A convergence result is established which guarantees global convergence of the formation to the desired formation shape.

  • 4.
    Bishop, Adrian
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Stochastically convergent localization of objects and actively controllable sensor-object pose. 2009. In: Proceedings of 10th European Control Conference (ECC 2009), 2009. Conference paper (Refereed)
    Abstract [en]

    The problem of object (network) localization using a mobile sensor is examined in this paper. Specifically, we consider a set of stationary objects located in the plane and a single mobile nonholonomic sensor tasked with estimating their relative position from range and bearing measurements. We derive a coordinate transform and a relative sensor-object motion model that leads to a novel problem formulation where the measurements are linear in the object positions. We then apply an extended Kalman filter-like algorithm to the estimation problem. Using stochastic calculus we provide an analysis of the convergence properties of the filter. We then illustrate that it is possible to steer the mobile sensor to achieve a relative sensor-object pose using a continuous control law. This last fact is significant since we circumvent Brockett's theorem and control the relative sensor-source pose using a simple controller.
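
    Illustrative aside (not from the paper): the general idea of turning range and bearing readings into measurements that are linear in the object position can be sketched with a converted-measurement Kalman filter for a single static object and a known sensor trajectory; the coordinate handling and noise model below are simplified placeholders, not the paper's transform or filter.

# Converted-measurement Kalman filter sketch: a static object observed by a
# moving sensor with known pose. Range/bearing readings are converted to
# Cartesian fixes, which are linear (H = I) in the object position.
import numpy as np

rng = np.random.default_rng(0)
obj = np.array([5.0, 3.0])                      # true object position (unknown)
x, P = np.zeros(2), np.eye(2) * 100.0           # state estimate and covariance
sigma_r, sigma_b = 0.1, np.deg2rad(2.0)

for t in range(200):
    # Known (simulated) sensor trajectory: a circle around the workspace.
    s = np.array([4.0 * np.cos(0.05 * t), 4.0 * np.sin(0.05 * t)])
    heading = 0.05 * t + np.pi / 2

    # Noisy range and bearing measured in the sensor frame.
    d = obj - s
    r = np.linalg.norm(d) + sigma_r * rng.standard_normal()
    b = np.arctan2(d[1], d[0]) - heading + sigma_b * rng.standard_normal()

    # Convert to a Cartesian fix; this measurement is linear in the object position.
    z = s + r * np.array([np.cos(b + heading), np.sin(b + heading)])
    # Approximate converted-measurement covariance (small-noise linearization).
    J = np.array([[np.cos(b + heading), -r * np.sin(b + heading)],
                  [np.sin(b + heading),  r * np.cos(b + heading)]])
    R = J @ np.diag([sigma_r**2, sigma_b**2]) @ J.T

    # Static object: prediction step is trivial; standard linear update (H = I).
    K = P @ np.linalg.inv(P + R)
    x = x + K @ (z - x)
    P = (np.eye(2) - K) @ P

print("estimate:", np.round(x, 3), " truth:", obj)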

  • 5.
    Bishop, Adrian N.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    A tutorial on constraints for positioning on the plane. 2010. In: 2010 IEEE 21st International Symposium on Personal Indoor and Mobile Radio Communications (PIMRC), IEEE, 2010, p. 1689-1694. Conference paper (Refereed)
    Abstract [en]

    This paper introduces and surveys a number of determinant constraints on the measurement errors in a variety of positioning scenarios. An algorithm for exploiting the constraints for accurate positioning is introduced and the relationship between the proposed algorithm and a so-called traditional maximum likelihood algorithm is examined.

  • 6.
    Bishop, Adrian N.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. Australian National University, Canberra, Australia.
    Gaussian-sum-based probability hypothesis density filtering with delayed and out-of-sequence measurements. 2010. In: 18th Mediterranean Conference on Control and Automation, MED'10 - Conference Proceedings, 2010, p. 1423-1428. Conference paper (Refereed)
    Abstract [en]

    The problem of multiple-sensor-based multiple-object tracking is studied for adverse environments involving clutter (false positives), missing measurements (false negatives) and random target births and deaths (a priori unknown target numbers). Various (potentially spatially separated) sensors are assumed to generate signals which are sent to the estimator via parallel channels which incur independent delays. These signals may arrive out of order, be corrupted or even lost. In addition, there may be periods when the estimator receives no information. A closed-form, recursive solution to the considered problem is detailed that generalizes the Gaussian-mixture probability hypothesis density (GM-PHD) filter previously detailed in the literature. This generalization allows the GM-PHD framework to be applied in more realistic network scenarios involving not only transmission delays but also more general irregular measurement sequences, where particular measurements from some sensors can arrive out of order with respect to the generating sensor and also with respect to the signals generated by the other sensors in the network.
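
    Illustrative aside (not from the paper): the sketch below shows only a naive buffer-and-reprocess way of coping with out-of-sequence measurements, using a scalar Kalman filter; it is not the closed-form GM-PHD generalization the paper derives, and merely makes the measurement-ordering problem concrete.

# Generic buffer-and-reprocess handling of delayed / out-of-sequence
# measurements, shown with a scalar random-walk Kalman filter. This is NOT the
# GM-PHD generalization derived in the paper; it only illustrates the problem.
import numpy as np

q, r = 0.01, 0.25            # process and measurement noise variances

def run_filter(measurements):
    """measurements: list of (timestep, value) assumed sorted by timestep."""
    x, p = 0.0, 1.0
    t_prev = 0
    for t, z in measurements:
        p += q * (t - t_prev)            # predict forward to measurement time
        k = p / (p + r)
        x, p = x + k * (z - x), (1 - k) * p
        t_prev = t
    return x, p

# Measurements arrive over the network out of order: (timestep, value).
arrived = [(1, 0.9), (3, 1.2), (2, 1.1), (6, 1.6), (4, 1.3)]

buffer = []
for t, z in arrived:
    buffer.append((t, z))
    buffer.sort(key=lambda m: m[0])      # keep the log in generation order
    # Naive strategy: re-run the filter over the reordered log from scratch.
    x, p = run_filter(buffer)
    print(f"after arrival of t={t}: estimate {x:.3f}, variance {p:.3f}")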

  • 7.
    Bishop, Adrian N.
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. Australian National University (ANU), Australia .
    Basiri, M.
    Bearing-only triangular formation control on the plane and the sphere. 2010. In: 18th Mediterranean Conference on Control and Automation, MED'10 - Conference Proceedings, 2010, p. 790-795. Conference paper (Refereed)
    Abstract [en]

    We consider the problem of distributed bearing-only formation control. Each agent measures the inter-agent bearings in a local coordinate system and is tasked with maintaining a specified angular separation relative to its neighbors. The problem we consider differs from other problems in the literature since no inter-agent distance measurements are employed. Each agent's control law is distributed and based only on its locally measured bearings. A strong convergence result is established which guarantees global convergence of the formation to the desired shape while at the same time ensuring that collisions are avoided naturally. We show that the control scheme is robust to agent motion failures and the presence of additional group motion inputs. Finally, we extend our system to the case where the agents' motion is restricted to a sphere.

  • 8.
    Bishop, Adrian N.
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Fidan, Baris
    Anderson, Brian D. O.
    Dogancay, Kutluyil
    Pathirana, Pubudu N.
    Optimal Range-Difference-Based Localization Considering Geometrical Constraints. 2008. In: IEEE Journal of Oceanic Engineering, ISSN 0364-9059, E-ISSN 1558-1691, Vol. 33, no 3, p. 289-301. Article in journal (Refereed)
    Abstract [en]

    This paper proposes a new type of algorithm aimed at finding the traditional maximum-likelihood (TML) estimate of the position of a target given time-difference-of-arrival (TDOA) information contaminated by noise. The novelty lies in the fact that a performance index, akin to but not identical with that in maximum likelihood (ML), is minimized subject to a number of constraints, which flow from geometric constraints inherent in the underlying problem. The minimization is in a higher dimensional space than for TML, and has the advantage that the algorithm can be very straightforwardly and systematically initialized. Simulation evidence shows that failure to converge to a solution of the localization problem near the true value is less likely to occur with this new algorithm than with TML. This makes it attractive to use in adverse geometric situations.
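
    Illustrative aside (not from the paper): the TML baseline referred to above can be sketched as a Gauss-Newton least-squares fit to the range differences; the sensor layout, noise level, and initialization are invented, and the constrained algorithm the paper proposes is not reproduced here.

# Gauss-Newton solver for the unconstrained "traditional ML" (least-squares)
# TDOA problem used as the paper's baseline; the constrained algorithm it
# proposes is not reproduced here. Sensor layout and noise are invented.
import numpy as np

rng = np.random.default_rng(1)
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.0, 7.0])
sigma = 0.05

def range_differences(x):
    d = np.linalg.norm(sensors - x, axis=1)
    return d[1:] - d[0]                  # differences relative to sensor 0

z = range_differences(target) + sigma * rng.standard_normal(3)

x = np.array([5.0, 5.0])                 # initial guess (centre of the array)
for _ in range(20):
    d = np.linalg.norm(sensors - x, axis=1)
    units = (x - sensors) / d[:, None]   # unit vectors from sensors to x
    J = units[1:] - units[0]             # Jacobian of the range differences
    residual = z - range_differences(x)
    x = x + np.linalg.solve(J.T @ J, J.T @ residual)

print("TML estimate:", np.round(x, 3), " truth:", target)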

  • 9.
    Bishop, Adrian N.
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Fidan, Baris
    Anderson, Brian D. O.
    Dogancay, Kutluyil
    Pathirana, Pubudu N.
    Optimality analysis of sensor-target localization geometries. 2010. In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 46, no 3, p. 479-492. Article in journal (Refereed)
    Abstract [en]

    The problem of target localization involves estimating the position of a target from multiple noisy sensor measurements. It is well known that the relative sensor-target geometry can significantly affect the performance of any particular localization algorithm. The localization performance can be explicitly characterized by certain measures, for example, by the Cramér-Rao lower bound (which is equal to the inverse Fisher information matrix) on the estimator variance. In addition, the Cramér-Rao lower bound is commonly used to generate a so-called uncertainty ellipse which characterizes the spatial variance distribution of an efficient estimate, i.e. an estimate which achieves the lower bound. The aim of this work is to identify those relative sensor-target geometries which result in a measure of the uncertainty ellipse being minimized. Such sensor-target geometries are deemed optimal with respect to the chosen measure, and the optimal geometries for range-only, time-of-arrival-based and bearing-only localization are identified and studied in this work. The optimal geometries for an arbitrary number of sensors are identified and it is shown that an optimal sensor-target configuration is not, in general, unique. The importance of understanding the influence of the sensor-target geometry on the potential localization performance is highlighted via formal analytical results and a number of illustrative examples.
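
    Illustrative aside (not from the paper): the geometry dependence discussed above can be checked numerically with the standard Fisher information matrices for range-only and bearing-only measurements under independent Gaussian noise; the two example geometries below are invented.

# Numerical illustration of how sensor-target geometry enters the Fisher
# information matrix (FIM) for range-only and bearing-only localization.
# Standard textbook FIM expressions are used; the geometries are invented.
import numpy as np

def fim(sensors, target, kind, sigma):
    F = np.zeros((2, 2))
    for s in sensors:
        d = target - s
        r = np.linalg.norm(d)
        u = d / r                                    # line-of-sight direction
        if kind == "range":
            F += np.outer(u, u) / sigma**2
        else:                                        # bearing-only
            u_perp = np.array([-u[1], u[0]])
            F += np.outer(u_perp, u_perp) / (sigma**2 * r**2)
    return F

target = np.array([0.0, 0.0])
# Two three-sensor geometries at the same ranges, different angular spread.
spread_out = [np.array([5.0, 0.0]),
              np.array([5.0 * np.cos(2 * np.pi / 3), 5.0 * np.sin(2 * np.pi / 3)]),
              np.array([5.0 * np.cos(4 * np.pi / 3), 5.0 * np.sin(4 * np.pi / 3)])]
clustered = [np.array([5.0 * np.cos(a), 5.0 * np.sin(a)]) for a in (0.0, 0.2, 0.4)]

for name, geom in [("spread", spread_out), ("clustered", clustered)]:
    for kind, sigma in [("range", 0.1), ("bearing", np.deg2rad(1.0))]:
        d = np.linalg.det(fim(geom, target, kind, sigma))
        print(f"{name:9s} {kind:7s} det(FIM) = {d:.3g}")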

  • 10.
    Bishop, Adrian N.
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    A Stochastically Stable Solution to the Problem of Robocentric Mapping. 2009. In: ICRA: 2009 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, 2009, p. 1540-1547. Conference paper (Refereed)
    Abstract [en]

    This paper provides a novel solution for robocentric mapping using an autonomous mobile robot. The robot dynamic model is the standard unicycle model and the robot is assumed to measure both the range and relative bearing to the landmarks. The algorithm introduced in this paper relies on a coordinate transformation and an extended Kalman filter-like algorithm. The coordinate transformation considered in this paper has not been previously considered for robocentric mapping applications. Moreover, we provide a rigorous stochastic stability analysis of the filter employed and we examine the conditions under which the mean-square estimation error converges to a steady-state value.

  • 11.
    Bishop, Adrian N.
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    An Optimality Analysis of Sensor-Target Geometries for Signal Strength Based Localization. 2009. In: 2009 INTERNATIONAL CONFERENCE ON INTELLIGENT SENSORS, SENSOR NETWORKS AND INFORMATION PROCESSING (ISSNIP 2009), NEW YORK: IEEE, 2009, p. 127-132. Conference paper (Refereed)
    Abstract [en]

    In this paper we characterize the bounds on localization accuracy in signal strength based localization. In particular, we provide a novel and rigorous analysis of the relative receiver-transmitter geometry and the effect of this geometry on the potential localization performance. We show that uniformly spacing sensors around the target is not optimal if the sensor-target ranges are not identical and is not necessary in any case. Indeed, we show that in general the optimal sensor-target geometry for signal strength based localization is not unique.

  • 12.
    Bishop, Adrian N.
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Stochastically convergent localization of objects by mobile sensors and actively controllable relative sensor-object pose. 2015. In: 2009 European Control Conference, ECC 2009, 2015, p. 2384-2389. Conference paper (Refereed)
    Abstract [en]

    The problem of object (network) localization using a mobile sensor is examined in this paper. Specifically, we consider a set of stationary objects located in the plane and a single mobile nonholonomic sensor tasked with estimating their relative position from range and bearing measurements. We derive a coordinate transform and a relative sensor-object motion model that leads to a novel problem formulation where the measurements are linear in the object positions. We then apply an extended Kalman filter-like algorithm to the estimation problem. Using stochastic calculus we provide an analysis of the convergence properties of the filter. We then illustrate that it is possible to steer the mobile sensor to achieve a relative sensor-object pose using a continuous control law. This last fact is significant since we circumvent Brockett's theorem and control the relative sensor-source pose using a simple controller.

  • 13.
    Bishop, Adrian N.
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Savkin, Andrey V.
    Pathirana, Pubudu N.
    Vision-Based Target Tracking and Surveillance With Robust Set-Valued State Estimation. 2010. In: IEEE Signal Processing Letters, ISSN 1070-9908, E-ISSN 1558-2361, Vol. 17, no 3, p. 289-292. Article in journal (Refereed)
    Abstract [en]

    Tracking a target from a video stream (or a sequence of image frames) involves nonlinear measurements in Cartesian coordinates. However, the target dynamics, modeled in Cartesian coordinates, result in a linear system. We present a robust linear filter based on an analytical nonlinear to linear measurement conversion algorithm. Using ideas from robust control theory, a rigorous theoretical analysis is given which guarantees that the state estimation error for the filter is bounded, i.e., a measure against filter divergence is obtained. In fact, an ellipsoidal set-valued estimate is obtained which is guaranteed to contain the true target location with an arbitrarily high probability. The algorithm is particularly suited to visual surveillance and tracking applications involving targets moving on a plane.

  • 14.
    Boberg, Anders
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bishop, Adrian N.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Robocentric Mapping and Localization in Modified Spherical Coordinates with Bearing Measurements. 2009. In: 2009 INTERNATIONAL CONFERENCE ON INTELLIGENT SENSORS, SENSOR NETWORKS AND INFORMATION PROCESSING (ISSNIP 2009), NEW YORK: IEEE, 2009, p. 139-144. Conference paper (Refereed)
    Abstract [en]

    In this paper, a new approach to robotic mapping is presented that uses modified spherical coordinates in a robot-centered reference frame and a bearing-only measurement model. The algorithm provided in this paper permits robust delay-free state initialization and is computationally more efficient than the current standard in bearing-only (delay-free initialized) simultaneous localization and mapping (SLAM). Importantly, we provide a detailed nonlinear observability analysis which shows the system is generally observable. We also analyze the error convergence of the filter using stochastic stability analysis. We provide an explicit bound on the asymptotic mean state estimation error. A comparison of the performance of this filter is also made against a standard world-centric SLAM algorithm in a simulated environment.
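
    Illustrative aside (not from the paper): delay-free initialization from a single bearing is commonly achieved with an inverse-depth style parameterization; the 2D sketch below uses such a stand-in (not the paper's modified spherical coordinates), and the priors are invented.

# Sketch of delay-free landmark initialization from a single bearing using an
# inverse-depth style parameterization (a 2D stand-in for the modified
# spherical coordinates used in the paper; priors below are illustrative).
import numpy as np

def init_landmark(robot_pose, bearing, rho0=0.1, sigma_b=np.deg2rad(1.0),
                  sigma_rho=0.1):
    """State: (anchor_x, anchor_y, theta, rho) with rho = 1 / depth.
    The landmark position is anchor + (1 / rho) * [cos(theta), sin(theta)].
    A single bearing fixes theta; depth stays uncertain through rho."""
    x, y, heading = robot_pose
    theta = heading + bearing
    state = np.array([x, y, theta, rho0])
    cov = np.diag([1e-6, 1e-6, sigma_b**2, sigma_rho**2])
    return state, cov

def landmark_position(state):
    ax, ay, theta, rho = state
    return np.array([ax + np.cos(theta) / rho, ay + np.sin(theta) / rho])

robot_pose = (0.0, 0.0, 0.0)              # x, y, heading
state, cov = init_landmark(robot_pose, bearing=np.deg2rad(30.0))
print("initial landmark guess:", np.round(landmark_position(state), 2))
print("initial covariance diagonal:", np.round(np.diag(cov), 4))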

  • 15.
    Pronobis, Andrzej
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bishop, Adrian N.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    A Framework for Robust Cognitive Spatial Mapping. 2009. In: 2009 International Conference on Advanced Robotics, ICAR 2009, IEEE, 2009, p. 686-693. Conference paper (Refereed)
    Abstract [en]

    Spatial knowledge constitutes a fundamental component of the knowledge base of a cognitive, mobile agent. This paper introduces a rigorously defined framework for building a cognitive spatial map that permits high-level reasoning about space along with robust navigation and localization. Our framework builds on the concepts of places and scenes expressed in terms of arbitrary, possibly complex features as well as local spatial relations. The resulting map is topological and discrete, robocentric and specific to the agent's perception. We analyze the mechanics of spatial map design in order to obtain rules for defining the map components, and we attempt to prove that if certain design rules are obeyed then certain map properties are guaranteed to be realized. The idea of this paper is to take a step back from existing algorithms and literature and see how a rigorous formal treatment can lead the way towards a powerful spatial representation for localization and navigation. We illustrate the power of our analysis and motivate our cognitive mapping characteristics with some illustrative examples.
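
    Illustrative aside (not from the paper): a minimal data structure in the spirit of the described map keeps places as nodes carrying perceptual features and local spatial relations as labelled edges; the place and relation names below are invented.

# Minimal data-structure sketch of a discrete topological map of places and
# local spatial relations, in the spirit of the framework described above.
# The place and relation names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Place:
    name: str
    features: set = field(default_factory=set)     # perceptual cues observed here

@dataclass
class CognitiveMap:
    places: dict = field(default_factory=dict)
    relations: list = field(default_factory=list)  # (from, relation, to) triples

    def add_place(self, name, features=()):
        self.places[name] = Place(name, set(features))

    def relate(self, a, relation, b):
        self.relations.append((a, relation, b))

    def neighbours(self, name):
        return [b for a, _, b in self.relations if a == name] + \
               [a for a, _, b in self.relations if b == name]

m = CognitiveMap()
m.add_place("kitchen", {"fridge", "sink"})
m.add_place("corridor", {"door"})
m.relate("kitchen", "adjacent-through-door", "corridor")
print(m.neighbours("kitchen"))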
