Search results 1 - 50 of 73
  • 1.
    Aarno, Daniel
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Kragic, Danica
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Christensen, Henrik
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Artificial potential biased probabilistic roadmap method (2004). In: 2004 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-5, PROCEEDINGS, p. 461-466. Conference paper (Refereed)
    Abstract [en]

    Probabilistic roadmap methods (PRMs) have been successfully used to solve difficult path planning problems, but their efficiency is limited when the free space contains narrow passages through which the robot must pass. This paper presents a new sampling scheme that aims to increase the probability of finding paths through narrow passages. A biased sampling scheme is used to increase the density of nodes in narrow regions of the free space, and a partial computation of the artificial potential field is used to bias the distribution of nodes.
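    The biased-sampling idea described in this abstract can be sketched as follows (a minimal illustration with a toy 2D potential field and hypothetical names, not the authors' implementation):

```python
import math
import random

random.seed(0)  # deterministic for the illustration

# Hypothetical workspace [0,10]^2 with one obstacle at (5,5); this toy
# potential stands in for the partially computed artificial potential field.
def potential(q):
    d = math.hypot(q[0] - 5.0, q[1] - 5.0)
    return 1.0 / (d + 0.1)

def biased_sample(n_nodes, beta=1.0):
    """Rejection sampling of roadmap nodes: a uniform candidate is kept
    with a probability that grows with the local potential, so node
    density increases near obstacles (and hence in narrow passages)."""
    nodes = []
    while len(nodes) < n_nodes:
        q = (random.uniform(0.0, 10.0), random.uniform(0.0, 10.0))
        if random.random() < 1.0 - math.exp(-beta * potential(q)):
            nodes.append(q)
    return nodes

nodes = biased_sample(500)
# Node density (count per unit area) close to the obstacle vs. far away.
near = sum(1 for q in nodes if math.hypot(q[0] - 5, q[1] - 5) < 2.0)
far = sum(1 for q in nodes if math.hypot(q[0] - 5, q[1] - 5) > 4.0)
near_density = near / (math.pi * 4.0)
far_density = far / (100.0 - math.pi * 16.0)
```

    With this acceptance rule the sampled node density ends up markedly higher near the obstacle than in open space, which is the qualitative effect the paper's sampling scheme aims for.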

  • 2.
    Althaus, Philipp
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Ishiguro, H.
    Kanda, T.
    Miyashita, T.
    Christensen, Henrik
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Navigation for human-robot interaction tasks (2004). In: 2004 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-5, PROCEEDINGS, p. 1894-1900. Conference paper (Refereed)
    Abstract [en]

    One major design goal in human-robot interaction is that the robots behave in an intelligent manner, preferably in a way similar to humans. This constraint must also be taken into consideration when the navigation system for the platform is developed. However, research in human-robot interaction is often restricted to other components of the system, such as gestures, manipulation, and speech. On the other hand, research on mobile robot navigation focuses primarily on the task of reaching a certain goal point in an environment. We believe that these two problems cannot be treated separately for a personal robot that coexists with humans in the same surroundings. Persons move constantly while they are interacting with each other; hence, a robot should do so as well, which poses constraints on the navigation system. This type of navigation is the focus of this paper. Methods have been developed for a robot to join a group of people engaged in a conversation. Preliminary results show that the platform's movement patterns are very similar to those of the persons. Moreover, this dynamic interaction was judged natural by the test subjects, which greatly increases the perceived intelligence of the robot.

  • 3.
    Bertolli, Federico
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    SLAM using visual scan-matching with distinguishable 3D points (2006). In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-12, New York: IEEE, p. 4042-4047. Conference paper (Refereed)
    Abstract [en]

    Scan-matching based on data from a laser scanner is frequently used for mapping and localization. This paper presents a scan-matching approach based instead on visual information from a stereo system. The Scale Invariant Feature Transform (SIFT) is used together with epipolar constraints to obtain high matching precision between the stereo images. Calculating the 3D position of the corresponding points in the world results in a visual scan where each point has a descriptor attached to it. These descriptors can be used when matching scans acquired from different positions. Just as in laser-based scan matching, a map can be defined as a set of reference scans and their corresponding acquisition points. In essence, this reduces each visual scan, which can consist of hundreds of points, to a single entity for which only the corresponding robot pose has to be estimated in the map. This reduces the overall complexity of the map. The SIFT descriptor attached to each of the points in the reference allows for robust matching and detection of loop-closing situations. The paper presents real-world experimental results from an indoor office environment.
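    Descriptor-based matching of the kind used for these visual scans can be sketched as follows (nearest-neighbour matching with a ratio test on tiny synthetic descriptors; all names are hypothetical, and the paper's epipolar-constraint filtering is omitted):

```python
import math

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with a ratio test: accept a
    match only if the best distance is clearly smaller than the second
    best, which rejects ambiguous correspondences."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    matches = []
    for i, d_a in enumerate(desc_a):
        scored = sorted((dist(d_a, d_b), j) for j, d_b in enumerate(desc_b))
        if len(scored) >= 2 and scored[0][0] < ratio * scored[1][0]:
            matches.append((i, scored[0][1]))
    return matches

# Tiny synthetic descriptors: a[0] matches b[1] unambiguously, while a[1]
# has two nearly identical candidates (b[2], b[3]) and is rejected.
a = [[0.0, 0.0, 1.0], [5.0, 5.0, 5.0]]
b = [[9.0, 9.0, 9.0], [0.1, 0.0, 1.0], [5.0, 5.1, 5.0], [5.1, 5.0, 5.0]]
matches = match_descriptors(a, b)
```

    The ratio test is what makes distinctive descriptors such as SIFT usable for loop-closure detection: ambiguous points simply produce no match.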

  • 4.
    Bratt, Mattias
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Design of a Control Strategy for Teleoperation of a Platform with Significant Dynamics (2006). In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-12, New York, NY: IEEE, p. 1700-1705. Conference paper (Refereed)
    Abstract [en]

    A teleoperation system for controlling a robot with fast dynamics over the Internet has been constructed. It employs a predictive control structure with an accurate dynamic model of the robot to overcome problems caused by varying delays. The operator interface uses a stereo virtual reality display of the robot cell and a haptic device for force feedback, including virtual obstacle-avoidance forces.

  • 5.
    Bratt, Mattias
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Minimum jerk based prediction of user actions for a ball catching task (2007). In: IEEE International Conference on Intelligent Robots and Systems: Vols 1-9, IEEE conference proceedings, p. 2716-2722. Conference paper (Refereed)
    Abstract [en]

    The present paper examines minimum jerk models for human kinematics as a tool to predict user input in teleoperation with significant dynamics. Predictions of user input can be a powerful tool to bridge time-delays and to trigger autonomous sub-sequences. In this paper an example implementation is presented, along with the results of a pilot experiment in which a virtual reality simulation of a teleoperated ball-catching scenario is used to test the predictive power of the model. The results show that delays up to 100 ms can potentially be bridged with this approach.
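    The minimum jerk model referred to in this abstract is the standard fifth-order polynomial for point-to-point human reaching; a minimal sketch (hypothetical names, not the authors' predictor):

```python
def minimum_jerk(x0, xf, T, t):
    """Position at time t on a minimum-jerk trajectory from x0 to xf of
    duration T: x(t) = x0 + (xf - x0) * (10*tau^3 - 15*tau^4 + 6*tau^5),
    tau = t/T. Velocity and acceleration vanish at both endpoints."""
    tau = t / T
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (xf - x0) * s

# The trajectory starts at x0, ends at xf, and is symmetric about T/2.
start = minimum_jerk(0.0, 1.0, 2.0, 0.0)
mid = minimum_jerk(0.0, 1.0, 2.0, 1.0)
end = minimum_jerk(0.0, 1.0, 2.0, 2.0)
```

    Fitting x0, xf, and T to the observed start of a hand motion lets such a model extrapolate the remainder of the reach, which is what allows delays on the order of 100 ms to be bridged.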

  • 6.
    Christensen, Henrik
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    EURON - The European Robotics Network (2005). In: IEEE Robotics & Automation Magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 12, no. 2, p. 10-13. Article in journal (Refereed)
    Abstract [en]

    The European Robotics Network (EURON) is a network of excellence established to ensure broad involvement of robotics across many different fields of applications and research. Currently, the network involves 145 groups covering almost all countries in Europe. Within the network, activities are organized around five major efforts referred to as key-area activities. These include research coordination, training and education, industrial links, dissemination, and international links.

  • 7.
    Christensen, Henrik
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Folkesson, John
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Hedström, Andreas
    UGV technology for urban navigation (2004). In: UNMANNED GROUND VEHICLE TECHNOLOGY VI / [ed] Gerhart, GR; Shoemaker, CM; Gage, DW, Bellingham: SPIE-INT SOC OPTICAL ENGINEERING, Vol. 5422, p. 191-197. Conference paper (Refereed)
    Abstract [en]

    Deployment of humans in an urban setting for search and rescue type missions poses a major risk to the personnel. In rescue missions the risk can stem from debris, gas, etc., and in a strategic setting the risk can stem from snipers, mines, gas, etc. There is consequently a natural interest in studies of how UGV technology can be deployed for tasks such as reconnaissance and retrieval of objects (bombs, injured people, etc.). Today most vehicles used by the military and bomb squads are tele-operated and without any autonomy. This implies that operation of the vehicles is a stressful and demanding task. Part of this stress can be removed through the introduction of autonomous functionality. Autonomy implicitly requires use of map information to allow the system to localize and traverse a particular area; in addition, autonomous mapping of an area is a valuable functionality as part of reconnaissance missions to provide an initial inventory of a new area. A host of different sensory modalities can be used for mapping. In general, however, no single modality is sufficient for robust and efficient mapping. In the present study GPS, inertial cues, laser ranging, and odometry are used for simultaneous mapping and localisation in urban environments. The mapping is carried out autonomously using a coverage strategy to ensure full mapping of a particular area. In relation to mapping, another important issue is the design of an efficient user interface that allows a regular rescue worker, or a soldier, to operate the vehicle without detailed knowledge about robotics. A number of different designs for user interfaces will be presented, and results from studies with a range of end-users (soldiers) will also be reported. The complete system has been tested in an urban warfare facility outside of Stockholm. Detailed results will be reported from two different test facilities.

  • 8.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis, NA (closed 2012-06-30).
    Session summary (2005). In: Robotics Research: The Eleventh International Symposium, Springer Berlin/Heidelberg, p. 57-59. Chapter in book (Refereed)
    Abstract [en]

    While the current part carries the title “path planning”, the contributions in this section cover two topics: mapping and planning. In some sense one might argue that intelligent (autonomous) mapping actually requires path planning. While this is correct, the contributions actually have a broader scope, as outlined below. A common theme to all of the presentations in this section is the adoption of hybrid representations to facilitate efficient processing in complex environments. Purely geometric models allow for accurate estimation of position and motion generation, but they scale poorly with environmental complexity, while qualitative geometric models have limited accuracy but are well suited for global estimation of trajectories/locations. Through fusion of qualitative and quantitative models it becomes possible to develop systems that have tractable complexity while maintaining geometric accuracy.

  • 9.
    Christensen, Henrik I.
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sandberg, F
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Computational Vision for Interaction with People and Robots. Manuscript (preprint) (Other academic)
    Abstract [en]

    Facilities for sensing and modification of the environment are crucial to the delivery of robotic systems that can interact with humans and objects in the environment. For both recognition of objects and interpretation of human activities (for instruction and avoidance), by far the most versatile sensory modality is computational vision. The use of vision for interpretation of human gestures and for manipulation of objects is outlined in this paper. It is described how a combination of multiple visual cues can be used to achieve robustness, and the trade-off between models and cue integration is illustrated. The described vision competences are demonstrated in the context of an intelligent service robot that operates in a regular domestic setting.

  • 10.
    Christensen, Henrik I.
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Nagel, Hans Hellmut
    Introductory remarks (2006). In: COGNITIVE VISION SYSTEMS: SAMPLING THE SPECTRUM OF APPROACHES, p. 1+. Conference paper (Refereed)
  • 11.
    Christensen, Henrik I.
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Pacchierotti, Elena
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Embodied social interaction for robots (2005). In: AISB'05 Convention: Social Intelligence and Interaction in Animals, Robots and Agents: Proceedings of the Symposium on Robot Companions: Hard Problems and Open Challenges in Robot-Human Interaction, p. 40-45. Conference paper (Refereed)
    Abstract [en]

    A key aspect of service robotics for everyday use is the motion of systems in close proximity to humans. It is essential here that the robot exhibit behaviour that signals safe motion and awareness of the other actors in its environment. To facilitate this, there is a need to endow the system with facilities for detection and tracking of objects in the vicinity of the platform, and to design a control law that enables motion generation that is considered socially acceptable. We present a system for indoor navigation in which the rules of proxemics are used to define interaction strategies for the platform.

  • 12.
    Christensen, Henrik Iskov
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    The curse of JDL (2004). In: Proc. Seventh Int. Conf. Inf. Fusion, p. 528-529. Conference paper (Refereed)
    Abstract [en]

    The JDL model provides a methodology for the organisation of research in fusion. It is, however, important to recognize that it does not provide an architectural framework for the design of systems, and as such there may be complex interactions between levels in the JDL model that are not directly captured by the model. A reductionist approach to research, in which each of the layers is considered independently, risks not showing the real complexity of information fusion systems.

  • 13.
    Edén, Johan
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Christensen, Henrik I.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Local straightness: A contrast independent statistical edge measure for color and gray level images (2004). In: PROCEEDINGS OF THE 17TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION, VOL 2 / [ed] Kittler, J; Petrou, M; Nixon, M, p. 451-454. Conference paper (Refereed)
    Abstract [en]

    Most existing methods for edge detection rely on contrast-dependent thresholds. We show that a local measure, defined by the ratio of the smallest to the largest eigenvalue of the second moment matrix of filter kernels, can be used to separate smooth, low-curvature curves and straight lines from noise, independent of contrast, in both color and gray-level images. This is done without applying a threshold to the gradient magnitude. The edge images are defined as zero crossings in the gradient direction. The covariance matrix can easily be computed for both gray-level images and color images. Further, we show the potential of such a measure by integrating it with the Hough transform to extract long straight lines in noisy color images. The method is shown to successfully extract consistent line features from color images of a scene captured under drastically different lighting conditions.
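    The eigenvalue-ratio measure described in this abstract can be sketched as follows (a toy example on synthetic gradient samples; hypothetical names, not the authors' implementation):

```python
import math

def straightness(gradients):
    """Ratio of smallest to largest eigenvalue of the second moment
    matrix of local gradients: near 0 for a straight edge (one dominant
    direction), near 1 for isotropic noise. The measure is contrast
    independent, since scaling all gradients scales both eigenvalues
    by the same factor."""
    sxx = sum(gx * gx for gx, gy in gradients)
    sxy = sum(gx * gy for gx, gy in gradients)
    syy = sum(gy * gy for gx, gy in gradients)
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))  # eigenvalues of a 2x2
    lmax, lmin = tr / 2 + disc, tr / 2 - disc      # symmetric matrix
    return lmin / lmax if lmax > 0 else 0.0

# A vertical edge: all gradients along x. Noise: gradients in all directions.
edge = [(1.0, 0.0)] * 8
noise = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)] * 2
```

    Doubling or tripling the gradient magnitudes leaves the ratio unchanged, which is exactly the contrast independence the paper exploits to avoid gradient-magnitude thresholds.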

  • 14.
    Elfwing, Stefan
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Uchibe, E.
    Neural Computation Unit, Okinawa Institute of Science and Technology, Japan.
    Doya, K.
    Neural Computation Unit, Okinawa Institute of Science and Technology, Japan.
    Christensen, Henrik
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Biologically Inspired Embodied Evolution of Survival (2005). In: 2005 IEEE Congress on Evolutionary Computation, IEEE CEC 2005, Proceedings, p. 2210-2216. Conference paper (Refereed)
    Abstract [en]

    Embodied evolution is a methodology for evolutionary robotics that mimics the distributed, asynchronous and autonomous properties of biological evolution. The evaluation, selection and reproduction are carried out by and between the robots, without any need for human intervention. In this paper we propose a biologically inspired embodied evolution framework, which fully integrates self-preservation (recharging from external batteries in the environment) and self-reproduction (pair-wise exchange of genetic material) into a survival system. The individuals are explicitly evaluated for their performance on the battery-capturing task, but also implicitly for the mating task, since an individual that mates frequently has a larger probability of spreading its genes in the population. We have evaluated our method in simulation experiments, and the results show that the solutions obtained by our embodied evolution method were able to optimize the two survival tasks, battery capturing and mating, simultaneously. We have also performed preliminary experiments in hardware, with promising results.

  • 15.
    Elfwing, Stefan
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Uchibe, E.
    Neural Computation Unit, Okinawa Institute of Science and Technology, Japan.
    Doya, K.
    Neural Computation Unit, Okinawa Institute of Science and Technology, Japan.
    Christensen, Henrik
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Co-Evolution of Shaping Rewards and Meta-Parameters in Reinforcement Learning (2008). In: Adaptive Behavior, ISSN 1059-7123, E-ISSN 1741-2633, Vol. 16, no. 6, p. 400-412. Article in journal (Refereed)
    Abstract [en]

    In this article, we explore an evolutionary approach to the optimization of potential-based shaping rewards and meta-parameters in reinforcement learning. Shaping rewards are a frequently used approach to increase the learning performance of reinforcement learning, with regard to both initial performance and convergence speed. Shaping rewards provide additional knowledge to the agent in the form of richer reward signals, which guide learning to high-rewarding states. Reinforcement learning depends critically on a few meta-parameters that modulate the learning updates or the exploration of the environment, such as the learning rate alpha, the discount factor of future rewards gamma, and the temperature tau that controls the trade-off between exploration and exploitation in softmax action selection. We validate the proposed approach in simulation using the mountain-car task. We also transfer shaping rewards and meta-parameters, obtained evolutionarily in simulation, to hardware, using a robotic foraging task.
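    The softmax action selection mentioned in this abstract, with the temperature tau controlling the exploration-exploitation trade-off, can be sketched as follows (hypothetical names; a standard formulation, not this article's specific implementation):

```python
import math

def softmax_policy(q_values, tau):
    """Boltzmann/softmax action probabilities with temperature tau:
    high tau gives near-uniform exploration, low tau concentrates
    probability on the highest-valued (greedy) action."""
    m = max(q_values)  # subtract the max for numerical stability
    exps = [math.exp((q - m) / tau) for q in q_values]
    z = sum(exps)
    return [e / z for e in exps]

q = [1.0, 2.0, 3.0]
explore = softmax_policy(q, tau=100.0)  # nearly uniform over actions
exploit = softmax_policy(q, tau=0.1)    # concentrated on the best action
```

    Because learning performance is so sensitive to tau (and to alpha and gamma), treating these meta-parameters as genes to be evolved, as the article does, is a natural alternative to hand-tuning.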

  • 16.
    Elfwing, Stefan
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Uchibe, E.
    Neural Computation Unit, Initial Research Project, Okinawa Institute of Science and Technology, Japan.
    Doya, K.
    Neural Computation Unit, Initial Research Project, Okinawa Institute of Science and Technology, Japan.
    Christensen, Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Evolutionary Development of Hierarchical Learning Structures (2007). In: IEEE Transactions on Evolutionary Computation, ISSN 1089-778X, E-ISSN 1941-0026, Vol. 11, no. 2, p. 249-264. Article in journal (Refereed)
    Abstract [en]

    Hierarchical reinforcement learning (RL) algorithms can learn a policy faster than standard RL algorithms. However, the applicability of hierarchical RL algorithms is limited by the fact that the task decomposition has to be performed in advance by the human designer. We propose a Lamarckian evolutionary approach for automatic development of the learning structure in hierarchical RL. The proposed method combines the MAXQ hierarchical RL method and genetic programming (GP). In the MAXQ framework, a subtask can optimize the policy independently of its parent task's policy, which makes it possible to reuse learned policies of the subtasks. In the proposed method, the MAXQ method learns the policy based on the task hierarchies obtained by GP, while the GP explores the appropriate hierarchies using the result of the MAXQ method. To show the validity of the proposed method, we have performed simulation experiments for a foraging task in three different environmental settings. The results show a strong interconnection between the obtained learning structures and the given task environments. The main conclusion of the experiments is that the GP can find a minimal strategy, i.e., a hierarchy that minimizes the number of primitive subtasks that can be executed for each type of situation. The experimental results for the most challenging environment also show that the policies of the subtasks can continue to improve, even after the structure of the hierarchy has been evolutionarily stabilized, as an effect of Lamarckian mechanisms.

  • 17.
    Elfwing, Stefan
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Uchibe, E.
    ATR Computational Neuroscience Labs, Japan.
    Doya, K.
    ATR Computational Neuroscience Labs, Japan.
    Christensen, Henrik
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Multi-Agent Reinforcement Learning: Using Macro Actions to Learn a Mating Task (2004). In: 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sendai, p. 3164-3169. Conference paper (Refereed)
    Abstract [en]

    Standard reinforcement learning methods are inefficient and often inadequate for learning cooperative multi-agent tasks. For these kinds of tasks the behavior of one agent depends strongly on dynamic interaction with other agents, not only on the interaction with a static environment as in standard reinforcement learning. The success of the learning is therefore coupled to the agents' ability to predict the other agents' behaviors. In this study we try to overcome this problem by adding a few simple macro actions: actions that are extended in time for more than one time step. The macro actions improve the learning by making the search of the state space more effective and thereby making the behavior more predictable for the other agent. In this study we have considered a cooperative mating task, which is the first step towards our aim to perform embodied evolution, where the evolutionary selection process is an integrated part of the task. We show, in simulation and hardware, that in the case of learning without macro actions, the agents fail to learn a meaningful behavior. In contrast, with macro actions the agents learn a good mating behavior in reasonable time, in both simulation and hardware.

  • 18.
    Elfwing, Stefan
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Uchibe, Eiji
    Doya, Kenji
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Darwinian Embodied Evolution of the Learning Ability for Survival (2011). In: Adaptive Behavior, ISSN 1059-7123, E-ISSN 1741-2633, Vol. 19, no. 2, p. 101-102. Article in journal (Refereed)
    Abstract [en]

    In this article we propose a framework for performing embodied evolution with a limited number of robots, by utilizing time-sharing in subpopulations of virtual agents hosted in each robot. Within this framework, we explore the combination of within-generation learning of basic survival behaviors by reinforcement learning, and evolutionary adaptations over the generations of the basic behavior selection policy, the reward functions, and metaparameters for reinforcement learning. We apply a biologically inspired selection scheme, in which there is no explicit communication of the individuals' fitness information. The individuals can only reproduce offspring by mating (a pair-wise exchange of genotypes), and the probability that an individual reproduces offspring in its own subpopulation depends on the individual's "health", that is, its energy level, at the mating occasion. We validate the proposed method by comparing it with evolution using standard centralized selection, in simulation, and by transferring the obtained solutions to hardware using two real robots.

  • 19.
    Folkesson, John
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Graphical SLAM: a self-correcting map (2004). In: 2004 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, PROCEEDINGS, p. 383-390. Conference paper (Refereed)
    Abstract [en]

    We describe an approach to simultaneous localization and mapping, SLAM. This approach has the highly desirable property of robustness to data association errors. Another important advantage of our algorithm is that non-linearities are computed exactly, so that global constraints can be imposed even if they result in large shifts to the map. We represent the map as a graph and use the graph to find an efficient map update algorithm. We also show how topological consistency can be imposed on the map, such as closing a loop. The algorithm has been implemented on an outdoor robot and we have experimental validation of our ideas. We also explain how the graph can be simplified, leading to linear approximations of sections of the map. This reduction gives us a natural way to connect local map patches into a much larger global map.

  • 20.
    Folkesson, John
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Outdoor exploration and SLAM using a compressed filter (2003). In: Proceedings - IEEE International Conference on Robotics and Automation, p. 419-427. Conference paper (Refereed)
    Abstract [en]

    In this paper we describe the use of automatic exploration for autonomous mapping of outdoor scenes. We describe a real-time SLAM implementation along with an autonomous exploration algorithm. We have implemented SLAM with a compressed extended Kalman filter (CEKF) on an outdoor robot. Our implementation uses walls of buildings as features. The state predictions are made by using a combination of odometry and inertial data. The system was tested on a 200 x 200 m site with 18 buildings on variable terrain. The paper helps explain some of the implementation details of the compressed filter, such as how to organize the map, as well as more general issues such as how to include the effects of pitch and roll and efficient feature detection.

  • 21.
    Folkesson, John
    et al.
    Massachusetts Institute of Technology, Cambridge, MA.
    Christensen, Henrik
    Georgia Institute of Technology, Atlanta, GA.
    SIFT Based Graphical SLAM on a Packbot (2008). In: Springer Tracts in Advanced Robotics, ISSN 1610-7438, E-ISSN 1610-742X, Vol. 42, p. 317-328. Article in journal (Refereed)
    Abstract [en]

    We present an implementation of Simultaneous Localization and Mapping (SLAM) that uses infrared (IR) camera images collected at 10 Hz from a Packbot robot. The Packbot has a number of challenging characteristics with regard to vision-based SLAM. The robot travels on tracks, which causes the odometry to be poor, especially while turning. The IMU is of relatively low quality as well, making the drift in the motion prediction greater than on conventional robots. In addition, the very low placement of the camera and its fixed orientation looking forward are not ideal for estimating motion from the images. Several novel ideas are tested here. Harris corners are extracted from every 5th frame and used as image features for our SLAM. Scale Invariant Feature Transform (SIFT) descriptors are formed from each of these and are used to match image features over these 5-frame intervals. Lucas-Kanade tracking is done to find corresponding pixels in the frames between the SIFT frames, which allows a substantial computational saving over doing SIFT matching every frame. The epipolar constraints between all these matches that are implied by the dead-reckoning are used to further test the matches and eliminate poor features. Finally, the features are initialized on the map at once using an inverse depth parameterization, which eliminates the delay in initialization of the 3D point features.

  • 22.
    Folkesson, John
    et al.
    Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Graphical SLAM for Outdoor Applications2007In: Journal of Field Robotics, ISSN 1556-4959, Vol. 24, no 1-2, p. 51-70Article in journal (Refereed)
    Abstract [en]

    Application of SLAM outdoors is challenged by complexity, handling of non-linearities and flexible integration of a diverse set of features. A graphical approach to SLAM is introduced that enables flexible data association, allows for handling of non-linearities, and enables easy introduction of global constraints. Computational issues can be addressed as a graph reduction problem. A complete framework for graphical SLAM is presented. The framework is demonstrated for a number of outdoor experiments using an ATRV robot equipped with a SICK laser scanner and a CrossBow Inertial Unit. The experiments include handling of large outdoor environments with loop closing. The presented system operates at 5 Hz on an 800 MHz computer.

  • 23.
    Folkesson, John
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Christensen, Henrik I.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Robust SLAM2004In: IAV-2004, 2004Conference paper (Refereed)
  • 24.
    Folkesson, John
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Christensen, Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Graphical SLAM using vision and the measurement subspace2005In: 2005 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, IEEE conference proceedings, 2005, p. 325-330Conference paper (Refereed)
    Abstract [en]

    In this paper we combine a graphical approach for simultaneous localization and mapping, SLAM, with a feature representation that addresses symmetries and constraints in the feature coordinates, the measurement subspace, M-space. The graphical method has the advantages of delayed linearizations and soft commitment to feature measurement matching. It also allows large maps to be built up as a network of small local patches, star nodes. This local map net is then easier to work with. The formation of the star nodes is explicitly stable and invariant with all the symmetries of the original measurements. All linearization errors are kept small by using a local frame. The construction of this invariant star is made clearer by the M-space feature representation. The M-space allows the symmetries and constraints of the measurements to be explicitly represented. We present results using both vision and laser sensors.

  • 25.
    Folkesson, John
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Vision SLAM in the Measurement Subspace2005In: 2005 IEEE International Conference on Robotics and Automation (ICRA), Vols 1-4  Book Series, 2005, p. 30-35Conference paper (Refereed)
    Abstract [en]

    In this paper we describe an approach to feature representation for simultaneous localization and mapping, SLAM. It is a general representation for features that addresses symmetries and constraints in the feature coordinates. Furthermore, the representation allows for the features to be added to the map with partial initialization. This is an important property when using oriented vision features where angle information can be used before their full pose is known. The number of dimensions for a feature can grow with time as more information is acquired. At the same time as the special properties of each type of feature are accounted for, the commonalities of all map features are also exploited to allow SLAM algorithms to be interchanged, as well as the choice of sensors and features. In other words, the SLAM implementation need not be changed at all when changing sensors and features, and vice versa. Experimental results both with vision and range data and combinations thereof are presented.

  • 26.
    Folkesson, John
    et al.
    Massachusetts Institute of Technology, Cambridge, MA.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Christensen, Henrik I.
    Georgia Institute of Technology, Atlanta, GA.
    The m-space feature representation for slam2007In: IEEE Transactions on robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 23, no 5, p. 1024-1035Article in journal (Refereed)
    Abstract [en]

    In this paper, a new feature representation for simultaneous localization and mapping (SLAM) is discussed. The representation addresses feature symmetries and constraints explicitly to make the basic model numerically robust. In previous SLAM work, complete initialization of features is typically performed prior to introduction of a new feature into the map. This results in delayed use of new data. To allow early use of sensory data, the new feature representation addresses the use of features that initially have been only partially observed. This is achieved by explicitly modelling the subspace of a feature that has been observed. In addition to accounting for the special properties of each feature type, the commonalities can be exploited in the new representation to create a feature framework that allows for interchanging of SLAM algorithms, sensors, and features. Experimental results are presented using a low-cost Web-cam, a laser range scanner, and combinations thereof.
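A toy illustration of the observed-subspace idea: for a 2D point feature seen only as a bearing from the origin, a measurement constrains the point only perpendicular to the viewing ray, so a correction should be projected onto that subspace before being applied. The setup and names below are my own illustration, not the paper's M-space formulation:

```python
import math

def observed_subspace_bearing(px, py):
    """For a 2D point feature seen only as a bearing from the origin,
    return the unit direction in feature space that the measurement
    actually constrains (perpendicular to the viewing ray). Motion
    along the ray itself stays unobserved until depth information
    arrives. A toy sketch of the observed-subspace idea."""
    r = math.hypot(px, py)
    return (-py / r, px / r)   # unit vector perpendicular to the ray

def project_correction(dx, dy, px, py):
    """Keep only the component of a proposed correction (dx, dy) that lies
    in the observed subspace; the unobserved (depth) component is dropped."""
    bx, by = observed_subspace_bearing(px, py)
    s = dx * bx + dy * by
    return (s * bx, s * by)

corr = project_correction(1.0, 1.0, px=2.0, py=0.0)
```

As more of the feature becomes observed (e.g. depth from a second viewpoint), the observed subspace grows, which mirrors how the number of dimensions for a feature can grow with time.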

  • 27.
    Frintrop, Simone
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Attentional landmark selection for visual SLAM2006In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-12, NEW YORK: IEEE , 2006, p. 2582-2587Conference paper (Refereed)
    Abstract [en]

    In this paper, we introduce a new method to automatically detect useful landmarks for visual SLAM. A biologically motivated attention system detects regions of interest which "pop out" automatically due to strong contrasts and the uniqueness of features. This property makes the regions easily redetectable and thus useful candidates for visual landmarks. Matching based on scene prediction and feature similarity allows not only short-term tracking of the regions, but also redetection in loop-closing situations. The paper demonstrates how regions are determined and how they are matched reliably. Various experimental results on real-world data show that the landmarks are useful both for tracking in consecutive frames and for closing loops.
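A much-simplified stand-in for the contrast-based "pop-out" computation described in the abstract is a center-surround contrast map; the version below is a toy sketch, not the attention system used in the paper:

```python
def center_surround_saliency(img):
    """Toy center-surround contrast map: each interior pixel's saliency is
    the absolute difference between its value and the mean of its eight
    neighbours. High values mark pixels that stand out from their
    surroundings, a crude analogue of attentional pop-out."""
    h, w = len(img), len(img[0])
    sal = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            surround = [img[i + di][j + dj]
                        for di in (-1, 0, 1) for dj in (-1, 0, 1)
                        if (di, dj) != (0, 0)]
            sal[i][j] = abs(img[i][j] - sum(surround) / 8.0)
    return sal

img = [[0, 0, 0, 0],
       [0, 8, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
sal = center_surround_saliency(img)
```

The isolated bright pixel receives the highest saliency, illustrating why such regions are easy to redetect and thus attractive as landmark candidates.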

  • 28.
    Frintrop, Simone
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Pay attention when selecting features2006In: 18th International Conference on Pattern Recognition, Vol 2, Proceedings / [ed] Tang, YY; Wang, SP; Lorette, G; Yeung, DS; Yan, H, 2006, p. 163-166Conference paper (Refereed)
    Abstract [en]

    In this paper we propose a new, hierarchical approach to landmark selection for simultaneous robot localization and mapping based on visual sensors: a biologically motivated attention system finds salient regions of interest (ROIs) in images, and within these regions, Harris corners are detected. This combines the advantages of the ROIs (reducing complexity, enabling good redetectability of regions) with the advantages of the Harris corners (high stability). Reducing complexity is important to meet real-time requirements, and stability of features is essential to compute the depth of landmarks from structure from motion with a small baseline. We show that the number of landmarks is highly reduced compared to all Harris corners while maintaining the stability of features for the mapping task.

  • 29.
    Hedström, Andreas
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Lundberg, Carl
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    A wearable GUI for field robots2006In: Field and Service Robotics / [ed] Corke, P; Sukkarieh, S, BERLIN: SPRINGER-VERLAG BERLIN , 2006, Vol. 25, p. 367-376Conference paper (Refereed)
    Abstract [en]

    In most search and rescue or reconnaissance missions involving field robots, the requirements that the operator be mobile and alert to sudden changes in the near environment are just as important as the ability to control the robot proficiently. This implies that the GUI platform should be light-weight and portable, and that the GUI itself is carefully designed for the task at hand. In this paper different platform solutions and the design of a user-friendly GUI for a Packbot will be discussed. Our current wearable system will be presented along with some results from initial field tests in urban search and rescue facilities.

  • 30.
    Hüttenrauch, Helge
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Severinson Eklundh, Kerstin
    KTH, School of Computer Science and Communication (CSC).
    Green, Anders
    Topp, Elin A.
    KTH, School of Computer Science and Communication (CSC).
    Christensen, Henrik
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    What's in the gap?: Interaction Transitions that make the HRI work2006In: Proceedings of the 15th IEEE international symposium on robot and human interactive communication, 2006Conference paper (Other academic)
    Abstract [en]

    This paper presents an in-depth analysis from a Human Robot Interaction (HRI) study on spatial positioning and interaction episode transitions. Subjects showed a living room to a robot to teach it new places and objects. This joint task was analyzed with respect to organizing strategies for interaction episodes. In the transitions between interaction episodes, which proved important, small adaptive movements in posture were observed. This finding needs to be incorporated into HRI modules that plan and execute robots' spatial behavior in interaction, e.g., through dynamic adaptation of spatial formations and distances depending on the interaction episode.

  • 31.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Christensen, Henrik Iskov
    GeorgiaTech.
    Mobile robot2005Patent (Other (popular science, discussion, etc.))
    Abstract [en]

    A mobile robot (1) arranged to operate in an environment is described as well as a method for building a map (20). The mobile robot (1) is in an installation mode arranged to store representations of detected objects (19) in a storage means (7) based on detected movement in order to create a map (20). The mobile robot (1) is in a maintenance mode arranged to move in the environment using the map (20) created in the installation mode. The mobile robot (1) comprises editing means for editing, in the installation mode, the map (20) in the storage means (7) based on the map (20) output from the output means (13).

  • 32.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Exploiting distinguishable image features in robotic mapping and localization2006In: European Robotics Symposium 2006 / [ed] Christensen, HI, 2006, Vol. 22, p. 143-157Conference paper (Refereed)
    Abstract [en]

    Simultaneous localization and mapping (SLAM) is an important research area in robotics. Lately, systems that use a single bearing-only sensor have received significant attention, and the use of visual sensors has been strongly advocated. In this paper, we present a framework for 3D bearing-only SLAM using a single camera. We concentrate on image feature selection in order to achieve precise localization and thus good reconstruction in 3D. In addition, we demonstrate how these features can be managed to provide real-time performance and fast matching to detect loop-closing situations. The proposed vision system has been combined with an extended Kalman filter (EKF) based SLAM method. A number of experiments have been performed in indoor environments which demonstrate the validity and effectiveness of the approach. We also show how the SLAM-generated map can be used for robot localization. The use of vision features which are distinguishable allows a straightforward solution to the "kidnapped-robot" scenario.

  • 33.
    Kragic, Danica
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Björkman, Mårten
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Christensen, Henrik I.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Eklundh, Jan-Olof
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Issues and Strategies for Robotic Object Manipulation in Domestic Settings2004Conference paper (Other academic)
    Abstract [en]

    Many robotic tasks such as autonomous navigation, human-machine collaboration, object manipulation and grasping make use of visual information. Some of the major research and system design issues in terms of visual systems are robustness and flexibility. In this paper, we present a number of visual strategies for robotic object manipulation tasks in natural, domestic environments. Given a complex fetch-and-carry type of task, the issues related to the whole detect-approach-grasp loop are considered. Our vision system integrates a number of algorithms using monocular and binocular cues to achieve robustness in realistic settings. The cues are considered and used in connection to both foveal and peripheral vision to provide depth information, segment the object(s) of interest in the scene, and support object recognition, tracking and pose estimation. One important property of the system is that the step from object recognition to pose estimation is completely automatic, combining both appearance and geometric models. Rather than concentrating on the integration issues, our primary goal is to investigate the importance and effect of camera configuration, their number and type, on the choice and design of the underlying visual algorithms. Experimental evaluation is performed in a realistic indoor environment with occlusions, clutter, and changing lighting and background conditions.

  • 34.
    Kragic, Danica
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Eklundh, Jan-Olof
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Vision for robotic object manipulation in domestic settings2005In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 52, no 1, p. 85-100Article in journal (Refereed)
    Abstract [en]

    In this paper, we present a vision system for robotic object manipulation tasks in natural, domestic environments. Given complex fetch-and-carry robot tasks, the issues related to the whole detect-approach-grasp loop are considered. Our vision system integrates a number of algorithms using monocular and binocular cues to achieve robustness in realistic settings. The cues are considered and used in connection to both foveal and peripheral vision to provide depth information, segmentation of the object(s) of interest, object recognition, tracking and pose estimation. One important property of the system is that the step from object recognition to pose estimation is completely automatic combining both appearance and geometric models. Experimental evaluation is performed in a realistic indoor environment with occlusions, clutter, changing lighting and background conditions.

  • 35.
    Kragic, Danica
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Robust Visual Servoing2014In: Household Service Robotics, Elsevier, 2014, p. 397-427Chapter in book (Other academic)
    Abstract [en]

    For service robots operating in domestic environments, it is not enough to consider only control-level robustness; it is equally important to consider how image information that serves as input to the control process can be used so as to achieve robust and efficient control. In this chapter we present an effort toward the development of robust visual techniques used to guide robots in various tasks. Given a task at hand, we argue that different levels of complexity should be considered; this also defines the choice of the visual technique used to provide the necessary feedback information. We concentrate on visual feedback estimation where we investigate both two- and three-dimensional techniques. In the former case, we are interested in providing coarse information about the object position/velocity in the image plane. In particular, a set of simple visual features (cues) is employed in an integrated framework where voting is used for fusing the responses from individual cues. The experimental evaluation shows the system performance for three different cases of camera-robot configurations most common for robotic systems. For cases where the robot is supposed to grasp the object, a two-dimensional position estimate is often not enough. Complete pose (position and orientation) of the object may be required. Therefore, we present a model-based system where a wire-frame model of the object is used to estimate its pose. Since a number of similar systems have been proposed in the literature, we concentrate on the particular part of the system usually neglected: automatic pose initialization. Finally, we show how a number of existing approaches can successfully be integrated in a system that is able to recognize and grasp fairly textured, everyday objects. One of the examples presented in the experimental section shows a mobile robot performing tasks in a real-world environment: a living room.

  • 36.
    Kragic, Danica
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Advances in robot vision2005In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 52, no 1, p. 1-3Article in journal (Other academic)
  • 37. Kruijff, G.-J. M.
    et al.
    Zender, H.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Situated dialogue and spatial organization: What, where... and why?2007In: International Journal of Advanced Robotic Systems, ISSN 1729-8806, Vol. 4, no 1, p. 125-138Article in journal (Refereed)
    Abstract [en]

    The paper presents an HRI architecture for human-augmented mapping, which has been implemented and tested on an autonomous mobile robotic platform. Through interaction with a human, the robot can augment its autonomously acquired metric map with qualitative information about locations and objects in the environment. The system implements various interaction strategies observed in independently performed Wizard-of-Oz studies. The paper discusses an ontology-based approach to multi-layered conceptual spatial mapping that provides a common ground for human-robot dialogue. This is achieved by combining acquired knowledge with innate conceptual commonsense knowledge in order to infer new knowledge. The architecture bridges the gap between the rich semantic representations of the meaning expressed by verbal utterances on the one hand and the robot's internal sensor-based world representation on the other. It is thus possible to establish references to spatial areas in a situated dialogue between a human and a robot about their environment. The resulting conceptual descriptions represent qualitative knowledge about locations in the environment that can serve as a basis for achieving a notion of situational awareness.

  • 38. Kruijff, G.-J. M.
    et al.
    Zender, Hendrik
    Language Technology Lab., DFKI GmbH.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Clarification dialogues in human-augmented mapping2006In: HRI 2006: Proceedings of the 2006 ACM Conference on Human-Robot Interaction, 2006, p. 282-289Conference paper (Refereed)
    Abstract [en]

    An approach to dialogue based interaction for resolution of ambiguities encountered as part of Human-Augmented Mapping (HAM) is presented. The paper focuses on issues related to spatial organisation and localisation. The dialogue pattern naturally arises as robots are introduced to novel environments. The paper discusses an approach based on the notion of Questions under Discussion (QUD). The presented approach has been implemented on a mobile platform that has dialogue capabilities and methods for metric SLAM. Experimental results from a pilot study clearly demonstrate that the system can resolve problematic situations.

  • 39. Kruijff, G.-J.
    et al.
    Zender, Hendrik
    Language Technology Lab., DFKI GmbH.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Situated dialogue and understanding spatial organization: Knowing what is where and what you can do there2006In: Proceedings - IEEE International Workshop on Robot and Human Interactive Communication, 2006, p. 328-333Conference paper (Refereed)
    Abstract [en]

    The paper presents an HRI architecture for human-augmented mapping. Through interaction with a human, the robot can augment its autonomously learnt metric map with qualitative information about locations and objects in the environment. The system implements various interaction strategies observed in independent Wizard-of-Oz studies. The paper discusses an ontology-based approach to representing and inferring 2.5D spatial organization, and presents how knowledge of spatial organization can be acquired autonomously or through spoken dialogue interaction.

  • 40. Kyrki, V.
    et al.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik Iskov
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    New shortest-path approaches to visual servoing2004Conference paper (Refereed)
    Abstract [en]

    In recent years, a number of visual servo control algorithms have been proposed. Most approaches try to solve the inherent problems of image-based and position-based servoing by partitioning the control between image and Cartesian spaces. However, partitioning of the control often causes the Cartesian path to become more complex, which might result in operation close to the joint limits. A solution is to use a shortest-path approach, which avoids the joint limits in most cases. In this paper, two new shortest-path approaches to visual servoing are presented. First, a position-based approach is proposed that guarantees both the shortest Cartesian trajectory and object visibility. Then, a variant is presented, which avoids the use of a 3D model of the target object by using homography-based partial pose estimation.

  • 41.
    Kyrki, Ville
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Kragic, Danica
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Christensen, Henrik I.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Measurement errors in visual servoing2004In: 2004 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1- 5, PROCEEDINGS, 2004, p. 1861-1867Conference paper (Refereed)
    Abstract [en]

    In recent years, a number of hybrid visual servoing control algorithms have been proposed and evaluated. For some time now, it has been clear that the classical control approaches, image-based and position-based, have some inherent problems. Hybrid approaches try to combine them to overcome these problems. However, most of the proposed approaches concentrate on the design of the control law, neglecting the issue of errors resulting from the sensory system. This paper addresses the issue of measurement errors in visual servoing. The particular contribution is the analysis of the propagation of image error through pose estimation and the visual servoing control law. We have chosen to investigate the properties of the vision system and their effect on the performance of the control system. Two approaches are evaluated: i) position-based, and ii) 2 1/2 D visual servoing. We believe that our evaluation offers a tool to build and analyze hybrid control systems based on, for example, switching [1] or partitioning [2].
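A one-dimensional example of the kind of image-error propagation the paper analyses: propagating disparity noise into depth for a stereo pinhole model via a first-order (Jacobian) approximation. The model and numbers are illustrative assumptions; the paper's servoing Jacobians are higher-dimensional:

```python
def propagate_disparity_error(f, b, d, sigma_d):
    """First-order propagation of image (disparity) error into depth for a
    stereo pinhole model z = f * b / d: sigma_z ~= |dz/dd| * sigma_d.
    f: focal length in pixels, b: baseline in metres, d: disparity in
    pixels, sigma_d: disparity standard deviation in pixels."""
    z = f * b / d
    dz_dd = -f * b / d ** 2          # Jacobian of depth w.r.t. disparity
    return z, abs(dz_dd) * sigma_d

z, sigma_z = propagate_disparity_error(f=500.0, b=0.1, d=10.0, sigma_d=0.5)
```

Note that the depth uncertainty grows quadratically as disparity shrinks, which is one reason image-level noise can dominate the behaviour of a position-based control law at long range.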

  • 42. Li, W.
    et al.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Orebäck, Anders
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Architecture and its implementation for robots to navigate in unknown indoor environments2005In: Chinese Journal of Mechanical Engineering (English Edition), ISSN 1000-9345, Vol. 18, no 3, p. 366-370Article in journal (Refereed)
    Abstract [en]

    This paper discusses the design and implementation of an architecture for a mobile robot to navigate in dynamic and unknown indoor environments. The architecture is based on the framework of Open Robot Control Software at KTH (OROCOS@KTH), which is also discussed and evaluated. To navigate indoors efficiently, a new algorithm named door-like-exit detection is proposed, which exploits the 2D features of a door and extracts key points of the pathway from the raw data of a laser scanner. As a hybrid architecture, it is decomposed into several basic components which can be classified as either deliberative or reactive. Each component can execute concurrently and communicate with the others. The architecture is extensible and portable, and its components are reusable.
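The door-like-exit idea of finding an opening from 2D laser data can be sketched as a search for paired range discontinuities in a scan; this is a rough illustration under my own assumptions, not the paper's algorithm:

```python
def find_door_like_exits(ranges, jump=1.0):
    """Scan consecutive laser beams for large range discontinuities; a
    near-to-far jump followed by a far-to-near jump bounds a candidate
    opening (door-like exit). Returns (start, end) beam-index pairs.
    A rough sketch: a real detector would also check the opening width."""
    exits = []
    start = None
    for i in range(1, len(ranges)):
        if ranges[i] - ranges[i - 1] > jump:       # near-to-far edge
            start = i
        elif ranges[i - 1] - ranges[i] > jump and start is not None:
            exits.append((start, i - 1))           # far-to-near edge closes it
            start = None
    return exits

scan = [2.0, 2.0, 5.0, 5.1, 5.0, 2.0, 2.0]
doors = find_door_like_exits(scan)
```

The two jump edges correspond to the door posts; the beams between them look through the opening into the next room, which is why their ranges are much larger.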

  • 43. Li, W.
    et al.
    Christensen, Henrik
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Orebäck, Anders
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Chen, D.
    An architecture for indoor navigation2004In: Proceedings - IEEE International Conference on Robotics and Automation, 2004, no 2, p. 1783-1788Conference paper (Refereed)
    Abstract [en]

    This paper is concerned with the design and implementation of a control architecture for a mobile robot that is to navigate in dynamic unknown indoor environments. It is based on the framework of Open Robot Control Software @ KTH, which is discussed and evaluated in this paper. As a hybrid architecture, it is decomposed into several basic components which can be classified as either deliberative or reactive. Each component can execute concurrently and communicate with the others using unified communication interfaces. Scalability, portability and reusability are the goals of the design.

  • 44.
    Lundberg, Carl
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Barck-Holst, Carl
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Folkesson, John
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    Christensen, Henrik I.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA.
    PDA interface for a field robot2003In: Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS03), 2003, p. 2882-2888Conference paper (Refereed)
    Abstract [en]

    Operating robots in an outdoor setting poses interesting problems in terms of interaction. To interact with the robot, there is a need for a flexible computer interface. In this paper, a PDA-based (personal digital assistant, i.e., a handheld computer) approach to robot interaction is presented. The system is designed to allow non-expert users to utilise the robot in an urban exploration setup. The basic design is outlined and a first set of experiments is reported.

  • 45.
    Lundberg, Carl
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hedström, Andreas
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    The use of robots in harsh and unstructured field applications2005In: 2005 IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN), NEW YORK, NY: IEEE , 2005, p. 143-150Conference paper (Refereed)
    Abstract [en]

    Robots have the potential to be a significant aid in high-risk, unstructured and stressful situations such as those experienced by police, fire brigades, rescue workers and the military. In this project we have explored the abilities of today's robot technology in the mentioned fields. This was done by studying the users, identifying scenarios where a robot could be used, and implementing a robot system for these cases. We have concluded that highly portable field robots are emerging as an available technology, but that human-robot interaction is currently a major limiting factor of today's systems. Further, we have found that operational protocols, stating how to use the robots, have to be designed in order to make robots an effective tool in harsh and unstructured field environments.

  • 46.
    Lundberg, Carl
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik Iskov
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Evaluation of mapping with a tele-operated robot with video feedback2006In: Proc. IEEE Int. Workshop Robot Human Interact. Commun., 2006, p. 164-170Conference paper (Refereed)
    Abstract [en]

    This research examined robot operators' abilities to gain situational awareness while performing teleoperation with video feedback. The research included a user study in which 20 test persons explored and drew a map of a corridor and several rooms which they had not visited before. Half of the participants did the exploration and mapping using a teleoperated robot (iRobot PackBot) with video feedback, without being able to see or enter the exploration area themselves. The other half fulfilled the task manually by walking through the premises. The two groups were evaluated regarding time consumption, and the rendered maps were evaluated for error rate and dimensional and logical accuracy. Dimensional accuracy describes the test person's ability to estimate and reproduce dimensions in the map. Logical accuracy refers to missed, added, misinterpreted, reversed and inconsistent objects or shapes in the depiction. The evaluation showed that fulfilling the task with the robot on average took 96% longer and produced 44% more errors than doing it without the robot. Robot users overestimated dimensions by an average of 16%, while non-robot users made an average overestimation of 1%. Further, the robot users had a 69% larger standard deviation in their dimensional estimations and on average made 23% more logical errors during the test.
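
    The dimensional-accuracy figures quoted above (mean overestimation and its spread) are straightforward to compute from pairs of estimated and true dimensions; a sketch of one plausible way to do so, assuming signed percentage error per dimension (the function name is illustrative, not from the paper):

    ```python
    import statistics

    def overestimation_stats(estimates, truths):
        """Signed percentage error per dimension pair; returns
        (mean error %, sample standard deviation %)."""
        errs = [100.0 * (est - true) / true
                for est, true in zip(estimates, truths)]
        return statistics.mean(errs), statistics.stdev(errs)
    ```

    For example, dimensions of 10 m estimated as 11 m and 12 m give errors of 10% and 20%, i.e. a mean overestimation of 15%.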

  • 47. Newman, P.
    et al.
    Christensen, Henrik Iskov
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Session overview simultaneous localisation and mapping2007In: Robotics Research: Results of the 12th International Symposium ISRR, Springer Berlin/Heidelberg, 2007, p. 187-189Conference paper (Refereed)
  • 48. Okamura, Allison M.
    et al.
    Mataric, Maja J.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Numerical Analysis, NA.
    Medical and Health-Care Robotics Achievements and Opportunities2010In: IEEE robotics & automation magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 17, no 3, p. 26-37Article in journal (Refereed)
  • 49.
    Pacchierotti, Elena
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Design of an office-guide robot for social interaction studies2006In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-12, NEW YORK: IEEE , 2006, p. 4965-4970Conference paper (Refereed)
    Abstract [en]

    In this paper, the design of an office-guide robot for social interaction studies is presented. We are interested in studying the impact of passage behaviours in casual encounters. While the system offers assistance in locating the office that a visitor wants to reach, it is also expected to engage in a passing behaviour that allows free passage for other persons it may encounter. Through such an approach it is possible to study the effect of social interaction in a situation that is much more natural than out-of-context user studies. The system was tested in an early evaluation phase during which it operated for almost 7 hours. A total of 64 interactions with people were registered and 13 passage behaviours were performed, leading to the conclusion that this framework can be successfully used for the evaluation of passing behaviours in natural contexts of operation.

  • 50.
    Pacchierotti, Elena
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Embodied social interaction for service robots in hallway environments2006In: Field and Service Robotics / [ed] Corke, P; Sukkarieh, S, BERLIN: SPRINGER-VERLAG BERLIN , 2006, Vol. 25, p. 293-304Conference paper (Refereed)
    Abstract [en]

    A key aspect of service robotics for everyday use is motion in close proximity to humans. It is essential that the robot exhibit a behaviour that signals safe motion and awareness of the persons in the environment. To achieve this, there is a need to define control strategies that are perceived as socially acceptable by users who are not familiar with robots. In this paper a system for navigation in a hallway is presented, in which the rules of proxemics are used to define the interaction strategies. The experimental results demonstrate the contribution of this approach to establishing effective spatial interaction patterns between the robot and a person.
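
    The "rules of proxemics" mentioned in the abstract refer to Hall's interpersonal distance zones (boundaries below are the commonly cited approximate values); a toy sketch of how such zones might drive a hallway passing rule — the thresholds and function names are illustrative assumptions, not the paper's controller:

    ```python
    def proxemic_zone(distance_m):
        """Hall's proxemic zones, approximate boundaries in metres."""
        if distance_m < 0.45:
            return "intimate"
        if distance_m < 1.2:
            return "personal"
        if distance_m < 3.6:
            return "social"
        return "public"

    def hallway_action(person_distance_m, lateral_clearance_m):
        """Toy passage rule: start signalling avoidance while the person
        is still in the social zone, so the robot never enters their
        personal zone head-on."""
        zone = proxemic_zone(person_distance_m)
        if zone == "social" and lateral_clearance_m < 0.5:
            return "shift_right"
        if zone in ("personal", "intimate"):
            return "slow_and_yield"
        return "keep_course"
    ```

    Acting early, while still in the social zone, is what makes the avoidance legible to the person being passed; a last-moment swerve inside the personal zone would be perceived as unsafe even if it were collision-free.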
