Results 51 - 100 of 416
  • 51.
    Bohg, Jeannette
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Johnson-Roberson, Matthew
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Leon, Beatriz
    Universitat Jaume I, Castellon, Spain.
    Felip, Javier
    Universitat Jaume I, Castellon, Spain.
    Gratal, Xavi
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bergström, Niklas
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Morales, Antonio
    Universitat Jaume I, Castellon, Spain.
    Mind the Gap - Robotic Grasping under Incomplete Observation (2011). In: 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, May 9-13, 2011, New York: IEEE, 2011, pp. 686-693. Conference paper (Refereed)
    Abstract [en]

    We consider the problem of grasp and manipulation planning when the state of the world is only partially observable. Specifically, we address the task of picking up unknown objects from a table top. The proposed approach to object shape prediction aims at closing the knowledge gaps in the robot's understanding of the world. A completed state estimate of the environment can then be provided to a simulator in which stable grasps and collision-free movements are planned. The proposed approach is based on the observation that many objects commonly in use in a service robotic scenario possess symmetries. We search for the optimal parameters of these symmetries given visibility constraints. Once found, the point cloud is completed and a surface mesh reconstructed. Quantitative experiments show that the predictions are valid approximations of the real object shape. By demonstrating the approach on two very different robotic platforms its generality is emphasized.
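The symmetry-based shape completion described in this abstract can be illustrated with a toy two-dimensional sketch; the function names, the plane parametrization (a vertical axis x = c), and the grid of candidate planes are our own simplifications, not the paper's implementation:

```python
# Toy sketch: complete a partially observed 2D point cloud by reflecting
# it across the vertical symmetry axis x = c that best overlaps the
# observed points. (Hypothetical simplification of the paper's method.)

def reflect(points, c):
    """Mirror every point across the line x = c."""
    return [(2 * c - x, y) for x, y in points]

def nearest_sq_dist(p, pts):
    return min((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 for q in pts)

def best_plane(points, candidates):
    # A candidate plane is good if reflected points land near observed ones.
    def score(c):
        return sum(nearest_sq_dist(p, points) for p in reflect(points, c))
    return min(candidates, key=score)

def complete(points, candidates):
    c = best_plane(points, candidates)
    return points + [p for p in reflect(points, c) if p not in points]

# Three corners of a box between x = 0 and x = 2, with the occluded
# corner (2, 1) missing:
observed = [(0.0, 0.0), (0.0, 1.0), (2.0, 0.0)]
completed = complete(observed, candidates=[0.5, 1.0, 1.5])
```

In the paper the search is over symmetry parameters in 3D under visibility constraints, followed by surface mesh reconstruction; the grid search above only conveys the idea.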

    Full text (pdf)
    2011_ICRA_kthuji.pdf
  • 52.
    Bohg, Jeannette
    et al.
    Morales, Antonio
    Asfour, Tamim
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Data-Driven Grasp Synthesis - A Survey (2014). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 30, no. 2, pp. 289-309. Article in journal (Refereed)
    Abstract [en]

    We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar, or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis technique. In the case of known objects, we concentrate on the approaches that are based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of a similarity matching to a set of previously encountered objects. Finally, for the approaches dealing with unknown objects, the core part is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations.

  • 53.
    Bohg, Jeannette
    et al.
    Welke, Kai
    Institute for Anthropomatics, Karlsruhe Institute of Technology, Germany.
    Leon, Beatriz
    Department of Computer Science and Engineering, Universitat Jaume I, Spain.
    Do, Martin
    Institute for Anthropomatics, Karlsruhe Institute of Technology, Germany.
    Song, Dan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Wohlkinger, Walter
    Automation and Control Institute, Technische Universität Wien, Austria.
    Madry, Marianna
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Aldoma, Aitor
    Automation and Control Institute, Technische Universität Wien, Austria.
    Przybylski, Markus
    Institute for Anthropomatics, Karlsruhe Institute of Technology, Germany.
    Asfour, Tamim
    Institute for Anthropomatics, Karlsruhe Institute of Technology, Germany.
    Marti, Higinio
    Department of Computer Science and Engineering, Universitat Jaume I, Spain.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Morales, Antonio
    Department of Computer Science and Engineering, Universitat Jaume I, Spain.
    Vincze, Markus
    Automation and Control Institute, Technische Universität Wien, Austria.
    Task-based Grasp Adaptation on a Humanoid Robot (2012). In: Proceedings 10th IFAC Symposium on Robot Control, 2012, pp. 779-786. Conference paper (Refereed)
    Abstract [en]

    In this paper, we present an approach towards autonomous grasping of objects according to their category and a given task. Recent advances in the field of object segmentation and categorization as well as task-based grasp inference have been leveraged by integrating them into one pipeline. This allows us to transfer task-specific grasp experience between objects of the same category. The effectiveness of the approach is demonstrated on the humanoid robot ARMAR-IIIa.

  • 54.
    Bore, Nils
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Detection and Tracking of General Movable Objects in Large 3D Maps. Manuscript (preprint) (Other academic)
    Abstract [en]

    This paper studies the problem of detection and tracking of general objects with long-term dynamics, observed by a mobile robot moving in a large environment. A key problem is that due to the environment scale, it can only observe a subset of the objects at any given time. Since some time passes between observations of objects in different places, the objects might be moved when the robot is not there. We propose a model for this movement in which the objects typically only move locally, but with some small probability they jump longer distances, through what we call global motion. For filtering, we decompose the posterior over local and global movements into two linked processes. The posterior over the global movements and measurement associations is sampled, while we track the local movement analytically using Kalman filters. This novel filter is evaluated on point cloud data gathered autonomously by a mobile robot over an extended period of time. We show that tracking jumping objects is feasible, and that the proposed probabilistic treatment outperforms previous methods when applied to real world data. The key to efficient probabilistic tracking in this scenario is focused sampling of the object posteriors.
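The filtering scheme this abstract describes (analytic Kalman tracking of local motion, with unlikely measurements explained as global jumps) can be illustrated in one dimension. The class below is our own hypothetical sketch, not the authors' code, and it collapses the sampled global-motion posterior into a simple gating test:

```python
import math

# Illustrative sketch: a 1D object position is tracked with a Kalman
# filter for small local motion; a measurement that is wildly unlikely
# under the local model is treated as a "global" jump and the filter is
# re-initialised at the new location. Parameter values are made up.

class JumpAwareKalman:
    def __init__(self, x0, p0=1.0, q=0.01, r=0.05, jump_sigma=4.0):
        self.x, self.p = x0, p0       # state mean and variance
        self.q, self.r = q, r         # process / measurement noise
        self.jump_sigma = jump_sigma  # gating threshold in std deviations

    def update(self, z):
        # predict: objects mostly stay put, variance grows slowly
        p_pred = self.p + self.q
        innov = z - self.x
        s = p_pred + self.r           # innovation variance
        if abs(innov) > self.jump_sigma * math.sqrt(s):
            # global-motion hypothesis: restart the local filter at z
            self.x, self.p = z, 1.0
            return "jump"
        k = p_pred / s                # Kalman gain
        self.x += k * innov
        self.p = (1.0 - k) * p_pred
        return "local"
```

In the paper the posterior over global movements and measurement associations is sampled jointly for many objects; the per-object gating test above only stands in for that step.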

  • 55.
    Bore, Nils
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Multiple Object Detection, Tracking and Long-Term Dynamics Learning in Large 3D Maps. Manuscript (preprint) (Other academic)
    Abstract [en]

    In this work, we present a method for tracking and learning the dynamics of all objects in a large scale robot environment. A mobile robot patrols the environment and visits the different locations one by one. Movable objects are discovered by change detection, and tracked throughout the robot deployment. For tracking, we extend our previous Rao-Blackwellized particle filter with birth and death processes, enabling the method to handle an arbitrary number of objects. Target births and associations are sampled using Gibbs sampling. The parameters of the system are then learnt using the Expectation Maximization algorithm in an unsupervised fashion. The system therefore enables learning of the dynamics of one particular environment, and of its objects. The algorithm is evaluated on data collected autonomously by a mobile robot in an office environment during a real-world deployment. We show that the algorithm automatically identifies and tracks the moving objects within 3D maps and infers plausible dynamics models, significantly decreasing the modeling bias of our previous work. The proposed method represents an improvement over previous methods for environment dynamics learning as it allows for learning of fine grained processes.

  • 56.
    Bore, Nils
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Object Instance Detection and Dynamics Modeling in a Long-Term Mobile Robot Context (2017). Doctoral thesis, compilation of papers (Other academic)
    Abstract [en]

    In recent years, simple service robots such as autonomous vacuum cleaners and lawn mowers have become commercially available and increasingly common. The next generation of service robots should perform more advanced tasks, such as cleaning up objects. Robots then need to learn to robustly navigate, and manipulate, cluttered environments such as an untidy living room. In this thesis, we focus on representations for tasks such as general cleaning and fetching of objects. We discuss requirements for these specific tasks, and argue that solving them would be generally useful because of their object-centric nature. Our approach to understanding environments at a fine-grained level rests on two fundamental insights. First, many of today's robot map representations are limited to the spatial domain, and ignore that there is a time axis that constrains how much an environment may change during a given period. We argue that it is of critical importance to also consider the temporal domain. By studying the motion of individual objects, we can enable tasks such as general cleaning and object fetching. The second insight is that mobile robots are becoming more robust, and can therefore collect large amounts of data from their environments. With more data, unsupervised learning of models becomes feasible, allowing the robot to adapt to changes in the environment, and to scenarios that the designer could not foresee. We view these capabilities as vital for robots to become truly autonomous. The combination of unsupervised learning and dynamics modelling creates an interesting symbiosis: the dynamics vary between different environments and between the objects in one environment, and learning can capture these variations. A major difficulty when modeling environment dynamics is that the whole environment cannot be observed at one time, since the robot is moving between different places. We demonstrate how this can be dealt with in a principled manner, by modeling several modes of object movement. We also demonstrate methods for detection and learning of objects and structures in the static parts of the maps. Using the complete system, we can represent and learn many aspects of the full environment. In real-world experiments, we demonstrate that our system can keep track of varied objects in large and highly dynamic environments.

    Full text (pdf)
    fulltext
  • 57.
    Bore, Nils
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Ekekrantz, Johan
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Jensfelt, Patric
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Folkesson, John
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Detection and Tracking of General Movable Objects in Large Three-Dimensional Maps (2019). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 35, no. 1, pp. 231-247. Article in journal (Refereed)
    Abstract [en]

    This paper studies the problem of detection and tracking of general objects with semistatic dynamics observed by a mobile robot moving in a large environment. A key problem is that due to the environment scale, the robot can only observe a subset of the objects at any given time. Since some time passes between observations of objects in different places, the objects might be moved when the robot is not there. We propose a model for this movement in which the objects typically only move locally, but with some small probability they jump longer distances through what we call global motion. For filtering, we decompose the posterior over local and global movements into two linked processes. The posterior over the global movements and measurement associations is sampled, while we track the local movement analytically using Kalman filters. This novel filter is evaluated on point cloud data gathered autonomously by a mobile robot over an extended period of time. We show that tracking jumping objects is feasible, and that the proposed probabilistic treatment outperforms previous methods when applied to real world data. The key to efficient probabilistic tracking in this scenario is focused sampling of the object posteriors.

  • 58.
    Bore, Nils
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Folkesson, John
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Querying 3D Data by Adjacency Graphs (2015). In: Computer Vision Systems / [ed] Nalpantidis, Lazaros and Krüger, Volker and Eklundh, Jan-Olof and Gasteratos, Antonios, Springer Publishing Company, 2015, pp. 243-252. Chapter in book, part of anthology (Refereed)
    Abstract [en]

    The need for robots to search the 3D data they have saved is becoming more apparent. We present an approach for finding structures in 3D models such as those built by robots of their environment. The method extracts geometric primitives from point cloud data. An attributed graph over these primitives forms our representation of the surface structures. Recurring substructures are found with frequent graph mining techniques. We investigate if a model invariant to changes in size and reflection using only the geometric information of and between primitives can be discriminative enough for practical use. Experiments confirm that it can be used to support queries of 3D models.
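The attributed-graph representation this abstract describes can be sketched minimally; the node labels, data layout, and the restriction to single labeled edges are our own simplifications (the paper applies full frequent graph mining to find larger substructures):

```python
from collections import Counter

# Hypothetical sketch: each 3D model is an attributed graph whose nodes
# are geometric primitives labeled by type ("plane", "cylinder", ...)
# and whose edges connect adjacent primitives. Recurring substructures
# are approximated here by counting labeled edges across models.

def edge_patterns(nodes, edges):
    # nodes: {node_id: primitive label}; edges: iterable of (id, id)
    return Counter(tuple(sorted((nodes[a], nodes[b]))) for a, b in edges)

def frequent(models, min_support=2):
    """Labeled edge patterns occurring in at least min_support models."""
    support = Counter()
    for nodes, edges in models:
        support.update(set(edge_patterns(nodes, edges)))
    return {p for p, s in support.items() if s >= min_support}

# Two tiny example "models":
MODELS = [
    ({1: "plane", 2: "plane", 3: "cylinder"}, [(1, 2), (2, 3)]),
    ({1: "plane", 2: "cylinder"}, [(1, 2)]),
]
```

Because only primitive types and adjacency enter the representation, the patterns are invariant to size and reflection, matching the invariance the abstract investigates.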

    Full text (pdf)
    fulltext
  • 59.
    Bore, Nils
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Folkesson, John
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Retrieval of Arbitrary 3D Objects From Robot Observations (2015). In: Retrieval of Arbitrary 3D Objects From Robot Observations, Lincoln: IEEE Robotics and Automation Society, 2015, pp. 1-8. Conference paper (Refereed)
    Abstract [en]

    We have studied the problem of retrieval of arbitrary object instances from a large point cloud data set. The context is autonomous robots operating for long periods of time, weeks up to months, and regularly saving point cloud data. The ever growing collection of data is stored in a way that allows ranking candidate examples of any query object, given in the form of a single view point cloud, without the need to access the original data. The top ranked ones can then be compared in a second phase using the point clouds themselves. Our method does not assume that the point clouds are segmented or that the objects to be queried are known ahead of time. This means that we are able to represent the entire environment, but it also poses problems for retrieval. To overcome this, our approach learns from each actual query to improve search results in terms of the ranking. This learning is automatic and based only on the queries. We demonstrate our system on data collected autonomously by a robot operating over 13 days in our building. Comparisons with other techniques and several variations of our method are shown.

  • 60.
    Bortolin, Gianantonio
    et al.
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Optimeringslära och systemteori.
    Gutman, P O
    Nilsson, B
    On modelling of curl in multi-ply paperboard (2006). In: Journal of Process Control, ISSN 0959-1524, E-ISSN 1873-2771, Vol. 16, no. 4, pp. 419-429. Article in journal (Refereed)
    Abstract [en]

    This paper describes a grey-box model for the curl and twist of the carton board produced at AssiDoman Frovi, Sweden. The main equations are based on classical lamination theory of composite materials, and each constituent ply is considered as a macroscopic homogeneous, elastic medium. The model used data from June to September 2004, and shows a general agreement between predicted and measured curvatures. The data were cleaned from outliers by means of the Hampel filter, a nonlinear moving window filter, and with a model based method. Regularization and backwards elimination were used to cope with the low identifiability of the problem. The model was then complemented with a sub-model of immeasurable/unmodelled disturbances estimated with an extended Kalman filter.
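The Hampel filter mentioned above is a standard robust outlier detector: a window slides over the series, and a sample further than a few robust standard deviations (1.4826 times the median absolute deviation) from the window median is replaced by that median. A minimal sketch, with made-up default parameters:

```python
import statistics

def hampel(series, k=3, n_sigma=3.0):
    """Replace outliers in a list of floats by the local window median.

    k: half-width of the moving window (window holds up to 2k+1 samples).
    n_sigma: rejection threshold in robust standard deviations.
    """
    out = list(series)
    for i in range(len(series)):
        window = series[max(0, i - k): i + k + 1]
        med = statistics.median(window)
        # 1.4826 * MAD estimates the standard deviation for Gaussian data
        mad = statistics.median(abs(v - med) for v in window)
        if abs(series[i] - med) > n_sigma * 1.4826 * mad:
            out[i] = med
    return out
```

Unlike a plain moving-average filter, the median-based test leaves smooth trends untouched while removing isolated spikes, which is why it suits sensor data cleaning of the kind described in the abstract.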

  • 61.
    Bratt, Mattias
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Smith, Christian
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Christensen, Henrik I.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Design of a Control Strategy for Teleoperation of a Platform with Significant Dynamics (2006). In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-12, New York, NY: IEEE, 2006, pp. 1700-1705. Conference paper (Refereed)
    Abstract [en]

    A teleoperation system for controlling a robot with fast dynamics over the Internet has been constructed. It employs a predictive control structure with an accurate dynamic model of the robot to overcome problems caused by varying delays. The operator interface uses a stereo virtual reality display of the robot cell, and a haptic device for force feedback, including virtual obstacle avoidance forces.
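A predictive structure of this kind forward-simulates the robot model over the commands that have been sent but whose effect has not yet been observed, so the operator sees a prediction rather than a delayed state. The double-integrator model, names, and numbers below are our own illustration, not the system's actual dynamics model:

```python
from collections import deque

# Illustrative sketch: predict the robot state the operator should see,
# by integrating a simple model over the commands still "in flight"
# during the round-trip network delay.

def predict_state(x, v, pending_cmds, dt=0.1):
    """Integrate a double-integrator model over buffered commands."""
    for a in pending_cmds:   # one acceleration command per time step
        v += a * dt
        x += v * dt
    return x, v

# Commands issued during the last 0.3 s of delay, not yet acknowledged:
pending = deque([1.0, 1.0, 1.0])
x_pred, v_pred = predict_state(0.0, 0.0, pending)
```

When the true state finally arrives, the predictor restarts from it, so model error cannot accumulate beyond one delay interval.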

    Full text (pdf)
    Bratt_iros06.pdf
  • 62.
    Bray, Matthieu
    et al.
    Sidenbladh, Hedvig
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Eklundh, Jan-Olof
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Recognition of gestures in the context of speech (2002). In: 16th International Conference on Pattern Recognition, 2002. Proceedings, 2002. Conference paper (Refereed)
    Abstract [en]

    The scope of this paper is the interpretation of a user's intention via a video camera and a speech recognizer. In comparison to previous work which only takes into account gesture recognition, we demonstrate that by including speech, system comprehension increases. For the gesture recognition, the user must wear a colored glove; we then extract the velocity of the center of gravity of the hand. A Hidden Markov Model (HMM) is learned for each gesture that we want to recognize. To decide whether a gesture has been performed during a dynamic action, we implement a threshold model below which the gesture is not detected. The offline tests for gesture recognition have a success rate exceeding 85% for each gesture. The combination of speech and gestures is realized using Bayesian theory.
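The per-gesture HMMs with a rejection threshold can be sketched as below; the two-state models and the fixed log-likelihood floor are made-up stand-ins for the learned gesture models and threshold model in the paper:

```python
import math

# Sketch: score an observation sequence under each gesture HMM with the
# forward algorithm, pick the best, and reject it if the score is below
# a threshold (so "no gesture" can be reported).

def forward_loglik(pi, A, B, obs):
    """Log-likelihood of a discrete observation sequence under an HMM."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [B[j][o] * sum(alpha[i] * A[i][j] for i in range(n))
                 for j in range(n)]
    return math.log(sum(alpha))

def classify(models, obs, threshold):
    scores = {name: forward_loglik(*m, obs) for name, m in models.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > threshold else None

# Two invented gestures over a binary observation alphabet,
# each as (initial probs pi, transition matrix A, emission matrix B):
MODELS = {
    "wave":  ([1.0, 0.0], [[0.9, 0.1], [0.1, 0.9]], [[0.9, 0.1], [0.1, 0.9]]),
    "point": ([1.0, 0.0], [[0.9, 0.1], [0.1, 0.9]], [[0.1, 0.9], [0.9, 0.1]]),
}
```

For long sequences the forward recursion needs log-space scaling to avoid underflow; the short-sequence version above keeps the sketch readable.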

  • 63.
    Brooks, A.
    et al.
    Kaupp, T.
    Makarenko, A.
    Williams, S.
    Orebäck, Anders
    KTH, Skolan för datavetenskap och kommunikation (CSC), Numerisk Analys och Datalogi, NADA.
    Orca: A component model and repository (2007). In: Software Engineering for Experimental Robotics, Springer, 2007, pp. 231-251. Conference paper (Refereed)
    Abstract [en]

    This chapter describes Orca: an open-source project which applies Component-Based Software Engineering principles to robotics. It provides the means for defining and implementing interfaces such that components developed independently are likely to be inter-operable. In addition, it provides a repository of free reusable components. Orca attempts to be widely applicable by imposing minimal design constraints. This chapter describes lessons learned while using Orca and steps taken to improve the framework based on those lessons. Improvements revolve around middleware issues and the problems encountered while scaling to larger distributed systems. Results are presented from systems that were implemented.

  • 64.
    Bueno, Jesus Ignacio
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Integration of tracking and adaptive Gaussian mixture models for posture recognition (2006). In: Proc. IEEE Int. Workshop Robot Human Interact. Commun., 2006, pp. 623-628. Conference paper (Refereed)
    Abstract [en]

    In this paper, we present a system for continuous posture recognition. The main contribution of the proposed approach is the integration of an adaptive color model with a tracking system, which allows for robust continuous posture recognition based on Principal Component Analysis. The adaptive color model uses Gaussian Mixture Models for skin and background color representation, a Bayesian framework for classification, and a Kalman filter for tracking the hands and head of a person interacting with the robot. Experimental evaluation shows that the integration of tracking and an adaptive color model supports the robustness and flexibility of the system when illumination changes occur.
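The Bayesian skin/background classification with Gaussian mixtures can be sketched as follows; the chromaticity features, mixture parameters, and prior are invented placeholders (in the paper they are learned and adapted online as illumination changes):

```python
import math

# Sketch: each class-conditional density p(color | class) is a Gaussian
# mixture over normalized (r, g) chromaticity; a pixel is classified by
# the Bayes posterior p(skin | color). All numbers below are made up.

def gauss(x, mu, var):
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def mixture_pdf(rg, components):
    # components: list of (weight, mu_r, mu_g, shared variance)
    return sum(w * gauss(rg[0], mr, v) * gauss(rg[1], mg, v)
               for w, mr, mg, v in components)

SKIN = [(0.6, 0.45, 0.30, 0.002), (0.4, 0.50, 0.33, 0.004)]
BG   = [(1.0, 0.33, 0.33, 0.02)]   # broad background model

def p_skin(rg, prior_skin=0.3):
    """Posterior probability that a chromaticity pair is skin."""
    ps = prior_skin * mixture_pdf(rg, SKIN)
    pb = (1 - prior_skin) * mixture_pdf(rg, BG)
    return ps / (ps + pb)
```

Adaptation, as described in the abstract, would re-estimate the mixture parameters from pixels inside the Kalman-tracked hand and head regions; the static parameters here only show the classification step.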

  • 65.
    Butepage, Judith
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Cruciani, Silvia
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Kokic, Mia
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Welle, Michael
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    From Visual Understanding to Complex Object Manipulation (2019). In: Annual Review of Control, Robotics, and Autonomous Systems, Vol. 2, pp. 161-179. Review article (Refereed)
    Abstract [en]

    Planning and executing object manipulation requires integrating multiple sensory and motor channels while acting under uncertainty and complying with task constraints. As the modern environment is tuned for human hands, designing robotic systems with similar manipulative capabilities is crucial. Research on robotic object manipulation is divided into smaller communities interested in, e.g., motion planning, grasp planning, sensorimotor learning, and tool use. However, few attempts have been made to combine these areas into holistic systems. In this review, we aim to unify the underlying mechanics of grasping and in-hand manipulation by focusing on the temporal aspects of manipulation, including visual perception, grasp planning and execution, and goal-directed manipulation. Inspired by human manipulation, we envision that an emphasis on the temporal integration of these processes opens the way for human-like object use by robots.

  • 66.
    Båberg, Fredrik
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Caccamo, Sergio
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Smets, Nanja
    Neerincx, Mark
    Ögren, Petter
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Free Look UGV Teleoperation Control Tested in Game Environment: Enhanced Performance and Reduced Workload (2016). In: International Symposium on Safety, Security and Rescue Robotics, 2016. Conference paper (Refereed)
    Abstract [en]

    Concurrent telecontrol of the chassis and camera of an Unmanned Ground Vehicle (UGV) is a demanding task for Urban Search and Rescue (USAR) teams. The standard way of controlling UGVs is called Tank Control (TC), but there is reason to believe that Free Look Control (FLC), a control mode used in games, could reduce this load substantially by decoupling, and providing separate controls for, camera translation and rotation. The general hypothesis is that FLC (1) reduces robot operators' workload and (2) enhances their performance for dynamic and time-critical USAR scenarios. A game-based environment was set up to systematically compare FLC with TC in two typical search and rescue tasks: navigation and exploration. The results show that FLC improves mission performance in both exploration (search) and path following (navigation) scenarios. In the former, more objects were found, and in the latter shorter navigation times were achieved. FLC also caused lower workload and stress levels in both scenarios, without inducing a significant difference in the number of collisions. Finally, FLC was preferred by 75% of the subjects for exploration, and 56% for path following.

  • 67.
    Båberg, Fredrik
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Wang, Yuquan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Caccamo, Sergio
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ögren, Petter
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Adaptive object centered teleoperation control of a mobile manipulator (2016). In: 2016 IEEE International Conference on Robotics and Automation (ICRA), Institute of Electrical and Electronics Engineers (IEEE), 2016, pp. 455-461. Conference paper (Refereed)
    Abstract [en]

    Teleoperation of a mobile robot manipulating and exploring an object shares many similarities with the manipulation of virtual objects in 3D design software such as AutoCAD. The user interfaces are however quite different, mainly for historical reasons. In this paper we aim to change that, and draw inspiration from the 3D design community to propose a teleoperation interface control mode that is identical to the ones used to locally navigate the virtual viewpoint in most Computer Aided Design (CAD) software.

    The proposed mobile manipulator control framework thus allows the user to focus on the 3D objects being manipulated, using control modes such as orbit object and pan object, supported by data from the wrist mounted RGB-D sensor. The gripper of the robot performs the desired motions relative to the object, while the manipulator arm and base moves in a way that realizes the desired gripper motions. The system redundancies are exploited in order to take additional constraints, such as obstacle avoidance, into account, using a constraint based programming framework.

    Full text (pdf)
    fulltext
  • 68.
    Bütepage, Judith
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Generative models for action generation and action understanding (2019). Doctoral thesis, compilation of papers (Other academic)
    Abstract [en]

    The question of how to build intelligent machines raises the question of how to represent the world to enable intelligent behavior. In nature, this representation relies on the interplay between an organism's sensory input and motor input. Action-perception loops allow many complex behaviors to arise naturally. In this work, we take these sensorimotor contingencies as an inspiration to build robot systems that can autonomously interact with their environment and with humans. The goal is to pave the way for robot systems that can learn motor control in an unsupervised fashion and relate their own sensorimotor experience to observed human actions. By combining action generation and action understanding we hope to facilitate smooth and intuitive interaction between robots and humans in shared work spaces. To model robot sensorimotor contingencies and human behavior we employ generative models. Since generative models represent a joint distribution over relevant variables, they are flexible enough to cover the range of tasks that we are tackling here. Generative models can represent variables that originate from multiple modalities, model temporal dynamics, incorporate latent variables and represent uncertainty over any variable - all of which are features required to model sensorimotor contingencies. By using generative models, we can predict the temporal development of the variables in the future, which is important for intelligent action selection. We present two lines of work. Firstly, we focus on unsupervised learning of motor control with the help of sensorimotor contingencies. Based on Gaussian Process forward models, we demonstrate how the robot can execute goal-directed actions with the help of planning techniques or reinforcement learning. Secondly, we present a number of approaches to model human activity, ranging from pure unsupervised motion prediction to including semantic action and affordance labels. Here we employ deep generative models, namely Variational Autoencoders, to model the 3D skeletal pose of humans over time and, if required, include semantic information. These two lines of work are then combined to implement physical human-robot interaction tasks. Our experiments focus on real-time applications, both when it comes to robot experiments and human activity modeling. Since many real-world scenarios do not have access to high-end sensors, we require our models to cope with uncertainty. Additional requirements are data-efficient learning, because of the wear and tear of the robot and human involvement, online employability, and operation under safety and compliance constraints. We demonstrate in our experiments that generative models of sensorimotor contingencies can satisfy these requirements.

    Full text (pdf)
    fulltext
  • 69.
    Bütepage, Judith
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Kjellström, Hedvig
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    A Probabilistic Semi-Supervised Approach to Multi-Task Human Activity Modeling. Manuscript (preprint) (Other academic)
    Abstract [en]

    Human behavior is a continuous stochastic spatio-temporal process which is governed by semantic actions and affordances as well as latent factors. Therefore, video-based human activity modeling is concerned with a number of tasks such as inferring current and future semantic labels, predicting future continuous observations as well as imagining possible future label and feature sequences. In this paper we present a semi-supervised probabilistic deep latent variable model that can represent both discrete labels and continuous observations as well as latent dynamics over time. This allows the model to solve several tasks at once without explicit fine-tuning. We focus here on the tasks of action classification, detection, prediction and anticipation as well as motion prediction and synthesis based on 3D human activity data recorded with Kinect. We further extend the model to capture hierarchical label structure and to model the dependencies between multiple entities, such as a human and objects. Our experiments demonstrate that our principled approach to human activity modeling can be used to detect current and anticipate future semantic labels and to predict and synthesize future label and feature sequences. When comparing our model to state-of-the-art approaches, which are specifically designed for e.g. action classification, we find that our probabilistic formulation outperforms or is comparable to these task specific models.

    Full text (pdf)
    fulltext
  • 70.
    Caccamo, Sergio
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bekiroglu, Yasemin
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Active Exploration Using Gaussian Random Fields and Gaussian Process Implicit Surfaces (2016). In: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016), Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 582-589. Conference paper (Refereed)
    Abstract [en]

    In this work we study the problem of exploring surfaces and building compact 3D representations of the environment surrounding a robot through active perception. We propose an online probabilistic framework that merges visual and tactile measurements using Gaussian Random Fields and Gaussian Process Implicit Surfaces. The system investigates incomplete point clouds in order to find a small set of regions of interest which are then physically explored with a robotic arm equipped with tactile sensors. We show experimental results obtained using a PrimeSense camera, a Kinova Jaco2 robotic arm and Optoforce sensors in different scenarios. We then demonstrate how to use the online framework for object detection and terrain classification.

  • 71. Carignan, C. R.
    et al.
    Olsson, Pontus
    KTH, Skolan för elektro- och systemteknik (EES). Imaging Sci./Info. Systems Center, Department of Radiology, Georgetown University, United States .
    Cooperative control of virtual objects over the internet using force-reflecting master arms (2004). In: Proceedings - IEEE International Conference on Robotics and Automation, 2004, no. 2, p. 1221-1226. Conference paper (Refereed)
    Abstract [en]

    Force-reflecting master arms are explored for use as haptic displays in physical therapy interventions over the internet. Rehabilitation tasks can be constructed in which both the patient and therapist can interact with a common object from distant locations. Each haptic master exerts "forces" on a virtual object which, in response, generates desired velocities for the master arm to track. A novel cooperative control architecture based on wave variables is implemented to counter the destabilizing effect of internet time-delay. The control scheme is validated experimentally using a pair of InMotion2 robots in a virtual beam manipulation task between remote sites.
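The wave-variable approach mentioned in the abstract can be illustrated with the standard (scattering) transformation from the teleoperation literature. This is a generic sketch, not the paper's exact controller; the variable names and the wave impedance value are illustrative only.

```python
# Standard wave-variable transformation: the channel transmits waves
# u and w instead of velocity v and force F, which keeps a constant
# time-delay channel passive. b is the wave impedance (a design gain).
import math

def to_waves(v, F, b):
    s = math.sqrt(2.0 * b)
    u = (b * v + F) / s   # forward-travelling wave
    w = (b * v - F) / s   # returning wave
    return u, w

def from_waves(u, w, b):
    s = math.sqrt(2.0 * b)
    v = (u + w) * s / (2.0 * b)  # recover velocity
    F = (u - w) * s / 2.0        # recover force
    return v, F

# Round trip through the transformation recovers the original pair.
u, w = to_waves(0.2, 1.5, b=10.0)
v, F = from_waves(u, w, b=10.0)
```

The point of the encoding is that the power flowing into the channel, `u**2/2 - w**2/2`, is bounded regardless of the delay, which is what counters the destabilizing effect of internet latency.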

  • 72. Carli, Ruggero
    et al.
    Fagnani, Fabio
    Focoso, Marco
    Speranzon, Alberto
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik.
    Zampieri, Sandro
    Symmetries in the coordinated consensus problem (2006). In: Networked Embedded Sensing And Control / [ed] Antsaklis, PJ; Tabuada, P, Springer Berlin/Heidelberg, 2006, Vol. 331, p. 25-51. Conference paper (Refereed)
    Abstract [en]

    In this paper we consider the consensus problem, which is widely studied in the robotics and control communities. The aim of the paper is to characterize the relationship between the amount of information exchanged by the vehicles and the speed of convergence to consensus. Time-invariant communication graphs that exhibit particular symmetries are shown to yield slow convergence if the amount of information exchanged does not scale with the number of vehicles. On the other hand, we show that retaining symmetries in time-varying communication networks makes it possible to increase the speed of convergence even in the presence of limited information exchange.
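As a generic illustration of the consensus iteration discussed in the abstract (not the paper's analysis), the following sketch runs the standard linear update x ← Wx on a symmetric ring of six agents with uniform neighbor weights, a doubly stochastic choice under which the states converge to the average of the initial values.

```python
# Linear consensus on a ring: each agent repeatedly averages its own
# value with its two neighbours (weights 1/3 each). The weight matrix
# is doubly stochastic, so the states converge to the initial average.

n = 6
x = [float(i) for i in range(n)]   # initial values 0..5
avg = sum(x) / n                   # the consensus value, 2.5

for _ in range(2000):
    x = [(x[i - 1] + x[i] + x[(i + 1) % n]) / 3.0 for i in range(n)]

assert all(abs(xi - avg) < 1e-6 for xi in x)
```

The convergence rate is governed by the second-largest eigenvalue modulus of the weight matrix, which is exactly the quantity the symmetry arguments in the paper bound.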

  • 73.
    Chen, Guang
    et al.
    Tongji Univ, Coll Automot Engn, Shanghai, Peoples R China.;Tech Univ Munich, Robot Artificial Intelligence & Real Time Syst, Munich, Germany..
    Cao, Hu
    Hunan Univ, State Key Lab Adv Design & Mfg Vehicle Body, Changsha, Hunan, Peoples R China..
    Ye, Canbo
    Tongji Univ, Coll Automot Engn, Shanghai, Peoples R China..
    Zhang, Zhenyan
    Tongji Univ, Coll Automot Engn, Shanghai, Peoples R China..
    Liu, Xingbo
    Tongji Univ, Coll Automot Engn, Shanghai, Peoples R China..
    Mo, Xuhui
    Hunan Univ, State Key Lab Adv Design & Mfg Vehicle Body, Changsha, Hunan, Peoples R China..
    Qu, Zhongnan
    Swiss Fed Inst Technol, Comp Engn & Networks Lab, Zurich, Switzerland..
    Conradt, Jörg
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Beräkningsvetenskap och beräkningsteknik (CST).
    Roehrbein, Florian
    Tech Univ Munich, Robot Artificial Intelligence & Real Time Syst, Munich, Germany..
    Knoll, Alois
    Tech Univ Munich, Robot Artificial Intelligence & Real Time Syst, Munich, Germany..
    Multi-Cue Event Information Fusion for Pedestrian Detection With Neuromorphic Vision Sensors (2019). In: Frontiers in Neurorobotics, ISSN 1662-5218, Vol. 13, article id 10. Article in journal (Refereed)
    Abstract [en]

    Neuromorphic vision sensors are bio-inspired cameras that naturally capture the dynamics of a scene with ultra-low latency, filtering out redundant information with low power consumption. Few works address object detection with this sensor. In this work, we propose to develop pedestrian detectors that unlock the potential of the event data by leveraging multi-cue information and different fusion strategies. To make the best out of the event data, we introduce three different event-stream encoding methods based on Frequency, Surface of Active Events (SAE) and Leaky Integrate-and-Fire (LIF). We further integrate them into state-of-the-art neural network architectures with two fusion approaches: channel-level fusion of the raw feature space and decision-level fusion with the probability assignments. We present a qualitative and quantitative explanation of why the different encoding methods are chosen for evaluating pedestrian detection and of which method performs best. We demonstrate the advantages of decision-level fusion via leveraging multi-cue event information and show that our approach performs well on a self-annotated event-based pedestrian dataset with 8,736 event frames. This work paves the way for further perception applications with neuromorphic vision sensors.
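Of the three encodings named in the abstract, the Surface of Active Events is the simplest to sketch: each pixel stores the timestamp of its most recent event. The following is a generic illustration of that concept, not the paper's exact implementation (polarity handling and normalization are omitted).

```python
# Surface of Active Events (SAE) sketch: a per-pixel map of the latest
# event timestamp. Later events at the same pixel overwrite earlier ones.

W, H = 4, 3
sae = [[0.0] * W for _ in range(H)]

# events as (x, y, timestamp, polarity); polarity is ignored here
events = [(1, 2, 0.10, 1), (1, 2, 0.25, -1), (3, 0, 0.30, 1)]

for x, y, t, _pol in events:
    sae[y][x] = max(sae[y][x], t)   # keep the most recent timestamp
```

A frame-based detector can then consume `sae` (usually after normalizing timestamps into a fixed intensity range) like an ordinary grayscale image.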

  • 74.
    Chen, Xi
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Ghadirzadeh, Ali
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Folkesson, John
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Björkman, Mårten
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Jensfelt, Patric
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Deep Reinforcement Learning to Acquire Navigation Skills for Wheel-Legged Robots in Complex Environments (2018). In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018. Conference paper (Refereed)
    Abstract [en]

    Mobile robot navigation in complex and dynamic environments is a challenging but important problem. Reinforcement learning approaches fail to solve these tasks efficiently due to reward sparsities, temporal complexities and high-dimensionality of sensorimotor spaces which are inherent in such problems. We present a novel approach to train action policies to acquire navigation skills for wheel-legged robots using deep reinforcement learning. The policy maps height-map image observations to motor commands to navigate to a target position while avoiding obstacles. We propose to acquire the multifaceted navigation skill by learning and exploiting a number of manageable navigation behaviors. We also introduce a domain randomization technique to improve the versatility of the training samples. We demonstrate experimentally a significant improvement in terms of data-efficiency, success rate, robustness against irrelevant sensory data, and also the quality of the maneuver skills.

  • 75.
    Chung, Michael Jae-Yoon
    et al.
    University of Washington, Seattle.
    Pronobis, Andrzej
    University of Washington, Seattle.
    Cakmak, Maya
    University of Washington, Seattle.
    Fox, Dieter
    University of Washington, Seattle.
    Rao, Rajesh P. N.
    University of Washington, Seattle.
    Autonomous Question Answering with Mobile Robots in Human-Populated Environments (2016). In: Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'16), IEEE, 2016. Conference paper (Refereed)
    Abstract [en]

    Autonomous mobile robots will soon become ubiquitous in human-populated environments. Besides their typical applications in fetching, delivery, or escorting, such robots present the opportunity to assist human users in their daily tasks by gathering and reporting up-to-date knowledge about the environment. In this paper, we explore this use case and present an end-to-end framework that enables a mobile robot to answer natural language questions about the state of a large-scale, dynamic environment asked by the inhabitants of that environment. The system parses the question and estimates an initial viewpoint that is likely to contain information for answering the question based on prior environment knowledge. Then, it autonomously navigates towards the viewpoint while dynamically adapting to changes and new information. The output of the system is an image of the most relevant part of the environment that allows the user to obtain an answer to their question. We additionally demonstrate the benefits of a continuously operating information gathering robot by showing how the system can answer retrospective questions about the past state of the world using incidentally recorded sensory data. We evaluate our approach with a custom mobile robot deployed in a university building, with questions collected from occupants of the building. We demonstrate our system's ability to respond to these questions in different environmental conditions.

  • 76.
    Chung, Michael Jae-Yoon
    et al.
    University of Washington, Seattle.
    Pronobis, Andrzej
    University of Washington, Seattle.
    Cakmak, Maya
    University of Washington, Seattle.
    Fox, Dieter
    University of Washington, Seattle.
    Rao, Rajesh P. N.
    University of Washington, Seattle.
    Designing Information Gathering Robots for Human-Populated Environments (2015). In: Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'15), IEEE, 2015. Conference paper (Refereed)
    Abstract [en]

    Advances in mobile robotics have enabled robots that can autonomously operate in human-populated environments. Although primary tasks for such robots might be fetching, delivery, or escorting, they present an untapped potential as information gathering agents that can answer questions for the community of co-inhabitants. In this paper, we seek to better understand requirements for such information gathering robots (InfoBots) from the perspective of the user requesting the information. We present findings from two studies: (i) a user survey conducted in two office buildings and (ii) a 4-day long deployment in one of the buildings, during which inhabitants of the building could ask questions to an InfoBot through a web-based interface. These studies allow us to characterize the types of information that InfoBots can provide for their users.

  • 77.
    Chung, Michael Jae-Yoon
    et al.
    University of Washington, Seattle.
    Pronobis, Andrzej
    University of Washington, Seattle.
    Cakmak, Maya
    University of Washington, Seattle.
    Fox, Dieter
    University of Washington, Seattle.
    Rao, Rajesh P. N.
    University of Washington, Seattle.
    Exploring the Potential of Information Gathering Robots (2015). In: Proceedings of the 10th Annual ACM/IEEE International Conference on Human-Robot Interaction Extended Abstracts (HRI'15), ACM Digital Library, 2015. Conference paper (Refereed)
    Abstract [en]

    Autonomous mobile robots equipped with a number of sensors will soon be ubiquitous in human populated environments. In this paper we present an initial exploration into the potential of using such robots for information gathering. We present findings from a formative user survey and a 4-day long Wizard-of-Oz deployment of a robot that answers questions such as "Is there free food on the kitchen table?" Our studies allow us to characterize the types of information that InfoBots might be most useful for.

  • 78. Civera, Javier
    et al.
    Ciocarlie, Matei
    Aydemir, Alper
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bekris, Kostas
    Sarma, Sanjay
    Special Issue on Cloud Robotics and Automation (2015). In: IEEE Transactions on Automation Science and Engineering, ISSN 1545-5955, E-ISSN 1558-3783, Vol. 12, no. 2, p. 396-397. Article in journal (Other academic)
    Abstract [en]

    The articles in this special section focus on the use of cloud computing in the robotics industry. The Internet and the availability of vast computational resources, ever-growing data and storage capacity have the potential to define a new paradigm for robotics and automation. An intelligent system connected to the Internet can expand its onboard local data, computation and sensors with huge data repositories from similar and very different domains, massive parallel computation from server farms and sensor/actuator streams from other robots and automata. Both the potential and the research challenges of the field are the focus of this special section. The goal is to group together and show the state of the art of this newly emerged field, identify the relevant advances and topics, point out current lines of research and potential applications, and discuss the main research challenges and future work directions.

  • 79.
    Colledanchise, Michele
    et al.
    Istituto Italiano di Tecnologia - IIT, Genoa, Italy.
    Almeida, Diogo
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Ögren, Petter
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Towards Blended Reactive Planning and Acting using Behavior Trees (2019). In: 2019 International Conference on Robotics And Automation (ICRA), IEEE Robotics and Automation Society, 2019, p. 8839-8845. Conference paper (Refereed)
    Abstract [en]

    In this paper, we show how a planning algorithm can be used to automatically create and update a Behavior Tree (BT), controlling a robot in a dynamic environment. The planning part of the algorithm is based on the idea of back chaining. Starting from a goal condition we iteratively select actions to achieve that goal, and if those actions have unmet preconditions, they are extended with actions to achieve them in the same way. The fact that BTs are inherently modular and reactive makes the proposed solution blend acting and planning in a way that enables the robot to effectively react to external disturbances. If an external agent undoes an action the robot re-executes it without re-planning, and if an external agent helps the robot, it skips the corresponding actions, again without re-planning. We illustrate our approach in two different robotics scenarios.
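The back-chaining step described in the abstract, replacing an unmet goal condition with a Fallback over the condition and a Sequence that first achieves the action's preconditions, can be sketched as follows. This is a hypothetical minimal implementation of the idea, not the authors' code; node representations and names are illustrative.

```python
# Back chaining into a nested BT: a goal condition becomes
# Fallback(Condition(goal), Sequence(preconditions..., Action)),
# and each precondition is expanded recursively the same way.

def back_chain(goal, actions):
    """actions maps an action name to (preconditions, effects)."""
    achievers = [n for n, (pre, eff) in actions.items() if goal in eff]
    if not achievers:
        return ("Condition", goal)  # nothing can achieve it; just check it
    name = achievers[0]
    pre, _ = actions[name]
    subtree = ("Sequence",
               [back_chain(p, actions) for p in pre] + [("Action", name)])
    return ("Fallback", [("Condition", goal), subtree])

actions = {
    "grasp":    (["near_object"], ["holding"]),
    "approach": ([],              ["near_object"]),
}
tree = back_chain("holding", actions)
```

Because the resulting tree re-checks every condition on each tick, the reactivity described in the abstract comes for free: an undone action's condition fails again and the corresponding subtree simply re-runs.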

    Full text (pdf)
    fulltext
  • 80.
    Colledanchise, Michele
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Marzinotto, Alejandro
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Dimarogonas, Dimos V.
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik.
    Ögren, Petter
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    The advantages of using behavior trees in multi-robot systems (2016). In: 47th International Symposium on Robotics, ISR 2016, VDE Verlag GmbH, 2016, p. 23-30. Conference paper (Refereed)
    Abstract [en]

    Multi-robot teams offer possibilities of improved performance and fault tolerance, compared to single robot solutions. In this paper, we show how to realize those possibilities when starting from a single robot system controlled by a Behavior Tree (BT). By extending the single robot BT to a multi-robot BT, we are able to combine the fault tolerant properties of the BT, in terms of built-in fallbacks, with the fault tolerance inherent in multi-robot approaches, in terms of a faulty robot being replaced by another one. Furthermore, we improve performance by identifying and taking advantage of the opportunities of parallel task execution, that are present in the single robot BT. Analyzing the proposed approach, we present results regarding how mission performance is affected by minor faults (a robot losing one capability) as well as major faults (a robot losing all its capabilities).

  • 81.
    Colledanchise, Michele
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Marzinotto, Alejandro
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Ögren, Peter
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Performance Analysis of Stochastic Behavior Trees (2014). In: ICRA 2014, 2014. Conference paper (Refereed)
    Abstract [en]

    This paper presents a mathematical framework for performance analysis of Behavior Trees (BTs). BTs are a recent alternative to Finite State Machines (FSMs), for doing modular task switching in robot control architectures. By encoding the switching logic in a tree structure, instead of distributing it in the states of a FSM, modularity and reusability are improved.

    In this paper, we compute performance measures, such as success/failure probabilities and execution times, for plans encoded and executed by BTs. To do this, we first introduce Stochastic Behavior Trees (SBT), where we assume that the probabilistic performance measures of the basic action controllers are given. We then show how Discrete Time Markov Chains (DTMC) can be used to aggregate these measures from one level of the tree to the next. The recursive structure of the tree then enables us to step by step propagate such estimates from the leaves (basic action controllers) to the root (complete task execution). Finally, we verify our analytical results using massive Monte Carlo simulations, and provide an illustrative example of the results for a complex robotic task.
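As a simplified illustration of the leaf-to-root aggregation described above, success probabilities can be propagated through Sequence and Fallback nodes as below. This sketch assumes independent child outcomes and omits the execution-time part of the paper's DTMC analysis; it is not the authors' implementation.

```python
# Propagate success probabilities from leaves to the root of a BT.
# A Sequence succeeds only if all children succeed; a Fallback fails
# only if all children fail (independence assumed for this sketch).

def success_probability(node):
    kind, payload = node
    if kind == "Leaf":
        return payload                      # given leaf success probability
    child_ps = [success_probability(c) for c in payload]
    if kind == "Sequence":
        p = 1.0
        for cp in child_ps:
            p *= cp
        return p
    if kind == "Fallback":
        q = 1.0
        for cp in child_ps:
            q *= (1.0 - cp)
        return 1.0 - q
    raise ValueError(f"unknown node kind: {kind}")

tree = ("Fallback",
        [("Sequence", [("Leaf", 0.9), ("Leaf", 0.8)]),
         ("Leaf", 0.5)])
p = success_probability(tree)  # 1 - (1 - 0.9*0.8) * (1 - 0.5) = 0.86
```

The recursion mirrors the paper's step-by-step propagation from basic action controllers at the leaves up to the complete task at the root.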

  • 82.
    Colledanchise, Michele
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Ögren, Petter
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    How Behavior Trees Modularize Hybrid Control Systems and Generalize Sequential Behavior Compositions, the Subsumption Architecture, and Decision Trees (2017). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 33, no. 2, p. 372-389. Article in journal (Refereed)
    Full text (pdf)
    fulltext
  • 83. Colledanchise, Michele
    et al.
    Parasuraman, Ramviyas Nattanmai
    Ögren, Petter
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Learning of Behavior Trees for Autonomous Agents (2018). In: IEEE Transactions on Games, ISSN 2475-1502. Article in journal (Refereed)
    Abstract [en]

    In this paper, we study the problem of automatically synthesizing a successful Behavior Tree (BT) in an a-priori unknown dynamic environment. Starting with a given set of behaviors, a reward function, and sensing in terms of a set of binary conditions, the proposed algorithm incrementally learns a switching structure in terms of a BT, that is able to handle the situations encountered. Exploiting the fact that BTs generalize And-Or-Trees and also provide very natural chromosome mappings for genetic programming, we combine the long term performance of Genetic Programming with a greedy element and use the And-Or analogy to limit the size of the resulting structure. Finally, earlier results on BTs enable us to provide certain safety guarantees for the resulting system. Using the testing environment Mario AI we compare our approach to alternative methods for learning BTs and Finite State Machines. The evaluation shows that the proposed approach generated solutions with better performance, and often fewer nodes than the other two methods.

  • 84.
    Cornelius, Hugo
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Eklundh, Jan-Olof
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Object and pose recognition using contour and shape information (2005). In: 2005 12th International Conference on Advanced Robotics, New York, NY: IEEE, 2005, p. 613-620. Conference paper (Refereed)
    Abstract [en]

    Object recognition and pose estimation are of significant importance for robotic visual servoing, manipulation and grasping tasks. Traditionally, contour and shape based methods have been considered as most adequate for estimating stable and feasible grasps, [1]. More recently, a new research direction has been advocated in visual servoing where image moments are used to define a suitable error function to be minimized. Compared to appearance based methods, contour and shape based approaches are also suitable for use with range sensors such as, for example, lasers. In this paper, we evaluate a contour based object recognition system building on the method in [2], suitable for objects of uniform color properties such as cups, cutlery, fruits etc. This system is one of the building blocks of a more complex object recognition system based both on stereo and appearance cues, [3]. The system has a significant potential both in terms of service robot and programming by demonstration tasks. Experimental evaluation shows promising results in terms of robustness to occlusion and noise.

  • 85.
    Correia, Filipa
    et al.
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal..
    Mascarenhas, Samuel F.
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal..
    Gomes, Samuel
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal..
    Arriaga, Patricia
    CIS IUL, Inst Univ Lisboa ISCTE IUL, Lisbon, Portugal..
    Leite, Iolanda
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Prada, Rui
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal..
    Melo, Francisco S.
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal..
    Paiva, Ana
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal..
    Exploring Prosociality in Human-Robot Teams (2019). In: HRI '19: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction, IEEE, 2019, p. 143-151. Conference paper (Refereed)
    Abstract [en]

    This paper explores the role of prosocial behaviour when people team up with robots in a collaborative game that presents a social dilemma similar to a public goods game. An experiment was conducted with the proposed game in which each participant joined a team with a prosocial robot and a selfish robot. During 5 rounds of the game, each player chooses between contributing to the team goal (cooperate) or contributing to his individual goal (defect). The prosociality level of the robots only affects their strategies to play the game, as one always cooperates and the other always defects. We conducted a user study at the office of a large corporation with 70 participants where we manipulated the game result (winning or losing) in a between-subjects design. Results revealed two important considerations: (1) the prosocial robot was rated more positively in terms of its social attributes than the selfish robot, regardless of the game result; (2) the perception of competence, the responsibility attribution (blame/credit), and the preference for a future partner revealed significant differences only in the losing condition. These results yield important concerns for the creation of robotic partners, the understanding of group dynamics and, from a more general perspective, the promotion of a prosocial society.

  • 86.
    Cruciani, Silvia
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Almeida, Diogo
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL. KTH.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Karayiannidis, Yiannis
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Discrete Bimanual Manipulation for Wrench Balancing. Manuscript (preprint) (Other academic)
    Abstract [en]

    Dual-arm robots can overcome the grasping force and payload limitations of a single arm by jointly grasping an object. However, if the mass of the grasped object is not evenly distributed, each arm will experience different wrenches, which can exceed its payload limits. In this work, we consider the problem of balancing the wrenches experienced by a dual-arm robot grasping a rigid tray. The distribution of wrenches among the robot arms changes as objects are placed on the tray. We present an approach to reduce the wrench imbalance among arms through discrete bimanual manipulation. Our approach is based on sequential sliding motions of the grasp points on the surface of the object, to attain a more balanced configuration. This is achieved in a discrete manner, one arm at a time, to minimize the potential for undesirable object motion during execution. We validate our modeling approach and system design through a set of robot experiments.

    Full text (pdf)
    fulltext
  • 87.
    Cruciani, Silvia
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Hang, Kaiyu
    Yale University.
    Smith, Christian
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Dual-Arm In-Hand Manipulation Using Visual Feedback (2019). Conference paper (Refereed)
    Abstract [en]

    In this work, we address the problem of executing in-hand manipulation based on visual input. Given an initial grasp, the robot has to change its grasp configuration without releasing the object. We propose a method for in-hand manipulation planning and execution based on information on the object’s shape using a dual-arm robot. From the available information on the object, which can be a complete point cloud but also partial data, our method plans a sequence of rotations and translations to reconfigure the object’s pose. This sequence is executed using non-prehensile pushes defined as relative motions between the two robot arms.

  • 88.
    Cruciani, Silvia
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL. KTH Royal Inst Technol, Div Robot Percept & Learning, EECS, S-11428 Stockholm, Sweden..
    Sundaralingam, Balakumar
    Univ Utah, Robot Ctr, Salt Lake City, UT 84112 USA.;Univ Utah, Sch Comp, Salt Lake City, UT 84112 USA..
    Hang, Kaiyu
    Yale Univ, Dept Mech Engn & Mat Sci, New Haven, CT 06520 USA..
    Kumar, Vikash
    Google AI, San Francisco, CA 94110 USA..
    Hermans, Tucker
    Univ Utah, Robot Ctr, Salt Lake City, UT 84112 USA.;Univ Utah, Sch Comp, Salt Lake City, UT 84112 USA.;NVIDIA Res, Santa Clara, CA USA..
    Kragic, Danica
    KTH, Tidigare Institutioner (före 2005), Numerisk analys och datalogi, NADA. KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL. KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma system, CAS. KTH Royal Inst Technol, Div Robot Percept & Learning, EECS, S-11428 Stockholm, Sweden..
    Benchmarking In-Hand Manipulation (2020). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 5, no. 2, p. 588-595. Article in journal (Refereed)
    Abstract [en]

    The purpose of this benchmark is to evaluate the planning and control aspects of robotic in-hand manipulation systems. The goal is to assess the system's ability to change the pose of a hand-held object using the fingers, the environment, or a combination of both. Given an object surface mesh from the YCB data-set, we provide examples of initial and goal states (i.e. static object poses and fingertip locations) for various in-hand manipulation tasks. We further propose metrics that measure the error in reaching the goal state from a specific initial state, which, when aggregated across all tasks, also serve as a measure of the system's in-hand manipulation capability. We provide supporting software, task examples, and evaluation results associated with the benchmark.

  • 89.
    Dani, Ashwin
    et al.
    University of Illinois at Urbana-Champaign (UIUC).
    Panahandeh, Ghazaleh
    KTH, Skolan för elektro- och systemteknik (EES), Signalbehandling. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Chung, Soon-Jo
    University of Illinois at Urbana-Champaign (UIUC).
    Hutchinson, Seth
    University of Illinois at Urbana-Champaign (UIUC).
    Image Moments for Higher-Level Feature Based Navigation (2013). In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2013, pp. 602-609. Conference paper (Peer reviewed)
    Abstract [en]

    This paper presents a novel vision-based localization and mapping algorithm using image moments of region features. The environment is represented using regions, such as planes and/or 3D objects, instead of only a dense set of feature points. The regions can be uniquely defined using a small number of parameters; e.g., a plane can be completely characterized by its normal vector and its distance to a local coordinate frame attached to the plane. The variation of image moments of the regions in successive images can be related to the parameters of the regions. Instead of tracking a large number of feature points, variations of image moments of regions can be computed by tracking the segmented regions or a few feature points on the objects in successive images. A map represented by regions can be characterized using a minimal set of parameters. The problem is formulated as a nonlinear filtering problem. A new discrete-time nonlinear filter based on the state-dependent coefficient (SDC) form of nonlinear functions is presented. It is shown via Monte-Carlo simulations that the new nonlinear filter is more accurate and consistent than the EKF, as evaluated by the root-mean-squared error (RMSE) and the normalized estimation error squared (NEES).
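    The RMSE and NEES criteria used for the Monte-Carlo comparison are standard filter-evaluation statistics. A minimal sketch of how they are computed over M runs (array shapes and names are our own assumptions):

```python
import numpy as np

def rmse(errors):
    """Root-mean-squared error over Monte-Carlo runs.
    errors: (M, n) array of estimation errors x_true - x_hat per run."""
    errors = np.asarray(errors, dtype=float)
    return np.sqrt(np.mean(np.sum(errors ** 2, axis=1)))

def nees(errors, covariances):
    """Average Normalized Estimation Error Squared.
    covariances: (M, n, n) filter covariance per run. For a consistent
    filter the average NEES is close to the state dimension n
    (chi-square distributed with n degrees of freedom)."""
    errors = np.asarray(errors, dtype=float)
    vals = [e @ np.linalg.solve(P, e) for e, P in zip(errors, covariances)]
    return float(np.mean(vals))
```

    A filter whose average NEES is far above the state dimension is over-confident; far below, it is pessimistic.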

  • 90. Danielsson, O.
    et al.
    Syberfeldt, A.
    Brewster, R.
    Wang, Lihui
    KTH, Skolan för industriell teknik och management (ITM), Industriell produktion, Produktionssystem.
    Assessing Instructions in Augmented Reality for Human-robot Collaborative Assembly by Using Demonstrators (2017). In: Manufacturing Systems 4.0 – Proceedings of the 50th CIRP Conference on Manufacturing Systems, Elsevier, 2017, Vol. 63, pp. 89-94. Conference paper (Peer reviewed)
    Abstract [en]

    Robots are becoming more adaptive and aware of their surroundings. This has opened up the research area of tight human-robot collaboration, where humans and robots work directly interconnected rather than in separate cells. The manufacturing industry is in constant need of developing new products, which means that operators constantly need to learn new ways of manufacturing. If instructions to operators and interaction between operators and robots can be virtualized, they have the potential to be more modifiable and more available to the operators. Augmented Reality has previously been shown to be effective in giving operators assembly instructions, but there are still knowledge gaps regarding evaluation and general design guidelines. This paper has two aims. First, it assesses whether demonstrators can be used to simulate human-robot collaboration. Second, it assesses whether Augmented Reality-based interfaces can guide test persons through a previously unknown assembly procedure. The long-term goal of the demonstrator is to function as a test module for how to efficiently instruct operators collaborating with a robot. Pilot tests have shown that Augmented Reality instructions can give enough information for untrained workers to perform simple assembly tasks in which parts of the steps are done in direct collaboration with a robot. Misunderstandings of the instructions by the test persons led to multiple errors during assembly, so future research is needed on how to design instructions efficiently.

  • 91. Deghat, M.
    et al.
    Davis, E.
    See, T.
    Shames, Iman
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Anderson, B. D. O.
    Yu, C.
    Target localization and circumnavigation by a non-holonomic robot (2012). In: Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, 2012, pp. 1227-1232. Conference paper (Peer reviewed)
    Abstract [en]

    This paper addresses a surveillance problem in which the goal is to achieve a circular motion around a target by a non-holonomic agent. The agent only knows its own position with respect to its initial frame, and the bearing angle of the target in that frame. It is assumed that the position of the target is unknown. An estimator and a controller are proposed to estimate the position of the target and make the agent move on a circular trajectory with a desired radius around it. The performance of the proposed algorithm is verified both through simulations and experiments. Robustness is also established in the face of noise and target motion.
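    The paper couples a target-position estimator with a non-holonomic controller; neither is reproduced here. Purely as a hedged illustration of the circumnavigation objective, the sketch below steers a single-integrator agent (so the non-holonomic constraint of the paper is ignored) onto a circle of the desired radius around a known target; gains and names are our own:

```python
import numpy as np

def circumnavigate(p0, target, radius, k=1.0, speed=1.0, dt=0.01, steps=2000):
    """Drive a single-integrator agent onto a circular orbit of `radius`
    around `target`. The velocity is a tangential component (for
    circulation) plus a radial component correcting the distance error,
    so d/dt dist = -k * (dist - radius): dist converges exponentially."""
    p = np.asarray(p0, dtype=float)
    t = np.asarray(target, dtype=float)
    for _ in range(steps):
        rel = p - t
        dist = np.linalg.norm(rel)
        radial = rel / dist                          # unit vector target -> agent
        tangent = np.array([-radial[1], radial[0]])  # 90-degree rotation
        v = speed * tangent - k * (dist - radius) * radial
        p = p + dt * v
    return p

# Start 5 m from the target; settle onto a 2 m orbit.
final = circumnavigate([5.0, 0.0], [0.0, 0.0], radius=2.0)
```

    The paper's contribution is precisely what this toy omits: the target position is unknown and must be estimated from bearing measurements while respecting the unicycle kinematics.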

  • 92.
    Detry, Renaud
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Madry, Marianna
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Learning a dictionary of prototypical grasp-predicting parts from grasping experience (2013). In: 2013 IEEE International Conference on Robotics and Automation (ICRA), New York: IEEE, 2013, pp. 601-608. Conference paper (Peer reviewed)
    Abstract [en]

    We present a real-world robotic agent that is capable of transferring grasping strategies across objects that share similar parts. The agent transfers grasps across objects by identifying, from examples provided by a teacher, parts by which objects are often grasped in a similar fashion. It then uses these parts to identify grasping points onto novel objects. We focus our report on the definition of a similarity measure that reflects whether the shapes of two parts resemble each other, and whether their associated grasps are applied near one another. We present an experiment in which our agent extracts five prototypical parts from thirty-two real-world grasp examples, and we demonstrate the applicability of the prototypical parts for grasping novel objects.
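    The similarity measure described above must score high only when two parts both resemble each other in shape and carry grasps applied near one another. The paper defines its own measure; as an illustrative stand-in, a product of two Gaussian kernels has this conjunctive behaviour (kernel form and parameter names are our own):

```python
import math

def part_similarity(shape_dist, grasp_dist, sigma_shape=1.0, sigma_grasp=1.0):
    """Combine a shape dissimilarity and a grasp-placement distance into
    one similarity in (0, 1]. Because the two kernels are multiplied,
    a large distance in EITHER term drives the similarity toward zero."""
    s_shape = math.exp(-(shape_dist ** 2) / (2 * sigma_shape ** 2))
    s_grasp = math.exp(-(grasp_dist ** 2) / (2 * sigma_grasp ** 2))
    return s_shape * s_grasp
```

    Clustering parts under such a measure would group examples that agree in both shape and grasp placement, which is the precondition for extracting prototypical grasp-predicting parts.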

  • 93. Do, Martin
    et al.
    Romero, Javier
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kjellström, Hedvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Azad, Pedram
    Asfour, Tamim
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Dillman, Rüdiger
    Grasp recognition and mapping on humanoid robots (2009). In: 9th IEEE-RAS International Conference on Humanoid Robots, HUMANOIDS09, 2009, pp. 465-471. Conference paper (Peer reviewed)
  • 94. Donde, Shrinish
    Optimal Robot Localisation Techniques for Real World Scenarios (2019). Conference paper (Other academic)
    Abstract [en]

    This paper is a thorough review of our research on robot localisation. It attempts to categorize the available techniques for accurately locating robots in different environments and to discuss the best methods for localising such robots, or in some cases an unmanned vehicle. The paper is divided into three real-world problem statements, namely underwater, indoor, and space. For each, a thorough analysis identifies the best technique based on a number of parameters such as cost, accuracy, efficiency, ease of implementation, and the environmental conditions around the robot. These problem statements were selected because of recent research trends in these domains. Each technique was chosen after discarding several previously applied techniques, and an effective approach is put forward for localising robots in the different areas.

  • 95.
    Drimus, Alin
    et al.
    Mads Clausen Institute for Product Innovation, University of Southern Denmark.
    Kootstra, Gert
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bilberg, A.
    Mads Clausen Institute for Product Innovation, University of Southern Denmark.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Classification of Rigid and Deformable Objects Using a Novel Tactile Sensor (2011). In: Proceedings of the 15th International Conference on Advanced Robotics (ICAR), IEEE, 2011, pp. 427-434. Conference paper (Peer reviewed)
    Abstract [en]

    In this paper, we present a novel tactile-array sensor for use in robotic grippers based on flexible piezoresistive rubber. We start by describing the physical principles of piezoresistive materials, and continue by outlining how to build a flexible tactile-sensor array using conductive thread electrodes. A real-time acquisition system scans the data from the array which is then further processed. We validate the properties of the sensor in an application that classifies a number of household objects while performing a palpation procedure with a robotic gripper. Based on the haptic feedback, we classify various rigid and deformable objects. We represent the array of tactile information as a time series of features and use this as the input for a k-nearest neighbors classifier. Dynamic time warping is used to calculate the distances between different time series. The results from our novel tactile sensor are compared to results obtained from an experimental setup using a Weiss Robotics tactile sensor with similar characteristics. We conclude by exemplifying how the results of the classification can be used in different robotic applications.
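    The classifier described above is k-nearest neighbors with dynamic time warping (DTW) as the distance between tactile time series. A minimal sketch of that combination (the paper's tactile feature extraction is omitted; each series is reduced here to one scalar per time step):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D time series,
    computed with the standard O(len(a) * len(b)) dynamic program."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: insertion, deletion, match.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_classify(query, train_series, train_labels, k=1):
    """Label a query series by majority vote among its k nearest
    training series under the DTW distance."""
    dists = [dtw_distance(query, s) for s in train_series]
    nearest = np.argsort(dists)[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)
```

    DTW is used rather than a pointwise distance because palpation traces of the same object class can differ in speed and duration; warping aligns them before comparison.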

  • 96. Drimus, Alin
    et al.
    Kootstra, Gert
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bilberg, Arne
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Design of a flexible tactile sensor for classification of rigid and deformable objects (2014). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 62, no. 1, pp. 3-15. Journal article (Peer reviewed)
    Abstract [en]

    For both humans and robots, tactile sensing is important for interaction with the environment: it is the core sensing used for exploration and manipulation of objects. In this paper, we present a novel tactile-array sensor based on flexible piezoresistive rubber. We describe the design of the sensor and data acquisition system. We evaluate the sensitivity and robustness of the sensor, and show that it is consistent over time with little relaxation. Furthermore, the sensor has the benefit of being flexible, having a high resolution, it is easy to mount, and simple to manufacture. We demonstrate the use of the sensor in an active object-classification system. A robotic gripper with two sensors mounted on its fingers performs a palpation procedure on a set of objects. By squeezing an object, the robot actively explores the material properties, and the system acquires tactile information corresponding to the resulting pressure. Based on a k nearest neighbor classifier and using dynamic time warping to calculate the distance between different time series, the system is able to successfully classify objects. Our sensor demonstrates similar classification performance to the Weiss Robotics tactile sensor, while having additional benefits.

  • 97. Duerr, Hans-Bernd
    et al.
    Stankovic, Milos S.
    Johansson, Karl Henrik
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Ebenbauer, Christian
    Extremum seeking on submanifolds in the Euclidian space (2014). In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 50, no. 10, pp. 2591-2596. Journal article (Peer reviewed)
    Abstract [en]

    Extremum seeking is a powerful control method to steer a dynamical system to an extremum of a partially unknown function. In this paper, we introduce extremum seeking systems on submanifolds in the Euclidean space. Using a trajectory approximation technique based on Lie brackets, we prove that uniform asymptotic stability of the so-called Lie bracket system on the manifold implies practical uniform asymptotic stability of the corresponding extremum seeking system on the manifold. We illustrate the approach with an example of extremum seeking on a torus.
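    The paper's setting is submanifolds, which is not reproduced here. As a flat-space illustration of the Lie bracket averaging idea only: for the scalar scheme dx/dt = sqrt(w)*cos(w t)*alpha + sqrt(w)*sin(w t)*beta*J(x), the Lie bracket system is dz/dt = (alpha*beta/2)*J'(z), i.e. gradient ascent on J, so for large w the trajectory settles near a maximizer. The simulation below is our own sketch under those assumptions, not the paper's manifold construction:

```python
import math

def extremum_seek(J, x0, omega=100.0, alpha=1.0, beta=1.0, dt=1e-3, T=10.0):
    """Euler simulation of the scalar extremum seeking system
    dx/dt = sqrt(w)*cos(w t)*alpha + sqrt(w)*sin(w t)*beta*J(x).
    Its Lie bracket (averaged) system is dz/dt = (alpha*beta/2)*J'(z),
    so the trajectory climbs toward a maximum of J, oscillating with
    amplitude on the order of 1/sqrt(omega)."""
    x, t = x0, 0.0
    sw = math.sqrt(omega)
    while t < T:
        x += dt * (sw * math.cos(omega * t) * alpha
                   + sw * math.sin(omega * t) * beta * J(x))
        t += dt
    return x

# Seek the maximum of J(x) = -(x - 2)^2, which lies at x = 2,
# using only evaluations of J -- no gradient information.
x_final = extremum_seek(lambda x: -(x - 2.0) ** 2, x0=0.0)
```

    Note that only function values of J enter the dynamics; the gradient ascent emerges from the averaging, which is the point of the Lie bracket analysis.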

  • 98.
    Ekekrantz, Johan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Pronobis, Andrzej
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Folkesson, John
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Adaptive Iterative Closest Keypoint (2013). In: 2013 European Conference on Mobile Robots, ECMR 2013 - Conference Proceedings, New York: IEEE, 2013, pp. 80-87. Conference paper (Peer reviewed)
    Abstract [en]

    Finding accurate correspondences between overlapping 3D views is crucial for many robotic applications, from multi-view 3D object recognition to SLAM. This step, often referred to as view registration, plays a key role in determining the overall system performance. In this paper, we propose a fast and simple method for registering RGB-D data, building on the principle of the Iterative Closest Point (ICP) algorithm. In contrast to ICP, our method exploits both point position and visual appearance and is able to smoothly transition the weighting between them with an adaptive metric. This results in robust initial registration based on appearance and accurate final registration using 3D points. Using keypoint clustering we are able to utilize a non-exhaustive search strategy, reducing the runtime of the algorithm significantly. We show through an evaluation on an established benchmark that the method significantly outperforms current methods in both robustness and precision.
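    The key idea is a single correspondence metric that blends 3D position with visual appearance and shifts weight between the two as registration proceeds. The paper's weighting schedule is not reproduced here; the sketch below shows only the blended metric itself, with our own names and layout (each keypoint row = 3D position followed by an appearance descriptor):

```python
import numpy as np

def blended_distances(src, dst, w):
    """Pairwise correspondence costs between two keypoint sets.
    Each row is (x, y, z, f1, ..., fk): 3-D position plus an appearance
    descriptor. w in [0, 1] weights the positional term, (1 - w) the
    appearance term. Early iterations would use small w (appearance
    drives matching); late iterations w near 1 (geometry drives it)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    d_pos = np.linalg.norm(src[:, None, :3] - dst[None, :, :3], axis=2)
    d_app = np.linalg.norm(src[:, None, 3:] - dst[None, :, 3:], axis=2)
    return w * d_pos + (1.0 - w) * d_app

def match(src, dst, w):
    """Nearest-neighbour correspondence index in dst for each src row."""
    return np.argmin(blended_distances(src, dst, w), axis=1)
```

    Sweeping w from 0 toward 1 across iterations reproduces the transition the abstract describes: an appearance-driven initial alignment that hands over to geometry for the final refinement.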

  • 99.
    Ekekrantz, Johan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Thippur, Akshaya
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Folkesson, John
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC).
    Probabilistic Primitive Refinement algorithm for colored point cloud data (2015). In: 2015 European Conference on Mobile Robots (ECMR), Lincoln: IEEE conference proceedings, 2015. Conference paper (Peer reviewed)
    Abstract [en]

    In this work we present the Probabilistic Primitive Refinement (PPR) algorithm, an iterative method for accurately determining the inliers of an estimated primitive (such as a plane or sphere) parametrization in an unorganized, noisy point cloud. The measurement noise of the points belonging to the proposed primitive surface is modelled using a Gaussian distribution, and the measurements of points extraneous to the proposed surface are modelled as a histogram. Given these models, the probability that a measurement originated from the proposed surface model can be computed. Our novel technique to model the noisy surface from the measurement data does not require a priori given parameters for the sensor noise model. The absence of sensitive parameter selection is a strength of our method. Using the geometric information obtained from such an estimate, the algorithm then builds a color-based model for the surface, further boosting the accuracy of the segmentation. Used iteratively, the PPR algorithm can be seen as a variation of the popular mean-shift algorithm with an adaptive stochastic kernel function.
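    The core probabilistic step can be sketched in one dimension over the point-to-surface distance: a Gaussian likelihood for points generated by the surface, a histogram likelihood for extraneous points, and Bayes' rule for the per-point inlier probability. The function below is our own minimal version under those assumptions (the prior and histogram handling are not taken from the paper):

```python
import math

def inlier_probability(dist, sigma, outlier_hist, bin_width, prior=0.5):
    """Probability that a point at signed distance `dist` from the
    proposed primitive was generated by the surface.
    Surface model:  zero-mean Gaussian with std `sigma`.
    Outlier model:  normalized histogram of |dist| with bins of width
                    `bin_width` (mass per bin; converted to a density).
    `prior` is the prior probability of being an inlier."""
    p_in = math.exp(-dist ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
    idx = min(int(abs(dist) / bin_width), len(outlier_hist) - 1)
    p_out = outlier_hist[idx] / bin_width  # histogram mass -> density
    num = prior * p_in
    return num / (num + (1.0 - prior) * p_out)
```

    Iterating this step, re-estimating the primitive from the current soft inliers and re-scoring, gives the mean-shift-like refinement loop the abstract describes.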

  • 100.
    Ekvall, Staffan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Integrating active mobile robot object recognition and SLAM in natural environments (2006). In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-12, New York: IEEE, 2006, pp. 5792-5797. Conference paper (Peer reviewed)
    Abstract [en]

    Linking semantic and spatial information has become an important research area in robotics since, for robots interacting with humans and performing tasks in natural environments, it is of foremost importance to be able to reason beyond simple geometrical and spatial levels. In this paper, we consider this problem in a service robot scenario where a mobile robot autonomously navigates in a domestic environment, builds a map as it moves along, localizes its position in it, recognizes objects on its way and puts them in the map. The experimental evaluation is performed in a realistic setting where the main concentration is put on the synergy of object recognition and Simultaneous Localization and Mapping systems.
