Results 101 - 150 of 416

  • 101.
    Ekvall, Staffan
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Integrating object and grasp recognition for dynamic scene interpretation (2005). In: 2005 12th International Conference on Advanced Robotics, New York, NY: IEEE, 2005, p. 331-336. Conference paper (Refereed)
    Abstract [en]

    Understanding and interpreting dynamic scenes and activities is a very challenging problem. In this paper we present a system capable of learning robot tasks from demonstration. Classical robot task programming requires an experienced programmer and a lot of tedious work. In contrast, Programming by Demonstration is a flexible framework that reduces the complexity of programming robot tasks, and allows end-users to demonstrate the tasks instead of writing code. We present our recent steps towards this goal. A system for learning pick-and-place tasks by manually demonstrating them is presented. Each demonstrated task is described by an abstract model involving a set of simple tasks such as what object is moved, where it is moved, and which grasp type was used to move it.
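
    As a rough illustration of the task abstraction the abstract describes, the sketch below encodes one demonstrated pick-and-place action as (object, target location, grasp type). All names and fields are hypothetical, not taken from the paper.

    from dataclasses import dataclass

    @dataclass
    class PickAndPlaceTask:
        """Abstract model of one demonstrated pick-and-place subtask."""
        object_id: str        # what object is moved
        target_location: str  # where it is moved
        grasp_type: str       # which grasp type was used

    # A full demonstration can then be summarized as a sequence of subtasks:
    demo = [PickAndPlaceTask("cup", "shelf", "power_grasp"),
            PickAndPlaceTask("spoon", "drawer", "precision_grasp")]
    for t in demo:
        print(f"move {t.object_id} to {t.target_location} using {t.grasp_type}")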

  • 102.
    Ekvall, Staffan
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Object detection and mapping for service robot tasks (2007). In: Robotica (Cambridge. Print), ISSN 0263-5747, E-ISSN 1469-8668, Vol. 25, p. 175-187. Article in journal (Refereed)
    Abstract [en]

    The problem studied in this paper is a mobile robot that autonomously navigates in a domestic environment, builds a map as it moves along and localizes its position in it. In addition, the robot detects predefined objects, estimates their position in the environment and integrates this with the localization module to automatically put the objects in the generated map. Thus, we demonstrate one of the possible strategies for the integration of spatial and semantic knowledge in a service robot scenario where a simultaneous localization and mapping (SLAM) and object detection/recognition system work in synergy to provide a richer representation of the environment than would be possible with either of the methods alone. Most SLAM systems build maps that are only used for localizing the robot. Such maps are typically based on grids or different types of features such as points and lines. The novelty is the augmentation of this process with an object-recognition system that detects objects in the environment and puts them in the map generated by the SLAM system. The metric map is also split into topological entities corresponding to rooms. In this way, the user can command the robot to retrieve a certain object from a certain room. We present the results of map building and an extensive evaluation of the object detection algorithm performed in an indoor setting.
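
    The core geometric step of putting a detected object into the map is expressing the detection, made in the robot frame, in the map frame using the robot's current pose estimate. A minimal 2-D sketch assuming a planar pose (x, y, theta); this is generic SLAM bookkeeping, not the paper's specific pipeline.

    import numpy as np

    def object_to_map(robot_pose, obj_in_robot):
        """Transform an object position from the robot frame to the map frame."""
        x, y, theta = robot_pose          # pose estimate from the SLAM module
        c, s = np.cos(theta), np.sin(theta)
        ox, oy = obj_in_robot             # detection relative to the robot
        return (x + c * ox - s * oy, y + s * ox + c * oy)

    # Robot at (2, 3) heading 90 degrees sees an object 1 m straight ahead:
    print(object_to_map((2.0, 3.0, np.pi / 2), (1.0, 0.0)))  # -> (2.0, 4.0)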

  • 103.
    Engelhardt, Sara
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Hansson, Emmeli
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Leite, Iolanda
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Better faulty than sorry: Investigating social recovery strategies to minimize the impact of failure in human-robot interaction (2017). In: WCIHAI 2017 Workshop on Conversational Interruptions in Human-Agent Interactions: Proceedings of the First Workshop on Conversational Interruptions in Human-Agent Interactions, co-located with the 17th International Conference on Intelligent Virtual Agents (IVA 2017), Stockholm, Sweden, August 27, 2017, CEUR-WS, 2017, Vol. 1943, p. 19-27. Conference paper (Refereed)
    Abstract [en]

    Failure happens in most social interactions, possibly even more so in interactions between a robot and a human. This paper investigates different failure recovery strategies that robots can employ to minimize the negative effect on people's perception of the robot. A between-subject Wizard-of-Oz experiment with 33 participants was conducted in a scenario where a robot and a human play a collaborative game. The interaction was mainly speech-based and controlled failures were introduced at specific moments. Three types of recovery strategies were investigated, one in each experimental condition: ignore (the robot ignores that a failure has occurred and moves on with the task), apology (the robot apologizes for failing and moves on) and problem-solving (the robot tries to solve the problem with the help of the human). Our results show that the apology-based strategy scored the lowest on measures such as likeability and perceived intelligence, and that the ignore strategy led to better perceptions of perceived intelligence and animacy than the other recovery strategies.

  • 104. Faeulhammer, Thomas
    et al.
    Ambrus, Rares
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Burbridge, Christopher
    Zillich, Michael
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hawes, Nick
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Vincze, Marcus
    Autonomous Learning of Object Models on a Mobile Robot (2017). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 2, no 1, p. 26-33, article id 7393491. Article in journal (Refereed)
    Abstract [en]

    In this article we present and evaluate a system which allows a mobile robot to autonomously detect, model and re-recognize objects in everyday environments. Whilst other systems have demonstrated one of these elements, to our knowledge we present the first system which is capable of doing all of these things, all without human interaction, in normal indoor scenes. Our system detects objects to learn by modelling the static part of the environment and extracting dynamic elements. It then creates and executes a view plan around a dynamic element to gather additional views for learning. Finally these views are fused to create an object model. The performance of the system is evaluated on publicly available datasets as well as on data collected by the robot in both controlled and uncontrolled scenarios.

  • 105.
    Fallon, Maurice F.
    et al.
    MIT.
    Johannsson, Hordur
    MIT.
    Kaess, Michael
    MIT.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    McClelland, Hunter
    MIT.
    Englot, Brendan J.
    MIT.
    Hover, Franz S.
    MIT.
    Leonard, John J.
    MIT.
    Simultaneous Localization and Mapping in Marine Environments (2013). In: Marine Robot Autonomy, New York: Springer, 2013, p. 329-372. Chapter in book (Refereed)
    Abstract [en]

    Accurate navigation is a fundamental requirement for robotic systems—marine and terrestrial. For an intelligent autonomous system to interact effectively and safely with its environment, it needs to accurately perceive its surroundings. While traditional dead-reckoning filtering can achieve extremely low drift rates, the localization accuracy decays monotonically with distance traveled. Other approaches (such as external beacons) can help; nonetheless, the typical prerogative is to remain at a safe distance and to avoid engaging with the environment. In this chapter we discuss alternative approaches which utilize onboard sensors so that the robot can estimate the location of sensed objects and use these observations to improve its own navigation as well as its perception of the environment. This approach allows for meaningful interaction and autonomy. Three motivating autonomous underwater vehicle (AUV) applications are outlined herein. The first fuses external range sensing with relative sonar measurements. The second application localizes relative to a prior map so as to revisit a specific feature, while the third builds an accurate model of an underwater structure which is consistent and complete. In particular we demonstrate that each approach can be abstracted to a core problem of incremental estimation within a sparse graph of the AUV’s trajectory and the locations of features of interest which can be updated and optimized in real time on board the AUV.
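
    The chapter's unifying idea, incremental estimation over a sparse graph of the trajectory and sensed features, can be miniaturized as linear least squares. The toy below solves a 1-D pose graph with odometry factors and one loop-closure-like factor; the real systems solve the full 6-DOF problem incrementally on board, so this only sketches the structure.

    import numpy as np

    # Factors (i, j, z): measurement x_j - x_i = z. Three odometry links
    # plus one long-range constraint between pose 0 and pose 3.
    factors = [(0, 1, 1.0), (1, 2, 1.1), (2, 3, 0.9), (0, 3, 3.05)]
    n = 4
    A = np.zeros((len(factors) + 1, n))
    b = np.zeros(len(factors) + 1)
    for row, (i, j, z) in enumerate(factors):
        A[row, i], A[row, j], b[row] = -1.0, 1.0, z
    A[-1, 0], b[-1] = 1.0, 0.0       # prior anchoring the first pose at 0

    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(x)                         # least-squares trajectory estimate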

  • 106. Farhadi, H.
    et al.
    Atai, J.
    Skoglund, Mikael
    KTH, School of Electrical Engineering (EES), Communication Theory. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Nadimi, E. S.
    Pahlavan, K.
    Tarokh, V.
    An adaptive localization technique for wireless capsule endoscopy (2016). In: International Symposium on Medical Information and Communication Technology, ISMICT, IEEE Computer Society, 2016. Conference paper (Refereed)
    Abstract [en]

    Wireless capsule endoscopy (WCE) is an emerging technique to enhance gastroenterologists' information about the patient's gastrointestinal (G.I.) tract. Localization of the capsule inside the human body is an active area of research, and can be thought of as a sub-domain of the micro- and bio-robotics fields. If the capsule and micro-robot localization problem in the human body is solved, it may potentially lead to less invasive treatments for G.I. diseases and other micro-robot-assisted medical procedures. Several approaches have been investigated by researchers to estimate the capsule location. The proposed solutions are mainly static and thus prone to changes in the propagation medium. We propose an adaptive algorithm based on the expectation maximization technique for capsule localization. The proposed algorithm adaptively updates the estimated location based on the received radio frequency (RF) signal measurements.
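
    A concrete, if simplified, flavor of RF-based capsule localization: invert an assumed log-distance path-loss model to turn RSSI into ranges, then refine a position by Gauss-Newton. The anchor layout and channel parameters (P0, n_pl) are invented; the paper's contribution is an EM scheme that would additionally re-estimate such parameters online, which is not reproduced here.

    import numpy as np

    anchors = np.array([[0.0, 0.0], [0.4, 0.0], [0.0, 0.5], [0.4, 0.5]])
    rssi = np.array([-35.5, -41.7, -40.0, -43.8])   # measured signal (dBm)
    P0, n_pl = -55.0, 3.0                           # assumed path-loss model

    ranges = 10 ** ((P0 - rssi) / (10 * n_pl))      # invert log-distance model

    p = anchors.mean(axis=0)                        # initial position guess
    for _ in range(20):                             # Gauss-Newton refinement
        d = np.linalg.norm(anchors - p, axis=1)
        J = (p - anchors) / d[:, None]              # Jacobian of the ranges
        p = p - np.linalg.lstsq(J, d - ranges, rcond=None)[0]
    print(p)                                        # close to (0.1, 0.2) here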

  • 107.
    Fidai, Muhammad Hassan
    KTH, School of Electrical Engineering (EES), Industrial Information and Control Systems.
    Implementation of DC Supervisory Control: Optimal Power Flow Calculator (2014). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Integration of renewable resources such as remote solar or wind farms and electric power trading between neighbouring countries lead to new requirements on the development of the transmission grids. Since AC grid expansion is limited by e.g. legislation issues, High Voltage Direct Current (HVDC) technology, with its diverse benefits compared to AC, is being considered as an appropriate alternative solution. The developed HVDC grid can either be embedded inside one AC grid or connect several AC areas. In both architectures, a separate DC supervisory control can be proposed to control the HVDC grids using the interfacing information from the AC Supervisory Control And Data Acquisition (SCADA) system. The supervisory control is supposed to calculate the optimal power flow (OPF) in order to run the system at the optimal operating point. Based on the architecture, the required information, the boundary of the system and also the objective function can vary.

    The aim of the thesis is to present the findings of a feasibility study to implement a supervisory control for bipolar Voltage Source Converter (VSC) HVDC grids in possible real-time platforms. The DC supervisory control has a network topology manager to identify the grid configuration and employs an OPF calculator based on an interior point optimization method to determine the set-point values for all HVDC stations in a grid. The OPF calculator takes into account the DC voltage, converter and DC line constraints.
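
    The thesis's OPF calculator applies an interior-point method to a full bipolar VSC-HVDC grid model. As a much smaller stand-in, the sketch below solves a two-station dispatch as a linear program with a power-balance equality and a DC-line limit; all numbers are made up.

    from scipy.optimize import linprog

    # Two converter stations supply a 500 MW demand; station 2 is cheaper
    # but its DC line is limited to 300 MW.
    c = [20.0, 15.0]                     # cost per MW for stations 1 and 2
    A_eq, b_eq = [[1.0, 1.0]], [500.0]   # power balance: g1 + g2 = demand
    bounds = [(0, 400), (0, 300)]        # station and DC-line limits

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    print(res.x)   # -> [200. 300.]: the cheap station runs at its line limit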

  • 108.
    Filotheou, Alexandros
    KTH, School of Electrical Engineering (EES), Automatic Control.
    Robust Decentralized Control of Cooperative Multi-robot Systems: An inter-constraint Receding Horizon approach (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In this work, a robust decentralized model predictive control regime for a team of cooperating robot systems is designed. Their assumed dynamics are in continuous time and non-linear. The problem involves agents whose dynamics are independent of one another, and its solution couples their constraints as a means of capturing the cooperative behaviour required. Analytical proofs are given to show that, under the proposed control regime: (a) Subject to initial feasibility, the optimization solved at each step by each agent will always be feasible, irrespective of whether or not disturbances affect the agents. In the former case, recursive feasibility is established through successive restriction of each agent's constraints during the periodic solution to its respective optimization problem. (b) Each (sub)system can be stabilized to a desired configuration, either asymptotically when uncertainty is absent, or within a neighbourhood of it, when uncertainty is present, thus attenuating the affecting disturbance. In this context, disturbances are assumed to be additive and bounded. Simulations verify the efficacy of the proposed method over a range of different operating environments.
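
    A single-agent sketch of the receding-horizon idea with successive constraint restriction, written with cvxpy (an assumption; any QP solver would do): at each step a finite-horizon problem is solved, only the first input is applied, and a state bound is tightened along the horizon to leave margin for the bounded disturbance. This is far simpler than the thesis's coupled multi-agent formulation.

    import numpy as np
    import cvxpy as cp

    A = np.array([[1.0, 0.1], [0.0, 1.0]])     # double-integrator dynamics
    B = np.array([[0.005], [0.1]])
    N, w_max = 10, 0.02                        # horizon, disturbance bound

    x_now = np.array([1.0, 0.0])
    for step in range(30):
        x = cp.Variable((2, N + 1))
        u = cp.Variable((1, N))
        cost, cons = 0, [x[:, 0] == x_now]
        for k in range(N):
            cost += cp.sum_squares(x[:, k]) + 0.1 * cp.sum_squares(u[:, k])
            cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                     cp.abs(u[:, k]) <= 1.0,
                     cp.abs(x[0, k + 1]) <= 1.5 - k * w_max]  # tightened set
        cp.Problem(cp.Minimize(cost), cons).solve()
        x_now = A @ x_now + B @ u.value[:, 0]  # apply only the first input
    print(x_now)                               # regulated toward the origin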

  • 109.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Robustness of the Quadratic Antiparticle Filter for Robot Localization (2011). In: European Conference on Mobile Robots / [ed] Achim J. Lilienthal and Tom Duckett, 2011, p. 297-302. Conference paper (Refereed)
    Abstract [en]

    Robot localization using odometry and feature measurements is a nonlinear estimation problem. An efficient solution is found using the extended Kalman filter, EKF. The EKF however suffers from divergence and inconsistency when the nonlinearities are significant. We recently developed a new type of filter based on an auxiliary variable Gaussian distribution which we call the antiparticle filter AF as an alternative nonlinear estimation filter that has improved consistency and stability. The AF reduces to the iterative EKF, IEKF, when the posterior distribution is well represented by a simple Gaussian. It transitions to a more complex representation as required. We have implemented an example of the AF which uses a parameterization of the mean as a quadratic function of the auxiliary variables which we call the quadratic antiparticle filter, QAF. We present simulation of robot feature-based localization in which we examine the robustness to bias, and disturbances with comparison to the EKF.
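
    For reference, the EKF baseline that the abstract compares against looks as follows for planar feature-based localization (unicycle motion model, range-bearing measurement to one known landmark). This is the standard textbook filter, not the antiparticle filter itself.

    import numpy as np

    def ekf_step(mu, P, u, z, landmark, Q, R, dt=0.1):
        """One predict/update cycle; mu = [x, y, theta], u = [v, omega]."""
        x, y, th = mu
        v, om = u
        mu_bar = np.array([x + v * dt * np.cos(th),      # motion model
                           y + v * dt * np.sin(th),
                           th + om * dt])
        F = np.array([[1, 0, -v * dt * np.sin(th)],      # motion Jacobian
                      [0, 1,  v * dt * np.cos(th)],
                      [0, 0, 1]])
        P_bar = F @ P @ F.T + Q
        dx, dy = landmark - mu_bar[:2]                   # expected measurement
        q = dx**2 + dy**2
        z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - mu_bar[2]])
        H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q), 0],
                      [dy / q, -dx / q, -1]])
        K = P_bar @ H.T @ np.linalg.inv(H @ P_bar @ H.T + R)
        innov = z - z_hat
        innov[1] = np.arctan2(np.sin(innov[1]), np.cos(innov[1]))  # wrap angle
        return mu_bar + K @ innov, (np.eye(3) - K @ H) @ P_bar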

  • 110.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    The Antiparticle Filter: an Adaptive Nonlinear Estimator (2011). In: International Symposium of Robotics Research, 2011. Conference paper (Refereed)
    Abstract [en]

    We introduce the antiparticle filter, AF, a new type of recursive Bayesian estimator that is unlike either the extended Kalman Filter, EKF, the unscented Kalman Filter, UKF, or the particle filter, PF. We show that for a classic problem of robot localization the AF can substantially outperform these other filters in some situations. The AF estimates the posterior distribution as an auxiliary variable Gaussian which gives an analytic formula using no random samples. It adaptively changes the complexity of the posterior distribution as the uncertainty changes. It is equivalent to the EKF when the uncertainty is low while being able to represent non-Gaussian distributions as the uncertainty increases. The computation time can be much faster than a particle filter for the same accuracy. We have simulated comparisons of two types of AF to the EKF, the iterative EKF, the UKF, an iterative UKF, and the PF demonstrating that AF can reduce the error to a consistent accurate value.

  • 111.
    Folkesson, John
    et al.
    Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Graphical SLAM for Outdoor Applications (2007). In: Journal of Field Robotics, ISSN 1556-4959, Vol. 24, no 1-2, p. 51-70. Article in journal (Refereed)
    Abstract [en]

    Application of SLAM outdoors is challenged by complexity, handling of non-linearities and flexible integration of a diverse set of features. A graphical approach to SLAM is introduced that enables flexible data association. The method allows for handling of non-linearities. The method also enables easy introduction of global constraints. Computational issues can be addressed as a graph reduction problem. A complete framework for graphical SLAM is presented. The framework is demonstrated for a number of outdoor experiments using an ATRV robot equipped with a SICK laser scanner and a CrossBow inertial unit. The experiments include handling of large outdoor environments with loop closing. The presented system operates at 5 Hz on an 800 MHz computer.

  • 112.
    Folkesson, John
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Vision SLAM in the Measurement Subspace (2005). In: 2005 IEEE International Conference on Robotics and Automation (ICRA), Vols 1-4 Book Series, 2005, p. 30-35. Conference paper (Refereed)
    Abstract [en]

    In this paper we describe an approach to feature representation for simultaneous localization and mapping, SLAM. It is a general representation for features that addresses symmetries and constraints in the feature coordinates. Furthermore, the representation allows for the features to be added to the map with partial initialization. This is an important property when using oriented vision features where angle information can be used before their full pose is known. The number of the dimensions for a feature can grow with time as more information is acquired. At the same time as the special properties of each type of feature are accounted for, the commonalities of all map features are also exploited to allow SLAM algorithms to be interchanged as well as choice of sensors and features. In other words the SLAM implementation need not be changed at all when changing sensors and features and vice versa. Experimental results both with vision and range data and combinations thereof are presented.

  • 113.
    Folkesson, John
    et al.
    Massachusetts Institute of Technology, Cambridge, MA.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Christensen, Henrik I.
    Georgia Institute of Technology, Atlanta, GA.
    The M-space feature representation for SLAM (2007). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 23, no 5, p. 1024-1035. Article in journal (Refereed)
    Abstract [en]

    In this paper, a new feature representation for simultaneous localization and mapping (SLAM) is discussed. The representation addresses feature symmetries and constraints explicitly to make the basic model numerically robust. In previous SLAM work, complete initialization of features is typically performed prior to introduction of a new feature into the map. This results in delayed use of new data. To allow early use of sensory data, the new feature representation addresses the use of features that initially have been partially observed. This is achieved by explicitly modelling the subspace of a feature that has been observed. In addition to accounting for the special properties of each feature type, the commonalities can be exploited in the new representation to create a feature framework that allows for interchanging of SLAM algorithms, sensors and features. Experimental results are presented using a low-cost webcam, a laser range scanner, and combinations thereof.

  • 114. Fornell, Anna
    et al.
    Nilsson, Johan
    Jonsson, Linus
    Rajeswari, Prem Kumar Periyannan
    KTH, School of Biotechnology (BIO), Proteomics and Nanobiotechnology.
    Jönsson, Håkan N.
    KTH, School of Biotechnology (BIO), Proteomics and Nanobiotechnology.
    Tenje, Maria
    Controlled Lateral Positioning of Microparticles Inside Droplets Using Acoustophoresis (2015). In: Analytical Chemistry, ISSN 0003-2700, E-ISSN 1520-6882, Vol. 87, no 20, p. 10521-10526. Article in journal (Refereed)
    Abstract [en]

    In this paper, we utilize bulk acoustic waves to control the position of microparticles inside droplets in two-phase microfluidic systems and demonstrate a method to enrich the microparticles. In droplet microfluidics, different unit operations are combined and integrated on-chip to miniaturize complex biochemical assays. We present a droplet unit operation capable of controlling the position of microparticles during a trident-shaped droplet split. An acoustic standing wave field is generated in the microchannel, and the acoustic forces direct the encapsulated microparticles to the center of the droplets. The method is generic, requires no labeling of the microparticles, and is operated in a noncontact fashion. It was possible to achieve 2+-fold enrichment of polystyrene beads (5 μm in diameter) in the center daughter droplet with an average recovery of 89% of the beads. Red blood cells were also successfully manipulated inside droplets. These results show the possibility to use acoustophoresis in two-phase systems to enrich microparticles and open up the possibility for new droplet-based assays that are not performed today.

  • 115.
    Forsberg, Olof
    KTH, School of Electrical Engineering and Computer Science (EECS).
    Semantic Stixels fusing LIDAR for Scene Perception (2018). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Autonomous driving is the concept of a vehicle that operates in traffic without instructions from a driver. A major challenge for such a system is to provide a comprehensive, accurate and compact scene model based on information from sensors. For such a model to be comprehensive, it must provide 3D position and semantics of the relevant surroundings to enable safe traffic behavior. Such a model creates a foundation for autonomous driving to make substantiated driving decisions. The model must be compact to enable efficient processing, allowing driving decisions to be made in real time. In this thesis, rectangular objects (the Stixel World) are used to represent the surroundings of a vehicle and provide a scene model. LIDAR and semantic segmentation are fused in the computation of these rectangles. This method indicates that a dense and compact scene model can be provided even from sparse LIDAR data by use of semantic segmentation.

  • 116.
    Frennert, Susanne
    et al.
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Technology in Health Care.
    Östlund, Britt
    KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH), Biomedical Engineering and Health Systems, Technology in Health Care.
    How Do Older People Think and Feel About Robots in Health- and Elderly Care? (2019). In: Inclusive Robotics for a Better Society: Selected Papers from INBOTS Conference 2018, 16-18 October, 2018, Pisa, Italy / [ed] José L. Pons, Springer International Publishing, 2019, Vol. 25, p. 167-174. Conference paper (Refereed)
    Abstract [en]

    This extended abstract is a report on older people’s perception of interactive robots in health- and elderly care. A series of focus groups was conducted. In total 31 older people participated. The majority of the participants viewed interactive robots in health- and elderly care as an asset but they also voiced concerns regarding reliability, practical handling, costs and fear of mechanical care.

  • 117.
    Frintrop, Simone
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Attentional landmark selection for visual SLAM (2006). In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-12, New York: IEEE, 2006, p. 2582-2587. Conference paper (Refereed)
    Abstract [en]

    In this paper, we introduce a new method to automatically detect useful landmarks for visual SLAM. A biologically motivated attention system detects regions of interest which "pop-out" automatically due to strong contrasts and the uniqueness of features. This property makes the regions easily redetectable and thus they are useful candidates for visual landmarks. Matching based on scene prediction and feature similarity allows not only short-term tracking of the regions, but also redetection in loop closing situations. The paper demonstrates how regions are determined and how they are matched reliably. Various experimental results on real-world data show that the landmarks are useful with respect to be tracked in consecutive frames and to enable closing loops.
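
    The paper's attention system is biologically motivated and multi-channel, but a single-channel center-surround contrast map already conveys the "pop-out" idea: salient regions are where a fine-scale blur differs strongly from a coarse-scale blur. A minimal sketch with assumed scales:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def contrast_saliency(gray):
        """Center-surround contrast: fine minus coarse Gaussian blur."""
        center = gaussian_filter(gray.astype(float), sigma=2)
        surround = gaussian_filter(gray.astype(float), sigma=10)
        sal = np.abs(center - surround)
        return sal / (sal.max() + 1e-9)

    def pick_landmarks(gray, k=5, win=15):
        """Take the k most salient pixels, suppressing a window around each."""
        sal = contrast_saliency(gray)
        pts = []
        for _ in range(k):
            y, x = np.unravel_index(np.argmax(sal), sal.shape)
            pts.append((x, y))
            sal[max(0, y - win):y + win, max(0, x - win):x + win] = 0
        return pts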

  • 118. Förell, Erik
    et al.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Robotsystem och förfarande för behandling av en yta [Robot system and method for treating a surface] (2003). Patent (Other (popular science, discussion, etc.))
    Abstract [en]

    Robot system including at least one mobile robot (10), for treating a surface, which comprises map storage means to store a map of the surface to be treated and means to navigate the, or each, mobile robot (10) to at least one point on a surface. The, or each, mobile robot (10) comprises locating means (13,14) to identify its position with respect to the surface to be treated and means to automatically deviate the mobile robot (10) away from its initial path in the event that an obstacle is detected along its path. The, or each, mobile robot (10) also comprises means to store and/or communicate data concerning the surface treatment performed and any obstacles detected by the locating means (13,14).

  • 119.
    Garcia-Camacho, Irene
    et al.
    CSIC UPC, Inst Robot & Informat Ind, Barcelona 08902, Spain.
    Lippi, Martina
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Welle, Michael C.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Yin, Hang
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Antonova, Rika
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Varava, Anastasiia
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Borras, Julia
    CSIC UPC, Inst Robot & Informat Ind, Barcelona 08902, Spain.
    Torras, Carme
    CSIC UPC, Inst Robot & Informat Ind, Barcelona 08902, Spain.
    Marino, Alessandro
    Univ Cassino & Southern Lazio, I-03043 Cassino, Italy.
    Alenya, Guillem
    CSIC UPC, Inst Robot & Informat Ind, Barcelona 08902, Spain.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Benchmarking Bimanual Cloth Manipulation (2020). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 5, no 2, p. 1111-1118. Article in journal (Refereed)
    Abstract [en]

    Cloth manipulation is a challenging task that, despite its importance, has received relatively little attention compared to rigid object manipulation. In this letter, we provide three benchmarks for evaluation and comparison of different approaches towards three basic tasks in cloth manipulation: spreading a tablecloth over a table, folding a towel, and dressing. The tasks can be executed on any bimanual robotic platform and the objects involved in the tasks are standardized and easy to acquire. We provide several complexity levels for each task, and describe the quality measures to evaluate task execution. Furthermore, we provide baseline solutions for all the tasks and evaluate them according to the proposed metrics.

  • 120.
    Garzon, C. L.
    et al.
    Automatizac Avanzada, Bogota, Colombia.
    Chamorro Vera, Harold Rene
    KTH, School of Electrical Engineering (EES), Electric Power Systems. NDT Innovations Inc, Bogota, Colombia.
    Diaz, M. M.
    Sequeira, E.
    UTEP, El Paso, TX, USA.
    Leottau, L.
    Univ Chile, Adv Min Technol Ctr, Dept Elect Engn, Santiago, Chile.
    Swarm Ant Algorithm Incorporation for Navigation of Resource Collecting Robots (2014). In: 2014 5th IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), IEEE, 2014, p. 987-992. Conference paper (Refereed)
    Abstract [en]

    Swarm robotics requires the development of new strategies and algorithm integration, which allow for the improvement of the design and the applications for harvesting or collecting resources. This paper describes the programming and design of finite state machine (FSM) bio-inspired algorithms for seeker and resource-gathering Pherobot systems, such as Anthill Known Location (AKL), aggressiveness and sense of panic. FSM design allows for the use of control architectures for behaviour-based agents and for measuring the change in system performance. Simulations demonstrate the capability of the algorithms under different environments and scenarios.
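
    A behaviour-based forager reduces to a small finite state machine. The sketch below has states for searching, grabbing and returning to a known anthill location, with sensor-driven transitions; the states and transitions are illustrative, not the paper's actual Pherobot design.

    SEARCH, GRAB, RETURN = "search", "grab", "return_to_anthill"

    def step(state, sees_resource, holding, at_anthill):
        """Advance the forager FSM by one tick of sensor readings."""
        if state == SEARCH and sees_resource:
            return GRAB
        if state == GRAB and holding:
            return RETURN          # carry the resource to the known location
        if state == RETURN and at_anthill:
            return SEARCH          # drop off and resume searching
        return state

    state = SEARCH
    for obs in [(False, False, False), (True, False, False),
                (True, True, False), (False, True, True)]:
        state = step(state, *obs)
        print(state)               # search, grab, return_to_anthill, search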

  • 121. Georgiou, Tryphon T.
    et al.
    Lindquist, Anders
    KTH, School of Engineering Sciences (SCI), Centres, Center for Industrial and Applied Mathematics, CIAM. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. Shanghai Jiao Tong Univ, Peoples R China.
    Optimal Estimation With Missing Observations via Balanced Time-Symmetric Stochastic Models (2017). In: IEEE Transactions on Automatic Control, ISSN 0018-9286, E-ISSN 1558-2523, Vol. 62, no 11, p. 5590-5603. Article in journal (Refereed)
    Abstract [en]

    We consider data fusion for the purpose of smoothing and interpolation based on observation records with missing data. Stochastic processes are generated by linear stochastic models. The paper begins by drawing a connection between time reversal in stochastic systems and all-pass extensions. A particular normalization (choice of basis) between the two time-directions allows the two to share the same orthonormalized state process and simplifies the mathematics of data fusion. In this framework, we derive symmetric and balanced Mayne-Fraser-like formulas that apply simultaneously to continuous-time smoothing and interpolation, providing a definitive unification of these concepts. The absence of data over subintervals requires in general a hybrid filtering approach involving both continuous-time and discrete-time filtering steps.
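
    The paper's balanced time-symmetric smoothers are well beyond a toy, but the basic mechanics of estimation with missing observations can be shown with a scalar Kalman filter: when a sample is absent, only the time update runs. A minimal forward-filter sketch (the smoothing direction would add a second, backward pass):

    import numpy as np

    # Random-walk model x_k = x_{k-1} + w,  y_k = x_k + v; NaN marks a gap.
    y = np.array([1.0, 1.2, np.nan, np.nan, 2.0, 2.1])
    q, r = 0.1, 0.5                  # process / measurement noise variances
    x, P = 0.0, 10.0                 # initial mean and variance
    est = []
    for yk in y:
        P = P + q                    # time update always runs
        if not np.isnan(yk):         # measurement update only if observed
            K = P / (P + r)
            x = x + K * (yk - x)
            P = (1 - K) * P
        est.append(x)
    print(est)                       # uncertainty grows across the gap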

  • 122.
    Ghadirzadeh, Ali
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Bütepage, Judith
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Self-learning and adaptation in a sensorimotor framework (2016). In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2016, p. 551-558. Conference paper (Refereed)
    Abstract [en]

    We present a general framework to autonomously achieve the task of finding a sequence of actions that results in a desired state. Autonomy is acquired by learning sensorimotor patterns of a robot while it is interacting with its environment. Gaussian processes (GP) with automatic relevance determination are used to learn the sensorimotor mapping. In this way, relevant sensory and motor components can be systematically found in high-dimensional sensory and motor spaces. We propose an incremental GP learning strategy, which discerns between situations when an update or an adaptation must be implemented. The Rapidly exploring Random Tree (RRT∗) algorithm is exploited to enable long-term planning and generating a sequence of states that lead to a given goal; while a gradient-based search finds the optimum action to steer to a neighbouring state in a single time step. Our experimental results prove the suitability of the proposed framework to learn a joint space controller with high data dimensions (10×15). It demonstrates a short training phase (less than 12 seconds), real-time performance and rapid adaptation capabilities.
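
    A drastically reduced sketch of the sensorimotor idea: learn a forward model s' = f(s, a) from interaction data with a GP, then pick the action whose predicted next state is closest to a goal. The 1-D dynamics, scikit-learn default kernel and greedy one-step search are all simplifications of the paper's ARD kernels, incremental updates and RRT* planning.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(0)
    S = rng.uniform(-1, 1, (200, 1))                 # visited states
    A = rng.uniform(-0.5, 0.5, (200, 1))             # executed actions
    S_next = S + A + 0.01 * rng.standard_normal((200, 1))  # unknown dynamics

    gp = GaussianProcessRegressor().fit(np.hstack([S, A]), S_next.ravel())

    s, goal = 0.8, 0.0
    candidates = np.linspace(-0.5, 0.5, 101)         # candidate actions
    X = np.column_stack([np.full_like(candidates, s), candidates])
    best = candidates[np.argmin(np.abs(gp.predict(X) - goal))]
    print("chosen action:", best)                    # pushes s toward goal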

  • 123.
    Ghadirzadeh, Ali
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bütepage, Judith
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Maki, Atsuto
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    A sensorimotor reinforcement learning framework for physical human-robot interaction (2016). In: IEEE International Conference on Intelligent Robots and Systems, IEEE, 2016, p. 2682-2688. Conference paper (Refereed)
    Abstract [en]

    Modeling of physical human-robot collaborations is generally a challenging problem due to the unpredictable nature of human behavior. To address this issue, we present a data-efficient reinforcement learning framework which enables a robot to learn how to collaborate with a human partner. The robot learns the task from its own sensorimotor experiences in an unsupervised manner. The uncertainty in the interaction is modeled using Gaussian processes (GP) to implement a forward model and an action-value function. Optimal action selection given the uncertain GP model is ensured by Bayesian optimization. We apply the framework to a scenario in which a human and a PR2 robot jointly control the ball position on a plank based on vision and force/torque data. Our experimental results show the suitability of the proposed method in terms of fast and data-efficient model learning, optimal action selection under uncertainty and equal role sharing between the partners.

  • 124.
    Ghadirzadeh, Ali
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kootstra, Gert
    Wageningen University, The Netherlands.
    Maki, Atsuto
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Learning visual forward models to compensate for self-induced image motion (2014). In: 23rd IEEE International Conference on Robot and Human Interactive Communication: IEEE RO-MAN, IEEE, 2014, p. 1110-1115. Conference paper (Refereed)
    Abstract [en]

    Predicting the sensory consequences of an agent's own actions is considered an important skill for intelligent behavior. In terms of vision, so-called visual forward models can be applied to learn such predictions. This is no trivial task given the high-dimensionality of sensory data and complex action spaces. In this work, we propose to learn the visual consequences of changes in pan and tilt of a robotic head using a visual forward model based on Gaussian processes and SURF correspondences. This is done without any assumptions on the kinematics of the system or requirements on calibration. The proposed method is compared to an earlier work using accumulator-based correspondences and Radial Basis function networks. We also show the feasibility of the proposed method for detection of independent motion using a moving camera system. By comparing the predicted and actual captured images, image motion due to the robot's own actions and motion caused by moving external objects can be distinguished. Results show the proposed method to be preferable to the earlier method in terms of both prediction errors and the ability to detect independent motion.

  • 125.
    Ghadirzadeh, Ali
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Maki, Atsuto
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Deep predictive policy training using reinforcement learning (2017). In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2017, Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 2351-2358, article id 8206046. Conference paper (Refereed)
    Abstract [en]

    Skilled robot task learning is best implemented by predictive action policies due to the inherent latency of sensorimotor processes. However, training such predictive policies is challenging as it involves finding a trajectory of motor activations for the full duration of the action. We propose a data-efficient deep predictive policy training (DPPT) framework with a deep neural network policy architecture which maps an image observation to a sequence of motor activations. The architecture consists of three sub-networks referred to as the perception, policy and behavior super-layers. The perception and behavior super-layers force an abstraction of visual and motor data trained with synthetic and simulated training samples, respectively. The policy super-layer is a small subnetwork with fewer parameters that maps data in-between the abstracted manifolds. It is trained for each task using methods for policy search reinforcement learning. We demonstrate the suitability of the proposed architecture and learning framework by training predictive policies for skilled object grasping and ball throwing on a PR2 robot. The effectiveness of the method is illustrated by the fact that these tasks are trained using only about 180 real robot attempts with qualitative terminal rewards.
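
    The three super-layers can be sketched structurally in a few lines of PyTorch: a convolutional perception encoder, a small policy network mapping between the learned manifolds, and a behavior decoder that expands a motor latent into a trajectory. All layer sizes and the 50-step, 7-joint output are invented for illustration, not the paper's dimensions.

    import torch
    import torch.nn as nn

    perception = nn.Sequential(           # image -> abstract state
        nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
        nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
        nn.Flatten(), nn.LazyLinear(8))
    policy = nn.Sequential(               # small part trained by RL search
        nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
    behavior = nn.Sequential(             # motor latent -> action sequence
        nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 50 * 7))

    img = torch.randn(1, 3, 64, 64)       # dummy observation
    trajectory = behavior(policy(perception(img))).view(1, 50, 7)
    print(trajectory.shape)               # torch.Size([1, 50, 7])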

  • 126.
    Green, Anders
    et al.
    KTH, School of Computer Science and Communication (CSC), Human - Computer Interaction, MDI.
    Eklundh, Kerstin Severinson
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Wrede, Britta
    Li, Shuyin
    Integrating miscommunication analysis in natural language interface design for a service robot (2006). In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-12, New York: IEEE, 2006, p. 4678-4683. Conference paper (Refereed)
    Abstract [en]

    Natural language user interfaces for robots with cognitive capabilities should be designed to reduce the occurrence of miscommunication in order to be perceived as providing a smooth and intuitive interaction to their users. This paper describes how miscommunication analysis is integrated in the design process. Observations from 12 user sessions revealed that users misunderstand the robot's functionality, and that feedback is sometimes ill-timed with respect to the situation. We provide a set of design implications to prevent errors from occurring, and to influence or adapt to users' behavior.

  • 127.
    Guin, Agneev
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Terrain Classification to find Drivable Surfaces using Deep Neural Networks: Semantic segmentation for unstructured roads combined with the use of Gabor filters to determine drivable regions trained on a small dataset (2018). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Autonomous vehicles face various challenges under difficult terrain conditions such as marginally rural or back-country roads, due to the lack of lane information, road signs or traffic signals. In this thesis, we investigate a novel approach of using Deep Neural Networks (DNNs) to classify off-road surfaces into the types of terrains with the aim of supporting autonomous navigation in unstructured environments. For example, off-road surfaces can be classified as asphalt, gravel, grass, mud, snow, etc.

    Images from the camera mounted on a mining truck were used to perform semantic segmentation and to classify road surface types. Camera images were segmented manually for training into sets of 16 and 9 classes, for all relevant classes and the drivable classes respectively. A small but diverse dataset of 100 images was augmented and compiled along with nearby frames from the video clips to expand this dataset. Neural networks were used to test the performance of classification under these off-road conditions. A pre-trained AlexNet was compared to networks without pre-training. Gabor filters, known to distinguish textured surfaces, were further used to improve the results of the neural network.

    The experiments show that pre-trained networks perform well with small datasets and many classes. A combination of Gabor filters with pre-trained networks can establish a dependable navigation path under difficult terrain conditions. While the results seem positive for images similar to the training image scenes, the networks fail to perform well in other situations. Though the tests imply that larger datasets are required for dependable results, this is a step closer to making autonomous vehicles drivable under off-road conditions.
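
    A Gabor filter bank of the kind used here to capture road texture can be built directly with OpenCV. The sketch computes responses at four orientations for a grayscale image; the kernel size and filter parameters are assumptions, not the thesis's tuned values.

    import cv2
    import numpy as np

    def gabor_features(gray):
        """Stack Gabor responses at four orientations as texture features."""
        feats = []
        for theta in np.arange(0, np.pi, np.pi / 4):
            kern = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                      lambd=10.0, gamma=0.5, psi=0)
            feats.append(cv2.filter2D(gray, cv2.CV_32F, kern))
        return np.stack(feats, axis=-1)            # H x W x 4

    gray = np.random.randint(0, 255, (120, 160), dtype=np.uint8)  # stand-in
    print(gabor_features(gray).shape)              # (120, 160, 4)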

  • 128.
    Gunning, Robin
    KTH, School of Electrical Engineering and Computer Science (EECS).
    A performance comparison of coverage algorithms for simple robotic vacuum cleaners (2018). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Cheap automatic robotic vacuum cleaners keep their cost down by having fewer sensors, and many of them rely on a single frontal bumper sensor. This makes it important to be able to get good coverage of a room with no prior knowledge of the room. This paper investigates whether the boustrophedon algorithm is enough to get good coverage of a simple room with at most two pieces of furniture and only 90 degree corners.

    A graphical simulation was made to test the algorithms that are commonly used in cheap automatic robotic vacuum cleaners and to compare them with the result of only using boustrophedon. The results show that the best algorithms are the non-deterministic random walk and the combination of all algorithms. Boustrophedon tends to get stuck when the room is not empty and only cleans half the room when starting in the middle of the room, while being the fastest and achieving the most coverage in an empty room when starting in a corner.
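
    The boustrophedon ("ox-plough") pattern itself is simple to state: sweep each row of the room, alternating direction. The sketch below generates that path over an empty grid; obstacle handling, where the thesis finds the strategy gets stuck, is deliberately left out.

    def boustrophedon(width, height):
        """Visit every cell of a width x height grid in a lawnmower pattern."""
        path = []
        for row in range(height):
            cols = range(width) if row % 2 == 0 else range(width - 1, -1, -1)
            path.extend((col, row) for col in cols)
        return path

    print(boustrophedon(4, 3))
    # [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (2, 1), (1, 1), (0, 1), ...]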

  • 129.
    Guo, Meng
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control). KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre.
    Andersson, Sofie
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
    Dimarogonas, Dimos V.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control). KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre.
    Human-in-the-Loop Mixed-Initiative Control under Temporal Tasks (2018). In: 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2018, p. 6395-6400. Conference paper (Refereed)
    Abstract [en]

    This paper considers the motion control and task planning problem of mobile robots under complex high-level tasks and human initiatives. The assigned task is specified as Linear Temporal Logic (LTL) formulas that consist of hard and soft constraints. The human initiative influences the robot autonomy in two explicit ways: with additive terms in the continuous controller and with contingent task assignments. We propose an online coordination scheme that encapsulates (i) a mixed-initiative continuous controller that ensures all-time safety despite possible human errors, (ii) a plan adaptation scheme that accommodates new features discovered in the workspace and short-term tasks assigned by the operator during run time, and (iii) an iterative inverse reinforcement learning (IRL) algorithm that allows the robot to asymptotically learn the human preference on the parameters during the plan synthesis. The results are demonstrated by both realistic human-in-the-loop simulations and experiments.
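
    The first ingredient, the mixed-initiative continuous controller, can be caricatured as the robot's own input plus a human term whose authority is smoothly gated to zero near obstacles, so safety holds regardless of what the human commands. The gating function and thresholds below are illustrative, not the paper's construction.

    import numpy as np

    def mixed_initiative(u_robot, u_human, dist_to_obstacle,
                         d_safe=0.5, d_free=1.5):
        """Blend inputs; human authority kappa vanishes at the safety margin."""
        kappa = np.clip((dist_to_obstacle - d_safe) / (d_free - d_safe), 0, 1)
        return u_robot + kappa * u_human

    u_r, u_h = np.array([0.3, 0.0]), np.array([0.5, 0.2])
    for d in [2.0, 1.0, 0.4]:
        print(d, mixed_initiative(u_r, u_h, d))  # human term fades out near obstacles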

  • 130.
    Guo, Meng
    et al.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Egerstedt, Magnus
    Dimarogonas, Dimos V.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Hybrid control of multi-robot systems using embedded graph grammars (2016). In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2016, p. 5242-5247, article id 7487733. Conference paper (Refereed)
    Abstract [en]

    We propose a distributed and cooperative motion and task control scheme for a team of mobile robots that are subject to dynamic constraints including inter-robot collision avoidance and connectivity maintenance of the communication network. Moreover, each agent has a local high-level task given as a Linear Temporal Logic (LTL) formula of desired motion and actions. Embedded graph grammars (EGGs) are used as the main tool to specify local interaction rules and switching control modes among the robots, which is then combined with the model-checking-based task planning module. It is ensured that all local tasks are satisfied while the dynamic constraints are obeyed at all times. The overall approach is demonstrated by simulation and experimental results.

  • 131.
    Guo, Meng
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Tumova, Jana
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Dimarogonas, Dimos V.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Communication-Free Multi-Agent Control Under Local Temporal Tasks and Relative-Distance Constraints (2016). In: IEEE Transactions on Automatic Control, ISSN 0018-9286, E-ISSN 1558-2523, Vol. 61, no 12, p. 3948-3962. Article in journal (Refereed)
    Abstract [en]

    We propose a distributed control and coordination strategy for multi-agent systems where each agent has a local task specified as a Linear Temporal Logic (LTL) formula and at the same time is subject to relative-distance constraints with its neighboring agents. The local tasks capture the temporal requirements on individual agents' behaviors, while the relative-distance constraints impose requirements on the collective motion of the whole team. The proposed solution relies only on relative-state measurements among the neighboring agents without the need for explicit information exchange. It is guaranteed that the local tasks given as syntactically co-safe or general LTL formulas are fulfilled and the relative-distance constraints are satisfied at all times. The approach is demonstrated with computer simulations.

  • 132.
    Gustavi, Tove
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Hu, Xiaoming
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Navigation coordination for multi-agent systems with limited sensor information (2005). In: 2005 International Conference on Control and Automation (ICCA), Vols 1 and 2, 2005, p. 77-82. Conference paper (Refereed)
    Abstract [en]

    In this paper, mobile multi-agent systems with limited sensor information are studied. Control algorithms are proposed that do not require global information and are easy to implement. First, two basic controls for serial and parallel formations are derived. Then it is demonstrated how these basic controls can be combined to achieve more complex formations. Combined with an obstacle avoidance controller, the resulting system can perform quite complex navigation tasks.
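
    A minimal leader-follower ("serial formation") sketch in the spirit of the basic controls described above: each follower steers toward a desired offset from the leader using only the relative position an on-board sensor could provide. Gains, offsets and the first-order dynamics are illustrative.

    import numpy as np

    def follower_velocity(p_self, p_leader, offset, k=1.0):
        """Proportional control toward the desired offset from the leader."""
        return k * ((p_leader + offset) - p_self)

    dt = 0.05
    leader = np.array([0.0, 0.0])
    followers = [np.array([-2.0, 1.0]), np.array([-3.0, -1.0])]
    offsets = [np.array([-1.0, 0.0]), np.array([-2.0, 0.0])]
    for _ in range(200):
        leader = leader + dt * np.array([0.5, 0.0])    # leader drives forward
        for i, p in enumerate(followers):
            followers[i] = p + dt * follower_velocity(p, leader, offsets[i])
    print(leader, followers)   # followers trail near their offsets (small lag)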

  • 133.
    Gustavi, Tove
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Hu, Xiaoming
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Robust formation adaptation for mobile robots (2006). In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-12, New York: IEEE, 2006, p. 2521-2526. Conference paper (Refereed)
    Abstract [en]

    In this paper, formation adaptation and stability for mobile multi-agent systems are studied. The objective is to suggest a set of robust control functions that can be combined to build complex formations of mobile robots. A properly designed formation should be able to follow a single leader in a cluttered environment and, if necessary, to adapt to the surroundings by changing its shape. The two control algorithms proposed here are adapted for systems with limited communication capacity and low-performance sensors. The algorithms only require information that can be obtained directly from on-board sensors and, in particular, they only need a very coarse estimate of the velocity of the neighbors in the formation. An arbitrary change of the shape of a formation may require switching between control algorithms. In the paper, it is shown that switching between the two proposed control algorithms is stable under some reasonable assumptions. The results are verified by simulations, which also show that switching can be performed safely even with a high noise level and no prior filtering of sensor input.

  • 134. Göbelbecker, M.
    et al.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    A planning approach to active visual search in large environments (2011). In: AAAI Workshop Tech. Rep., 2011, p. 8-13. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a principled planner-based approach to the active visual object search problem in unknown environments. We make use of a hierarchical planner that combines the strengths of decision theory and heuristics. Furthermore, our object search approach leverages conceptual spatial knowledge in the form of object co-occurrences and semantic place categorisation. A hierarchical model for representing object locations is presented with which the planner is able to perform indirect search. Finally, we present real-world experiments to show the feasibility of the approach.

  • 135.
    Göbelbecker, Moritz
    et al.
    University of Freiburg.
    Hanheide, Marc
    University of Lincoln.
    Gretton, Charles
    University of Birmingham.
    Hawes, Nick
    University of Birmingham.
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Zender, Hendrik
    DFKI, Saarbruecken.
    Dora: A Robot that Plans and Acts Under Uncertainty2012In: Proceedings of the 35th German Conference on Artificial Intelligence (KI’12), 2012Conference paper (Refereed)
    Abstract [en]

    Dealing with uncertainty is one of the major challenges when constructing autonomous mobile robots. The CogX project addressed key aspects of this challenge by developing and implementing mechanisms for self-understanding and self-extension -- i.e., awareness of gaps in knowledge, and the ability to reason and act to fill those gaps. We discuss Dora, a showcase outcome of that project: a robot that can perform a variety of search tasks in unexplored environments by exploiting probabilistic knowledge representations while retaining efficiency through a fast planning system.

  • 136.
    Güler, Püren
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Learning Object Properties From Manipulation for Manipulation2017Doctoral thesis, monograph (Other academic)
    Abstract [en]

    The world contains objects with various properties - rigid, granular, liquid, elastic or plastic. As humans, while interacting with objects, we plan our manipulation by considering their properties. For instance, while holding a rigid object such as a brick, we adapt our grasp based on its centre of mass so as not to drop it. While manipulating a deformable object, on the other hand, we may consider properties in addition to the centre of mass, such as elasticity and brittleness, for grasp stability. Knowing object properties is therefore an integral part of skilled manipulation of objects.

    To manipulate objects skillfully, robots should be able to predict object properties as humans do. To predict these properties, interactions with objects are essential. These interactions give rise to distinct sensory signals that contain information about the object properties. Signals coming from a single sensory modality may give ambiguous information or noisy measurements. Hence, by integrating multiple sensory modalities (vision, touch, audio or proprioception), a manipulated object can be observed from different aspects, which can decrease the uncertainty in the observed properties. By analyzing the perceived sensory signals, a robot reasons about the object properties and adjusts its manipulation based on this information. During this adjustment, the robot can make use of a simulation model to predict the object behavior in order to plan the next action. For instance, if an object is assumed to be rigid before interaction but exhibits deformable behavior after interaction, an internal simulation model can be used to predict the load force exerted on the object, so that appropriate manipulation can be planned for the next action. Thus, learning about object properties can be defined as an active procedure: the robot explores the object properties actively and purposefully by interacting with the object, and adjusts its manipulation based on the sensory information and the object behavior predicted through an internal simulation model.

    This thesis investigates the mechanisms mentioned above that are necessary to learn object properties: (i) multi-sensory information, (ii) simulation and (iii) active exploration. In particular, we investigate these three mechanisms as different and complementary ways of extracting a certain object property, the deformability of objects. Firstly, we investigate the feasibility of using visual and/or tactile data to classify the content of a container based on the deformation observed when a robotic hand squeezes and deforms the container. According to our results, both visual and tactile sensory data individually give high accuracy rates when classifying the content type based on the deformation. Next, we investigate the use of a simulation model to estimate the object deformability that is revealed through manipulation. The proposed method accurately identifies the deformability of the test objects in synthetic and real-world data. Finally, we investigate the integration of the deformation simulation into a robotic active perception framework to extract the heterogeneous deformability properties of an environment through physical interactions. In experiments on real-world objects, we illustrate that the active perception framework can map the heterogeneous deformability properties of a surface.
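
    A minimal sketch of the first investigation's setup as I read it (placeholder data, hypothetical feature layout, not the thesis pipeline): deformation features from vision and touch are fused at the feature level and fed to an off-the-shelf classifier to predict the container content.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Placeholder data standing in for recorded squeezes: a visual
    # deformation descriptor and tactile readings per trial, with the
    # container content as label (0=rice, 1=water, 2=empty).
    X_visual = np.random.rand(200, 16)
    X_tactile = np.random.rand(200, 8)
    y = np.random.randint(0, 3, 200)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(np.hstack([X_visual, X_tactile]), y)   # simple feature-level fusion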

  • 137.
    Güler, Püren
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Pieropan, A.
    Ishikawa, M.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Estimating deformability of objects using meshless shape matching2017In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2017, Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 5941-5948, article id 8206489Conference paper (Refereed)
    Abstract [en]

    Humans interact with deformable objects on a daily basis, but such objects still represent a challenge for robots. To enable manipulation of and interaction with deformable objects, robots need to be able to extract and learn the deformability of objects both prior to and during interaction. Physics-based models are commonly used to predict the physical properties of deformable objects and simulate their deformation accurately. The most popular simulation techniques are force-based models that need force measurements. In this paper, we explore the applicability of a geometry-based simulation method called meshless shape matching (MSM) for estimating the deformability of objects. The main advantages of MSM are its controllability and computational efficiency, which make it popular in computer graphics for simulating complex interactions of multiple objects at the same time. Additionally, a useful feature of MSM that differentiates it from other physics-based simulations is its independence from force measurements, which may not be available to a robotic framework lacking force/torque sensors. In this work, we design a method to estimate deformability based on certain properties, such as volume conservation. Using the finite element method (FEM), we create ground-truth deformability for various settings to evaluate our method. The experimental evaluation shows that our approach is able to accurately identify the deformability of test objects, supporting the value of MSM for robotic applications.
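
    For readers unfamiliar with MSM, a compact sketch of its core step (the generic shape-matching scheme from the computer graphics literature, not this paper's exact estimator): match the rest shape rigidly to the current points and pull each point toward its matched goal with a stiffness alpha in [0, 1]. Estimating deformability then amounts to fitting alpha so that simulated motion reproduces the observed deformation.

    import numpy as np

    def msm_goals(x, x0, m):
        """Best rigid fit of rest points x0 onto current points x
        (per-point masses m); returns the per-point goal positions."""
        c, c0 = np.average(x, 0, m), np.average(x0, 0, m)
        p, q = x - c, x0 - c0
        Apq = (m[:, None] * p).T @ q            # shape-matching covariance
        U, _, Vt = np.linalg.svd(Apq)
        if np.linalg.det(U @ Vt) < 0:           # exclude reflections
            U[:, -1] *= -1
        return (U @ Vt @ q.T).T + c

    def msm_step(x, v, x0, m, alpha, dt=0.01):
        """alpha near 1 behaves rigidly; small alpha deforms easily."""
        g = msm_goals(x, x0, m)
        v = v + alpha * (g - x) / dt
        return x + v * dt, v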

  • 138.
    Gürdür, Didem
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Mechatronics.
    Vulgarakis Feljan, Aneta
    Ericsson Research, Sweden.
    El-khoury, Jad
    KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Mechatronics.
    Mohalik, Swarup Kumar
    Ericsson Research, India.
    Badrinath, Ramamurthy
    Ericsson Research, India.
    Mujumdar, Anusha Pradeep
    Ericsson Research, India.
    Fersman, Elena
    Ericsson Research, Sweden.
    Knowledge Representation of Cyber-physical Systems for Monitoring Purpose2018In: 51st CIRP Conference on Manufacturing Systems, Elsevier, 2018, Vol. 72, p. 468-473Conference paper (Refereed)
    Abstract [en]

    Automated warehouses, as a form of cyber-physical systems (CPSs), require several components to work collaboratively to address the common business objectives of complex logistics systems. During collaborative operations, a number of key performance indicators (KPIs) can be monitored to understand the proficiency of the warehouse and to control operations and decisions. It is possible to derive and monitor these KPIs by looking at both the state of the warehouse components and the operations carried out by them. Therefore, it is necessary to represent this knowledge in an explicit and formally specified data model and to provide automated methods to derive the KPIs from the representation. In this paper, we implement a minimalistic data model for a subset of warehouse resources using linked data in order to monitor a few KPIs, namely sustainability, safety and performance. The applicability of the approach and the data model is illustrated through a use case. We demonstrate that it is possible to develop minimalistic data models through Open Services for Lifecycle Collaboration (OSLC) resource shapes, which enable compatibility with the declarative and procedural knowledge of automated warehouse agents specified in the Planning Domain Definition Language (PDDL).
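
    A deliberately minimal stand-in for the kind of data model and KPI derivation described (my toy triples, not the paper's OSLC resource shapes): warehouse resources are plain subject-property-value triples, and KPIs are computed by querying them.

    # Hypothetical resource descriptions as (subject, property, value) triples.
    triples = [
        ('robot1', 'type', 'Forklift'), ('robot1', 'batteryLevel', 0.42),
        ('robot1', 'nearHuman', True),
        ('robot2', 'type', 'Forklift'), ('robot2', 'batteryLevel', 0.88),
        ('robot2', 'nearHuman', False),
    ]

    def prop(subject, p):
        return next(v for s, q, v in triples if s == subject and q == p)

    robots = {s for s, p, v in triples if p == 'type'}
    safety_kpi = sum(not prop(r, 'nearHuman') for r in robots) / len(robots)
    energy_kpi = sum(prop(r, 'batteryLevel') for r in robots) / len(robots)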

  • 139.
    Hang, Kaiyu
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. CVAP/CAS/CSC, KTH Royal Institute of Technology.
    Dexterous Grasping: Representation and Optimization2016Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Many robot-object interactions require that an object is firmly held and that the grasp remains stable during the whole manipulation process. Based on the grasp wrench space, this thesis addresses the problems of measuring grasp sensitivity against friction changes, planning contacts and hand configurations on mesh and point cloud representations of arbitrary objects, planning adaptable grasps and finger gaiting for keeping a grasp stable under various external disturbances, as well as learning grasping manifolds for more accurate reachability and inverse kinematics computation for multi-fingered grasping.

    Firstly, we propose a new concept called friction sensitivity, which measures how susceptible a specific grasp is to changes in the underlying friction coefficients. We develop algorithms for the synthesis of stable grasps with low friction sensitivity and for the synthesis of stable grasps in the case of small friction coefficients.

    Secondly, for fast planning of contacts and hand configurations for dexterous grasping, as well as for keeping a grasp stable during execution, we present a unified framework for grasp planning and in-hand grasp adaptation using visual, tactile and proprioceptive feedback. The main objective of the proposed framework is to enable fingertip grasping by addressing the problems of changed object weight, slippage and external disturbances. For this purpose, we introduce the Hierarchical Fingertip Space (HFTS) as a representation enabling optimization for both efficient grasp synthesis and online finger gaiting. Grasp synthesis is followed by a grasp adaptation step that consists of both grasp force adaptation through impedance control and regrasping/finger gaiting when the former is not sufficient.

    Lastly, to improve the efficiency and accuracy of dexterous grasping and in-hand manipulation, we present a system for fingertip grasp planning that incrementally learns a heuristic for hand reachability and multi-fingered inverse kinematics. During execution, the system plans and executes fingertip grasps using Canny's grasp quality metric and a learned random-forest-based hand reachability heuristic. In the offline module, this heuristic is improved based on a grasping manifold that is incrementally learned from the experiences collected during execution.

  • 140.
    Hang, Kaiyu
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Haustein, Joshua
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Li, Miao
    EPFL.
    Billard, Aude
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    On the Evolution of Fingertip Grasping Manifolds2016In: IEEE International Conference on Robotics and Automation, IEEE Robotics and Automation Society, 2016, p. 2022-2029, article id 7487349Conference paper (Refereed)
    Abstract [en]

    Efficient and accurate planning of fingertip grasps is essential for dexterous in-hand manipulation. In this work, we present a system for fingertip grasp planning that incrementally learns a heuristic for hand reachability and multi-fingered inverse kinematics. The system consists of an online execution module and an offline optimization module. During execution, the system plans and executes fingertip grasps using Canny's grasp quality metric and a learned random-forest-based hand reachability heuristic. In the offline module, this heuristic is improved based on a grasping manifold that is incrementally learned from the experiences collected during execution. The system is evaluated both in simulation and on a Schunk SDH dexterous hand mounted on a KUKA KR5 arm. We show that, as the grasping manifold is adapted to the system's experiences, the heuristic becomes more accurate, which results in improved performance of the execution module. The improvement is observed not only for experienced objects, but also for previously unknown objects of similar sizes.
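
    A rough sketch of the execution/optimization split (interfaces invented; the paper's manifold learning is not reproduced here): the online module scores candidate grasps with a learned reachability classifier and logs outcomes, and the offline module refits the classifier from the collected experiences.

    from sklearn.ensemble import RandomForestClassifier

    class ReachabilityHeuristic:
        """Predicts whether a candidate fingertip grasp, encoded as a
        feature vector, is reachable by the hand."""
        def __init__(self):
            self.clf, self.X, self.y = None, [], []

        def score(self, grasp_features):
            if self.clf is None:
                return 0.5                    # uninformed prior before training
            return self.clf.predict_proba([grasp_features])[0, 1]

        def add_experience(self, grasp_features, reached):
            self.X.append(list(grasp_features))
            self.y.append(int(reached))       # outcome observed during execution

        def refit(self):                      # the offline optimization module
            if len(set(self.y)) > 1:          # need both outcomes at least once
                self.clf = RandomForestClassifier(n_estimators=50).fit(self.X, self.y)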

  • 141.
    Hang, Kaiyu
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Li, Miao
    EPFL.
    Stork, Johannes A.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bekiroglu, Yasemin
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Pokorny, Florian T.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Billard, Aude
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hierarchical Fingertip Space: A Unified Framework for Grasp Planning and In-Hand Grasp Adaptation2016In: IEEE Transactions on robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 32, no 4, p. 960-972, article id 7530865Article in journal (Refereed)
    Abstract [en]

    We present a unified framework for grasp planning and in-hand grasp adaptation using visual, tactile and proprioceptive feedback. The main objective of the proposed framework is to enable fingertip grasping by addressing the problems of changed object weight, slippage and external disturbances. For this purpose, we introduce the Hierarchical Fingertip Space (HFTS) as a representation enabling optimization for both efficient grasp synthesis and online finger gaiting. Grasp synthesis is followed by a grasp adaptation step that consists of both grasp force adaptation through impedance control and regrasping/finger gaiting when the former is not sufficient. Experimental evaluation is conducted on an Allegro hand mounted on a KUKA LWR arm.
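
    As a sketch of the adaptation step only (gains, the slip test and the regrasp trigger are mine, not the paper's controller): each fingertip runs a Cartesian impedance law, the normal force is increased when slip is suspected, and finger gaiting is requested once force adaptation saturates.

    import numpy as np

    def fingertip_force(x, x_des, dx, K, D, f_squeeze, n_hat):
        """Impedance law F = K(x_des - x) - D dx plus an extra squeezing
        force along the inward contact normal n_hat."""
        return K @ (x_des - x) - D @ dx + f_squeeze * n_hat

    def adapt_on_slip(slip_detected, f_squeeze, f_max=5.0, step=0.5):
        """Raise the squeeze force on slip; if saturated, ask the
        framework to regrasp / finger-gait instead."""
        if slip_detected:
            f_squeeze = min(f_squeeze + step, f_max)
        return f_squeeze, f_squeeze >= f_max   # second value: regrasp request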

  • 142.
    Hang, Kaiyu
    et al.
    Yale Univ, Dept Mech Engn & Mat Sci, New Haven, CT 06520 USA..
    Lyu, Ximin
    Hong Kong Univ Sci & Technol, Hong Kong, Peoples R China..
    Song, Haoran
    Hong Kong Univ Sci & Technol, Hong Kong, Peoples R China..
    Stork, Johannes A.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL. Örebro Univ, Ctr Appl Autonomous Sensor Syst AASS, Örebro, Sweden.
    Dollar, Aaron M.
    Yale Univ, Dept Mech Engn & Mat Sci, New Haven, CT 06520 USA..
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Zhang, Fu
    Univ Hong Kong, Hong Kong, Peoples R China..
    Perching and resting-A paradigm for UAV maneuvering with modularized landing gears2019In: SCIENCE ROBOTICS, ISSN 2470-9476, Vol. 4, no 28, article id eaau6637Article in journal (Refereed)
    Abstract [en]

    Perching helps small unmanned aerial vehicles (UAVs) extend their time of operation by saving battery power. However, most strategies for UAV perching require complex maneuvering and rely on specific structures, such as rough walls for attaching or tree branches for grasping. Many perching strategies also neglect the UAV's mission, so that saving battery power interrupts it. In addition to the previously proposed perching capability, we suggest enabling UAVs to make and stabilize contacts with the environment, which allows the UAV to consume less energy while retaining its altitude. We term this new capability "resting." To this end, we propose a modularized and actuated landing gear framework that allows the UAV to be stabilized on a wide range of structures by perching and resting. Modularization allows our framework to adapt to specific structures for resting through rapid prototyping with additive manufacturing. Actuation allows switching between different modes of perching and resting during flight and additionally enables perching by grasping. Our results show that this framework can be used to perform UAV perching and resting on a set of common structures, such as street lights and the edges or corners of buildings. We show that the design is effective in reducing power consumption, promotes increased pose stability, and preserves large vision ranges while perching or resting at heights. In addition, we discuss the potential applications facilitated by our design, as well as the issues to be addressed for deployment in practice.

  • 143.
    Hang, Kaiyu
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Pokorny, Florian T.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Friction Coefficients and Grasp Synthesis2013In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2013), IEEE , 2013, p. 3520-3526Conference paper (Refereed)
    Abstract [en]

    We propose a new concept called friction sensitivity, which measures how susceptible a specific grasp is to changes in the underlying friction coefficients. We develop algorithms for the synthesis of stable grasps with low friction sensitivity and for the synthesis of stable grasps in the case of small friction coefficients. We describe how grasps with low friction sensitivity can be used when a robot has an uncertain belief about friction coefficients, and study the statistics of grasp quality under changes in those coefficients. We also provide a parametric estimate of the distribution of grasp qualities and friction sensitivities for a uniformly sampled set of grasps.
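
    One natural way to read "friction sensitivity", sketched numerically (the paper's formal definition and quality metric are not reproduced here): differentiate a wrench-space grasp quality Q(grasp, mu) with respect to the friction coefficient mu, and prefer grasps whose quality barely moves.

    def friction_sensitivity(grasp, quality, mu, d_mu=1e-3):
        """Central-difference estimate of dQ/dmu for a given grasp and
        any caller-supplied wrench-space quality function."""
        return (quality(grasp, mu + d_mu) - quality(grasp, mu - d_mu)) / (2 * d_mu)

    def synthesis_score(grasp, quality, mu, lam=1.0):
        """Toy preference: favour high quality and low sensitivity."""
        return quality(grasp, mu) - lam * abs(friction_sensitivity(grasp, quality, mu))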

  • 144.
    Hang, Kaiyu
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Stork, Johannes A.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hierarchical Fingertip Space for Multi-fingered Precision Grasping2014In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, (IROS 2014), IEEE , 2014, p. 1641-1648Conference paper (Refereed)
    Abstract [en]

    Dexterous in-hand manipulation of objects benefits from the ability of a robot system to generate precision grasps. In this paper, we propose the concept of Fingertip Space and its use for precision grasp synthesis. Fingertip Space is a representation that takes into account both the local geometry of the object surface and the fingertip geometry. As such, it is directly applicable to object point cloud data and it establishes a basis for the grasp search space. We propose a model for a hierarchical encoding of the Fingertip Space that enables multilevel refinement for efficient grasp synthesis. The proposed method works at the grasp contact level while neglecting neither object shape nor hand kinematics. Experimental evaluation is performed for the Barrett hand, also considering noisy and incomplete point cloud data.
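
    A simplified illustration of a hierarchical encoding over contact candidates (the clustering choice and parameters are mine, not the paper's construction): candidate contacts, described by position and surface normal, are partitioned coarsely first and each cell is refined, so grasp synthesis can reject large regions cheaply before evaluating fine-grained contacts.

    import numpy as np
    from sklearn.cluster import KMeans

    def build_hierarchy(points, normals, branching=4, depth=3):
        """Multilevel partition of candidate contacts on a point cloud."""
        feats = np.hstack([points, normals])   # cluster position + normal jointly
        def split(idx, d):
            if d == depth or len(idx) < branching:
                return {'contacts': idx, 'children': []}
            labels = KMeans(n_clusters=branching, n_init=3).fit_predict(feats[idx])
            return {'contacts': idx,
                    'children': [split(idx[labels == k], d + 1)
                                 for k in range(branching)]}
        return split(np.arange(len(points)), 0)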

  • 145.
    Hang, Kaiyu
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Stork, Johannes A.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Pollard, Nancy S.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    A Framework for Optimal Grasp Contact Planning2017In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 2, no 2, p. 704-711Article in journal (Refereed)
    Abstract [en]

    We consider the problem of finding grasp contacts that are optimal under a given grasp quality function on arbitrary objects. Our approach formulates contact-level grasping as a path-finding problem in the space of supercontact grasps. The initial supercontact grasp contains all grasps, and in each step along a path grasps are removed. For this, we introduce and formally characterize the search space structure and cost functions under which minimal-cost paths correspond to optimal grasps. Our formulation avoids expensive exhaustive search and reduces computational cost by several orders of magnitude. We present admissible heuristic functions and exploit approximate heuristic search to further reduce the computational cost while maintaining bounded suboptimality of the resulting grasps. We exemplify our formulation with point-contact grasping, for which we define domain-specific heuristics and demonstrate optimality and bounded suboptimality by comparing against exhaustive and uniform-cost search on example objects. Furthermore, we explain how to restrict the search graph to satisfy grasp constraints for modeling hand kinematics. We also analyze our algorithm empirically in terms of created and visited search states and the resulting effective branching factor.
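
    The search-space idea is concrete enough to sketch (cost and heuristic are left as caller-supplied callables; the paper's admissible heuristics are not reproduced): states are sets of contacts, the start state is the set of all candidates, each successor removes one contact, and a state with exactly n contacts is a concrete n-finger grasp.

    import heapq, itertools

    def plan_contacts(contacts, n_fingers, cost, heuristic):
        """A*-style search over supercontact grasps. With an admissible
        heuristic and a suitable cost, the first n-contact state popped
        lies on a minimal-cost path, i.e. corresponds to an optimal grasp."""
        tie = itertools.count()                 # tiebreaker for the heap
        start = frozenset(contacts)
        open_set = [(heuristic(start), 0.0, next(tie), start)]
        seen = set()
        while open_set:
            f, g, _, s = heapq.heappop(open_set)
            if len(s) == n_fingers:
                return s                        # goal: concrete fingertip grasp
            if s in seen:
                continue
            seen.add(s)
            for c in s:                         # successor: drop one contact
                t = s - {c}
                if len(t) >= n_fingers and t not in seen:
                    g2 = g + cost(s, t)
                    heapq.heappush(open_set, (g2 + heuristic(t), g2, next(tie), t))
        return None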

  • 146.
    Hanheide, Marc
    et al.
    University of Lincoln.
    Göbelbecker, Moritz
    University of Freiburg.
    Horn, Graham S.
    University of Birmingham.
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Gretton, Charles
    University of Birmingham.
    Dearden, Richard
    University of Birmingham.
    Janicek, Miroslav
    DFKI, Saarbrücken.
    Zender, Hendrik
    DFKI, Saarbrücken.
    Kruijff, Geert-Jan
    DFKI, Saarbrücken.
    Hawes, Nick
    University of Birmingham.
    Wyatt, Jeremy
    University of Birmingham.
    Robot task planning and explanation in open and uncertain worlds2015In: Artificial Intelligence, ISSN 0004-3702, E-ISSN 1872-7921Article in journal (Refereed)
    Abstract [en]

    A long-standing goal of AI is to enable robots to plan in the face of uncertain and incomplete information, and to handle task failure intelligently. This paper shows how to achieve this. There are two central ideas. The first idea is to organize the robot's knowledge into three layers: instance knowledge at the bottom, commonsense knowledge above that, and diagnostic knowledge on top. Knowledge in a layer above can be used to modify knowledge in the layer(s) below. The second idea is that the robot should represent not just how its actions change the world, but also what it knows or believes. There are two types of knowledge effects the robot's actions can have: epistemic effects (I believe X because I saw it) and assumptions (I'll assume X to be true). By combining the knowledge layers with the models of knowledge effects, we can simultaneously solve several problems in robotics: (i) task planning and execution under uncertainty; (ii) task planning and execution in open worlds; (iii) explaining task failure; (iv) verifying those explanations. The paper describes how the ideas are implemented in a three-layer architecture on a mobile robot platform. The robot implementation was evaluated in five different experiments on object search, mapping, and room categorization.
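
    A toy rendering of the layering idea (data structures and labels are mine, not the system's): a query is answered by the most specific layer that has an opinion, so diagnostic knowledge can defeat an instance-level belief, which in turn overrides a commonsense default.

    # Bottom to top: what was seen, what is typical, what diagnosis concluded.
    instance    = {('cup1', 'in'): 'kitchen'}   # epistemic: "I saw it there"
    commonsense = {('cup',  'in'): 'kitchen'}   # default assumption for cups
    diagnostic  = {}                            # e.g. {('cup1','in'): None} once defeated

    def believe(key):
        for layer in (diagnostic, instance, commonsense):   # top layer wins
            if key in layer:
                return layer[key]
        return 'unknown'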

  • 147.
    Hanson, Lars
    et al.
    Scania CV AB, Södertälje, Sweden.
    Ore, Fredrik
    Mälardalens högskola, Innovation och produktrealisering.
    Wiktorsson, Magnus
    Mälardalens högskola, Innovation och produktrealisering.
    Virtual Verification of Human-Industrial robot Collaboration in Truck Tyre Assembly2015In: Proceedings 19th Triennial Congress of the IEA, 2015Conference paper (Refereed)
    Abstract [en]

    Human-industrial robot collaboration has been introduced as the ultimate combination for industry: the endurance and strength of a robot are combined with a human's flexibility, precision and quality skills. One challenge in the implementation of human-industrial robot collaboration is to create a safe working station for the operators, and therefore most research focuses on these safety aspects. Industrial designers and engineers verify and optimise workstations in different simulation and visualisation tools in order to improve competitiveness, reduce late changes and reduce cost. Several robot tools and digital human modelling tools are available, but few, if any, simulation and visualisation tools include both humans and robots. The aim of this paper is to illustrate how a unique piece of software can be used to verify human-industrial robot collaboration. The software is a combination of the robot simulation tool IPS and the digital human modelling tool IMMA. The software demonstration is promising, covering the gap between digital human modelling tools and robot simulation tools. The simulation and visualisation tools generate pictures and animations, as well as quantified numbers to aid well-founded decision-making. The demonstration software was used to analyse a truck tyre assembly station, comparing fully manual assembly, fully automated assembly and human-industrial robot collaboration.

    Practitioner Summary: The paper illustrates simulation and visualisation software for the virtual verification of human-industrial robot collaboration. The demonstrated software is a combination of the robot simulation tool IPS and the digital human modelling tool IMMA. The software demonstration is promising, covering the gap between digital human modelling tools and robot simulation tools.

    Keywords: ergonomics, digital human modelling, robot simulation, simulation and visualisation

  • 148.
    Hata, Alberto
    et al.
    Ericsson Research.
    Inam, Rafia
    Ericsson Research.
    Raizer, Klaus
    Ericsson Research.
    Wang, Shaolei
    KTH.
    Cao, Enyo
    KTH.
    AI-based Safety Analysis for Collaborative Mobile Robots2019In: Proceedings of the 2019 24th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), 2019, p. 1722-1729Conference paper (Refereed)
    Abstract [en]

    This paper presents implementation details of AI methods used for safety in human-robot collaborative scenarios, based on fuzzy logic and state-of-the-art deep-learning-based perception. For semantic representation of the environment, a scene graph encodes the robots' surroundings with information from cameras, which is then fed to a risk management system for safety analysis. Transfer learning effects were observed when training was started from weights pre-trained on the ImageNet and COCO datasets. A fuzzy logic solution for risk analysis, evaluation and mitigation is compared to a neuro-fuzzy approach. Experiments were performed in a physically realistic 3D simulation of a warehouse environment to evaluate which configuration gives the best performance for robotic perception and risk mitigation.
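
    A tiny Mamdani-style sketch of fuzzy risk evaluation (membership breakpoints, rule base and risk levels are invented; the paper's rule base is richer): risk is high when a detected human is near and the robot moves fast.

    def ramp_down(x, lo, hi):   # membership 1 below lo, 0 above hi
        return max(0.0, min(1.0, (hi - x) / (hi - lo)))

    def ramp_up(x, lo, hi):     # membership 0 below lo, 1 above hi
        return max(0.0, min(1.0, (x - lo) / (hi - lo)))

    def risk(distance_m, speed_ms):
        near, far  = ramp_down(distance_m, 0.5, 2.5), ramp_up(distance_m, 0.5, 2.5)
        slow, fast = ramp_down(speed_ms, 0.3, 1.5), ramp_up(speed_ms, 0.3, 1.5)
        r_high = min(near, fast)                        # IF near AND fast THEN high
        r_med  = max(min(near, slow), min(far, fast))
        r_low  = min(far, slow)
        num = 1.0 * r_high + 0.5 * r_med + 0.1 * r_low  # weighted-centroid defuzzify
        den = r_high + r_med + r_low
        return num / den if den else 0.0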

  • 149.
    Haustein, Joshua A.
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Cruciani, Silvia
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Asif, Rizwan
    KTH.
    Hang, Kaiyu
    KTH.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Placing Objects with prior In-Hand Manipulation using Dexterous Manipulation Graphs2019Conference paper (Refereed)
    Abstract [en]

    We address the problem of planning the placement of a grasped object with a robot manipulator. More specifically, the robot is tasked to place the grasped object such that a placement preference function is maximized. For this, we present an approach that uses in-hand manipulation to adjust the robot's initial grasp and thereby extend the set of reachable placements. Given an initial grasp, the algorithm computes a set of grasps that can be reached by pushing and rotating the object in-hand. With this set of reachable grasps, it then searches for a stable placement that maximizes the preference function. If successful, it returns a sequence of in-hand pushes that adjust the initial grasp to a more advantageous one, together with a transport motion that carries the object to the placement. We evaluate the algorithm's performance in various placing scenarios and observe its effectiveness also in challenging scenes containing many obstacles. Our experiments demonstrate that regrasping with in-hand manipulation increases the quality of the placements the robot can reach. In particular, it enables the algorithm to find solutions in situations where safe placing with the initial grasp would not be possible.
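
    The two-stage structure lends itself to a sketch (all callables below are assumptions about interfaces, not the authors' Dexterous Manipulation Graph code): first enumerate the grasps reachable from the initial grasp by in-hand pushes, then pick the best-scoring stable placement reachable with any of them.

    from collections import deque

    def plan_placement(g0, push_neighbors, placements, reachable, stable, preference):
        """BFS over a finite graph of in-hand regrasps, then placement
        selection by the preference function."""
        grasps, frontier = {g0}, deque([g0])
        while frontier:                         # stage 1: reachable grasp set
            g = frontier.popleft()
            for g2 in push_neighbors(g):
                if g2 not in grasps:
                    grasps.add(g2)
                    frontier.append(g2)
        candidates = ((p, g) for p in placements for g in grasps
                      if stable(p) and reachable(g, p))
        return max(candidates, key=lambda pg: preference(pg[0]), default=None)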

  • 150.
    Hedström, Andreas
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Lundberg, Carl
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    A wearable GUI for field robots2006In: Field and Service Robotics / [ed] Corke, P; Sukkarieh, S, BERLIN: SPRINGER-VERLAG BERLIN , 2006, Vol. 25, p. 367-376Conference paper (Refereed)
    Abstract [en]

    In most search and rescue or reconnaissance missions involving field robots, the requirement that the operator remain mobile and alert to sudden changes in the nearby environment is just as important as the ability to control the robot proficiently. This implies that the GUI platform should be lightweight and portable, and that the GUI itself should be carefully designed for the task at hand. In this paper, different platform solutions and the design of a user-friendly GUI for a PackBot are discussed. Our current wearable system is presented along with results from initial field tests in urban search and rescue facilities.
