1 - 50 of 100
  • 1. Abbeloos, W.
    et al.
    Caccamo, Sergio
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Ataer-Cansizoglu, E.
    Taguchi, Y.
    Feng, C.
    Lee, T. -Y
    Detecting and Grouping Identical Objects for Region Proposal and Classification. 2017. In: 2017 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, IEEE Computer Society, 2017, Vol. 2017, p. 501-502, article id 8014810. Conference paper (Refereed)
    Abstract [en]

    Often multiple instances of an object occur in the same scene, for example in a warehouse. Unsupervised multi-instance object discovery algorithms are able to detect and identify such objects. We use such an algorithm to provide object proposals to a convolutional neural network (CNN) based classifier. This results in fewer regions to evaluate, compared to traditional region proposal algorithms. Additionally, it enables using the joint probability of multiple instances of an object, resulting in improved classification accuracy. The proposed technique can also split a single class into multiple sub-classes corresponding to the different object types, enabling hierarchical classification.
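The joint-probability idea in this abstract can be sketched as follows: when several proposal regions are believed to contain identical objects, their per-region CNN class scores can be fused under a conditional-independence assumption. This is one plausible reading of the abstract, not the authors' exact formulation:

```python
import numpy as np

def joint_class_probs(instance_probs):
    """Fuse per-instance class distributions for regions believed to
    show the same object type, assuming the per-region CNN outputs
    are conditionally independent given the class."""
    logp = np.log(np.asarray(instance_probs, dtype=float) + 1e-12)
    log_joint = logp.sum(axis=0)
    joint = np.exp(log_joint - log_joint.max())  # stabilise before normalising
    return joint / joint.sum()

# Three detections of the same item, each individually ambiguous:
probs = joint_class_probs([[0.40, 0.35, 0.25],
                           [0.45, 0.30, 0.25],
                           [0.50, 0.30, 0.20]])
```

Even though no single detection is confident, the fused distribution concentrates on the shared class, which is the claimed accuracy benefit.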

  • 2.
    Abdulaziz Ali Haseeb, Mohamed
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Passive gesture recognition on unmodified smartphones using Wi-Fi RSSI. 2017. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The smartphone has become a common device carried by hundreds of millions of people worldwide, and is used to accomplish a multitude of different tasks such as basic communication, internet browsing, online shopping and fitness tracking. Because the smartphone is constrained by its small size and tight energy budget, the human-smartphone interface is largely bound to its small screen and simple keypad, which prohibits introducing new, rich ways of interacting with smartphones.

    Industry and the research community are working extensively to enrich the human-smartphone interface, either by exploiting the smartphone's existing resources, such as microphones, cameras and inertial sensors, or by introducing new specialized sensing capabilities into smartphones, such as compact gesture-sensing radar devices.

    The prevalence of Radio Frequency (RF) signals and their limited power needs led us to investigate using RF signals received by smartphones to recognize gestures and activities around smartphones. This thesis introduces a solution for recognizing touch-less dynamic hand gestures from the Wi-Fi Received Signal Strength (RSS) received by the smartphone, using a recurrent neural network (RNN) based probabilistic model. Unlike other Wi-Fi based gesture recognition solutions, the one introduced in this thesis requires no change to the smartphone hardware or operating system, and performs the hand gesture recognition without interfering with the normal operation of other smartphone applications.

    The developed hand gesture recognition solution achieved a mean accuracy of 78% in detecting and classifying three hand gestures in an online setting involving different spatial and traffic scenarios between the smartphone and Wi-Fi access points (AP). Furthermore, the characteristics of the developed solution were studied, and a set of improvements has been suggested for future work.
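The RNN-based classifier described above can be sketched as a forward pass over an RSSI sequence ending in a softmax over gesture classes. The weights below are untrained placeholders; the thesis model is learned from labelled RSS traces:

```python
import numpy as np

def rnn_gesture_probs(rssi_seq, Wx, Wh, Wo):
    """Single-layer Elman RNN over a sequence of Wi-Fi RSSI readings
    (dBm), returning a softmax distribution over gesture classes."""
    h = np.zeros(Wh.shape[0])
    for x in rssi_seq:
        h = np.tanh(Wx @ np.atleast_1d(float(x)) + Wh @ h)
    logits = Wo @ h
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Untrained placeholder weights, 8 hidden units, 3 gesture classes:
rng = np.random.default_rng(0)
Wx = rng.normal(size=(8, 1))
Wh = rng.normal(size=(8, 8))
Wo = rng.normal(size=(3, 8))
gesture_probs = rnn_gesture_probs([-42, -45, -60, -58, -44], Wx, Wh, Wo)
```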

  • 3. Agarwal, P.
    et al.
    Al Moubayed, Samer
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Alspach, A.
    Kim, J.
    Carter, E. J.
    Lehman, J. F.
    Yamane, K.
    Imitating human movement with teleoperated robotic head. 2016. In: 25th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2016, IEEE, 2016, p. 630-637. Conference paper (Refereed)
    Abstract [en]

    Effective teleoperation requires real-time control of a remote robotic system. In this work, we develop a controller for realizing smooth and accurate motion of a robotic head with application to a teleoperation system for the Furhat robot head [1], which we call TeleFurhat. The controller uses the head motion of an operator measured by a Microsoft Kinect 2 sensor as reference and applies a processing framework to condition and render the motion on the robot head. The processing framework includes a pre-filter based on a moving average filter, a neural network-based model for improving the accuracy of the raw pose measurements of Kinect, and a constrained-state Kalman filter that uses a minimum jerk model to smooth motion trajectories and limit the magnitude of changes in position, velocity, and acceleration. Our results demonstrate that the robot can reproduce the human head motion in real time with a latency of approximately 100 to 170 ms while operating within its physical limits. Furthermore, viewers prefer our new method over rendering the raw pose data from Kinect.
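The pre-filtering and motion-limiting stages of the pipeline above can be sketched in one dimension. The moving average matches the paper's pre-filter; the rate limiter is a crude stand-in for the constrained-state Kalman filter with a minimum-jerk model (limits here only clamp velocity, and the numbers are illustrative):

```python
import numpy as np

def moving_average(x, window=5):
    """Pre-filter: smooth raw head-pose samples with a moving average."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

def rate_limit(x, dt=1.0 / 30.0, v_max=2.0):
    """Clamp the per-step change of a 1-D pose trajectory so the
    commanded velocity never exceeds v_max rad/s."""
    y = np.empty_like(np.asarray(x, dtype=float))
    y[0] = x[0]
    max_step = v_max * dt
    for i in range(1, len(x)):
        y[i] = y[i - 1] + np.clip(x[i] - y[i - 1], -max_step, max_step)
    return y

noisy_yaw = np.concatenate([np.zeros(15), np.ones(15)])  # abrupt head turn
commanded = rate_limit(moving_average(noisy_yaw))
```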

  • 4. Agarwal, Priyanshu
    et al.
    Al Moubayed, Samer
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Alspach, Alexander
    Kim, Joohyung
    Carter, Elizabeth J.
    Lehman, Jill Fain
    Yamane, Katsu
    Imitating Human Movement with Teleoperated Robotic Head. 2016. In: 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2016, p. 630-637. Conference paper (Refereed)
    Abstract [en]

    Effective teleoperation requires real-time control of a remote robotic system. In this work, we develop a controller for realizing smooth and accurate motion of a robotic head with application to a teleoperation system for the Furhat robot head [1], which we call TeleFurhat. The controller uses the head motion of an operator measured by a Microsoft Kinect 2 sensor as reference and applies a processing framework to condition and render the motion on the robot head. The processing framework includes a pre-filter based on a moving average filter, a neural network-based model for improving the accuracy of the raw pose measurements of Kinect, and a constrained-state Kalman filter that uses a minimum jerk model to smooth motion trajectories and limit the magnitude of changes in position, velocity, and acceleration. Our results demonstrate that the robot can reproduce the human head motion in real time with a latency of approximately 100 to 170 ms while operating within its physical limits. Furthermore, viewers prefer our new method over rendering the raw pose data from Kinect.

  • 5.
    Almeida, Diogo
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH.
    Ambrus, Rares
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Caccamo, Sergio
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Chen, Xi
    KTH.
    Cruciani, Silvia
    Pinto Basto De Carvalho, Joao F
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Haustein, Joshua
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Marzinotto, Alejandro
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Vina, Francisco
    KTH.
    Karayiannidis, Yannis
    KTH.
    Ögren, Petter
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Jensfelt, Patric
    KTH, Superseded Departments (pre-2005), Signals, Sensors and Systems. KTH, Superseded Departments (pre-2005), Numerical Analysis and Computer Science, NADA.
    Kragic, Danica
    KTH, Superseded Departments (pre-2005), Numerical Analysis and Computer Science, NADA.
    Team KTH’s Picking Solution for the Amazon Picking Challenge 2016. 2017. In: Warehouse Picking Automation Workshop 2017: Solutions, Experience, Learnings and Outlook of the Amazon Robotics Challenge, 2017. Conference paper (Other (popular science, discussion, etc.))
    Abstract [en]

    In this work we summarize the solution developed by Team KTH for the Amazon Picking Challenge 2016 in Leipzig, Germany. The competition simulated a warehouse automation scenario and it was divided in two tasks: a picking task where a robot picks items from a shelf and places them in a tote and a stowing task which is the inverse task where the robot picks items from a tote and places them in a shelf. We describe our approach to the problem starting from a high level overview of our system and later delving into details of our perception pipeline and our strategy for manipulation and grasping. The solution was implemented using a Baxter robot equipped with additional sensors.

  • 6.
    Almeida, Diogo
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH.
    Karayiannidis, Yiannis
    Chalmers, Sweden.
    Dexterous manipulation by means of compliant grasps and external contacts. 2017. In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2017, IEEE, 2017, p. 1913-1920, article id 8206010. Conference paper (Refereed)
    Abstract [en]

    We propose a method that allows for dexterous manipulation of an object by exploiting contact with an external surface. The technique requires a compliant grasp, enabling the motion of the object in the robot hand while allowing for significant contact forces to be present on the external surface. We show that under this type of grasp it is possible to estimate and control the pose of the object with respect to the surface, leveraging the trade-off between force control and manipulative dexterity. The method is independent of the object geometry, relying only on the assumptions of type of grasp and the existence of a contact with a known surface. Furthermore, by adapting the estimated grasp compliance, the method can handle unmodelled effects. The approach is demonstrated and evaluated with experiments on object pose regulation and pivoting against a rigid surface, where a mechanical spring provides the required compliance.

  • 7.
    Almeida, Diogo
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. Royal Inst Technol KTH, Ctr Autonomous Syst, Sch Comp Sci & Commun, Robot Percept & Learning Lab, SE-10044 Stockholm, Sweden..
    Karayiannidis, Yiannis
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. Chalmers Univ Technol, Dept Signals & Syst, SE-41296 Gothenburg, Sweden..
    Dexterous Manipulation with Compliant Grasps and External Contacts. 2017. In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) / [ed] Bicchi, A., Okamura, A., IEEE, 2017, p. 1913-1920. Conference paper (Refereed)
    Abstract [en]

    We propose a method that allows for dexterous manipulation of an object by exploiting contact with an external surface. The technique requires a compliant grasp, enabling the motion of the object in the robot hand while allowing for significant contact forces to be present on the external surface. We show that under this type of grasp it is possible to estimate and control the pose of the object with respect to the surface, leveraging the trade-off between force control and manipulative dexterity. The method is independent of the object geometry, relying only on the assumptions of type of grasp and the existence of a contact with a known surface. Furthermore, by adapting the estimated grasp compliance, the method can handle unmodelled effects. The approach is demonstrated and evaluated with experiments on object pose regulation and pivoting against a rigid surface, where a mechanical spring provides the required compliance.
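The grasp-compliance adaptation mentioned above can be sketched as a batch least-squares fit of a linear spring model. The paper does not prescribe this exact formulation; a recursive version of the same fit is what would let a controller adapt the estimate online:

```python
import numpy as np

def estimate_compliance(displacements, forces):
    """Least-squares estimate of a linear grasp stiffness k in
    f = k * dx; the returned value is the compliance 1/k."""
    dx = np.asarray(displacements, dtype=float)
    f = np.asarray(forces, dtype=float)
    k = float(dx @ f) / float(dx @ dx)
    return 1.0 / k

# Synthetic spring with stiffness 200 N/m (compliance 0.005 m/N):
dx = np.array([0.001, 0.002, 0.004, 0.006])
compliance = estimate_compliance(dx, 200.0 * dx)
```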

  • 8.
    Almeida, Diogo
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Viña, Francisco E.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Karayiannidis, Yiannis
    Bimanual Folding Assembly: Switched Control and Contact Point Estimation. 2016. In: IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), Cancun: IEEE, 2016. Conference paper (Refereed)
    Abstract [en]

    Robotic assembly in unstructured environments is a challenging task, due to the added uncertainties. These can be mitigated through the employment of assembly systems, which offer a modular approach to the assembly problem via the conjunction of primitives. In this paper, we use a dual-arm manipulator in order to execute a folding assembly primitive. When executing a folding primitive, two parts are brought into rigid contact and posteriorly translated and rotated. A switched controller is employed in order to ensure that the relative motion of the parts follows the desired model, while regulating the contact forces. The control is complemented with an estimator based on a Kalman filter, which tracks the contact point between parts based on force and torque measurements. Experimental results are provided, and the effectiveness of the control and contact point estimation is shown.
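The geometry behind tracking a contact point from force and torque measurements can be sketched as a batch least-squares problem: each wrench satisfies tau = r x f, which is linear in the unknown contact point r. The paper instead uses a Kalman filter to track r over time; this one-shot fit only illustrates the measurement model:

```python
import numpy as np

def skew(v):
    """Matrix form of the cross product: skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def estimate_contact_point(forces, torques):
    """Least-squares contact point r from force/torque pairs, using
    tau = r x f = -skew(f) @ r. Needs non-parallel force directions,
    since the component of r along f is unobservable from one wrench."""
    A = np.vstack([-skew(f) for f in forces])
    b = np.concatenate(torques)
    r, *_ = np.linalg.lstsq(A, b, rcond=None)
    return r

r_true = np.array([0.10, -0.20, 0.05])
forces = [np.array([1.0, 0.0, 0.5]),
          np.array([0.0, 2.0, -1.0]),
          np.array([-1.0, 1.0, 0.0])]
torques = [np.cross(r_true, f) for f in forces]
r_est = estimate_contact_point(forces, torques)
```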

  • 9. Alomari, M.
    et al.
    Duckworth, P.
    Bore, Nils
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Hawasly, M.
    Hogg, D. C.
    Cohn, A. G.
    Grounding of human environments and activities for autonomous robots. 2017. In: IJCAI International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence, 2017, p. 1395-1402. Conference paper (Refereed)
    Abstract [en]

    With the recent proliferation of human-oriented robotic applications in domestic and industrial scenarios, it is vital for robots to continually learn about their environments and about the humans they share their environments with. In this paper, we present a novel, online, incremental framework for unsupervised symbol grounding in real-world, human environments for autonomous robots. We demonstrate the flexibility of the framework by learning about colours, people names, usable objects and simple human activities, integrating state-of-the-art object segmentation, pose estimation and activity analysis along with a number of sensory input encodings into a continual learning framework. Natural language is grounded to the learned concepts, enabling the robot to communicate in a human-understandable way. We show, using a challenging real-world dataset of human activities as perceived by a mobile robot, that our framework is able to extract useful concepts, ground natural language descriptions to them, and, as a proof-of-concept, generate simple sentences from templates to describe people and the activities they are engaged in.

  • 10.
    Ambrus, Rares
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Unsupervised construction of 4D semantic maps in a long-term autonomy scenario. 2017. Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Robots are operating for longer times and collecting much more data than just a few years ago. In this setting we are interested in exploring ways of modeling the environment, segmenting out areas of interest and keeping track of the segmentations over time, with the purpose of building 4D models (i.e. space and time) of the relevant parts of the environment.

    Our approach relies on repeatedly observing the environment and creating local maps at specific locations. The first question we address is how to choose where to build these local maps. Traditionally, an operator defines a set of waypoints on a pre-built map of the environment which the robot visits autonomously. Instead, we propose a method to automatically extract semantically meaningful regions from a point cloud representation of the environment. The resulting segmentation is purely geometric, and in the context of mobile robots operating in human environments, the semantic label associated with each segment (i.e. kitchen, office) can be of interest for a variety of applications. We therefore also look at how to obtain per-pixel semantic labels given the geometric segmentation, by fusing probabilistic distributions over scene and object types in a Conditional Random Field.

    For most robotic systems, the elements of interest in the environment are the ones which exhibit some dynamic properties (such as people, chairs, cups, etc.), and the ability to detect and segment such elements provides a very useful initial segmentation of the scene. We propose a method to iteratively build a static map from observations of the same scene acquired at different points in time. Dynamic elements are obtained by computing the difference between the static map and new observations. We address the problem of clustering together dynamic elements which correspond to the same physical object, observed at different points in time and in significantly different circumstances. To address some of the inherent limitations in the sensors used, we autonomously plan, navigate around and obtain additional views of the segmented dynamic elements. We look at methods of fusing the additional data and we show that both a combined point cloud model and a fused mesh representation can be used to more robustly recognize the dynamic object in future observations. In the case of the mesh representation, we also show how a Convolutional Neural Network can be trained for recognition by using mesh renderings.

    Finally, we present a number of methods to analyse the data acquired by the mobile robot autonomously and over extended time periods. First, we look at how the dynamic segmentations can be used to derive a probabilistic prior which can be used in the mapping process to further improve and reinforce the segmentation accuracy. We also investigate how to leverage spatial-temporal constraints in order to cluster dynamic elements observed at different points in time and under different circumstances. We show that by making a few simple assumptions we can increase the clustering accuracy even when the object appearance varies significantly between observations. The result of the clustering is a spatial-temporal footprint of the dynamic object, defining an area where the object is likely to be observed spatially as well as a set of time stamps corresponding to when the object was previously observed. Using this data, predictive models can be created and used to infer future times when the object is more likely to be observed. In an object search scenario, this model can be used to decrease the search time when looking for specific objects.
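The core change-detection step described in this thesis — dynamic elements obtained as the difference between a static map and a new observation — can be sketched on raw point clouds. Brute-force distances are used here for clarity; real clouds would use a k-d tree, and the threshold is an illustrative value:

```python
import numpy as np

def dynamic_points(static_map, observation, thresh=0.05):
    """Return observed points farther than `thresh` metres from every
    static-map point, i.e. candidate dynamic elements of the scene."""
    d = np.linalg.norm(observation[:, None, :] - static_map[None, :, :], axis=2)
    return observation[d.min(axis=1) > thresh]

static_map = np.array([[0.0, 0.0, 0.0],
                       [1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0]])
obs = np.array([[0.0, 0.0, 0.01],   # matches the static structure
                [0.5, 0.5, 0.5]])   # new object moved into the scene
moved = dynamic_points(static_map, obs)
```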

  • 11.
    Ambrus, Rares
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bore, Nils
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Autonomous meshing, texturing and recognition of object models with a mobile robot. 2017. Conference paper (Refereed)
    Abstract [en]

    We present a system for creating object models from RGB-D views acquired autonomously by a mobile robot. We create high-quality textured meshes of the objects by approximating the underlying geometry with a Poisson surface. Our system employs two optimization steps, first registering the views spatially based on image features, and second aligning the RGB images to maximize photometric consistency with respect to the reconstructed mesh. We show that the resulting models can be used robustly for recognition by training a Convolutional Neural Network (CNN) on images rendered from the reconstructed meshes. We perform experiments on data collected autonomously by a mobile robot both in controlled and uncontrolled scenarios. We compare quantitatively and qualitatively to previous work to validate our approach.

  • 12.
    Ambrus, Rares
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Claici, Sebastian
    Wendt, Axel
    Automatic Room Segmentation From Unstructured 3-D Data of Indoor Environments. 2017. In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 2, no 2, p. 749-756. Article in journal (Refereed)
    Abstract [en]

    We present an automatic approach for the task of reconstructing a 2-D floor plan from unstructured point clouds of building interiors. Our approach emphasizes accurate and robust detection of building structural elements and, unlike previous approaches, does not require prior knowledge of scanning device poses. The reconstruction task is formulated as a multiclass labeling problem that we approach using energy minimization. We use intuitive priors to define the costs for the energy minimization problem and rely on accurate wall and opening detection algorithms to ensure robustness. We provide detailed experimental evaluation results, both qualitative and quantitative, against state-of-the-art methods and labeled ground-truth data.
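The multiclass-labeling-by-energy-minimization formulation above pairs unary costs (how well a region matches each label) with smoothness priors between neighbours. A minimal 1-D Potts-model illustration using iterated conditional modes follows; the paper minimises a richer energy over 3-D data, so this toy chain only shows the unary-plus-smoothness idea:

```python
import numpy as np

def icm_labels(unary, beta=1.0, iters=10):
    """Iterated conditional modes for a 1-D Potts model:
    E(l) = sum_i unary[i, l_i] + beta * sum_i [l_i != l_{i+1}].
    Greedily re-labels each site until a local minimum is reached."""
    n, k = unary.shape
    labels = unary.argmin(axis=1)
    for _ in range(iters):
        for i in range(n):
            cost = unary[i].astype(float).copy()
            if i > 0:
                cost += beta * (np.arange(k) != labels[i - 1])
            if i + 1 < n:
                cost += beta * (np.arange(k) != labels[i + 1])
            labels[i] = cost.argmin()
    return labels

# Five wall segments; the middle one has weak evidence for label 1,
# which the smoothness prior overrides:
unary = np.array([[0.0, 1.0], [0.0, 1.0], [0.4, 0.3], [0.0, 1.0], [0.0, 1.0]])
labels = icm_labels(unary, beta=1.0)
```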

  • 13.
    Ambrus, Rares
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Unsupervised object segmentation through change detection in a long term autonomy scenario. 2016. In: IEEE-RAS International Conference on Humanoid Robots, IEEE, 2016, p. 1181-1187. Conference paper (Refereed)
    Abstract [en]

    In this work we address the problem of dynamic object segmentation in office environments. We make no prior assumptions on what is dynamic and static, and our reasoning is based on change detection between sparse and non-uniform observations of the scene. We model the static part of the environment, and we focus on improving the accuracy and quality of the segmented dynamic objects over long periods of time. We address the issue of adapting the static structure over time and incorporating new elements, for which we train and use a classifier whose output gives an indication of the dynamic nature of the segmented elements. We show that the proposed algorithms improve the accuracy and the rate of detection of dynamic objects by comparing with a labelled dataset.

  • 14.
    Antonova, Rika
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Cruciani, Silvia
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Reinforcement Learning for Pivoting Task. Manuscript (preprint) (Other academic)
    Abstract [en]

    In this work we propose an approach to learn a robust policy for solving the pivoting task. Recently, several model-free continuous control algorithms were shown to learn successful policies without prior knowledge of the dynamics of the task. However, obtaining successful policies required thousands to millions of training episodes, limiting the applicability of these approaches to real hardware. We developed a training procedure that allows us to use a simple custom simulator to learn policies robust to the mismatch of simulation vs robot. In our experiments, we demonstrate that the policy learned in the simulator is able to pivot the object to the desired target angle on the real robot. We also show generalization to an object with different inertia, shape, mass and friction properties than those used during training. This result is a step towards making model-free reinforcement learning available for solving robotics tasks via pre-training in simulators that offer only an imprecise match to the real-world dynamics.
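The training procedure above hinges on randomising the simulator so the policy survives the sim-to-real mismatch. A sketch of per-episode parameter randomisation follows; the property names and ranges are illustrative, not the paper's actual values:

```python
import random

def sample_sim_params(rng):
    """Randomise object properties at the start of each simulated
    training episode so the learned pivoting policy must work across
    a range of mass, friction and inertia values."""
    return {
        "mass_kg": rng.uniform(0.05, 0.5),
        "friction": rng.uniform(0.2, 1.0),
        "inertia_scale": rng.uniform(0.8, 1.2),
    }

rng = random.Random(0)
episodes = [sample_sim_params(rng) for _ in range(100)]
```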

  • 15.
    Antonova, Rika
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Rai, Akshara
    Robotics Institute, School of Computer Science, Carnegie Mellon University, USA.
    Atkeson, Christopher G.
    Robotics Institute, School of Computer Science, Carnegie Mellon University, USA.
    Deep kernels for optimizing locomotion controllers. 2017. In: Proceedings of the 1st Annual Conference on Robot Learning, PMLR, 2017. Conference paper (Refereed)
    Abstract [en]

    Sample efficiency is important when optimizing parameters of locomotion controllers, since hardware experiments are time consuming and expensive. Bayesian Optimization, a sample-efficient optimization framework, has recently been widely applied to address this problem, but further improvements in sample efficiency are needed for practical applicability to real-world robots and high-dimensional controllers. To address this, prior work has proposed using domain expertise for constructing custom distance metrics for locomotion. In this work we show how to learn such a distance metric automatically. We use a neural network to learn an informed distance metric from data obtained in high-fidelity simulations. We conduct experiments on two different controllers and robot architectures. First, we demonstrate improvement in sample efficiency when optimizing a 5-dimensional controller on the ATRIAS robot hardware. We then conduct simulation experiments to optimize a 16-dimensional controller for a 7-link robot model and obtain significant improvements even when optimizing in perturbed environments. This demonstrates that our approach is able to enhance sample efficiency for two different controllers, hence is a fitting candidate for further experiments on hardware in the future.
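The "deep kernel" idea above amounts to evaluating a standard kernel in a learned feature space. A sketch follows in which a random projection stands in for the trained network; the paper instead learns the feature map from high-fidelity simulation data:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16)) / np.sqrt(16)  # stand-in for trained weights

def phi(x):
    """Hypothetical learned feature map for 16-dim controller params."""
    return np.tanh(W @ x)

def deep_kernel(x1, x2, lengthscale=1.0):
    """Squared-exponential kernel evaluated in the learned feature
    space, usable as a Gaussian-process covariance for Bayesian
    optimisation of controller parameters."""
    d = phi(x1) - phi(x2)
    return float(np.exp(-0.5 * (d @ d) / lengthscale**2))

a = rng.normal(size=16)
b = rng.normal(size=16)
```

Because the kernel is an ordinary squared exponential applied to phi(x), it stays a valid covariance function while the learned features decide which controllers count as "similar".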

  • 16.
    Antonova, Rika
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. Robotics Institute, School of Computer Science, Carnegie Mellon University, USA.
    Rai, Akshara
    Robotics Institute, School of Computer Science, Carnegie Mellon University, USA.
    Atkeson, Christopher G.
    Robotics Institute, School of Computer Science, Carnegie Mellon University, USA.
    Sample efficient optimization for learning controllers for bipedal locomotion. 2016. Conference paper (Refereed)
    Abstract [en]

    Learning policies for bipedal locomotion can be difficult, as experiments are expensive and simulation does not usually transfer well to hardware. To counter this, we need algorithms that are sample efficient and inherently safe. Bayesian Optimization is a powerful sample-efficient tool for optimizing non-convex black-box functions. However, its performance can degrade in higher dimensions. We develop a distance metric for bipedal locomotion that enhances the sample-efficiency of Bayesian Optimization and use it to train a 16-dimensional neuromuscular model for planar walking. This distance metric reflects some basic gait features of healthy walking and helps us quickly eliminate a majority of unstable controllers. With our approach we can learn policies for walking in less than 100 trials for a range of challenging settings. In simulation, we show results on two different costs and on various terrains including rough ground and ramps, sloping upwards and downwards. We also perturb our models with unknown inertial disturbances analogous to differences between simulation and hardware. These results are promising, as they indicate that this method can potentially be used to learn control policies on hardware.

  • 17.
    Ay, Emre
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Ego-Motion Estimation of Drones. 2017. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    To remove the dependency on external structure for drone positioning in GPS-denied environments, it is desirable to estimate the ego-motion of drones on-board. Visual positioning systems have been studied for quite some time and the literature on the area is extensive. The aim of this project is to investigate the currently available methods and implement a visual odometry system for drones which is capable of giving continuous estimates with a lightweight solution. To that end, state-of-the-art systems are investigated and a visual odometry system is implemented based on the resulting design decisions. The resulting system is shown to give acceptable estimates.

  • 18.
    Beskow, Jonas
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Peters, Christopher
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Castellano, G.
    O'Sullivan, C.
    Leite, Iolanda
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Kopp, S.
    Preface (2017). In: 17th International Conference on Intelligent Virtual Agents, IVA 2017, Springer, 2017, Vol. 10498, p. V-VI. Conference paper (Refereed).
  • 19.
    Binz, Marcel
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Learning Goal-Directed Behaviour (2017). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Learning behaviour of artificial agents is commonly studied in the framework of Reinforcement Learning, which has gained increasing popularity in recent years. This is partially due to developments that enabled the possibility to employ complex function approximators, such as deep networks, in combination with the framework. Two of the core challenges in Reinforcement Learning are the correct assignment of credit over long periods of time and dealing with sparse rewards. In this thesis we propose a framework based on the notion of goals to tackle these problems. This work implements several components required to obtain a form of goal-directed behaviour, similar to how it is observed in human reasoning: the representation of a goal space, learning how to set goals, and finally how to reach them. The framework itself is built upon the options model, which is a common approach for representing temporally extended actions in Reinforcement Learning. All components of the proposed method can be implemented as deep networks, and the complete system can be learned in an end-to-end fashion using standard optimization techniques. We evaluate the approach on a set of continuous control problems of increasing difficulty. We show that we are able to solve a difficult gathering task, which poses a challenge to state-of-the-art Reinforcement Learning algorithms. The presented approach is furthermore able to scale to complex kinematic agents of the MuJoCo benchmark.

  • 20.
    Bore, Nils
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Detection and Tracking of General Movable Objects in Large 3D Maps. Manuscript (preprint) (Other academic).
    Abstract [en]

    This paper studies the problem of detection and tracking of general objects with long-term dynamics, observed by a mobile robot moving in a large environment. A key problem is that due to the environment scale, it can only observe a subset of the objects at any given time. Since some time passes between observations of objects in different places, the objects might be moved when the robot is not there. We propose a model for this movement in which the objects typically only move locally, but with some small probability they jump longer distances, through what we call global motion. For filtering, we decompose the posterior over local and global movements into two linked processes. The posterior over the global movements and measurement associations is sampled, while we track the local movement analytically using Kalman filters. This novel filter is evaluated on point cloud data gathered autonomously by a mobile robot over an extended period of time. We show that tracking jumping objects is feasible, and that the proposed probabilistic treatment outperforms previous methods when applied to real world data. The key to efficient probabilistic tracking in this scenario is focused sampling of the object posteriors.
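    The analytic, local-movement half of such a decomposed filter can be illustrated with an ordinary Kalman filter. The sketch below is a deliberately simplified stand-in (2D constant-position model, identity observation matrix, hypothetical noise values), not the paper's filter, which additionally samples global jumps and measurement associations:

```python
import numpy as np

class LocalTrack:
    """Kalman filter for an object's local (small, Gaussian) movement.

    State: 2D position. Constant-position model with process noise q
    modelling slow local drift; r models measurement noise.
    All noise magnitudes here are hypothetical.
    """
    def __init__(self, x0, p0=1.0, q=0.01, r=0.05):
        self.x = np.asarray(x0, dtype=float)
        self.P = p0 * np.eye(2)
        self.Q = q * np.eye(2)
        self.R = r * np.eye(2)

    def predict(self):
        # Transition matrix is the identity for a constant-position
        # model, so only the uncertainty grows between observations.
        self.P = self.P + self.Q

    def update(self, z):
        # Standard Kalman update with H = I (position observed directly).
        S = self.P + self.R
        K = self.P @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, dtype=float) - self.x)
        self.P = (np.eye(2) - K) @ self.P
```

    Calling `predict()` for each interval the robot is away inflates the covariance, so an object that has drifted locally can still be associated correctly when re-observed.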

  • 21.
    Bore, Nils
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Multiple Object Detection, Tracking and Long-Term Dynamics Learning in Large 3D Maps. Manuscript (preprint) (Other academic).
    Abstract [en]

    In this work, we present a method for tracking and learning the dynamics of all objects in a large scale robot environment. A mobile robot patrols the environment and visits the different locations one by one. Movable objects are discovered by change detection, and tracked throughout the robot deployment. For tracking, we extend our previous Rao-Blackwellized particle filter with birth and death processes, enabling the method to handle an arbitrary number of objects. Target births and associations are sampled using Gibbs sampling. The parameters of the system are then learnt using the Expectation Maximization algorithm in an unsupervised fashion. The system therefore enables learning of the dynamics of one particular environment, and of its objects. The algorithm is evaluated on data collected autonomously by a mobile robot in an office environment during a real-world deployment. We show that the algorithm automatically identifies and tracks the moving objects within 3D maps and infers plausible dynamics models, significantly decreasing the modeling bias of our previous work. The proposed method represents an improvement over previous methods for environment dynamics learning, as it allows for learning of fine-grained processes.

  • 22.
    Bore, Nils
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Object Instance Detection and Dynamics Modeling in a Long-Term Mobile Robot Context (2017). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    In the last years, simple service robots such as autonomous vacuum cleaners and lawn mowers have become commercially available and increasingly common. The next generation of service robots should perform more advanced tasks, such as cleaning up objects. Robots then need to learn to robustly navigate and manipulate cluttered environments, such as an untidy living room. In this thesis, we focus on representations for tasks such as general cleaning and fetching of objects. We discuss requirements for these specific tasks, and argue that solving them would be generally useful, because of their object-centric nature. We rely on two fundamental insights in our approach to understand environments on a fine-grained level. First, many of today's robot map representations are limited to the spatial domain, and ignore that there is a time axis that constrains how much an environment may change during a given period. We argue that it is of critical importance to also consider the temporal domain. By studying the motion of individual objects, we can enable tasks such as general cleaning and object fetching. The second insight is that mobile robots are becoming more robust, and can therefore collect large amounts of data from their environments. With more data, unsupervised learning of models becomes feasible, allowing the robot to adapt to changes in the environment, and to scenarios that the designer could not foresee. We view these capabilities as vital for robots to become truly autonomous. The combination of unsupervised learning and dynamics modelling creates an interesting symbiosis: the dynamics vary between different environments and between the objects in one environment, and learning can capture these variations. A major difficulty when modeling environment dynamics is that the whole environment cannot be observed at once, since the robot is moving between different places. We demonstrate how this can be dealt with in a principled manner, by modeling several modes of object movement. We also demonstrate methods for detection and learning of objects and structures in the static parts of the maps. Using the complete system, we can represent and learn many aspects of the full environment. In real-world experiments, we demonstrate that our system can keep track of varied objects in large and highly dynamic environments.

  • 23.
    Bütepage, Judith
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Black, Michael J.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Kjellström, Hedvig
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Deep representation learning for human motion prediction and classification (2017). In: 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), IEEE, 2017, p. 1591-1599. Conference paper (Refereed).
    Abstract [en]

    Generative models of 3D human motion are often restricted to a small number of activities and therefore cannot generalize well to novel movements or applications. In this work we propose a deep learning framework for human motion capture data that learns a generic representation from a large corpus of motion capture data and generalizes well to new, unseen motions. Using an encoding-decoding network that learns to predict future 3D poses from the most recent past, we extract a feature representation of human motion. Most work on deep learning for sequence prediction focuses on video and speech. Since skeletal data has a different structure, we present and evaluate different network architectures that make different assumptions about time dependencies and limb correlations. To quantify the learned features, we use the output of different layers for action classification and visualize the receptive fields of the network units. Our method outperforms the recent state of the art in skeletal motion prediction even though those methods use action-specific training data. Our results show that deep feedforward networks, trained from a generic mocap database, can successfully be used for feature extraction from human motion data and that this representation can be used as a foundation for classification and prediction.

  • 24.
    Båberg, Fredrik
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    Formation Obstacle Avoidance using RRT and Constraint Based Programming (2017). In: 2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), IEEE conference proceedings, 2017, article id 8088131. Conference paper (Refereed).
    Abstract [en]

    In this paper, we propose a new way of doing formation obstacle avoidance using a combination of Constraint Based Programming (CBP) and Rapidly Exploring Random Trees (RRTs). RRT is used to select waypoint nodes, and CBP is used to move the formation between those nodes, reactively rotating and translating the formation to pass the obstacles on the way. Thus, the CBP includes constraints for both formation keeping and obstacle avoidance, while striving to move the formation towards the next waypoint. The proposed approach is compared to a pure RRT approach where the motion between the RRT waypoints is done following linear interpolation trajectories, which are less computationally expensive than the CBP ones. The results of a number of challenging simulations show that the proposed approach is more efficient for scenarios with high obstacle densities.
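    The waypoint-selection half of the method is a standard RRT. The sketch below is a generic, single-point 2D RRT with circular obstacles (goal bias, step length, and the segment-sampling collision check are illustrative assumptions); the CBP controller that actually moves the formation between waypoints is not shown:

```python
import numpy as np

def collision_free(p, q, obstacles, steps=20):
    # Sample along the segment p->q and check clearance from circular
    # obstacles given as (center, radius) pairs.
    for s in np.linspace(0.0, 1.0, steps):
        x = p + s * (q - p)
        for c, r in obstacles:
            if np.linalg.norm(x - c) < r:
                return False
    return True

def rrt(start, goal, obstacles, bounds, step=0.5, iters=4000, seed=1):
    rng = np.random.default_rng(seed)
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        # 10% goal bias, otherwise a uniform sample in the square bounds.
        sample = goal if rng.random() < 0.1 else rng.uniform(*bounds, size=2)
        i = min(range(len(nodes)), key=lambda j: np.linalg.norm(nodes[j] - sample))
        d = sample - nodes[i]
        new = nodes[i] + step * d / max(np.linalg.norm(d), 1e-9)
        if collision_free(nodes[i], new, obstacles):
            parent[len(nodes)] = i
            nodes.append(new)
            if (np.linalg.norm(new - goal) < step
                    and collision_free(new, goal, obstacles)):
                # Walk back to the root to recover the waypoint sequence.
                path, k = [goal], len(nodes) - 1
                while k is not None:
                    path.append(nodes[k])
                    k = parent[k]
                return path[::-1]
    return None
```

    In the paper's setting, the returned waypoints would be handed to the CBP layer, which rotates and translates the whole formation between them; the pure-RRT baseline instead interpolates linearly.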

  • 25.
    Carvalho, J. Frederico
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Pequito, S.
    Aguiar, A. P.
    Kar, S.
    Johansson, Karl Henrik
    KTH, School of Electrical Engineering (EES), Automatic Control.
    Composability and controllability of structural linear time-invariant systems: Distributed verification (2017). In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 78, p. 123-134. Article in journal (Refereed).
    Abstract [en]

    Motivated by the development and deployment of large-scale dynamical systems, often comprised of geographically distributed smaller subsystems, we address the problem of verifying their controllability in a distributed manner. Specifically, we study controllability in the structural system theoretic sense, structural controllability, in which rather than focusing on a specific numerical system realization, we provide guarantees for equivalence classes of linear time-invariant systems on the basis of their structural sparsity patterns, i.e., the location of zero/nonzero entries in the plant matrices. Towards this goal, we first provide several necessary and/or sufficient conditions that ensure that the overall system is structurally controllable on the basis of the subsystems’ structural pattern and their interconnections. The proposed verification criteria are shown to be efficiently implementable (i.e., with polynomial time-complexity in the number of the state variables and inputs) in two important subclasses of interconnected dynamical systems: similar (where every subsystem has the same structure) and serial (where every subsystem outputs to at most one other subsystem). Second, we provide an iterative distributed algorithm to verify structural controllability for general interconnected dynamical systems, i.e., it is based on communication among (physically) interconnected subsystems, and requires only local model and interconnection knowledge at each subsystem.
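    One of the classical conditions involved is easy to state computationally: every state variable must be reachable from some input in the digraph of the sparsity pattern. The sketch below checks only this input-accessibility condition, which is necessary but not sufficient (a full structural-controllability test must also rule out dilations, e.g. via maximum matching); it is a centralized illustration, not the paper's distributed algorithm:

```python
from collections import deque

def input_accessible(A_pattern, B_pattern):
    """Necessary condition for structural controllability.

    Sparsity digraph: edge j -> i whenever A[i][j] != 0, and state i is
    input-connected whenever row i of B has a nonzero entry.  Returns
    True iff every state is reachable from some input.
    """
    n = len(A_pattern)
    frontier = deque(i for i in range(n) if any(B_pattern[i]))
    seen = set(frontier)
    while frontier:  # breadth-first search from the input-connected states
        j = frontier.popleft()
        for i in range(n):
            if A_pattern[i][j] and i not in seen:
                seen.add(i)
                frontier.append(i)
    return len(seen) == n
```

    For a two-state chain where x2 is driven by x1, actuating x1 makes both states accessible, while actuating only x2 leaves x1 unreachable.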

  • 26. Ciccozzi, F.
    et al.
    Di Ruscio, D.
    Malavolta, I.
    Pelliccione, P.
    Tumova, Jana
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Engineering the software of robotic systems (2017). In: Proceedings - 2017 IEEE/ACM 39th International Conference on Software Engineering Companion, ICSE-C 2017, Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 507-508, article id 7965406. Conference paper (Refereed).
    Abstract [en]

    The production of software for robotic systems is often case-specific, without fully following established engineering approaches. Systematic approaches, methods, models, and tools are pivotal for the creation of robotic systems for real-world applications and turn-key solutions. Well-defined (software) engineering approaches are considered the 'make or break' factor in the development of complex robotic systems. The shift towards well-defined engineering approaches will stimulate component supply-chains and significantly reshape the robotics marketplace. The goal of this technical briefing is to provide an overview on the state of the art and practice concerning solutions and open challenges in the engineering of software required to develop and manage robotic systems. Model-Driven Engineering (MDE) is discussed as a promising technology to raise the level of abstraction, promote reuse, facilitate integration, boost automation and promote early analysis in such a complex domain.

  • 27.
    Colledanchise, Michele
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Behavior Trees in Robotics (2017). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    Behavior Trees (BTs) are a Control Architecture (CA) that was invented in the video game industry for controlling non-player characters. In this thesis we investigate the possibilities of using BTs for controlling autonomous robots, from a theoretical as well as a practical standpoint. The next generation of robots will need to work, not only in the structured assembly lines of factories, but also in the unpredictable and dynamic environments of homes, shops, and other places where the space is shared with humans, and with different and possibly conflicting objectives. The nature of these environments makes it impossible to first compute the long sequence of actions needed to complete a task, and then blindly execute these actions. One way of addressing this problem is to perform a complete re-planning once a deviation is detected. Another way is to include feedback in the plan, and invoke additional incremental planning only when outside the scope of the feedback built into the plan. However, the feasibility of the latter option depends on the choice of CA, which thereby impacts the way the robot deals with unpredictable environments. In this thesis we address the problem of analyzing BTs as a novel CA for robots. The philosophy of BTs is to create control policies that are both modular and reactive: modular in the sense that control policies can be separated and recombined, and reactive in the sense that they efficiently respond to events that were not predicted, either caused by external agents or by unexpected outcomes of the robot's own actions. Firstly, we propose a new functional formulation of BTs that allows us to mathematically analyze key system properties using standard tools from robot control theory. In particular, we analyze whether a BT is safe, in terms of avoiding particular parts of the state space, and robust, in terms of having a large domain of operation.
    This formulation also allows us to compare BTs with other commonly used CAs, such as Finite State Machines (FSMs), the Subsumption Architecture, Sequential Behavior Compositions, Decision Trees, AND-OR Trees, and Teleo-Reactive Programs. Then we propose a framework to systematically analyze the efficiency and reliability of a given BT, in terms of expected time to completion and success probability. By including these performance measures in a user-defined objective function, we can optimize the order of different fallback options in a given BT to minimize that function. Finally, we show the advantages of using BTs within an Automated Planning framework. In particular we show how to synthesize a policy that is reactive, modular, safe, and fault tolerant with two different approaches: model-based (using planning), and model-free (using learning).
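    The BT execution semantics referred to throughout can be illustrated in a few lines: a Sequence succeeds only if all children succeed, a Fallback fails only if all children fail, and both return early, which is what makes the tree reactive and its subtrees freely recombinable. This is a minimal sketch of the standard semantics, not the thesis's functional formulation:

```python
SUCCESS, FAILURE, RUNNING = "SUCCESS", "FAILURE", "RUNNING"

class Sequence:
    # Ticks children left to right; returns FAILURE or RUNNING as soon
    # as a child does; returns SUCCESS only if every child succeeds.
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for c in self.children:
            s = c.tick()
            if s != SUCCESS:
                return s
        return SUCCESS

class Fallback:
    # Ticks children left to right; returns SUCCESS or RUNNING as soon
    # as a child does; returns FAILURE only if every child fails.
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for c in self.children:
            s = c.tick()
            if s != FAILURE:
                return s
        return FAILURE

class Action:
    # Leaf node: wraps a callable returning SUCCESS/FAILURE/RUNNING.
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return self.fn()
```

    A hypothetical "enter the room" task composes directly from these nodes: a Fallback first checks whether the door is already open and otherwise runs a Sequence that opens it and then enters.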

  • 28.
    Colledanchise, Michele
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Almeida, Diogo
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Towards Blended Planning and Acting using Behavior Trees. A Reactive, Safe and Fault Tolerant Approach. Article in journal (Refereed).
  • 29.
    Colledanchise, Michele
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Marzinotto, Alejandro
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Dimarogonas, Dimos V.
    KTH, School of Electrical Engineering (EES), Automatic Control.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    The advantages of using behavior trees in multi-robot systems (2016). In: 47th International Symposium on Robotics, ISR 2016, VDE Verlag GmbH, 2016, p. 23-30. Conference paper (Refereed).
    Abstract [en]

    Multi-robot teams offer possibilities of improved performance and fault tolerance, compared to single-robot solutions. In this paper, we show how to realize those possibilities when starting from a single-robot system controlled by a Behavior Tree (BT). By extending the single-robot BT to a multi-robot BT, we are able to combine the fault-tolerant properties of the BT, in terms of built-in fallbacks, with the fault tolerance inherent in multi-robot approaches, in terms of a faulty robot being replaced by another one. Furthermore, we improve performance by identifying and taking advantage of the opportunities for parallel task execution that are present in the single-robot BT. Analyzing the proposed approach, we present results regarding how mission performance is affected by minor faults (a robot losing one capability) as well as major faults (a robot losing all its capabilities).

  • 30.
    Colledanchise, Michele
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Marzinotto, Alejandro
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Stochastic Behavior Trees for Estimating and Optimizing the Performance of Reactive Plan Executions. Article in journal (Refereed).
  • 31.
    Colledanchise, Michele
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Murray, R. M.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Synthesis of correct-by-construction behavior trees (2017). In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2017, Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 6039-6046, article id 8206502. Conference paper (Refereed).
    Abstract [en]

    In this paper we study the problem of synthesizing correct-by-construction Behavior Trees (BTs) controlling agents in adversarial environments. The proposed approach combines the modularity and reactivity of BTs with the formal guarantees of Linear Temporal Logic (LTL) methods. Given a set of admissible environment specifications, an agent model in the form of a Finite Transition System, and the desired task in the form of an LTL formula, we synthesize, in polynomial time, a BT that is guaranteed to correctly execute the desired task. To illustrate the approach, we present three examples of increasing complexity.

  • 32.
    Colledanchise, Michele
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Murray, Richard M.
    CALTECH, Dept Control & Dynam Syst, Pasadena, CA 91125 USA..
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Synthesis of Correct-by-Construction Behavior Trees (2017). In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) / [ed] Bicchi, A., Okamura, A., IEEE, 2017, p. 6039-6046. Conference paper (Refereed).
    Abstract [en]

    In this paper we study the problem of synthesizing correct-by-construction Behavior Trees (BTs) controlling agents in adversarial environments. The proposed approach combines the modularity and reactivity of BTs with the formal guarantees of Linear Temporal Logic (LTL) methods. Given a set of admissible environment specifications, an agent model in the form of a Finite Transition System, and the desired task in the form of an LTL formula, we synthesize, in polynomial time, a BT that is guaranteed to correctly execute the desired task. To illustrate the approach, we present three examples of increasing complexity.

  • 33.
    Colledanchise, Michele
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Parasuraman, Ramviyas
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Learning of Behavior Trees for Autonomous Agents. Article in journal (Refereed).
  • 34.
    Colledanchise, Michele
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    How Behavior Trees Modularize Hybrid Control Systems and Generalize Sequential Behavior Compositions, the Subsumption Architecture, and Decision Trees (2017). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 33, no 2, p. 372-389. Article in journal (Refereed).
  • 35.
    Cruciani, Silvia
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    In-hand manipulation using three-stages open loop pivoting (2017). Conference paper (Refereed).
    Abstract [en]

    In this paper we propose a method for pivoting an object held by a parallel gripper, without requiring accurate dynamical models or advanced hardware. Our solution uses the motion of the robot arm for generating inertial forces to move the object. It also controls the rotational friction at the pivoting point by commanding a desired distance to the gripper's fingers. This method relies neither on fast and precise tracking systems to obtain the position of the tool, nor on real-time and high-frequency controllable robotic grippers to quickly adjust the finger distance. We demonstrate the efficacy of our method by applying it on a Baxter robot.

  • 36.
    Cruciani, Silvia
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    In-Hand Manipulation Using Three-Stages Open Loop Pivoting (2017). In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) / [ed] Bicchi, A., Okamura, A., IEEE, 2017, p. 1244-1251. Conference paper (Refereed).
    Abstract [en]

    In this paper we propose a method for pivoting an object held by a parallel gripper, without requiring accurate dynamical models or advanced hardware. Our solution uses the motion of the robot arm for generating inertial forces to move the object. It also controls the rotational friction at the pivoting point by commanding a desired distance to the gripper's fingers. This method relies neither on fast and precise tracking systems to obtain the position of the tool, nor on real-time and high-frequency controllable robotic grippers to quickly adjust the finger distance. We demonstrate the efficacy of our method by applying it on a Baxter robot.

  • 37. Ek, C. H.
    et al.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    The importance of structure (2017). In: 15th International Symposium of Robotics Research, 2011, Springer, 2017, p. 111-127. Conference paper (Refereed).
    Abstract [en]

    Many tasks in robotics and computer vision are concerned with inferring a continuous or discrete state variable from observations and measurements from the environment. Due to the high-dimensional nature of the input data, the inference is often cast as a two-stage process: first a low-dimensional feature representation is extracted, on which a learning algorithm is then applied. Due to the significant progress that has been achieved within the field of machine learning over the last decade, focus has been placed on the second stage of the inference process, improving it by exploiting more advanced learning techniques applied to the same (or more of the same) data. We believe that for many scenarios significant strides in performance could be achieved by focusing on representation, rather than aiming to alleviate inconclusive and/or redundant information by exploiting more advanced inference methods. This stems from the notion that, given the “correct” representation, the inference problem becomes easier to solve. In this paper we argue that one important mode of information for many application scenarios is not the actual variation in the data, but rather its higher-order statistics, i.e. the structure of the variations. We exemplify this through a set of applications and show different ways of representing the structure of data.

  • 38.
    Engelhardt, Sara
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Hansson, Emmeli
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Leite, Iolanda
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Better faulty than sorry: Investigating social recovery strategies to minimize the impact of failure in human-robot interaction (2017). In: WCIHAI 2017 Workshop on Conversational Interruptions in Human-Agent Interactions: Proceedings of the first Workshop on Conversational Interruptions in Human-Agent Interactions, co-located with the 17th International Conference on Intelligent Virtual Agents (IVA 2017), Stockholm, Sweden, August 27, 2017, CEUR-WS, 2017, Vol. 1943, p. 19-27. Conference paper (Refereed).
    Abstract [en]

    Failure happens in most social interactions, possibly even more so in interactions between a robot and a human. This paper investigates different failure recovery strategies that robots can employ to minimize the negative effect on people's perception of the robot. A between-subject Wizard-of-Oz experiment with 33 participants was conducted in a scenario where a robot and a human play a collaborative game. The interaction was mainly speech-based, and controlled failures were introduced at specific moments. Three types of recovery strategies were investigated, one in each experimental condition: ignore (the robot ignores that a failure has occurred and moves on with the task), apology (the robot apologizes for failing and moves on) and problem-solving (the robot tries to solve the problem with the help of the human). Our results show that the apology-based strategy scored the lowest on measures such as likeability and perceived intelligence, and that the ignore strategy led to better perceptions of intelligence and animacy than the other recovery strategies.

  • 39. Erkent, Ozgur
    et al.
    Karaoguz, Hakan
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Bozma, H. Isil
    Hierarchically self-organizing visual place memory (2017). In: Advanced Robotics, ISSN 0169-1864, E-ISSN 1568-5535, Vol. 31, no 16, p. 865-879. Article in journal (Refereed).
    Abstract [en]

    A hierarchically organized visual place memory enables a robot to associate with its respective knowledge efficiently. In this paper, we consider how this organization can be done by the robot on its own throughout its operation and introduce an approach that is based on the agglomerative method SLINK. The hierarchy is obtained from a single link cluster analysis that is carried out based on similarity in the appearance space. As such, the robot can incrementally incorporate the knowledge of places into its visual place memory over the long term. The resulting place memory has an order-invariant hierarchy that enables both storage and construction efficiency. Experimental results obtained under the guided operation of the robot demonstrate that the robot is able to organize its place knowledge and relate to it efficiently. This is followed by experimental results under autonomous operation in which the robot evolves its visual place memory completely on its own.
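    Single-link cluster analysis itself admits a compact illustration. The sketch below derives the single-linkage merge order naively with union-find over sorted pairwise distances; SLINK proper computes the same hierarchy in O(n^2) time and O(n) memory via a pointer representation, and the incremental, appearance-space version in the paper is more involved:

```python
import numpy as np

def single_link_merge_order(X):
    """Naive single-linkage agglomeration over row vectors of X.

    Repeatedly merges the two clusters containing the closest pair of
    points, returning the n-1 dendrogram edges as (i, j, distance).
    Illustrative only: SLINK achieves the same hierarchy far more
    efficiently with a pointer representation.
    """
    n = len(X)
    parent = list(range(n))
    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    pairs = sorted((np.linalg.norm(X[i] - X[j]), i, j)
                   for i in range(n) for j in range(i + 1, n))
    merges = []
    for d, i, j in pairs:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            merges.append((i, j, d))  # dendrogram edge at height d
    return merges
```

    Cutting the resulting dendrogram at a chosen height yields the place groupings; in the paper, the distances come from similarity in the appearance space rather than Euclidean geometry.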

  • 40. Evestedt, Niclas
    et al.
    Ward, Erik
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Axehill, Daniel
    Interaction aware trajectory planning for merge scenarios in congested traffic situations (2016). In: 2016 IEEE 19th International Conference on Intelligent Transportation Systems, IEEE, 2016, p. 465-472. Conference paper (Refereed).
    Abstract [en]

    In many traffic situations there are times where interaction with other drivers is necessary and unavoidable in order to safely progress towards an intended destination. This is especially true for merge manoeuvres into dense traffic, where drivers sometimes must be somewhat aggressive and show the intention of merging in order to interact with the other driver and make the driver open the gap needed to execute the manoeuvre safely. Many motion planning frameworks for autonomous vehicles adopt a reactive approach where simple models of other traffic participants are used, and therefore need to adhere to large margins in order to behave safely. However, the large margins needed can sometimes get the system stuck in congested traffic where time gaps between vehicles are too small. In other situations, such as a highway merge, it can be significantly more dangerous to stop on the entrance ramp if the gaps are found to be too small than to make a slightly more aggressive manoeuvre and let the driver behind open the gap needed. To remedy this problem, this work uses the Intelligent Driver Model (IDM) to explicitly model the interaction of other drivers and evaluates the risk by their required deceleration, in a manner similar to the Minimizing Overall Braking Induced by Lane changes (MOBIL) model that has been used in large-scale traffic simulations before. This allows the algorithm to evaluate the effect of our own trajectory plans on other drivers by simulating the nearby traffic situation. Finding a globally optimal solution is often intractable in these situations, so instead a large set of candidate trajectories is generated and evaluated against the traffic scene by forward simulations of other traffic participants. By discretization, and by using an efficient trajectory generator together with efficient modelling of the traffic scene, real-time demands can be met.
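    The IDM itself has a standard closed form for a following vehicle's acceleration, which is what makes forward simulation of nearby drivers cheap. The parameter values below are typical textbook defaults, not those used in the paper:

```python
import math

def idm_accel(v, v_lead, gap, v0=30.0, T=1.5, a=1.0, b=2.0, s0=2.0, delta=4):
    """Intelligent Driver Model acceleration.

    v: own speed, v_lead: leader speed, gap: bumper-to-bumper distance (m).
    v0: desired speed, T: desired time headway, a: maximum acceleration,
    b: comfortable deceleration, s0: minimum jam distance.
    """
    dv = v - v_lead  # positive when closing in on the leader
    # Desired dynamic gap: jam distance + headway term + braking term.
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a * b)))
    return a * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)
```

    A MOBIL-style risk check along the lines of the paper compares the deceleration a candidate merge would impose on the follower against a comfortable threshold such as b.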

  • 41.
    Ghadirzadeh, Ali
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Maki, Atsuto
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Deep predictive policy training using reinforcement learning2017In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2017, Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 2351-2358, article id 8206046Conference paper (Refereed)
    Abstract [en]

    Skilled robot task learning is best implemented by predictive action policies due to the inherent latency of sensorimotor processes. However, training such predictive policies is challenging as it involves finding a trajectory of motor activations for the full duration of the action. We propose a data-efficient deep predictive policy training (DPPT) framework with a deep neural network policy architecture which maps an image observation to a sequence of motor activations. The architecture consists of three sub-networks referred to as the perception, policy and behavior super-layers. The perception and behavior super-layers force an abstraction of visual and motor data trained with synthetic and simulated training samples, respectively. The policy super-layer is a small sub-network with fewer parameters that maps data in-between the abstracted manifolds. It is trained for each task using methods for policy search reinforcement learning. We demonstrate the suitability of the proposed architecture and learning framework by training predictive policies for skilled object grasping and ball throwing on a PR2 robot. The effectiveness of the method is illustrated by the fact that these tasks are trained using only about 180 real robot attempts with qualitative terminal rewards.

  • 42.
    Göbelbecker, Moritz
    et al.
    University of Freiburg.
    Hanheide, Marc
    University of Lincoln.
    Gretton, Charles
    University of Birmingham.
    Hawes, Nick
    University of Birmingham.
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Zender, Hendrik
    DFKI, Saarbruecken.
    Dora: A Robot that Plans and Acts Under Uncertainty2012In: Proceedings of the 35th German Conference on Artificial Intelligence (KI’12), 2012Conference paper (Refereed)
    Abstract [en]

    Dealing with uncertainty is one of the major challenges when constructing autonomous mobile robots. The CogX project addressed key aspects of this by developing and implementing mechanisms for self-understanding and self-extension -- i.e. awareness of gaps in knowledge, and the ability to reason and act to fill those gaps. We discuss Dora, a showcase outcome of that project: a robot that can perform a variety of search tasks in unexplored environments by exploiting probabilistic knowledge representations while retaining efficiency through a fast planning system.

  • 43.
    Güler, Püren
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Learning Object Properties From Manipulation for Manipulation2017Doctoral thesis, monograph (Other academic)
    Abstract [en]

    The world contains objects with various properties - rigid, granular, liquid, elastic or plastic. As humans, while interacting with objects, we plan our manipulation by considering their properties. For instance, while holding a rigid object such as a brick, we adapt our grasp based on its centre of mass so as not to drop it. On the other hand, while manipulating a deformable object, we may consider properties in addition to the centre of mass, such as elasticity and brittleness, for grasp stability. Therefore, knowing object properties is an integral part of skilled manipulation of objects.

    For manipulating objects skillfully, robots should be able to predict object properties as humans do. To predict the properties, interactions with objects are essential. These interactions give rise to distinct sensory signals that contain information about the object properties. The signals coming from a single sensory modality may give ambiguous information or noisy measurements. Hence, by integrating multi-sensory modalities (vision, touch, audio or proprioception), a manipulated object can be observed from different aspects, which can decrease the uncertainty in the observed properties. By analyzing the perceived sensory signals, a robot reasons about the object properties and adjusts its manipulation based on this information. During this adjustment, the robot can make use of a simulation model to predict the object behavior and plan the next action. For instance, if an object is assumed to be rigid before interaction but exhibits deformable behavior after interaction, an internal simulation model can be used to predict the load force exerted on the object, so that appropriate manipulation can be planned in the next action. Thus, learning about object properties can be defined as an active procedure: the robot explores object properties actively and purposefully by interacting with the object, and adjusts its manipulation based on the sensory information and the object behavior predicted through an internal simulation model.

    This thesis investigates the mechanisms mentioned above for learning object properties: (i) multi-sensory information, (ii) simulation and (iii) active exploration. In particular, we investigate these three mechanisms as different and complementary ways of extracting a certain object property, the deformability of objects. Firstly, we investigate the feasibility of using visual and/or tactile data to classify the content of a container based on the deformation observed when a robotic hand squeezes and deforms the container. According to our results, both visual and tactile sensory data individually give high accuracy rates when classifying the content type based on the deformation. Next, we investigate the use of a simulation model to estimate the deformability of an object as revealed through manipulation. The proposed method accurately identifies the deformability of the test objects in synthetic and real-world data. Finally, we investigate the integration of the deformation simulation into a robotic active perception framework to extract the heterogeneous deformability properties of an environment through physical interactions. In experiments on real-world objects, we illustrate that the active perception framework can map the heterogeneous deformability properties of a surface.

  • 44.
    Güler, Püren
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Pieropan, A.
    Ishikawa, M.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Estimating deformability of objects using meshless shape matching2017In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2017, Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 5941-5948, article id 8206489Conference paper (Refereed)
    Abstract [en]

    Humans interact with deformable objects on a daily basis, but this still represents a challenge for robots. To enable manipulation of and interaction with deformable objects, robots need to be able to extract and learn the deformability of objects both prior to and during the interaction. Physics-based models are commonly used to predict the physical properties of deformable objects and simulate their deformation accurately. The most popular simulation techniques are force-based models that need force measurements. In this paper, we explore the applicability of a geometry-based simulation method called meshless shape matching (MSM) for estimating the deformability of objects. The main advantages of MSM are its controllability and computational efficiency, which make it popular in computer graphics for simulating complex interactions of multiple objects at the same time. Additionally, a useful feature of MSM that differentiates it from other physics-based simulations is its independence from force measurements, which may not be available to a robotic framework lacking force/torque sensors. In this work, we design a method to estimate deformability based on certain properties, such as volume conservation. Using the finite element method (FEM), we create ground-truth deformability for various settings to evaluate our method. The experimental evaluation shows that our approach is able to accurately identify the deformability of test objects, supporting the value of MSM for robotic applications.

  • 45.
    Güler, Püren
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Pieropan, Alessandro
    Univrses, Stockholm, Sweden.;Univ Tokyo, Ishikawa Watanabe Lab, Tokyo, Japan..
    Ishikawa, Masatoshi
    Univ Tokyo, Ishikawa Watanabe Lab, Tokyo, Japan..
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH Royal Inst Technol, Robot Percept & Learning Lab, Sch Comp Sci & Commun, Stockholm, Sweden..
    Estimating deformability of objects using meshless shape matching2017In: 2017 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS) / [ed] Bicchi, A Okamura, A, IEEE , 2017, p. 5941-5948Conference paper (Refereed)
    Abstract [en]

    Humans interact with deformable objects on a daily basis, but this still represents a challenge for robots. To enable manipulation of and interaction with deformable objects, robots need to be able to extract and learn the deformability of objects both prior to and during the interaction. Physics-based models are commonly used to predict the physical properties of deformable objects and simulate their deformation accurately. The most popular simulation techniques are force-based models that need force measurements. In this paper, we explore the applicability of a geometry-based simulation method called meshless shape matching (MSM) for estimating the deformability of objects. The main advantages of MSM are its controllability and computational efficiency, which make it popular in computer graphics for simulating complex interactions of multiple objects at the same time. Additionally, a useful feature of MSM that differentiates it from other physics-based simulations is its independence from force measurements, which may not be available to a robotic framework lacking force/torque sensors. In this work, we design a method to estimate deformability based on certain properties, such as volume conservation. Using the finite element method (FEM), we create ground-truth deformability for various settings to evaluate our method. The experimental evaluation shows that our approach is able to accurately identify the deformability of test objects, supporting the value of MSM for robotic applications.
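    The core of meshless shape matching is a best-fit rigid transform between rest and deformed point clouds, obtained via polar decomposition; the deformability parameters estimated in the paper control how strongly points are pulled toward the resulting goal positions. A minimal single-cluster sketch of that matching step is shown below; the function name and the equal-mass NumPy formulation are assumptions of this sketch, not the paper's implementation.

```python
import numpy as np

def shape_match_goals(rest, deformed):
    """One meshless-shape-matching step (Mueller et al. style):
    find the best rigid transform of `rest` onto `deformed` and return
    the per-point goal positions used to pull the cluster back into shape.
    Assumes equal particle masses; `rest` and `deformed` are (n, d) arrays.
    """
    c0 = rest.mean(axis=0)          # rest-state centre of mass
    c = deformed.mean(axis=0)       # current centre of mass
    P = deformed - c
    Q = rest - c0
    A = P.T @ Q                     # covariance of current vs rest shape
    U, _, Vt = np.linalg.svd(A)
    R = U @ Vt                      # rotation from polar decomposition
    if np.linalg.det(R) < 0:        # guard against reflections
        U[:, -1] *= -1
        R = U @ Vt
    goals = (R @ Q.T).T + c         # goal positions of the matched shape
    return goals, R
```

    For a purely rigid motion the goals coincide with the deformed positions; the residual between the two is what a stiffness (deformability) parameter would scale when integrating the simulation forward.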

  • 46.
    Hang, Kaiyu
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Stork, Johannes A.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Pollard, Nancy S.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    A Framework for Optimal Grasp Contact Planning2017In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 2, no 2, p. 704-711Article in journal (Refereed)
    Abstract [en]

    We consider the problem of finding grasp contacts that are optimal under a given grasp quality function on arbitrary objects. Our approach formulates a framework for contact-level grasping as a path finding problem in the space of supercontact grasps. The initial supercontact grasp contains all grasps, and in each step along a path grasps are removed. For this, we introduce and formally characterize search space structure and cost functions under which minimal-cost paths correspond to optimal grasps. Our formulation avoids expensive exhaustive search and reduces computational cost by several orders of magnitude. We present admissible heuristic functions and exploit approximate heuristic search to further reduce the computational cost while maintaining bounded suboptimality for resulting grasps. We exemplify our formulation with point-contact grasping, for which we define domain-specific heuristics and demonstrate optimality and bounded suboptimality by comparing against exhaustive and uniform cost search on example objects. Furthermore, we explain how to restrict the search graph to satisfy grasp constraints for modeling hand kinematics. We also analyze our algorithm empirically in terms of created and visited search states and the resultant effective branching factor.
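    The optimality and bounded-suboptimality guarantees this abstract invokes come from standard heuristic search: with an admissible heuristic, A* returns a minimal-cost path. A generic sketch of that machinery follows; the graph interface is illustrative only, since the paper's search runs over supercontact grasps rather than an explicit toy graph.

```python
import heapq

def astar(start, is_goal, successors, heuristic):
    """Generic A* search. `successors(n)` yields (neighbour, step_cost)
    pairs; `heuristic(n)` must be admissible (never overestimate the
    remaining cost) for the returned path cost to be optimal.
    Returns (cost, path) or None if no goal is reachable."""
    frontier = [(heuristic(start), 0.0, start, [start])]
    best_g = {start: 0.0}                 # cheapest known cost to each node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if is_goal(node):
            return g, path
        for nxt, cost in successors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier,
                               (g2 + heuristic(nxt), g2, nxt, [*path, nxt]))
    return None
```

    Inflating an admissible heuristic by a factor w > 1 yields the bounded-suboptimal variant the abstract mentions: the returned cost is then at most w times optimal.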

  • 47.
    Haustein, Joshua
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hang, Kaiyu
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Integrating motion and hierarchical fingertip grasp planning2017In: 2017 IEEE International Conference on Robotics and Automation (ICRA), Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 3439-3446, article id 7989392Conference paper (Refereed)
    Abstract [en]

    In this work, we present an algorithm that simultaneously searches for a high quality fingertip grasp and a collision-free path for a robot hand-arm system to achieve it. The algorithm combines a bidirectional sampling-based motion planning approach with a hierarchical contact optimization process. Rather than tackling these problems in a decoupled manner, the grasp optimization is guided by the proximity to collision-free configurations explored by the motion planner. We implemented the algorithm for a 13-DoF manipulator and show that it is capable of efficiently planning reachable high quality grasps in cluttered environments. Further, we show that our algorithm outperforms a decoupled integration in terms of planning runtime.

  • 48. Hawasly, M.
    et al.
    Pokorny, Florian T.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Ramamoorthy, S.
    Multi-scale activity estimation with spatial abstractions2017In: 3rd International Conference on Geometric Science of Information, GSI 2017, Springer, 2017, Vol. 10589, p. 273-281Conference paper (Refereed)
    Abstract [en]

    Estimation and forecasting of dynamic state are fundamental to the design of autonomous systems such as intelligent robots. State-of-the-art algorithms, such as the particle filter, face computational limitations when needing to maintain beliefs over a hypothesis space that is made large by the dynamic nature of the environment. We propose an algorithm that utilises a hierarchy of such filters, exploiting a filtration arising from the geometry of the underlying hypothesis space. In addition to computational savings, such a method can accommodate the availability of evidence at varying degrees of coarseness. We show, using synthetic trajectory datasets, that our method achieves a better normalised error in prediction and better time to convergence to a true class when compared against baselines that do not similarly exploit geometric structure.
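    The base case of the hierarchy described above is an ordinary bootstrap particle filter; the paper stacks such filters over a coarse-to-fine filtration of the hypothesis space. A single-resolution 1-D sketch is given below, with Gaussian noise models and parameter values that are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observation,
                         transition_std=0.5, obs_std=1.0):
    """One bootstrap-particle-filter update: predict, weight, resample.
    `particles` is a 1-D array of state hypotheses with matching `weights`."""
    # predict: propagate each hypothesis through a random-walk motion model
    particles = particles + rng.normal(0.0, transition_std,
                                       size=particles.shape)
    # weight: Gaussian likelihood of the observation under each particle
    likelihood = np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
    weights = weights * likelihood
    weights = weights / weights.sum()
    # resample: concentrate particles on high-probability regions
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

    The computational limitation the abstract points to is visible here: the particle count needed to cover a large hypothesis space grows quickly, which is what filtering at multiple spatial abstractions mitigates.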

  • 49. Hawes, N
    et al.
    Ambrus, Rares
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Bore, Nils
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Hanheide, Marc
    et al.,
    The STRANDS Project: Long-Term Autonomy in Everyday Environments2017In: IEEE robotics & automation magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 24, no 3, p. 146-156Article in journal (Refereed)
  • 50.
    Hlynur Davíð, Hlynsson
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Predicting expert moves in the game of Othello using fully convolutional neural networks2017Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Careful feature engineering is an important factor in artificial intelligence for games. In this thesis I investigate the benefit of delegating the engineering effort to the model rather than the features, using the board game Othello as a case study. Convolutional neural networks of varying depths are trained to play in a human-like manner by learning to predict actions from tournament games. My main result is that, using a raw board state representation, a network can be trained to achieve 57.4% prediction accuracy on a test set, surpassing the previous state of the art on this task. The accuracy is increased to 58.3% by adding several common handcrafted features as input to the network, but at the cost of more than half again as much computation time.
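    A "raw board state representation" for a convolutional network is typically a stack of binary planes. The sketch below shows one plausible encoding; the exact plane layout used in the thesis is not specified here, so this is an assumption for illustration.

```python
import numpy as np

def encode_board(board, player):
    """Encode an 8x8 Othello board as input planes for a convolutional
    network: one plane marking the side to move, one for the opponent.
    `board` holds 0 (empty), 1 (black), -1 (white); `player` is +1 or -1.
    This two-plane layout is an illustrative choice, not the thesis's
    exact feature set."""
    board = np.asarray(board)
    own = (board == player).astype(np.float32)
    opp = (board == -player).astype(np.float32)
    return np.stack([own, opp])          # shape (2, 8, 8)
```

    Encoding from the perspective of the side to move lets a single network predict moves for both colours, which is a common design choice in move-prediction models.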
