  • 1.
    Almeida, Diogo
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Dual-Arm Robotic Manipulation under Uncertainties and Task-Based Redundancy, 2019. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Robotic manipulators are mostly employed in industrial environments, where their tasks can be prescribed with little to no uncertainty. This is possible in scenarios where the deployment time of robot workcells is not prohibitive, such as in the automotive industry. In other contexts, however, the time cost of setting up a classical robotic automation workcell is often prohibitive. This is the case with cellphone manufacturing, for example, which is currently mostly executed by human workers. Robotic automation is nevertheless desirable in these human-centric environments, as a robot can automate the most tedious parts of an assembly. Deploying robots in these environments, however, requires an ability to deal with uncertainties and to robustly execute any given task. In this thesis, we discuss two topics related to autonomous robotic manipulation. First, we address parametric uncertainties in manipulation tasks, such as the location of contacts during the execution of an assembly. We propose and experimentally evaluate two methods that rely on force and torque measurements to produce estimates of task-related uncertainties: a method for dexterous manipulation under uncertainties which relies on a compliant rotational degree of freedom at the robot's gripper grasp point and exploits contact with an external surface, and a cooperative manipulation system which is able to identify the kinematics of a two degrees of freedom mechanism. Then, we consider redundancies in dual-arm robotic manipulation. Dual-armed robots offer a large degree of redundancy which can be exploited to ensure a more robust task execution. When executing an assembly task, for instance, robots can freely change the location of the assembly in their workspace without affecting the task execution. We discuss methods that explore these types of redundancies in relative motion tasks in the form of asymmetries in their execution. Finally, we approach the converse problem by presenting a system which is able to balance measured forces and torques at its end-effectors by leveraging relative motion between them, while grasping a rigid tray. This is achieved through discrete sliding of the grasp points, which constitutes a novel application of bimanual dexterous manipulation.

  • 2.
    Almeida, Diogo
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH.
    Ambrus, Rares
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Caccamo, Sergio
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Chen, Xi
    KTH.
    Cruciani, Silvia
    Pinto Basto De Carvalho, Joao F
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Haustein, Joshua
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Marzinotto, Alejandro
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Vina, Francisco
    KTH.
    Karayiannidis, Yannis
    KTH.
    Ögren, Petter
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Team KTH’s Picking Solution for the Amazon Picking Challenge 2016, 2017. In: Warehouse Picking Automation Workshop 2017: Solutions, Experience, Learnings and Outlook of the Amazon Robotics Challenge, 2017. Conference paper (Other (popular science, discussion, etc.))
    Abstract [en]

    In this work we summarize the solution developed by Team KTH for the Amazon Picking Challenge 2016 in Leipzig, Germany. The competition simulated a warehouse automation scenario and was divided into two tasks: a picking task, where a robot picks items from a shelf and places them in a tote, and a stowing task, the inverse task, where the robot picks items from a tote and places them in a shelf. We describe our approach to the problem starting from a high-level overview of our system and later delving into details of our perception pipeline and our strategy for manipulation and grasping. The solution was implemented using a Baxter robot equipped with additional sensors.

  • 3.
    Almeida, Diogo
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Ataer-Cansizoglu, Esra
    Wayfair, Boston, MA 02116, USA.
    Corcodel, Radu
    Mitsubishi Electric Research Labs (MERL), Cambridge, MA 02139, USA.
    Detection, Tracking and 3D Modeling of Objects with Sparse RGB-D SLAM and Interactive Perception, 2019. In: IEEE-RAS International Conference on Humanoid Robots (Humanoids), 2019. Conference paper (Refereed)
    Abstract [en]

    We present an interactive perception system that enables an autonomous agent to deliberately interact with its environment and produce 3D object models. Our system verifies object hypotheses through interaction and simultaneously maintains 3D SLAM maps for each rigidly moving object hypothesis in the scene. We rely on depth-based segmentation and a multigroup registration scheme to classify features into various object maps. Our main contribution lies in the employment of a novel segment classification scheme that allows the system to handle incorrect object hypotheses, common in cluttered environments due to touching objects or occlusion. We start with a single map and initiate further object maps based on the outcome of depth segment classification. For each existing map, we select a segment to interact with and execute a manipulation primitive with the goal of disturbing it. If the resulting set of depth segments has at least one segment that did not follow the dominant motion pattern of its respective map, we split the map, thus yielding updated object hypotheses. We show qualitative results with a Fetch manipulator and objects of various shapes, which showcase the viability of the method for identifying and modelling multiple objects through repeated interactions.

  • 4.
    Almeida, Diogo
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Karayiannidis, Yiannis
    A Lyapunov-Based Approach to Exploit Asymmetries in Robotic Dual-Arm Task Resolution, 2019. In: 58th IEEE Conference on Decision and Control (CDC), 2019. Conference paper (Refereed)
    Abstract [en]

    Dual-arm manipulation tasks can be prescribed to a robotic system in terms of desired absolute and relative motion of the robot’s end-effectors. These can represent, e.g., jointly carrying a rigid object or performing an assembly task. When both types of motion are to be executed concurrently, the symmetric distribution of the relative motion between arms prevents task conflicts. Conversely, an asymmetric solution to the relative motion task will result in conflicts with the absolute task. In this work, we address the problem of designing a control law for the absolute motion task together with updating the distribution of the relative task among arms. Through a set of numerical results, we contrast our approach with the classical symmetric distribution of the relative motion task to illustrate the advantages of our method.
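
    The absolute/relative decomposition in this abstract can be made concrete in a few lines. Below is a minimal sketch (not the authors' implementation) of how an asymmetry parameter eta redistributes a relative-motion task between two end-effectors, and why eta != 0.5 conflicts with the absolute task; the function name and numeric values are illustrative.

```python
import numpy as np

def arm_velocities(v_abs, v_rel, eta):
    """Distribute a relative task v_rel = v2 - v1 between two arms.

    eta = 0.5 is the classical symmetric distribution; other values shift
    more of the relative motion onto one arm (asymmetric execution)."""
    v1 = v_abs - eta * v_rel
    v2 = v_abs + (1.0 - eta) * v_rel
    return v1, v2

v_abs = np.array([0.05, 0.0, 0.0])  # desired absolute (mid-point) velocity
v_rel = np.array([0.0, 0.02, 0.0])  # desired relative velocity

for eta in (0.5, 0.0, 1.0):
    v1, v2 = arm_velocities(v_abs, v_rel, eta)
    assert np.allclose(v2 - v1, v_rel)      # relative task satisfied for any eta
    midpoint_error = (v1 + v2) / 2 - v_abs  # nonzero when eta != 0.5:
    print(eta, midpoint_error)              # the task conflict the paper addresses
```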

  • 5.
    Almeida, Diogo
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Karayiannidis, Yiannis
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Asymmetric Dual-Arm Task Execution using an Extended Relative Jacobian, 2019. In: The International Symposium on Robotics Research, 2019. Conference paper (Refereed)
    Abstract [en]

    Coordinated dual-arm manipulation tasks can be broadly characterized as possessing absolute and relative motion components. Relative motion tasks, in particular, are inherently redundant in the way they can be distributed between end-effectors. In this work, we analyse cooperative manipulation in terms of the asymmetric resolution of relative motion tasks. We discuss how existing approaches enable the asymmetric execution of a relative motion task, and show how an asymmetric relative motion space can be defined. We leverage this result to propose an extended relative Jacobian to model the cooperative system, which allows a user to set a concrete degree of asymmetry in the task execution. This is achieved without the need for prescribing an absolute motion target. Instead, the absolute motion remains available as a functional redundancy to the system. We illustrate the properties of our proposed Jacobian through numerical simulations of a novel differential Inverse Kinematics algorithm.
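
    For readers unfamiliar with relative Jacobians, the sketch below shows the classical (symmetric) construction that the paper extends: stacking the two arm Jacobians so that one damped-least-squares step realizes a desired relative velocity. This is a hedged illustration of the standard formulation, not the extended Jacobian proposed in the paper.

```python
import numpy as np

def relative_jacobian(J1, J2):
    # Classical relative Jacobian: v_rel = v2 - v1 = J_r @ [q1_dot; q2_dot].
    # The paper extends this construction with a tunable degree of asymmetry.
    return np.hstack((-J1, J2))

def dls_step(J, v, damping=1e-2):
    # Damped least-squares differential IK step (singularity robust).
    return J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(J.shape[0]), v)

rng = np.random.default_rng(0)
J1 = rng.normal(size=(3, 7))        # translational Jacobians of two 7-DOF arms
J2 = rng.normal(size=(3, 7))
v_rel = np.array([0.0, 0.01, 0.0])  # desired relative end-effector velocity

q_dot = dls_step(relative_jacobian(J1, J2), v_rel)
print(q_dot.shape)                  # (14,): joint velocities for both arms
```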

  • 6.
    Almeida, Diogo
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Karayiannidis, Yiannis
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL. Dept. of Electrical Eng., Chalmers University of Technology.
    Cooperative Manipulation and Identification of a 2-DOF Articulated Object by a Dual-Arm Robot, 2018. In: 2018 IEEE International Conference on Robotics and Automation (ICRA) / [ed] IEEE, 2018, p. 5445-5451. Conference paper (Refereed)
    Abstract [en]

    In this work, we address the dual-arm manipulation of a two degrees-of-freedom articulated object that consists of two rigid links. This can include a linkage constrained along two motion directions, or two objects in contact, where the contact imposes motion constraints. We formulate the problem as a cooperative task, which allows the employment of coordinated task space frameworks, thus enabling redundancy exploitation by adjusting how the task is shared by the robot arms. In addition, we propose a method that can estimate the joint location and the direction of the degrees of freedom, based on the contact forces and the motion constraints imposed by the object. Experimental results demonstrate the performance of the system in its ability to estimate the two degrees of freedom independently or simultaneously.

  • 7.
    Almeida, Diogo
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Karayiannidis, Yiannis
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL. Chalmers University of Technology.
    Folding Assembly by Means of Dual-Arm Robotic Manipulation, 2016. In: 2016 IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2016, p. 3987-3993. Conference paper (Refereed)
    Abstract [en]

    In this paper, we consider folding assembly as an assembly primitive suitable for dual-arm robotic assembly that can be integrated in a higher-level assembly strategy. The system composed of two pieces in contact is modelled as an articulated object, connected by a prismatic-revolute joint. Different grasping scenarios were considered in order to model the system, and a simple controller based on feedback linearisation is proposed, using force/torque measurements to compute the contact point kinematics. The folding assembly controller has been experimentally tested with two sample parts, in order to showcase folding assembly as a viable assembly primitive.

  • 8.
    Antonova, Rika
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Kokic, Mia
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Stork, Johannes A.
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Global Search with Bernoulli Alternation Kernel for Task-oriented Grasping Informed by Simulation, 2018. In: Proceedings of The 2nd Conference on Robot Learning, PMLR 87, 2018, p. 641-650. Conference paper (Refereed)
    Abstract [en]

    We develop an approach that benefits from large simulated datasets and takes full advantage of the limited online data that is most relevant. We propose a variant of Bayesian optimization that alternates between using informed and uninformed kernels. With this Bernoulli Alternation Kernel we ensure that discrepancies between simulation and reality do not hinder adapting robot control policies online. The proposed approach is applied to a challenging real-world problem of task-oriented grasping with novel objects. Our further contribution is a neural network architecture and training pipeline that use experience from grasping objects in simulation to learn grasp stability scores. We learn task scores from a labeled dataset with a convolutional network, which is used to construct an informed kernel for our variant of Bayesian optimization. Experiments on an ABB Yumi robot with real sensor data demonstrate success of our approach, despite the challenge of fulfilling task requirements and high uncertainty over physical properties of objects.
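
    The alternation idea is easy to prototype. The sketch below is an illustrative reading of the abstract, not the authors' code: at each iteration a Bernoulli draw decides whether the Gaussian-process surrogate uses an uninformed RBF kernel or an "informed" kernel computed in a feature space phi that would, in the paper, come from a simulation-trained network; phi here is a stand-in.

```python
import numpy as np

def rbf(A, B, ell=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def gp_mean(Xtr, ytr, Xte, kernel, noise=1e-3):
    # Gaussian-process posterior mean, used as the surrogate in the BO loop.
    K = kernel(Xtr, Xtr) + noise * np.eye(len(Xtr))
    return kernel(Xte, Xtr) @ np.linalg.solve(K, ytr)

phi = lambda X: np.tanh(X @ np.array([[1.0], [0.5]]))  # stand-in learned features
informed = lambda A, B: rbf(phi(A), phi(B))            # kernel in feature space

rng = np.random.default_rng(1)
Xtr = rng.uniform(-1, 1, (10, 2))
ytr = rng.normal(size=10)
Xte = rng.uniform(-1, 1, (5, 2))

p_informed = 0.7  # Bernoulli parameter of the alternation
for it in range(3):
    kern = informed if rng.random() < p_informed else rbf  # the alternation step
    print(it, gp_mean(Xtr, ytr, Xte, kern).round(3))
```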

  • 9.
    Arnekvist, Isac
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Stork, Johannes A.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    VPE: Variational policy embedding for transfer reinforcement learning, 2019. Conference paper (Refereed)
  • 10.
    Baldassarre, Federico
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Azizpour, Hossein
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Explainability Techniques for Graph Convolutional Networks, 2019. Conference paper (Refereed)
    Abstract [en]

    Graph Networks are used to make decisions in potentially complex scenarios but it is usually not obvious how or why they made them. In this work, we study the explainability of Graph Network decisions using two main classes of techniques, gradient-based and decomposition-based, on a toy dataset and a chemistry task. Our study sets the ground for future development as well as application to real-world problems.

  • 11.
    Barbosa, Fernando S.
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Duberg, Daniel
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Jensfelt, Patric
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Tumova, Jana
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Guiding Autonomous Exploration with Signal Temporal Logic, 2019. In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no. 4, p. 3332-3339. Article in journal (Refereed)
    Abstract [en]

    Algorithms for autonomous robotic exploration usually focus on optimizing time and coverage, often in a greedy fashion. However, obstacle inflation is conservative and might limit mapping capabilities and even prevent the robot from moving through narrow, important places. This letter proposes a method to influence the manner the robot moves in the environment by taking into consideration a user-defined spatial preference formulated in a fragment of signal temporal logic (STL). We propose to guide the motion planning toward minimizing the violation of such preference through a cost function that integrates the quantitative semantics, i.e., robustness of STL. To demonstrate the effectiveness of the proposed approach, we integrate it into the autonomous exploration planner (AEP). Results from simulations and real-world experiments are presented, highlighting the benefits of our approach.
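
    As a concrete illustration of using STL robustness as a cost, the sketch below scores trajectories against the preference "always stay at least d_min away from a region"; the violation term max(0, -rho) is one simple way to fold the quantitative semantics into a planner's objective. This is a hedged toy example, not the AEP integration from the letter.

```python
import numpy as np

def rho_always_greater(signal, c):
    # Robustness of G (s > c): the worst-case signed margin along the signal.
    return np.min(signal - c)

def exploration_cost(traj, region_center, d_min, w=10.0):
    length = np.linalg.norm(np.diff(traj, axis=0), axis=1).sum()
    rho = rho_always_greater(np.linalg.norm(traj - region_center, axis=1), d_min)
    return length + w * max(0.0, -rho)  # penalize violating the preference

safe = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.5]])
risky = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 2.5]])
center = np.array([1.0, 0.0])

for traj in (safe, risky):
    print(exploration_cost(traj, center, d_min=0.5))  # risky pays a penalty
```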

  • 12.
    Barbosa, Fernando S.
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Lindemann, Lars
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
    Dimarogonas, Dimos V.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control).
    Tumova, Jana
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Integrated motion planning and control under metric interval temporal logic specifications, 2019. In: 2019 18th European Control Conference, ECC 2019, Institute of Electrical and Electronics Engineers (IEEE), 2019, p. 2042-2049, article id 8795925. Conference paper (Refereed)
    Abstract [en]

    This paper proposes an approach that combines motion planning and hybrid feedback control design in order to find and follow trajectories fulfilling a given complex mission involving time constraints. We use Metric Interval Temporal Logic (MITL) as a rich and rigorous formalism to specify such missions. The solution builds on three main steps: (i) using sampling-based motion planning methods and the untimed version of the mission specification in the form of Zone automaton, we find a sequence of waypoints in the workspace; (ii) based on the clock zones from the satisfying run on the Zone automaton, we compute time-stamps at which these waypoints should be reached; and (iii) to control the system to connect two waypoints in the desired time, we design a low-level feedback controller leveraging Time-varying Control Barrier Functions. Illustrative simulation results are included.

  • 13.
    Billard, Aude
    et al.
    Ecole Polytech Fed Lausanne, Learning Algorithms & Syst Lab, Lausanne, Switzerland.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Trends and challenges in robot manipulation, 2019. In: Science, ISSN 0036-8075, E-ISSN 1095-9203, Vol. 364, no. 6446, p. 1149+. Article, review/survey (Refereed)
    Abstract [en]

    Dexterous manipulation is one of the primary goals in robotics. Robots with this capability could sort and package objects, chop vegetables, and fold clothes. As robots come to work side by side with humans, they must also become human-aware. Over the past decade, research has made strides toward these goals. Progress has come from advances in visual and haptic perception and in mechanics in the form of soft actuators that offer a natural compliance. Most notably, immense progress in machine learning has been leveraged to encapsulate models of uncertainty and to support improvements in adaptive and robust control. Open questions remain in terms of how to enable robots to deal with the most unpredictable agent of all, the human.

  • 14.
    Björklund, Linnea
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Knock on Wood: Does Material Choice Change the Social Perception of Robots? 2018. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This paper aims to understand whether there is a difference in how socially interactive robots are perceived based on the material they are constructed out of. Two studies to that end were performed: a pilot in a live setting and a main one online. Participants were asked to rate three versions of the same robot design, one built out of wood, one out of plastic, and one covered in fur. This was then used in two studies to ascertain the participants' perception of competence, warmth, and discomfort and the differences between the three materials. Statistically significant differences were found between the materials regarding the perception of warmth and discomfort.

  • 15.
    Blom, Fredrik
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Unsupervised Feature Extraction of Clothing Using Deep Convolutional Variational Autoencoders, 2018. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    As online retail continues to grow, large amounts of valuable data, such as transaction and search history and, specifically for fashion retail, similarly structured images of clothing, are generated. By using unsupervised learning, it is possible to tap into this almost unlimited supply of data. This thesis set out to determine to what extent generative models – in particular, deep convolutional variational autoencoders – can be used to automatically extract representative features from images of clothing in a completely unsupervised manner. In reviewing variations of the autoencoder, both in terms of reconstruction quality and the ability to generate new realistic samples, results suggest that there exists an optimal size of the latent vector in relation to the image data complexity. Furthermore, by weighting the latent loss and generation loss in the loss function, it was possible to disentangle the learned features such that each feature captured a unique defining characteristic of clothing items (here t-shirts and tops).
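
    The loss weighting mentioned at the end of the abstract corresponds to scaling the latent (KL) term against the generation (reconstruction) term, in the spirit of a beta-VAE. A minimal PyTorch sketch follows; the function name and the beta value are illustrative, not taken from the thesis.

```python
import torch
import torch.nn.functional as F

def weighted_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    # Generation (reconstruction) term.
    recon = F.mse_loss(x_recon, x, reduction="sum")
    # Latent term: KL(q(z|x) || N(0, I)) for a diagonal Gaussian encoder.
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
    # beta > 1 pressures the latent code, which tends to disentangle features.
    return recon + beta * kl
```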

  • 16.
    Bore, Nils
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Ekekrantz, Johan
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Jensfelt, Patric
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Folkesson, John
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Detection and Tracking of General Movable Objects in Large Three-Dimensional Maps, 2019. In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 35, no. 1, p. 231-247. Article in journal (Refereed)
    Abstract [en]

    This paper studies the problem of detection and tracking of general objects with semistatic dynamics observed by a mobile robot moving in a large environment. A key problem is that due to the environment scale, the robot can only observe a subset of the objects at any given time. Since some time passes between observations of objects in different places, the objects might be moved when the robot is not there. We propose a model for this movement in which the objects typically only move locally, but with some small probability they jump longer distances through what we call global motion. For filtering, we decompose the posterior over local and global movements into two linked processes. The posterior over the global movements and measurement associations is sampled, while we track the local movement analytically using Kalman filters. This novel filter is evaluated on point cloud data gathered autonomously by a mobile robot over an extended period of time. We show that tracking jumping objects is feasible, and that the proposed probabilistic treatment outperforms previous methods when applied to real world data. The key to efficient probabilistic tracking in this scenario is focused sampling of the object posteriors.
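
    The decomposition described above, analytic Kalman tracking of small local motion plus a sampled hypothesis for rare global jumps, can be caricatured in a few lines. The gating rule and all constants below are illustrative, not the paper's model.

```python
import numpy as np

def kalman_update(mu, P, z, Q=0.05, R=0.01):
    # Random-walk local motion: predict with process noise Q, correct with z.
    P = P + Q
    K = P / (P + R)
    return mu + K * (z - mu), (1.0 - K) * P

rng = np.random.default_rng(2)
mu, P = np.zeros(2), np.ones(2)  # object position estimate and variance
for z in rng.normal([5.0, 5.0], 0.1, size=(10, 2)):  # object has jumped away
    surprise = np.linalg.norm(z - mu) / np.sqrt(P.max() + 0.01)
    if surprise > 3.0:              # hypothesize a global move: re-anchor
        mu, P = z.copy(), np.ones(2)
    else:                           # otherwise track it as local motion
        mu, P = kalman_update(mu, P, z)
print(mu.round(2))                  # ~[5, 5]: the jump was recovered
```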

  • 17.
    Bore, Nils
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Torroba, Ignacio
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Folkesson, John
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Sparse Gaussian Process SLAM, Storage and Filtering for AUV Multibeam Bathymetry, 2018. In: 2018 IEEE OES Autonomous Underwater Vehicle Symposium, 2018. Conference paper (Refereed)
    Abstract [en]

    With dead-reckoning from velocity sensors, AUVs may construct short-term, local bathymetry maps of the sea floor using multibeam sensors. However, the position estimate from dead-reckoning will include some drift that grows with time. In this work, we focus on long-term onboard storage of these local bathymetry maps, and the alignment of maps with respect to each other. We propose using Sparse Gaussian Processes for this purpose, and show that the representation has several advantages, including an intuitive alignment optimization, data compression, and sensor noise filtering. We demonstrate these three key capabilities on two real-world datasets.
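
    To make the storage/filtering role of the sparse GP concrete, here is a hedged subset-of-regressors sketch: thousands of noisy multibeam depth hits are compressed into 50 inducing points that can be stored onboard and queried later. The seabed function, noise level and inducing-point count are invented for the example.

```python
import numpy as np

def rbf(A, B, ell=2.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(3)
X = rng.uniform(0, 20, (2000, 2))                    # beam hit positions (x, y)
depth = np.sin(0.3 * X[:, 0]) + 0.05 * rng.normal(size=len(X))  # noisy seabed

Z = rng.uniform(0, 20, (50, 2))          # inducing points: the stored map
Kzz = rbf(Z, Z) + 1e-8 * np.eye(len(Z))
Kzx = rbf(Z, X)
# Subset-of-regressors weights: 2000 readings collapse to 50 numbers.
w = np.linalg.solve(Kzx @ Kzx.T + 0.05**2 * Kzz, Kzx @ depth)

Xq = np.array([[10.0, 10.0]])
print(rbf(Xq, Z) @ w, np.sin(0.3 * 10.0))  # filtered depth estimate vs truth
```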

  • 18.
    Bore, Nils
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Torroba, Ignacio
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Folkesson, John
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Sparse Gaussian Process SLAM, Storage and Filtering for AUV Multibeam Bathymetry, 2018. In: AUV 2018 - 2018 IEEE/OES Autonomous Underwater Vehicle Workshop, Proceedings, Institute of Electrical and Electronics Engineers Inc., 2018. Conference paper (Refereed)
    Abstract [en]

    With dead-reckoning from velocity sensors, AUVs may construct short-term, local bathymetry maps of the sea floor using multibeam sensors. However, the position estimate from dead-reckoning will include some drift that grows with time. In this work, we focus on long-term onboard storage of these local bathymetry maps, and the alignment of maps with respect to each other. We propose using Sparse Gaussian Processes for this purpose, and show that the representation has several advantages, including an intuitive alignment optimization, data compression, and sensor noise filtering. We demonstrate these three key capabilities on two real-world datasets.

  • 19.
    Brucker, Manuel
    et al.
    German Aerosp Ctr DLR, Inst Robot & Mechatron, D-82234 Oberpfaffenhofen, Germany.
    Durner, Maximilian
    German Aerosp Ctr DLR, Inst Robot & Mechatron, D-82234 Oberpfaffenhofen, Germany.
    Ambrus, Rares
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    Marton, Zoltan Csaba
    German Aerosp Ctr DLR, Inst Robot & Mechatron, D-82234 Oberpfaffenhofen, Germany.
    Wendt, Axel
    Robert Bosch, Corp Res, St Joseph, MI, USA; Robert Bosch, Corp Res, Gerlingen, Germany.
    Jensfelt, Patric
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    Arras, Kai O.
    Robert Bosch, Corp Res, St Joseph, MI, USA; Robert Bosch, Corp Res, Gerlingen, Germany.
    Triebel, Rudolph
    German Aerosp Ctr DLR, Inst Robot & Mechatron, D-82234 Oberpfaffenhofen, Germany; Tech Univ Munich, Dep Comp Sci, Munich, Germany.
    Semantic Labeling of Indoor Environments from 3D RGB Maps, 2018. In: 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2018, p. 1871-1878. Conference paper (Refereed)
    Abstract [en]

    We present an approach to automatically assign semantic labels to rooms reconstructed from 3D RGB maps of apartments. Evidence for the room types is generated using state-of-the-art deep-learning techniques for scene classification and object detection based on automatically generated virtual RGB views, as well as from a geometric analysis of the map's 3D structure. The evidence is merged in a conditional random field, using statistics mined from different datasets of indoor environments. We evaluate our approach qualitatively and quantitatively and compare it to related methods.

  • 20.
    Buda, Mateusz
    et al.
    Duke Univ, Dept Radiol, Sch Med, Durham, NC 27710 USA; KTH Royal Inst Technol, Sch Elect Engn & Comp Sci, Stockholm, Sweden.
    Maki, Atsuto
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Mazurowski, Maciej A.
    Duke Univ, Dept Radiol, Sch Med, Durham, NC 27710 USA; Duke Univ, Dept Elect & Comp Engn, Durham, NC USA.
    A systematic study of the class imbalance problem in convolutional neural networks, 2018. In: Neural Networks, ISSN 0893-6080, E-ISSN 1879-2782, Vol. 106, p. 249-259. Article in journal (Refereed)
    Abstract [en]

    In this study, we systematically investigate the impact of class imbalance on classification performance of convolutional neural networks (CNNs) and compare frequently used methods to address the issue. Class imbalance is a common problem that has been comprehensively studied in classical machine learning, yet very limited systematic research is available in the context of deep learning. In our study, we use three benchmark datasets of increasing complexity, MNIST, CIFAR-10 and ImageNet, to investigate the effects of imbalance on classification and perform an extensive comparison of several methods to address the issue: oversampling, undersampling, two-phase training, and thresholding that compensates for prior class probabilities. Our main evaluation metric is area under the receiver operating characteristic curve (ROC AUC) adjusted to multi-class tasks, since the overall accuracy metric is associated with notable difficulties in the context of imbalanced data. Based on results from our experiments we conclude that (i) the effect of class imbalance on classification performance is detrimental; (ii) the method of addressing class imbalance that emerged as dominant in almost all analyzed scenarios was oversampling; (iii) oversampling should be applied to the level that completely eliminates the imbalance, whereas the optimal undersampling ratio depends on the extent of imbalance; (iv) as opposed to some classical machine learning models, oversampling does not cause overfitting of CNNs; (v) thresholding should be applied to compensate for prior class probabilities when the overall number of properly classified cases is of interest.
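
    Conclusion (iii), oversampling up to the level that completely eliminates the imbalance, is straightforward to apply when building a training set. A small sketch with made-up class counts:

```python
import numpy as np

def oversample_to_balance(labels, rng=None):
    """Return indices in which every class appears as often as the majority."""
    rng = rng or np.random.default_rng(0)
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.max()              # fully eliminate the imbalance, per (iii)
    parts = []
    for c, n in zip(classes, counts):
        members = np.flatnonzero(labels == c)
        parts.append(members)
        if n < target:                 # resample the minority with replacement
            parts.append(rng.choice(members, size=target - n, replace=True))
    idx = np.concatenate(parts)
    rng.shuffle(idx)
    return idx

y = np.array([0] * 1000 + [1] * 50)    # a 20:1 imbalanced toy dataset
idx = oversample_to_balance(y)
print(np.bincount(y[idx]))             # [1000 1000]
```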

  • 21.
    Butepage, Judith
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Cruciani, Silvia
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Kokic, Mia
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Welle, Michael
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    From Visual Understanding to Complex Object Manipulation, 2019. In: Annual Review of Control, Robotics, and Autonomous Systems, Vol. 2, p. 161-179. Article, review/survey (Refereed)
    Abstract [en]

    Planning and executing object manipulation requires integrating multiple sensory and motor channels while acting under uncertainty and complying with task constraints. As the modern environment is tuned for human hands, designing robotic systems with similar manipulative capabilities is crucial. Research on robotic object manipulation is divided into smaller communities interested in, e.g., motion planning, grasp planning, sensorimotor learning, and tool use. However, few attempts have been made to combine these areas into holistic systems. In this review, we aim to unify the underlying mechanics of grasping and in-hand manipulation by focusing on the temporal aspects of manipulation, including visual perception, grasp planning and execution, and goal-directed manipulation. Inspired by human manipulation, we envision that an emphasis on the temporal integration of these processes opens the way for human-like object use by robots.

  • 22.
    Butepage, Judith
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Kjellström, Hedvig
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Anticipating many futures: Online human motion prediction and generation for human-robot interaction, 2018. In: 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2018, p. 4563-4570. Conference paper (Refereed)
    Abstract [en]

    Fluent and safe interactions of humans and robots require both partners to anticipate the others' actions. The bottleneck of most methods is the lack of an accurate model of natural human motion. In this work, we present a conditional variational autoencoder that is trained to predict a window of future human motion given a window of past frames. Using skeletal data obtained from RGB depth images, we show how this unsupervised approach can be used for online motion prediction for up to 1660 ms. Additionally, we demonstrate online target prediction within the first 300-500 ms after motion onset without the use of target specific training data. The advantage of our probabilistic approach is the possibility to draw samples of possible future motion patterns. Finally, we investigate how movements and kinematic cues are represented on the learned low dimensional manifold.
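
    The conditional-VAE structure described above can be summarized in a compact PyTorch sketch: the encoder sees (past, future) windows, the decoder reconstructs the future from (z, past), and at test time sampling several z values yields several plausible future motions. Dimensions and layer sizes below are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MotionCVAE(nn.Module):
    def __init__(self, pose_dim=45, past=10, future=10, latent=16, hidden=128):
        super().__init__()
        self.latent = latent
        self.enc = nn.Sequential(
            nn.Linear((past + future) * pose_dim, hidden), nn.ReLU())
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.dec = nn.Sequential(
            nn.Linear(latent + past * pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, future * pose_dim))

    def forward(self, past, future):
        h = self.enc(torch.cat([past, future], dim=-1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(torch.cat([z, past], dim=-1)), mu, logvar

    @torch.no_grad()
    def sample_futures(self, past, n=5):
        # "Anticipating many futures": several samples for one observed past.
        z = torch.randn(n, self.latent)
        return self.dec(torch.cat([z, past.expand(n, -1)], dim=-1))

model = MotionCVAE()
futures = model.sample_futures(torch.zeros(1, 10 * 45))
print(futures.shape)  # torch.Size([5, 450]): five candidate future windows
```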

  • 23.
    Båberg, Fredrik
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    Formation Obstacle Avoidance using RRT and Constraint Based Programming, 2017. In: 2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), IEEE conference proceedings, 2017, article id 8088131. Conference paper (Refereed)
    Abstract [en]

    In this paper, we propose a new way of doing formation obstacle avoidance using a combination of Constraint Based Programming (CBP) and Rapidly Exploring Random Trees (RRTs). RRT is used to select waypoint nodes, and CBP is used to move the formation between those nodes, reactively rotating and translating the formation to pass the obstacles on the way. Thus, the CBP includes constraints for both formation keeping and obstacle avoidance, while striving to move the formation towards the next waypoint. The proposed approach is compared to a pure RRT approach where the motion between the RRT waypoints is done following linear interpolation trajectories, which are less computationally expensive than the CBP ones. The results of a number of challenging simulations show that the proposed approach is more efficient for scenarios with high obstacle densities.

  • 24.
    Bütepage, Judith
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Generative models for action generation and action understanding, 2019. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The question of how to build intelligent machines raises the question of how to represent the world to enable intelligent behavior. In nature, this representation relies on the interplay between an organism’s sensory input and motor input. Action-perception loops allow many complex behaviors to arise naturally. In this work, we take these sensorimotor contingencies as an inspiration to build robot systems that can autonomously interact with their environment and with humans. The goal is to pave the way for robot systems that can learn motor control in an unsupervised fashion and relate their own sensorimotor experience to observed human actions. By combining action generation and action understanding we hope to facilitate smooth and intuitive interaction between robots and humans in shared work spaces.

    To model robot sensorimotor contingencies and human behavior we employ generative models. Since generative models represent a joint distribution over relevant variables, they are flexible enough to cover the range of tasks that we are tackling here. Generative models can represent variables that originate from multiple modalities, model temporal dynamics, incorporate latent variables and represent uncertainty over any variable - all of which are features required to model sensorimotor contingencies. By using generative models, we can predict the temporal development of the variables in the future, which is important for intelligent action selection.

    We present two lines of work. Firstly, we focus on unsupervised learning of motor control with the help of sensorimotor contingencies. Based on Gaussian Process forward models we demonstrate how the robot can execute goal-directed actions with the help of planning techniques or reinforcement learning. Secondly, we present a number of approaches to model human activity, ranging from pure unsupervised motion prediction to including semantic action and affordance labels. Here we employ deep generative models, namely Variational Autoencoders, to model the 3D skeletal pose of humans over time and, if required, include semantic information. These two lines of work are then combined to implement physical human-robot interaction tasks. Our experiments focus on real-time applications, both when it comes to robot experiments and human activity modeling. Since many real-world scenarios do not have access to high-end sensors, we require our models to cope with uncertainty. Additional requirements are data-efficient learning, because of the wear and tear of the robot and human involvement, online employability, and operation under safety and compliance constraints. We demonstrate in our experiments that generative models of sensorimotor contingencies can handle these requirements satisfactorily.

  • 25.
    Bütepage, Judith
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Kjellström, Hedvig
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    A Probabilistic Semi-Supervised Approach to Multi-Task Human Activity Modeling. Manuscript (preprint) (Other academic)
    Abstract [en]

    Human behavior is a continuous stochastic spatio-temporal process which is governed by semantic actions and affordances as well as latent factors. Therefore, video-based human activity modeling is concerned with a number of tasks such as inferring current and future semantic labels, predicting future continuous observations as well as imagining possible future label and feature sequences. In this paper we present a semi-supervised probabilistic deep latent variable model that can represent both discrete labels and continuous observations as well as latent dynamics over time. This allows the model to solve several tasks at once without explicit fine-tuning. We focus here on the tasks of action classification, detection, prediction and anticipation as well as motion prediction and synthesis based on 3D human activity data recorded with Kinect. We further extend the model to capture hierarchical label structure and to model the dependencies between multiple entities, such as a human and objects. Our experiments demonstrate that our principled approach to human activity modeling can be used to detect current and anticipate future semantic labels and to predict and synthesize future label and feature sequences. When comparing our model to state-of-the-art approaches, which are specifically designed for e.g. action classification, we find that our probabilistic formulation outperforms or is comparable to these task specific models.

  • 26.
    Caccamo, Sergio
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Enhancing geometric maps through environmental interactions, 2018. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The deployment of rescue robots in real operations is becoming increasingly common thanks to recent advances in AI technologies and high performance hardware. Rescue robots can now operate for extended periods of time, cover wider areas and process larger amounts of sensory information, making them considerably more useful during real life-threatening situations, including both natural and man-made disasters.

    In this thesis we present results of our research which focuses on investigating ways of enhancing visual perception for Unmanned Ground Vehicles (UGVs) through environmental interactions using different sensory systems, such as tactile sensors and wireless receivers.

    We argue that a geometric representation of the robot surroundings built upon vision data only, may not suffice in overcoming challenging scenarios, and show that robot interactions with the environment can provide a rich layer of new information that needs to be suitably represented and merged into the cognitive world model. Visual perception for mobile ground vehicles is one of the fundamental problems in rescue robotics. Phenomena such as rain, fog, darkness, dust, smoke and fire heavily influence the performance of visual sensors, and often result in highly noisy data, leading to unreliable or incomplete maps.

    We address this problem through a collection of studies and structure the thesis as follows: Firstly, we give an overview of the Search & Rescue (SAR) robotics field, and discuss scenarios, hardware and related scientific questions. Secondly, we focus on the problems of control and communication. Mobile robots require stable communication with the base station to exchange valuable information. Communication loss often presents a significant mission risk, and disconnected robots are either abandoned, or autonomously try to back-trace their way to the base station. We show how non-visual environmental properties (e.g. the WiFi signal distribution) can be efficiently modeled using probabilistic active perception frameworks based on Gaussian Processes, and merged into geometric maps so as to facilitate the SAR mission. We then show how to use tactile perception to enhance mapping. Implicit environmental properties, such as the terrain deformability, are analyzed through strategic glances and touches and then mapped into probabilistic models. Lastly, we address the problem of reconstructing objects in the environment. We present a technique for simultaneous 3D reconstruction of static regions and rigidly moving objects in a scene that enables on-the-fly model generation. Although this thesis focuses mostly on rescue UGVs, the concepts presented can be applied to other mobile platforms that operate under similar circumstances. To make sure that the suggested methods work, we have put efforts into the design of user interfaces and the evaluation of those in user studies.
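
    As an illustration of the Gaussian-process signal modeling mentioned above, the sketch below fits a GP to sparse RSSI readings and picks the most uncertain grid cell as the next measurement location, a simple active-perception loop. The field, kernel and ranges are invented for the example.

```python
import numpy as np

def rbf(A, B, ell=3.0):
    return np.exp(-0.5 * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1) / ell**2)

rng = np.random.default_rng(4)
X = rng.uniform(0, 20, (30, 2))  # positions where the robot already measured
y = -40.0 - 2.0 * np.linalg.norm(X - [5.0, 5.0], axis=1)  # toy RSSI (dBm) field

K = rbf(X, X) + 1e-2 * np.eye(len(X))
grid = np.stack(np.meshgrid(np.linspace(0, 20, 40),
                            np.linspace(0, 20, 40)), -1).reshape(-1, 2)
Ks = rbf(grid, X)
mean = Ks @ np.linalg.solve(K, y)  # predicted signal map, mergeable with geometry
var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
print("next waypoint:", grid[np.argmax(var)])  # most informative place to sense
```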

  • 27.
    Caccamo, Sergio Salvatore
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Joint 3D Reconstruction of a Static Scene and Moving Objects, 2017. In: Proceedings of the 2017 International Conference on 3D Vision (3DV’17), IEEE, 2017. Conference paper (Other academic)
  • 28.
    Carlsson, Stefan
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Azizpour, Hossein
    KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST).
    Razavian, Ali Sharif
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Sullivan, Josephine
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Smith, Kevin
    KTH, School of Electrical Engineering and Computer Science (EECS), Computational Science and Technology (CST).
    The Preimage of Rectifier Network Activities, 2017. In: International Conference on Learning Representations (ICLR), 2017. Conference paper (Refereed)
    Abstract [en]

    The preimage of the activity at a certain level of a deep network is the set of inputs that result in the same node activity. For fully connected multi-layer rectifier networks we demonstrate how to compute the preimages of activities at arbitrary levels from knowledge of the parameters in a deep rectifying network. If the preimage set of a certain activity in the network contains elements from more than one class it means that these classes are irreversibly mixed. This implies that preimage sets, which are piecewise linear manifolds, are building blocks for describing the input manifolds of specific classes, i.e. all preimages should ideally be from the same class. We believe that the knowledge of how to compute preimages will be valuable in understanding the efficiency displayed by deep learning networks and could potentially be used in designing more efficient training algorithms.

  • 29.
    Carvalho, J. Frederico
    et al.
    KTH. KTH, CAS, RPL, Royal Inst Technol, Stockholm, Sweden.
    Vejdemo-Johansson, Mikael
    CUNY Coll Staten Isl, Math Dept, Staten Isl, NY 10314 USA; CUNY, Grad Ctr, Comp Sci, New York, NY USA.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL. KTH, CAS, RPL, Royal Inst Technol, Stockholm, Sweden.
    Pokorny, Florian T.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL. KTH, CAS, RPL, Royal Inst Technol, Stockholm, Sweden.
    Path Clustering with Homology Area, 2018. In: 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2018, p. 7346-7353. Conference paper (Refereed)
    Abstract [en]

    Path clustering has found many applications in recent years. Common approaches to this problem use aggregates of the distances between points to provide a measure of dissimilarity between paths which do not satisfy the triangle inequality. Furthermore, they do not take into account the topology of the space where the paths are embedded. To tackle this, we extend previous work in path clustering with relative homology, by employing minimum homology area as a measure of distance between homologous paths in a triangulated mesh. Further, we show that the resulting distance satisfies the triangle inequality, and how we can exploit the properties of homology to reduce the amount of pairwise distance calculations necessary to cluster a set of paths. We further compare the output of our algorithm with that of DTW on a toy dataset of paths, as well as on a dataset of real-world paths.

  • 30.
    Carvalho, Joao Frederico
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Vejdemo-Johansson, Mikael
    CUNY, Math Dept, Coll Staten Isl, New York, NY 10021 USA.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Pokorny, Florian T.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    An algorithm for calculating top-dimensional bounding chains, 2018. In: PeerJ Computer Science, ISSN 2376-5992, article id e153. Article in journal (Refereed)
    Abstract [en]

    We describe the Coefficient-Flow algorithm for calculating the bounding chain of an (n-1)-boundary on an n-manifold-like simplicial complex S. We prove its correctness and show that it has a computational time complexity of O(|S^(n-1)|) (where S^(n-1) is the set of (n-1)-faces of S). We estimate the big-O coefficient, which depends on the dimension of S and the implementation. We present an implementation, experimentally evaluate the complexity of our algorithm, and compare its performance with that of solving the underlying linear system.

  • 31.
    Chen, Xi
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Ghadirzadeh, Ali
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Folkesson, John
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Björkman, Mårten
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Jensfelt, Patric
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Deep Reinforcement Learning to Acquire Navigation Skills for Wheel-Legged Robots in Complex Environments, 2018. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018. Conference paper (Refereed)
    Abstract [en]

    Mobile robot navigation in complex and dynamic environments is a challenging but important problem. Reinforcement learning approaches fail to solve these tasks efficiently due to reward sparsities, temporal complexities and high-dimensionality of sensorimotor spaces which are inherent in such problems. We present a novel approach to train action policies to acquire navigation skills for wheel-legged robots using deep reinforcement learning. The policy maps height-map image observations to motor commands to navigate to a target position while avoiding obstacles. We propose to acquire the multifaceted navigation skill by learning and exploiting a number of manageable navigation behaviors. We also introduce a domain randomization technique to improve the versatility of the training samples. We demonstrate experimentally a significant improvement in terms of data-efficiency, success rate, robustness against irrelevant sensory data, and also the quality of the maneuver skills.
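
    Domain randomization, mentioned at the end of the abstract, is commonly implemented as an environment wrapper that re-samples simulation parameters at every episode. The sketch below assumes a hypothetical env exposing a set_params() hook; the parameter names and ranges are illustrative, not the paper's.

```python
import numpy as np

class DomainRandomizationWrapper:
    """Re-sample simulation parameters on reset so the policy cannot
    overfit to a single simulated world (gym-style interface assumed)."""

    def __init__(self, env, rng=None):
        self.env = env
        self.rng = rng or np.random.default_rng()

    def reset(self):
        self.env.set_params(  # hypothetical simulator hook, not a real API
            friction=self.rng.uniform(0.5, 1.5),
            motor_gain=self.rng.uniform(0.8, 1.2),
            heightmap_noise=self.rng.uniform(0.0, 0.05),
        )
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        # Sensor-level randomization: perturb observations as well.
        obs = obs + self.rng.normal(0.0, 0.01, size=np.shape(obs))
        return obs, reward, done, info
```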

  • 32.
    Colledanchise, Michele
    et al.
    Istituto Italiano di Tecnologia - IIT, Genoa, Italy.
    Almeida, Diogo
    Ögren, Petter
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Towards Blended Reactive Planning and Acting using Behavior Trees, 2019. Conference paper (Refereed)
    Abstract [en]

    In this paper, we show how a planning algorithm can be used to automatically create and update a Behavior Tree (BT), controlling a robot in a dynamic environment. The planning part of the algorithm is based on the idea of back chaining. Starting from a goal condition we iteratively select actions to achieve that goal, and if those actions have unmet preconditions, they are extended with actions to achieve them in the same way. The fact that BTs are inherently modular and reactive makes the proposed solution blend acting and planning in a way that enables the robot to effectively react to external disturbances. If an external agent undoes an action the robot re-executes it without re-planning, and if an external agent helps the robot, it skips the corresponding actions, again without re-planning. We illustrate our approach in two different robotics scenarios.
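
    The back-chaining step is the heart of the method: start from the goal condition and expand unmet preconditions with actions that achieve them. Below is a hedged, minimal rendition over a toy pick-and-place domain (the action set and fact names are invented); in the paper the resulting sequence would be assembled into a Behavior Tree whose ticking provides the reactivity discussed.

```python
def back_chain(goal, actions, state, plan=None):
    """actions: name -> (preconditions, effects); state: set of facts that hold."""
    plan = [] if plan is None else plan
    if goal in state:
        return plan                      # condition already holds: nothing to do
    name, (pre, eff) = next((n, a) for n, a in actions.items() if goal in a[1])
    for p in sorted(pre):                # satisfy unmet preconditions first
        back_chain(p, actions, state, plan)
    plan.append(name)
    state |= eff                         # the action's effects now hold
    return plan

actions = {
    "goto_object": (set(),                  {"at_object"}),
    "grasp":       ({"at_object"},          {"holding"}),
    "goto_goal":   ({"holding"},            {"at_goal"}),
    "place":       ({"holding", "at_goal"}, {"object_delivered"}),
}
print(back_chain("object_delivered", actions, state={"at_object"}))
# ['grasp', 'goto_goal', 'place']: already-true facts are not re-planned
```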

  • 33.
    Colledanchise, Michele
    et al.
    Parasuraman, Ramviyas Nattanmai
    Ögren, Petter
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Learning of Behavior Trees for Autonomous Agents, 2018. In: IEEE Transactions on Games, ISSN 2475-1502. Article in journal (Refereed)
    Abstract [en]

    In this paper, we study the problem of automatically synthesizing a successful Behavior Tree (BT) in an a priori unknown dynamic environment. Starting with a given set of behaviors, a reward function, and sensing in terms of a set of binary conditions, the proposed algorithm incrementally learns a switching structure in terms of a BT, that is able to handle the situations encountered. Exploiting the fact that BTs generalize And-Or-Trees and also provide very natural chromosome mappings for genetic programming, we combine the long term performance of Genetic Programming with a greedy element and use the And-Or analogy to limit the size of the resulting structure. Finally, earlier results on BTs enable us to provide certain safety guarantees for the resulting system. Using the testing environment Mario AI we compare our approach to alternative methods for learning BTs and Finite State Machines. The evaluation shows that the proposed approach generated solutions with better performance, and often fewer nodes than the other two methods.

  • 34.
    Correia, Filipa
    et al.
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal.
    Mascarenhas, Samuel F.
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal.
    Gomes, Samuel
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal.
    Arriaga, Patricia
    CIS IUL, Inst Univ Lisboa ISCTE IUL, Lisbon, Portugal.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Prada, Rui
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal.
    Melo, Francisco S.
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal.
    Paiva, Ana
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal.
    Exploring Prosociality in Human-Robot Teams, 2019. In: HRI '19: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction, IEEE, 2019, p. 143-151. Conference paper (Refereed)
    Abstract [en]

    This paper explores the role of prosocial behaviour when people team up with robots in a collaborative game that presents a social dilemma similar to a public goods game. An experiment was conducted with the proposed game in which each participant joined a team with a prosocial robot and a selfish robot. During the five rounds of the game, each player chooses between contributing to the team goal (cooperate) or contributing to their individual goal (defect). The prosociality level of the robots only affects their strategies for playing the game, as one always cooperates and the other always defects. We conducted a user study at the office of a large corporation with 70 participants, in which we manipulated the game result (winning or losing) in a between-subjects design. Results revealed two important considerations: (1) the prosocial robot was rated more positively in terms of its social attributes than the selfish robot, regardless of the game result; (2) the perception of competence, the responsibility attribution (blame/credit), and the preference for a future partner revealed significant differences only in the losing condition. These results raise important considerations for the creation of robotic partners, the understanding of group dynamics and, from a more general perspective, the promotion of a prosocial society.

  • 35.
    Cruciani, Silvia
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Vision-Based In-Hand Manipulation with Limited Dexterity (2019). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In-hand manipulation is an action that allows for changing the grasp on an object without the need for releasing it. This action is an important component of the manipulation process and helps solve many tasks. Human hands are dexterous instruments suitable for moving an object inside the hand. However, it is not common for robots to be equipped with dexterous hands due to the many challenges in control and mechanical design. In fact, robots are frequently equipped with simple parallel grippers, which are robust but lack dexterity. This thesis focuses on achieving in-hand manipulation with limited dexterity. The proposed solutions are based only on visual input, without the need for additional sensing capabilities in the robot's hand.

    Extrinsic dexterity allows simple grippers to execute in-hand manipulation thanks to the exploitation of external supports. This thesis introduces new methods for solving in-hand manipulation using inertial forces, controlled friction and external pushes as additional supports to enhance the robot's manipulation capabilities. Pivoting is seen as a possible solution for simple grasp changes: two methods, which cope with inexact friction modeling, are reported, and pivoting is successfully integrated in an overall manipulation task. For large-scale in-hand manipulation, the Dexterous Manipulation Graph is introduced as a novel representation of the object. This graph is a useful tool for planning how to change a certain grasp via in-hand manipulation. It can also be exploited to combine in-hand manipulation and regrasping to augment the possibilities of adjusting the grasp. In addition, this method is extended to achieve in-hand manipulation even for objects with unknown shape. To execute the planned object motions within the gripper, dual-arm robots are exploited to compensate for the poor dexterity of parallel grippers: the second arm is seen as an additional support that helps in pushing and holding the object to successfully adjust the grasp configuration.

    This thesis presents examples of successful executions of tasks where in-hand manipulation is a fundamental step in the manipulation process, showing how the proposed methods are a viable solution for achieving in-hand manipulation with limited dexterity.

  • 36.
    Cruciani, Silvia
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Almeida, Diogo
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL. KTH.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Karayiannidis, Yiannis
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Discrete Bimanual Manipulation for Wrench Balancing. Manuscript (preprint) (Other academic)
    Abstract [en]

    Dual-arm robots can overcome grasping force and payload limitations of a single arm by jointly grasping an object. However, if the distribution of mass of the grasped object is not even, each arm will experience different wrenches that can exceed its payload limits. In this work, we consider the problem of balancing the wrenches experienced by a dual-arm robot grasping a rigid tray. The distribution of wrenches among the robot arms changes due to objects being placed on the tray. We present an approach to reduce the wrench imbalance among arms through discrete bimanual manipulation. Our approach is based on sequential sliding motions of the grasp points on the surface of the object, to attain a more balanced configuration. This is achieved in a discrete manner, one arm at a time, to minimize the potential for undesirable object motion during execution. We validate our modeling approach and system design through a set of robot experiments.
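
    A planar statics toy example of the imbalance the manuscript addresses (the grasp geometry and the one-arm-at-a-time sliding rule below are our simplifications, not the paper's model):

        # A rigid tray is held at grasp points x1 < x2 and carries total weight W
        # acting at the centre of mass x_c. Moment balance gives each arm's load.

        def grasp_forces(x1, x2, x_c, W):
            f1 = W * (x2 - x_c) / (x2 - x1)   # larger when the COM is near x1
            f2 = W * (x_c - x1) / (x2 - x1)
            return f1, f2

        def slide_to_balance(x1, x2, x_c, W, step=0.01, tol=1e-3, max_steps=1000):
            """Discretely slide one grasp point at a time until the loads match."""
            for _ in range(max_steps):
                f1, f2 = grasp_forces(x1, x2, x_c, W)
                if abs(f1 - f2) < tol:
                    break
                if f1 > f2:
                    x2 = max(x2 - step, x_c + step)  # slide light grasp toward COM
                else:
                    x1 = min(x1 + step, x_c - step)
            return x1, x2

    With x1 = 0, x2 = 10 and the COM at x_c = 3, the loads start at 0.7 W and 0.3 W and equalise once x2 has slid to 6, i.e. when the COM sits midway between the grasps.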

  • 37.
    Cruciani, Silvia
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Hang, Kaiyu
    Yale University.
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Dual-Arm In-Hand Manipulation Using Visual Feedback (2019). Conference paper (Refereed)
    Abstract [en]

    In this work, we address the problem of executing in-hand manipulation based on visual input. Given an initial grasp, the robot has to change its grasp configuration without releasing the object. We propose a method for in-hand manipulation planning and execution based on information on the object’s shape using a dual-arm robot. From the available information on the object, which can be a complete point cloud but also partial data, our method plans a sequence of rotations and translations to reconfigure the object’s pose. This sequence is executed using non-prehensile pushes defined as relative motions between the two robot arms.
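
    As a minimal sketch of executing such a plan, one can track a pushing contact through a sequence of planar object motions and express the pusher's waypoints in the grasping arm's frame (the SE(2) parameterisation and fixed contact point are our assumptions):

        import numpy as np

        def se2(theta, tx, ty):
            c, s = np.cos(theta), np.sin(theta)
            return np.array([[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]])

        def pusher_waypoints(contact_point, object_motions):
            """Track a contact point through a sequence of (theta, tx, ty)
            object motions; waypoints are in the grasping-gripper frame."""
            p = np.array([contact_point[0], contact_point[1], 1.0])
            waypoints, T = [], np.eye(3)
            for theta, tx, ty in object_motions:
                T = se2(theta, tx, ty) @ T        # accumulate the object pose change
                waypoints.append((T @ p)[:2])     # where the pusher must be next
            return waypoints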

  • 38.
    Cruciani, Silvia
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Yin, Hang
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    In-Hand Manipulation of Objects with Unknown Shapes. Manuscript (preprint) (Other academic)
    Abstract [en]

    This work addresses the problem of changing grasp configurations on objects with an unknown shape through in-hand manipulation. Our approach leverages shape priors, learned as deep generative models, to infer novel object shapes from partial visual sensing. The Dexterous Manipulation Graph method is extended to build upon incremental data and account for estimation uncertainty in searching a sequence of manipulation actions. We show that our approach successfully solves in-hand manipulation tasks with unknown objects, and demonstrate the validity of these solutions with robot experiments.

  • 39.
    Cruciani, Silvia
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Integrating Path Planning and Pivoting (2018). In: 2018 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS) / [ed] Maciejewski, AA; Okamura, A; Bicchi, A; Stachniss, C; Song, DZ; Lee, DH; Chaumette, F; Ding, H; Li, JS; Wen, J; Roberts, J; Masamune, K; Chong, NY; Amato, N; Tsagwarakis, N; Rocco, P; Asfour, T; Chung, WK; Yasuyoshi, Y; Sun, Y; Maciekeski, T; Althoefer, K; AndradeCetto, J; Chung, WK; Demircan, E; Dias, J; Fraisse, P; Gross, R; Harada, H; Hasegawa, Y; Hayashibe, M; Kiguchi, K; Kim, K; Kroeger, T; Li, Y; Ma, S; Mochiyama, H; Monje, CA; Rekleitis, I; Roberts, R; Stulp, F; Tsai, CHD; Zollo, L, IEEE, 2018, p. 6601-6608. Conference paper (Refereed)
    Abstract [en]

    In this work we propose a method for integrating motion planning and in-hand manipulation. Commonly addressed as a separate step from the final execution, in-hand manipulation allows the robot to reorient an object within the end-effector for the successful outcome of the goal task. Jointly repositioning the object and moving the manipulator towards its desired final pose saves execution time and introduces more flexibility in the system. We address this problem using a pivoting strategy (i.e., in-hand rotation) for repositioning the object, and we integrate this strategy with a path planner for the execution of a complex task. The method is applied to a Baxter robot and its efficacy is shown by experimental results.
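
    Purely for intuition (the paper's controllers additionally handle inexact friction, which this toy model ignores), a frictionless gravity pivot can be timed by integrating pendulum dynamics about the grasp axis:

        import numpy as np

        def pivot_time(theta0, theta_goal, m, g, l, inertia, dt=1e-4, t_max=5.0):
            """Integrate I * theta'' = -m g l sin(theta) from rest at theta0 and
            return the time at which theta crosses theta_goal (None if never)."""
            theta, omega, t = theta0, 0.0, 0.0
            while (theta - theta_goal) * (theta0 - theta_goal) > 0:
                omega += -(m * g * l / inertia) * np.sin(theta) * dt
                theta += omega * dt
                t += dt
                if t > t_max:
                    return None
            return t

    The gripper would loosen at t = 0 and re-clamp at the returned time; a robust implementation would of course close the loop on the measured angle instead.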

  • 40.
    Cruciani, Silvia
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Hang, Kaiyu
    Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Hong Kong, Peoples R China; Hong Kong Univ Sci & Technol, Inst Adv Study, Hong Kong, Peoples R China.
    Dexterous Manipulation Graphs (2018). In: 2018 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS) / [ed] Maciejewski, AA; Okamura, A; Bicchi, A; Stachniss, C; Song, DZ; Lee, DH; Chaumette, F; Ding, H; Li, JS; Wen, J; Roberts, J; Masamune, K; Chong, NY; Amato, N; Tsagwarakis, N; Rocco, P; Asfour, T; Chung, WK; Yasuyoshi, Y; Sun, Y; Maciekeski, T; Althoefer, K; AndradeCetto, J; Chung, WK; Demircan, E; Dias, J; Fraisse, P; Gross, R; Harada, H; Hasegawa, Y; Hayashibe, M; Kiguchi, K; Kim, K; Kroeger, T; Li, Y; Ma, S; Mochiyama, H; Monje, CA; Rekleitis, I; Roberts, R; Stulp, F; Tsai, CHD; Zollo, L, IEEE, 2018, p. 2040-2047. Conference paper (Refereed)
    Abstract [en]

    We propose the Dexterous Manipulation Graph as a tool to address in-hand manipulation and reposition an object inside a robot's end-effector. This graph is used to plan a sequence of manipulation primitives so as to bring the object to the desired end pose. This sequence of primitives is translated into motions of the robot to move the object held by the end-effector. We use a dual-arm robot with parallel grippers to test our method on a real system and show successful planning and execution of in-hand manipulation.
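
    Once the graph is built, the planning step reduces to graph search. A minimal sketch with an adjacency-dict graph (the node/edge construction itself is the paper's contribution and is assumed given here):

        from collections import deque

        def plan_grasp_sequence(dmg, start, goal):
            """Breadth-first search over grasp nodes; edges are feasible
            in-hand motion primitives. Returns the node sequence or None."""
            parents, frontier = {start: None}, deque([start])
            while frontier:
                node = frontier.popleft()
                if node == goal:
                    path = []
                    while node is not None:
                        path.append(node)
                        node = parents[node]
                    return path[::-1]
                for nxt in dmg.get(node, ()):
                    if nxt not in parents:
                        parents[nxt] = node
                        frontier.append(nxt)
            return None

        # Usage: plan_grasp_sequence({"g0": ["g1"], "g1": ["g0", "g2"], "g2": ["g1"]},
        #                            "g0", "g2") returns ["g0", "g1", "g2"].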

  • 41.
    Cruciani, Silvia
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Yin, Hang
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    In-Hand Manipulation of Objects with Unknown Shapes. Manuscript (preprint) (Other academic)
    Abstract [en]

    This work addresses the problem of changing grasp configurations on objects with an unknown shape through in-hand manipulation. Our approach leverages shape priors, learned as deep generative models, to infer novel object shapes from partial visual sensing. The Dexterous Manipulation Graph method is extended to build upon incremental data and account for estimation uncertainty in searching a sequence of manipulation actions. We show that our approach successfully solves in-hand manipulation tasks with unknown objects, and demonstrate the validity of these solutions with robot experiments.

  • 42.
    Djikic, Addi
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Segmentation and Depth Estimation of Urban Road Using Monocular Camera and Convolutional Neural Networks (2018). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Deep learning for safe autonomous transport is rapidly emerging. Fast and robust perception for autonomous vehicles will be crucial for future navigation in urban areas with high traffic and human interplay.

    Previous work focuses on extracting full-image depth maps or on finding specific road features such as lanes. However, in urban environments lanes are not always present, and sensors such as LiDAR provide only sparse 3D point clouds, so depth perception of the road requires demanding algorithmic approaches.

    In this thesis we derive a novel convolutional neural network that we call AutoNet. It is designed as an encoder-decoder network for pixel-wise depth estimation of the drivable free-space of an urban road, using only a monocular camera, handled as a supervised regression problem. AutoNet is also constructed as a classification network that solely classifies and segments the drivable free-space in real-time with monocular vision, handled as a supervised classification problem, which proves to be a simpler and more robust solution than the regression approach.

    We also implement the state-of-the-art neural network ENet for comparison, which is designed for fast real-time semantic segmentation and fast inference. The evaluation shows that AutoNet outperforms ENet on every performance metric, but is slower in terms of frame rate. However, optimization techniques are proposed as future work to increase the frame rate of the network while maintaining its robustness and performance.

    All training and evaluation are done on the Cityscapes dataset. New ground-truth labels for road depth perception are created for training with a novel approach of fusing pre-computed depth maps with semantic labels. Data collection is conducted with a Scania vehicle mounted with a monocular camera to test the final derived models.

    The proposed AutoNet shows promising state-of-the-art performance in regard to road depth estimation as well as road classification.
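
    The thesis abstract does not list AutoNet's layers; as a shape-level illustration of an encoder-decoder regressing per-pixel depth, a toy PyTorch model might look as follows (all sizes are assumptions):

        import torch.nn as nn

        class TinyDepthNet(nn.Module):
            """Downsample by 4x, then upsample back to a 1-channel depth map."""
            def __init__(self):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                )
                self.decoder = nn.Sequential(
                    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
                )
            def forward(self, x):
                return self.decoder(self.encoder(x))

        # Supervised regression: loss = nn.functional.l1_loss(model(rgb), depth_gt)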

  • 43.
    Englesson, Erik
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Azizpour, Hossein
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Efficient Evaluation-Time Uncertainty Estimation by Improved Distillation (2019). Conference paper (Refereed)
  • 44.
    Ericson, Ludvig
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Flying High: Deep Imitation Learning of Optimal Control for Unmanned Aerial Vehicles (2018). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Optimal control for multicopters is difficult, in part due to the low processing power available and the instability inherent to multicopters. Deep imitation learning is a method for approximating an expert control policy with a neural network, and it has the potential of improving control for multicopters. We investigate the performance and reliability of deep imitation learning with trajectory optimization as the expert policy by first defining a dynamics model for multicopters and applying a trajectory optimization algorithm to it. Our investigation shows that network architecture plays an important role in the characteristics of both the learning process and the resulting control policy, and that, in particular, trajectory optimization can be leveraged to improve convergence times for imitation learning. Finally, we identify some limitations and future areas of study and development for the technology.
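
    At its core, the imitation step is supervised regression onto expert labels. A behaviour-cloning sketch (state/action dimensions and the expert interface are assumptions; the thesis uses trajectory optimization as the expert):

        import torch
        import torch.nn as nn

        policy = nn.Sequential(nn.Linear(12, 64), nn.Tanh(),
                               nn.Linear(64, 64), nn.Tanh(),
                               nn.Linear(64, 4))          # e.g. four rotor commands
        optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

        def behavior_clone(states, expert_actions, epochs=100):
            """states: (N, 12) tensor; expert_actions: (N, 4) targets produced
            offline by the trajectory optimizer."""
            for _ in range(epochs):
                loss = nn.functional.mse_loss(policy(states), expert_actions)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
            return loss.item()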

  • 45.
    Eriksson, Sara
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
    Unander-Scharin, Åsa
    Luleå University of Technology.
    Trichon, Vincent
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
    Unander-Scharin, Carl
    Karlstad University.
    Kjellström, Hedvig
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Höök, Kristina
    KTH, School of Electrical Engineering and Computer Science (EECS), Media Technology and Interaction Design, MID.
    Dancing with Drones: Crafting Novel Artistic Expressions through Intercorporeality (2019). In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2019, p. 617:1-617:12. Conference paper (Refereed)
  • 46.
    Ghadirzadeh, Ali
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Sensorimotor Robot Policy Training using Reinforcement Learning (2018). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Robots are becoming more ubiquitous in our society and taking over many tasks that were previously considered human hallmarks. Many of these tasks, e.g., autonomously driving a car, collaborating with humans in dynamic and changing working conditions and performing household chores, require human-level intelligence to perceive the world and to act appropriately. In this thesis, we pursue a different approach compared to classical methods that often construct a robot controller based on the perception-then-action paradigm. We devise robotic action-selection policies by considering action-selection and perception processes as being intertwined, emphasizing that perception comes prior to action and action is key to perception. The main hypothesis is that complex robotic behaviors come as the result of mastering sensorimotor contingencies (SMCs), i.e., regularities between motor actions and associated changes in sensory observations, where SMCs can be seen as building blocks to skillful behaviors. We elaborate and investigate this hypothesis by deliberate design of frameworks which enable policy training merely based on data experienced by a robot, without intervention of human experts for analytical modeling or calibration. In such circumstances, action policies can be obtained by the reinforcement learning (RL) paradigm by making exploratory action decisions and reinforcing patterns of SMCs that lead to reward events for a given task. However, the dimensionality of sensorimotor spaces, complex dynamics of physical tasks, sparseness of reward events, the limited amount of data from real-robot experiments, ambiguities of crediting past decisions and safety issues, which arise from the exploratory actions of a physical robot, pose challenges to obtaining a policy based on data-driven methods alone. In this thesis, we introduce our contributions to dealing with the aforementioned issues by devising learning frameworks which endow a robot with the ability to integrate sensorimotor data to obtain action-selection policies. The effectiveness of the proposed frameworks is demonstrated by evaluating the methods on a number of real robotic tasks, illustrating their suitability for acquiring different skills and for making sequential action decisions in high-dimensional sensorimotor spaces, with limited data and sparse rewards.
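
    The data-driven policy-search loop the thesis builds on can be summarised by a generic REINFORCE sketch (the environment interface, network sizes, and the fixed exploration noise are placeholders, not the thesis code):

        import torch
        import torch.nn as nn

        policy = nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 2))
        optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)

        def train_episode(env, horizon=200):
            """env.reset() -> obs; env.step(a) -> (obs, reward, done): assumed API."""
            log_probs, rewards, obs = [], [], env.reset()
            for _ in range(horizon):
                mean = policy(torch.as_tensor(obs, dtype=torch.float32))
                dist = torch.distributions.Normal(mean, 0.1)   # exploratory actions
                action = dist.sample()
                log_probs.append(dist.log_prob(action).sum())
                obs, reward, done = env.step(action.numpy())
                rewards.append(reward)
                if done:
                    break
            ret = sum(rewards)                           # undiscounted episodic return
            loss = -ret * torch.stack(log_probs).sum()   # REINFORCE gradient estimator
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            return ret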

  • 47.
    Guin, Agneev
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Terrain Classification to find Drivable Surfaces using Deep Neural Networks: Semantic segmentation for unstructured roads combined with the use of Gabor filters to determine drivable regions trained on a small dataset (2018). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Autonomous vehicles face various challenges under difficult terrain conditions, such as marginally rural or back-country roads, due to the lack of lane information, road signs or traffic signals. In this thesis, we investigate a novel approach of using Deep Neural Networks (DNNs) to classify off-road surfaces into terrain types, with the aim of supporting autonomous navigation in unstructured environments. For example, off-road surfaces can be classified as asphalt, gravel, grass, mud, snow, etc.

    Images from the camera mounted on a mining truck were used to perform semantic segmentation and to classify road surface types. Camera images were segmented manually for training into sets of 16 and 9 classes, for all relevant classes and the drivable classes respectively. A small but diverse dataset of 100 images was augmented and expanded with nearby frames from the video clips. Neural networks were used to test the classification performance under these off-road conditions. A pre-trained AlexNet was compared to networks without pre-training. Gabor filters, known to distinguish textured surfaces, were further used to improve the results of the neural network.

    The experiments show that pre-trained networks perform well with small datasets and many classes. A combination of Gabor filters with pre-trained networks can establish a dependable navigation path under difficult terrain conditions. While the results seem positive for images similar to the training scenes, the networks fail to perform well in other situations. Though the tests imply that larger datasets are required for dependable results, this is a step closer to making autonomous vehicles capable of driving under off-road conditions.
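
    The Gabor preprocessing step is straightforward to reproduce with OpenCV; the bank parameters below are illustrative choices, not those of the thesis:

        import cv2
        import numpy as np

        def gabor_bank(ksize=31, sigma=4.0, lambd=10.0, gamma=0.5, n_orient=8):
            """One kernel per orientation, evenly spread over [0, pi)."""
            thetas = np.linspace(0, np.pi, n_orient, endpoint=False)
            return [cv2.getGaborKernel((ksize, ksize), sigma, t, lambd, gamma)
                    for t in thetas]

        def texture_response(gray, bank):
            """Per-pixel maximum response over orientations; emphasises texture."""
            responses = [cv2.filter2D(gray, cv2.CV_32F, k) for k in bank]
            return np.max(np.stack(responses), axis=0)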

  • 48.
    Guo, Meng
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    Boskos, Dimitris
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    Tumova, Jana
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Dimarogonas, Dimos V.
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, ACCESS Linnaeus Centre. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    Distributed hybrid control synthesis for multi-agent systems from high-level specifications (2018). In: Control Subject to Computational and Communication Constraints, Springer Verlag, 2018, 475, p. 241-260. Chapter in book (Refereed)
    Abstract [en]

    Many current control applications necessitate the consideration of systems with multiple interconnected components. These components/agents may need to fulfill high-level tasks at a discrete planning layer and also coupled constraints at the continuous control layer. Toward this end, the need for combining decentralized control at the continuous layer with planning at the discrete layer becomes apparent. While there are approaches that handle the problem in a top-down centralized manner, decentralized bottom-up approaches have not been pursued to the same extent. We present here some of our results for the problem of combined, hybrid control and task planning from high-level specifications for multi-agent systems in a bottom-up manner. In the first part, we present some initial results on extending the necessary notion of abstractions to multi-agent systems in a distributed fashion. We then consider a setup where agents are assigned individual tasks in the form of linear temporal logic (LTL) formulas and derive local task planning strategies for each agent. In the last part, the problem of combined distributed task planning and control under coupled continuous constraints is further considered.

  • 49.
    Gällström, Andreas
    et al.
    Saab AB, SE-581 88 Linköping, Sweden; Department of Electrical and Information Technology, Lund University.
    Rixon Fuchs, Louise
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. Saab AB, SE-581 88 Linköping, Sweden.
    Larsson, Christer
    Saab AB, SE-581 88 Linköping, Sweden; Department of Electrical and Information Technology, Lund University.
    Enhanced sonar image resolution using compressive sensing modelling (2019). In: Conference Proceedings 5th Underwater Acoustics Conference and Exhibition UACE2019 / [ed] John S. Papadakis, UACE, 2019, p. 995-999. Conference paper (Other academic)
    Abstract [en]

    The sonar image resolution is classically limited by the sonar array dimensions. There are several techniques to enhance the resolution; the most common is the synthetic aperture sonar (SAS) technique, where several pings are added coherently to achieve a longer array and thereby higher cross-range resolution. This leads to high requirements on navigation accuracy, but the different autofocus techniques in general also require collecting overlapping data, which limits the acquisition speed when covering a specific area. In this paper, we investigate the possibility of enhancing the resolution in images processed from a single ping measurement using compressive sensing methods. A model consisting of isotropic point scatterers is used for the imaged target. The point scatterer amplitudes are frequency- and angle-independent. We assume only direct paths between the scatterers and the transmitter/receiver in the inverse problem formulation. The solution to this system of equations turns out to be naturally sparse, i.e., relatively few scatterers are required to describe the measured signal. The sparsity means that L1 optimization and methods from compressive sensing (CS) can be used to solve the inverse problem efficiently. We use the basis pursuit denoise algorithm (BPDN), as implemented in the SPGL1 package, to solve the optimization problem. We present results based on CS applied to measurements collected at Saab. The measurements were collected using the experimental platform Sapphires in the freshwater lake Vättern. Images processed using classical back-projection algorithms are compared to sonar images with enhanced resolution using CS, showing a 10-times improvement in cross-range resolution.
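
    The paper solves BPDN with SPGL1; to show the shape of the inverse problem, here is a plain ISTA iteration for the closely related LASSO formulation (the measurement matrix A, mapping sparse scatterer amplitudes x to the measured signal y, is assumed given):

        import numpy as np

        def ista(A, y, lam, n_iter=500):
            """Minimise 0.5*||A x - y||^2 + lam*||x||_1 by proximal gradient."""
            L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                z = x - A.T @ (A @ x - y) / L        # gradient step
                x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
            return x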

  • 50.
    Hamesse, Charles
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Simultaneous Measurement Imputation and Rehabilitation Outcome Prediction for Achilles Tendon Rupture (2018). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Achilles tendon rupture (ATR) is a typical soft tissue injury. Rehabilitation after such musculoskeletal injuries remains a prolonged process with a highly variable outcome. Being able to predict the rehabilitation outcome accurately is crucial for treatment decision support. In this work, we design a probabilistic model to predict the rehabilitation outcome for ATR using a clinical cohort with numerous missing entries. Our model is trained end-to-end in order to simultaneously predict the missing entries and the rehabilitation outcome. We evaluate our model and compare it with multiple baselines, including multi-stage methods. Experimental results demonstrate the superiority of our model over these baseline multi-stage approaches with various data imputation methods for ATR rehabilitation outcome prediction.
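
    A minimal end-to-end sketch in the spirit of the abstract, with one subnetwork imputing missing entries and another predicting the outcome from the completed record (dimensions, architecture, and loss weighting are our assumptions):

        import torch
        import torch.nn as nn

        class ImputePredict(nn.Module):
            def __init__(self, d=40):
                super().__init__()
                self.imputer = nn.Sequential(nn.Linear(2 * d, 64), nn.ReLU(),
                                             nn.Linear(64, d))
                self.predictor = nn.Sequential(nn.Linear(d, 32), nn.ReLU(),
                                               nn.Linear(32, 1))
            def forward(self, x, mask):
                """mask is 1 where an entry is observed, 0 where it is missing."""
                filled = self.imputer(torch.cat([x * mask, mask], dim=-1))
                complete = mask * x + (1 - mask) * filled
                return complete, self.predictor(complete)

        # Joint training signal: reconstruct observed entries and fit the outcome,
        # e.g. mse(complete * mask, x * mask) + mse(outcome_hat, outcome).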
