  • 1. Alexanderson, Simon
    et al.
    Henter, Gustav Eje
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Tal, musik och hörsel, TMH.
    Kucherenko, Taras
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Beskow, Jonas
    KTH, Tidigare Institutioner (före 2005), Tal, musik och hörsel.
    Style-Controllable Speech-Driven Gesture Synthesis Using Normalising Flows (2020). Conference paper (Refereed)
    Abstract [en]

    Automatic synthesis of realistic gestures promises to transform the fields of animation, avatars and communicative agents. In off-line applications, novel tools can alter the role of an animator to that of a director, who provides only high-level input for the desired animation; a learned network then translates these instructions into an appropriate sequence of body poses. In interactive scenarios, systems for generating natural animations on the fly are key to achieving believable and relatable characters. In this paper we address some of the core issues towards these ends. By adapting a deep learning-based motion synthesis method called MoGlow, we propose a new generative model for generating state-of-the-art realistic speech-driven gesticulation. Owing to the probabilistic nature of the approach, our model can produce a battery of different, yet plausible, gestures given the same input speech signal. Just like humans, this gives a rich natural variation of motion. We additionally demonstrate the ability to exert directorial control over the output style, such as gesture level, speed, symmetry and spatial extent. Such control can be leveraged to convey a desired character personality or mood. We achieve all this without any manual annotation of the data. User studies evaluating upper-body gesticulation confirm that the generated motions are natural and match the input speech well. Our method scores above all prior systems and baselines on these measures, and comes close to the ratings of the original recorded motions. We furthermore find that we can accurately control gesticulation styles without unnecessarily compromising perceived naturalness. Finally, we also demonstrate an application of the same method to full-body gesticulation, including the synthesis of stepping motion and stance.

  • 2.
    Almeida, Diogo
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Dual-Arm Robotic Manipulation under Uncertainties and Task-Based Redundancy (2019). Doctoral thesis, comprising papers (Other academic)
    Abstract [en]

    Robotic manipulators are mostly employed in industrial environments, where their tasks can be prescribed with little to no uncertainty. This is possible in scenarios where the deployment time of robot workcells is not prohibitive, such as in the automotive industry. In other contexts, however, the time cost of setting up a classical robotic automation workcell is often prohibitive. This is the case with cellphone manufacturing, for example, which is currently mostly executed by human workers. Robotic automation is nevertheless desirable in these human-centric environments, as a robot can automate the most tedious parts of an assembly. To deploy robots in these environments, however, requires an ability to deal with uncertainties and to robustly execute any given task. In this thesis, we discuss two topics related to autonomous robotic manipulation. First, we address parametric uncertainties in manipulation tasks, such as the location of contacts during the execution of an assembly. We propose and experimentally evaluate two methods that rely on force and torque measurements to produce estimates of task related uncertainties: a method for dexterous manipulation under uncertainties which relies on a compliant rotational degree of freedom at the robot's gripper grasp point and exploits contact with an external surface, and a cooperative manipulation system which is able to identify the kinematics of a two-degrees-of-freedom mechanism. Then, we consider redundancies in dual-arm robotic manipulation. Dual-armed robots offer a large degree of redundancy which can be exploited to ensure a more robust task execution. When executing an assembly task, for instance, robots can freely change the location of the assembly in their workspace without affecting the task execution. We discuss methods that explore these types of redundancies in relative motion tasks in the form of asymmetries in their execution. Finally, we approach the converse problem by presenting a system which is able to balance measured forces and torques at its end-effectors by leveraging relative motion between them, while grasping a rigid tray. This is achieved through discrete sliding of the grasp points, which constitutes a novel application of bimanual dexterous manipulation.

  • 3.
    Almeida, Diogo
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH.
    Ambrus, Rares
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Caccamo, Sergio
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Chen, Xi
    KTH.
    Cruciani, Silvia
    Pinto Basto De Carvalho, Joao F
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Haustein, Joshua
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Marzinotto, Alejandro
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Vina, Francisco
    KTH.
    Karayiannidis, Yannis
    KTH.
    Ögren, Petter
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Optimeringslära och systemteori.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Team KTH’s Picking Solution for the Amazon Picking Challenge 2016 (2017). In: Warehouse Picking Automation Workshop 2017: Solutions, Experience, Learnings and Outlook of the Amazon Robotics Challenge, 2017. Conference paper (Other (popular science, debate, etc.))
    Abstract [en]

    In this work we summarize the solution developed by Team KTH for the Amazon Picking Challenge 2016 in Leipzig, Germany. The competition simulated a warehouse automation scenario and was divided into two tasks: a picking task, where a robot picks items from a shelf and places them in a tote, and a stowing task, the inverse, where the robot picks items from a tote and places them in a shelf. We describe our approach to the problem, starting from a high-level overview of our system and later delving into the details of our perception pipeline and our strategy for manipulation and grasping. The solution was implemented using a Baxter robot equipped with additional sensors.

  • 4.
    Almeida, Diogo
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Ataer-Cansizoglu, Esra
    Wayfair, Boston, MA 02116, USA.
    Corcodel, Radu
    Mitsubishi Electric Research Labs (MERL), Cambridge, MA 02139, USA.
    Detection, Tracking and 3D Modeling of Objects with Sparse RGB-D SLAM and Interactive Perception (2019). In: IEEE-RAS International Conference on Humanoid Robots (Humanoids), 2019. Conference paper (Refereed)
    Abstract [en]

    We present an interactive perception system that enables an autonomous agent to deliberately interact with its environment and produce 3D object models. Our system verifies object hypotheses through interaction and simultaneously maintains 3D SLAM maps for each rigidly moving object hypothesis in the scene. We rely on depth-based segmentation and a multigroup registration scheme to classify features into various object maps. Our main contribution lies in the employment of a novel segment classification scheme that allows the system to handle incorrect object hypotheses, common in cluttered environments due to touching objects or occlusion. We start with a single map and initiate further object maps based on the outcome of depth segment classification. For each existing map, we select a segment to interact with and execute a manipulation primitive with the goal of disturbing it. If the resulting set of depth segments has at least one segment that did not follow the dominant motion pattern of its respective map, we split the map, thus yielding updated object hypotheses. We show qualitative results with a Fetch manipulator and objects of various shapes, which showcase the viability of the method for identifying and modelling multiple objects through repeated interactions.

  • 5.
    Almeida, Diogo
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Karayiannidis, Yiannis
    A Lyapunov-Based Approach to Exploit Asymmetries in Robotic Dual-Arm Task Resolution (2019). In: 58th IEEE Conference on Decision and Control (CDC), 2019. Conference paper (Refereed)
    Abstract [en]

    Dual-arm manipulation tasks can be prescribed to a robotic system in terms of desired absolute and relative motion of the robot’s end-effectors. These can represent, e.g., jointly carrying a rigid object or performing an assembly task. When both types of motion are to be executed concurrently, the symmetric distribution of the relative motion between arms prevents task conflicts. Conversely, an asymmetric solution to the relative motion task will result in conflicts with the absolute task. In this work, we address the problem of designing a control law for the absolute motion task together with updating the distribution of the relative task among arms. Through a set of numerical results, we contrast our approach with the classical symmetric distribution of the relative motion task to illustrate the advantages of our method.

  • 6.
    Almeida, Diogo
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Karayiannidis, Yiannis
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Asymmetric Dual-Arm Task Execution using an Extended Relative Jacobian (2019). In: The International Symposium on Robotics Research, 2019. Conference paper (Refereed)
    Abstract [en]

    Coordinated dual-arm manipulation tasks can be broadly characterized as possessing absolute and relative motion components. Relative motion tasks, in particular, are inherently redundant in the way they can be distributed between end-effectors. In this work, we analyse cooperative manipulation in terms of the asymmetric resolution of relative motion tasks. We discuss how existing approaches enable the asymmetric execution of a relative motion task, and show how an asymmetric relative motion space can be defined. We leverage this result to propose an extended relative Jacobian to model the cooperative system, which allows a user to set a concrete degree of asymmetry in the task execution. This is achieved without the need for prescribing an absolute motion target. Instead, the absolute motion remains available as a functional redundancy to the system. We illustrate the properties of our proposed Jacobian through numerical simulations of a novel differential Inverse Kinematics algorithm.
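    The asymmetry idea lends itself to a compact numerical sketch. The snippet below is an illustrative stand-in, not the paper's extended relative Jacobian: it distributes a desired relative velocity between the two arms by weighted least squares, where the assumed weights w_a and w_b set the degree of asymmetry and the toy Jacobians are random.

```python
import numpy as np

def weighted_joint_velocities(J_rel, v_rel, w_a, w_b, n_a, n_b):
    """Minimize dq^T W dq subject to J_rel @ dq = v_rel.
    Closed form: dq = W^-1 J^T (J W^-1 J^T)^-1 v_rel. A larger weight
    penalizes that arm's motion, shifting the relative task to the other arm."""
    W_inv = np.diag([1.0 / w_a] * n_a + [1.0 / w_b] * n_b)
    JW = J_rel @ W_inv
    return W_inv @ J_rel.T @ np.linalg.solve(JW @ J_rel.T, v_rel)

rng = np.random.default_rng(0)
J_a, J_b = rng.normal(size=(3, 6)), rng.normal(size=(3, 6))  # toy arm Jacobians
J_rel = np.hstack((-J_a, J_b))                 # classical relative Jacobian
v_rel = np.array([0.1, 0.0, -0.05])            # desired relative velocity

dq_sym = weighted_joint_velocities(J_rel, v_rel, 1.0, 1.0, 6, 6)
dq_asym = weighted_joint_velocities(J_rel, v_rel, 10.0, 1.0, 6, 6)
print(np.linalg.norm(dq_sym[:6]), np.linalg.norm(dq_sym[6:]))    # balanced split
print(np.linalg.norm(dq_asym[:6]), np.linalg.norm(dq_asym[6:]))  # arm A moves less
```

    With equal weights this reduces to the classical symmetric pseudoinverse solution; skewing the weights pushes the relative task onto one arm while still realizing the commanded relative velocity.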

  • 7.
    Almeida, Diogo
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Karayiannidis, Yiannis
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL. Dept. of Electrical Eng., Chalmers University of Technology.
    Cooperative Manipulation and Identification of a 2-DOF Articulated Object by a Dual-Arm Robot (2018). In: 2018 IEEE International Conference on Robotics and Automation (ICRA) / [ed] IEEE, 2018, pp. 5445-5451. Conference paper (Refereed)
    Abstract [en]

    In this work, we address the dual-arm manipulation of a two-degrees-of-freedom articulated object that consists of two rigid links. This can include a linkage constrained along two motion directions, or two objects in contact, where the contact imposes motion constraints. We formulate the problem as a cooperative task, which allows the employment of coordinated task space frameworks, thus enabling redundancy exploitation by adjusting how the task is shared by the robot arms. In addition, we propose a method that can estimate the joint location and the direction of the degrees of freedom, based on the contact forces and the motion constraints imposed by the object. Experimental results demonstrate the performance of the system in its ability to estimate the two degrees of freedom independently or simultaneously.

  • 8.
    Almeida, Diogo
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Karayiannidis, Yiannis
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL. Chalmers University of Technology.
    Folding Assembly by Means of Dual-Arm Robotic Manipulation (2016). In: 2016 IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2016, pp. 3987-3993. Conference paper (Refereed)
    Abstract [en]

    In this paper, we consider folding assembly as an assembly primitive suitable for dual-arm robotic assembly that can be integrated in a higher-level assembly strategy. The system composed of two pieces in contact is modelled as an articulated object, connected by a prismatic-revolute joint. Different grasping scenarios were considered in order to model the system, and a simple controller based on feedback linearisation is proposed, using force-torque measurements to compute the contact point kinematics. The folding assembly controller has been experimentally tested with two sample parts, in order to showcase folding assembly as a viable assembly primitive.

  • 9.
    Antonova, Rika
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma system, CAS. KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Kokic, Mia
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma system, CAS. KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Stork, Johannes A.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma system, CAS. KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma system, CAS. KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Global Search with Bernoulli Alternation Kernel for Task-oriented Grasping Informed by Simulation (2018). In: Proceedings of The 2nd Conference on Robot Learning, PMLR 87, 2018, pp. 641-650. Conference paper (Refereed)
    Abstract [en]

    We develop an approach that benefits from large simulated datasets and takes full advantage of the limited online data that is most relevant. We propose a variant of Bayesian optimization that alternates between using informed and uninformed kernels. With this Bernoulli Alternation Kernel we ensure that discrepancies between simulation and reality do not hinder adapting robot control policies online. The proposed approach is applied to a challenging real-world problem of task-oriented grasping with novel objects. Our further contribution is a neural network architecture and training pipeline that use experience from grasping objects in simulation to learn grasp stability scores. We learn task scores from a labeled dataset with a convolutional network, which is used to construct an informed kernel for our variant of Bayesian optimization. Experiments on an ABB YuMi robot with real sensor data demonstrate the success of our approach, despite the challenge of fulfilling task requirements and high uncertainty over physical properties of objects.
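    The alternation mechanism itself is simple to sketch. Below is a minimal, hedged illustration of a Bayesian-optimization loop that flips a coin each iteration to decide which kernel drives the surrogate; the Matern kernel merely stands in for the paper's simulation-informed kernel, and the 1-D objective is a toy.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern

def objective(x):
    # Toy objective standing in for real grasp outcomes.
    return -np.sin(3 * x) - x**2 + 0.7 * x

rng = np.random.default_rng(1)
X = rng.uniform(-1, 2, size=(3, 1))            # initial evaluations
y = objective(X).ravel()

uninformed = RBF(length_scale=0.5)
informed = Matern(length_scale=0.3, nu=2.5)    # stand-in for an informed kernel

for it in range(20):
    # Bernoulli alternation: choose which kernel to trust this iteration.
    kernel = informed if rng.random() < 0.5 else uninformed
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
    cand = rng.uniform(-1, 2, size=(256, 1))
    mu, sd = gp.predict(cand, return_std=True)
    x_next = cand[np.argmax(mu + 2.0 * sd)]    # UCB acquisition
    X = np.vstack((X, [x_next]))
    y = np.append(y, objective(x_next))

print("best x:", X[np.argmax(y)], "best y:", y.max())
```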

  • 10.
    Arnekvist, Isac
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Stork, Johannes A.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL. Center for Applied Autonomous Sensor Systems, Örebro University, Sweden.
    VPE: Variational Policy Embedding for Transfer Reinforcement Learning (2019). In: 2019 International Conference on Robotics and Automation (ICRA), Institute of Electrical and Electronics Engineers (IEEE), 2019, pp. 36-42. Conference paper (Refereed)
    Abstract [en]

    Reinforcement Learning methods are capable of solving complex problems, but resulting policies might perform poorly in environments that are even slightly different. In robotics especially, training and deployment conditions often vary and data collection is expensive, making retraining undesirable. Simulation training allows for feasible training times, but on the other hand suffers from a reality gap when applied in real-world settings. This raises the need for efficient adaptation of policies acting in new environments. We consider the problem of transferring knowledge within a family of similar Markov decision processes. We assume that Q-functions are generated by some low-dimensional latent variable. Given such a Q-function, we can find a master policy that can adapt given different values of this latent variable. Our method learns both the generative mapping and an approximate posterior of the latent variables, enabling identification of policies for new tasks by searching only in the latent space, rather than the space of all policies. The low-dimensional space and master policy found by our method enable policies to adapt quickly to new environments. We demonstrate the method on both a pendulum swing-up task in simulation, and for simulation-to-real transfer on a pushing task.
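    The practical payoff of the latent-variable view is that adaptation becomes a low-dimensional search. The sketch below illustrates this with the cross-entropy method over a 2-D latent space; evaluate_policy is a hypothetical stand-in for rolling out the master policy conditioned on a latent sample in the new environment.

```python
import numpy as np

def evaluate_policy(z):
    """Hypothetical stand-in: roll out the master policy conditioned on
    latent z and return the episode return. The optimum z* = (0.3, -0.6)
    is chosen arbitrarily for this demo."""
    return -np.sum((z - np.array([0.3, -0.6]))**2)

# Cross-entropy method over the latent space only, not over policy weights.
rng = np.random.default_rng(0)
mean, std = np.zeros(2), np.ones(2)
for it in range(30):
    zs = rng.normal(mean, std, size=(64, 2))
    returns = np.array([evaluate_policy(z) for z in zs])
    elites = zs[np.argsort(returns)[-8:]]      # keep the top 8 samples
    mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-3

print("identified latent:", mean)   # converges near (0.3, -0.6)
```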

  • 11.
    Baldassarre, Federico
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Azizpour, Hossein
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Explainability Techniques for Graph Convolutional Networks (2019). Conference paper (Refereed)
    Abstract [en]

    Graph Networks are used to make decisions in potentially complex scenarios, but it is usually not obvious how or why they make them. In this work, we study the explainability of Graph Network decisions using two main classes of techniques, gradient-based and decomposition-based, on a toy dataset and a chemistry task. Our study sets the ground for future development as well as application to real-world problems.

  • 12.
    Barbosa, Fernando S.
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Duberg, Daniel
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Jensfelt, Patric
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Tumova, Jana
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Guiding Autonomous Exploration with Signal Temporal Logic (2019). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 4, no. 4, pp. 3332-3339. Journal article (Refereed)
    Abstract [en]

    Algorithms for autonomous robotic exploration usually focus on optimizing time and coverage, often in a greedy fashion. However, obstacle inflation is conservative and might limit mapping capabilities and even prevent the robot from moving through narrow, important places. This letter proposes a method to influence the manner in which the robot moves in the environment by taking into consideration a user-defined spatial preference formulated in a fragment of signal temporal logic (STL). We propose to guide the motion planning toward minimizing the violation of such a preference through a cost function that integrates the quantitative semantics, i.e., robustness, of STL. To demonstrate the effectiveness of the proposed approach, we integrate it into the autonomous exploration planner (AEP). Results from simulations and real-world experiments are presented, highlighting the benefits of our approach.
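    The quantitative semantics the letter builds on can be sketched for the simplest fragment: the robustness of "always keep at least d_min clearance" is the worst-case margin along the trajectory, and its negative part gives a violation cost a planner can minimize. The function names below are illustrative, not from the paper's implementation.

```python
import numpy as np

def robustness_always_ge(signal, threshold):
    """Robustness of the STL formula G(signal >= threshold):
    the worst-case margin along the trace (negative if violated)."""
    return np.min(signal - threshold)

def violation_cost(trajectory, obstacle, d_min):
    """Cost term a planner could minimize: zero when the spatial
    preference holds, growing with the amount of violation."""
    dists = np.linalg.norm(trajectory - obstacle, axis=1)
    rho = robustness_always_ge(dists, d_min)
    return max(0.0, -rho)

traj = np.array([[0.0, 0.0], [0.5, 0.4], [1.0, 1.1]])
print(violation_cost(traj, obstacle=np.array([0.6, 0.5]), d_min=0.3))  # ~0.159
```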

  • 13.
    Barbosa, Fernando S.
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Lindemann, Lars
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Reglerteknik.
    Dimarogonas, Dimos V.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Reglerteknik.
    Tumova, Jana
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Integrated motion planning and control under metric interval temporal logic specifications (2019). In: 2019 18th European Control Conference, ECC 2019, Institute of Electrical and Electronics Engineers (IEEE), 2019, pp. 2042-2049, article id 8795925. Conference paper (Refereed)
    Abstract [en]

    This paper proposes an approach that combines motion planning and hybrid feedback control design in order to find and follow trajectories fulfilling a given complex mission involving time constraints. We use Metric Interval Temporal Logic (MITL) as a rich and rigorous formalism to specify such missions. The solution builds on three main steps: (i) using sampling-based motion planning methods and the untimed version of the mission specification in the form of a zone automaton, we find a sequence of waypoints in the workspace; (ii) based on the clock zones from the satisfying run on the zone automaton, we compute time-stamps at which these waypoints should be reached; and (iii) to control the system to connect two waypoints in the desired time, we design a low-level feedback controller leveraging time-varying Control Barrier Functions. Illustrative simulation results are included.

  • 14.
    Billard, Aude
    et al.
    Ecole Polytech Fed Lausanne, Learning Algorithms & Syst Lab, Lausanne, Switzerland..
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Trends and challenges in robot manipulation (2019). In: Science, ISSN 0036-8075, E-ISSN 1095-9203, Vol. 364, no. 6446, pp. 1149+. Review article (Refereed)
    Abstract [en]

    Dexterous manipulation is one of the primary goals in robotics. Robots with this capability could sort and package objects, chop vegetables, and fold clothes. As robots come to work side by side with humans, they must also become human-aware. Over the past decade, research has made strides toward these goals. Progress has come from advances in visual and haptic perception and in mechanics in the form of soft actuators that offer a natural compliance. Most notably, immense progress in machine learning has been leveraged to encapsulate models of uncertainty and to support improvements in adaptive and robust control. Open questions remain in terms of how to enable robots to deal with the most unpredictable agent of all, the human.

  • 15.
    Björklund, Linnea
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Knock on Wood: Does Material Choice Change the Social Perception of Robots? (2018). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This paper aims to understand whether there is a difference in how socially interactive robots are perceived based on the material they are constructed out of. Two studies to that end were performed: a pilot in a live setting and a main one online. Participants were asked to rate three versions of the same robot design, one built out of wood, one out of plastic, and one covered in fur. This was then used in the two studies to ascertain the participants' perception of competence, warmth, and discomfort, and the differences between the three materials. Statistically significant differences were found between the materials regarding the perception of warmth and discomfort.

  • 16.
    Blom, Fredrik
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Unsupervised Feature Extraction of Clothing Using Deep Convolutional Variational Autoencoders (2018). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    As online retail continues to grow, large amounts of valuable data, such as transaction and search history, and, specifically for fashion retail, similarly structured images of clothing, are generated. By using unsupervised learning, it is possible to tap into this almost unlimited supply of data. This thesis set out to determine to what extent generative models – in particular, deep convolutional variational autoencoders – can be used to automatically extract representative features from images of clothing in a completely unsupervised manner. A review of variations of the autoencoder, in terms of both reconstruction quality and the ability to generate new realistic samples, suggests that there exists an optimal size of the latent vector in relation to the image data complexity. Furthermore, by weighting the latent loss and generation loss in the loss function, it was possible to disentangle the learned features such that each feature captured a unique defining characteristic of clothing items (here t-shirts and tops).
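    The weighting described in the last sentence corresponds to a β-weighted VAE objective. A minimal PyTorch-style sketch follows, with assumed tensor shapes and an illustrative beta value; it is not the thesis' code.

```python
import torch
import torch.nn.functional as F

def weighted_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """VAE objective with a tunable weight on the latent (KL) term.

    beta > 1 pressures the encoder toward disentangled factors, at some
    cost in reconstruction quality; beta is a hyperparameter to sweep.
    """
    recon = F.mse_loss(x_recon, x, reduction="sum")               # generation loss
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # latent loss
    return recon + beta * kl

# Shapes only, to show the call: a batch of 8 flattened 64x64 images,
# a 16-dimensional latent space.
x = torch.rand(8, 64 * 64)
x_recon = torch.rand(8, 64 * 64)
mu, logvar = torch.zeros(8, 16), torch.zeros(8, 16)
print(weighted_vae_loss(x, x_recon, mu, logvar).item())
```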

  • 17.
    Bore, Nils
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Ekekrantz, Johan
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Jensfelt, Patric
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Folkesson, John
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Detection and Tracking of General Movable Objects in Large Three-Dimensional Maps (2019). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 35, no. 1, pp. 231-247. Journal article (Refereed)
    Abstract [en]

    This paper studies the problem of detection and tracking of general objects with semistatic dynamics observed by a mobile robot moving in a large environment. A key problem is that due to the environment scale, the robot can only observe a subset of the objects at any given time. Since some time passes between observations of objects in different places, the objects might be moved when the robot is not there. We propose a model for this movement in which the objects typically only move locally, but with some small probability they jump longer distances through what we call global motion. For filtering, we decompose the posterior over local and global movements into two linked processes. The posterior over the global movements and measurement associations is sampled, while we track the local movement analytically using Kalman filters. This novel filter is evaluated on point cloud data gathered autonomously by a mobile robot over an extended period of time. We show that tracking jumping objects is feasible, and that the proposed probabilistic treatment outperforms previous methods when applied to real world data. The key to efficient probabilistic tracking in this scenario is focused sampling of the object posteriors.
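    As a rough illustration of the analytic half of the filter, the sketch below tracks one object coordinate with a random-walk Kalman filter; the sampled treatment of global jumps and data association, which is the paper's main contribution, is omitted and only hinted at by the jump events in the toy simulation.

```python
import numpy as np

def kalman_step(mean, var, z, q=0.01, r=0.05):
    """Analytic local-motion tracking: random-walk prediction + update."""
    mean_pred, var_pred = mean, var + q      # predict: object drifts locally
    k = var_pred / (var_pred + r)            # Kalman gain
    return mean_pred + k * (z - mean_pred), (1 - k) * var_pred

rng = np.random.default_rng(3)
true_pos, mean, var = 0.0, 0.0, 1.0
for t in range(50):
    if rng.random() < 0.05:                  # rare global jump (sampled in the paper)
        true_pos += rng.normal(0, 2.0)
    true_pos += rng.normal(0, 0.1)           # local movement
    z = true_pos + rng.normal(0, 0.05)       # noisy observation
    mean, var = kalman_step(mean, var, z)    # analytic local tracking only

print(f"true {true_pos:.2f}, estimate {mean:.2f}")
```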

  • 18.
    Bore, Nils
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Torroba, Ignacio
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Folkesson, John
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Sparse Gaussian Process SLAM, Storage and Filtering for AUV Multibeam Bathymetry (2018). In: AUV 2018 - 2018 IEEE/OES Autonomous Underwater Vehicle Workshop, Proceedings, Institute of Electrical and Electronics Engineers Inc., 2018. Conference paper (Refereed)
    Abstract [en]

    With dead-reckoning from velocity sensors, AUVs may construct short-term, local bathymetry maps of the sea floor using multibeam sensors. However, the position estimate from dead-reckoning will include some drift that grows with time. In this work, we focus on long-term onboard storage of these local bathymetry maps, and the alignment of maps with respect to each other. We propose using Sparse Gaussian Processes for this purpose, and show that the representation has several advantages, including an intuitive alignment optimization, data compression, and sensor noise filtering. We demonstrate these three key capabilities on two real-world datasets.

  • 19.
    Broomé, Sofia
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Bech Gleerup, Karina
    Haubro Andersen, Pia
    Kjellström, Hedvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Tidigare Institutioner (före 2005), Numerisk analys och datalogi, NADA. KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Dynamics are important for the recognition of equine pain in video (2019). Conference paper (Refereed)
  • 20.
    Brucker, Manuel
    et al.
    German Aerosp Ctr DLR, Inst Robot & Mechatron, D-82234 Oberpfaffenhofen, Germany..
    Durner, Maximilian
    German Aerosp Ctr DLR, Inst Robot & Mechatron, D-82234 Oberpfaffenhofen, Germany..
    Ambrus, Rares
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL. KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma system, CAS.
    Marton, Zoltan Csaba
    German Aerosp Ctr DLR, Inst Robot & Mechatron, D-82234 Oberpfaffenhofen, Germany..
    Wendt, Axel
    Robert Bosch, Corp Res, St Joseph, MI USA.;Robert Bosch, Corp Res, Gerlingen, Germany..
    Jensfelt, Patric
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL. KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma system, CAS.
    Arras, Kai O.
    Robert Bosch, Corp Res, St Joseph, MI USA.;Robert Bosch, Corp Res, Gerlingen, Germany..
    Triebel, Rudolph
    German Aerosp Ctr DLR, Inst Robot & Mechatron, D-82234 Oberpfaffenhofen, Germany.;Tech Univ Munich, Dep Comp Sci, Munich, Germany..
    Semantic Labeling of Indoor Environments from 3D RGB Maps (2018). In: 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2018, pp. 1871-1878. Conference paper (Refereed)
    Abstract [en]

    We present an approach to automatically assign semantic labels to rooms reconstructed from 3D RGB maps of apartments. Evidence for the room types is generated using state-of-the-art deep-learning techniques for scene classification and object detection based on automatically generated virtual RGB views, as well as from a geometric analysis of the map's 3D structure. The evidence is merged in a conditional random field, using statistics mined from different datasets of indoor environments. We evaluate our approach qualitatively and quantitatively and compare it to related methods.

  • 21.
    Buda, Mateusz
    et al.
    Duke Univ, Dept Radiol, Sch Med, Durham, NC 27710 USA.;KTH Royal Inst Technol, Sch Elect Engn & Comp Sci, Stockholm, Sweden..
    Maki, Atsuto
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Mazurowski, Maciej A.
    Duke Univ, Dept Radiol, Sch Med, Durham, NC 27710 USA.;Duke Univ, Dept Elect & Comp Engn, Durham, NC USA..
    A systematic study of the class imbalance problem in convolutional neural networks (2018). In: Neural Networks, ISSN 0893-6080, E-ISSN 1879-2782, Vol. 106, pp. 249-259. Journal article (Refereed)
    Abstract [en]

    In this study, we systematically investigate the impact of class imbalance on the classification performance of convolutional neural networks (CNNs) and compare frequently used methods to address the issue. Class imbalance is a common problem that has been comprehensively studied in classical machine learning, yet very limited systematic research is available in the context of deep learning. In our study, we use three benchmark datasets of increasing complexity, MNIST, CIFAR-10 and ImageNet, to investigate the effects of imbalance on classification and perform an extensive comparison of several methods to address the issue: oversampling, undersampling, two-phase training, and thresholding that compensates for prior class probabilities. Our main evaluation metric is area under the receiver operating characteristic curve (ROC AUC) adjusted to multi-class tasks, since the overall accuracy metric is associated with notable difficulties in the context of imbalanced data. Based on results from our experiments we conclude that (i) the effect of class imbalance on classification performance is detrimental; (ii) the method of addressing class imbalance that emerged as dominant in almost all analyzed scenarios was oversampling; (iii) oversampling should be applied to the level that completely eliminates the imbalance, whereas the optimal undersampling ratio depends on the extent of imbalance; (iv) as opposed to some classical machine learning models, oversampling does not cause overfitting of CNNs; (v) thresholding should be applied to compensate for prior class probabilities when the overall number of properly classified cases is of interest.
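    Recommendation (iii) is easy to make concrete: replicate each class until it matches the largest one. A minimal sketch follows (an illustrative helper, not the paper's code).

```python
import numpy as np

def oversample_to_balance(X, y, seed=0):
    """Replicate examples of each class until all classes match the
    largest class, i.e., oversample to completely eliminate imbalance."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=target, replace=True)
        for c in classes
    ])
    rng.shuffle(idx)
    return X[idx], y[idx]

# A 1000:50 imbalance becomes 1000:1000 after oversampling.
X = np.vstack((np.random.randn(1000, 8), np.random.randn(50, 8) + 2.0))
y = np.array([0] * 1000 + [1] * 50)
Xb, yb = oversample_to_balance(X, y)
print(np.bincount(yb))   # [1000 1000]
```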

  • 22.
    Butepage, Judith
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Cruciani, Silvia
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Kokic, Mia
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Welle, Michael
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    From Visual Understanding to Complex Object Manipulation (2019). In: Annual Review of Control, Robotics, and Autonomous Systems, Vol. 2, pp. 161-179. Review article (Refereed)
    Abstract [en]

    Planning and executing object manipulation requires integrating multiple sensory and motor channels while acting under uncertainty and complying with task constraints. As the modern environment is tuned for human hands, designing robotic systems with similar manipulative capabilities is crucial. Research on robotic object manipulation is divided into smaller communities interested in, e.g., motion planning, grasp planning, sensorimotor learning, and tool use. However, few attempts have been made to combine these areas into holistic systems. In this review, we aim to unify the underlying mechanics of grasping and in-hand manipulation by focusing on the temporal aspects of manipulation, including visual perception, grasp planning and execution, and goal-directed manipulation. Inspired by human manipulation, we envision that an emphasis on the temporal integration of these processes opens the way for human-like object use by robots.

  • 23.
    Butepage, Judith
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Kjellström, Hedvig
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Anticipating many futures: Online human motion prediction and generation for human-robot interaction (2018). In: 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2018, pp. 4563-4570. Conference paper (Refereed)
    Abstract [en]

    Fluent and safe interactions of humans and robots require both partners to anticipate the other's actions. The bottleneck of most methods is the lack of an accurate model of natural human motion. In this work, we present a conditional variational autoencoder that is trained to predict a window of future human motion given a window of past frames. Using skeletal data obtained from RGB-D images, we show how this unsupervised approach can be used for online motion prediction for up to 1660 ms. Additionally, we demonstrate online target prediction within the first 300-500 ms after motion onset without the use of target-specific training data. The advantage of our probabilistic approach is the possibility to draw samples of possible future motion patterns. Finally, we investigate how movements and kinematic cues are represented on the learned low-dimensional manifold.

  • 24.
    Båberg, Fredrik
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ögren, Petter
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL. KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma system, CAS.
    Formation Obstacle Avoidance using RRT and Constraint Based Programming (2017). In: 2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), IEEE conference proceedings, 2017, article id 8088131. Conference paper (Refereed)
    Abstract [en]

    In this paper, we propose a new way of doing formation obstacle avoidance using a combination of Constraint Based Programming (CBP) and Rapidly Exploring Random Trees (RRTs). RRT is used to select waypoint nodes, and CBP is used to move the formation between those nodes, reactively rotating and translating the formation to pass the obstacles on the way. Thus, the CBP includes constraints for both formation keeping and obstacle avoidance, while striving to move the formation towards the next waypoint. The proposed approach is compared to a pure RRT approach where the motion between the RRT waypoints is done following linear interpolation trajectories, which are less computationally expensive than the CBP ones. The results of a number of challenging simulations show that the proposed approach is more efficient for scenarios with high obstacle densities.
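    A minimal sketch of the RRT half of the pipeline is given below: it grows a tree toward a goal among circular obstacles and traces back the waypoint sequence that a CBP controller would then track. The 2-D setting and all parameters are illustrative; the constraint-based formation controller itself is beyond a few lines.

```python
import numpy as np

rng = np.random.default_rng(2)
obstacles = [(np.array([0.5, 0.5]), 0.2)]          # (center, radius) pairs
start, goal, step = np.array([0.1, 0.1]), np.array([0.9, 0.9]), 0.08

def collision_free(p):
    return all(np.linalg.norm(p - c) > r for c, r in obstacles)

nodes, parent = [start], {0: None}
for _ in range(2000):
    # Goal bias: occasionally steer straight toward the goal.
    sample = goal if rng.random() < 0.1 else rng.uniform(0, 1, 2)
    near = min(range(len(nodes)), key=lambda i: np.linalg.norm(nodes[i] - sample))
    direction = sample - nodes[near]
    new = nodes[near] + step * direction / (np.linalg.norm(direction) + 1e-9)
    if collision_free(new):
        parent[len(nodes)] = near
        nodes.append(new)
        if np.linalg.norm(new - goal) < step:      # reached the goal region
            break

# Trace back the waypoint sequence the CBP controller would then follow.
i, path = len(nodes) - 1, []
while i is not None:
    path.append(nodes[i])
    i = parent[i]
print(len(path), "waypoints from goal back to start")
```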

  • 25.
    Bütepage, Judith
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Generative models for action generation and action understanding (2019). Doctoral thesis, comprising papers (Other academic)
    Abstract [en]

    The question of how to build intelligent machines raises the question of how to represent the world to enable intelligent behavior. In nature, this representation relies on the interplay between an organism's sensory input and motor input. Action-perception loops allow many complex behaviors to arise naturally. In this work, we take these sensorimotor contingencies as an inspiration to build robot systems that can autonomously interact with their environment and with humans. The goal is to pave the way for robot systems that can learn motor control in an unsupervised fashion and relate their own sensorimotor experience to observed human actions. By combining action generation and action understanding we hope to facilitate smooth and intuitive interaction between robots and humans in shared workspaces.

    To model robot sensorimotor contingencies and human behavior we employ generative models. Since generative models represent a joint distribution over relevant variables, they are flexible enough to cover the range of tasks that we are tackling here. Generative models can represent variables that originate from multiple modalities, model temporal dynamics, incorporate latent variables and represent uncertainty over any variable - all of which are features required to model sensorimotor contingencies. By using generative models, we can predict the temporal development of the variables in the future, which is important for intelligent action selection.

    We present two lines of work. Firstly, we focus on unsupervised learning of motor control with the help of sensorimotor contingencies. Based on Gaussian Process forward models we demonstrate how the robot can execute goal-directed actions with the help of planning techniques or reinforcement learning. Secondly, we present a number of approaches to model human activity, ranging from pure unsupervised motion prediction to including semantic action and affordance labels. Here we employ deep generative models, namely Variational Autoencoders, to model the 3D skeletal pose of humans over time and, if required, include semantic information. These two lines of work are then combined to implement physical human-robot interaction tasks.

    Our experiments focus on real-time applications, both when it comes to robot experiments and human activity modeling. Since many real-world scenarios do not have access to high-end sensors, we require our models to cope with uncertainty. Additional requirements are data-efficient learning, because of the wear and tear of the robot and human involvement, online employability, and operation under safety and compliance constraints. We demonstrate in our experiments that generative models of sensorimotor contingencies can satisfy these requirements.

  • 26.
    Bütepage, Judith
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Kjellström, Hedvig
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    A Probabilistic Semi-Supervised Approach to Multi-Task Human Activity Modeling. Manuscript (preprint) (Other academic)
    Abstract [en]

    Human behavior is a continuous stochastic spatio-temporal process which is governed by semantic actions and affordances as well as latent factors. Therefore, video-based human activity modeling is concerned with a number of tasks such as inferring current and future semantic labels, predicting future continuous observations, and imagining possible future label and feature sequences. In this paper we present a semi-supervised probabilistic deep latent variable model that can represent both discrete labels and continuous observations as well as latent dynamics over time. This allows the model to solve several tasks at once without explicit fine-tuning. We focus here on the tasks of action classification, detection, prediction and anticipation as well as motion prediction and synthesis based on 3D human activity data recorded with Kinect. We further extend the model to capture hierarchical label structure and to model the dependencies between multiple entities, such as a human and objects. Our experiments demonstrate that our principled approach to human activity modeling can be used to detect current and anticipate future semantic labels and to predict and synthesize future label and feature sequences. When comparing our model to state-of-the-art approaches, which are specifically designed for, e.g., action classification, we find that our probabilistic formulation outperforms or is comparable to these task-specific models.

  • 27.
    Caccamo, Sergio
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Enhancing geometric maps through environmental interactions (2018). Doctoral thesis, comprising papers (Other academic)
    Abstract [en]

    The deployment of rescue robots in real operations is becoming increasingly common thanks to recent advances in AI technologies and high-performance hardware. Rescue robots can now operate for extended periods of time, cover wider areas and process larger amounts of sensory information, making them considerably more useful during real life-threatening situations, including both natural and man-made disasters.

    In this thesis we present results of our research which focuses on investigating ways of enhancing visual perception for Unmanned Ground Vehicles (UGVs) through environmental interactions using different sensory systems, such as tactile sensors and wireless receivers.

    We argue that a geometric representation of the robot's surroundings built upon vision data only may not suffice in overcoming challenging scenarios, and show that robot interactions with the environment can provide a rich layer of new information that needs to be suitably represented and merged into the cognitive world model. Visual perception for mobile ground vehicles is one of the fundamental problems in rescue robotics. Phenomena such as rain, fog, darkness, dust, smoke and fire heavily influence the performance of visual sensors, and often result in highly noisy data, leading to unreliable or incomplete maps.

    We address this problem through a collection of studies and structure the thesis as follows. Firstly, we give an overview of the Search & Rescue (SAR) robotics field, and discuss scenarios, hardware and related scientific questions. Secondly, we focus on the problems of control and communication. Mobile robots require stable communication with the base station to exchange valuable information. Communication loss often presents a significant mission risk, and disconnected robots are either abandoned or autonomously try to back-trace their way to the base station. We show how non-visual environmental properties (e.g. the WiFi signal distribution) can be efficiently modeled using probabilistic active perception frameworks based on Gaussian Processes, and merged into geometric maps so as to facilitate the SAR mission. We then show how to use tactile perception to enhance mapping: implicit environmental properties, such as terrain deformability, are analyzed through strategic glances and touches and then mapped into probabilistic models. Lastly, we address the problem of reconstructing objects in the environment. We present a technique for simultaneous 3D reconstruction of static regions and rigidly moving objects in a scene that enables on-the-fly model generation.

    Although this thesis focuses mostly on rescue UGVs, the concepts presented can be applied to other mobile platforms that operate under similar circumstances. To make sure that the suggested methods work, we have put effort into the design of user interfaces and their evaluation in user studies.

  • 28.
    Caccamo, Sergio Salvatore
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Joint 3D Reconstruction of a Static Scene and Moving Objects (2017). In: Proceedings of the 2017 International Conference on 3D Vision (3DV’17), IEEE, 2017. Conference paper (Other academic)
  • 29.
    Carlsson, Stefan
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Azizpour, Hossein
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsvetenskap och beräkningsteknik (CST).
    Razavian, Ali Sharif
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Sullivan, Josephine
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Smith, Kevin
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Beräkningsvetenskap och beräkningsteknik (CST).
    The Preimage of Rectifier Network Activities (2017). In: International Conference on Learning Representations (ICLR), 2017. Conference paper (Refereed)
    Abstract [en]

    The preimage of the activity at a certain level of a deep network is the set of inputs that result in the same node activity. For fully connected multi-layer rectifier networks we demonstrate how to compute the preimages of activities at arbitrary levels from knowledge of the parameters in a deep rectifying network. If the preimage set of a certain activity in the network contains elements from more than one class, it means that these classes are irreversibly mixed. This implies that preimage sets, which are piecewise linear manifolds, are building blocks for describing the input manifolds of specific classes; i.e., all preimages should ideally be from the same class. We believe that the knowledge of how to compute preimages will be valuable in understanding the efficiency displayed by deep learning networks and could potentially be used in designing more efficient training algorithms.

  • 30.
    Carvalho, J. Frederico
    et al.
    KTH. KTH, CAS, RPL, Royal Inst Technol, Stockholm, Sweden..
    Vejdemo-Johansson, Mikael
    CUNY Coll Staten Isl, Math Dept, Staten Isl, NY 10314 USA.;CUNY, Grad Ctr, Comp Sci, New York, NY USA..
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL. KTH, CAS, RPL, Royal Inst Technol, Stockholm, Sweden..
    Pokorny, Florian T.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL. KTH, CAS, RPL, Royal Inst Technol, Stockholm, Sweden..
    Path Clustering with Homology Area (2018). In: 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2018, pp. 7346-7353. Conference paper (Refereed)
    Abstract [en]

    Path clustering has found many applications in recent years. Common approaches to this problem use aggregates of the distances between points to provide a measure of dissimilarity between paths, one which does not satisfy the triangle inequality. Furthermore, they do not take into account the topology of the space where the paths are embedded. To tackle this, we extend previous work in path clustering with relative homology by employing minimum homology area as a measure of distance between homologous paths in a triangulated mesh. Further, we show that the resulting distance satisfies the triangle inequality, and how we can exploit the properties of homology to reduce the amount of pairwise distance calculations necessary to cluster a set of paths. We further compare the output of our algorithm with that of DTW (dynamic time warping) on a toy dataset of paths, as well as on a dataset of real-world paths.

  • 31.
    Carvalho, Joao Frederico
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Vejdemo-Johansson, Mikael
    CUNY, Math Dept, Coll Staten Isl, New York, NY 10021 USA..
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Pokorny, Florian T.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    An algorithm for calculating top-dimensional bounding chains (2018). In: PeerJ Computer Science, ISSN 2376-5992, article id e153. Journal article (Refereed)
    Abstract [en]

    We describe the Coefficient-Flow algorithm for calculating the bounding chain of an (n-1)-boundary on an n-manifold-like simplicial complex S. We prove its correctness and show that it has a computational time complexity of O(|S^(n-1)|), where S^(n-1) is the set of (n-1)-faces of S. We estimate the big-O coefficient, which depends on the dimension of S and the implementation. We present an implementation, experimentally evaluate the complexity of our algorithm, and compare its performance with that of solving the underlying linear system.
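    The baseline the authors compare against, solving the underlying linear system, can be sketched on a toy complex: two triangles glued into a disc, with the bounding chain of the disc's boundary recovered by least squares. The construction below is illustrative and is not the Coefficient-Flow algorithm itself.

```python
import numpy as np
from itertools import combinations

# Two triangles glued along an edge form a disc; its boundary is the outer loop.
triangles = [(0, 1, 2), (0, 2, 3)]
edges = sorted({e for t in triangles for e in combinations(t, 2)})
edge_index = {e: i for i, e in enumerate(edges)}

# Boundary matrix D: rows = edges (1-faces), columns = triangles (2-faces),
# with the standard orientation d[a,b,c] = [b,c] - [a,c] + [a,b].
D = np.zeros((len(edges), len(triangles)))
for j, (a, b, c) in enumerate(triangles):
    D[edge_index[(b, c)], j] += 1
    D[edge_index[(a, c)], j] -= 1
    D[edge_index[(a, b)], j] += 1

# Target 1-boundary: the boundary of the whole disc (both triangles summed).
target = D @ np.ones(len(triangles))

# Linear-system baseline: solve D x = target for the bounding 2-chain x.
x, *_ = np.linalg.lstsq(D, target, rcond=None)
print("bounding chain coefficients per triangle:", np.round(x, 3))  # [1. 1.]
```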

  • 32.
    Chen, Xi
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Ghadirzadeh, Ali
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Folkesson, John
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Björkman, Mårten
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Jensfelt, Patric
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Deep Reinforcement Learning to Acquire Navigation Skills for Wheel-Legged Robots in Complex Environments (2018). In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018. Conference paper (Refereed)
    Abstract [en]

    Mobile robot navigation in complex and dynamic environments is a challenging but important problem. Reinforcement learning approaches fail to solve these tasks efficiently due to the reward sparsity, temporal complexity and high dimensionality of sensorimotor spaces inherent in such problems. We present a novel approach to train action policies to acquire navigation skills for wheel-legged robots using deep reinforcement learning. The policy maps height-map image observations to motor commands to navigate to a target position while avoiding obstacles. We propose to acquire the multifaceted navigation skill by learning and exploiting a number of manageable navigation behaviors. We also introduce a domain randomization technique to improve the versatility of the training samples. We demonstrate experimentally a significant improvement in terms of data efficiency, success rate, robustness against irrelevant sensory data, and the quality of the maneuvering skills.
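    The domain-randomization ingredient can be sketched independently of the learning loop: draw a fresh environment configuration for each training episode. The parameter names and ranges below are invented for illustration and do not reproduce the paper's actual randomization set.

```python
import numpy as np

def randomized_episode_config(rng):
    """Draw a fresh environment variation for one training episode.
    All parameters and ranges here are illustrative assumptions."""
    return {
        "step_height": rng.uniform(0.05, 0.25),      # m
        "friction": rng.uniform(0.6, 1.2),
        "sensor_noise_std": rng.uniform(0.0, 0.03),
        "obstacle_count": int(rng.integers(2, 10)),
    }

rng = np.random.default_rng(0)
for episode in range(3):
    cfg = randomized_episode_config(rng)
    print(f"episode {episode}: {cfg}")   # the policy trains on this variation
```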

  • 33.
    Colledanchise, Michele
    et al.
    Istituto Italiano di Tecnologia - IIT, Genoa, Italy.
    Almeida, Diogo
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Ögren, Petter
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Towards Blended Reactive Planning and Acting using Behavior Trees (2019). In: 2019 International Conference on Robotics and Automation (ICRA), IEEE Robotics and Automation Society, 2019, pp. 8839-8845. Conference paper (Refereed)
    Abstract [en]

    In this paper, we show how a planning algorithm can be used to automatically create and update a Behavior Tree (BT), controlling a robot in a dynamic environment. The planning part of the algorithm is based on the idea of back chaining. Starting from a goal condition, we iteratively select actions to achieve that goal, and if those actions have unmet preconditions, they are extended with actions to achieve them in the same way. The fact that BTs are inherently modular and reactive makes the proposed solution blend acting and planning in a way that enables the robot to effectively react to external disturbances. If an external agent undoes an action, the robot re-executes it without re-planning, and if an external agent helps the robot, it skips the corresponding actions, again without re-planning. We illustrate our approach in two different robotics scenarios.
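    The back-chaining idea can be sketched in a few lines: starting from the goal condition, wrap each condition in a fallback that either confirms it already holds or runs a sequence achieving its preconditions first. The action table and nested-dict tree below are illustrative stand-ins for a real BT implementation.

```python
import json

# Each achievable condition maps to the action that achieves it and the
# preconditions that action needs (all names are hypothetical).
actions = {
    "object_grasped": {"needs": ["object_reachable"], "action": "grasp"},
    "object_reachable": {"needs": [], "action": "move_to_object"},
    "object_at_goal": {"needs": ["object_grasped"], "action": "place"},
}

def back_chain(condition):
    """Fallback node: succeed if the condition already holds, otherwise
    run a sequence that first achieves all preconditions, then acts."""
    if condition not in actions:
        return condition                     # primitive condition check
    entry = actions[condition]
    sequence = [back_chain(pre) for pre in entry["needs"]] + [entry["action"]]
    return {"fallback": [condition, {"sequence": sequence}]}

print(json.dumps(back_chain("object_at_goal"), indent=2))
```

    Because every condition is re-checked each tick, an undone action simply fails its condition and gets re-executed, while an externally satisfied condition short-circuits its subtree, matching the reactivity described in the abstract.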

  • 34. Colledanchise, Michele
    et al.
    Parasuraman, Ramviyas Nattanmai
    Ögren, Petter
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Learning of Behavior Trees for Autonomous Agents (2018). In: IEEE Transactions on Games, ISSN 2475-1502. Article in journal (Refereed)
    Abstract [en]

    In this paper, we study the problem of automatically synthesizing a successful Behavior Tree (BT) in an a priori unknown dynamic environment. Starting with a given set of behaviors, a reward function, and sensing in terms of a set of binary conditions, the proposed algorithm incrementally learns a switching structure in terms of a BT that is able to handle the situations encountered. Exploiting the fact that BTs generalize And-Or trees and also provide very natural chromosome mappings for genetic programming, we combine the long-term performance of genetic programming with a greedy element and use the And-Or analogy to limit the size of the resulting structure. Finally, earlier results on BTs enable us to provide certain safety guarantees for the resulting system. Using the testing environment Mario AI, we compare our approach to alternative methods for learning BTs and Finite State Machines. The evaluation shows that the proposed approach generated solutions with better performance, and often fewer nodes, than the other two methods.
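
    A minimal sketch of the kind of evolutionary loop the abstract describes, combining genetic programming with a greedy (elitist) element, might look as follows; fitness, mutate, and crossover over BT chromosomes are assumed to be supplied, and this is not the paper's implementation:

        import random

        def evolve_bt(population, fitness, mutate, crossover,
                      generations=100, elite_frac=0.1):
            """Evolve a population of Behavior Trees; the best trees are kept
            unchanged each generation (the greedy element)."""
            for _ in range(generations):
                population.sort(key=fitness, reverse=True)
                n_elite = max(1, int(elite_frac * len(population)))
                next_gen = population[:n_elite]
                while len(next_gen) < len(population):
                    a, b = random.sample(population[:max(2, len(population) // 2)], 2)
                    next_gen.append(mutate(crossover(a, b)))
                population = next_gen
            return max(population, key=fitness)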

  • 35.
    Correia, Filipa
    et al.
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal.
    Mascarenhas, Samuel F.
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal.
    Gomes, Samuel
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal.
    Arriaga, Patricia
    CIS IUL, Inst Univ Lisboa ISCTE IUL, Lisbon, Portugal.
    Leite, Iolanda
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Prada, Rui
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal.
    Melo, Francisco S.
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal.
    Paiva, Ana
    Univ Lisbon, INESC ID, Inst Super Tecn, Lisbon, Portugal.
    Exploring Prosociality in Human-Robot Teams (2019). In: HRI '19: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction, IEEE, 2019, pp. 143-151. Conference paper (Refereed)
    Abstract [en]

    This paper explores the role of prosocial behaviour when people team up with robots in a collaborative game that presents a social dilemma similar to a public goods game. An experiment was conducted with the proposed game in which each participant joined a team with a prosocial robot and a selfish robot. During five rounds of the game, each player chose between contributing to the team goal (cooperate) or to their individual goal (defect). The prosociality level of the robots only affects their strategies for playing the game, as one always cooperates and the other always defects. We conducted a user study at the office of a large corporation with 70 participants, in which we manipulated the game result (winning or losing) in a between-subjects design. Results revealed two important considerations: (1) the prosocial robot was rated more positively in terms of its social attributes than the selfish robot, regardless of the game result; (2) the perception of competence, the attribution of responsibility (blame/credit), and the preference for a future partner revealed significant differences only in the losing condition. These results raise important considerations for the creation of robotic partners, the understanding of group dynamics and, from a more general perspective, the promotion of a prosocial society.

  • 36.
    Cruciani, Silvia
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Vision-Based In-Hand Manipulation with Limited Dexterity (2019). Doctoral thesis, comprising papers (Other academic)
    Abstract [en]

    In-hand manipulation is an action that allows for changing the grasp on an object without the need to release it. This action is an important component of the manipulation process and helps solve many tasks. Human hands are dexterous instruments suitable for moving an object inside the hand. However, it is not common for robots to be equipped with dexterous hands, owing to the many challenges in control and mechanical design. In fact, robots are frequently equipped with simple parallel grippers, which are robust but lack dexterity. This thesis focuses on achieving in-hand manipulation with limited dexterity. The proposed solutions are based only on visual input, without the need for additional sensing capabilities in the robot's hand.

    Extrinsic dexterity allows simple grippers to execute in-hand manipulation by exploiting external supports. This thesis introduces new methods for solving in-hand manipulation using inertial forces, controlled friction, and external pushes as additional supports to enhance the robot's manipulation capabilities. Pivoting is seen as a possible solution for simple grasp changes: two methods that cope with inexact friction modeling are reported, and pivoting is successfully integrated into an overall manipulation task. For large-scale in-hand manipulation, the Dexterous Manipulation Graph is introduced as a novel representation of the object. This graph is a useful tool for planning how to change a certain grasp via in-hand manipulation. It can also be exploited to combine in-hand manipulation and regrasping to broaden the possibilities for adjusting the grasp. In addition, this method is extended to achieve in-hand manipulation even for objects of unknown shape. To execute the planned object motions within the gripper, dual-arm robots are exploited to compensate for the poor dexterity of parallel grippers: the second arm is seen as an additional support that helps in pushing and holding the object to successfully adjust the grasp configuration.

    This thesis presents examples of successful executions of tasks where in-hand manipulation is a fundamental step in the manipulation process, showing how the proposed methods are a viable solution for achieving in-hand manipulation with limited dexterity.

  • 37.
    Cruciani, Silvia
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Almeida, Diogo
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL. KTH.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Karayiannidis, Yiannis
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Discrete Bimanual Manipulation for Wrench Balancing. Manuscript (preprint) (Other academic)
    Abstract [en]

    Dual-arm robots can overcome the grasping-force and payload limitations of a single arm by jointly grasping an object. However, if the mass distribution of the grasped object is uneven, each arm will experience a different wrench, which can exceed its payload limits. In this work, we consider the problem of balancing the wrenches experienced by a dual-arm robot grasping a rigid tray. The distribution of wrenches among the robot arms changes as objects are placed on the tray. We present an approach to reducing the wrench imbalance among the arms through discrete bimanual manipulation. Our approach is based on sequential sliding motions of the grasp points on the surface of the object to attain a more balanced configuration. This is achieved in a discrete manner, one arm at a time, to minimize the potential for undesirable object motion during execution. We validate our modeling approach and system design through a set of robot experiments.
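
    To make the imbalance concrete, here is a toy one-dimensional statics example (entirely our illustration, not the paper's model): with two grasps on a rigid tray, force and torque balance split each object's weight between the arms by the lever rule, so sliding a grasp point changes each arm's load.

        def arm_loads(grasp_x, objects):
            """grasp_x -- (xl, xr): grasp positions along the tray axis
            objects -- list of (weight, position) pairs on the tray."""
            xl, xr = grasp_x
            fl = fr = 0.0
            for w, x in objects:
                share_r = (x - xl) / (xr - xl)  # lever rule: closer arm carries more
                fl += w * (1.0 - share_r)
                fr += w * share_r
            return fl, fr

        # A 10 N object near the right grasp loads that arm far more;
        # sliding the right grasp towards the object would rebalance it.
        print(arm_loads((0.0, 1.0), [(10.0, 0.8)]))  # -> (2.0, 8.0)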

  • 38.
    Cruciani, Silvia
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Hang, Kaiyu
    Yale University.
    Smith, Christian
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Dual-Arm In-Hand Manipulation Using Visual Feedback (2019). Conference paper (Refereed)
    Abstract [en]

    In this work, we address the problem of executing in-hand manipulation based on visual input. Given an initial grasp, the robot has to change its grasp configuration without releasing the object. We propose a method for in-hand manipulation planning and execution based on information about the object's shape, using a dual-arm robot. From the available information on the object, which can be a complete point cloud or only partial data, our method plans a sequence of rotations and translations to reconfigure the object's pose. This sequence is executed using non-prehensile pushes defined as relative motions between the two robot arms.

  • 39.
    Cruciani, Silvia
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Yin, Hang
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    In-Hand Manipulation of Objects with Unknown Shapes. Manuscript (preprint) (Other academic)
    Abstract [en]

    This work addresses the problem of changing grasp configurations on objects of unknown shape through in-hand manipulation. Our approach leverages shape priors, learned as deep generative models, to infer novel object shapes from partial visual sensing. The Dexterous Manipulation Graph method is extended to build upon incremental data and account for estimation uncertainty when searching for a sequence of manipulation actions. We show that our approach successfully solves in-hand manipulation tasks with unknown objects, and demonstrate the validity of these solutions with robot experiments.

  • 40.
    Cruciani, Silvia
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Smith, Christian
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Integrating Path Planning and Pivoting (2018). In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) / [ed] Maciejewski, AA Okamura, A Bicchi, A Stachniss, C Song, DZ Lee, DH Chaumette, F Ding, H Li, JS Wen, J Roberts, J Masamune, K Chong, NY Amato, N Tsagwarakis, N Rocco, P Asfour, T Chung, WK Yasuyoshi, Y Sun, Y Maciekeski, T Althoefer, K AndradeCetto, J Chung, WK Demircan, E Dias, J Fraisse, P Gross, R Harada, H Hasegawa, Y Hayashibe, M Kiguchi, K Kim, K Kroeger, T Li, Y Ma, S Mochiyama, H Monje, CA Rekleitis, I Roberts, R Stulp, F Tsai, CHD Zollo, L, IEEE, 2018, pp. 6601-6608. Conference paper (Refereed)
    Abstract [en]

    In this work we propose a method for integrating motion planning and in-hand manipulation. Commonly addressed as a step separate from the final execution, in-hand manipulation allows the robot to reorient an object within the end-effector for the successful completion of the goal task. Jointly repositioning the object and moving the manipulator towards its desired final pose saves execution time and introduces more flexibility into the system. We address this problem using a pivoting strategy (i.e., in-hand rotation) for repositioning the object, and we integrate this strategy with a path planner for the execution of a complex task. The method is applied on a Baxter robot, and its efficacy is shown by experimental results.

  • 41.
    Cruciani, Silvia
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Smith, Christian
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Hang, Kaiyu
    Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Hong Kong, Peoples R China; Hong Kong Univ Sci & Technol, Inst Adv Study, Hong Kong, Peoples R China.
    Dexterous Manipulation Graphs (2018). In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) / [ed] Maciejewski, AA Okamura, A Bicchi, A Stachniss, C Song, DZ Lee, DH Chaumette, F Ding, H Li, JS Wen, J Roberts, J Masamune, K Chong, NY Amato, N Tsagwarakis, N Rocco, P Asfour, T Chung, WK Yasuyoshi, Y Sun, Y Maciekeski, T Althoefer, K AndradeCetto, J Chung, WK Demircan, E Dias, J Fraisse, P Gross, R Harada, H Hasegawa, Y Hayashibe, M Kiguchi, K Kim, K Kroeger, T Li, Y Ma, S Mochiyama, H Monje, CA Rekleitis, I Roberts, R Stulp, F Tsai, CHD Zollo, L, IEEE, 2018, pp. 2040-2047. Conference paper (Refereed)
    Abstract [en]

    We propose the Dexterous Manipulation Graph as a tool to address in-hand manipulation and reposition an object inside a robot's end-effector. This graph is used to plan a sequence of manipulation primitives so as to bring the object to the desired end pose. This sequence of primitives is translated into motions of the robot to move the object held by the end-effector. We use a dual-arm robot with parallel grippers to test our method on a real system and show successful planning and execution of in-hand manipulation.
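
    Once the graph is built, planning a regrasp reduces to standard graph search. A minimal sketch (the node and edge representation is our own assumption, not the paper's data structure):

        import heapq
        from itertools import count

        def plan_regrasp(graph, start, goal):
            """Dijkstra over a manipulation graph: nodes are grasp
            configurations, edges are (neighbour, cost) manipulation
            primitives such as pushes."""
            tie = count()  # tie-breaker so the heap never compares nodes/paths
            frontier = [(0.0, next(tie), start, [])]
            visited = set()
            while frontier:
                cost, _, node, path = heapq.heappop(frontier)
                if node in visited:
                    continue
                visited.add(node)
                path = path + [node]
                if node == goal:
                    return cost, path   # grasp sequence -> primitive sequence
                for neighbour, edge_cost in graph.get(node, []):
                    if neighbour not in visited:
                        heapq.heappush(
                            frontier,
                            (cost + edge_cost, next(tie), neighbour, path))
            return None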

  • 42.
    Cruciani, Silvia
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL. KTH Royal Inst Technol, Div Robot Percept & Learning, EECS, S-11428 Stockholm, Sweden.
    Sundaralingam, Balakumar
    Univ Utah, Robot Ctr, Salt Lake City, UT 84112 USA; Univ Utah, Sch Comp, Salt Lake City, UT 84112 USA.
    Hang, Kaiyu
    Yale Univ, Dept Mech Engn & Mat Sci, New Haven, CT 06520 USA.
    Kumar, Vikash
    Google AI, San Francisco, CA 94110 USA.
    Hermans, Tucker
    Univ Utah, Robot Ctr, Salt Lake City, UT 84112 USA; Univ Utah, Sch Comp, Salt Lake City, UT 84112 USA; NVIDIA Res, Santa Clara, CA USA.
    Kragic, Danica
    KTH, Tidigare Institutioner (före 2005), Numerisk analys och datalogi, NADA. KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL. KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma system, CAS. KTH Royal Inst Technol, Div Robot Percept & Learning, EECS, S-11428 Stockholm, Sweden.
    Benchmarking In-Hand Manipulation (2020). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 5, no. 2, pp. 588-595. Article in journal (Refereed)
    Abstract [en]

    The purpose of this benchmark is to evaluate the planning and control aspects of robotic in-hand manipulation systems. The goal is to assess the system's ability to change the pose of a hand-held object by using the fingers, the environment, or a combination of both. Given an object surface mesh from the YCB dataset, we provide examples of initial and goal states (i.e., static object poses and fingertip locations) for various in-hand manipulation tasks. We further propose metrics that measure the error in reaching the goal state from a specific initial state, which, when aggregated across all tasks, also serve as a measure of the system's in-hand manipulation capability. We provide supporting software, task examples, and evaluation results associated with the benchmark.
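
    A plausible form of such a goal-reaching metric, shown only for illustration (the benchmark's exact definitions are given in the paper and its supporting software):

        import numpy as np

        def pose_error(goal_pos, pos, goal_quat, quat):
            """Translational error plus the geodesic angle (radians) between
            unit quaternions; mean errors over all tasks would then give an
            aggregate in-hand manipulation score."""
            t_err = np.linalg.norm(np.asarray(goal_pos) - np.asarray(pos))
            d = abs(float(np.dot(goal_quat, quat)))
            r_err = 2.0 * np.arccos(np.clip(d, 0.0, 1.0))
            return t_err, r_err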

  • 43.
    Cruciani, Silvia
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Yin, Hang
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    In-Hand Manipulation of Objects with Unknown Shapes. Manuscript (preprint) (Other academic)
    Abstract [en]

    This work addresses the problem of changing grasp configurations on objects of unknown shape through in-hand manipulation. Our approach leverages shape priors, learned as deep generative models, to infer novel object shapes from partial visual sensing. The Dexterous Manipulation Graph method is extended to build upon incremental data and account for estimation uncertainty when searching for a sequence of manipulation actions. We show that our approach successfully solves in-hand manipulation tasks with unknown objects, and demonstrate the validity of these solutions with robot experiments.

  • 44. Dembrower, K.
    et al.
    Liu, Yue
    KTH, Centra, Science for Life Laboratory, SciLifeLab.
    Azizpour, Hossein
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Eklund, M.
    Smith, Kevin
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Datavetenskap, Beräkningsvetenskap och beräkningsteknik (CST). KTH, Centra, Science for Life Laboratory, SciLifeLab.
    Lindholm, P.
    Strand, F.
    Comparison of a deep learning risk score and standard mammographic density score for breast cancer risk prediction (2020). In: Radiology, ISSN 0033-8419, E-ISSN 1527-1315, Vol. 294, no. 2, pp. 265-272. Article in journal (Refereed)
    Abstract [en]

    Background: Most risk prediction models for breast cancer are based on questionnaires and mammographic density assessments. By training a deep neural network, further information in the mammographic images can be considered. Purpose: To develop a risk score that is associated with future breast cancer and compare it with density-based models. Materials and Methods: In this retrospective study, all women aged 40-74 years within the Karolinska University Hospital uptake area in whom breast cancer was diagnosed in 2013-2014 were included, along with healthy control subjects. Network development was based on cases diagnosed from 2008 to 2012. The deep learning (DL) risk score, dense area, and percentage density were calculated for the earliest available digital mammographic examination for each woman. Logistic regression models were fitted to determine the association with subsequent breast cancer. False-negative rates were obtained for the DL risk score, age-adjusted dense area, and age-adjusted percentage density. Results: A total of 2283 women, 278 of whom were later diagnosed with breast cancer, were evaluated. The age at mammography (mean, 55.7 years vs 54.6 years; P < .001), the dense area (mean, 38.2 cm² vs 34.2 cm²; P < .001), and the percentage density (mean, 25.6% vs 24.0%; P < .001) were higher among women diagnosed with breast cancer than in those without a breast cancer diagnosis. The odds ratios and areas under the receiver operating characteristic curve (AUCs) were higher for the age-adjusted DL risk score than for dense area and percentage density: 1.56 (95% confidence interval [CI]: 1.48, 1.64; AUC, 0.65), 1.31 (95% CI: 1.24, 1.38; AUC, 0.60), and 1.18 (95% CI: 1.11, 1.25; AUC, 0.57), respectively (P < .001 for AUC). The false-negative rate was lower: 31% (95% CI: 29%, 34%), 36% (95% CI: 33%, 39%; P = .006), and 39% (95% CI: 37%, 42%; P < .001); this difference was most pronounced for more aggressive cancers. Conclusion: Compared with density-based models, a deep neural network can more accurately predict which women are at risk for future breast cancer, with a lower false-negative rate for more aggressive cancers.
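
    For readers unfamiliar with the analysis pattern, associating a single risk score with an outcome via logistic regression and reporting the odds ratio and AUC can be sketched as follows (synthetic data only; this is not the study's data or code):

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        risk_score = rng.normal(size=1000)                   # stand-in scores
        p = 1.0 / (1.0 + np.exp(-(0.5 * risk_score - 2.0)))  # synthetic risk
        cancer = rng.binomial(1, p)                          # synthetic outcome

        X = risk_score.reshape(-1, 1)
        model = LogisticRegression().fit(X, cancer)
        auc = roc_auc_score(cancer, model.predict_proba(X)[:, 1])
        print(f"odds ratio per unit score: {np.exp(model.coef_[0, 0]):.2f}, "
              f"AUC: {auc:.2f}")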

  • 45.
    Djikic, Addi
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Segmentation and Depth Estimation of Urban Road Using Monocular Camera and Convolutional Neural Networks (2018). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Deep learning for safe autonomous transport is rapidly emerging. Fast and robust perception for autonomous vehicles will be crucial for future navigation in urban areas with high traffic and human interplay.

    Previous work focuses on extracting full-image depth maps or on finding specific road features such as lanes. However, in urban environments lanes are not always present, and sensors such as LiDAR provide only a rather sparse depth perception of the road from their 3D point clouds, at the cost of demanding algorithmic approaches.

    In this thesis we derive a novel convolutional neural network that we call AutoNet. It is designed as an encoder-decoder network for pixel-wise depth estimation of the drivable free space of an urban road, using only a monocular camera, handled as a supervised regression problem. AutoNet is also constructed as a classification network that solely classifies and segments the drivable free space in real time with monocular vision, handled as a supervised classification problem, which proves to be a simpler and more robust solution than the regression approach.

    We also implement the state-of-the-art neural network ENet for comparison, which is designed for fast real-time semantic segmentation and fast inference. The evaluation shows that AutoNet outperforms ENet on every performance metric but is slower in terms of frame rate. However, optimization techniques are proposed as future work for increasing the frame rate of the network while maintaining its robustness and performance.

    All training and evaluation are done on the Cityscapes dataset. New ground-truth labels for road depth perception are created for training with a novel approach that fuses pre-computed depth maps with semantic labels. Data collection is conducted with a Scania vehicle, mounted with a monocular camera, to test the final derived models.

    The proposed AutoNet shows promising state-of-the-art performance in road depth estimation as well as road classification.
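
    As a schematic of the encoder-decoder pattern described above, a toy PyTorch stand-in is sketched below; AutoNet's actual architecture, channel counts, and losses differ:

        import torch
        import torch.nn as nn

        class TinyEncoderDecoder(nn.Module):
            """Downsampling convolutions followed by upsampling back to a
            per-pixel prediction (1 output channel = depth regression)."""
            def __init__(self, out_channels=1):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                )
                self.decoder = nn.Sequential(
                    nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(16, out_channels, 4, stride=2, padding=1),
                )

            def forward(self, x):
                return self.decoder(self.encoder(x))

        depth = TinyEncoderDecoder()(torch.randn(1, 3, 128, 256))  # (1, 1, 128, 256)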

  • 46.
    Englesson, Erik
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Azizpour, Hossein
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Efficient Evaluation-Time Uncertainty Estimation by Improved Distillation (2019). Conference paper (Refereed)
  • 47.
    Ericson, Ludvig
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Flying High: Deep Imitation Learning of Optimal Control for Unmanned Aerial Vehicles (2018). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Optimal control for multicopters is difficult, in part because of the limited onboard processing power and the inherent instability of the platform. Deep imitation learning is a method for approximating an expert control policy with a neural network, and it has the potential to improve control for multicopters. We investigate the performance and reliability of deep imitation learning with trajectory optimization as the expert policy, by first defining a dynamics model for multicopters and then applying a trajectory optimization algorithm to it. Our investigation shows that the network architecture plays an important role in the characteristics of both the learning process and the resulting control policy, and that, in particular, trajectory optimization can be leveraged to improve convergence times for imitation learning. Finally, we identify some limitations and future areas of study and development for the technology.
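
    The training loop for this kind of imitation learning can be sketched roughly as follows, in the style of dataset aggregation (DAgger); the environment and policy.fit interfaces are our own assumptions, and the thesis's exact procedure may differ:

        def imitation_learn(expert, policy, env, iterations=10, horizon=200):
            """Roll out the current policy, label the visited states with the
            trajectory-optimization expert, and retrain on all data so far."""
            states, actions = [], []
            for _ in range(iterations):
                s = env.reset()
                for _ in range(horizon):
                    states.append(s)
                    actions.append(expert(s))      # expert label for this state
                    s, done = env.step(policy(s))  # but act with the learner
                    if done:
                        break
                policy.fit(states, actions)        # aggregate-and-retrain
            return policy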

  • 48.
    Eriksson, Sara
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Medieteknik och interaktionsdesign, MID.
    Unander-Scharin, Åsa
    Luleå University of Technology.
    Trichon, Vincent
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Medieteknik och interaktionsdesign, MID.
    Unander-Scharin, Carl
    Karlstad University.
    Kjellström, Hedvig
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Höök, Kristina
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Medieteknik och interaktionsdesign, MID.
    Dancing with Drones: Crafting Novel Artistic Expressions through Intercorporeality (2019). In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2019, pp. 617:1-617:12. Conference paper (Refereed)
  • 49.
    Garcia-Camacho, Irene
    et al.
    CSIC UPC, Inst Robot & Informat Ind, Barcelona 08902, Spain.
    Lippi, Martina
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Welle, Michael C.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Yin, Hang
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Antonova, Rika
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Varava, Anastasiia
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Borras, Julia
    CSIC UPC, Inst Robot & Informat Ind, Barcelona 08902, Spain.
    Torras, Carme
    CSIC UPC, Inst Robot & Informat Ind, Barcelona 08902, Spain.
    Marino, Alessandro
    Univ Cassino & Southern Lazio, I-03043 Cassino, Italy.
    Alenya, Guillem
    CSIC UPC, Inst Robot & Informat Ind, Barcelona 08902, Spain.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Benchmarking Bimanual Cloth Manipulation (2020). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 5, no. 2, pp. 1111-1118. Article in journal (Refereed)
    Abstract [en]

    Cloth manipulation is a challenging task that, despite its importance, has received relatively little attention compared to rigid object manipulation. In this letter, we provide three benchmarks for evaluation and comparison of different approaches towards three basic tasks in cloth manipulation: spreading a tablecloth over a table, folding a towel, and dressing. The tasks can be executed on any bimanual robotic platform and the objects involved in the tasks are standardized and easy to acquire. We provide several complexity levels for each task, and describe the quality measures to evaluate task execution. Furthermore, we provide baseline solutions for all the tasks and evaluate them according to the proposed metrics.

  • 50.
    Ghadirzadeh, Ali
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Sensorimotor Robot Policy Training using Reinforcement Learning (2018). Doctoral thesis, comprising papers (Other academic)
    Abstract [en]

    Robots are becoming more ubiquitous in our society and are taking over many tasks that were previously considered human hallmarks. Many of these tasks, e.g., autonomously driving a car, collaborating with humans in dynamic and changing working conditions, and performing household chores, require human-level intelligence to perceive the world and to act appropriately. In this thesis, we pursue a different approach compared to classical methods that often construct a robot controller based on the perception-then-action paradigm. We devise robotic action-selection policies by considering action-selection and perception processes as being intertwined, emphasizing that perception comes prior to action and that action is key to perception. The main hypothesis is that complex robotic behaviors come as the result of mastering sensorimotor contingencies (SMCs), i.e., regularities between motor actions and associated changes in sensory observations, where SMCs can be seen as building blocks of skillful behaviors. We elaborate and investigate this hypothesis through the deliberate design of frameworks which enable policy training based merely on data experienced by a robot, without the intervention of human experts for analytical modeling or calibration. In such circumstances, action policies can be obtained with the reinforcement learning (RL) paradigm by making exploratory action decisions and reinforcing patterns of SMCs that lead to reward events for a given task. However, the dimensionality of sensorimotor spaces, the complex dynamics of physical tasks, the sparseness of reward events, the limited amount of data from real-robot experiments, the ambiguities of crediting past decisions, and the safety issues that arise from the exploratory actions of a physical robot pose challenges to obtaining a policy based on data-driven methods alone. In this thesis, we introduce our contributions to dealing with the aforementioned issues by devising learning frameworks which endow a robot with the ability to integrate sensorimotor data to obtain action-selection policies. The effectiveness of the proposed frameworks is demonstrated by evaluating the methods on a number of real robotic tasks, illustrating their suitability for acquiring different skills and for making sequential action decisions in high-dimensional sensorimotor spaces with limited data and sparse rewards.
