  • 1. Abbeloos, W.
    et al.
    Caccamo, Sergio
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Ataer-Cansizoglu, E.
    Taguchi, Y.
    Feng, C.
    Lee, T. -Y
    Detecting and Grouping Identical Objects for Region Proposal and Classification (2017). In: 2017 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, IEEE Computer Society, 2017, Vol. 2017, pp. 501-502, article id 8014810. Conference paper (Refereed)
    Abstract [en]

    Often multiple instances of an object occur in the same scene, for example in a warehouse. Unsupervised multi-instance object discovery algorithms are able to detect and identify such objects. We use such an algorithm to provide object proposals to a convolutional neural network (CNN) based classifier. This results in fewer regions to evaluate, compared to traditional region proposal algorithms. Additionally, it enables using the joint probability of multiple instances of an object, resulting in improved classification accuracy. The proposed technique can also split a single class into multiple sub-classes corresponding to the different object types, enabling hierarchical classification.
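The joint-probability idea mentioned above can be illustrated with a small sketch: if several regions are believed to show instances of the same object, their per-region class probabilities can be fused, for example by averaging log-probabilities. This is an illustrative sketch under that assumption, not the authors' implementation; `fuse_instance_scores` and the example scores are hypothetical.

```python
import numpy as np

def fuse_instance_scores(probs):
    """Fuse per-region class probabilities for regions believed to show the
    same object, by averaging log-probabilities (a geometric mean)."""
    probs = np.asarray(probs, dtype=float)        # shape: (n_regions, n_classes)
    log_avg = np.log(probs + 1e-12).mean(axis=0)  # average in log space
    fused = np.exp(log_avg)
    return fused / fused.sum()                    # renormalize to a distribution

# Three noisy views of the same (hypothetical) two-class object.
fused = fuse_instance_scores([[0.6, 0.4], [0.7, 0.3], [0.55, 0.45]])
```

Fusing several views this way tends to sharpen the class estimate relative to any single region's score.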

  • 2.
    Abdulaziz Ali Haseeb, Mohamed
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Passive gesture recognition on unmodified smartphones using Wi-Fi RSSI (2017). Independent thesis at advanced level (Master's degree), 20 credits / 30 HE credits. Student thesis (Degree project)
    Abstract [en]

    Smartphones are today carried by hundreds of millions of people around the world and are used to perform a wide variety of tasks, such as basic communication, internet browsing and online shopping. Due to limitations in size and energy storage, however, human-phone interfaces are largely confined to relatively small screens and simple keypads.

    Industry and the research community are working to find ways to improve and broaden these interfaces, either by using existing resources such as microphones, cameras and inertial sensors, or by introducing new specialized sensors into the phones, such as compact radar units for gesture recognition.

    The low power requirements of radio-frequency (RF) signals inspired us to investigate whether they could be used to recognize gestures and activities in the vicinity of phones. This report presents a solution for recognizing gestures using a recurrent neural network (RNN). Unlike other Wi-Fi-based solutions, it requires no modification of either hardware or operating system, and recognition is performed without interfering with the normal operation of other applications on the phone.

    The developed solution achieves an average accuracy of 78% for the detection and classification of three different hand gestures, across a number of different phone and Wi-Fi transmitter configurations. The report also includes an analysis of several properties of the proposed solution, as well as suggestions for future work.

  • 3. Agarwal, P.
    et al.
    Al Moubayed, Samer
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Alspach, A.
    Kim, J.
    Carter, E. J.
    Lehman, J. F.
    Yamane, K.
    Imitating human movement with teleoperated robotic head (2016). In: 25th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2016, IEEE, 2016, pp. 630-637. Conference paper (Refereed)
    Abstract [en]

    Effective teleoperation requires real-time control of a remote robotic system. In this work, we develop a controller for realizing smooth and accurate motion of a robotic head with application to a teleoperation system for the Furhat robot head [1], which we call TeleFurhat. The controller uses the head motion of an operator measured by a Microsoft Kinect 2 sensor as reference and applies a processing framework to condition and render the motion on the robot head. The processing framework includes a pre-filter based on a moving average filter, a neural network-based model for improving the accuracy of the raw pose measurements of Kinect, and a constrained-state Kalman filter that uses a minimum jerk model to smooth motion trajectories and limit the magnitude of changes in position, velocity, and acceleration. Our results demonstrate that the robot can reproduce the human head motion in real time with a latency of approximately 100 to 170 ms while operating within its physical limits. Furthermore, viewers prefer our new method over rendering the raw pose data from Kinect.
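The moving-average pre-filter named in the abstract above can be sketched minimally as follows. The window size and sample values are illustrative assumptions, not details taken from the paper, which additionally applies a neural-network correction and a constrained-state Kalman filter downstream.

```python
from collections import deque

class MovingAverageFilter:
    """Simple causal moving-average pre-filter for one scalar pose channel."""
    def __init__(self, window=3):
        self.buf = deque(maxlen=window)  # keeps only the last `window` samples

    def update(self, x):
        self.buf.append(x)
        return sum(self.buf) / len(self.buf)  # mean of samples seen so far

f = MovingAverageFilter(window=3)
smoothed = [f.update(x) for x in [0.0, 1.0, 0.5, 0.9]]
# each output is the mean of up to the last 3 samples
```

In a teleoperation pipeline one such filter would run per pose dimension, trading a small latency for reduced sensor jitter.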

  • 4. Agarwal, Priyanshu
    et al.
    Al Moubayed, Samer
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Alspach, Alexander
    Kim, Joohyung
    Carter, Elizabeth J.
    Lehman, Jill Fain
    Yamane, Katsu
    Imitating Human Movement with Teleoperated Robotic Head (2016). In: 2016 25TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN), 2016, pp. 630-637. Conference paper (Refereed)
    Abstract [en]

    Effective teleoperation requires real-time control of a remote robotic system. In this work, we develop a controller for realizing smooth and accurate motion of a robotic head with application to a teleoperation system for the Furhat robot head [1], which we call TeleFurhat. The controller uses the head motion of an operator measured by a Microsoft Kinect 2 sensor as reference and applies a processing framework to condition and render the motion on the robot head. The processing framework includes a pre-filter based on a moving average filter, a neural network-based model for improving the accuracy of the raw pose measurements of Kinect, and a constrained-state Kalman filter that uses a minimum jerk model to smooth motion trajectories and limit the magnitude of changes in position, velocity, and acceleration. Our results demonstrate that the robot can reproduce the human head motion in real time with a latency of approximately 100 to 170 ms while operating within its physical limits. Furthermore, viewers prefer our new method over rendering the raw pose data from Kinect.

  • 5.
    Almeida, Diogo
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH.
    Ambrus, Rares
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Caccamo, Sergio
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Chen, Xi
    KTH.
    Cruciani, Silvia
    Pinto Basto De Carvalho, Joao F
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Haustein, Joshua
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Marzinotto, Alejandro
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Vina, Francisco
    KTH.
    Karayiannidis, Yannis
    KTH.
    Ögren, Petter
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Optimeringslära och systemteori.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Team KTH’s Picking Solution for the Amazon Picking Challenge 2016 (2017). In: Warehouse Picking Automation Workshop 2017: Solutions, Experience, Learnings and Outlook of the Amazon Robotics Challenge, 2017. Conference paper (Other (popular science, discussion, etc.))
    Abstract [en]

    In this work we summarize the solution developed by Team KTH for the Amazon Picking Challenge 2016 in Leipzig, Germany. The competition simulated a warehouse automation scenario and it was divided in two tasks: a picking task where a robot picks items from a shelf and places them in a tote and a stowing task which is the inverse task where the robot picks items from a tote and places them in a shelf. We describe our approach to the problem starting from a high level overview of our system and later delving into details of our perception pipeline and our strategy for manipulation and grasping. The solution was implemented using a Baxter robot equipped with additional sensors.

  • 6.
    Almeida, Diogo
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH.
    Karayiannidis, Yiannis
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. Dept. of Electrical Eng., Chalmers University of Technology.
    A Framework for Bimanual Folding Assembly Under Uncertainties (2017). In: Workshop – Towards robust grasping and manipulation skills for humanoids, 2017. Conference paper (Other academic)
  • 7.
    Almeida, Diogo
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH.
    Karayiannidis, Yiannis
    Chalmers, Sweden.
    Dexterous manipulation by means of compliant grasps and external contacts (2017). In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2017, IEEE, 2017, pp. 1913-1920, article id 8206010. Conference paper (Refereed)
    Abstract [en]

    We propose a method that allows for dexterous manipulation of an object by exploiting contact with an external surface. The technique requires a compliant grasp, enabling the motion of the object in the robot hand while allowing for significant contact forces to be present on the external surface. We show that under this type of grasp it is possible to estimate and control the pose of the object with respect to the surface, leveraging the trade-off between force control and manipulative dexterity. The method is independent of the object geometry, relying only on the assumptions of type of grasp and the existence of a contact with a known surface. Furthermore, by adapting the estimated grasp compliance, the method can handle unmodelled effects. The approach is demonstrated and evaluated with experiments on object pose regulation and pivoting against a rigid surface, where a mechanical spring provides the required compliance.

  • 8.
    Almeida, Diogo
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. Royal Inst Technol KTH, Ctr Autonomous Syst, Sch Comp Sci & Commun, Robot Percept & Learning Lab, SE-10044 Stockholm, Sweden..
    Karayiannidis, Yiannis
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. Chalmers Univ Technol, Dept Signals & Syst, SE-41296 Gothenburg, Sweden..
    Dexterous Manipulation with Compliant Grasps and External Contacts (2017). In: 2017 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS) / [ed] Bicchi, A., Okamura, A., IEEE, 2017, pp. 1913-1920. Conference paper (Refereed)
    Abstract [en]

    We propose a method that allows for dexterous manipulation of an object by exploiting contact with an external surface. The technique requires a compliant grasp, enabling the motion of the object in the robot hand while allowing for significant contact forces to be present on the external surface. We show that under this type of grasp it is possible to estimate and control the pose of the object with respect to the surface, leveraging the trade-off between force control and manipulative dexterity. The method is independent of the object geometry, relying only on the assumptions of type of grasp and the existence of a contact with a known surface. Furthermore, by adapting the estimated grasp compliance, the method can handle unmodelled effects. The approach is demonstrated and evaluated with experiments on object pose regulation and pivoting against a rigid surface, where a mechanical spring provides the required compliance.

  • 9.
    Almeida, Diogo
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH.
    Karayiannidis, Yiannis
    Robotic Manipulation for Bi-Manual Folding Assembly (2015). In: Late Breaking Posters, 2015. Conference paper (Other academic)
    Abstract [en]

    In this poster the problem of bimanual robotic assembly is considered. In particular we introduce folding assembly which is an assembly task that requires significant rotational motion in order to mate two assembly pieces. We model the connection between the two parts as an ideal virtual prismatic and revolute joint while non-ideal effects on the part movements can be considered as special cases of the ideal virtual joint. The connection between the gripper and the assembly part is also studied by considering the case of rigid and non-rigid grasp. As a proof-of-concept, a stabilizing controller for the assembly task is derived following a bimanual master-slave approach under the assumption of rigid grasps. The controller is validated through simulation while an example object has been designed and printed for experimental validation of our assembly technique.

  • 10.
    Almeida, Diogo
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Viña, Francisco E.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Karayiannidis, Yiannis
    Bimanual Folding Assembly: Switched Control and Contact Point Estimation (2016). In: IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), Cancun, 2016, Cancun: IEEE, 2016. Conference paper (Refereed)
    Abstract [en]

    Robotic assembly in unstructured environments is a challenging task, due to the added uncertainties. These can be mitigated through the employment of assembly systems, which offer a modular approach to the assembly problem via the conjunction of primitives. In this paper, we use a dual-arm manipulator in order to execute a folding assembly primitive. When executing a folding primitive, two parts are brought into rigid contact and posteriorly translated and rotated. A switched controller is employed in order to ensure that the relative motion of the parts follows the desired model, while regulating the contact forces. The control is complemented with an estimator based on a Kalman filter, which tracks the contact point between parts based on force and torque measurements. Experimental results are provided, and the effectiveness of the control and contact point estimation is shown.
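The contact-point tracker described above can be illustrated with a minimal planar sketch: for a planar rigid contact, the torque about the z-axis satisfies tau_z = rx*fy - ry*fx, which is linear in the unknown contact point given the measured force, so a Kalman filter can track it. The function name, noise values, and synthetic data below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def kf_contact_point(forces, torques_z, q=1e-4, r_noise=1e-2):
    """Track a planar contact point r = (rx, ry) from force/torque samples,
    using tau_z = rx*fy - ry*fx as a force-dependent linear measurement."""
    x = np.zeros(2)  # contact point estimate
    P = np.eye(2)    # estimate covariance
    Q = q * np.eye(2)
    for (fx, fy), tau in zip(forces, torques_z):
        P = P + Q                           # predict: near-static contact point
        H = np.array([[fy, -fx]])           # measurement Jacobian for this force
        S = H @ P @ H.T + r_noise           # innovation covariance
        K = P @ H.T / S                     # Kalman gain, shape (2, 1)
        x = x + (K * (tau - H @ x)).ravel() # correct estimate with innovation
        P = (np.eye(2) - K @ H) @ P
    return x

# Synthetic check: exact torques generated from a true contact point (0.3, -0.1).
forces = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (-1.0, 2.0), (2.0, -1.0)] * 10
torques = [0.3 * fy - (-0.1) * fx for fx, fy in forces]
estimate = kf_contact_point(forces, torques)
```

Diverse force directions are needed for observability: a single force direction constrains only one component of the contact point.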

  • 11. Alomari, M.
    et al.
    Duckworth, P.
    Bore, Nils
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Hawasly, M.
    Hogg, D. C.
    Cohn, A. G.
    Grounding of human environments and activities for autonomous robots (2017). In: IJCAI International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence, 2017, pp. 1395-1402. Conference paper (Refereed)
    Abstract [en]

    With the recent proliferation of human-oriented robotic applications in domestic and industrial scenarios, it is vital for robots to continually learn about their environments and about the humans they share their environments with. In this paper, we present a novel, online, incremental framework for unsupervised symbol grounding in real-world, human environments for autonomous robots. We demonstrate the flexibility of the framework by learning about colours, people names, usable objects and simple human activities, integrating state-of-the-art object segmentation, pose estimation, activity analysis along with a number of sensory input encodings into a continual learning framework. Natural language is grounded to the learned concepts, enabling the robot to communicate in a human-understandable way. We show, using a challenging real-world dataset of human activities as perceived by a mobile robot, that our framework is able to extract useful concepts, ground natural language descriptions to them, and, as a proof-of-concept, generate simple sentences from templates to describe people and the activities they are engaged in.

  • 12.
    Ambrus, Rares
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Unsupervised construction of 4D semantic maps in a long-term autonomy scenario (2017). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Robots are operating for longer times and collecting much more data than just a few years ago. In this setting we are interested in exploring ways of modeling the environment, segmenting out areas of interest and keeping track of the segmentations over time, with the purpose of building 4D models (i.e. space and time) of the relevant parts of the environment.

    Our approach relies on repeatedly observing the environment and creating local maps at specific locations. The first question we address is how to choose where to build these local maps. Traditionally, an operator defines a set of waypoints on a pre-built map of the environment which the robot visits autonomously. Instead, we propose a method to automatically extract semantically meaningful regions from a point cloud representation of the environment. The resulting segmentation is purely geometric, and in the context of mobile robots operating in human environments, the semantic label associated with each segment (i.e. kitchen, office) can be of interest for a variety of applications. We therefore also look at how to obtain per-pixel semantic labels given the geometric segmentation, by fusing probabilistic distributions over scene and object types in a Conditional Random Field.

    For most robotic systems, the elements of interest in the environment are the ones which exhibit some dynamic properties (such as people, chairs, cups, etc.), and the ability to detect and segment such elements provides a very useful initial segmentation of the scene. We propose a method to iteratively build a static map from observations of the same scene acquired at different points in time. Dynamic elements are obtained by computing the difference between the static map and new observations. We address the problem of clustering together dynamic elements which correspond to the same physical object, observed at different points in time and in significantly different circumstances. To address some of the inherent limitations in the sensors used, we autonomously plan, navigate around and obtain additional views of the segmented dynamic elements. We look at methods of fusing the additional data and we show that both a combined point cloud model and a fused mesh representation can be used to more robustly recognize the dynamic object in future observations. In the case of the mesh representation, we also show how a Convolutional Neural Network can be trained for recognition by using mesh renderings.

    Finally, we present a number of methods to analyse the data acquired by the mobile robot autonomously and over extended time periods. First, we look at how the dynamic segmentations can be used to derive a probabilistic prior which can be used in the mapping process to further improve and reinforce the segmentation accuracy. We also investigate how to leverage spatial-temporal constraints in order to cluster dynamic elements observed at different points in time and under different circumstances. We show that by making a few simple assumptions we can increase the clustering accuracy even when the object appearance varies significantly between observations. The result of the clustering is a spatial-temporal footprint of the dynamic object, defining an area where the object is likely to be observed spatially as well as a set of time stamps corresponding to when the object was previously observed. Using this data, predictive models can be created and used to infer future times when the object is more likely to be observed. In an object search scenario, this model can be used to decrease the search time when looking for specific objects.

  • 13.
    Ambrus, Rares
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bore, Nils
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Folkesson, John
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Autonomous meshing, texturing and recognition of object models with a mobile robot (2017). Conference paper (Refereed)
    Abstract [en]

    We present a system for creating object models from RGB-D views acquired autonomously by a mobile robot. We create high-quality textured meshes of the objects by approximating the underlying geometry with a Poisson surface. Our system employs two optimization steps, first registering the views spatially based on image features, and second aligning the RGB images to maximize photometric consistency with respect to the reconstructed mesh. We show that the resulting models can be used robustly for recognition by training a Convolutional Neural Network (CNN) on images rendered from the reconstructed meshes. We perform experiments on data collected autonomously by a mobile robot both in controlled and uncontrolled scenarios. We compare quantitatively and qualitatively to previous work to validate our approach.

  • 14.
    Ambrus, Rares
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Claici, Sebastian
    Wendt, Axel
    Automatic Room Segmentation From Unstructured 3-D Data of Indoor Environments (2017). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 2, no. 2, pp. 749-756. Journal article (Refereed)
    Abstract [en]

    We present an automatic approach for the task of reconstructing a 2-D floor plan from unstructured point clouds of building interiors. Our approach emphasizes accurate and robust detection of building structural elements and, unlike previous approaches, does not require prior knowledge of scanning device poses. The reconstruction task is formulated as a multiclass labeling problem that we approach using energy minimization. We use intuitive priors to define the costs for the energy minimization problem and rely on accurate wall and opening detection algorithms to ensure robustness. We provide detailed experimental evaluation results, both qualitative and quantitative, against state-of-the-art methods and labeled ground-truth data.

  • 15.
    Ambrus, Rares
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Folkesson, John
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Unsupervised object segmentation through change detection in a long term autonomy scenario (2016). In: IEEE-RAS International Conference on Humanoid Robots, IEEE, 2016, pp. 1181-1187. Conference paper (Refereed)
    Abstract [en]

    In this work we address the problem of dynamic object segmentation in office environments. We make no prior assumptions on what is dynamic and static, and our reasoning is based on change detection between sparse and non-uniform observations of the scene. We model the static part of the environment, and we focus on improving the accuracy and quality of the segmented dynamic objects over long periods of time. We address the issue of adapting the static structure over time and incorporating new elements, for which we train and use a classifier whose output gives an indication of the dynamic nature of the segmented elements. We show that the proposed algorithms improve the accuracy and the rate of detection of dynamic objects by comparing with a labelled dataset.

  • 16.
    Antonova, Rika
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Cruciani, Silvia
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Smith, Christian
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Reinforcement Learning for Pivoting Task. Manuscript (preprint) (Other academic)
    Abstract [en]

    In this work we propose an approach to learn a robust policy for solving the pivoting task. Recently, several model-free continuous control algorithms were shown to learn successful policies without prior knowledge of the dynamics of the task. However, obtaining successful policies required thousands to millions of training episodes, limiting the applicability of these approaches to real hardware. We developed a training procedure that allows us to use a simple custom simulator to learn policies robust to the mismatch of simulation vs robot. In our experiments, we demonstrate that the policy learned in the simulator is able to pivot the object to the desired target angle on the real robot. We also show generalization to an object with different inertia, shape, mass and friction properties than those used during training. This result is a step towards making model-free reinforcement learning available for solving robotics tasks via pre-training in simulators that offer only an imprecise match to the real-world dynamics.

  • 17.
    Antonova, Rika
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Rai, Akshara
    Robotics Institute, School of Computer Science, Carnegie Mellon University, USA.
    Atkeson, Christopher G.
    Robotics Institute, School of Computer Science, Carnegie Mellon University, USA.
    Deep kernels for optimizing locomotion controllers (2017). In: Proceedings of the 1st Annual Conference on Robot Learning, PMLR, 2017. Conference paper (Refereed)
    Abstract [en]

    Sample efficiency is important when optimizing parameters of locomotion controllers, since hardware experiments are time consuming and expensive. Bayesian Optimization, a sample-efficient optimization framework, has recently been widely applied to address this problem, but further improvements in sample efficiency are needed for practical applicability to real-world robots and high-dimensional controllers. To address this, prior work has proposed using domain expertise for constructing custom distance metrics for locomotion. In this work we show how to learn such a distance metric automatically. We use a neural network to learn an informed distance metric from data obtained in high-fidelity simulations. We conduct experiments on two different controllers and robot architectures. First, we demonstrate improvement in sample efficiency when optimizing a 5-dimensional controller on the ATRIAS robot hardware. We then conduct simulation experiments to optimize a 16-dimensional controller for a 7-link robot model and obtain significant improvements even when optimizing in perturbed environments. This demonstrates that our approach is able to enhance sample efficiency for two different controllers, hence is a fitting candidate for further experiments on hardware in the future.
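The learned-distance-metric idea described above amounts to evaluating a standard kernel on features produced by a network rather than on raw controller parameters. The following is a hedged sketch of that construction; the random weights stand in for a trained network, and the paper's actual kernel and training procedure differ.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 5))  # stand-in weights; a real deep kernel uses a trained net

def phi(x):
    """Hypothetical learned feature map (a single tanh layer as a stand-in)."""
    return np.tanh(W @ x)

def deep_rbf_kernel(x1, x2, lengthscale=1.0):
    """RBF kernel evaluated on learned features instead of raw parameters."""
    d = phi(x1) - phi(x2)
    return float(np.exp(-0.5 * (d @ d) / lengthscale ** 2))

a = rng.normal(size=5)  # two hypothetical 5-dimensional controller settings
b = rng.normal(size=5)
k_aa = deep_rbf_kernel(a, a)
k_ab = deep_rbf_kernel(a, b)
```

Such a kernel would then serve as the covariance function of the Gaussian-process surrogate inside Bayesian Optimization, so that controllers the network maps to similar features are treated as similar.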

  • 18.
    Antonova, Rika
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. Robotics Institute, School of Computer Science, Carnegie Mellon University, USA.
    Rai, Akshara
    Robotics Institute, School of Computer Science, Carnegie Mellon University, USA.
    Atkeson, Christopher G.
    Robotics Institute, School of Computer Science, Carnegie Mellon University, USA.
    Sample efficient optimization for learning controllers for bipedal locomotion (2016). Conference paper (Refereed)
    Abstract [en]

    Learning policies for bipedal locomotion can be difficult, as experiments are expensive and simulation does not usually transfer well to hardware. To counter this, we need algorithms that are sample efficient and inherently safe. Bayesian Optimization is a powerful sample-efficient tool for optimizing non-convex black-box functions. However, its performance can degrade in higher dimensions. We develop a distance metric for bipedal locomotion that enhances the sample-efficiency of Bayesian Optimization and use it to train a 16 dimensional neuromuscular model for planar walking. This distance metric reflects some basic gait features of healthy walking and helps us quickly eliminate a majority of unstable controllers. With our approach we can learn policies for walking in less than 100 trials for a range of challenging settings. In simulation, we show results on two different costs and on various terrains including rough ground and ramps, sloping upwards and downwards. We also perturb our models with unknown inertial disturbances analogous with differences between simulation and hardware. These results are promising, as they indicate that this method can potentially be used to learn control policies on hardware.

  • 19.
    Ay, Emre
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Ego-Motion Estimation of Drones2017Självständigt arbete på avancerad nivå (masterexamen), 20 poäng / 30 hpStudentuppsats (Examensarbete)
    Abstract [sv]

    To remove the need for external infrastructure such as GPS, which is moreover unavailable in many environments, it is desirable to estimate a drone's motion using onboard sensors. Visual positioning systems have been studied for a long time and the literature on the subject is abundant. The aim of this project is to examine the currently available methods and design a vision-based positioning system for drones. The resulting system is evaluated and shown to give acceptable position estimates.

  • 20.
    Beskow, Jonas
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Peters, Christopher
    KTH, Skolan för datavetenskap och kommunikation (CSC), Beräkningsvetenskap och beräkningsteknik (CST).
    Castellano, G.
    O'Sullivan, C.
    Leite, Iolanda
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Kopp, S.
    Preface2017Ingår i: 17th International Conference on Intelligent Virtual Agents, IVA 2017, Springer, 2017, Vol. 10498, s. V-VIKonferensbidrag (Refereegranskat)
  • 21.
    Binz, Marcel
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Learning Goal-Directed Behaviour2017Självständigt arbete på avancerad nivå (masterexamen), 20 poäng / 30 hpStudentuppsats (Examensarbete)
    Abstract [sv]

    Learning behaviour for artificial agents is commonly studied within Reinforcement Learning. Reinforcement Learning has recently received increased attention, partly due to developments that have made it possible to use complex function approximators, such as deep networks, in combination with Reinforcement Learning. Two of the core challenges in Reinforcement Learning are the credit assignment problem over long horizons and the handling of sparse rewards. In this thesis we propose a framework based on subgoals to address these problems. This work examines the components required to obtain a form of goal-directed behaviour, similar to that observed in human reasoning. This includes the representation of a goal space, learning to set goals, and finally learning behaviour to reach those goals. The framework builds on the options model, a common approach for representing temporally extended actions in Reinforcement Learning. All components of the proposed method can be implemented with deep networks, and the complete system can be trained end-to-end using standard optimization techniques. We evaluate the approach on a range of continuous control problems of varying difficulty. We show that we can solve a challenging gathering task for which previous state-of-the-art algorithms have struggled to find solutions. The presented method further scales up to complex kinematic agents in MuJoCo simulations.

  • 22.
    Bore, Nils
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Detection and Tracking of General Movable Objects in Large 3D MapsManuskript (preprint) (Övrigt vetenskapligt)
    Abstract [en]

    This paper studies the problem of detection and tracking of general objects with long-term dynamics, observed by a mobile robot moving in a large environment. A key problem is that due to the environment scale, it can only observe a subset of the objects at any given time. Since some time passes between observations of objects in different places, the objects might be moved when the robot is not there. We propose a model for this movement in which the objects typically only move locally, but with some small probability they jump longer distances, through what we call global motion. For filtering, we decompose the posterior over local and global movements into two linked processes. The posterior over the global movements and measurement associations is sampled, while we track the local movement analytically using Kalman filters. This novel filter is evaluated on point cloud data gathered autonomously by a mobile robot over an extended period of time. We show that tracking jumping objects is feasible, and that the proposed probabilistic treatment outperforms previous methods when applied to real world data. The key to efficient probabilistic tracking in this scenario is focused sampling of the object posteriors.
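The analytic part of the decomposition, tracking an object's local movement, can be sketched as a random-walk Kalman filter over the object's 2-D position. The sampled global-jump process and the measurement associations are omitted here, and all noise values are illustrative assumptions.

```python
import numpy as np

# Minimal 2-D random-walk Kalman filter for an object's local motion.
# Q models small local drift between robot visits; R is measurement
# noise on the observed object position. Values are illustrative.
Q = 0.05 * np.eye(2)
R = 0.10 * np.eye(2)

def kf_step(mu, P, z):
    # Predict: the object drifts locally between observations.
    P = P + Q
    # Update with a new position measurement z (identity observation).
    S = P + R
    K = P @ np.linalg.inv(S)
    mu = mu + K @ (z - mu)
    P = (np.eye(2) - K) @ P
    return mu, P

mu, P = np.zeros(2), np.eye(2)
for z in [np.array([0.10, 0.00]),
          np.array([0.12, 0.05]),
          np.array([0.15, 0.04])]:
    mu, P = kf_step(mu, P, z)
print(mu)   # posterior mean tracks the drifting object
```

In the full filter, a bank of such trackers would run conditioned on each sampled global-movement hypothesis.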

  • 23.
    Bore, Nils
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Multiple Object Detection, Tracking and Long-Term Dynamics Learning in Large 3D MapsManuskript (preprint) (Övrigt vetenskapligt)
    Abstract [en]

    In this work, we present a method for tracking and learning the dynamics of all objects in a large scale robot environment. A mobile robot patrols the environment and visits the different locations one by one. Movable objects are discovered by change detection, and tracked throughout the robot deployment. For tracking, we extend our previous Rao-Blackwellized particle filter with birth and death processes, enabling the method to handle an arbitrary number of objects. Target births and associations are sampled using Gibbs sampling. The parameters of the system are then learnt using the Expectation Maximization algorithm in an unsupervised fashion. The system therefore enables learning of the dynamics of one particular environment, and of its objects. The algorithm is evaluated on data collected autonomously by a mobile robot in an office environment during a real-world deployment. We show that the algorithm automatically identifies and tracks the moving objects within 3D maps and infers plausible dynamics models, significantly decreasing the modeling bias of our previous work. The proposed method represents an improvement over previous methods for environment dynamics learning as it allows for learning of fine grained processes.
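As a loose illustration of the unsupervised parameter learning, the sketch below runs EM on a two-component model of displacement magnitudes, separating small local moves from rare long jumps. This is a simplified 1-D Gaussian-mixture stand-in of my own construction, not the paper's full Rao-Blackwellized model; the synthetic data and initial values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic displacement magnitudes: mostly small local moves,
# plus occasional long jumps.
local = np.abs(rng.normal(0.0, 0.2, 300))
jumps = np.abs(rng.normal(0.0, 3.0, 60))
d = np.concatenate([local, jumps])

pi, sig = 0.5, np.array([0.5, 2.0])   # mixing weight, component stds
for _ in range(50):
    # E-step: responsibility of the narrow "local" component.
    p0 = pi * np.exp(-0.5 * (d / sig[0]) ** 2) / sig[0]
    p1 = (1 - pi) * np.exp(-0.5 * (d / sig[1]) ** 2) / sig[1]
    r = p0 / (p0 + p1)
    # M-step: re-estimate mixing weight and component spreads.
    pi = r.mean()
    sig[0] = np.sqrt((r * d ** 2).sum() / r.sum())
    sig[1] = np.sqrt(((1 - r) * d ** 2).sum() / (1 - r).sum())

print(round(pi, 2), np.round(sig, 2))
```

The learnt mixing weight plays the role of the environment-specific probability that an object only moved locally between observations.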

  • 24.
    Bore, Nils
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Object Instance Detection and Dynamics Modeling in a Long-Term Mobile Robot Context2017Doktorsavhandling, sammanläggning (Övrigt vetenskapligt)
    Abstract [sv]

    In recent years, simpler service robots, such as autonomous vacuum cleaners and lawn mowers, have begun to be sold and have become increasingly common. The next generation of service robots is expected to perform more complex tasks, for example tidying up scattered objects in a living room. To achieve this, the robots must be able to navigate unstructured environments and understand how they can be put in order. In this thesis we investigate abstract representations that can realize general tidying robots, as well as robots that can fetch objects. We discuss what these specific applications require in terms of representations, and argue that a solution to these problems would be more generally applicable due to the object-centric nature of the tasks. We approach the task through two key insights. To begin with, many of today's robot representations are limited to the spatial domain. They thus fail to model the variation that occurs over time, and therefore do not exploit that the motion that can occur within a given time period is limited. We argue that it is critical to also incorporate the motion of the environment into the robot's model. By modelling the surroundings at an object level, applications such as tidying and fetching of movable objects become possible. The second insight comes from the fact that mobile robots are now becoming robust enough to patrol a single environment for several months. They can therefore collect large amounts of data from individual environments. With these large datasets, it is becoming possible to apply unsupervised learning methods to learn models of individual environments without human involvement. This allows the robots to adapt to changes in the environment, and to learn concepts that may be hard to anticipate in advance. We regard this as a fundamental capability of a fully autonomous robot. The combination of unsupervised learning and modelling of the environment's dynamics is interesting.
Since the dynamics vary between different environments, and between different objects, learning can help us capture these variations and create more precise dynamics models. One thing that complicates the modelling of environment dynamics is that the robot cannot observe the entire environment at once. This means that objects may be moved long distances between two observations. We show how to address this in the model by incorporating several different ways in which an object can be moved. The resulting system is fully probabilistic and can keep track of all objects in the robot's surroundings. We also demonstrate methods for detecting and learning objects in the static part of the environment. With the combined system we can thus represent and learn many aspects of the robot's environment. Through experiments in human environments, we show that the system can keep track of different kinds of objects in large, dynamic environments.

  • 25.
    Butepage, Judith
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Black, Michael J.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Kjellström, Hedvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Deep representation learning for human motion prediction and classification2017Ingår i: 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), IEEE, 2017, s. 1591-1599Konferensbidrag (Refereegranskat)
    Abstract [en]

    Generative models of 3D human motion are often restricted to a small number of activities and can therefore not generalize well to novel movements or applications. In this work we propose a deep learning framework for human motion capture data that learns a generic representation from a large corpus of motion capture data and generalizes well to new, unseen, motions. Using an encoding-decoding network that learns to predict future 3D poses from the most recent past, we extract a feature representation of human motion. Most work on deep learning for sequence prediction focuses on video and speech. Since skeletal data has a different structure, we present and evaluate different network architectures that make different assumptions about time dependencies and limb correlations. To quantify the learned features, we use the output of different layers for action classification and visualize the receptive fields of the network units. Our method outperforms the recent state of the art in skeletal motion prediction even though these methods use action-specific training data. Our results show that deep feedforward networks, trained from a generic mocap database, can successfully be used for feature extraction from human motion data and that this representation can be used as a foundation for classification and prediction.
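As a drastically simplified, linear analogue of the encoding-decoding idea, the sketch below learns a bottlenecked predictor of the next pose from the k most recent poses via reduced-rank regression; the bottleneck activations play the role of learned motion features. The synthetic "joint angle" data, the dimensions, and the linear model are all assumptions of this sketch, not the paper's deep network.

```python
import numpy as np

rng = np.random.default_rng(1)
T, J, k, bottleneck = 500, 6, 4, 3   # frames, joints, history, feature dim

# Synthetic mocap-like data: 6 joint angles driven by 3 latent sines,
# so a 3-dimensional feature space suffices.
t = np.linspace(0, 20, T)[:, None]
latent = np.sin(t * np.array([1.0, 2.0, 3.0]))        # (T, 3)
mix = rng.normal(size=(3, J))
poses = latent @ mix + 0.01 * rng.normal(size=(T, J))

# Build (recent history -> next pose) training pairs.
X = np.hstack([poses[i:T - k + i] for i in range(k)])  # (T-k, k*J)
Y = poses[k:]                                          # (T-k, J)

# Reduced-rank regression: full least squares, then rank truncation
# via SVD, yielding an encoder E (features) and decoder D (poses).
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
U, s, Vt = np.linalg.svd(X @ W, full_matrices=False)
E = W @ Vt[:bottleneck].T      # (k*J, bottleneck) encoder
D = Vt[:bottleneck]            # (bottleneck, J)   decoder

features = X @ E               # learned motion representation
pred = features @ D            # predicted next poses
err = np.abs(pred - Y).mean()
print(err)
```

As in the paper's setup, the intermediate representation (`features`) could then be reused for downstream classification.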

  • 26.
    Båberg, Fredrik
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Petter, Ögren
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL. KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma systen, CAS.
    Formation Obstacle Avoidance using RRT and Constraint Based Programming2017Ingår i: 2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), IEEE conference proceedings, 2017, artikel-id 8088131Konferensbidrag (Refereegranskat)
    Abstract [en]

    In this paper, we propose a new way of doing formation obstacle avoidance using a combination of Constraint Based Programming (CBP) and Rapidly Exploring Random Trees (RRTs). RRT is used to select waypoint nodes, and CBP is used to move the formation between those nodes, reactively rotating and translating the formation to pass the obstacles on the way. Thus, the CBP includes constraints for both formation keeping and obstacle avoidance, while striving to move the formation towards the next waypoint. The proposed approach is compared to a pure RRT approach where the motion between the RRT waypoints is done following linear interpolation trajectories, which are less computationally expensive than the CBP ones. The results of a number of challenging simulations show that the proposed approach is more efficient for scenarios with high obstacle densities.
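The waypoint-selection stage can be sketched as a vanilla 2-D RRT; the CBP-based formation motion between waypoints is not modelled here. The obstacle, bounds, step size, and goal bias are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)
start, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
obstacle, radius = np.array([5.0, 5.0]), 2.0   # one disc obstacle
step = 0.8

def collision_free(p, q, n=10):
    # Sample points along segment p-q and test against the disc.
    for s in np.linspace(0, 1, n):
        if np.linalg.norm(p + s * (q - p) - obstacle) <= radius:
            return False
    return True

nodes, parent, goal_idx = [start], {0: None}, None
for _ in range(2000):
    sample = goal if rng.random() < 0.1 else rng.uniform(0, 10, 2)
    near = min(range(len(nodes)),
               key=lambda i: np.linalg.norm(nodes[i] - sample))
    direction = sample - nodes[near]
    if np.linalg.norm(direction) < 1e-9:
        continue
    new = nodes[near] + step * direction / np.linalg.norm(direction)
    if collision_free(nodes[near], new):
        nodes.append(new)
        parent[len(nodes) - 1] = near
        if np.linalg.norm(new - goal) < step and collision_free(new, goal):
            nodes.append(goal)
            parent[len(nodes) - 1] = len(nodes) - 2
            goal_idx = len(nodes) - 1
            break

# Extract waypoints by walking back from the goal to the root.
path, i = [], goal_idx
while i is not None:
    path.append(nodes[i])
    i = parent[i]
path.reverse()
print(len(path))
```

In the proposed approach, each consecutive waypoint pair in `path` would then be handed to the CBP controller, which rotates and translates the formation reactively along the segment.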

  • 27.
    Carvalho, J. Frederico
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Pequito, S.
    Aguiar, A. P.
    Kar, S.
    Johansson, Karl Henrik
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik.
    Composability and controllability of structural linear time-invariant systems: Distributed verification2017Ingår i: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 78, s. 123-134Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    Motivated by the development and deployment of large-scale dynamical systems, often comprised of geographically distributed smaller subsystems, we address the problem of verifying their controllability in a distributed manner. Specifically, we study controllability in the structural system theoretic sense, structural controllability, in which rather than focusing on a specific numerical system realization, we provide guarantees for equivalence classes of linear time-invariant systems on the basis of their structural sparsity patterns, i.e., the location of zero/nonzero entries in the plant matrices. Towards this goal, we first provide several necessary and/or sufficient conditions that ensure that the overall system is structurally controllable on the basis of the subsystems’ structural pattern and their interconnections. The proposed verification criteria are shown to be efficiently implementable (i.e., with polynomial time-complexity in the number of the state variables and inputs) in two important subclasses of interconnected dynamical systems: similar (where every subsystem has the same structure) and serial (where every subsystem outputs to at most one other subsystem). Secondly, we provide an iterative distributed algorithm to verify structural controllability for general interconnected dynamical system, i.e., it is based on communication among (physically) interconnected subsystems, and requires only local model and interconnection knowledge at each subsystem.
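One classical ingredient behind such verification criteria is input-accessibility: every state variable must be reachable from some input in the directed graph defined by the zero/nonzero patterns of (A, B). This can be checked with a simple BFS, as sketched below; the example pattern matrices are illustrative, and the companion no-dilation condition (typically checked via a matching argument) is not shown.

```python
from collections import deque

# Pattern matrices: 1 marks a structural nonzero.
# A[i][j] = 1 means state j influences state i (edge j -> i).
A = [[0, 0, 0],
     [1, 0, 0],
     [0, 1, 0]]
# B[i][k] = 1 means input k drives state i.
B = [[1],
     [0],
     [0]]

def input_accessible(A, B):
    n = len(A)
    # Start from every state directly driven by an input.
    reached = {i for i in range(n) if any(B[i])}
    queue = deque(reached)
    while queue:
        j = queue.popleft()
        for i in range(n):
            if A[i][j] and i not in reached:
                reached.add(i)
                queue.append(i)
    return len(reached) == n

print(input_accessible(A, B))   # chain 0 -> 1 -> 2 is fully reachable
```

A distributed version of such checks, as in the paper, would run local searches per subsystem and exchange reachability information over the interconnections.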

  • 28. Ciccozzi, F.
    et al.
    Di Ruscio, D.
    Malavolta, I.
    Pelliccione, P.
    Tumova, Jana
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Engineering the software of robotic systems2017Ingår i: Proceedings - 2017 IEEE/ACM 39th International Conference on Software Engineering Companion, ICSE-C 2017, Institute of Electrical and Electronics Engineers (IEEE), 2017, s. 507-508, artikel-id 7965406Konferensbidrag (Refereegranskat)
    Abstract [en]

    The production of software for robotic systems is often case-specific, without fully following established engineering approaches. Systematic approaches, methods, models, and tools are pivotal for the creation of robotic systems for real-world applications and turn-key solutions. Well-defined (software) engineering approaches are considered the 'make or break' factor in the development of complex robotic systems. The shift towards well-defined engineering approaches will stimulate component supply-chains and significantly reshape the robotics marketplace. The goal of this technical briefing is to provide an overview on the state of the art and practice concerning solutions and open challenges in the engineering of software required to develop and manage robotic systems. Model-Driven Engineering (MDE) is discussed as a promising technology to raise the level of abstraction, promote reuse, facilitate integration, boost automation and promote early analysis in such a complex domain.

  • 29.
    Colledanchise, Michele
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Behavior Trees in Robotics2017Doktorsavhandling, sammanläggning (Övrigt vetenskapligt)
    Abstract [en]

    Behavior Trees (BTs) are a Control Architecture (CA) that was invented in the video game industry, for controlling non-player characters. In this thesis we investigate the possibilities of using BTs for controlling autonomous robots, from a theoretical as well as practical standpoint. The next generation of robots will need to work, not only in the structured assembly lines of factories, but also in the unpredictable and dynamic environments of homes, shops, and other places where the space is shared with humans, and with different and possibly conflicting objectives. The nature of these environments makes it impossible to first compute the long sequence of actions needed to complete a task, and then blindly execute these actions. One way of addressing this problem is to perform a complete re-planning once a deviation is detected. Another way is to include feedback in the plan, and invoke additional incremental planning only when outside the scope of the feedback built into the plan. However, the feasibility of the latter option depends on the choice of CA, which thereby impacts the way the robot deals with unpredictable environments. In this thesis we address the problem of analyzing BTs as a novel CA for robots. The philosophy of BTs is to create control policies that are both modular and reactive. Modular in the sense that control policies can be separated and recombined, and reactive in the sense that they efficiently respond to events that were not predicted, either caused by external agents, or by unexpected outcomes of the robot's own actions. Firstly, we propose a new functional formulation of BTs that allows us to mathematically analyze key system properties using standard tools from robot control theory. In particular we analyze whether a BT is safe, in terms of avoiding particular parts of the state space, and robust, in terms of having a large domain of operation.
This formulation also allows us to compare BTs with other commonly used CAs such as Finite State Machines (FSMs); the Subsumption Architecture; Sequential Behavior Compositions; Decision Trees; AND-OR Trees; and Teleo-Reactive Programs. Then we propose a framework to systematically analyze the efficiency and reliability of a given BT, in terms of expected time to completion and success probability. By including these performance measures in a user-defined objective function, we can optimize the order of different fallback options in a given BT to minimize that function. Finally we show the advantages of using BTs within an Automated Planning framework. In particular we show how to synthesize a policy that is reactive, modular, safe, and fault tolerant with two different approaches: model-based (using planning), and model-free (using learning).
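The two core BT composites analyzed in the thesis can be sketched in a few lines: a Sequence ticks its children until one does not succeed, a Fallback until one does not fail. The leaf behaviors and the toy "fetch" tree below are illustrative inventions, not examples from the thesis.

```python
# Minimal Behavior Tree sketch with the standard three return statuses.
SUCCESS, FAILURE, RUNNING = "SUCCESS", "FAILURE", "RUNNING"

class Sequence:
    def __init__(self, *children): self.children = children
    def tick(self):
        for c in self.children:
            s = c.tick()
            if s != SUCCESS:
                return s          # FAILURE or RUNNING propagates up
        return SUCCESS

class Fallback:
    def __init__(self, *children): self.children = children
    def tick(self):
        for c in self.children:
            s = c.tick()
            if s != FAILURE:
                return s          # SUCCESS or RUNNING propagates up
        return FAILURE

class Leaf:
    def __init__(self, fn): self.fn = fn
    def tick(self): return self.fn()

# Reactive toy "fetch" behavior: if not already holding the object,
# fall back to grasping it. Re-ticking the root every cycle is what
# makes BTs reactive: conditions are re-checked on every tick.
holding = {"obj": False}
def have_object(): return SUCCESS if holding["obj"] else FAILURE
def grasp(): holding["obj"] = True; return SUCCESS

tree = Fallback(Leaf(have_object), Leaf(grasp))
print(tree.tick(), tree.tick())
```

Note how modularity falls out directly: the `Fallback` subtree can be spliced unchanged into any larger `Sequence` without rewiring transitions, which is the contrast with FSMs drawn in the thesis.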

  • 30.
    Colledanchise, Michele
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Almeid, Diogo
    Ögren, Petter
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Towards Blended Planning and Acting using Behavior Trees. A Reactive, Safe and Fault Tolerant Approach.Artikel i tidskrift (Refereegranskat)
  • 31.
    Colledanchise, Michele
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Marzinotto, Alejandro
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Dimarogonas, Dimos V.
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik.
    Ögren, Petter
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    The advantages of using behavior trees in multi-robot systems2016Ingår i: 47th International Symposium on Robotics, ISR 2016, VDE Verlag GmbH, 2016, s. 23-30Konferensbidrag (Refereegranskat)
    Abstract [en]

    Multi-robot teams offer possibilities of improved performance and fault tolerance, compared to single robot solutions. In this paper, we show how to realize those possibilities when starting from a single robot system controlled by a Behavior Tree (BT). By extending the single robot BT to a multi-robot BT, we are able to combine the fault tolerant properties of the BT, in terms of built-in fallbacks, with the fault tolerance inherent in multi-robot approaches, in terms of a faulty robot being replaced by another one. Furthermore, we improve performance by identifying and taking advantage of the opportunities for parallel task execution that are present in the single robot BT. Analyzing the proposed approach, we present results regarding how mission performance is affected by minor faults (a robot losing one capability) as well as major faults (a robot losing all its capabilities).

  • 32.
    Colledanchise, Michele
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Marzinotto, Alejandro
    Ögren, Petter
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Stochastic Behavior Trees for Estimating and Optimizing the Performance of Reactive Plan ExecutionsArtikel i tidskrift (Refereegranskat)
  • 33.
    Colledanchise, Michele
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Murray, R. M.
    Ögren, Petter
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Synthesis of correct-by-construction behavior trees2017Ingår i: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2017, Institute of Electrical and Electronics Engineers (IEEE), 2017, s. 6039-6046, artikel-id 8206502Konferensbidrag (Refereegranskat)
    Abstract [en]

    In this paper we study the problem of synthesizing correct-by-construction Behavior Trees (BTs) controlling agents in adversarial environments. The proposed approach combines the modularity and reactivity of BTs with the formal guarantees of Linear Temporal Logic (LTL) methods. Given a set of admissible environment specifications, an agent model in the form of a Finite Transition System and the desired task in the form of an LTL formula, we synthesize a BT in polynomial time that is guaranteed to correctly execute the desired task. To illustrate the approach, we present three examples of increasing complexity.

  • 34.
    Colledanchise, Michele
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Murray, Richard M.
    CALTECH, Dept Control & Dynam Syst, Pasadena, CA 91125 USA..
    Ögren, Petter
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Synthesis of Correct-by-Construction Behavior Trees2017Ingår i: 2017 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS) / [ed] Bicchi, A Okamura, A, IEEE , 2017, s. 6039-6046Konferensbidrag (Refereegranskat)
    Abstract [en]

    In this paper we study the problem of synthesizing correct-by-construction Behavior Trees (BTs) controlling agents in adversarial environments. The proposed approach combines the modularity and reactivity of BTs with the formal guarantees of Linear Temporal Logic (LTL) methods. Given a set of admissible environment specifications, an agent model in the form of a Finite Transition System and the desired task in the form of an LTL formula, we synthesize a BT in polynomial time that is guaranteed to correctly execute the desired task. To illustrate the approach, we present three examples of increasing complexity.

  • 35.
    Colledanchise, Michele
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Parasuraman, Ramviyas
    Ögren, Petter
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Learning of Behavior Trees for Autonomous Agents.Artikel i tidskrift (Refereegranskat)
  • 36.
    Colledanchise, Michele
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Ögren, Petter
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    How Behavior Trees Modularize Hybrid Control Systems and Generalize Sequential Behavior Compositions, the Subsumption Architecture, and Decision Trees2017Ingår i: IEEE Transactions on robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 33, nr 2, s. 372-389Artikel i tidskrift (Refereegranskat)
  • 37.
    Cruciani, Silvia
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Smith, Christian
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    In-hand manipulation using three-stages open loop pivoting2017Konferensbidrag (Refereegranskat)
    Abstract [en]

    In this paper we propose a method for pivoting an object held by a parallel gripper, without requiring accurate dynamical models or advanced hardware. Our solution uses the motion of the robot arm for generating inertial forces to move the object. It also controls the rotational friction at the pivoting point by commanding a desired distance to the gripper's fingers. This method relies neither on fast and precise tracking systems to obtain the position of the tool, nor on real-time and high-frequency controllable robotic grippers to quickly adjust the finger distance. We demonstrate the efficacy of our method by applying it on a Baxter robot.

  • 38.
    Cruciani, Silvia
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Smith, Christian
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    In-Hand Manipulation Using Three-Stages Open Loop Pivoting2017Ingår i: 2017 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS) / [ed] Bicchi, A Okamura, A, IEEE , 2017, s. 1244-1251Konferensbidrag (Refereegranskat)
    Abstract [en]

    In this paper we propose a method for pivoting an object held by a parallel gripper, without requiring accurate dynamical models or advanced hardware. Our solution uses the motion of the robot arm for generating inertial forces to move the object. It also controls the rotational friction at the pivoting point by commanding a desired distance to the gripper's fingers. This method relies neither on fast and precise tracking systems to obtain the position of the tool, nor on real-time and high-frequency controllable robotic grippers to quickly adjust the finger distance. We demonstrate the efficacy of our method by applying it on a Baxter robot.

  • 39. Ek, C. H.
    et al.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    The importance of structure2017Ingår i: 15th International Symposium of Robotics Research, 2011, Springer, 2017, s. 111-127Konferensbidrag (Refereegranskat)
    Abstract [en]

    Many tasks in robotics and computer vision are concerned with inferring a continuous or discrete state variable from observations and measurements from the environment. Due to the high-dimensional nature of the input data, the inference is often cast as a two-stage process: first a low-dimensional feature representation is extracted, on which secondly a learning algorithm is applied. Due to the significant progress that has been achieved within the field of machine learning over the last decade, focus has been placed on the second stage of the inference process, improving it by exploiting more advanced learning techniques applied to the same (or more of the same) data. We believe that for many scenarios significant strides in performance could be achieved by focusing on representation, rather than aiming to alleviate inconclusive and/or redundant information by exploiting more advanced inference methods. This stems from the notion that, given the "correct" representation, the inference problem becomes easier to solve. In this paper we argue that one important mode of information for many application scenarios is not the actual variation in the data, but rather the higher-order statistics, such as the structure of the variations. We will exemplify this through a set of applications and show different ways of representing the structure of data.

  • 40.
    Engelhardt, Sara
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Hansson, Emmeli
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Leite, Iolanda
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Better faulty than sorry: Investigating social recovery strategies to minimize the impact of failure in human-robot interaction2017Ingår i: WCIHAI 2017 Workshop on Conversational Interruptions in Human-Agent Interactions: Proceedings of the first Workshop on Conversational Interruptions in Human-Agent Interactions co-located with 17th International Conference on International Conference on Intelligent Virtual Agents (IVA 2017) Stockholm, Sweden, August 27, 2017., CEUR-WS , 2017, Vol. 1943, s. 19-27Konferensbidrag (Refereegranskat)
    Abstract [en]

    Failure happens in most social interactions, possibly even more so in interactions between a robot and a human. This paper investigates different failure recovery strategies that robots can employ to minimize the negative effect on people's perception of the robot. A between-subject Wizard-of-Oz experiment with 33 participants was conducted in a scenario where a robot and a human play a collaborative game. The interaction was mainly speech-based and controlled failures were introduced at specific moments. Three types of recovery strategies were investigated, one in each experimental condition: ignore (the robot ignores that a failure has occurred and moves on with the task), apology (the robot apologizes for failing and moves on) and problem-solving (the robot tries to solve the problem with the help of the human). Our results show that the apology-based strategy scored the lowest on measures such as likeability and perceived intelligence, and that the ignore strategy led to better perceptions of perceived intelligence and animacy than the other recovery strategies.

  • 41. Erkent, Ozgur
    et al.
    Karaoguz, Hakan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Bozma, H. Isil
    Hierarchically self-organizing visual place memory2017Ingår i: Advanced Robotics, ISSN 0169-1864, E-ISSN 1568-5535, Vol. 31, nr 16, s. 865-879Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    A hierarchically organized visual place memory enables a robot to associate with its respective knowledge efficiently. In this paper, we consider how this organization can be done by the robot on its own throughout its operation and introduce an approach that is based on the agglomerative method SLINK. The hierarchy is obtained from a single link cluster analysis that is carried out based on similarity in the appearance space. As such, the robot can incrementally incorporate the knowledge of places into its visual place memory over the long term. The resulting place memory has an order-invariant hierarchy that enables both storage and construction efficiency. Experimental results obtained under the guided operation of the robot demonstrate that the robot is able to organize its place knowledge and relate to it efficiently. This is followed by experimental results under autonomous operation in which the robot evolves its visual place memory completely on its own.
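The single-link cluster structure underlying the place memory can be illustrated with a naive agglomerative implementation; SLINK itself computes the same dendrogram, but in O(n^2) time and O(n) memory. The 1-D points standing in for place appearance descriptors are made up for illustration.

```python
import numpy as np

# Toy "appearance descriptors": two tight groups of places and an outlier.
X = np.array([[0.0], [0.1], [0.15], [5.0], [5.2], [9.0]])

clusters = [[i] for i in range(len(X))]
merges = []
while len(clusters) > 1:
    # Single-link distance: closest pair of members across two clusters.
    best, pair = np.inf, None
    for a in range(len(clusters)):
        for b in range(a + 1, len(clusters)):
            d = min(abs(X[i, 0] - X[j, 0])
                    for i in clusters[a] for j in clusters[b])
            if d < best:
                best, pair = d, (a, b)
    a, b = pair
    merges.append((sorted(clusters[a] + clusters[b]), best))
    clusters[a] = clusters[a] + clusters[b]
    del clusters[b]

# Each merge records (member indices, merge distance): an
# order-invariant hierarchy over the stored places.
for members, dist in merges:
    print(members, round(dist, 2))
```

Because single linkage merges at the minimum cross-cluster distance, a new place observation can be incorporated incrementally without reshaping the hierarchy above its merge level, which is the storage/construction efficiency the paper exploits.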

  • 42. Evestedt, Niclas
    et al.
    Ward, Erik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Folkesson, John
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Axehill, Daniel
    Interaction aware trajectory planning for merge scenarios in congested traffic situations, 2016. In: 2016 IEEE 19th International Conference on Intelligent Transportation Systems, IEEE, 2016, pp. 465-472. Conference paper (Refereed)
    Abstract [en]

    In many traffic situations there are times where interaction with other drivers is necessary and unavoidable in order to safely progress towards an intended destination. This is especially true for merge manoeuvres into dense traffic, where drivers sometimes must be somewhat aggressive and show the intention of merging in order to interact with the other driver and make that driver open the gap needed to execute the manoeuvre safely. Many motion planning frameworks for autonomous vehicles adopt a reactive approach in which simple models of other traffic participants are used, and therefore need to adhere to large margins in order to behave safely. However, the large margins needed can sometimes get the system stuck in congested traffic where time gaps between vehicles are too small. In other situations, such as a highway merge, it can be significantly more dangerous to stop on the entrance ramp if the gaps are found to be too small than to make a slightly more aggressive manoeuvre and let the driver behind open the gap needed. To remedy this problem, this work uses the Intelligent Driver Model (IDM) to explicitly model the interaction of other drivers, and evaluates risk by their required deceleration in a similar manner to the Minimizing Overall Braking Induced by Lane changes (MOBIL) model that has previously been used in large-scale traffic simulations. This allows the algorithm to evaluate the effect of our own trajectory plans on other drivers by simulating the nearby traffic situation. Finding a globally optimal solution is often intractable in these situations, so instead a large set of candidate trajectories is generated and evaluated against the traffic scene by forward simulations of other traffic participants. By discretization and the use of an efficient trajectory generator together with efficient modelling of the traffic scene, real-time demands can be met.
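
    The Intelligent Driver Model used for the forward simulations has a standard closed form. A minimal sketch, with illustrative default parameter values rather than those of the paper:

    ```python
    import math

    def idm_acceleration(v, v_lead, gap,
                         v0=30.0,   # desired speed [m/s]
                         T=1.5,     # desired time headway [s]
                         a_max=1.0, # maximum acceleration [m/s^2]
                         b=2.0,     # comfortable deceleration [m/s^2]
                         s0=2.0,    # minimum gap [m]
                         delta=4.0):
        """IDM: acceleration of a follower driving at speed v with
        distance 'gap' to a leader at speed v_lead."""
        dv = v - v_lead  # closing speed
        # desired dynamic gap
        s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a_max * b)))
        return a_max * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)
    ```

    In a MOBIL-style risk evaluation, one would simulate a candidate merge trajectory and check how strongly this function forces the follower in the target lane to decelerate; a large required deceleration marks the candidate as unsafe.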

  • 43.
    Ghadirzadeh, Ali
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Maki, Atsuto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Deep predictive policy training using reinforcement learning, 2017. In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2017, Institute of Electrical and Electronics Engineers (IEEE), 2017, pp. 2351-2358, article id 8206046. Conference paper (Refereed)
    Abstract [en]

    Skilled robot task learning is best implemented by predictive action policies due to the inherent latency of sensorimotor processes. However, training such predictive policies is challenging as it involves finding a trajectory of motor activations for the full duration of the action. We propose a data-efficient deep predictive policy training (DPPT) framework with a deep neural network policy architecture which maps an image observation to a sequence of motor activations. The architecture consists of three sub-networks referred to as the perception, policy and behavior super-layers. The perception and behavior super-layers force an abstraction of visual and motor data, trained with synthetic and simulated training samples, respectively. The policy super-layer is a small sub-network with fewer parameters that maps data between the abstracted manifolds. It is trained for each task using policy search reinforcement learning methods. We demonstrate the suitability of the proposed architecture and learning framework by training predictive policies for skilled object grasping and ball throwing on a PR2 robot. The effectiveness of the method is illustrated by the fact that these tasks are trained using only about 180 real robot attempts with qualitative terminal rewards.

  • 44.
    Göbelbecker, Moritz
    et al.
    University of Freiburg.
    Hanheide, Marc
    University of Lincoln.
    Gretton, Charles
    University of Birmingham.
    Hawes, Nick
    University of Birmingham.
    Pronobis, Andrzej
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Aydemir, Alper
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Sjöö, Kristoffer
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Zender, Hendrik
    DFKI, Saarbruecken.
    Dora: A Robot that Plans and Acts Under Uncertainty, 2012. In: Proceedings of the 35th German Conference on Artificial Intelligence (KI'12), 2012. Conference paper (Refereed)
    Abstract [en]

    Dealing with uncertainty is one of the major challenges when constructing autonomous mobile robots. The CogX project addressed key aspects of this by developing and implementing mechanisms for self-understanding and self-extension -- i.e. awareness of gaps in knowledge, and the ability to reason and act to fill those gaps. We discuss our robot Dora, a showcase outcome of that project: Dora can perform a variety of search tasks in unexplored environments by exploiting probabilistic knowledge representations while retaining efficiency through a fast planning system.

  • 45.
    Güler, Püren
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Learning Object Properties From Manipulation for Manipulation, 2017. Doctoral thesis, monograph (Other academic)
    Abstract [en]

    The world contains objects with various properties - rigid, granular, liquid, elastic or plastic. As humans, while interacting with objects, we plan our manipulation by considering their properties. For instance, while holding a rigid object such as a brick, we adapt our grasp based on its centre of mass so as not to drop it. On the other hand, while manipulating a deformable object, we may consider properties beyond the centre of mass, such as elasticity and brittleness, for grasp stability. Therefore, knowing object properties is an integral part of skilled manipulation of objects.

    To manipulate objects skillfully, robots should be able to predict object properties as humans do. To predict these properties, interactions with objects are essential. Such interactions give rise to distinct sensory signals that contain information about the object properties. Signals from a single sensory modality may give ambiguous information or noisy measurements. Hence, by integrating multiple sensory modalities (vision, touch, audio or proprioception), a manipulated object can be observed from different aspects, which can decrease the uncertainty in the observed properties. By analyzing the perceived sensory signals, a robot reasons about the object properties and adjusts its manipulation based on this information. During this adjustment, the robot can make use of a simulation model to predict the object behavior and plan the next action. For instance, if an object is assumed to be rigid before interaction but exhibits deformable behavior after interaction, an internal simulation model can be used to predict the load force exerted on the object, so that appropriate manipulation can be planned in the next action. Thus, learning about object properties can be defined as an active procedure: the robot explores object properties actively and purposefully by interacting with the object, adjusting its manipulation based on the sensory information and the object behavior predicted by an internal simulation model.

    This thesis investigates the mechanisms mentioned above for learning object properties: (i) multi-sensory information, (ii) simulation and (iii) active exploration. In particular, we investigate these three mechanisms as different and complementary ways of extracting a certain object property, the deformability of objects. Firstly, we investigate the feasibility of using visual and/or tactile data to classify the content of a container based on the deformation observed when a robotic hand squeezes and deforms the container. According to our results, both visual and tactile sensory data individually give high accuracy rates when classifying the content type based on the deformation. Next, we investigate the use of a simulation model to estimate the object deformability that is revealed through a manipulation. The proposed method accurately identifies the deformability of the test objects in synthetic and real-world data. Finally, we investigate the integration of the deformation simulation in a robotic active perception framework to extract the heterogeneous deformability properties of an environment through physical interactions. In experiments on real-world objects, we illustrate that the active perception framework can map the heterogeneous deformability properties of a surface.

  • 46.
    Güler, Püren
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Pieropan, A.
    Ishikawa, M.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Estimating deformability of objects using meshless shape matching, 2017. In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2017, Institute of Electrical and Electronics Engineers (IEEE), 2017, pp. 5941-5948, article id 8206489. Conference paper (Refereed)
    Abstract [en]

    Humans interact with deformable objects on a daily basis, but this still represents a challenge for robots. To enable manipulation of and interaction with deformable objects, robots need to be able to extract and learn the deformability of objects both prior to and during the interaction. Physics-based models are commonly used to predict the physical properties of deformable objects and simulate their deformation accurately. The most popular simulation techniques are force-based models that need force measurements. In this paper, we explore the applicability of a geometry-based simulation method called meshless shape matching (MSM) for estimating the deformability of objects. The main advantages of MSM are its controllability and computational efficiency, which make it popular in computer graphics for simulating complex interactions of multiple objects at the same time. Additionally, a useful feature of MSM that differentiates it from other physics-based simulations is that it is independent of force measurements, which may not be available to a robotic framework lacking force/torque sensors. In this work, we design a method to estimate deformability based on certain properties, such as volume conservation. Using the finite element method (FEM), we create ground-truth deformability for various settings to evaluate our method. The experimental evaluation shows that our approach is able to accurately identify the deformability of test objects, supporting the value of MSM for robotic applications.
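
    The core of meshless shape matching is a polar decomposition that extracts the best rigid rotation and translation between the rest shape and the current deformed points; each point is then pulled towards its rigidly transformed goal position, with the pull strength acting as the stiffness parameter. A minimal 2D sketch (function names are illustrative; MSM implementations typically run this per particle cluster in 3D):

    ```python
    import math

    def shape_matching_goals(rest, deformed):
        """One MSM step in 2D: fit a rigid rotation+translation of the
        rest shape to the deformed points and return the goal positions.
        Deformability enters as the fraction of the way each point is
        moved towards its goal per time step."""
        n = len(rest)
        cm0 = [sum(p[k] for p in rest) / n for k in (0, 1)]      # rest centroid
        cm = [sum(p[k] for p in deformed) / n for k in (0, 1)]   # current centroid
        # covariance A_pq = sum_i p_i q_i^T with centroid-relative p_i, q_i
        a = b = c = d = 0.0
        for (x0, y0), (x, y) in zip(rest, deformed):
            qx, qy = x0 - cm0[0], y0 - cm0[1]
            px, py = x - cm[0], y - cm[1]
            a += px * qx; b += px * qy
            c += py * qx; d += py * qy
        # rotation part of the 2D polar decomposition of A_pq
        r = math.hypot(a + d, c - b)
        cos_t, sin_t = (a + d) / r, (c - b) / r
        return [(cm[0] + cos_t * (x0 - cm0[0]) - sin_t * (y0 - cm0[1]),
                 cm[1] + sin_t * (x0 - cm0[0]) + cos_t * (y0 - cm0[1]))
                for x0, y0 in rest]
    ```

    For a purely rigid motion the goal positions coincide with the deformed points, so no restoring force arises; any residual between the two measures the non-rigid deformation the stiffness parameter acts on.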

  • 47.
    Güler, Püren
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Pieropan, Alessandro
    Univrses, Stockholm, Sweden.;Univ Tokyo, Ishikawa Watanabe Lab, Tokyo, Japan..
    Ishikawa, Masatoshi
    Univ Tokyo, Ishikawa Watanabe Lab, Tokyo, Japan..
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH Royal Inst Technol, Robot Percept & Learning Lab, Sch Comp Sci & Commun, Stockholm, Sweden..
    Estimating deformability of objects using meshless shape matching, 2017. In: 2017 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS) / [ed] Bicchi, A; Okamura, A, IEEE, 2017, pp. 5941-5948. Conference paper (Refereed)
    Abstract [en]

    Humans interact with deformable objects on a daily basis, but this still represents a challenge for robots. To enable manipulation of and interaction with deformable objects, robots need to be able to extract and learn the deformability of objects both prior to and during the interaction. Physics-based models are commonly used to predict the physical properties of deformable objects and simulate their deformation accurately. The most popular simulation techniques are force-based models that need force measurements. In this paper, we explore the applicability of a geometry-based simulation method called meshless shape matching (MSM) for estimating the deformability of objects. The main advantages of MSM are its controllability and computational efficiency, which make it popular in computer graphics for simulating complex interactions of multiple objects at the same time. Additionally, a useful feature of MSM that differentiates it from other physics-based simulations is that it is independent of force measurements, which may not be available to a robotic framework lacking force/torque sensors. In this work, we design a method to estimate deformability based on certain properties, such as volume conservation. Using the finite element method (FEM), we create ground-truth deformability for various settings to evaluate our method. The experimental evaluation shows that our approach is able to accurately identify the deformability of test objects, supporting the value of MSM for robotic applications.

  • 48.
    Hang, Kaiyu
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Stork, Johannes A.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Pollard, Nancy S.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    A Framework for Optimal Grasp Contact Planning, 2017. In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 2, no. 2, pp. 704-711. Article in journal (Refereed)
    Abstract [en]

    We consider the problem of finding grasp contacts that are optimal under a given grasp quality function on arbitrary objects. Our approach formulates contact-level grasping as a path finding problem in the space of supercontact grasps. The initial supercontact grasp contains all grasps, and at each step along a path grasps are removed. For this, we introduce and formally characterize the search space structure and cost functions under which minimal cost paths correspond to optimal grasps. Our formulation avoids expensive exhaustive search and reduces computational cost by several orders of magnitude. We present admissible heuristic functions and exploit approximate heuristic search to further reduce the computational cost while maintaining bounded suboptimality for the resulting grasps. We exemplify our formulation with point-contact grasping, for which we define domain-specific heuristics and demonstrate optimality and bounded suboptimality by comparing against exhaustive and uniform cost search on example objects. Furthermore, we explain how to restrict the search graph to satisfy grasp constraints for modeling hand kinematics. We also analyze our algorithm empirically in terms of created and visited search states and the resultant effective branching factor.
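
    The optimality and bounded-suboptimality guarantees referred to here are those of heuristic graph search. As a generic illustration (a textbook A* over an explicit cost graph, not the paper's grasp-specific search space): with an admissible heuristic the returned path is optimal, and inflating the heuristic by a factor w > 1 yields weighted A*, whose solution cost is bounded by w times the optimum.

    ```python
    import heapq

    def astar(start, goal, neighbours, h):
        """Best-first search ordered by f = g + h. With admissible h the
        first goal expansion is optimal; replacing h with w*h (w > 1)
        gives bounded-suboptimal weighted A*."""
        frontier = [(h(start), 0.0, start, [start])]
        best_g = {start: 0.0}
        while frontier:
            f, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path, g
            for nxt, cost in neighbours(node):
                g2 = g + cost
                if g2 < best_g.get(nxt, float('inf')):
                    best_g[nxt] = g2
                    heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
        return None, float('inf')
    ```

    In the paper's setting the nodes would be supercontact grasps, edges would remove contacts, and the edge costs would be derived from the grasp quality function; this sketch only shows the search machinery itself.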

  • 49.
    Haustein, Joshua
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Hang, Kaiyu
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Integrating motion and hierarchical fingertip grasp planning, 2017. In: 2017 IEEE International Conference on Robotics and Automation (ICRA), Institute of Electrical and Electronics Engineers (IEEE), 2017, pp. 3439-3446, article id 7989392. Conference paper (Refereed)
    Abstract [en]

    In this work, we present an algorithm that simultaneously searches for a high quality fingertip grasp and a collision-free path for a robot hand-arm system to achieve it. The algorithm combines a bidirectional sampling-based motion planning approach with a hierarchical contact optimization process. Rather than tackling these problems in a decoupled manner, the grasp optimization is guided by the proximity to collision-free configurations explored by the motion planner. We implemented the algorithm for a 13-DoF manipulator and show that it is capable of efficiently planning reachable high quality grasps in cluttered environments. Further, we show that our algorithm outperforms a decoupled integration in terms of planning runtime.

  • 50. Hawasly, M.
    et al.
    Pokorny, Florian T.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Ramamoorthy, S.
    Multi-scale activity estimation with spatial abstractions, 2017. In: 3rd International Conference on Geometric Science of Information, GSI 2017, Springer, 2017, Vol. 10589, pp. 273-281. Conference paper (Refereed)
    Abstract [en]

    Estimation and forecasting of dynamic state are fundamental to the design of autonomous systems such as intelligent robots. State-of-the-art algorithms, such as the particle filter, face computational limitations when needing to maintain beliefs over a hypothesis space that is made large by the dynamic nature of the environment. We propose an algorithm that utilises a hierarchy of such filters, exploiting a filtration arising from the geometry of the underlying hypothesis space. In addition to computational savings, such a method can accommodate the availability of evidence at varying degrees of coarseness. We show, using synthetic trajectory datasets, that our method achieves a better normalised error in prediction and better time to convergence to a true class when compared against baselines that do not similarly exploit geometric structure.
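
    A single level of such a filter hierarchy is an ordinary bootstrap particle filter. A minimal sketch (pure Python, names illustrative; the paper's contribution lies in running such filters at several coarseness levels of a geometric filtration, which this sketch does not attempt):

    ```python
    import random

    def particle_filter_step(particles, motion, likelihood, obs, rng=random):
        """One bootstrap particle filter step: propagate each particle
        through the motion model, weight by the observation likelihood,
        and resample back to uniform weights."""
        predicted = [motion(p, rng) for p in particles]
        weights = [likelihood(obs, p) for p in predicted]
        total = sum(weights)
        if total == 0.0:  # all particles inconsistent with the evidence
            return predicted
        return rng.choices(predicted, weights=weights, k=len(predicted))
    ```

    The computational limitation the abstract mentions shows up directly here: the particle count needed to cover a hypothesis space grows with its size, which is what motivates maintaining coarser beliefs at higher levels of the hierarchy.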
