Search results 251 - 300 of 305
  • 251.
    Robison Fernlund, Joanne
    et al.
    KTH, Skolan för arkitektur och samhällsbyggnad (ABE), Mark- och vattenteknik, Teknisk geologi och geofysik.
    Zimmerman, Robert W.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Influence of volume/mass on grain-size curves and conversion of image-analysis size to sieve size, 2007. In: Engineering Geology, ISSN 0013-7952, E-ISSN 1872-6917, Vol. 90, no. 3-4, pp. 124-137. Journal article (Refereed)
    Abstract [en]

    Image analysis of aggregates does not measure the same size as sieve analysis. The size of aggregates, determined by sieve analysis, is presented with respect to the percent cumulative mass, whereas image analysis does not measure mass. Results are often presented in percent particles or percent area. Several researchers have claimed that more accurate volume and mass determinations are necessary for accurate construction of grain-size curves. In the present work, several methods for reconstructing volume and thus mass of aggregates from image analysis (IA) have been tested to see how they influence the grain-size distribution curves. The actual mass of the individual particles was found to have little or no influence on the grain-size distribution curve, which is normalized and thus insensitive to mass. Accurate conversion of image-analysis size to sieve size is dependent upon how particles pass through sieves. Most existing methods base their conversion of image-analysis size to sieve size on the intermediate axis, multiplied by some factor. The present work shows that there is no direct correlation between the intermediate axes and sieve size. A universal conversion of image-analysis size to sieve size has been developed, using the minimum-bounding square around the minimum projected area. This measure yields very good correlation with sieve-analysis results.
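
    The conversion above reduces to a small geometric computation. As an illustration only (not the authors' code), the following Python sketch finds the side of the minimum bounding square of a 2-D outline, assuming the minimum projected area of the particle has already been extracted as a point set:

        import numpy as np

        def min_bounding_square_side(outline, n_angles=360):
            # Side of the smallest enclosing square over all in-plane
            # rotations of a 2-D particle outline (n points x 2 coords).
            pts = np.asarray(outline, dtype=float)
            best = np.inf
            for theta in np.linspace(0.0, np.pi / 2, n_angles):
                c, s = np.cos(theta), np.sin(theta)
                rotated = pts @ np.array([[c, -s], [s, c]])
                extent_x = rotated[:, 0].max() - rotated[:, 0].min()
                extent_y = rotated[:, 1].max() - rotated[:, 1].min()
                best = min(best, max(extent_x, extent_y))
            return best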

  • 252. Romero, Javier
    et al.
    Feix, Thomas
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kjellström, Hedvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Extracting Postural Synergies for Robotic Grasping, 2013. In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 29, no. 6, pp. 1342-1352. Journal article (Refereed)
    Abstract [en]

    We address the problem of representing and encoding human hand motion data using nonlinear dimensionality reduction methods. We build our work on the notion of postural synergies, which are typically based on a linear embedding of the data. In addition to addressing the encoding of postural synergies using nonlinear methods, we relate our work to control strategies for combined reaching and grasping movements. We show the drawbacks of the (commonly made) causality assumption and propose methods that model the data as being generated from an inferred latent manifold to cope with the problem. Another important contribution is a thorough analysis of the parameters used in the employed dimensionality reduction techniques. Finally, we provide an experimental evaluation that shows how the proposed methods outperform the standard techniques, both in terms of recognition and generation of motion patterns.
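
    For context, the linear-embedding baseline that the paper builds on (and extends with nonlinear methods) can be sketched with PCA; the data here is a synthetic stand-in for recorded joint angles:

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        joint_angles = rng.normal(size=(500, 20))      # samples x joints

        pca = PCA(n_components=2)                      # two postural synergies
        latent = pca.fit_transform(joint_angles)       # low-dimensional code
        reconstructed = pca.inverse_transform(latent)  # postures from 2 synergies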

  • 253.
    Romero, Javier
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Feix, Thomas
    Kjellström, Hedvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Spatio-Temporal Modeling of Grasping Actions, 2010. In: IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems (IROS 2010), 2010, pp. 2103-2108. Conference paper (Refereed)
    Abstract [en]

    Understanding the spatial dimensionality and temporal context of human hand actions can provide representations for programming grasping actions in robots and inspire design of new robotic and prosthetic hands. The natural representation of human hand motion has high dimensionality. For specific activities such as handling and grasping of objects, the commonly observed hand motions lie on a lower-dimensional non-linear manifold in hand posture space. Although full body human motion is well studied within Computer Vision and Biomechanics, there is very little work on the analysis of hand motion with nonlinear dimensionality reduction techniques. In this paper we use Gaussian Process Latent Variable Models (GPLVMs) to model the lower dimensional manifold of human hand motions during object grasping. We show how the technique can be used to embed high-dimensional grasping actions in a lower-dimensional space suitable for modeling, recognition and mapping.
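
    A minimal sketch of fitting a GPLVM to hand posture data, assuming the GPy toolkit and synthetic stand-in data (not the paper's setup):

        import numpy as np
        import GPy  # third-party toolkit with a GPLVM implementation

        Y = np.random.randn(200, 20)               # stand-in joint-angle data
        model = GPy.models.GPLVM(Y, input_dim=2)   # 2-D latent manifold
        model.optimize(messages=False)
        latent = model.X                           # learned embedding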

  • 254. Romero, Javier
    et al.
    Kjellström, Hedvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Non-parametric hand pose estimation with object context, 2013. In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 31, no. 8, pp. 555-564. Journal article (Refereed)
    Abstract [en]

    In the spirit of recent work on contextual recognition and estimation, we present a method for estimating the pose of human hands, employing information about the shape of the object in the hand. Despite the fact that most applications of human hand tracking involve grasping and manipulation of objects, the majority of methods in the literature assume a free hand, isolated from the surrounding environment. Occlusion of the hand by grasped objects does in fact often pose a severe challenge to the estimation of hand pose. In the presented method, object occlusion is not only compensated for, it contributes to the pose estimation in a contextual fashion, and this without an explicit model of object shape. Our hand tracking method is non-parametric, performing a nearest neighbor search in a large database (.. entries) of hand poses with and without grasped objects. The system, which operates in real time, is robust to self occlusions, object occlusions and segmentation errors, and provides full hand pose reconstruction from monocular video. Temporal consistency in hand pose is taken into account, without explicitly tracking the hand in the high-dimensional pose space. Experiments show the non-parametric method to outperform other state-of-the-art regression methods, while operating at a significantly lower computational cost than comparable model-based hand tracking methods.
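
    The non-parametric lookup with a soft temporal prior can be sketched as follows (array shapes and weights are hypothetical, not the paper's values):

        import numpy as np
        from scipy.spatial import cKDTree

        # Hypothetical database: image descriptors paired with stored poses.
        descriptors = np.random.rand(100000, 64)
        poses = np.random.rand(100000, 28)
        tree = cKDTree(descriptors)

        def estimate_pose(query, prev_pose, k=5, temporal_weight=0.3):
            # k-nearest-neighbour lookup with a temporal-consistency prior,
            # instead of tracking in the high-dimensional pose space.
            _, idx = tree.query(query, k=k)
            candidate = poses[idx].mean(axis=0)
            if prev_pose is None:
                return candidate
            return (1 - temporal_weight) * candidate + temporal_weight * prev_pose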

  • 255.
    Romero, Javier
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kjellström, Hedvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Hands in Action: Real-Time 3D Reconstruction of Hands in Interaction with Objects, 2010. In: 2010 IEEE International Conference on Robotics and Automation (ICRA) / [ed] Rakotondrabe M; Ivan IA, 2010, pp. 458-463. Conference paper (Refereed)
    Abstract [en]

    This paper presents a method for vision based estimation of the pose of human hands in interaction with objects. Despite the fact that most robotics applications of human hand tracking involve grasping and manipulation of objects, the majority of methods in the literature assume a free hand, isolated from the surrounding environment. Our hand tracking method is non-parametric, performing a nearest neighbor search in a large database (100000 entries) of hand poses with and without grasped objects. The system operates in real time, it is robust to self occlusions, object occlusions and segmentation errors, and provides full hand pose reconstruction from markerless video. Temporal consistency in hand pose is taken into account, without explicitly tracking the hand in the high dimensional pose space.

  • 256.
    Romero, Javier
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kjellström, Hedvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Human-to-Robot Mapping of Grasps, 2008. Conference paper (Refereed)
    Abstract [en]

    We are developing a Programming by Demonstration (PbD) system in which recognition of objects and pick-and-place actions represents the basic building blocks for task learning. An important capability in this system is automatic visual recognition of human grasps, together with methods for mapping the human grasps to the functionally corresponding robot grasps. This paper describes the grasp recognition system, focusing on the human-to-robot mapping. The visual grasp classification and grasp orientation regression are described in our IROS 2008 paper [1]. In contrast to earlier approaches, no articulated 3D reconstruction of the hand over time takes place. The input data consists of a single image of the human hand. The hand shape is classified as one of six grasps by finding similar hand shapes in a large database of grasp images. From the database, the hand orientation is also estimated. The recognized grasp is then mapped to one of three predefined Barrett hand grasps. Depending on the type of robot grasp, a precomputed grasp strategy is selected. The strategy is further parameterized by the orientation of the hand relative to the environment.
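
    The mapping step can be pictured as a simple lookup; the sketch below is hypothetical (the grasp and strategy names are illustrative, not taken from the paper):

        # Hypothetical mapping from six recognized human grasp classes to
        # three predefined Barrett-hand grasp strategies.
        GRASP_MAP = {
            "power":       "barrett_wrap",
            "cylindrical": "barrett_wrap",
            "spherical":   "barrett_spherical",
            "tripod":      "barrett_spherical",
            "precision":   "barrett_precision",
            "pinch":       "barrett_precision",
        }

        def select_strategy(human_grasp, hand_orientation):
            robot_grasp = GRASP_MAP[human_grasp]
            # the precomputed strategy is further parameterized by orientation
            return robot_grasp, hand_orientation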

  • 257.
    Romero, Javier
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kjellström, Hedvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Modeling and Evaluation of Human-to-Robot Mapping of Grasps, 2009. In: ICAR: 2009 International Conference on Advanced Robotics, IEEE, 2009, pp. 228-233. Conference paper (Refereed)
    Abstract [en]

    We study the problem of human-to-robot grasp mapping as a basic building block of a learning-by-imitation system. The human hand posture, including both the grasp type and hand orientation, is first classified based on a single image and mapped to a specific robot hand. A metric for the evaluation based on the notion of virtual fingers is proposed. The first part of the experimental evaluation, performed in simulation, shows how the differences in embodiment between the human and robotic hand affect the grasp strategy. The second part, performed with a robotic system, demonstrates the feasibility of the proposed methodology in realistic applications.

  • 258.
    Romero, Javier
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kjellström, Hedvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Monocular Real-Time 3D Articulated Hand Pose Estimation, 2009. In: 9th IEEE-RAS International Conference on Humanoid Robots, HUMANOIDS09, 2009, pp. 87-92. Conference paper (Refereed)
    Abstract [en]

    Markerless, vision-based estimation of human hand pose over time is a prerequisite for a number of robotics applications, such as Learning by Demonstration (LbD), health monitoring, teleoperation and human-robot interaction. It is of special interest for humanoid platforms, where the number of degrees of freedom makes conventional programming challenging. Our primary application is LbD in natural environments, where the humanoid robot learns how to grasp and manipulate objects by observing a human performing a task. This paper presents a method for continuous vision-based estimation of human hand pose. The method is non-parametric, performing a nearest neighbor search in a large database (100000 entries) of hand pose examples. The main contribution is a real-time system, robust to partial occlusions and segmentation errors, that provides full hand pose recognition from markerless data. An additional contribution is the modeling of temporal consistency in hand pose, without explicitly tracking the hand in the high-dimensional pose space. The pose representation is rich enough to enable a descriptive human-to-robot mapping. Experiments show the pose estimation to be more robust and accurate than a non-parametric method without temporal constraints.

  • 259.
    Romero, Javier
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kyrki, Ville
    LUT, Lappeenranta, Finland.
    Argyros, Antonis
    Institute of Computer Science, FORTH, Crete, Greece.
    Dynamic Time Warping for binocular hand tracking and reconstruction, 2008. In: 2008 IEEE International Conference on Robotics and Automation, Vols 1-9, 2008, pp. 2289-2294. Conference paper (Refereed)
    Abstract [en]

    We show how matching and reconstruction of contour points can be performed using Dynamic Time Warping (DTW) for the purpose of 3D hand contour tracking. We evaluate the performance of the proposed algorithm in object manipulation activities and compare it with the Iterative Closest Point (ICP) method.
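
    DTW itself is a standard dynamic program; a minimal sketch for two point sequences (illustrative, not the paper's implementation):

        import numpy as np

        def dtw_cost(a, b):
            # Dynamic Time Warping between two contour-point sequences
            # a: (n, d) and b: (m, d); returns the accumulated alignment cost.
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = np.linalg.norm(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]  # backtracking through D yields the correspondences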

  • 260.
    Rubio, Oscar J.
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Hübner, Kai
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Representations for Object Grasping and Learning from Experience, 2010. In: IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems (IROS 2010), 2010, pp. 1566-1571. Conference paper (Refereed)
    Abstract [en]

    We study two important problems in the area of robot grasping: i) the methodology and representations for grasp selection on known and unknown objects, and ii) learning from experience for grasping of similar objects. The core part of the paper is the study of the different representations necessary for implementing grasping tasks on objects of different complexity. We show how to select a grasp satisfying force-closure, taking into account the parameters of the robot hand and collision-free paths. Our implementation also takes into account efficient computation at different levels of the system regarding representation, description and grasp hypothesis generation.

  • 261. Rudinac, M.
    et al.
    Kootstra, Geert
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Jonker, P. P.
    Learning and recognition of objects inspired by early cognition, 2012. In: Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, IEEE, 2012, pp. 4177-4184. Conference paper (Refereed)
    Abstract [en]

    In this paper, we present a unifying approach for learning and recognition of objects in unstructured environments through exploration. Taking inspiration from how young infants learn objects, we establish four principles for object learning. First, early object detection is based on an attention mechanism detecting salient parts in the scene. Second, motion of the object allows more accurate object localization. Next, acquiring multiple observations of the object through manipulation allows a more robust representation of the object. And last, object recognition benefits from a multi-modal representation. Using these principles, we developed a unifying method including visual attention, smooth pursuit of the object, and a multi-view and multi-modal object representation. Our results indicate the effectiveness of this approach and the improvement of the system when multiple observations are acquired from active object manipulation.

  • 262. Sanmohan,
    et al.
    Kruger, Volker
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kjellström, Hedvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Primitive-Based Action Representation and Recognition, 2011. In: Advanced Robotics, ISSN 0169-1864, E-ISSN 1568-5535, Vol. 25, no. 6-7, pp. 871-891. Journal article (Refereed)
    Abstract [en]

    In robotics, there has been a growing interest in expressing actions as a combination of meaningful subparts, commonly called motion primitives. Primitives are analogous to words in a language. Just as words put together according to the rules of the language form a sentence, primitives arranged with certain rules make an action. In this paper we investigate modeling and recognition of arm manipulation actions at different levels of complexity using primitives. Primitives are detected automatically in a sequential manner. Here, we assume no prior knowledge of primitives, but look for correlating segments across various sequences. All actions are then modeled within a single hidden Markov model whose structure is learned incrementally as new data is observed. We also generate an action grammar based on these primitives and thus link signals to symbols.
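
    As a rough illustration of the modeling step (batch-trained here, whereas the paper learns the structure incrementally), assuming the hmmlearn package and synthetic data:

        import numpy as np
        from hmmlearn import hmm  # assumes the hmmlearn package

        # Concatenated motion sequences; 'lengths' marks sequence boundaries.
        X = np.random.randn(300, 4)
        lengths = [100, 100, 100]

        model = hmm.GaussianHMM(n_components=6, covariance_type="diag", n_iter=50)
        model.fit(X, lengths)                 # one HMM over all observed actions
        states = model.predict(X, lengths)    # runs of equal states ~ primitives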

  • 263. Sanmohan,
    et al.
    Krüger, V.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Unsupervised learning of action primitives, 2010. In: 2010 10th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2010, IEEE, 2010, pp. 554-559. Conference paper (Refereed)
    Abstract [en]

    Action representation is a key issue in imitation learning for humanoids. With the recent finding of mirror neurons there has been a growing interest in expressing actions as a combination of meaningful subparts called primitives. Primitives can be thought of as an alphabet for human actions. In this paper we observe that human actions and objects can be seen as being intertwined: we can interpret actions from the way the body parts are moving, but also from their effect on the involved object. While human movements can look vastly different even under minor changes in location, orientation and scale, the use of the object can provide a strong invariant for the detection of motion primitives. In this paper we propose an unsupervised learning approach for action primitives that makes use of the human movements as well as the object state changes. We group actions according to the changes they make to the object state space. Movements that produce the same state change in the object state space are classified as instances of the same action primitive. This allows us to define action primitives as sets of movements where the movements of each primitive are connected through the object state change they induce.
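
    The grouping criterion can be sketched in a few lines; discrete object state changes serve as keys (the data layout is hypothetical):

        from collections import defaultdict

        def group_primitives(movements):
            # Each movement is (trajectory, state_before, state_after);
            # movements inducing the same object state change form one
            # action primitive.
            primitives = defaultdict(list)
            for trajectory, before, after in movements:
                primitives[(before, after)].append(trajectory)
            return primitives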

  • 264. Seita, D.
    et al.
    Pokorny, Florian T.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Mahler, J.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Franklin, M.
    Canny, J.
    Goldberg, K.
    Large-scale supervised learning of the grasp robustness of surface patch pairs, 2017. In: 2016 IEEE International Conference on Simulation, Modeling, and Programming for Autonomous Robots, SIMPAR 2016, Institute of Electrical and Electronics Engineers Inc., 2017, pp. 216-223. Conference paper (Refereed)
    Abstract [en]

    The robustness of a parallel-jaw grasp can be estimated by Monte Carlo sampling of perturbations in pose and friction but this is not computationally efficient. As an alternative, we consider fast methods using large-scale supervised learning, where the input is a description of a local surface patch at each of two contact points. We train and test with disjoint subsets of a corpus of 1.66 million grasps where robustness is estimated by Monte Carlo sampling using Dex-Net 1.0. We use the BIDMach machine learning toolkit to compare the performance of two supervised learning methods: Random Forests and Deep Learning. We find that both of these methods learn to estimate grasp robustness fairly reliably in terms of Mean Absolute Error (MAE) and ROC Area Under Curve (AUC) on a held-out test set. Speedups over Monte Carlo sampling are approximately 7500x for Random Forests and 1500x for Deep Learning.
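
    A minimal sketch of the Random Forest variant with scikit-learn, using synthetic stand-ins for the patch-pair descriptors and Monte Carlo robustness labels (not the Dex-Net corpus):

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.metrics import mean_absolute_error

        rng = np.random.default_rng(0)
        X, y = rng.random((10000, 30)), rng.random(10000)

        model = RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0)
        model.fit(X[:8000], y[:8000])                     # disjoint train subset
        mae = mean_absolute_error(y[8000:], model.predict(X[8000:]))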

  • 265.
    Sibirtseva, Elena
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Ghadirzadeh, Ali
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL. Intelligent Robotics Research Group, Aalto University, Espoo, Finland.
    Leite, Iolanda
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Björkman, Mårten
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Exploring Temporal Dependencies in Multimodal Referring Expressions with Mixed Reality, 2019. In: Virtual, Augmented and Mixed Reality. Multimodal Interaction: 11th International Conference, VAMR 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, July 26-31, 2019, Proceedings, Springer Verlag, 2019, pp. 108-123. Conference paper (Refereed)
    Abstract [en]

    In collaborative tasks, people rely on both verbal and non-verbal cues simultaneously to communicate with each other. For human-robot interaction to run smoothly and naturally, a robot should be equipped with the ability to robustly disambiguate referring expressions. In this work, we propose a model that can disambiguate multimodal fetching requests using modalities such as head movements, hand gestures, and speech. We analysed the data acquired from mixed reality experiments and formulated the hypothesis that modelling temporal dependencies of events in these three modalities increases the model's predictive power. We evaluated our model within a Bayesian framework that interprets referring expressions with and without exploiting the temporal prior.
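
    The Bayesian fusion at the core of such a model can be sketched as follows (illustrative only; the per-modality likelihoods and the temporal prior are assumed given):

        import numpy as np

        def disambiguate(prior, likelihoods):
            # Posterior over candidate objects given per-modality likelihoods
            # (e.g. head movement, hand gesture, speech), fused naive-Bayes
            # style; the prior can encode temporal context from earlier events.
            posterior = np.asarray(prior, dtype=float).copy()
            for lik in likelihoods:
                posterior *= lik
            return posterior / posterior.sum()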

  • 266.
    Sibirtseva, Elena
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Kontogiorgos, Dimosthenis
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Nykvist, Olov
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Karaoǧuz, Hakan
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Leite, Iolanda
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Gustafson, Joakim
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Tal, musik och hörsel, TMH.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    A Comparison of Visualisation Methods for Disambiguating Verbal Requests in Human-Robot Interaction, 2018. In: Proceedings of the 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) 2018, IEEE, 2018. Conference paper (Refereed)
    Abstract [en]

    Picking up objects requested by a human user is a common task in human-robot interaction. When multiple objects match the user's verbal description, the robot needs to clarify which object the user is referring to before executing the action. Previous research has focused on perceiving the user's multimodal behaviour to complement verbal commands, or on minimising the number of follow-up questions to reduce task time. In this paper, we propose a system for reference disambiguation based on visualisation and compare three methods to disambiguate natural language instructions. In a controlled experiment with a YuMi robot, we investigated real-time augmentations of the workspace in three conditions - head-mounted display, projector, and a monitor as the baseline - using objective measures such as time and accuracy, and subjective measures like engagement, immersion, and display interference. Significant differences were found in accuracy and engagement between the conditions, but no differences were found in task time. Despite the higher error rates in the head-mounted display condition, participants found that modality more engaging than the other two, but overall showed a preference for the projector condition over the monitor and head-mounted display conditions.

  • 267.
    Sidenbladh, Hedvig
    et al.
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Kragic, Danica
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Christensen, Henrik I.
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Person following behaviour for a mobile robot, 1999. In: Proceedings - IEEE International Conference on Robotics and Automation, 1999, pp. 670-675. Conference paper (Refereed)
    Abstract [en]

    In this paper, a person following behaviour for a mobile robot is presented. The head of the person is located using skin colour detection. A control loop is then fed with the camera movements required to put the upper part of the person in the center of the image. The algorithm was tested in different rooms of a research lab. It performed well under all lighting conditions except direct sunlight. Since the background and lighting cannot be controlled, the vision algorithm must be robust to such changes. However, since the computing power is quite limited, the algorithm must have as low complexity as possible.
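
    A minimal sketch of the perception step with OpenCV (the HSV thresholds are illustrative, not the paper's values):

        import cv2
        import numpy as np

        def head_offset(frame_bgr):
            # Offset of the skin-coloured blob centroid from the image centre;
            # feeding it to a proportional pan/tilt controller centres the person.
            hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))  # rough skin range
            m = cv2.moments(mask)
            if m["m00"] == 0:
                return None                                       # no skin found
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            h, w = mask.shape
            return cx - w / 2.0, cy - h / 2.0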

  • 268.
    Sjöö, Kristoffer
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Gálvez López, Dorian
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Paul, Chandana
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Object Search and Localization for an Indoor Mobile Robot, 2009. In: Journal of Computing and Information Technology, ISSN 1330-1136, E-ISSN 1846-3908, Vol. 17, no. 1, pp. 67-80. Journal article (Refereed)
    Abstract [en]

    In this paper we present a method for search and localization of objects with a mobile robot using a monocular camera with zoom capabilities. We show how to overcome the limitations of low-resolution images in object recognition by utilizing a combination of an attention mechanism and zooming as the first steps in the recognition process. The attention mechanism is based on receptive field co-occurrence histograms and the object recognition on SIFT feature matching. We present two methods for estimating the distance to the objects, which serve both as the input to the control of the zoom and for the final object localization. Through extensive experiments in a realistic environment, we highlight the strengths and weaknesses of both methods. To evaluate the usefulness of the method we also present results from experiments with an integrated system, where a global sensing plan is generated based on view planning to let the camera cover the space on a per-room basis.

  • 269.
    Smith, Christian
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Karayiannidis, Ioannis
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Nalpantidis, Lazaros
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Gratal, Javier
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Qi, Peng
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Dimarogonas, Dimos
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Dual arm manipulation - A survey, 2012. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 60, no. 10, pp. 1340-1353. Article, research review (Refereed)
    Abstract [en]

    Recent advances in both anthropomorphic robots and bimanual industrial manipulators have led to an increased interest in the specific problems pertaining to dual arm manipulation. For the future, we foresee robots performing human-like tasks in both domestic and industrial settings. It is therefore natural to study the specifics of dual arm manipulation in humans and methods for using the resulting knowledge in robot control. The related scientific problems range from low-level control to high-level task planning and execution. This review aims to summarize the current state of the art from the heterogeneous range of fields that study the different aspects of these problems, specifically in dual arm manipulation.

  • 270.
    Song, Dan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Huebner, Kai
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Embodiment-Specific Representation of Robot Grasping using Graphical Models and Latent-Space Discretization, 2011. In: 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2011, pp. 980-986. Conference paper (Refereed)
    Abstract [en]

    We study embodiment-specific robot grasping tasks, represented in a probabilistic framework. The framework consists of a Bayesian network (BN) integrated with a novel multivariate discretization model. The BN models the probabilistic relationships among tasks, objects, grasping actions and constraints. The discretization model provides a compact data representation that allows efficient learning of the conditional structures in the BN. To evaluate the framework, we use a database generated in a simulated environment including examples of a human and a robot hand interacting with objects. The results show that the different kinematic structures of the hands affect both the BN structure and the conditional distributions over the modeled variables. Both models achieve accurate task classification, and successfully encode the semantic task requirements in the continuous observation spaces. In an imitation experiment, we demonstrate that the representation framework can transfer task knowledge between different embodiments, and is therefore a suitable model for grasp planning and imitation in a goal-directed manner.

  • 271.
    Song, Dan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Hübner, Kai
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Task-Based Robot Grasp Planning Using Probabilistic Inference, 2015. In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 31, no. 3, pp. 546-561. Journal article (Refereed)
    Abstract [en]

    Grasping and manipulating everyday objects in a goal-directed manner is an important ability of a service robot. The robot needs to reason about task requirements and ground these in the sensorimotor information. Grasping and interaction with objects are challenging in real-world scenarios, where sensorimotor uncertainty is prevalent. This paper presents a probabilistic framework for the representation and modeling of robot-grasping tasks. The framework consists of Gaussian mixture models for generic data discretization, and discrete Bayesian networks for encoding the probabilistic relations among various task-relevant variables, including object and action features as well as task constraints. We evaluate the framework using a grasp database generated in a simulated environment including a human and two robot hand models. The generative modeling approach allows the prediction of grasping tasks given uncertain sensory data, as well as object and grasp selection in a task-oriented manner. Furthermore, the graphical model framework provides insights into dependencies between variables and features relevant for object grasping.
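
    The GMM discretization step can be sketched with scikit-learn; the data is a synthetic stand-in for continuous grasp features:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # Stand-in for continuous task-relevant features, e.g. object size,
        # approach direction, grasp configuration.
        X = np.random.default_rng(0).random((1000, 6))

        gmm = GaussianMixture(n_components=8, random_state=0).fit(X)
        discrete_states = gmm.predict(X)   # component index = discrete BN state
        # gmm.means_ and gmm.covariances_ retain the continuous representation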

  • 272.
    Song, Dan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic Jensfelt, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Multivariate Discretization for Bayesian Network Structure Learning in Robot Grasping, 2011. In: IEEE International Conference on Robotics and Automation (ICRA), 2011, IEEE conference proceedings, 2011, pp. 1944-1950. Conference paper (Refereed)
    Abstract [en]

    A major challenge in modeling with Bayesian networks (BNs) is learning the structure from both discrete and multivariate continuous data. A common approach in such situations is to discretize the continuous data before structure learning. However, efficient methods to discretize high-dimensional variables are largely lacking. This paper presents a novel method aimed specifically at the discretization of high-dimensional, highly correlated data. The method consists of two integrated steps: non-linear dimensionality reduction using sparse Gaussian process latent variable models, and discretization by application of a mixture model. The model is fully probabilistic and capable of facilitating structure learning from discretized data, while at the same time retaining the continuous representation. We evaluate the effectiveness of the method in the domain of robot grasping. Compared with traditional discretization schemes, our model excels both in task classification and in prediction of hand grasp configurations. Further, being a fully probabilistic model, it handles uncertainty in the data and can easily be integrated into other frameworks in a principled manner.

  • 273.
    Song, Dan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Hübner, Kai
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kyrki, Ville
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Learning task constraints in robot grasping, 2010. In: 4th International Conference on Cognitive Systems, CogSys 2010, 2010. Conference paper (Refereed)
  • 274.
    Song, Dan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Hübner, Kai
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kyrki, Ville
    Lappeenranta University of Technology, Finland.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Learning Task Constraints for Robot Grasping using Graphical Models, 2010. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, 2010. Conference paper (Refereed)
    Abstract [en]

    This paper studies the learning of task constraints that allow grasp generation in a goal-directed manner. We show how an object representation and a grasp generated on it can be integrated with the task requirements. The scientific problems tackled are (i) identification and modeling of such task constraints, and (ii) integration between a semantically expressed goal of a task and quantitative constraint functions defined in the continuous object-action domains. We first define constraint functions given a set of object and action attributes, and then model the relationships between object, action, constraint features and the task using Bayesian networks. The probabilistic framework deals with uncertainty, combines a priori knowledge with observed data, and allows inference on target attributes given only partial observations. We present a system designed to structure data generation and constraint-learning processes that is applicable to new tasks, embodiments and sensory data. The application of the task constraint model is demonstrated in a goal-directed imitation experiment.

  • 275.
    Song, Dan
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kyriazis, N.
    Oikonomidis, I.
    Papazov, C.
    Argyros, A.
    Burschka, D.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Predicting human intention in visual observations of hand/object interactions, 2013. In: 2013 IEEE International Conference on Robotics and Automation (ICRA), New York: IEEE, 2013, pp. 1608-1615. Conference paper (Refereed)
    Abstract [en]

    The main contribution of this paper is a probabilistic method for predicting human manipulation intention from image sequences of human-object interaction. Predicting intention amounts to inferring the imminent manipulation task when the human hand is observed to have stably grasped the object. Inference is performed by means of a probabilistic graphical model that encodes object grasping tasks over the 3D state of the observed scene. The 3D state is extracted from RGB-D image sequences by a novel vision-based, markerless hand-object 3D tracking framework. To deal with the high-dimensional state space and mixed data types (discrete and continuous) involved in grasping tasks, we introduce a generative vector quantization method using mixture models and self-organizing maps. This yields a compact model for the encoding of grasping actions, capable of handling uncertain and partial sensory data. Experimentation showed that the model trained on simulated data can provide a potent basis for accurate goal inference with partial and noisy observations of actual real-world demonstrations. We also show a grasp selection process, guided by the inferred human intention, to illustrate the use of the system for goal-directed grasp imitation.

  • 276. Song, Haoran
    et al.
    Haustein, Joshua Alexander
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Yuan, Weihao
    Hang, Kaiyu
    Wang, Michael Yu
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Stork, Johannes A.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma system, CAS.
    Multi-Object Rearrangement with Monte Carlo Tree Search: A Case Study on Planar Nonprehensile Sorting. Manuscript (preprint) (Other academic)
    Abstract [en]

    In this work, we address a planar non-prehensile sorting task. Here, a robot needs to push many densely packed objects belonging to different classes into a configuration where these classes are clearly separated from each other. To achieve this, we propose to employ Monte Carlo tree search equipped with a task-specific heuristic function. We evaluate the algorithm on various simulated sorting tasks and observe its effectiveness in reliably sorting up to 40 convex objects. In addition, we observe that the algorithm is also capable of sorting non-convex objects, as well as convex objects in the presence of immovable obstacles.
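
    A generic MCTS skeleton with a heuristic in place of random rollouts looks roughly as follows; this is a sketch under assumed actions/step/heuristic callables, not the authors' planner:

        import math, random

        class Node:
            def __init__(self, state, parent=None):
                self.state, self.parent = state, parent
                self.children, self.visits, self.value = [], 0, 0.0

        def mcts(root_state, actions, step, heuristic, iterations=1000, c=1.4):
            # Monte Carlo tree search where a task-specific heuristic (e.g.
            # how well the classes are separated) scores expanded leaves.
            root = Node(root_state)
            for _ in range(iterations):
                node = root
                while node.children:                  # selection via UCB1
                    pv = node.visits
                    node = max(node.children,
                               key=lambda ch: ch.value / (ch.visits + 1e-9)
                               + c * math.sqrt(math.log(pv + 1) / (ch.visits + 1e-9)))
                for a in actions(node.state):         # expansion: one child per push
                    node.children.append(Node(step(node.state, a), parent=node))
                leaf = random.choice(node.children) if node.children else node
                reward = heuristic(leaf.state)        # heuristic replaces a rollout
                while leaf is not None:               # backpropagation
                    leaf.visits += 1
                    leaf.value += reward
                    leaf = leaf.parent
            return max(root.children, key=lambda ch: ch.visits).state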

  • 277.
    Stork, Johannes Andreas
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Bekiroglu, Yasemin
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Learning Predictive State Representation for in-hand manipulation, 2015. In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2015, no. June, pp. 3207-3214. Conference paper (Refereed)
    Abstract [en]

    We study the use of Predictive State Representations (PSRs) for modeling an in-hand manipulation task through interaction with the environment. We extend the original PSR model to the new domain of in-hand manipulation and address the problem of partial observability by introducing new kernel-based features that integrate both actions and observations. The model is learned directly from haptic data and is used to plan a series of actions that rotate the object in the hand to a specific configuration by pushing it against a table. Further, we analyze the model's belief states using additional visual data and enable planning of action sequences when the observations are ambiguous. We show that the learned representation is geometrically meaningful by embedding labeled action-observation traces. Suitability for planning is demonstrated by a post-grasp manipulation example that changes the object state to multiple specified target configurations.

  • 278.
    Stork, Johannes A.
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Learning Predictive State Representations for Planning, 2015. In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE Press, 2015, pp. 3427-3434. Conference paper (Refereed)
    Abstract [en]

    Predictive State Representations (PSRs) allow modeling of dynamical systems directly in observables and without relying on latent variable representations. A problem that arises from learning PSRs is that it is often hard to attribute semantic meaning to the learned representation. This makes generalization and planning in PSRs challenging. In this paper, we extend PSRs and introduce the notion of PSRs that include prior information (P-PSRs) to learn representations which are suitable for planning and interpretation. By learning a low-dimensional embedding of test features, we map belief points with similar semantics to the same region of a subspace. This facilitates better generalization for planning and semantic interpretation of the learned representation. Specifically, we show how to overcome the training sample bias and introduce feature selection such that the resulting representation emphasizes observables related to the planning task. We show that our P-PSRs result in qualitatively meaningful representations and present quantitative results that indicate improved suitability for planning.

  • 279.
    Stork, Johannes A.
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Pokorny, Florian T.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    A Topology-based Object Representation for Clasping, Latching and Hooking, 2015. In: IEEE-RAS International Conference on Humanoid Robots (HUMANOIDS 2013), 2015, pp. 138-145. Conference paper (Refereed)
    Abstract [en]

    We present a loop-based topological object representation for objects with holes. The representation is used to model object parts suitable for grasping, e.g. handles, and it incorporates local volume information about them. Furthermore, we present a grasp synthesis framework that utilizes this representation for synthesizing caging grasps that are robust under measurement noise. The approach is complementary to a local contact-based force-closure analysis, as it depends on global topological features of the object. We perform an extensive evaluation with four robotic hands on synthetic data. Additionally, we provide real-world experiments using a Kinect sensor on two robotic platforms: a Schunk dexterous hand attached to a Kuka robot arm as well as a Nao humanoid robot. In the case of the Nao platform, we provide initial experiments showing that our approach can be used to plan whole-arm hooking as well as caging grasps involving only one hand.

  • 280.
    Stork, Johannes A.
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Pokorny, Florian T.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Integrated Motion and Clasp Planning with Virtual Linking, 2013. In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2013, pp. 3007-3014. Conference paper (Refereed)
    Abstract [en]

    In this work, we address the problem of simultaneous clasp and motion planning on unknown objects with holes. Clasping an object enables a rich set of activities such as dragging, toting, pulling and hauling, which can be applied to both soft and rigid objects. To this end, we define a virtual linking measure which characterizes the spatial relation between the robot hand and the object. The measure utilizes a set of closed curves arising from an approximately shortest basis of the object's first homology group. We define task spaces to perform collision-free motion planning with respect to multiple prioritized objectives using a sampling-based planning method. The approach is tested in simulation using different robot hands and various real-world objects.
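
    The notion of linking between closed curves can be made concrete with the discrete Gauss linking integral; the sketch below is a coarse segment-endpoint approximation, illustrative of the idea rather than the paper's measure:

        import numpy as np

        def linking_number(curve_a, curve_b):
            # Discrete Gauss linking integral between two closed 3-D
            # polylines, a numerical proxy for how strongly one loop
            # "links" the other.
            a, b = np.asarray(curve_a, float), np.asarray(curve_b, float)
            total = 0.0
            for i in range(len(a)):
                r1 = a[i]
                dr1 = a[(i + 1) % len(a)] - r1
                for j in range(len(b)):
                    r2 = b[j]
                    dr2 = b[(j + 1) % len(b)] - r2
                    d = r1 - r2
                    dist3 = np.linalg.norm(d) ** 3
                    if dist3 > 1e-12:
                        total += np.dot(np.cross(dr1, dr2), d) / dist3
            return total / (4.0 * np.pi)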

  • 281.
    Stork, Johannes A.
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Pokorny, Florian T.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Towards Postural Synergies for Caging Grasps, 2013. In: Hand Synergies - how to tame the complexity of grasping: Workshop, IEEE International Conference on Robotics and Automation (ICRA 2013), 2013. Conference paper (Refereed)
    Abstract [en]

    Postural synergies have in recent years been successfully used as a low-dimensional representation for the control of robotic hands, and in particular for the synthesis of force-closed grasps. This work proposes to study caging grasps using synergies and reports on an initial analysis of postural synergies for such grasps. Caging grasps, which were originally only analyzed for simple planar objects, have recently been shown to be useful for certain manipulation tasks and are now also being investigated for complicated object geometries. In this workshop contribution, we investigate a synthetic data-set of caging grasps of four robotic hands on several everyday objects and report on an analysis of synergies for this data-set.

  • 282.
    Tajvar, Pouria
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Varava, Anastasiia
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Tumova, Jana
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Robust motion planning for non-holonomic robots with planar geometric constraints, 2019. In: Proceedings of the ISRR2019, 2019. Conference paper (Refereed)
    Abstract [en]

    We present a motion planning algorithm for cases where the geometry of the robot cannot be neglected and where its dynamics are governed by non-holonomic constraints. While the two problems are classically treated separately, the orientation of the robot strongly affects its possible motions, both from the obstacle-avoidance and from the kinodynamic-constraints perspective. We adopt an abstraction-based approach ensuring asymptotic completeness. To handle the complex dynamics, a data-driven approach is presented to construct a library of feedback motion primitives that guarantee a bounded error in following arbitrarily long trajectories. The library is constructed along local abstractions of the dynamics, which enables the addition of new motion primitives through abstraction refinement. Both the robot and the obstacles are represented as a union of circles, which allows arbitrarily precise approximation of complex geometries. To handle the geometrical constraints, we represent over- and under-approximations of the three-dimensional collision space as a finite set of two-dimensional "slices" corresponding to different intervals of the robot's orientation space. Starting from a coarse slicing, we use the collision space over-approximation to find a valid path and the under-approximation to check for potential path non-existence. If neither attempt is conclusive, the abstraction is refined. The algorithm is applied to motion planning and control of a rover subject to slipping, without prior modelling of the slip.
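
    A circle-union collision test for a fixed orientation is straightforward; the sketch below is illustrative (not the paper's implementation), with the slice construction noted in the closing comment:

        import numpy as np

        def collides(robot_circles, obstacle_circles, pose):
            # Robot and obstacles are unions of circles; pose = (x, y, theta)
            # places the robot's body-frame circles in the world frame.
            x, y, theta = pose
            c, s = np.cos(theta), np.sin(theta)
            for rx, ry, rr in robot_circles:
                wx, wy = x + c * rx - s * ry, y + s * rx + c * ry
                for ox, oy, orr in obstacle_circles:
                    if (wx - ox) ** 2 + (wy - oy) ** 2 <= (rr + orr) ** 2:
                        return True
            return False

        # A conservative "slice" for an orientation interval can be built by
        # inflating rr with the largest centre displacement over that interval.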

  • 283.
    Tegin, Johan
    et al.
    KTH, Skolan för industriell teknik och management (ITM), Maskinkonstruktion (Inst.).
    Ekvall, Staffan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Wikander, Jan
    KTH, Skolan för industriell teknik och management (ITM), Maskinkonstruktion (Inst.).
    Iliev, Boyko
    Örebro University.
    Demonstration-based learning and control for automatic grasping (2009). In: Intelligent Service Robotics, ISSN 1861-2776, Vol. 2, no. 1, pp. 23-30. Journal article (Refereed)
    Abstract [en]

    We present a method for automatic grasp generation based on object shape primitives in a Programming by Demonstration framework. The system first recognizes the grasp performed by a demonstrator, as well as the object it is applied to, and then generates a suitable grasping strategy on the robot. We start by presenting how to model and learn grasps and map them to robot hands. We continue by performing dynamic simulation of the grasp execution, with a focus on grasping objects whose pose is not perfectly known.

  • 284.
    Tegin, Johan
    et al.
    KTH, Skolan för industriell teknik och management (ITM), Maskinkonstruktion (Inst.), Maskinkonstruktion (Avd.).
    Iliev, Boyko
    Örebro University, Sweden.
    Skoglund, Alexander
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Wikander, Jan
    KTH, Skolan för industriell teknik och management (ITM), Maskinkonstruktion (Inst.).
    Real Life Grasping using an Under-actuated Robot Hand - Simulation and Experiments (2009). In: ICAR: 2009 14th International Conference on Advanced Robotics, IEEE, 2009, pp. 366-373. Conference paper (Refereed)
    Abstract [en]

    We present a system that includes an under-actuated anthropomorphic hand and control algorithms for autonomous grasping of everyday objects. The system comprises a control framework for hybrid force/position control in simulation and reality, a grasp simulator, and an under-actuated robot hand equipped with tactile sensors. We start by presenting the robot hand, the simulation environment, and the control framework that together enable dynamic simulation of an under-actuated robot hand. We continue by presenting simulation results, and also discuss and exemplify the use of simulation in relation to autonomous grasping. Finally, we use the very same controller in real-world grasping experiments to validate the simulations and to exemplify the system's capabilities and limitations.
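    Hybrid force/position control of this kind is commonly realized with a diagonal selection matrix that splits the Cartesian axes into position-controlled and force-controlled directions. The sketch below is a textbook-style step with illustrative gains and signals, not the controller used in the paper.

    import numpy as np

    def hybrid_force_position(x, x_des, f, f_des, S, kp=50.0, kf=0.05):
        """One step of textbook hybrid force/position control.

        S is a diagonal 0/1 selection matrix: 1 = position-controlled axis,
        0 = force-controlled axis. Returns a Cartesian velocity command.
        """
        v_pos = kp * (x_des - x)   # position error feedback
        v_frc = kf * (f_des - f)   # force error pushed along the constraint
        return S @ v_pos + (np.eye(len(x)) - S) @ v_frc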

  • 285.
    Topp, Elin Anna
    et al.
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Kragic, Danica
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Jensfelt, Patric
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Christensen, Henrik
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    An interactive interface for service robots (2004). In: 2004 IEEE International Conference on Robotics and Automation, Vols 1-5, Proceedings, 2004, pp. 3469-3474. Conference paper (Refereed)
    Abstract [en]

    In this paper, we present an initial design of an interactive interface for a service robot based on multi-sensor fusion. We show how the integration of speech, vision and laser range data can be performed using a high level of abstraction. Guided by a number of scenarios commonly used in a service robot framework, the experimental evaluation shows the benefit of sensory integration, which allows the design of a robust and natural interaction system using a set of simple perceptual algorithms.

  • 286.
    Tumova, Jana
    et al.
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Marzinotto, Alejandro
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Dimarogonas, Dimos V.
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Maximally Satisfying LTL Action Planning (2014). In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014), IEEE, 2014, pp. 1503-1510. Conference paper (Refereed)
    Abstract [en]

    We focus on the autonomous robot action planning problem from Linear Temporal Logic (LTL) specifications, where an action refers to a "simple" motion or manipulation task, such as "go from A to B" or "grasp a ball". At the high-level planning layer, we propose an algorithm to synthesize a maximally satisfying discrete control strategy while taking into account that the robot's action executions may fail. Furthermore, we interface the high-level plan with the robot's low-level controller through a reactive middle-layer formalism called Behavior Trees (BTs). We demonstrate the proposed framework using a NAO robot capable of walking, ball grasping and ball dropping actions.
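    To give a flavor of the middle layer, the sketch below implements the two classic BT composites, Sequence and Fallback, and ticks a toy tree for the ball-grasping scenario; the leaf actions are placeholders, not the NAO interface.

    class Action:
        """Leaf wrapping a robot action; returns 'success', 'failure' or 'running'."""
        def __init__(self, fn):
            self.fn = fn
        def tick(self):
            return self.fn()

    class Sequence:
        """Ticks children in order; fails or keeps running as soon as a child does."""
        def __init__(self, *children):
            self.children = children
        def tick(self):
            for child in self.children:
                status = child.tick()
                if status != 'success':
                    return status
            return 'success'

    class Fallback:
        """Ticks children in order; succeeds or keeps running as soon as a child does."""
        def __init__(self, *children):
            self.children = children
        def tick(self):
            for child in self.children:
                status = child.tick()
                if status != 'failure':
                    return status
            return 'failure'

    # Hypothetical task: try to grasp the ball, walking to it first if needed.
    tree = Fallback(Action(lambda: 'failure'),            # ball already in hand?
                    Sequence(Action(lambda: 'success'),   # walk to ball
                             Action(lambda: 'success')))  # grasp ball
    print(tree.tick())  # -> 'success'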

  • 287.
    Varava, Anastasiia
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Carvalho, J. Frederico
    Pokorny, Florian T.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Caging and Path Non-Existence: a Deterministic Sampling-Based Verification Algorithm (2017). Conference paper (Refereed)
  • 288.
    Varava, Anastasiia
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Pokorny, Florian T.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Caging Grasps of Rigid and Partially Deformable 3-D Objects With Double Fork and Neck Features (2016). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 32, no. 6, pp. 1479-1497. Journal article (Refereed)
    Abstract [en]

    Caging provides an alternative to point-contact-based rigid grasping, relying on reasoning about the global free configuration space of an object under consideration. While substantial progress has been made toward the analysis, verification, and synthesis of cages of polygonal objects in the plane, the use of caging as a tool for manipulating general complex objects in 3-D remains challenging. In this work, we introduce the problem of caging rigid and partially deformable 3-D objects that exhibit geometric features we call double forks and necks. Our approach is based on the linking number, a classical topological invariant, which allows us to determine sufficient conditions for caging objects with these features even when the object under consideration is partially deformable under a set of neck-preserving or double-fork-preserving deformations. We present synthesis and verification algorithms and demonstrate their application to caging 3-D meshes.
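    The invariant in question can be made concrete with a numerical Gauss linking integral between two closed polylines; the midpoint-rule implementation below is a generic illustration of the linking number, not the paper's verification algorithm.

    import numpy as np

    def linking_number(curve_a, curve_b):
        """Gauss linking integral for two closed 3-D polylines (midpoint rule).

        curve_a, curve_b: (n, 3) vertex arrays; each curve is closed by joining
        the last vertex back to the first. With sufficiently fine sampling the
        result converges to the integer linking number.
        """
        def edges(c):
            c = np.asarray(c, float)
            d = np.roll(c, -1, axis=0) - c   # edge vectors
            return d, c + 0.5 * d            # vectors and midpoints

        da, ma = edges(curve_a)
        db, mb = edges(curve_b)
        total = 0.0
        for ta, ra in zip(da, ma):
            diff = ra - mb                          # (m, 3)
            crossed = np.cross(ta, db)              # ta x tb for all b-edges
            dist3 = np.linalg.norm(diff, axis=1) ** 3
            total += np.sum(np.einsum('ij,ij->i', crossed, diff) / dist3)
        return total / (4.0 * np.pi)

    # Two circles forming a Hopf link: the result should be close to +/-1.
    t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
    ring_a = np.c_[np.cos(t), np.sin(t), np.zeros_like(t)]
    ring_b = np.c_[1.0 + np.cos(t), np.zeros_like(t), np.sin(t)]
    print(round(linking_number(ring_a, ring_b), 2))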

  • 289.
    Varava, Anastasiia
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL.
    Pinto Basto de Carvalho, Joao Frederico
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL. KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma system, CAS.
    Kragic, Danica
    KTH, Tidigare Institutioner (före 2005), Numerisk analys och datalogi, NADA. KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL. KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma system, CAS.
    Free Space of Rigid Objects: Caging, Path Non-Existence, and Narrow Passage Detection. In: The International Journal of Robotics Research, ISSN 0278-3649, E-ISSN 1741-3176. Journal article (Refereed)
    Abstract [en]

    In this work we propose algorithms to explicitly construct a conservative estimate of the configuration space of rigid objects in 2D and 3D. Our approach is able to detect compact path components and narrow passages in configuration space, which are important for applications in robotic manipulation and path planning. Moreover, as we demonstrate, it is also applicable to the identification of molecular cages in chemistry. Our algorithms are based on a decomposition of the resulting 3- and 6-dimensional configuration spaces into slices corresponding to a finite sample of fixed orientations in configuration space. We utilize dual diagrams of unions of balls and uniform grids of orientations to approximate the configuration space. We carry out experiments to evaluate the computational efficiency on a set of objects with different geometric features, thus demonstrating that our approach is applicable to different object shapes. We investigate the performance of our algorithm by computing increasingly fine-grained approximations of the object's configuration space.

  • 290.
    Vejdemo Johansson, Mikael
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. AI Laboratory, Jožef Stefan Institute, Ljubljana, Slovenia .
    Pokorny, Florian T.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Skraba, Primoz
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Cohomological learning of periodic motion (2015). In: Applicable Algebra in Engineering, Communication and Computing, ISSN 0938-1279, E-ISSN 1432-0622, Vol. 26, no. 1-2, pp. 5-26. Journal article (Refereed)
    Abstract [en]

    This work develops a novel framework that can automatically detect, parameterize and interpolate periodic motion patterns obtained from a motion capture sequence. Using our framework, periodic motions such as walking and running gaits, or any motion sequence with periodic structure such as cleaning or dancing, can be detected automatically and without manual marking of the period start and end points. Our approach constructs an intrinsic parameterization of the motion and is computationally fast. Using this parameterization, we are able to generate prototypical periodic motions. Additionally, we are able to interpolate between various motions, yielding a rich class of 'mixed' periodic actions. Our approach is based on ideas from applied algebraic topology. In particular, we apply a novel persistent-cohomology-based method, for the first time in a graphics application, which enables us to recover circular coordinates of motions. We also develop a suitable notion of homotopy that can be used to interpolate between periodic motion patterns. Our framework is directly applicable to the construction of walk cycles for animating character motions with motion graphs or state-machine-driven animation engines, and processed our examples at an average speed of 11.78 frames per second.
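    As a much cruder stand-in for the persistent-cohomology machinery, the sketch below recovers a circular coordinate for a synthetic near-periodic pose sequence by projecting onto the top two principal components and unwrapping a phase angle; unlike the paper's method, this only works when the motion loop projects to a rough ellipse, and the data here are synthetic.

    import numpy as np

    # Synthetic pose sequence: three cycles of a noisy periodic motion.
    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 6.0 * np.pi, 600)
    poses = np.c_[np.sin(t), np.cos(t), 0.3 * np.sin(2 * t)]
    poses += 0.01 * rng.normal(size=poses.shape)

    # Project onto the top two principal components and read off a phase.
    X = poses - poses.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    u, v = X @ Vt[0], X @ Vt[1]
    phase = np.unwrap(np.arctan2(v, u))                       # monotone phase
    period = 2 * np.pi / np.median(np.abs(np.diff(phase)))    # frames per cycle
    print("estimated period (frames):", round(period))        # ~200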

  • 291.
    Vicente, Isabel Serrano
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Eklundh, Jan-Olof
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Learning and recognition of object manipulation actions using linear and nonlinear dimensionality reduction (2007). In: 2007 RO-MAN: 16th IEEE International Symposium on Robot and Human Interactive Communication, Vols 1-3, 2007, pp. 1003-1008. Conference paper (Refereed)
    Abstract [en]

    In this work, we perform an extensive statistical evaluation for learning and recognition of object manipulation actions. We concentrate on single arm/hand actions but study the problem of modeling and dimensionality reduction for cases where actions are very similar to each other in terms of arm motions. For this purpose, we evaluate one linear and one nonlinear dimensionality reduction technique: Principal Component Analysis and Spatio-Temporal Isomap. Classification of query sequences is based on different variants of Nearest Neighbor classification. We thoroughly describe and evaluate the different parameters that affect the modeling strategies and perform the evaluation with a training set of 20 people.
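    A compact sketch of the linear half of such a pipeline, PCA followed by 1-NN classification on flattened motion sequences; Spatio-Temporal Isomap is not reproduced here, and the shapes and preprocessing are illustrative rather than the paper's.

    import numpy as np

    def nearest_neighbor_pca(train_X, train_y, test_X, n_components=10):
        """1-NN action classification after linear dimensionality reduction.

        train_X/test_X: one flattened, equal-length motion sequence per row;
        train_y: integer action labels as a numpy array.
        """
        mean = train_X.mean(axis=0)
        _, _, Vt = np.linalg.svd(train_X - mean, full_matrices=False)
        P = Vt[:n_components]                      # principal directions
        tr = (train_X - mean) @ P.T                # project training sequences
        te = (test_X - mean) @ P.T                 # project query sequences
        d = np.linalg.norm(te[:, None, :] - tr[None, :, :], axis=2)
        return train_y[np.argmin(d, axis=1)]       # label of nearest neighbor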

  • 292. Vicente, Isabel Serrano
    et al.
    Kyrki, Ville
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Larsson, Martin
    Action recognition and understanding through motor primitives (2007). In: Advanced Robotics, ISSN 0169-1864, E-ISSN 1568-5535, Vol. 21, no. 15, pp. 1687-1707. Journal article (Refereed)
    Abstract [en]

    In robotics, recognition of human activity has been used extensively for robot task learning through imitation and demonstration. However, there has not been much work on modeling and recognition of activities that involve object manipulation and grasping. In this work, we deal with single arm/hand actions that are very similar to each other in terms of arm/hand motions. The approach is based on the hypothesis that actions can be represented as sequences of motion primitives. Given this, a set of five manipulation actions of different levels of complexity is investigated. To model the process, we use a combination of discriminative support vector machines and generative hidden Markov models. The experimental evaluation, performed with 10 people, investigates both the definition and structure of primitive motions and the validity of the modeling approach taken.
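    For the generative half of such a model, the most likely primitive sequence given per-frame scores is recovered with the Viterbi algorithm. The sketch below is a generic log-domain implementation in which the emission scores are simply inputs; in a pipeline like the paper's they would come from the discriminative (SVM) stage.

    import numpy as np

    def viterbi(log_A, log_pi, log_B):
        """Most likely hidden primitive sequence in a discrete HMM.

        log_A: (k, k) log transition matrix; log_pi: (k,) log initial probs;
        log_B: (T, k) per-frame log likelihoods of each primitive.
        """
        T, k = log_B.shape
        dp = log_pi + log_B[0]
        back = np.zeros((T, k), dtype=int)
        for t in range(1, T):
            scores = dp[:, None] + log_A              # (from-state, to-state)
            back[t] = np.argmax(scores, axis=0)       # best predecessor per state
            dp = scores[back[t], np.arange(k)] + log_B[t]
        path = [int(np.argmax(dp))]
        for t in range(T - 1, 0, -1):                 # backtrack
            path.append(back[t, path[-1]])
        return path[::-1]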

  • 293.
    Vina, Francisco
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Karayiannidis, Yiannis
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Pauwels, Karl
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Smith, Christian
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    In-hand manipulation using gravity and controlled slip (2015). In: Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on, IEEE conference proceedings, 2015, pp. 5636-5641. Conference paper (Refereed)
    Abstract [en]

    In this work we propose a sliding mode controller for in-hand manipulation that repositions a tool in the robot's hand by using gravity and controlling the slippage of the tool. In our approach, the robot holds the tool with a pinch grasp and we model the system as a link attached to the gripper via a passive revolute joint with friction, i.e., the grasp only affords rotational motions of the tool around a given axis of rotation. The robot controls the slippage by varying the opening between the fingers in order to allow the tool to move to the desired angular position following a reference trajectory. We show experimentally how the proposed controller achieves convergence to the desired tool orientation under variations of the tool's inertial parameters.
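    The core of a sliding mode scheme of this kind can be summarized in a few lines: a sliding surface built from the angular tracking error and a switching term that modulates the friction torque. The gains below are illustrative, and the hardware-specific mapping from this command to finger opening is omitted.

    import numpy as np

    def sliding_mode_step(phi, dphi, phi_ref, dphi_ref, lam=5.0, k=2.0):
        """Switching term of a sliding-mode pivoting controller (illustrative).

        phi, dphi: tool angle and angular rate; phi_ref, dphi_ref: reference.
        The returned value would modulate the gripping force (and hence the
        friction torque) about a feedforward level.
        """
        e, de = phi - phi_ref, dphi - dphi_ref
        s = de + lam * e          # sliding surface on the tracking error
        return -k * np.sign(s)    # drive the state onto the surface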

  • 294.
    Vina, Francisco
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Smith, Christian
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Karayiannidis, Yiannis
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Adaptive Contact Point Estimation for Autonomous Tool Manipulation (2014). Conference paper (Refereed)
    Abstract [en]

    Autonomous grasping and manipulation of tools enables robots to perform a large variety of tasks in unstructured environments such as households. Many common household tasks involve controlling the motion of the tip of a tool while it is in contact with another object. Thus, for these types of tasks the robot requires knowledge of the location of the contact point while it is executing the task in order to accomplish the manipulation objective. In this work we propose an integral adaptive control law that uses force/torque measurements to estimate online the location of the contact point between the tool manipulated by the robot and the surface which the tool touches.
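    To see why force/torque measurements suffice, note that the contact point r satisfies tau = r x f at the sensor. The sketch below solves for r by batch least squares over several measurements, as a simpler offline stand-in for the paper's online integral adaptive law; force directions must vary for r to be fully recoverable.

    import numpy as np

    def skew(v):
        """Cross-product matrix: skew(v) @ w == np.cross(v, w)."""
        return np.array([[0.0, -v[2], v[1]],
                         [v[2], 0.0, -v[0]],
                         [-v[1], v[0], 0.0]])

    def estimate_contact_point(forces, torques):
        """Least-squares contact point estimate from wrist F/T measurements.

        Uses tau = r x f  =>  tau = -skew(f) @ r, stacked over measurements.
        forces, torques: lists of 3-vectors from the force/torque sensor.
        """
        A = np.vstack([-skew(f) for f in forces])
        b = np.hstack(torques)
        r, *_ = np.linalg.lstsq(A, b, rcond=None)
        return r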

  • 295.
    Viña Barrientos, Francisco
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Karayiannidis, Yiannis
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Smith, Christian
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Adaptive Control for Pivoting with Visual and Tactile Feedback (2016). Conference paper (Refereed)
    Abstract [en]

    In this work we present an adaptive control approach for pivoting, an in-hand manipulation maneuver that consists of rotating a grasped object to a desired orientation relative to the robot's hand. We perform pivoting by means of gravity, allowing the object to rotate between the fingers of a one-degree-of-freedom gripper and controlling the gripping force to ensure that the object follows a reference trajectory and arrives at the desired angular position. We use a visual pose estimation system to track the pose of the object and force measurements from tactile sensors to control the gripping force. The adaptive controller employs an update law that accommodates errors in the friction coefficient, which is one of the most common sources of uncertainty in manipulation. Our experiments confirm that the proposed adaptive controller successfully pivots a grasped object in the presence of uncertainty in the object's friction parameters.
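    A heavily simplified sketch of one control update with an adapted friction estimate; the gains, signs, and the exact form of the update law are illustrative and not taken from the paper.

    def adaptive_pivot_step(e, de, mu_hat, dt, lam=4.0, k=1.5, gamma=0.1):
        """One update of a toy adaptive pivoting law.

        e, de: angular tracking error and its rate; mu_hat: current friction
        coefficient estimate; dt: control period.
        """
        s = de + lam * e                                 # combined tracking error
        # Less estimated friction -> squeeze harder to get the same torque.
        grip_force = k * abs(s) / max(mu_hat, 1e-3)
        # Gradient-style adaptation driven by the tracking error.
        mu_hat = max(1e-3, mu_hat - gamma * s * dt)
        return grip_force, mu_hat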

  • 296.
    Viña, Francisco
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bekiroglu, Yasemin
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Smith, Christian
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Karayiannidis, Yiannis
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Predicting Slippage and Learning Manipulation Affordances through Gaussian Process Regression (2013). In: Proceedings of the 2013 IEEE-RAS International Conference on Humanoid Robots, IEEE Computer Society, 2013. Conference paper (Refereed)
    Abstract [en]

    Object grasping is commonly followed by some form of object manipulation, either when using the grasped object as a tool or when actively changing its position in the hand through in-hand manipulation to afford further interaction. In this process, slippage may occur due to inappropriate contact forces, various types of noise, and/or unexpected interaction or collision with the environment. In this paper, we study the problem of identifying continuous bounds on the forces and torques that can be applied to a grasped object before slippage occurs. We model the problem as kinesthetic rather than cutaneous learning, given that the measurements originate from a wrist-mounted force-torque sensor. Given the continuous output, this regression problem is solved using a Gaussian Process approach. We demonstrate a dual-armed humanoid robot that can autonomously learn force and torque bounds and use these to execute actions on objects such as sliding and pushing. We show that the model can be used not only for the detection of maximum allowable forces and torques but also for potentially identifying what types of tasks, denoted manipulation affordances, a specific grasp configuration allows. The latter can then be used either to avoid specific motions or as a simple step towards achieving in-hand manipulation of objects through interaction with the environment.
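    A minimal sketch of the regression stage using scikit-learn's Gaussian process implementation on hypothetical slippage data; the features, kernel choice, and data are placeholders for the paper's force-torque measurements.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Hypothetical data: applied force/torque directions (X) versus the
    # magnitude at which slippage was detected (y), from a wrist F/T sensor.
    rng = np.random.default_rng(1)
    X = rng.uniform(-1.0, 1.0, size=(40, 2))
    y = 5.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.normal(size=40)

    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(X, y)
    # Continuous bound estimate plus predictive uncertainty per query.
    mean, std = gp.predict(X[:3], return_std=True)
    print(mean, std)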

  • 297. Wang, L.
    et al.
    Markdahl, Johan
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Optimeringslära och systemteori.
    Hu, Xiaoming
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Optimeringslära och systemteori. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    A high level decentralized tracking algorithm for three manipulators subject to motion constraints (2012). In: Intelligent Control and Automation (WCICA), 2012 10th World Congress on, IEEE, 2012, pp. 1920-1924. Conference paper (Refereed)
    Abstract [en]

    This paper considers a tracking problem for three manipulators grasping a rigid object. The control objective is to coordinate the movements of the manipulators using local information in order to align the object attitude with a desired rest attitude and the object position with a time-parameterized reference trajectory. The object rigidity is modelled as a constraint on the motion of the end-effectors, requiring that the distance between any pair of end-effectors be constant in time. The control law consists of a rotational part and a translational part, where the translational part also incorporates a linear observer of the reference trajectory. We prove stability and illustrate the system dynamics by simulation.
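    The rigidity constraint itself is easy to state in code: all pairwise end-effector distances must stay constant over time. A small checker, with positions and tolerance as placeholders:

    import numpy as np

    def rigidity_violation(p, p_prev, tol=1e-6):
        """Check the rigidity constraint on three end-effectors.

        p, p_prev: (3, 3) arrays of end-effector positions at two time steps.
        Returns True if any pairwise distance changed by more than tol.
        """
        def dists(q):
            return [np.linalg.norm(q[i] - q[j])
                    for i in range(3) for j in range(i + 1, 3)]
        return max(abs(a - b) for a, b in zip(dists(p), dists(p_prev))) > tol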

  • 298.
    Yang, Guang-Zhong
    et al.
    Imperial Coll London, Hamlyn Ctr Robot Surg, London, England..
    Dario, Paolo
    Scuola Super Sant Anna, Biomed Robot, Pisa, Italy..
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL. KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma system, CAS.
    Social robotics - Trust, learning, and social interaction (2018). In: Science Robotics, ISSN 2470-9476, Vol. 3, no. 21, article id eaau8839. Journal article (Other academic)
  • 299.
    Yuan, Weihao
    et al.
    Hong Kong Univ Sci & Technol, ECE, Robot Inst, Hong Kong, Peoples R China..
    Hang, Kaiyu
    Yale Univ, Mech Engn & Mat Sci, New Haven, CT USA..
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma system, CAS. KTH, Skolan för elektroteknik och datavetenskap (EECS), Robotik, perception och lärande, RPL.
    Wang, Michael Y.
    Hong Kong Univ Sci & Technol, ECE, Robot Inst, Hong Kong, Peoples R China..
    Stork, Johannes A.
    Orebro Univ, Ctr Appl Autonomous Sensor Syst, Orebro, Sweden..
    End-to-end nonprehensile rearrangement with deep reinforcement learning and simulation-to-reality transfer (2019). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 119, pp. 119-134. Journal article (Refereed)
    Abstract [en]

    Nonprehensile rearrangement is the problem of controlling a robot to interact with objects through pushing actions in order to reconfigure the objects into a predefined goal pose. In this work, we rearrange one object at a time in an environment with obstacles using an end-to-end policy that maps raw pixels as visual input to control actions without any form of engineered feature extraction. To reduce the amount of training data that needs to be collected using a real robot, we propose a simulation-to-reality transfer approach. In the first step, we model the nonprehensile rearrangement task in simulation and use deep reinforcement learning to learn a suitable rearrangement policy, which requires on the order of hundreds of thousands of example actions for training. Thereafter, we collect a small dataset of only 70 episodes of real-world actions as supervised examples for adapting the learned rearrangement policy to real-world input data. In this process, we make use of newly proposed strategies for improving the reinforcement learning process, such as heuristic exploration and the curation of a balanced set of experiences. We evaluate our method in both simulation and a real setting using a Baxter robot to show that the proposed approach can effectively improve the training process in simulation, as well as efficiently adapt the learned policy to the real-world application, even when the camera pose differs from simulation. Additionally, we show that the learned system not only provides adaptive behavior to handle unforeseen events during execution, such as distraction objects, sudden changes in the positions of the objects, and obstacles, but also can deal with obstacle shapes that were not present in the training process.
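    A bare-bones sketch of the adaptation step under one reading of the abstract: the simulation-trained policy is fine-tuned on the small real-world dataset as supervised (image, action) pairs. The network architecture, action count, and data are placeholders, not the paper's model.

    import torch
    import torch.nn as nn

    # Policy head: 64x64 RGB image -> logits over 5 hypothetical push actions.
    policy = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=5, stride=2),  # 64x64 -> 30x30 feature maps
        nn.ReLU(),
        nn.Flatten(),
        nn.Linear(16 * 30 * 30, 5),
    )
    opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    # Stand-ins for the ~70 real-world episodes used as supervised examples.
    real_images = torch.randn(70, 3, 64, 64)
    real_actions = torch.randint(0, 5, (70,))

    for epoch in range(20):                 # short supervised fine-tuning loop
        opt.zero_grad()
        loss = loss_fn(policy(real_images), real_actions)
        loss.backward()
        opt.step()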

  • 300.
    Yuan, Weihao
    et al.
    Hong Kong Univ Sci & Technol, Dept Elect & Comp Engn, Hong Kong, Peoples R China..
    Hang, Kaiyu
    Yale Univ, Dept Mech Engn & Mat Sci, New Haven, CT USA..
    Song, Haoran
    Hong Kong Univ Sci & Technol, Dept Mech & Aerosp Engn, Hong Kong, Peoples R China..
    Kragic, Danica
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Intelligenta system, Robotik, perception och lärande, RPL. KTH, Skolan för elektroteknik och datavetenskap (EECS), Centra, Centrum för autonoma system, CAS.
    Wang, Michael Y.
    Hong Kong Univ Sci & Technol, Dept Elect & Comp Engn, Hong Kong, Peoples R China.;Hong Kong Univ Sci & Technol, Dept Mech & Aerosp Engn, Hong Kong, Peoples R China..
    Stork, Johannes A.
    Örebro Univ, Ctr Appl Autonomous Sensor Syst, Örebro, Sweden.
    Reinforcement Learning in Topology-based Representation for Human Body Movement with Whole Arm Manipulation (2019). In: 2019 International Conference on Robotics and Automation (ICRA) / [ed] Howard, A; Althoefer, K; Arai, F; Arrichiello, F; Caputo, B; Castellanos, J; Hauser, K; Isler, V; Kim, J; Liu, H; Oh, P; Santos, V; Scaramuzza, D; Ude, A; Voyles, R; Yamane, K; Okamura, A, Institute of Electrical and Electronics Engineers (IEEE), 2019, pp. 2153-2160. Conference paper (Refereed)
    Abstract [en]

    Moving a human body or a large and bulky object may require the strength of whole arm manipulation (WAM). This type of manipulation places the load on the robot's arms and relies on global properties of the interaction to succeed, rather than on local contacts such as grasping or non-prehensile pushing. In this paper, we learn to generate motions that enable WAM for holding and transporting humans in certain rescue or patient care scenarios. We model the task as a reinforcement learning problem in order to provide a robot behavior that can directly respond to external perturbation and human motion. For this, we represent global properties of the robot-human interaction with topology-based coordinates that are computed from arm and torso positions. These coordinates also allow transferring the learned policy to other body shapes and sizes. For training and evaluation, we simulate a dynamic sea-rescue scenario and show in quantitative experiments that the policy can solve unseen scenarios with differently shaped humans, floating humans, or perception noise. Our qualitative experiments show that the subsequent transporting after holding is achieved, and we demonstrate that the policy can be directly transferred to a real-world setting.
