1 - 24 of 24
  • 1.
    Bekiroglu, Yasemin; Song, Dan; Wang, Lu; Kragic, Danica
    (all: KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP; Centre for Autonomous Systems, CAS)
    A probabilistic framework for task-oriented grasp stability assessment (2013). In: 2013 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2013, pp. 3040-3047. Conference paper (Peer reviewed)
    Abstract [en]

    We present a probabilistic framework for grasp modeling and stability assessment. The framework facilitates assessment of grasp success in a goal-oriented way, taking into account both geometric constraints for task affordances and stability requirements specific to a task. We integrate high-level task information introduced by a teacher in a supervised setting with low-level stability requirements acquired through a robot's self-exploration. The conditional relations between tasks and multiple sensory streams (vision, proprioception and tactile) are modeled using Bayesian networks. The generative modeling approach both allows prediction of grasp success and provides insights into dependencies between variables and features relevant for object grasping.

  • 2.
    Bohg, Jeannette; Barck-Holst, Carl; Hübner, Kai; Ralph, Maria; Rasolzadeh, Babak; Song, Dan; Kragic, Danica
    (all: KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP; Centre for Autonomous Systems, CAS)
    Towards Grasp-Oriented Visual Perception for Humanoid Robots (2009). In: International Journal of Humanoid Robotics, ISSN 0219-8436, Vol. 6, no. 3, pp. 387-434. Journal article (Peer reviewed)
    Abstract [en]

    A distinct property of robot vision systems is that they are embodied. Visual information is extracted for the purpose of moving in and interacting with the environment. Thus, different types of perception-action cycles need to be implemented and evaluated. In this paper, we study the problem of designing a vision system for the purpose of object grasping in everyday environments. This vision system is firstly targeted at the interaction with the world through recognition and grasping of objects and secondly at being an interface for the reasoning and planning module to the real world. The latter provides the vision system with a certain task that drives it and defines a specific context, i.e. search for or identify a certain object and analyze it for potential later manipulation. We deal with cases of: (i) known objects, (ii) objects similar to already known objects, and (iii) unknown objects. The perception-action cycle is connected to the reasoning system based on the idea of affordances. All three cases are also related to the state of the art and the terminology in the neuroscientific area.

  • 3.
    Bohg, Jeannette; Welke, Kai (Institute for Anthropomatics, Karlsruhe Institute of Technology, Germany); Leon, Beatriz (Department of Computer Science and Engineering, Universitat Jaume I, Spain); Do, Martin (Karlsruhe Institute of Technology, Germany); Song, Dan (KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP); Wohlkinger, Walter (Automation and Control Institute, Technische Universität Wien, Austria); Madry, Marianna (KTH, CSC, CVAP); Aldoma, Aitor (Technische Universität Wien, Austria); Przybylski, Markus (Karlsruhe Institute of Technology, Germany); Asfour, Tamim (Karlsruhe Institute of Technology, Germany); Marti, Higinio (Universitat Jaume I, Spain); Kragic, Danica (KTH, CSC, CVAP); Morales, Antonio (Universitat Jaume I, Spain); Vincze, Markus (Technische Universität Wien, Austria)
    Task-based Grasp Adaptation on a Humanoid Robot (2012). In: Proceedings of the 10th IFAC Symposium on Robot Control, 2012, pp. 779-786. Conference paper (Peer reviewed)
    Abstract [en]

    In this paper, we present an approach towards autonomous grasping of objects according to their category and a given task. Recent advances in the field of object segmentation and categorization as well as task-based grasp inference have been leveraged by integrating them into one pipeline. This allows us to transfer task-specific grasp experience between objects of the same category. The effectiveness of the approach is demonstrated on the humanoid robot ARMAR-IIIa.

  • 4.
    Ek, Carl Henrik; Song, Dan; Huebner, Kai; Kragic, Danica
    (all: KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP; Centre for Autonomous Systems, CAS)
    Exploring affordances in robot grasping through latent structure representation (2010). In: The 11th European Conference on Computer Vision (ECCV 2010), 2010. Conference paper (Peer reviewed)
  • 5.
    Ek, Carl Henrik; Song, Dan; Huebner, Kai; Kragic, Danica
    (all: KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP; Centre for Autonomous Systems, CAS)
    Task Modeling in Imitation Learning using Latent Variable Models (2010). In: 2010 10th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2010, 2010, pp. 458-553. Conference paper (Peer reviewed)
    Abstract [en]

    An important challenge in robotic research is learning and reasoning about different manipulation tasks from scene observations. In this paper we present a probabilistic model capable of modeling several different types of input sources within the same model. Our model is capable of inferring the task using only partial observations. Further, our framework allows the robot, given partial knowledge of the scene, to reason about which information streams to acquire in order to disambiguate the state-space the most. We present results for task classification and also reason about the discriminative power of different features for different classes of tasks.

  • 6.
    Ek, Carl Henrik; Song, Dan; Kragic, Danica
    (all: KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP)
    Learning Conditional Structures in Graphical Models from a Large Set of Observation Streams through efficient Discretisation (2011). In: IEEE International Conference on Robotics and Automation, Workshop on Manipulation under Uncertainty, 2011. Conference paper (Peer reviewed)
  • 7.
    Lan, N.; Song, Dan (University of Southern California); Milenusnic, M.; Gordon, J.
    Modeling Spinal Sensorimotor Control for Reach Task (2005). In: 2005 27th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, IEEE, 2005, Vol. 1-7, pp. 4404-4407. Conference paper (Peer reviewed)
    Abstract [en]

    The spinal sensorimotor control system executes movement instructions from the central controller in the brain that plans the task in terms of global requirements. Spinal circuits serve as a local regulator that tunes the neuromuscular apparatus to an optimal state for task execution. We hypothesize that reach tasks are controlled by a set of feedforward and feedback descending commands for trajectory and final posture, respectively. This paper presents the use of physiologically realistic models of the spinal sensorimotor system to demonstrate the feasibility of such dual control for reaching movements.

  • 8.
    Madry, Marianna; Song, Dan; Ek, Carl Henrik; Kragic, Danica
    (all: KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP)
    "Robot, bring me something to drink from": object representation for transferring task-specific grasps (2013). In: IEEE International Conference on Robotics and Automation (ICRA 2012), Workshop on Semantic Perception, Mapping and Exploration (SPME), St. Paul, MN, USA, May 13, 2012, 2013. Conference paper (Peer reviewed)
    Abstract [en]

    In this paper, we present an approach for task-specific object representation which facilitates transfer of grasp knowledge from a known object to a novel one. Our representation encompasses: (a) several visual object properties, (b) object functionality and (c) task constraints in order to provide a suitable goal-directed grasp. We compare various features describing complementary object attributes to evaluate the balance between the discrimination and generalization properties of the representation. The experimental setup is a scene containing multiple objects. Individual object hypotheses are first detected, categorized and then used as the input to a grasp reasoning system that encodes the task information. Our approach not only allows us to find objects in a real world scene that afford a desired task, but also to generate and successfully transfer task-based grasps within and across object categories.

  • 9.
    Madry, Marianna; Song, Dan; Kragic, Danica
    (all: KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP; Centre for Autonomous Systems, CAS)
    From object categories to grasp transfer using probabilistic reasoning (2012). In: 2012 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2012, pp. 1716-1723. Conference paper (Peer reviewed)
    Abstract [en]

    In this paper we address the problem of grasp generation and grasp transfer between objects using categorical knowledge. The system is built upon (i) an active scene segmentation module, able to generate object hypotheses and segment them from the background in real time, (ii) an object categorization system using integration of 2D and 3D cues, and (iii) a probabilistic grasp reasoning system. Individual object hypotheses are first generated, categorized and then used as the input to a grasp generation and transfer system that encodes task, object and action properties. The experimental evaluation compares individual 2D and 3D categorization approaches with the integrated system, and it demonstrates the usefulness of the categorization in task-based grasping and grasp transfer.

  • 10.
    Song, Dan; Ek, Carl Henrik; Huebner, Kai; Kragic, Danica
    (all: KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP)
    Embodiment-Specific Representation of Robot Grasping using Graphical Models and Latent-Space Discretization (2011). In: 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2011, pp. 980-986. Conference paper (Peer reviewed)
    Abstract [en]

    We study embodiment-specific robot grasping tasks, represented in a probabilistic framework. The framework consists of a Bayesian network (BN) integrated with a novel multi-variate discretization model. The BN models the probabilistic relationships among tasks, objects, grasping actions and constraints. The discretization model provides a compact data representation that allows efficient learning of the conditional structures in the BN. To evaluate the framework, we use a database generated in a simulated environment including examples of a human and a robot hand interacting with objects. The results show that the different kinematic structures of the hands affect both the BN structure and the conditional distributions over the modeled variables. Both models achieve accurate task classification, and successfully encode the semantic task requirements in the continuous observation spaces. In an imitation experiment, we demonstrate that the representation framework can transfer task knowledge between different embodiments, and is therefore a suitable model for grasp planning and imitation in a goal-directed manner.

  • 11.
    Song, Dan; Ek, Carl Henrik; Hübner, Kai; Kragic, Danica
    (all: KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP)
    Task-Based Robot Grasp Planning Using Probabilistic Inference (2015). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 31, no. 3, pp. 546-561. Journal article (Peer reviewed)
    Abstract [en]

    Grasping and manipulating everyday objects in a goal-directed manner is an important ability of a service robot. The robot needs to reason about task requirements and ground these in the sensorimotor information. Grasping and interaction with objects are challenging in real-world scenarios, where sensorimotor uncertainty is prevalent. This paper presents a probabilistic framework for the representation and modeling of robot-grasping tasks. The framework consists of Gaussian mixture models for generic data discretization, and discrete Bayesian networks for encoding the probabilistic relations among various task-relevant variables, including object and action features as well as task constraints. We evaluate the framework using a grasp database generated in a simulated environment including a human and two robot hand models. The generative modeling approach allows the prediction of grasping tasks given uncertain sensory data, as well as object and grasp selection in a task-oriented manner. Furthermore, the graphical model framework provides insights into dependencies between variables and features relevant for object grasping.

  • 12.
    Song, Dan; Ek, Carl Henrik; Kragic Jensfelt, Danica
    (all: KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP; Centre for Autonomous Systems, CAS)
    Multivariate Discretization for Bayesian Network Structure Learning in Robot Grasping (2011). In: IEEE International Conference on Robotics and Automation (ICRA), 2011, IEEE conference proceedings, 2011, pp. 1944-1950. Conference paper (Peer reviewed)
    Abstract [en]

    A major challenge in modeling with BNs is learning the structure from both discrete and multivariate continuous data. A common approach in such situations is to discretize continuous data before structure learning. However, efficient methods to discretize high-dimensional variables are largely lacking. This paper presents a novel method specifically aimed at the discretization of high-dimensional, highly correlated data. The method consists of two integrated steps: non-linear dimensionality reduction using sparse Gaussian process latent variable models, and discretization by application of a mixture model. The model is fully probabilistic and capable of facilitating structure learning from discretized data, while at the same time retaining the continuous representation. We evaluate the effectiveness of the method in the domain of robot grasping. Compared with traditional discretization schemes, our model excels both in task classification and prediction of hand grasp configurations. Further, being a fully probabilistic model, it handles uncertainty in the data and can easily be integrated into other frameworks in a principled manner.
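    As an illustration only (not the paper's implementation, which couples the mixture model with a GP-LVM), the "discretization by application of a mixture model" step described above can be sketched with scikit-learn: each fitted Gaussian component acts as one discrete state, while the fitted model retains the continuous density. The data and the number of states below are made up.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def discretize(features, n_states, seed=0):
        """Map continuous feature vectors to discrete states via a GMM.

        Each mixture component serves as one discrete symbol; the fitted
        model also retains a continuous density over the features.
        """
        gmm = GaussianMixture(n_components=n_states, random_state=seed)
        labels = gmm.fit_predict(features)
        return labels, gmm

    # Two well-separated clusters of toy 2-D "grasp features".
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                   rng.normal(5.0, 0.1, (50, 2))])
    states, model = discretize(X, n_states=2)
    ```

    The discrete labels can then feed a structure-learning step, while the mixture densities keep the continuous representation available for prediction.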

  • 13.
    Song, Dan; Hübner, Kai; Kyrki, Ville; Kragic, Danica
    (all: KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP; Centre for Autonomous Systems, CAS)
    Learning task constraints in robot grasping (2010). In: 4th International Conference on Cognitive Systems, CogSys 2010, 2010. Conference paper (Peer reviewed)
  • 14.
    Song, Dan (KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP); Hübner, Kai (KTH, CSC, CVAP); Kyrki, Ville (Lappeenranta University of Technology, Finland); Kragic, Danica (KTH, CSC, CVAP)
    Learning Task Constraints for Robot Grasping using Graphical Models (2010). In: IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, 2010. Conference paper (Peer reviewed)
    Abstract [en]

    This paper studies the learning of task constraints that allow grasp generation in a goal-directed manner. We show how an object representation and a grasp generated on it can be integrated with the task requirements. The scientific problems tackled are (i) identification and modeling of such task constraints, and (ii) integration between a semantically expressed goal of a task and quantitative constraint functions defined in the continuous object-action domains. We first define constraint functions given a set of object and action attributes, and then model the relationships between object, action, constraint features and the task using Bayesian networks. The probabilistic framework deals with uncertainty, combines a priori knowledge with observed data, and allows inference on target attributes given only partial observations. We present a system designed to structure the data generation and constraint learning processes that is applicable to new tasks, embodiments and sensory data. The application of the task constraint model is demonstrated in a goal-directed imitation experiment.
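    As a toy illustration of the kind of inference this abstract describes (a posterior over the task given only a partial observation), here is a minimal Bayes-rule computation. The task labels, the observed variable, and all probability numbers are hypothetical, not taken from the paper.

    ```python
    # Toy discrete model: prior P(task) and likelihood P(object_size | task).
    # Observing one feature ("large") yields a posterior over tasks.
    p_task = {"pour": 0.5, "hand_over": 0.5}
    p_size_given_task = {  # hypothetical conditional probability tables
        "pour":      {"small": 0.2, "large": 0.8},
        "hand_over": {"small": 0.7, "large": 0.3},
    }

    def posterior(observed_size):
        """Bayes' rule: P(task | size) ∝ P(task) * P(size | task)."""
        joint = {t: p_task[t] * p_size_given_task[t][observed_size]
                 for t in p_task}
        z = sum(joint.values())  # normalizing constant P(size)
        return {t: v / z for t, v in joint.items()}

    post = posterior("large")  # posterior after a single partial observation
    ```

    A full Bayesian network generalizes this to many variables, but the per-query arithmetic is this same product-and-normalize step.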

  • 15.
    Song, Dan (KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP; Centre for Autonomous Systems, CAS); Kyriazis, N.; Oikonomidis, I.; Papazov, C.; Argyros, A.; Burschka, D.; Kragic, Danica (KTH, CSC, CVAP; CAS)
    Predicting human intention in visual observations of hand/object interactions (2013). In: 2013 IEEE International Conference on Robotics and Automation (ICRA), New York: IEEE, 2013, pp. 1608-1615. Conference paper (Peer reviewed)
    Abstract [en]

    The main contribution of this paper is a probabilistic method for predicting human manipulation intention from image sequences of human-object interaction. Predicting intention amounts to inferring the imminent manipulation task when the human hand is observed to have stably grasped the object. Inference is performed by means of a probabilistic graphical model that encodes object grasping tasks over the 3D state of the observed scene. The 3D state is extracted from RGB-D image sequences by a novel vision-based, markerless hand-object 3D tracking framework. To deal with the high-dimensional state-space and mixed data types (discrete and continuous) involved in grasping tasks, we introduce a generative vector quantization method using mixture models and self-organizing maps. This yields a compact model for encoding grasping actions that is able to handle uncertain and partial sensory data. Experimentation showed that the model trained on simulated data can provide a potent basis for accurate goal inference with partial and noisy observations of actual real-world demonstrations. We also show a grasp selection process, guided by the inferred human intention, to illustrate the use of the system for goal-directed grasp imitation.

  • 16.
    Song, Dan (University of Southern California, Los Angeles, CA); Lan, N.; Gordon, J.
    Biomechanical Constraints on Equilibrium Point Control of Multi-Joint Arm Postures (2007). Conference paper (Peer reviewed)
  • 17.
    Song, Dan (Biomedical Engineering, University of Southern California (USC), Los Angeles, USA); Lan, N.; Gordon, J.
    Computational Approach to Sensorimotor Control of Human Reaching Movement (2006). Conference paper (Peer reviewed)
  • 18.
    Song, Dan (Biomedical Engineering, University of Southern California (USC), Los Angeles, USA); Lan, N.; Gordon, J.
    Simulated Hand Variability During Multi-joint Arm Posture Control (2006). Conference paper (Peer reviewed)
  • 19.
    Song, Dan (Biomedical Engineering, University of Southern California (USC), Los Angeles, USA); Lan, N.; Loeb, G.E.; Gordon, J.
    Model-Based Sensorimotor Integration for Multi-Joint Control: Development of a Virtual Arm Model (2008). In: Annals of Biomedical Engineering, ISSN 0090-6964, E-ISSN 1573-9686, Vol. 36, no. 6, pp. 1033-1048. Journal article (Peer reviewed)
    Abstract [en]

    An integrated, sensorimotor virtual arm (VA) model has been developed and validated for simulation studies of control of human arm movements. Realistic anatomical features of the shoulder, elbow and forearm joints were captured with a graphic modeling environment, SIMM. The model included 15 musculotendon elements acting at the shoulder, elbow and forearm. Muscle actions on joints were evaluated by SIMM-generated moment arms that were matched to experimentally measured profiles. The Virtual Muscle (TM) (VM) model contained an appropriate admixture of slow and fast twitch fibers with realistic physiological properties for force production. A realistic spindle model was embedded in each VM with inputs of fascicle length, gamma static (gamma(stat)) and dynamic (gamma(dyn)) controls and outputs of primary (I-a) and secondary (II) afferents. A piecewise linear model of the Golgi Tendon Organ (GTO) represented the ensemble sampling (I-b) of the total muscle force at the tendon. All model components were integrated into a Simulink block using a special software tool. The complete VA model was validated with open-loop simulation at discrete hand positions within the full range of alpha and gamma drives to extrafusal and intrafusal muscle fibers. The model behaviors were consistent with a wide variety of physiological phenomena. Spindle afferents were effectively modulated by fusimotor drives and hand positions of the arm. These simulations validated the VA model as a computational tool for studying arm movement control. The VA model is available to researchers at http://pt.usc.edu/cel.

  • 20.
    Song, Dan (Biomedical Engineering, University of Southern California (USC), Los Angeles, USA); Milenusnic, M.; Lan, N.; Gordon, J.
    A Sensorimotor Systems Model for Dynamic Simulation of Arm Movement Control (2006). Conference paper (Peer reviewed)
  • 21.
    Song, Dan (Biomedical Engineering, University of Southern California (USC), Los Angeles, USA); Raphael, G.; Lan, N.; Loeb, G. E.
    Computationally Efficient Models of Neuromuscular Recruitment and Mechanics (2008). In: Journal of Neural Engineering, ISSN 1741-2560, Vol. 5, no. 2, pp. 175-184. Journal article (Peer reviewed)
    Abstract [en]

    We have improved the stability and computational efficiency of a physiologically realistic, virtual muscle (VM 3.*) model (Cheng et al 2000 J. Neurosci. Methods 101 117-30) by a simpler structure of lumped fiber types and a novel recruitment algorithm. In the new version (VM 4.0), the mathematical equations are reformulated into state-space representation and structured into a CMEX S-function in SIMULINK. A continuous recruitment scheme approximates the discrete recruitment of slow and fast motor units under physiological conditions. This makes it possible to predict force output during smooth recruitment and derecruitment without having to simulate explicitly a large number of independently recruited units. We removed the intermediate state variable, effective length (L-eff), which had been introduced to model the delayed length dependency of the activation-frequency relationship, but which had little effect and could introduce instability under physiological conditions of use. Both of these changes greatly reduce the number of state variables with little loss of accuracy compared to the original VM. The performance of VM 4.0 was validated by comparison with VM 3.1.5 for both single-muscle force production and a multi-joint task. The improved VM 4.0 model is more suitable for the analysis of neural control of movements and for design of prosthetic systems to restore lost or impaired motor functions. VM 4.0 is available via the internet and includes options to use the original VM model, which remains useful for detailed simulations of single motor unit behavior.

  • 22.
    Song, Dan (Biomedical Engineering, University of Southern California (USC), Los Angeles, USA); Raphael, G.; Lan, N.; Loeb, G.E.
    Improvement in computational efficiency of virtual muscle model (2007). Conference paper (Peer reviewed)
  • 23.
    Zarubin, Dmitry (Universität Stuttgart); Pokorny, Florian T. (KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP; Centre for Autonomous Systems, CAS); Song, Dan (KTH, CSC, CVAP; CAS); Toussaint, Marc (Universität Stuttgart); Kragic, Danica (KTH, CSC, CVAP; CAS)
    Topological Synergies for Grasp Transfer (2013). In: Hand Synergies - how to tame the complexity of grasping: Workshop, IEEE International Conference on Robotics and Automation (ICRA 2013), 2013. Conference paper (Peer reviewed)
    Abstract [en]

    In this contribution, we propose a novel approach towards representing physically stable grasps which enables us to transfer grasps between different hand kinematics. We use a low-dimensional, topologically inspired coordinate representation which we call topological synergies, motivated by the topological notion of winding numbers. We address the transfer problem as a stochastic optimization task and carry out motion planning in our topologically inspired coordinates using the Approximate Inference Control (AICO) framework. This perspective allows us to compute not only the final grasp itself, but also a trajectory in configuration space leading to it. We evaluate our approach using the simulation framework PhysX. The presented experiments, which further develop recent attempts to use topologically inspired coordinates in robotics, demonstrate that our approach makes it possible to transfer a large percentage of grasps between a simulated human hand and a 3-finger Schunk hand.

  • 24.
    Zhang, Cheng; Song, Dan; Kjellström, Hedvig
    (all: KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP; Centre for Autonomous Systems, CAS)
    Contextual Modeling with Labeled Multi-LDA (2013). In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2013, pp. 2264-2271. Conference paper (Peer reviewed)
    Abstract [en]

    Learning about activities and object affordances from human demonstration is an important cognitive capability for robots functioning in human environments; for example, being able to classify objects and knowing how to grasp them for different tasks. To achieve such capabilities, we propose a Labeled Multi-modal Latent Dirichlet Allocation (LM-LDA), which is a generative classifier trained with two different data cues; for instance, one cue can be traditional visual observation and another cue can be contextual information. The novel aspects of the LM-LDA classifier, compared to other methods for encoding contextual information, are that: (i) even with only one of the cues present at execution time, the classification will be better than single-cue classification since cue correlations are encoded in the model; (ii) one of the cues (e.g., common grasps for the observed object class) can be inferred from the other cue (e.g., the appearance of the observed object). This makes the method suitable for robot online and transfer learning, a capability highly desirable in cognitive robotic applications. Our experiments show a clear improvement for classification and a reasonable inference of the missing data.
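    For orientation only: the base building block of the classifier above is Latent Dirichlet Allocation. The sketch below uses plain, unlabeled, single-cue LDA from scikit-learn (not the authors' LM-LDA) on made-up "documents" whose words stand in for discretized visual and contextual cues.

    ```python
    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer

    # Toy "observation documents": each word is a discretized cue of a scene.
    docs = [
        "mug handle cylinder pour grasp_top",
        "mug cylinder pour grasp_top handle",
        "knife blade handle cut grasp_handle",
        "knife cut blade grasp_handle",
    ]
    X = CountVectorizer().fit_transform(docs)  # bag-of-words counts

    # Plain LDA with two latent topics; LM-LDA additionally ties topics
    # to labels and couples two modalities.
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    doc_topics = lda.fit_transform(X)  # per-document topic mixtures
    ```

    Each row of `doc_topics` is a normalized distribution over topics, which is what a downstream classifier would consume.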
