Results 151 - 200 of 417
  • 151.
    Hedström, Andreas
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Lundberg, Carl
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    A wearable GUI for field robots, 2006. In: Field and Service Robotics / [ed] Corke, P.; Sukkarieh, S. Berlin: Springer-Verlag, 2006, Vol. 25, p. 367-376. Conference paper (Refereed)
    Abstract [en]

    In most search and rescue or reconnaissance missions involving field robots, the requirement that the operator remain mobile and alert to sudden changes in the immediate environment is just as important as the ability to control the robot proficiently. This implies that the GUI platform should be lightweight and portable, and that the GUI itself be carefully designed for the task at hand. In this paper, different platform solutions and the design of a user-friendly GUI for a packbot are discussed. Our current wearable system is presented along with some results from initial field tests in urban search and rescue facilities.

  • 152.
    Heshmati-Alamdari, Shahab
    Control Systems Lab, School of Mechanical Engineering, National Technical University of Athens.
    Cooperative and Interaction Control for Underwater Robotic Vehicles, 2018. Doctoral thesis, monograph (Other academic)
    Abstract [en]

    In this dissertation we address the problem of robust control for underwater robotic vehicles under resource constraints, inspired by practical applications in the field of marine robotics. By the term "resource constraints" we refer to systems with constraints on communication, sensing and energy resources. Within this context, the ultimate objective of this dissertation lies in the development and implementation of efficient control strategies for autonomous single and multiple underwater robotic systems, considering significant issues such as external disturbances, limited power resources and strict communication constraints, along with underwater sensing and localization issues. Specifically, we focus on cooperative and interaction control methodologies for single and multiple Underwater Vehicle Manipulator Systems (UVMSs) under the aforementioned issues and limitations, one of the most challenging areas of marine robotics. More precisely, the contributions of this thesis lie within three topics: i) motion control, ii) visual servoing and iii) interaction and cooperative transportation. In the first part, we formulate in a generic way the motion control problem of an Autonomous Underwater Vehicle (AUV) operating in a constrained environment including obstacles. Various constraints such as obstacles, workspace boundaries, thruster saturation, the system's sensing range and a predefined upper bound on the vehicle velocity are considered during the control design. Moreover, the controller is designed so that the vehicle exploits the ocean currents, which reduces the energy consumed by the thrusters and consequently significantly increases the autonomy of the system.
    In the second part of the thesis, we formulate a number of novel visual servoing control strategies to stabilize the robot (or the robot's end-effector) close to the point of interest, considering significant issues such as the camera Field of View (FoV), camera calibration uncertainties and the resolution of the visual tracking algorithm. In the third part of the thesis, regarding the interaction task, we present a robust interaction control scheme for a UVMS in contact with the environment, with important applications in underwater robotics (e.g. sampling of sea organisms, underwater welding, object handling). The proposed control scheme does not require any a priori knowledge of the UVMS dynamic parameters or the stiffness model. It guarantees a predefined behavior in terms of desired overshoot, transient and steady-state response, and it is robust with respect to external disturbances and measurement noise. Moreover, we address the problem of cooperative object transportation for a team of UVMSs in a constrained workspace involving static obstacles. First, for the case when the robots are equipped with force/torque sensors at their end-effectors, we propose a decentralized impedance control scheme with the coordination relying solely on the implicit communication arising from the physical interaction of the robots with the commonly grasped object. Second, for the case when the robots are not equipped with force/torque sensors at their end-effectors, we propose a decentralized predictive control approach which takes into account constraints that emanate from control input saturation as well as kinematic and representation singularities. Finally, numerical simulations performed in MATLAB and ROS environments, along with extensive real-time experiments conducted with the available Control Systems Lab (CSL) robotic equipment, demonstrate and verify the effectiveness of the claimed results.

  • 153. Huang, Lirong
    et al.
    Hjalmarsson, Håkan
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Koeppl, Heinz
    Almost sure stability and stabilization of discrete-time stochastic systems, 2015. In: Systems & control letters (Print), ISSN 0167-6911, E-ISSN 1872-7956, Vol. 82, p. 26-32. Article in journal (Refereed)
    Abstract [en]

    As is well known, noise may play a stabilizing or destabilizing role in continuous-time systems. For the analysis and design of discrete-time systems, however, noise is treated as a disturbance in the literature. This paper studies almost sure stability of general n-dimensional nonlinear time-varying discrete-time stochastic systems and presents a criterion based on a numerical result derived from Higham (2001), which exploits the stabilizing role of noise in discrete-time systems. As an application of the established results, the paper proposes a novel controller design method for almost sure stabilization of linear discrete-time stochastic systems. The effectiveness of the proposed design method is verified with an example (an aircraft model subject to state-dependent noise) to which the existing results do not apply.
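    The stabilizing role of multiplicative noise mentioned in this abstract can be illustrated with a minimal scalar sketch (invented parameters, not the authors' criterion): for x_{k+1} = (a + sigma*w_k)*x_k with i.i.d. w_k ~ N(0, 1), almost sure stability holds exactly when the Lyapunov exponent E[log|a + sigma*w_k|] is negative, which can happen even when |a| > 1, i.e. when the noise-free system diverges.

```python
import math
import random

def lyapunov_exponent(a, sigma, n=200_000, seed=0):
    """Monte Carlo estimate of E[log|a + sigma*w|], w ~ N(0, 1).

    For x_{k+1} = (a + sigma*w_k) * x_k, a negative value means
    x_k -> 0 with probability one (almost sure stability)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(a + sigma * rng.gauss(0.0, 1.0)))
    return acc / n

# Noise-free case: the exponent is log(1.1) > 0, so the system diverges.
print(lyapunov_exponent(1.1, 0.0, n=1))
# With multiplicative noise the estimate is typically negative:
# the same nominally unstable system becomes almost surely stable.
print(lyapunov_exponent(1.1, 1.0))
```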

  • 154.
    Huebner, Kai
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    BADGr - A toolbox for box-based approximation, decomposition and GRasping, 2012. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 60, no 3, p. 367-376. Article in journal (Refereed)
    Abstract [en]

    In this paper, we conclude our work on shape approximation by box primitives for the goal of simple and efficient grasping. As the main product of our research, we present the BADGr toolbox for Box-based Approximation, Decomposition and Grasping of objects. The contributions of the work presented here are twofold. In terms of shape approximation, we provide an algorithm for creating a 3D box primitive representation to identify object parts from 3D point clouds; we motivate and evaluate this choice particularly towards the task of grasping. As a contribution in the field of grasping, we further provide a grasp hypothesis generation framework that utilizes the chosen box representation in a flexible manner.

  • 155.
    Högman, Virgile
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Interactive object classification using sensorimotor contingencies, 2013. In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2013, p. 2799-2805. Conference paper (Refereed)
    Abstract [en]

    Understanding and representing objects and their function is a challenging task. Objects we manipulate in our daily activities can be described and categorized in various ways according to their properties or affordances, depending also on how we perceive them. In this work, we are interested in representing the knowledge acquired through interaction with objects, describing them in terms of action-effect relations, i.e. sensorimotor contingencies, rather than static shape or appearance representations. We demonstrate how a robot learns sensorimotor contingencies through pushing using a probabilistic model. We show how functional categories can be discovered and how entropy-based action selection can improve object classification.

  • 156.
    Högman, Virgile
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Maki, Atsuto
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    A sensorimotor learning framework for object categorization, 2016. In: IEEE Transactions on Cognitive and Developmental Systems, ISSN 2379-8920, Vol. 8, no 1, p. 15-25. Article in journal (Refereed)
    Abstract [en]

    This paper presents a framework that enables a robot to discover various object categories through interaction. The categories are described using action-effect relations, i.e. sensorimotor contingencies, rather than more static shape or appearance representations. The framework provides the functionality to classify objects into the resulting categories, associating each class with a specific module. We demonstrate the performance of the framework by studying a pushing behavior in robots, encoding the sensorimotor contingencies and their predictability with Gaussian Processes. We show how entropy-based action selection can improve object classification and how functional categories emerge from the similarities of effects observed among the objects. We also show how a multidimensional action space can be realized by parameterizing pushing using both position and velocity.
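    The entropy-based action selection mentioned in the abstract above can be sketched as follows (a schematic illustration with invented categories, actions and outcome models, not the authors' implementation): keep a posterior over object categories, and pick the push whose predicted outcome distribution minimizes the expected posterior entropy, i.e. maximizes the expected information gain.

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0.0)

def expected_posterior_entropy(prior, likelihood):
    """Expected entropy of the category posterior after one push.

    likelihood[c][e] = P(observe effect e | category c, this action)."""
    n_effects = len(next(iter(likelihood.values())))
    total = 0.0
    for e in range(n_effects):
        joint = {c: prior[c] * likelihood[c][e] for c in prior}
        p_e = sum(joint.values())
        if p_e > 0.0:
            total += p_e * entropy([joint[c] / p_e for c in joint])
    return total

def select_action(prior, models):
    """Pick the push that minimizes expected posterior entropy."""
    return min(models, key=lambda a: expected_posterior_entropy(prior, models[a]))

# Invented example: a fast push separates rolling from sliding objects
# much better than a slow push does, so it is the more informative action.
prior = {"rolls": 0.5, "slides": 0.5}
models = {
    "fast_push": {"rolls": [0.9, 0.1], "slides": [0.1, 0.9]},
    "slow_push": {"rolls": [0.55, 0.45], "slides": [0.45, 0.55]},
}
print(select_action(prior, models))  # -> fast_push
```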

  • 157.
    Iglesias, José
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    A force control based strategy for extrinsic in-hand object manipulation through prehensile-pushing primitives, 2017. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Object manipulation is a complex task for robots. It often implies a compromise between the degrees of freedom a hand and its fingers have (dexterity) and the hand's cost and complexity in terms of control. One strategy for increasing the dexterity of robotic hands with few degrees of freedom, called extrinsic manipulation, is to exploit additional accelerations on the object caused by external forces. We propose a force control based method for performing extrinsic in-hand object manipulation with force-torque feedback. For this purpose, we use a prehensile pushing action, which consists of pushing the object against an external surface under quasistatic assumptions. By using a control strategy, we also achieve robustness to parameter uncertainty (such as friction) and perturbations that are not completely captured by mathematical models of the system. The force control strategy is performed in two different ways: the contact force generated by the interaction between the object and the external surface is controlled using an admittance controller, while the gripping force applied by the gripper on the object is additionally controlled through a PI controller. A Kalman filter is used to estimate the state of the object, based on force-torque measurements from a sensor at the wrist of the robot. We validate our approach by conducting experiments on a PR2 robot, available at the Robotics, Perception, and Learning lab at KTH Royal Institute of Technology.
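    The two force loops described in the abstract can be sketched as follows (a simplified 1-D illustration with invented gains and a toy contact model, not the thesis implementation): an admittance law turns the contact-force error into a commanded end-effector velocity, while a PI loop regulates the gripping force.

```python
class AdmittanceController:
    """1-D admittance law  M*dv/dt + D*v = f_des - f_meas.

    Positive commanded velocity presses the end-effector further
    into the surface, raising the contact force toward f_des."""
    def __init__(self, mass, damping, dt):
        self.m, self.d, self.dt = mass, damping, dt
        self.v = 0.0
    def update(self, f_des, f_meas):
        dv = (f_des - f_meas - self.d * self.v) / self.m
        self.v += dv * self.dt
        return self.v

class PIController:
    """PI regulation of the gripping force."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0
    def update(self, f_des, f_meas):
        err = f_des - f_meas
        self.integral += err * self.dt
        return self.kp * err + self.ki * self.integral

# Toy closed loop against a stiff surface modeled as f = k * x:
adm = AdmittanceController(mass=1.0, damping=20.0, dt=0.01)
x, k, f_des = 0.0, 500.0, 5.0
for _ in range(2000):              # 20 s of simulated contact
    f_meas = k * max(x, 0.0)
    x += adm.update(f_des, f_meas) * adm.dt
print(k * max(x, 0.0))             # contact force has settled near f_des

grip = PIController(kp=0.5, ki=2.0, dt=0.01)
u = grip.update(f_des=4.0, f_meas=1.0)  # actuation command toward a 4 N grip
```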

  • 158. Jacobs, T.
    et al.
    Virk, Gurvinder S.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. University of Gävle.
    ISO 13482 - The new safety standard for personal care robots, 2014. In: Proceedings for the Joint Conference of ISR 2014 - 45th International Symposium on Robotics and Robotik 2014 - 8th German Conference on Robotics, ISR/ROBOTIK 2014, 2014, p. 698-703. Conference paper (Refereed)
    Abstract [en]

    In the future, personal care robots will work in close interaction with humans. This poses a great challenge to the manufacturers of such robots, who have to ensure the safety of their systems. Until now, only general safety standards for machines were available, and the lack of a specialized safety standard with detailed requirements has resulted in uncertainty and a relatively high residual risk for manufacturers. This situation has changed with the publication of ISO 13482, a safety standard for personal care robots. This paper gives an overview of the contents of the new safety standard, its expected effects for service robot manufacturers, and the way personal care robots will be developed in the future. The scope of the standard and its application in the risk assessment process are described. Special focus lies on intended close interaction and contact between human and robot, and on the possibility to validate that all safety requirements have been met.

  • 159. Jansson, Magnus
    et al.
    Harnefors, L.
    Wallmark, Oskar
    Leksell, Mats
    KTH, School of Electrical Engineering (EES), Electrical Machines and Power Electronics.
    Synchronization at startup and stable rotation reversal of sensorless nonsalient PMSM drives, 2006. In: IEEE transactions on industrial electronics (1982. Print), ISSN 0278-0046, E-ISSN 1557-9948, Vol. 53, no 2, p. 379-387. Article in journal (Refereed)
    Abstract [en]

    In this paper, a variant of the well-known voltage model is applied to rotor position estimation for sensorless control of nonsalient permanent-magnet synchronous motors (PMSMs). Particular focus is on low-speed operation. It is shown that guaranteed synchronization from any initial rotor position and stable reversal of rotation can be accomplished, in both cases under load. Stable rotation reversal is accomplished by making the estimator insensitive to the stator resistance. It is also shown that the closed-loop speed dynamics are similar to those of a sensored drive for speeds above approximately 0.1 per unit, provided that the model stator inductance is underestimated. Experimental results support the theory.

  • 160.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    Dept of Mechanical Engineering, Massachusetts Institute of Technology.
    Bearing-Only Vision SLAM with Distinguishable Image Feature, 2007. In: Vision Systems Applications / [ed] Goro Obinata and Ashish Dutta, InTech, 2007. Chapter in book (Refereed)
  • 161. Ji, W.
    et al.
    Liu, X.
    Wang, Lihui
    KTH, School of Industrial Engineering and Management (ITM), Production Engineering.
    Wang, G.
    Research on modelling of ball-nosed end mill with chamfered cutting edge for 5-axis grinding, 2016. In: The International Journal of Advanced Manufacturing Technology, ISSN 0268-3768, E-ISSN 1433-3015, Vol. 87, no 9-12. Article in journal (Refereed)
    Abstract [en]

    This paper presents models related to the manufacturing of ball-nosed end mills of solid carbide (BEMSC) with a chamfered cutting edge (CCE). A parallel grinding wheel (PGW) is selected, and the relationship between the CCE face and the PGW working face is determined. Based on the geometry models of BEMSC established in our previous work, the centre and axis vectors of the PGW are calculated for the grinding of the CCE face on both the ball-nosed end and the cylinder, which is validated through a numerical simulation. In order to produce the tool, a grinding machine, SAACKE UMIF, is chosen. Targeting the grinding data of BEMSC, transformations are carried out between the coordinate systems of the workpiece and the NC programme according to the structural features of the machine. An algorithm is derived for dispersing the grinding paths. As a result, the centre data and axis vectors are generated with respect to the grinding machine. The BEMSC with CCE is machined using the selected machine, which demonstrates the correctness of the established models. Finally, the performance of the machined cutting tool is validated in comparison with a common BEMSC without CCE in the milling of a mould with a multi-hardness joint structure.

  • 162.
    Ji, Wei
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Production Engineering.
    Wang, Lihui
    KTH, School of Industrial Engineering and Management (ITM), Production Engineering, Production Systems.
    Industrial robotic machining: a review, 2019. In: The International Journal of Advanced Manufacturing Technology, ISSN 0268-3768, E-ISSN 1433-3015, Vol. 103, no 1-4, p. 1239-1255. Article, review/survey (Refereed)
    Abstract [en]

    For the past three decades, robotic machining has attracted a large amount of research interest owing to the cost efficiency, high flexibility and multi-functionality of industrial robots. Covering articles published on robotic machining in the past 30 years or so, this paper aims to provide an up-to-date review of robotic machining research works, a critical analysis of the publications in which they appear, and an understanding of the future directions in the field. The research works are organised into two operation categories, low material removal rate (MRR) and high MRR, according to their machining properties, and the research topics are reviewed and highlighted separately. Then, a statistical analysis is carried out in terms of publication years and countries. Towards applicable robotic machining, the future trends and key research points are identified at the end of this paper.

  • 163.
    Ji, Wei
    et al.
    KTH, School of Industrial Engineering and Management (ITM), Production Engineering.
    Wang, Yuquan
    KTH, School of Industrial Engineering and Management (ITM), Production Engineering.
    Liu, Hongyi
    KTH, School of Industrial Engineering and Management (ITM), Production Engineering.
    Wang, Lihui
    KTH, School of Industrial Engineering and Management (ITM), Production Engineering.
    Interface architecture design for minimum programming in human-robot collaboration, 2018. In: 51st CIRP Conference on Manufacturing Systems, Elsevier, 2018, Vol. 72, p. 129-134. Conference paper (Refereed)
    Abstract [en]

    Many metal components, especially large-sized ones, need to be ground or deburred after turning or milling to improve their surface quality, work that still depends heavily on human intervention. Robot arms combined with movable platforms are applied to reduce the human workload. However, robots and humans should work together, because most large-sized parts are small-batch products, which would otherwise require a large amount of programming to operate a robot and movable platform. Targeting this problem, this paper proposes a new interface architecture for minimum programming in human-robot collaboration. Within this context, a four-layer architecture is designed: user interface, function block (FB), functional modules and hardware. The user interface is associated with use cases. The FB layer, with embedded algorithms and knowledge and driven by events, provides a dynamic link to the relevant application programming interfaces (APIs) of the functional modules according to the case requirements. The functional modules are related to the hardware and software functions, and the hardware and humans are considered in terms of the conditions on shop floors. This method provides three levels of application based on the skills of users: (1) operators on shop floors can operate both robots and movable platforms without programming; (2) engineers can customise functions and tasks by dragging, dropping and linking the relevant FBs with minimum programming; (3) new functions can be added by importing the APIs through programming.

  • 164. Johansson, R.
    et al.
    Skantze, Gabriel
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Jönsson, A.
    A psychotherapy training environment with virtual patients implemented using the Furhat robot platform, 2017. In: 17th International Conference on Intelligent Virtual Agents, IVA 2017, Springer, 2017, Vol. 10498, p. 184-187. Conference paper (Refereed)
    Abstract [en]

    We present a demonstration system for psychotherapy training that uses the Furhat social robot platform to implement virtual patients. The system runs an educational program with various modules, starting with training of basic psychotherapeutic skills and then moving on to tasks where these skills need to be integrated. Such training relies heavily on observing and dealing with both verbal and non-verbal in-session patient behavior; hence, the Furhat robot is an ideal platform for implementing it. This paper describes the rationale for the system and its implementation.

  • 165.
    Johansson, Stefan
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Gulliksen, Jan
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Lantz, Ann
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Cognitive Accessibility for Mentally Disabled Persons, 2015. In: Human–Computer Interaction, Springer Berlin/Heidelberg, 2015, p. 418-435. Conference paper (Refereed)
    Abstract [en]

    The emergence of various digital channels, the development of different devices and the change in the way we communicate and carry out various types of services have grown quickly and continue to grow. This may offer both new opportunities for inclusion and risks of creating new barriers in society. In a recent study we have explored the questions: Is society digitally accessible for persons with mental disabilities? How do persons with mental disabilities cope with their situation? What are the benefits and obstacles they face? Based on the answers to these questions we wanted to explore whether there is a digital divide between citizens in general and citizens with mental disabilities, and if so, what the nature of this divide is. The study used a participatory action research approach, with data collection via research circles; in total, over 100 persons participated. The results show that a digital divide is present: persons with mental disabilities differ from citizens in general in how they have access to digital resources. The results also indicate that services and systems on a societal scale do not deliver the expected efficiency when it comes to supporting citizens with mental disabilities, and that the special needs this group might have are often not identified in wider surveys on citizens' use of the Internet, digital services and different technical devices. Several of the participants describe this as being left outside and not fully participating in a society where digital presence is considered a prerequisite for full citizenship.

  • 166.
    Johnson-Roberson, Matthew
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bohg, Jeannette
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Skantze, Gabriel
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Gustafsson, Joakim
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Carlson, Rolf
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Rasolzadeh, Babak
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Enhanced Visual Scene Understanding through Human-Robot Dialog, 2011. In: 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, 2011, p. 3342-3348. Conference paper (Refereed)
    Abstract [en]

    We propose a novel human-robot-interaction framework for robust visual scene understanding. Without any a priori knowledge about the objects, the task of the robot is to correctly enumerate how many of them are in the scene and segment them from the background. Our approach builds on top of state-of-the-art computer vision methods, generating object hypotheses through segmentation. This process is combined with a natural dialog system, thus including a ‘human in the loop’ where, by exploiting the natural conversation of an advanced dialog system, the robot gains knowledge about ambiguous situations. We present an entropy-based system allowing the robot to detect the poorest object hypotheses and query the user for arbitration. Based on the information obtained from the human-robot dialog, the scene segmentation can be re-seeded and thereby improved. We present experimental results on real data that show improved segmentation performance compared to segmentation without interaction.

  • 167. Jurado, I.
    et al.
    Quevedo, D. E.
    Johansson, Karl Henrik
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Ahlén, A.
    Cooperative Dynamic MPC for Networked Control Systems, 2014. In: Intelligent Systems, Control and Automation: Science and Engineering, ISSN 2213-8986, Vol. 69, p. 357-373. Article in journal (Refereed)
    Abstract [en]

    This work studies cooperative MPC for Networked Control Systems with multiple wireless nodes. Communication between nodes is affected by random packet dropouts. An algorithm is presented to decide at each time instant which nodes will calculate the control input and which will only relay data. The nodes chosen to calculate the control values solve a cooperative MPC by communicating with their neighbors. This algorithm makes the control architecture flexible by adapting it to the possible changes in the network conditions.

  • 168.
    Kao, ChungYao
    et al.
    KTH, Superseded Departments, Mathematics.
    Lincoln, B
    Simple stability criteria for systems with time-varying delays, 2004. In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 40, no 8, p. 1429-1434. Article in journal (Refereed)
    Abstract [en]

    This paper considers the problem of checking stability of linear feedback systems with time-varying but bounded delays. Simple but powerful criteria of stability are presented for both continuous-time and discrete-time systems. Using these criteria, stability can be checked in a closed loop Bode plot. This makes it easy to design the system for robustness.

  • 169.
    Karagiannis, Ioannis
    KTH, School of Electrical Engineering (EES).
    Design of Gyro Based Roll-Stabilization Controller for a Concept Amphibious Commuter Vehicle, 2015. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In this master thesis the gyroscopic stabilization of a two-wheeled amphibious concept vehicle is investigated. The key idea is to neutralize external torques applied on the vehicle with the counter-torque produced by the two gyroscopes attached to the vehicle. Here the gyroscopes are used as actuators, not as sensors. When a torque is applied to rotate a gyroscope whose flywheel is spinning, the gyroscope precesses and generates a moment orthogonal to both the applied torque and the spin axis. This phenomenon is known as gyroscopic precession. As the vehicle leans from its upright position, we expect to generate a sufficient gyroscopic reaction moment to bring the vehicle back and stabilize it.

    We first derive the equations of motion based on Lagrangian mechanics. It is worth mentioning that we only consider the control dynamics of a static vehicle. This is the so-called regulator problem, where we try to counteract the effects of disturbances. The trajectory tracking (servo) problem and water travel can be considered as extensions of the current project. We linearize the dynamics around an equilibrium and study the stability of the linearized model. We then design an LQG controller, a Glover-McFarlane controller and a cascade PID controller. Regarding the implementation, we focus only on the cascade PID controller. The results from both simulations and experiments with a small-scale prototype are presented and discussed.
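    The gyroscopic precession effect described above can be captured in one relation: a flywheel with angular momentum L = I*omega_spin, gimballed at precession rate omega_p, exerts a reaction torque tau = omega_p x L, orthogonal to both the spin and precession axes. A minimal numeric sketch (illustrative values, not the parameters of the thesis prototype):

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def gyro_reaction_torque(inertia, spin_axis, spin_rate, precession):
    """tau = omega_p x L, with L = inertia * spin_rate along spin_axis.

    inertia [kg m^2], spin_rate [rad/s], precession [rad/s, 3-vector]."""
    L = tuple(inertia * spin_rate * a for a in spin_axis)
    return cross(precession, L)

# Flywheel spinning about x, gimballed about y, yields a torque about z
# (invented numbers: 0.05 kg m^2 flywheel at 500 rad/s, gimbal rate 2 rad/s):
tau = gyro_reaction_torque(0.05, (1.0, 0.0, 0.0), 500.0, (0.0, 2.0, 0.0))
print(tau)  # -> (0.0, 0.0, -50.0): a 50 N m moment about the third axis
```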

    Download full text (pdf)
    fulltext
  • 170.
    Karaoǧuz, Hakan
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Işil Bozma, H.
    Merging appearance-based spatial knowledge in multirobot systems2016In: IEEE International Conference on Intelligent Robots and Systems, IEEE, 2016, p. 5107-5112Conference paper (Refereed)
    Abstract [en]

    This paper considers the merging of appearance-based spatial knowledge among robots having compatible visual sensing. Each robot is assumed to retain its knowledge in its individual long-term spatial memory, where i) the place knowledge and their spatial relations are retained in an organized manner in place and map memories respectively; and ii) a 'place' refers to a spatial region as designated by a collection of associated appearances. In the proposed approach, each robot communicates with another robot, receives its memory and then merges the received knowledge with its own. The novelty of the merging process is that it is done in two stages: merging of place knowledge followed by merging of map knowledge. As each robot's place memory is processed as a whole or in portions, the merging process scales easily with respect to the amount and overlap of the appearance data. Furthermore, the merging can be done in a decentralized manner. Our experimental results with a team of three robots demonstrate that the resulting merged knowledge enables the robots to reason about learned places.

  • 171.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Dimarogonas, Dimos
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Multi-agent average consensus control with prescribed performance guarantees2012In: 2012 IEEE 51st Annual Conference on Decision and Control (CDC), IEEE , 2012, p. 2219-2225Conference paper (Refereed)
    Abstract [en]

    This work proposes a distributed control scheme for the state agreement problem which can guarantee prescribed performance for the system transient. In particular, i) we consider a set of agents that can exchange information according to a static communication graph, ii) we a priori define time-dependent constraints in the edge space (errors between agents that exchange information) and iii) we design a distributed controller to guarantee that the errors between the neighboring agents do not violate the constraints. Following this technique, the contributions are twofold: a) the convergence rate of the system and the communication structure of the agents' network, which are otherwise strictly coupled, can be decoupled, and b) the connectivity properties of the initially formed communication graph are rendered invariant by appropriately designing the prescribed performance bounds. It is also shown how the structure and the parameters of the prescribed performance controller can be chosen in the case of connected tree graphs and connected graphs with cycles. Simulation results validate the theoretically proven findings while highlighting the merit of the proposed prescribed performance agreement protocol as compared to the linear one.

  • 172.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Doulgeri, Zoe
    Aristotle University of Thessaloniki.
    Regressor-free prescribed performance robot tracking2013In: Robotica (Cambridge. Print), ISSN 0263-5747, E-ISSN 1469-8668Article in journal (Refereed)
    Abstract [en]

    Fast and robust tracking against unknown disturbances is required in many modern complex robotic structures and applications, for which knowledge of the full exact nonlinear system is unreasonable to assume. This paper proposes a regressor-free nonlinear controller of low complexity which ensures prescribed performance of the position tracking error subject to unknown endogenous and exogenous bounded dynamics, assuming that joint position and velocity measurements are available. It is theoretically shown and demonstrated by a simulation study that the proposed controller can guarantee tracking of the desired joint position trajectory with a priori determined accuracy, overshoot and speed of response. Preliminary experimental results on a simplified system are promising for validating the controller on more complex structures.

  • 173.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. Chalmers University of Technology, Sweden.
    Droukas, L.
    Doulgeri, Z.
    Operational space robot control for motion performance and safe interaction under Unintentional Contacts2017In: 2016 European Control Conference, ECC 2016, Institute of Electrical and Electronics Engineers Inc. , 2017, p. 407-412Conference paper (Refereed)
    Abstract [en]

    A control law achieving high-quality motion performance and compliant reaction to unintended contacts for robot manipulators is proposed in this work. It achieves prescribed performance evolution of the position error under disturbance forces up to a tunable level of magnitude. Beyond this level, it deviates from the desired trajectory, complying with what is now interpreted as an unintentional contact force, thus achieving enhanced safety by decreasing interaction forces. The controller is a passivity-based, model-based controller utilizing an artificial potential that induces vanishing vector fields. Simulation results with a three degrees of freedom (DOF) robot under the control of the proposed scheme verify the theoretical findings and illustrate motion performance and compliance under an external force of short duration, in comparison with a switched impedance scheme.

  • 174.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. Chalmers, Sweden.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Barrientos, Francisco Eli Vina
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    An Adaptive Control Approach for Opening Doors and Drawers Under Uncertainties2016In: IEEE Transactions on robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 32, no 1, p. 161-175Article in journal (Refereed)
    Abstract [en]

    We study the problem of robot interaction with mechanisms that afford one degree of freedom motion, e.g., doors and drawers. We propose a methodology for simultaneous compliant interaction and estimation of constraints imposed by the joint. Our method requires no prior knowledge of the mechanisms' kinematics, including the type of joint, prismatic or revolute. The method consists of a velocity controller that relies on force/torque measurements and estimation of the motion direction, the distance, and the orientation of the rotational axis. It is suitable for velocity controlled manipulators with force/torque sensor capabilities at the end-effector. Forces and torques are regulated within given constraints, while the velocity controller ensures that the end-effector of the robot moves with a task-related desired velocity. We give proof that the estimates converge to the true values under valid assumptions on the grasp, and error bounds for setups with inaccuracies in control, measurements, or modeling. The method is evaluated in different scenarios involving opening a representative set of door and drawer mechanisms found in household environments.

  • 175.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Mapping Human Intentions to Robot Motions via Physical Interaction Through a Jointly-held Object2014In: Robot and Human Interactive Communication, 2014 RO-MAN: The 23rd IEEE International Symposium on, 2014, p. 391-397Conference paper (Refereed)
    Abstract [en]

    In this paper we consider the problem of human-robot collaborative manipulation of an object, where the human is active in controlling the motion, and the robot passively follows the human's lead. Assuming that the human grasp of the object only allows for the transfer of forces and not torques, there is an ambiguity as to whether the human desires translation or rotation. We analyze different approaches to this problem both theoretically and in experiments. This leads to the proposal of a control methodology that switches between two different admittance control modes, based on the magnitude of the measured force, to achieve disambiguation of the rotation/translation problem.

    Download full text (pdf)
    Roman2014Karayiannidis
  • 176.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Vina, Francisco
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Online Kinematics Estimation for Active Human-Robot Manipulation of Jointly Held Objects2013In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE , 2013, p. 4872-4878Conference paper (Refereed)
    Abstract [en]

    This paper introduces a method for estimating the constraints imposed by a human agent on a jointly manipulated object. These estimates can be used to infer knowledge of where the human is grasping an object, enabling the robot to plan trajectories for manipulating the object while subject to the constraints. We describe the method in detail, motivate its validity theoretically, and demonstrate its use in co-manipulation tasks with a real robot.

    Download full text (pdf)
    iros2013karayiannidis
  • 177.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Vina, Francisco
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Design of force-driven online motion plans for door opening under uncertainties2012In: Workshop on Real-time Motion Planning: Online, Reactive, and in Real-time, 2012Conference paper (Refereed)
    Abstract [en]

    The problem of door opening is fundamental for household robotic applications. Domestic environments are generally less structured than industrial environments and thus several types of uncertainties associated with the dynamics and kinematics of a door must be dealt with to achieve successful opening. This paper proposes a method that can open doors without prior knowledge of the door kinematics. The proposed method can be implemented on a velocity-controlled manipulator with force sensing capabilities at the end-effector. The velocity reference is designed by using feedback of force measurements while constraint and motion directions are updated online based on adaptive estimates of the position of the door hinge. The online estimator is appropriately designed in order to identify the unknown directions. The proposed scheme has theoretically guaranteed performance which is further demonstrated in experiments on a real robot. Experimental results additionally show the robustness of the proposed method under disturbances introduced by the motion of the mobile platform.

  • 178.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Vina, Francisco
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Interactive perception and manipulation of unknown constrained mechanisms using adaptive control2013In: ICRA 2013 Mobile Manipulation Workshop on Interactive Perception, 2013Conference paper (Refereed)
  • 179.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Vina, Francisco
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Model-free robot manipulation of doors and drawers by means of fixed-grasps2013In: 2013 IEEE International Conference on Robotics and Automation (ICRA), New York: IEEE , 2013, p. 4485-4492Conference paper (Refereed)
    Abstract [en]

    This paper addresses the problem of robot interaction with objects attached to the environment through joints such as doors or drawers. We propose a methodology that requires no prior knowledge of the objects' kinematics, including the type of joint - either prismatic or revolute. The method consists of a velocity controller which relies on force/torque measurements and estimation of the motion direction, rotational axis and the distance from the center of rotation. The method is suitable for any velocity controlled manipulator with a force/torque sensor at the end-effector. The force/torque control regulates the applied forces and torques within given constraints, while the velocity controller ensures that the end-effector moves with a task-related desired tangential velocity. The paper also provides a proof that the estimates converge to the actual values. The method is evaluated in different scenarios typically met in a household environment.

    Download full text (pdf)
    icra2013Karayiannidis
  • 180.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Vina, Francisco
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    "Open Sesame!" Adaptive Force/Velocity Control for Opening Unknown Doors2012In: Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, IEEE , 2012, p. 4040-4047Conference paper (Refereed)
    Abstract [en]

    The problem of door opening is fundamental for robots operating in domestic environments. Since these environments are generally less structured than industrial environments, several types of uncertainties associated with the dynamics and kinematics of a door must be dealt with to achieve successful opening. This paper proposes a method that can open doors without prior knowledge of the door kinematics. The proposed method can be implemented on a velocity-controlled manipulator with force sensing capabilities at the end-effector. The method consists of a velocity controller which uses force measurements and estimates of the radial direction based on adaptive estimates of the position of the door hinge. The control action is decomposed into an estimated radial and tangential direction following the concept of hybrid force/motion control. A force controller acting within the velocity controller regulates the radial force to a desired small value while the velocity controller ensures that the end effector of the robot moves with a desired tangential velocity leading to task completion. This paper also provides a proof that the adaptive estimates of the radial direction converge to the actual radial vector. The performance of the control scheme is demonstrated in both simulation and on a real robot.

    Download full text (pdf)
    Iros2012Karayiannidis
  • 181.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Adaptive force/velocity control for opening unknown doors2012In: Robot Control, Volume 10, Part  1, 2012, p. 753-758Conference paper (Refereed)
    Abstract [en]

    The problem of door opening is fundamental for robots operating in domestic environments. Since these environments are generally unstructured, a robot must deal with several types of uncertainties associated with the dynamics and kinematics of a door to achieve successful opening. The present paper proposes a dynamic force/velocity controller which uses adaptive estimation of the radial direction based on adaptive estimates of the door hinge's position. The control action is decomposed into estimated radial and tangential directions, which are proved to converge to the corresponding actual values. The force controller uses reactive compensation of the tangential forces and regulates the radial force to a desired small value, while the velocity controller ensures that the robot's end-effector moves with a desired tangential velocity. The performance of the control scheme is demonstrated in simulation with a 2 DoF planar manipulator opening a door.

  • 182.
    Karlsson, Johan
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Georgiou, Tryphon T.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Signal analysis, moment problems & uncertainty measures2005In: IEEE Proceedings: Conference on Decision and Control (CDC), ISSN 0191-2216, p. 5710-5715Article in journal (Refereed)
    Abstract [en]

    Modern spectral estimation techniques often rely on second order statistics of a time-series to determine a power spectrum consistent with data. Such statistics provide moment constraints on the power spectrum. In this paper we study possible distance functions between spectra which permit a reasonable quantitative description of the uncertainty in moment problems. Typically, there is an infinite family of spectra consistent with given moments. A distance function between power spectra should permit estimating the diameter of the uncertainty family, a diameter which shrinks as new data accumulates. Abstract properties of such distance functions are discussed and certain specific options are put forth. These distance functions permit alternative descriptions of uncertainty in moment problems. While the paper focuses on the role of such measures in signal analysis, moment problems are ubiquitous in science and engineering, and the conclusions drawn herein are relevant over a wider spectrum of problems.

  • 183. Khan, M. S. L.
    et al.
    Li, Haibo
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID.
    Réhman, S. U.
    Expressive multimedia: Bringing action to physical world by dancing-tablet2015In: HCMC 2015 - Proceedings of the 2nd Workshop on Computational Models of Social Interactions: Human-Computer-Media Communication, co-located with ACM MM 2015, ACM Digital Library, 2015, p. 9-14Conference paper (Refereed)
    Abstract [en]

    The design practice based on the embodied interaction concept focuses on developing new user interfaces for computer devices that merge digital content with the physical world. In this work we have proposed a novel embodied-interaction-based design in which the 'action' information of the digital content is presented in the physical world. More specifically, we have mapped the 'action' information of the video content from the digital world into the physical world. The motivating example presented in this paper is our novel dancing-tablet, in which a tablet-PC dances to the rhythm of the song; hence the 'action' information is not just confined to a 2D flat display but is also expressed by it. This paper presents i) the hardware design of our mechatronic dancing-tablet platform, ii) the software algorithm for musical feature extraction and iii) an embodied computational model for mapping the 'action' information of the musical expression to the mechatronic platform. Our user study shows that the overall perception of audio-video music is enhanced by our dancing-tablet setup.

  • 184. Khan, Sheraz
    et al.
    Dometios, Athanasios
    Verginis, Christos
    Tzafestas, Costas
    Wollherr, Dirk
    Buss, Martin
    RMAP: a rectangular cuboid approximation framework for 3D environment mapping2014In: Autonomous Robots, ISSN 0929-5593, E-ISSN 1573-7527, Vol. 37, p. 261-277Article in journal (Refereed)
    Abstract [en]

    This paper presents a rectangular cuboid approximation framework (RMAP) for 3D mapping. The goal of RMAP is to provide computationally and memory efficient environment representations for 3D robotic mapping using axis-aligned rectangular cuboids (RC). This paper focuses on two aspects of the RMAP framework: (i) an occupancy grid approach and (ii) an RC approximation of 3D environments based on point cloud density. The RMAP occupancy grid is based on the R-tree data structure, which is composed of a hierarchy of RCs. The proposed approach is capable of generating probabilistic 3D representations with multiresolution capabilities. It reduces the memory complexity of large-scale 3D occupancy grids by avoiding explicit modelling of free space. In contrast to point cloud and fixed-resolution cell representations based on beam end-point observations, an approximation approach using point cloud density is presented. The proposed approach generates variable-sized RC approximations that are memory efficient for axis-aligned surfaces. Evaluation of the RMAP occupancy grid and approximation approach, based on computational and memory complexity on different datasets, shows the effectiveness of this framework for 3D mapping.

    Download full text (pdf)
    fulltext
  • 185.
    Kokic, Mia
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    Bohg, Jeannette
    Stanford Univ, Dept Comp Sci, Stanford, CA 94305 USA..
    Learning Task-Oriented Grasping From Human Activity Datasets2020In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 5, no 2, p. 3352-3359Article in journal (Refereed)
    Abstract [en]

    We propose to leverage a real-world, human activity RGB dataset to teach a robot Task-Oriented Grasping (TOG). We develop a model that takes as input an RGB image and outputs a hand pose and configuration as well as an object pose and shape. We follow the insight that jointly estimating hand and object poses increases accuracy compared to estimating these quantities independently of each other. Given the trained model, we process an RGB dataset to automatically obtain the data to train a TOG model. This model takes as input an object point cloud and outputs a suitable region for task-specific grasping. Our ablation study shows that training an object pose predictor with the hand pose information (and vice versa) is better than training without this information. Furthermore, our results on a real-world dataset show the applicability and competitiveness of our method over the state-of-the-art. Experiments with a robot demonstrate that our method can allow a robot to perform TOG on novel objects.

  • 186.
    Kootstra, Geert
    et al.
    Faculty of Mathematics and Natural Sciences, University of Groningen, The Netherlands.
    de Jong, Sjoerd
    Faculty of Mathematics and Natural Sciences, University of Groningen, The Netherlands.
    Schomaker, Lambert R. B.
    Faculty of Mathematics and Natural Sciences, University of Groningen, The Netherlands.
    Using local symmetry for landmark selection2009In: Computer Vision Systems, Springer , 2009, Vol. 5815, p. 94-103Chapter in book (Refereed)
    Abstract [en]

    Most visual Simultaneous Localization And Mapping (SLAM) methods use interest points as landmarks in their maps of the environment. Often the interest points are detected using contrast features, for instance those of the Scale Invariant Feature Transform (SIFT). The SIFT interest points, however, have problems with stability and noise robustness. Taking our inspiration from human vision, we therefore propose the use of local symmetry to select interest points. Our method, the MUlti-scale Symmetry Transform (MUST), was tested on a robot-generated database including ground-truth information to quantify SLAM performance. We show that interest points selected using symmetry are more robust to noise and contrast manipulations, have slightly better repeatability, and, above all, result in better overall SLAM performance.

    Download full text (pdf)
    kootstra09icvs.pdf
  • 187.
    Kootstra, Gert
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bergström, Niklas
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Fast and Automatic Detection and Segmentation of Unknown Objects2010In: Proceedings of the 10th IEEE-RAS International Conference on Humanoid Robots (Humanoids), IEEE , 2010, p. 442-447Conference paper (Refereed)
    Abstract [en]

    This paper focuses on the fast and automatic detection and segmentation of unknown objects in unknown environments. Many existing object detection and segmentation methods assume prior knowledge about the object or human interference. However, an autonomous system operating in the real world will often be confronted with previously unseen objects. To solve this problem, we propose a segmentation approach named Automatic Detection And Segmentation (ADAS). For the detection of objects, we use symmetry, one of the Gestalt principles for figure-ground segregation, to detect salient objects in a scene. From the initial seed, the object is segmented by iteratively applying graph cuts. We base the segmentation on both 2D and 3D cues: color, depth, and plane information. Instead of using a standard grid-based representation of the image, we use superpixels. Besides being a more natural representation, the use of superpixels greatly improves the processing time of the graph cuts and provides more noise-robust color and depth information. The results show that both the object-detection and the object-segmentation methods are successful and outperform existing methods.

    Download full text (pdf)
    kootstra10humanoids.pdf
  • 188.
    Kootstra, Gert
    et al.
    Faculty of Mathematics and Natural Sciences, University of Groningen, The Netherlands.
    de Boer, Bart
    Univesity of Amsterdam, The Netherlands.
    Tackling the Premature Convergence Problem in Monte-Carlo Localization2009In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 57, no 11, p. 1107-1118Article in journal (Refereed)
    Abstract [en]

    Monte-Carlo localization uses particle filtering to estimate the position of the robot. The method is known to suffer from the loss of potential positions when there is ambiguity present in the environment. Since many indoor environments are highly symmetric, this problem of premature convergence is problematic for indoor robot navigation. It is, however, rarely studied in particle filters. We introduce a number of so-called niching methods used in genetic algorithms, and implement them in a particle filter for Monte-Carlo localization. The experiments show a significant improvement in the diversity-maintaining performance of the particle filter.

    Download full text (pdf)
    kootstra09ras.pdf
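The niching idea can be sketched in a few lines. This is a simplified fitness-sharing variant with an assumed Gaussian sharing kernel, shown on 1-D particle positions; the article evaluates several genetic-algorithm niching methods, and this sketch is not the authors' implementation:

```python
import numpy as np

def shared_weights(particles, weights, sigma_share):
    """Fitness sharing, a niching method from genetic algorithms:
    each particle's weight is divided by the density of particles in
    its niche, so a heavily populated mode cannot absorb all samples
    during resampling and ambiguous position hypotheses survive."""
    diff = particles[:, None] - particles[None, :]
    niche_count = np.exp(-0.5 * (diff / sigma_share) ** 2).sum(axis=1)
    w = weights / niche_count
    return w / w.sum()

def systematic_resample(particles, weights, rng):
    """Standard low-variance (systematic) resampling."""
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    return particles[np.searchsorted(np.cumsum(weights), positions)]
```

With two equally supported but unequally populated hypotheses (as in a symmetric corridor), sharing equalizes the probability mass per mode, so systematic resampling no longer starves the minority cluster.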
  • 189.
    Kootstra, Gert
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Fast and Bottom-Up Object Detection and Segmentation using Gestalt Principles2011In: Proceedings of the International Conference on Robotics and Automation (ICRA), IEEE , 2011, p. 3423-3428Conference paper (Refereed)
    Abstract [en]

    In many scenarios, domestic robots will regularly encounter unknown objects. In such cases, top-down knowledge about the object for detection, recognition, and classification cannot be used. To learn about the object, or to be able to grasp it, bottom-up object segmentation is an important competence for the robot. Even when top-down knowledge is available, prior segmentation of the object can improve recognition and classification. In this paper, we focus on the problem of bottom-up detection and segmentation of unknown objects. Gestalt psychology studies the same phenomenon in human vision. We propose the utilization of a number of Gestalt principles. Our method starts by generating a set of hypotheses about the location of objects using symmetry. These hypotheses are then used to initialize the segmentation process. The main focus of the paper is on the evaluation of the resulting object segments using Gestalt principles to select segments with high figural goodness. The results show that the Gestalt principles can be successfully used for detection and segmentation of unknown objects. The results furthermore indicate that the Gestalt measures for the goodness of a segment correspond well with the objective quality of the segment. We exploit this to improve the overall segmentation performance.

    Download full text (pdf)
    kootstra11icra.pdf
  • 190.
    Kootstra, Gert
    et al.
    Faculty of Mathematics and Natural Sciences, University of Groningen, The Netherlands.
    Nederveen, Arco
    Faculty of Mathematics and Natural Sciences, University of Groningen, The Netherlands.
    de Boer, Bart
    Faculty of Mathematics and Natural Sciences, University of Groningen, The Netherlands.
    Paying Attention to Symmetry2008In: Proceedings of the British Machine Vision Conference (BMVC2008), The British Machine Vision Association and Society for Pattern Recognition , 2008, p. 1115-1125Conference paper (Refereed)
    Abstract [en]

    Humans are very sensitive to symmetry in visual patterns. Symmetry is detected and recognized very rapidly. While viewing symmetrical patterns, eye fixations are concentrated along the axis of symmetry or the symmetrical center of the patterns. This suggests that symmetry is a highly salient feature. Existing computational models of saliency, however, have mainly focused on contrast as a measure of saliency and do not take symmetry into account. In this paper, we discuss local symmetry as a measure of saliency. We developed a number of symmetry models and performed an eye-tracking study with human participants viewing photographic images to test the models. The performance of our symmetry models is compared with the contrast saliency model of Itti et al. [1]. The results show that the symmetry models better match the human data than the contrast model. This indicates that symmetry is a salient structural feature for humans, a finding which can be exploited in computer vision.

    Download full text (pdf)
    kootstra08bmvc.pdf
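A toy version of a local symmetry map conveys the idea of symmetry as saliency. The published models pair image gradients (in the style of Reisfeld's generalized symmetry transform); this sketch, with assumed names, only mirrors raw intensities around each pixel:

```python
import numpy as np

def mirror_symmetry_map(img, radius):
    """Toy local-symmetry saliency: at each pixel, score how well the
    horizontal neighbourhood matches its own mirror image (0 means
    perfect symmetry, more negative means less symmetric), then
    normalise the map to [0, 1]."""
    h, w = img.shape
    sal = np.zeros((h, w))
    for x in range(radius, w - radius):
        patch = img[:, x - radius:x + radius + 1].astype(float)
        sal[:, x] = -np.abs(patch - patch[:, ::-1]).sum(axis=1)
    border = sal[:, radius:w - radius].min()  # pad borders with the worst score
    sal[:, :radius] = border
    sal[:, w - radius:] = border
    sal -= sal.min()
    return sal / (sal.max() + 1e-12)
```

Pixels centred between mirrored structure score highest, which is the behaviour the eye-tracking study found in human fixations.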
  • 191.
    Kootstra, Gert
    et al.
    Faculty of Mathematics and Natural Sciences, University of Groningen, The Netherlands.
    Schomaker, Lambert R. B.
    Faculty of Mathematics and Natural Sciences, University of Groningen, The Netherlands.
    Using Symmetrical Regions-of-Interest to Improve Visual SLAM2009In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009), IEEE , 2009, p. 930-935Conference paper (Refereed)
    Abstract [en]

    Simultaneous Localization and Mapping (SLAM) based on visual information is a challenging problem. One of the main problems with visual SLAM is to find good-quality landmarks that can be detected despite noise and small changes in viewpoint. Many approaches use SIFT interest points as visual landmarks. The problem with the SIFT interest-point detector, however, is that it results in a large number of points, of which many are not stable across observations. We propose the use of local symmetry to find regions of interest instead. Symmetry is a stimulus that occurs frequently in the everyday environments where our robots operate, making it useful for SLAM. Furthermore, symmetrical forms are inherently redundant, and can therefore be detected more robustly. By using regions instead of points-of-interest, the landmarks are more stable. To test the performance of our model, we recorded a SLAM database with a mobile robot, and annotated the database by manually adding ground-truth positions. The results show that symmetrical regions-of-interest are less susceptible to noise, are more stable, and above all, result in better SLAM performance.

    Download full text (pdf)
    kootstra09iros.pdf
  • 192.
    Kootstra, Gert
    et al.
    Faculty of Mathematics and Natural Sciences, University of Groningen, The Netherlands.
    Ypma, Jelmer
    Faculty of Mathematics and Natural Sciences, University of Groningen, The Netherlands.
    de Boer, Bart
    Faculty of Mathematics and Natural Sciences, University of Groningen, The Netherlands.
    Active Exploration and Keypoint Clustering for Object Recognition2008In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2008), IEEE , 2008, p. 1005-1010Conference paper (Refereed)
    Abstract [en]

    Object recognition is a challenging problem for artificial systems. This is especially true for objects that are placed in cluttered and uncontrolled environments. To address this problem, we discuss an active approach to object recognition. Instead of passively observing objects, we use a robot to actively explore the objects. This enables the system to learn objects from different viewpoints and to actively select viewpoints for optimal recognition. Active vision furthermore simplifies the segmentation of the object from its background. As the basis for object recognition we use the Scale Invariant Feature Transform (SIFT). SIFT has been a successful method for image representation. However, a known drawback of SIFT is that the computational complexity of the algorithm increases with the number of keypoints. We discuss a growing-when-required (GWR) network for efficient clustering of the keypoints. The results show successful learning of 3D objects in real-world environments. The active approach is successful in separating the object from its cluttered background, and the active selection of viewpoints further increases the performance. Moreover, the GWR network strongly reduces the number of keypoints.

    Download full text (pdf)
    kootstra08icra.pdf
  • 193.
    Kootstra, Gert
    et al.
    Faculty of Mathematics and Natural Sciences, University of Groningen, The Netherlands.
    Ypma, Jelmer
    Faculty of Mathematics and Natural Sciences, University of Groningen, The Netherlands.
    de Boer, Bart
    Faculty of Mathematics and Natural Sciences, University of Groningen, The Netherlands.
    Exploring Objects for Recognition in the Real World2007In: Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO 2007), IEEE , 2007, p. 429-434Conference paper (Refereed)
    Abstract [en]

    Perception in natural systems is a highly active process. In this paper, we adopt the strategy of natural systems to explore objects for 3D object recognition using robots. The exploration of objects enables the system to learn objects from different viewpoints, which is essential for 3D object recognition. Exploration furthermore simplifies the segmentation of the object from its background, which is important for object learning in real-world environments, which are usually highly cluttered. We use the scale invariant feature transform (SIFT) as the basis for our object recognition system. We discuss our active vision approach to learn and recognize 3D objects in cluttered and uncontrolled environments. Furthermore, we propose a model to reduce the number of SIFT keypoints stored in the object database. It is a known drawback of SIFT that the computational complexity of the algorithm increases rapidly with the number of keypoints. We discuss the use of a growing-when-required (GWR) network, which is based on the Kohonen self-organizing feature map, for efficient clustering of the keypoints. The results show successful learning of 3D objects in a cluttered and uncontrolled environment. Moreover, the GWR network strongly reduces the number of keypoints.

    Download full text (pdf)
    kootstra07robio.pdf
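The growing-when-required network (Marsland et al., 2002) referenced in these two papers can be sketched as follows. This is a stripped-down variant: the habituation counters and edge aging of the full model are omitted, and node insertion is a plain activity test, so it should be read as an illustration rather than the authors' implementation:

```python
import numpy as np

def gwr_cluster(data, activity_thresh=0.8, eps_b=0.2):
    """Stripped-down growing-when-required clustering: a node is
    added only when the best-matching node explains the input poorly,
    so the number of nodes tracks the structure of the data rather
    than the number of samples."""
    nodes = [data[0].astype(float).copy()]
    for x in data[1:]:
        dists = [np.linalg.norm(x - n) for n in nodes]
        b = int(np.argmin(dists))
        activity = np.exp(-dists[b])  # close to 1 for a good match
        if activity < activity_thresh:
            # poor match: insert a new node between input and best node
            nodes.append((x + nodes[b]) / 2.0)
        else:
            # good match: adapt the best node towards the input
            nodes[b] += eps_b * (x - nodes[b])
    return np.array(nodes)
```

Applied to keypoint descriptors, this is what caps the database size: repeated observations of the same local structure update an existing node instead of adding yet another descriptor.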
  • 194. Kostavelis, I.
    et al.
    Boukas, E.
    Nalpantidis, Lazaros
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Gasteratos, A.
    Path tracing on polar depth maps for robot navigation2012In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer Berlin/Heidelberg, 2012, p. 395-404Conference paper (Refereed)
    Abstract [en]

    In this paper, a Cellular Automata-based (CA) path estimation algorithm suitable for safe robot navigation is presented. The proposed method combines well-established 3D vision techniques with CA operations and traces a collision-free route from the foot of the robot to the horizon of a scene. First, the depth map of the scene is obtained and a polar transformation is applied. A v-disparity calculation step is applied to the initial depth map, separating the ground plane from the obstacles. In the next step, a CA floor field is formed, representing all the distances from the robot to the traversable regions in the scene. The target point that the robot should move towards is then determined, and an additional CA routine is applied to the floor field, revealing a traversable route that the robot should follow to reach its target location.
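The v-disparity step admits a compact sketch: each row of the v-disparity image is a histogram of the disparities in the corresponding image row, so the ground plane maps to a slanted line and vertical obstacles to near-vertical segments (function name assumed, not the authors' code):

```python
import numpy as np

def v_disparity(disparity, max_disp):
    """One disparity histogram per image row. In the resulting image
    the ground plane shows up as a slanted line and vertical
    obstacles as near-vertical segments, which is what makes the
    ground/obstacle separation straightforward."""
    rows = disparity.shape[0]
    vdisp = np.zeros((rows, max_disp + 1), dtype=np.int32)
    for v in range(rows):
        d = disparity[v]
        d = d[(d >= 0) & (d <= max_disp)].astype(int)
        vdisp[v] = np.bincount(d, minlength=max_disp + 1)
    return vdisp
```

Fitting a line to the dominant slanted structure then yields the ground-plane model; pixels whose disparity deviates from it are obstacle candidates.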

  • 195.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    From active perception to deep learning2018In: SCIENCE ROBOTICS, ISSN 2470-9476, Vol. 3, no 23, article id eaav1778Article in journal (Other academic)
  • 196.
    Kragic, Danica
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Daniilidis, Kostas
    University of Pennsylvania, Department of Computer and Information Science, 3330 Walnut Street, Philadelphia, PA 19104, United States.
    3-D vision for navigation and grasping2016In: Springer Handbook of Robotics, Springer International Publishing , 2016, p. 811-824Chapter in book (Other academic)
    Abstract [en]

    In this chapter, we describe algorithms for three-dimensional (3-D) vision that help robots accomplish navigation and grasping. To model cameras, we start with the basics of perspective projection and distortion due to lenses. This projection from a 3-D world to a two-dimensional (2-D) image can be inverted only by using information from the world or multiple 2-D views. If we know the 3-D model of an object or the location of 3-D landmarks, we can solve the pose estimation problem from one view. When two views are available, we can compute the 3-D motion and triangulate to reconstruct the world up to a scale factor. When multiple views are given, either as sparse viewpoints or a continuous incoming video, then the robot path can be computed and point tracks can yield a sparse 3-D representation of the world. In order to grasp objects, we can estimate the 3-D pose of the end effector or the 3-D coordinates of the graspable points on the object.
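One of the two-view operations mentioned in the abstract, triangulation, can be sketched with the standard homogeneous DLT formulation (a common textbook method, not necessarily the chapter's exact derivation):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) two-view triangulation: given 3x4 projection
    matrices P1, P2 and a point correspondence x1 <-> x2 in image
    coordinates, recover the 3-D point as the least-squares solution
    of the homogeneous system A X = 0 (smallest singular vector)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenise
```

With a calibrated stereo pair the reconstruction is metric; with an unknown baseline it is determined only up to the scale factor mentioned above.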

  • 197.
    Kragic, Danica
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Hager, Gregory D.
    Special Issue on Robotic Vision2012In: The international journal of robotics research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 31, no 4, p. 379-380Article in journal (Refereed)
  • 198. Krug, R.
    et al.
    Lilienthal, A. J.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Bekiroglu, Y.
    Analytic grasp success prediction with tactile feedback2016In: Proceedings - IEEE International Conference on Robotics and Automation, Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 165-171Conference paper (Refereed)
    Abstract [en]

    Predicting grasp success is useful for avoiding failures in many robotic applications. Based on reasoning in wrench space, we address the question of how well analytic grasp success prediction works if tactile feedback is incorporated. Tactile information can alleviate contact placement uncertainties and facilitates contact modeling. We introduce a wrench-based classifier and evaluate it on a large set of real grasps. The key finding of this work is that exploiting tactile information allows wrench-based reasoning to perform on a level with existing methods based on learning or simulation. Different from these methods, the suggested approach has no need for training data, requires little modeling effort and is computationally efficient. Furthermore, our method affords task generalization by considering the capabilities of the grasping device and expected disturbance forces/moments in a physically meaningful way.
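For background, the wrench-space reasoning referred to in the abstract is commonly grounded in the epsilon quality metric. The sketch below is a hedged toy version for planar grasps: it approximates the metric by sampling support directions instead of building the exact convex hull, and it is not the classifier proposed in the paper:

```python
import numpy as np

def unit_directions(n):
    # quasi-uniform unit vectors on the sphere (Fibonacci lattice)
    k = np.arange(n)
    z = 1 - 2 * (k + 0.5) / n
    r = np.sqrt(1 - z * z)
    th = np.pi * (1 + 5 ** 0.5) * k
    return np.stack([r * np.cos(th), r * np.sin(th), z], axis=1)

def contact_wrenches(p, normal, mu):
    """Primitive unit-force wrenches (fx, fy, torque) spanning the
    friction cone of a planar point contact at p: the two cone edges
    are normal +/- mu * tangent, normalised to unit force."""
    p = np.asarray(p, float)
    normal = np.asarray(normal, float)
    t = np.array([-normal[1], normal[0]])
    ws = []
    for sign in (1.0, -1.0):
        f = normal + sign * mu * t
        f = f / np.linalg.norm(f)
        ws.append([f[0], f[1], p[0] * f[1] - p[1] * f[0]])
    return np.array(ws)

def epsilon_quality(wrenches, n_dirs=5000):
    """Sampled approximation of the classic epsilon metric: the
    radius of the largest origin-centred ball inside the convex hull
    of the primitive wrenches, i.e. the worst-case resistible
    disturbance. The hull's support in direction u is max_i w_i . u,
    so the metric is the minimum support over (sampled) unit
    directions; a non-positive value means no force closure."""
    U = unit_directions(n_dirs)
    return (U @ wrenches.T).max(axis=1).min()
```

Two opposing frictional contacts on a unit disc give a positive value (force closure), while a single contact leaves unresistable disturbance directions and scores negative.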

  • 199.
    Krug, Robert
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Bekiroglu, Yasemin
    Vicarious AI, San Francisco, CA USA..
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Roa, Maximo A.
    German Aerosp Ctr DLR, Inst Robot & Mechatron, D-82234 Wessling, Germany..
    Evaluating the Quality of Non-Prehensile Balancing Grasps2018In: 2018 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), IEEE Computer Society, 2018, p. 4215-4220Conference paper (Refereed)
    Abstract [en]

    Assessing grasp quality and, subsequently, predicting grasp success is useful for avoiding failures in many autonomous robotic applications. In addition, interest in non-prehensile grasping and manipulation has been growing, as it offers the potential for a large increase in dexterity. However, while force-closure grasping has been the subject of intense study for many years, few existing works have considered quality metrics for non-prehensile grasps, and no studies exist to validate them in practice. In this work we take a real-world data set of non-prehensile balancing grasps and use it to experimentally validate a wrench-based quality metric by means of its grasp-success prediction capability. The overall accuracy of up to 84% is encouraging and in line with existing results for force-closure grasps.

  • 200.
    Krug, Robert
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Bekiroglu, Yasemin
    Roa, Maximo
    Grasp Quality Evaluation Done Right: How Assumed Contact Force Bounds Affect Wrench-Based Quality Metrics2017Conference paper (Refereed)
    Abstract [en]

    Wrench-based quality metrics play an important role in many applications such as grasp planning or grasp success prediction. In this work, we study the following discrepancy which is frequently overlooked in practice: the quality metrics are commonly computed under the assumption of sum-magnitude bounded contact forces, but the corresponding grasps are executed by a fully actuated device where the contact forces are limited independently. By means of experiments carried out in simulation and on real hardware, we show that in this setting the values of these metrics are severely underestimated. This can lead to erroneous conclusions regarding the actual capabilities of the grasps under consideration. Our findings highlight the importance of matching the physical properties of the task and the grasping device with the chosen quality metrics.

    Download full text (pdf)
    fulltext
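The discrepancy described in the abstract can be reproduced numerically in a small example. Under a sum-magnitude force bound the grasp wrench space is the convex hull of the union of the per-contact wrench sets, whereas independent per-contact limits yield their Minkowski sum; the support function of the first is the maximum of the per-contact supports, that of the second their sum. A hedged toy computation for two opposing planar contacts (sampled directions, assumed names, not the paper's experimental setup):

```python
import numpy as np

def unit_directions(n):
    # quasi-uniform unit vectors on the sphere (Fibonacci lattice)
    k = np.arange(n)
    z = 1 - 2 * (k + 0.5) / n
    r = np.sqrt(1 - z * z)
    th = np.pi * (1 + 5 ** 0.5) * k
    return np.stack([r * np.cos(th), r * np.sin(th), z], axis=1)

# primitive unit-force wrenches (fx, fy, torque) for two opposing
# frictional contacts on a unit disc, friction coefficient mu = 0.5
s = 1.0 / np.sqrt(1.25)
W1 = s * np.array([[-1.0,  0.5,  0.5], [-1.0, -0.5, -0.5]])  # contact at (+1, 0)
W2 = s * np.array([[ 1.0,  0.5, -0.5], [ 1.0, -0.5,  0.5]])  # contact at (-1, 0)

U = unit_directions(20000)
s1 = (U @ W1.T).max(axis=1)   # per-contact support functions
s2 = (U @ W2.T).max(axis=1)
# sum-magnitude bound: GWS = conv(W1 u W2), support = max of supports
q_sum = np.maximum(s1, s2).min()
# independent bounds: GWS = Minkowski sum of per-contact hulls,
# support = sum of supports (max(0, .) since a contact may apply zero force)
q_ind = (np.maximum(s1, 0.0) + np.maximum(s2, 0.0)).min()
```

Here `q_ind` exceeds `q_sum`: the quality computed under the sum-magnitude assumption understates what a fully actuated hand with independent force limits can actually resist, which is the discrepancy the paper studies.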