In this work we summarize the solution developed by Team KTH for the Amazon Picking Challenge 2016 in Leipzig, Germany. The competition simulated a warehouse automation scenario and was divided into two tasks: a picking task, where a robot picks items from a shelf and places them in a tote, and a stowing task, the inverse, where the robot picks items from a tote and places them in a shelf. We describe our approach to the problem, starting from a high-level overview of our system and later delving into the details of our perception pipeline and our strategy for manipulation and grasping. The solution was implemented using a Baxter robot equipped with additional sensors.
In this chapter we summarize the solution developed by team KTH for the Amazon Picking Challenge 2016 in Leipzig, Germany. The competition, which simulated a warehouse automation scenario, was divided into two parts: a picking task, where the robot picks items from a shelf and places them into a tote, and a stowing task, where the robot picks items from a tote and places them in a shelf. We describe our approach to the problem starting with a high-level overview of the system, delving later into the details of our perception pipeline and strategy for manipulation and grasping. The hardware platform used in our solution consists of a Baxter robot equipped with multiple vision sensors.
Robotic assembly in unstructured environments is a challenging task due to the added uncertainties. These can be mitigated through the employment of assembly systems, which offer a modular approach to the assembly problem via the composition of primitives. In this paper, we use a dual-arm manipulator to execute a folding assembly primitive. When executing a folding primitive, two parts are brought into rigid contact and subsequently translated and rotated. A switched controller is employed to ensure that the relative motion of the parts follows the desired model while regulating the contact forces. The control is complemented with an estimator based on a Kalman filter, which tracks the contact point between parts based on force and torque measurements. Experimental results are provided, and the effectiveness of the control and contact point estimation is shown.
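The contact-point tracking idea can be sketched compactly. For a rigid contact, the torque measured at a wrist force/torque sensor satisfies tau = r x f, which is linear in the unknown contact point r, so a Kalman filter with a random-walk state can track r online. The class interface and noise values below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

class ContactPointKF:
    """Kalman filter tracking the contact point r between two parts from
    wrist force/torque measurements, using tau = r x f = -skew(f) @ r."""
    def __init__(self, process_var=1e-6, meas_var=1e-6):
        self.x = np.zeros(3)              # estimated contact point [m]
        self.P = np.eye(3)                # estimate covariance
        self.Q = process_var * np.eye(3)  # slowly drifting contact point
        self.R = meas_var * np.eye(3)     # torque measurement noise

    def update(self, f, tau):
        self.P = self.P + self.Q          # predict: random-walk state
        H = -skew(f)                      # measurement model: tau = H @ r
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (tau - H @ self.x)
        self.P = (np.eye(3) - K @ H) @ self.P
        return self.x
```

Note that a single wrench leaves the component of r along f unobservable; the filter converges because the applied forces change direction as the parts move.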
In this paper we present the system we developed for the Amazon Picking Challenge 2015, and discuss some of the lessons learned that may prove useful to researchers and future teams developing autonomous robot picking systems. For the competition we used a PR2 robot, which is a dual arm robot research platform equipped with a mobile base and a variety of 2D and 3D sensors. We adopted a behavior tree to model the overall task execution, where we coordinate the different perception, localization, navigation, and manipulation activities of the system in a modular fashion. Our perception system, which detects and localizes the target objects in the shelf, consisted of two components: one for detecting textured rigid objects using the SimTrack vision system, and one for detecting non-textured or non-rigid objects using RGB-D features. In addition, we designed a set of grasping strategies to enable the robot to reach and grasp objects inside the confined volume of shelf bins. The competition was a unique opportunity to integrate the work of various researchers at the Robotics, Perception and Learning laboratory (formerly the Computer Vision and Active Perception Laboratory, CVAP) of KTH, and it tested the performance of our robotic system and defined the future direction of our research.
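The behavior-tree coordination pattern mentioned above can be illustrated with a minimal sketch. The node types and the example tree are illustrative of the general pattern, not the actual competition code:

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Sequence:
    """Ticks children left to right; stops at the first non-SUCCESS."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS

class Selector:
    """Fallback node: ticks children until one does not fail."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.FAILURE:
                return status
        return Status.FAILURE

class Action:
    """Leaf node wrapping a callable that returns a Status."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return self.fn()
```

A hypothetical picking subtree such as `Sequence(detect_object, Selector(grasp_front, grasp_top), place_in_tote)` then tries an alternative grasp strategy automatically when the first one fails, which is what makes the coordination modular.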
We study the problem of robot interaction with mechanisms that afford one degree of freedom motion, e.g., doors and drawers. We propose a methodology for simultaneous compliant interaction and estimation of constraints imposed by the joint. Our method requires no prior knowledge of the mechanisms' kinematics, including the type of joint, prismatic or revolute. The method consists of a velocity controller that relies on force/torque measurements and estimation of the motion direction, the distance, and the orientation of the rotational axis. It is suitable for velocity controlled manipulators with force/torque sensor capabilities at the end-effector. Forces and torques are regulated within given constraints, while the velocity controller ensures that the end-effector of the robot moves with a task-related desired velocity. We give proof that the estimates converge to the true values under valid assumptions on the grasp, and error bounds for setups with inaccuracies in control, measurements, or modeling. The method is evaluated in different scenarios involving opening a representative set of door and drawer mechanisms found in household environments.
One of the big challenges for robots working outside of traditional industrial settings is the ability to robustly and flexibly grasp and manipulate tools for various tasks. When a tool is interacting with another object during task execution, several problems arise: the tool can be partially or completely occluded from the robot's view, and it can slip or shift in the robot's hand, so the robot may lose track of the tool's exact position in the hand. There is thus a need for online calibration and/or recalibration of the tool. In this paper, we present a model-free online tool-tip calibration method that uses force/torque measurements and an adaptive estimation scheme to estimate the point of contact between a tool and the environment. An adaptive force control component guarantees that interaction forces are limited even before the contact point estimate has converged. We also show how to simultaneously estimate the location and normal direction of the surface being touched by the tool-tip as the contact point is estimated. The stability of the overall scheme and the convergence of the estimated parameters are theoretically proven, and the performance is evaluated in experiments on a real robot.
This paper introduces a method for estimating the constraints imposed by a human agent on a jointly manipulated object. These estimates can be used to infer knowledge of where the human is grasping an object, enabling the robot to plan trajectories for manipulating the object while subject to the constraints. We describe the method in detail, motivate its validity theoretically, and demonstrate its use in co-manipulation tasks with a real robot.
The problem of door opening is fundamental for household robotic applications. Domestic environments are generally less structured than industrial environments and thus several types of uncertainties associated with the dynamics and kinematics of a door must be dealt with to achieve successful opening. This paper proposes a method that can open doors without prior knowledge of the door kinematics. The proposed method can be implemented on a velocity-controlled manipulator with force sensing capabilities at the end-effector. The velocity reference is designed by using feedback of force measurements while constraint and motion directions are updated online based on adaptive estimates of the position of the door hinge. The online estimator is appropriately designed in order to identify the unknown directions. The proposed scheme has theoretically guaranteed performance which is further demonstrated in experiments on a real robot. Experimental results additionally show the robustness of the proposed method under disturbances introduced by the motion of the mobile platform.
This paper addresses the problem of robot interaction with objects attached to the environment through joints such as doors or drawers. We propose a methodology that requires no prior knowledge of the objects' kinematics, including the type of joint - either prismatic or revolute. The method consists of a velocity controller which relies on force/torque measurements and estimation of the motion direction, rotational axis and the distance from the center of rotation. The method is suitable for any velocity controlled manipulator with a force/torque sensor at the end-effector. The force/torque control regulates the applied forces and torques within given constraints, while the velocity controller ensures that the end-effector moves with a task-related desired tangential velocity. The paper also provides a proof that the estimates converge to the actual values. The method is evaluated in different scenarios typically met in a household environment.
The problem of door opening is fundamental for robots operating in domestic environments. Since these environments are generally less structured than industrial environments, several types of uncertainties associated with the dynamics and kinematics of a door must be dealt with to achieve successful opening. This paper proposes a method that can open doors without prior knowledge of the door kinematics. The proposed method can be implemented on a velocity-controlled manipulator with force sensing capabilities at the end-effector. The method consists of a velocity controller which uses force measurements and estimates of the radial direction based on adaptive estimates of the position of the door hinge. The control action is decomposed into an estimated radial and tangential direction following the concept of hybrid force/motion control. A force controller acting within the velocity controller regulates the radial force to a desired small value while the velocity controller ensures that the end effector of the robot moves with a desired tangential velocity leading to task completion. This paper also provides a proof that the adaptive estimates of the radial direction converge to the actual radial vector. The performance of the control scheme is demonstrated both in simulation and on a real robot.
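To make the hinge-estimation problem concrete: while the gripper holds the handle, the end-effector moves on a circle about the unknown hinge, so the hinge position can also be recovered from recorded end-effector positions by a batch least-squares circle fit (the Kåsa method). This is a simple offline alternative for illustration, not the online adaptive estimator described above:

```python
import numpy as np

def fit_hinge(points):
    """Least-squares circle fit (Kasa method) to planar end-effector
    positions recorded while the door swings; returns (hinge center,
    door radius). Solves x^2 + y^2 = 2*a*x + 2*b*y + c for (a, b, c),
    which is linear in the unknowns."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2.0 * x, 2.0 * y, np.ones(len(x))])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    center = np.array([a, b])
    radius = np.sqrt(c + a**2 + b**2)
    return center, radius
```

Given the estimated center, the commanded tangential direction at end-effector position p is simply the unit vector perpendicular to p minus the center, which is what the hybrid force/motion decomposition needs.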
In this work we propose a sliding mode controller for in-hand manipulation that repositions a tool in the robot's hand by using gravity and controlling the slippage of the tool. In our approach, the robot holds the tool with a pinch grasp and we model the system as a link attached to the gripper via a passive revolute joint with friction, i.e., the grasp only affords rotational motions of the tool around a given axis of rotation. The robot controls the slippage by varying the opening between the fingers in order to allow the tool to move to the desired angular position following a reference trajectory. We show experimentally how the proposed controller achieves convergence to the desired tool orientation under variations of the tool's inertial parameters.
Autonomous grasping and manipulation of tools enables robots to perform a large variety of tasks in unstructured environments such as households. Many common household tasks involve controlling the motion of the tip of a tool while it is in contact with another object. Thus, for these types of tasks the robot requires knowledge of the location of the contact point while it is executing the task in order to accomplish the manipulation objective. In this work we propose an integral adaptive control law that uses force/torque measurements to estimate online the location of the contact point between the tool manipulated by the robot and the surface which the tool touches.
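The underlying estimation problem can be sketched as follows. For a rigid tool, the wrist-measured torque satisfies tau = r x f, which is linear in the unknown contact point r, so an online update driven by the torque prediction error converges under persistent excitation. The update below is a plain normalized gradient step, a simplification of the integral adaptive law described above, and the gain is an illustrative choice:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

class ContactPointEstimator:
    """Online gradient-type estimator for the tool-tip contact point r,
    using the rigid-body relation tau = r x f = -skew(f) @ r."""
    def __init__(self, gamma=0.5):
        self.r_hat = np.zeros(3)   # current contact-point estimate [m]
        self.gamma = gamma         # adaptation gain (illustrative)

    def step(self, f, tau):
        H = -skew(f)               # tau = H @ r, linear in r
        e = tau - H @ self.r_hat   # torque prediction error
        # normalized gradient step keeps the update well scaled
        self.r_hat = self.r_hat + self.gamma * (H.T @ e) / (1.0 + f @ f)
        return self.r_hat
```

Each measurement only constrains the components of r perpendicular to the current force, so convergence relies on the force direction varying during the task.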
Robotic manipulators today are mostly constrained to perform fixed, repetitive tasks. Engineers design the robot's workcell specifically tailored to the task, minimizing all possible uncertainties such as the location of tools and parts that the robot manipulates. However, autonomous robots must be capable of manipulating novel objects with unknown physical properties such as their inertial parameters, friction and shape. In this thesis we address the problem of uncertainty connected to kinematic constraints and friction forces in several robotic manipulation tasks. We design adaptive controllers for opening one degree of freedom mechanisms, such as doors and drawers, under the presence of uncertainty in the kinematic parameters of the system. Furthermore, we formulate adaptive estimators for determining the location of the contact point between a tool grasped by the robot and the environment in manipulation tasks where the robot needs to exert forces with the tool on another object, as in the case of screwing or drilling. We also propose a learning framework based on Gaussian Process regression and dual arm manipulation to estimate the static friction properties of objects. The second problem we address in this thesis is related to the mechanical simplicity of most robotic grippers available in the market. Their lower cost and higher robustness compared to more mechanically advanced hands make them attractive for industrial and research robots. However, the simple mechanical design restricts them from performing in-hand manipulation, i.e. repositioning of objects in the robot's hand, by using the fingers to push, slide and roll the object. Researchers have thus proposed to use extrinsic dexterity instead, i.e. to exploit resources and features of the environment, such as gravity or inertial forces, that can help the robot to perform regrasps. Given that the robot must then interact with the environment, the problem of uncertainty becomes highly relevant.
We propose controllers for performing pivoting, i.e. reorienting the grasped object in the robot’s hand, using gravity and controlling the friction exerted by the fingertips by varying the grasping force.
In this work we present an adaptive control approach for pivoting, which is an in-hand manipulation maneuver that consists of rotating a grasped object to a desired orientation relative to the robot's hand. We perform pivoting by means of gravity, allowing the object to rotate between the fingers of a one degree of freedom gripper and controlling the gripping force to ensure that the object follows a reference trajectory and arrives at the desired angular position. We use a visual pose estimation system to track the pose of the object and force measurements from tactile sensors to control the gripping force. The adaptive controller employs an update law that accommodates for errors in the friction coefficient, which is one of the most common sources of uncertainty in manipulation. Our experiments confirm that the proposed adaptive controller successfully pivots a grasped object in the presence of uncertainty in the object's friction parameters.
Object grasping is commonly followed by some form of object manipulation – either when using the grasped object as a tool or actively changing its position in the hand through in-hand manipulation to afford further interaction. In this process, slippage may occur due to inappropriate contact forces, various types of noise and/or due to the unexpected interaction or collision with the environment. In this paper, we study the problem of identifying continuous bounds on the forces and torques that can be applied on a grasped object before slippage occurs. We model the problem as kinesthetic rather than cutaneous learning given that the measurements originate from a wrist mounted force-torque sensor. Given the continuous output, this regression problem is solved using a Gaussian Process approach. We demonstrate a dual armed humanoid robot that can autonomously learn force and torque bounds and use these to execute actions on objects such as sliding and pushing. We show that the model can be used not only for the detection of maximum allowable forces and torques but also for potentially identifying what types of tasks, denoted as manipulation affordances, a specific grasp configuration allows. The latter can then be used to either avoid specific motions or as a simple step of achieving in-hand manipulation of objects through interaction with the environment.
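Since the output is continuous, a standard Gaussian Process regressor fits the problem naturally. Below is a minimal NumPy sketch of GP regression with an RBF kernel; the kernel, its hyperparameters, and the idea of regressing a force bound against a scalar input (e.g. the pushing direction) are illustrative assumptions, not the paper's model:

```python
import numpy as np

def rbf(X1, X2, ell=0.5, sf=1.0):
    """Squared-exponential kernel between two sets of row-vector inputs."""
    d = X1[:, None, :] - X2[None, :, :]
    return sf**2 * np.exp(-0.5 * np.sum(d**2, axis=-1) / ell**2)

class GPR:
    """Minimal Gaussian Process regression, e.g. for learning a
    continuous force bound as a function of the pushing direction."""
    def __init__(self, X, y, noise=1e-2):
        self.X = X
        K = rbf(X, X) + noise * np.eye(len(X))
        self.L = np.linalg.cholesky(K)
        self.alpha = np.linalg.solve(self.L.T, np.linalg.solve(self.L, y))

    def predict(self, Xs):
        """Posterior mean and variance at the query inputs Xs."""
        Ks = rbf(Xs, self.X)
        mean = Ks @ self.alpha
        v = np.linalg.solve(self.L, Ks.T)
        var = rbf(Xs, Xs).diagonal() - np.sum(v**2, axis=0)
        return mean, var
```

The posterior variance is what makes the GP attractive here: far from the training grasps the predicted bound is uncertain, so the robot can act conservatively rather than trust an extrapolated limit.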
In this paper, we present a technique for online generation of dual arm trajectories using constraint-based programming with bound margins. Using this formulation, we take both equality and inequality constraints into account, in a way that incorporates both feedback and feedforward terms, enabling e.g. tracking of timed trajectories in a new way. The technique is applied to a dual arm manipulator performing a bi-manual task. We present experimental validation of the approach, including comparisons between simulations and real experiments of a complex bimanual tracking task. We also show how to add force feedback to the framework, to account for modeling errors in the system. We compare the results with and without feedback, and show how the resulting trajectory is modified to achieve the prescribed interaction forces.
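One control cycle of such a constraint-based tracking task can be sketched as a bounded least-squares problem: minimize the deviation from a task-space velocity built from a feedforward term plus a feedback correction, subject to joint-velocity limits. The sketch below uses SciPy's `lsq_linear` as a stand-in QP solver and handles only box bounds; the paper's formulation with general inequality constraints and bound margins is richer than this:

```python
import numpy as np
from scipy.optimize import lsq_linear

def dq_command(J, x_err, xdot_ff, dq_max, K=2.0):
    """One cycle of constraint-based velocity control: find joint
    velocities dq minimizing || J @ dq - (xdot_ff + K * x_err) ||
    subject to |dq| <= dq_max. xdot_ff is the feedforward velocity of
    the timed reference trajectory; K * x_err is the feedback term
    correcting tracking drift (gain K is an illustrative choice)."""
    target = xdot_ff + K * x_err
    res = lsq_linear(J, target, bounds=(-dq_max, dq_max))
    return res.x
```

When the bounds are inactive the command reduces to the usual resolved-rate law; when a joint saturates, the least-squares solution redistributes the task error over the remaining joints instead of simply clipping.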