Robotic mobile manipulation in unstructured environments requires the integration of a number of key research areas such as localization, navigation, object recognition, visual tracking/servoing, grasping, and object manipulation. It has been demonstrated that, given these capabilities, a robust system can be designed through simple sequencing of basic skills [19]. To provide the robustness and flexibility required of a robotic system operating in unstructured and dynamic everyday environments, it is important to consider a wide range of individual skills that use different sensory modalities. In this work, we consider a combination of deliberative and reactive control, together with the use of multiple sensory modalities, for the modeling and execution of manipulation tasks. Special consideration is given to the design of a vision system for object recognition and scene segmentation, as well as to learning principles for grasping.
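The idea of building a task by sequencing basic skills, with a deliberative layer stepping through the plan and each skill acting reactively, can be sketched as follows. This is a minimal illustrative sketch, not the system from the thesis: the `Skill` class, the skill names, and the retry policy are all assumptions made for the example.

```python
# Hypothetical sketch: a deliberative layer sequencing reactive basic skills.
# Skill names, the Skill class, and the world-model dictionary are illustrative
# assumptions, not the architecture described in the thesis.
from typing import Callable, Dict, List


class Skill:
    """A basic skill: a reactive behaviour with a success check."""

    def __init__(self, name: str, execute: Callable[[Dict], bool]):
        self.name = name
        self.execute = execute  # returns True on success


def run_task(skills: List[Skill], world: Dict, max_retries: int = 2) -> bool:
    """Deliberative layer: execute the planned skill sequence in order,
    retrying a failed (reactive) skill a bounded number of times."""
    for skill in skills:
        for _attempt in range(max_retries + 1):
            if skill.execute(world):
                break  # skill succeeded, move to the next one
        else:
            return False  # skill kept failing; replanning would happen here
    return True


# Example task: pick up a cup. Each stub skill updates a shared world model
# and succeeds only if its preconditions (set by earlier skills) hold.
world = {"localized": False, "at_table": False, "cup_pose": None, "grasped": False}
task = [
    Skill("localize", lambda w: w.update(localized=True) or True),
    Skill("navigate", lambda w: w["localized"] and (w.update(at_table=True) or True)),
    Skill("recognize", lambda w: w["at_table"] and (w.update(cup_pose=(0.4, 0.1, 0.8)) or True)),
    Skill("grasp", lambda w: w["cup_pose"] is not None and (w.update(grasped=True) or True)),
]
print(run_task(task, world))  # prints whether the full sequence succeeded
```

The point of the sketch is the division of labour: the sequencer only decides *which* skill runs next and whether to retry, while each skill encapsulates its own sensing and control, which is what makes individual skills (and their sensory modalities) interchangeable.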
QC 20150421