Publications (10 of 11)
Abbeloos, W., Ataer-Cansizoglu, E., Caccamo, S., Taguchi, Y. & Domae, Y. (2018). 3D object discovery and modeling using single RGB-D images containing multiple object instances. In: Proceedings - 2017 International Conference on 3D Vision, 3DV 2017. Paper presented at 7th IEEE International Conference on 3D Vision, 3DV 2017, Qingdao, China, 10 October 2017 through 12 October 2017 (pp. 431-439). Institute of Electrical and Electronics Engineers (IEEE)
2018 (English) In: Proceedings - 2017 International Conference on 3D Vision, 3DV 2017, Institute of Electrical and Electronics Engineers (IEEE), 2018, p. 431-439. Conference paper, Published paper (Refereed)
Abstract [en]

Unsupervised object modeling is important in robotics, especially for handling a large set of objects. We present a method for unsupervised 3D object discovery, reconstruction, and localization that exploits multiple instances of an identical object contained in a single RGB-D image. The proposed method does not rely on segmentation, scene knowledge, or user input, and is thus easily scalable. Our method aims to find recurrent patterns in a single RGB-D image by utilizing the appearance and geometry of salient regions. We extract keypoints and match them in pairs based on their descriptors. We then generate triplets of keypoints that match each other, using several geometric criteria to minimize false matches. The relative poses of the matched triplets are computed and clustered to discover sets of triplet pairs with similar relative poses. Triplets belonging to the same set are likely to belong to the same object and are used to construct an initial object model. Detecting the remaining instances with the initial object model using RANSAC allows us to further expand and refine the model. The automatically generated object models are both compact and descriptive. We show quantitative and qualitative results on RGB-D images with various objects, including some from the Amazon Picking Challenge. We also demonstrate the use of our method in an object-picking scenario with a robotic arm.
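The matching-and-clustering core of this abstract can be sketched in a few lines. The snippet below is an illustrative simplification, not the paper's implementation: it pairs keypoints by nearest-neighbour descriptor distance with a ratio test, then groups matches whose relative 3D offsets agree, which is how repeated instances of the same object reveal themselves. The function names `match_keypoints` and `cluster_offsets` and all parameter values are hypothetical.

```python
import numpy as np

def match_keypoints(desc, ratio=0.8):
    """Pair each keypoint with its nearest neighbour in descriptor
    space, keeping only matches that pass a ratio test."""
    matches = []
    for i, d in enumerate(desc):
        dists = np.linalg.norm(desc - d, axis=1)
        dists[i] = np.inf                 # ignore self-match
        j, k = np.argsort(dists)[:2]      # best and second-best
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches

def cluster_offsets(points, matches, tol=0.05):
    """Group matches whose relative 3D offsets agree; matches between
    the same pair of object instances share a similar offset."""
    clusters = []
    for i, j in matches:
        off = points[j] - points[i]
        for c in clusters:
            if np.linalg.norm(off - c["mean"]) < tol:
                c["members"].append((i, j))
                c["mean"] += (off - c["mean"]) / len(c["members"])
                break
        else:
            clusters.append({"mean": off.copy(), "members": [(i, j)]})
    return clusters
```

In the paper, matches are further filtered into geometric triplets and full 6-DoF relative poses are clustered; the sketch above only clusters translational offsets to show the idea.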

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018
Keywords
Computer-vision, Discovery, Keypoints, Matching, Pose-estimation, Reconstruction, RGB-D, Robotics, Unsupervised
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-238227 (URN)10.1109/3DV.2017.00056 (DOI)2-s2.0-85048838691 (Scopus ID)9781538626108 (ISBN)
Conference
7th IEEE International Conference on 3D Vision, 3DV 2017, Qingdao, China, 10 October 2017 through 12 October 2017
Note

QC 20181114

Available from: 2018-11-14 Created: 2018-11-14 Last updated: 2018-11-14. Bibliographically approved
Caccamo, S. (2018). Enhancing geometric maps through environmental interactions. (Doctoral dissertation). Stockholm: KTH Royal Institute of Technology
2018 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

The deployment of rescue robots in real operations is becoming increasingly common thanks to recent advances in AI technologies and high-performance hardware. Rescue robots can now operate for extended periods of time, cover wider areas, and process larger amounts of sensory information, making them considerably more useful during real life-threatening situations, including both natural and man-made disasters.

In this thesis we present results of our research which focuses on investigating ways of enhancing visual perception for Unmanned Ground Vehicles (UGVs) through environmental interactions using different sensory systems, such as tactile sensors and wireless receivers.

We argue that a geometric representation of the robot surroundings built upon vision data only, may not suffice in overcoming challenging scenarios, and show that robot interactions with the environment can provide a rich layer of new information that needs to be suitably represented and merged into the cognitive world model. Visual perception for mobile ground vehicles is one of the fundamental problems in rescue robotics. Phenomena such as rain, fog, darkness, dust, smoke and fire heavily influence the performance of visual sensors, and often result in highly noisy data, leading to unreliable or incomplete maps.

We address this problem through a collection of studies and structure the thesis as follows. Firstly, we give an overview of the Search & Rescue (SAR) robotics field and discuss scenarios, hardware, and related scientific questions.

Secondly, we focus on the problems of control and communication. Mobile robots require stable communication with the base station to exchange valuable information. Communication loss often presents a significant mission risk, and disconnected robots either are abandoned or autonomously try to back-trace their way to the base station. We show how non-visual environmental properties (e.g. the WiFi signal distribution) can be efficiently modeled using probabilistic active perception frameworks based on Gaussian Processes and merged into geometric maps so as to facilitate the SAR mission. We then show how to use tactile perception to enhance mapping: implicit environmental properties, such as terrain deformability, are analyzed through strategic glances and touches and then mapped into probabilistic models.

Lastly, we address the problem of reconstructing objects in the environment. We present a technique for simultaneous 3D reconstruction of static regions and rigidly moving objects in a scene that enables on-the-fly model generation. Although this thesis focuses mostly on rescue UGVs, the concepts presented can be applied to other mobile platforms that operate under similar circumstances. To validate the suggested methods, we have put effort into the design of user interfaces and their evaluation in user studies.
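The WiFi-mapping idea mentioned above, modeling a non-visual property with Gaussian Processes and merging it into a map, can be sketched with a minimal GP regressor over 2-D robot positions. This is a generic sketch under assumed kernel and noise parameters, not the thesis implementation; `gp_rss_map`, the kernel length scale, and all numeric values are illustrative.

```python
import numpy as np

def rbf(a, b, length=2.0, sig_var=1.0):
    """Squared-exponential kernel over 2-D robot positions."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return sig_var * np.exp(-0.5 * d2 / length**2)

def gp_rss_map(X, y, Xq, noise=0.1):
    """Posterior mean and variance of radio signal strength (dBm)
    at query positions Xq, given sparse measurements (X, y)."""
    ym = y.mean()                                  # centre the data
    K = rbf(X, X) + noise**2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y - ym))
    Kq = rbf(X, Xq)
    mu = Kq.T @ alpha + ym                         # posterior mean
    v = np.linalg.solve(L, Kq)
    var = rbf(Xq, Xq).diagonal() - (v**2).sum(0)   # posterior variance
    return mu, var
```

The posterior variance is what makes the framework "active": the robot can direct its next measurement toward poorly covered regions of the map.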

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2018. p. 58
Series
TRITA-EECS-AVL ; 2018:26
Keywords
Gaussian Processes, Robotics, UGV, Active perception, Geometric maps
National Category
Engineering and Technology
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-225957 (URN)978-91-7729-720-8 (ISBN)
Public defence
2018-04-18, F3, Lindstedtsvägen 26, Sing-Sing, floor 2, KTH Campus, Stockholm, 10:00 (English)
Opponent
Supervisors
Funder
EU, FP7, Seventh Framework Programme
Note

QC 20180411

Available from: 2018-04-11 Created: 2018-04-11 Last updated: 2018-04-11. Bibliographically approved
Parasuraman, R., Caccamo, S., Båberg, F., Ögren, P. & Neerincx, M. (2017). A New UGV Teleoperation Interface for Improved Awareness of Network Connectivity and Physical Surroundings. Journal of Human-Robot Interaction, 6(3), 48-70
2017 (English) In: Journal of Human-Robot Interaction, E-ISSN 2163-0364, Vol. 6, no. 3, p. 48-70. Article in journal (Refereed), Published
Abstract [en]

A reliable wireless connection between the operator and the teleoperated unmanned ground vehicle (UGV) is critical in many urban search and rescue (USAR) missions. Unfortunately, as seen in, for example, the Fukushima nuclear disaster, the networks available in areas where USAR missions take place are often severely limited in range and coverage. Therefore, during mission execution, the operator needs to keep track of not only the physical parts of the mission, such as navigating through an area or searching for victims, but also the variations in network connectivity across the environment. In this paper, we propose and evaluate a new teleoperation user interface (UI) that estimates the direction of arrival (DoA) of the radio signal strength (RSS) and integrates the DoA information into the interface. The evaluation shows that using the interface results in more objects found and fewer aborted missions due to connectivity problems, compared to a standard interface. The proposed interface extends an existing interface centered on the video stream captured by the UGV: instead of just showing the network signal strength as a percentage and a set of bars, the DoA is added as a color bar surrounding the video feed. With this information, the operator knows which movement directions are safe, even when moving in regions close to the connectivity threshold.

Place, publisher, year, edition, pages
Journal of Human-Robot Interaction, 2017
Keywords
teleoperation, UGV, search and rescue, FLC, network connectivity, user interface
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-223539 (URN)10.5898/JHRI.6.3.Parasuraman (DOI)000424170700004 ()
Funder
EU, FP7, Seventh Framework Programme, FP7-ICT-609763 TRADR
Note

QC 20180222

Available from: 2018-02-22 Created: 2018-02-22 Last updated: 2018-04-11. Bibliographically approved
Abbeloos, W., Caccamo, S., Ataer-Cansizoglu, E., Taguchi, Y., Feng, C. & Lee, T.-Y. (2017). Detecting and Grouping Identical Objects for Region Proposal and Classification. In: 2017 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. Paper presented at 30th IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2017, Honolulu, United States, 21 July 2017 through 26 July 2017 (pp. 501-502). IEEE Computer Society, 2017, Article ID 8014810.
2017 (English) In: 2017 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, IEEE Computer Society, 2017, Vol. 2017, p. 501-502, article id 8014810. Conference paper, Published paper (Refereed)
Abstract [en]

Often multiple instances of an object occur in the same scene, for example in a warehouse. Unsupervised multi-instance object discovery algorithms are able to detect and identify such objects. We use such an algorithm to provide object proposals to a convolutional neural network (CNN) based classifier. This results in fewer regions to evaluate, compared to traditional region proposal algorithms. Additionally, it enables using the joint probability of multiple instances of an object, resulting in improved classification accuracy. The proposed technique can also split a single class into multiple sub-classes corresponding to the different object types, enabling hierarchical classification.

Place, publisher, year, edition, pages
IEEE Computer Society, 2017
Series
IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, ISSN 2160-7508
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-218547 (URN)10.1109/CVPRW.2017.76 (DOI)000426448300070 ()2-s2.0-85030248255 (Scopus ID)9781538607336 (ISBN)
Conference
30th IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2017, Honolulu, United States, 21 July 2017 through 26 July 2017
Note

QC 20171130

Available from: 2017-11-30 Created: 2017-11-30 Last updated: 2018-03-22. Bibliographically approved
Caccamo, S., Parasuraman, R., Freda, L., Gianni, M. & Ögren, P. (2017). RCAMP: A Resilient Communication-Aware Motion Planner for Mobile Robots with Autonomous Repair of Wireless Connectivity. In: Bicchi, A. & Okamura, A. (Eds.), 2017 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS). Paper presented at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), SEP 24-28, 2017, Vancouver, CANADA (pp. 2010-2017). IEEE
2017 (English) In: 2017 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS) / [ed] Bicchi, A., Okamura, A., IEEE, 2017, p. 2010-2017. Conference paper, Published paper (Refereed)
Abstract [en]

Mobile robots, be they autonomous or teleoperated, require stable communication with the base station to exchange valuable information. Given the stochastic elements in radio signal propagation, such as shadowing and fading, and the possibilities of unpredictable events or hardware failures, communication loss often presents a significant mission risk, both in terms of probability and impact, especially in Urban Search and Rescue (USAR) operations. Depending on the circumstances, disconnected robots are either abandoned, or attempt to autonomously back-trace their way to the base station. Although recent results in Communication-Aware Motion Planning can be used to effectively manage connectivity with robots, there are no results focusing on autonomously re-establishing the wireless connectivity of a mobile robot without back-tracing or using detailed a priori information of the network. In this paper, we present a robust and online radio signal mapping method using Gaussian Random Fields, and propose a Resilient Communication-Aware Motion Planner (RCAMP) that integrates the above signal mapping framework with a motion planner. RCAMP considers both the environment and the physical constraints of the robot, based on the available sensory information. We also propose a self-repair strategy using RCAMP that takes both connectivity and the goal position into account when driving to a connection-safe position in the event of a communication loss. We demonstrate the proposed planner in a set of realistic simulations of an exploration task in single or multi-channel communication scenarios.
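The self-repair idea, driving to a connection-safe position instead of blindly back-tracing, can be illustrated with a toy goal selector over a learned signal map. This is not RCAMP itself; `connection_safe_goal`, the dBm threshold, and the candidate set are assumptions for illustration, and the real planner also accounts for the environment and robot constraints.

```python
import numpy as np

def connection_safe_goal(robot_xy, candidates, predict_rss,
                         rss_safe=-70.0):
    """On communication loss, choose the closest candidate waypoint
    whose predicted signal strength (dBm, from e.g. a Gaussian
    Random Field model) exceeds a safety threshold."""
    best, best_d = None, np.inf
    for c in candidates:
        if predict_rss(c) >= rss_safe:
            d = np.linalg.norm(np.asarray(c) - np.asarray(robot_xy))
            if d < best_d:
                best, best_d = c, d
    return best     # None if no candidate is connection-safe
```

In a full system the predicted map is updated online, so the chosen goal reflects the current belief about coverage rather than the traversed path.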

Place, publisher, year, edition, pages
IEEE, 2017
Series
IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
Keywords
Mobile Robots, Self-Repair, Wireless Communication, Communication-Aware Motion Planning
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-225803 (URN)10.1109/IROS.2017.8206020 (DOI)000426978202045 ()2-s2.0-85041962473 (Scopus ID)978-1-5386-2682-5 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), SEP 24-28, 2017, Vancouver, CANADA
Note

QC 20180409

Available from: 2018-04-09 Created: 2018-04-09 Last updated: 2019-04-09. Bibliographically approved
Almeida, D., Ambrus, R., Caccamo, S., Chen, X., Cruciani, S., Pinto Basto De Carvalho, J. F., . . . Kragic, D. (2017). Team KTH’s Picking Solution for the Amazon Picking Challenge 2016. In: Warehouse Picking Automation Workshop 2017: Solutions, Experience, Learnings and Outlook of the Amazon Robotics Challenge. Paper presented at ICRA 2017.
2017 (English) In: Warehouse Picking Automation Workshop 2017: Solutions, Experience, Learnings and Outlook of the Amazon Robotics Challenge, 2017. Conference paper, Oral presentation only (Other (popular science, discussion, etc.))
Abstract [en]

In this work we summarize the solution developed by Team KTH for the Amazon Picking Challenge 2016 in Leipzig, Germany. The competition simulated a warehouse automation scenario and was divided into two tasks: a picking task, where a robot picks items from a shelf and places them in a tote, and a stowing task, the inverse, where the robot picks items from a tote and places them in a shelf. We describe our approach to the problem, starting from a high-level overview of our system and later delving into details of our perception pipeline and our strategy for manipulation and grasping. The solution was implemented using a Baxter robot equipped with additional sensors.

National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-215327 (URN)
Conference
ICRA 2017
Note

QC 20171009

Available from: 2017-10-07 Created: 2017-10-07 Last updated: 2018-05-24. Bibliographically approved
Caccamo, S., Bekiroglu, Y., Ek, C. H. & Kragic, D. (2016). Active Exploration Using Gaussian Random Fields and Gaussian Process Implicit Surfaces. In: 2016 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2016). Paper presented at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), OCT 09-14, 2016, Daejeon, SOUTH KOREA (pp. 582-589). Institute of Electrical and Electronics Engineers (IEEE)
2016 (English) In: 2016 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2016), Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 582-589. Conference paper, Published paper (Refereed)
Abstract [en]

In this work we study the problem of exploring surfaces and building compact 3D representations of the environment surrounding a robot through active perception. We propose an online probabilistic framework that merges visual and tactile measurements using Gaussian Random Fields and Gaussian Process Implicit Surfaces. The system investigates incomplete point clouds in order to find a small set of regions of interest, which are then physically explored with a robotic arm equipped with tactile sensors. We show experimental results obtained using a PrimeSense camera, a Kinova Jaco2 robotic arm and Optoforce sensors in different scenarios. We then demonstrate how to use the online framework for object detection and terrain classification.
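The active-exploration loop, touching where the model is least certain, can be sketched as follows. The uncertainty score below is a simple RBF-decay stand-in for the Gaussian Random Field posterior variance used in the paper; `next_touch` and its parameters are illustrative.

```python
import numpy as np

def next_touch(candidates, touched, length=0.1):
    """Pick the candidate surface point where the model is most
    uncertain. Uncertainty is approximated by a score that decays
    near already-touched points (a stand-in for the full Gaussian
    Random Field posterior variance)."""
    candidates = np.asarray(candidates, float)
    touched = np.asarray(touched, float)
    # squared distance from every candidate to every touched point
    d2 = ((candidates[:, None, :] - touched[None, :, :]) ** 2).sum(-1)
    var = 1.0 - np.exp(-0.5 * d2 / length**2).max(axis=1)
    return candidates[np.argmax(var)], var
```

Each physical touch is then fed back into the surface model, so the next iteration targets a different, still-uncertain region.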

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2016
Keywords
Active perception, Surface reconstruction, Gaussian process, Implicit surface, Random field, Tactile exploration
National Category
Robotics
Identifiers
urn:nbn:se:kth:diva-202672 (URN)10.1109/IROS.2016.7759112 (DOI)000391921700086 ()2-s2.0-85006371409 (Scopus ID)978-1-5090-3762-9 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), OCT 09-14, 2016, Daejeon, SOUTH KOREA
Note

QC 20170306

Available from: 2017-03-06 Created: 2017-03-06 Last updated: 2018-04-11. Bibliographically approved
Caccamo, S., Güler, P., Kjellström, H. & Kragic, D. (2016). Active perception and modeling of deformable surfaces using Gaussian processes and position-based dynamics. In: IEEE-RAS International Conference on Humanoid Robots. Paper presented at 16th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2016, 15 November 2016 through 17 November 2016 (pp. 530-537). IEEE
2016 (English) In: IEEE-RAS International Conference on Humanoid Robots, IEEE, 2016, p. 530-537. Conference paper, Published paper (Refereed)
Abstract [en]

Exploring and modeling heterogeneous elastic surfaces requires multiple interactions with the environment and a complex selection of physical material parameters. The most common approaches model deformable properties from sets of offline observations using computationally expensive force-based simulators. In this work we present an online probabilistic framework for autonomous estimation of a deformability distribution map of heterogeneous elastic surfaces from a few physical interactions. The method takes advantage of Gaussian Processes for constructing a model of the environment geometry surrounding a robot. A fast Position-based Dynamics simulator uses focused environmental observations in order to model the elastic behavior of portions of the environment. Gaussian Process Regression maps the local deformability onto the whole environment in order to generate a deformability distribution map. We show experimental results using a PrimeSense camera, a Kinova Jaco2 robotic arm and an Optoforce sensor on different deformable surfaces.
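The link between stiffness and observed displacement that the method exploits can be shown with a one-particle toy version of position-based dynamics. This 1-D sketch (hypothetical `settle_depth`, assumed force and time-step values) is far simpler than the paper's PBD simulator, but it exhibits the same signal: softer material yields a deeper settled displacement under the same probing force.

```python
def settle_depth(stiffness, force=1.0, dt=0.05, steps=400):
    """One surface particle tethered by a single PBD-style constraint
    to its rest position; an external probing force pushes it down,
    and the settled displacement reflects compliance."""
    p, v = 0.0, 0.0
    for _ in range(steps):
        v -= force * dt               # external push (probing finger)
        q = p + v * dt                # predicted position
        q += stiffness * (0.0 - q)    # project toward rest position 0
        v = (q - p) / dt              # PBD velocity update
        p = q
    return -p                         # penetration depth (positive)
```

Inverting this relation, observing the depth and solving for the stiffness that reproduces it, is conceptually what fitting a local deformability parameter amounts to.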

Place, publisher, year, edition, pages
IEEE, 2016
Keywords
Active perception, Deformability modeling, Gaussian process, Position-based dynamics, Tactile exploration, Anthropomorphic robots, Deformation, Dynamics, Gaussian noise (electronic), Probability distributions, Robots, Active perceptions, Environmental observation, Gaussian process regression, Gaussian Processes, Multiple interactions, Physical interactions, Probabilistic framework, Gaussian distribution
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-202842 (URN)10.1109/HUMANOIDS.2016.7803326 (DOI)000403009300081 ()2-s2.0-85010190205 (Scopus ID)9781509047185 (ISBN)
Conference
16th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2016, 15 November 2016 through 17 November 2016
Note

QC 20170317

Available from: 2017-03-17 Created: 2017-03-17 Last updated: 2018-04-11. Bibliographically approved
Båberg, F., Wang, Y., Caccamo, S. & Ögren, P. (2016). Adaptive object centered teleoperation control of a mobile manipulator. In: 2016 IEEE International Conference on Robotics and Automation (ICRA). Paper presented at 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, May 16-21, 2016 (pp. 455-461). Institute of Electrical and Electronics Engineers (IEEE)
2016 (English) In: 2016 IEEE International Conference on Robotics and Automation (ICRA), Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 455-461. Conference paper, Published paper (Refereed)
Abstract [en]

Teleoperation of a mobile robot manipulating and exploring an object shares many similarities with the manipulation of virtual objects in 3D design software such as AutoCAD. The user interfaces are, however, quite different, mainly for historical reasons. In this paper we aim to change that, and draw inspiration from the 3D design community to propose a teleoperation interface control mode that is identical to the ones used to locally navigate the virtual viewpoint in most Computer Aided Design (CAD) software.

The proposed mobile manipulator control framework thus allows the user to focus on the 3D objects being manipulated, using control modes such as orbit object and pan object, supported by data from the wrist-mounted RGB-D sensor. The gripper of the robot performs the desired motions relative to the object, while the manipulator arm and base move in a way that realizes the desired gripper motions. The system redundancies are exploited in order to take additional constraints, such as obstacle avoidance, into account, using a constraint-based programming framework.
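An "orbit object" control mode like the one described keeps the viewpoint at a fixed distance from the object while yaw/pitch inputs move it on a sphere around the object centre. Below is a minimal geometric sketch (hypothetical `orbit` helper, plain spherical coordinates, no arm kinematics or obstacle constraints, which the paper handles through constraint-based programming).

```python
import numpy as np

def orbit(cam_pos, target, d_yaw, d_pitch):
    """Rotate a viewpoint (e.g. the wrist sensor) around the object
    centre while preserving the orbit radius, as in CAD viewpoint
    navigation."""
    t = np.asarray(target, float)
    r = np.asarray(cam_pos, float) - t
    rad = np.linalg.norm(r)
    yaw = np.arctan2(r[1], r[0]) + d_yaw
    pitch = np.arcsin(np.clip(r[2] / rad, -1.0, 1.0)) + d_pitch
    pitch = np.clip(pitch, -1.5, 1.5)     # keep away from the poles
    return t + rad * np.array([np.cos(pitch) * np.cos(yaw),
                               np.cos(pitch) * np.sin(yaw),
                               np.sin(pitch)])
```

In a real controller this desired viewpoint pose would be handed to the whole-body motion layer, which moves arm and base to realize it.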

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2016
Series
Proceedings - IEEE International Conference on Robotics and Automation, ISSN 1050-4729
Keywords
virtual object, mobile manipulation, teleoperation
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-182902 (URN)10.1109/ICRA.2016.7487166 (DOI)000389516200057 ()2-s2.0-84977527389 (Scopus ID)9781467380263 (ISBN)
Conference
2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, May 16-21, 2016
Projects
TRADR
Funder
EU, FP7, Seventh Framework Programme, FP7-ICT-609763 TRADR
Note

QC 20160829

Available from: 2016-02-24 Created: 2016-02-24 Last updated: 2017-01-19. Bibliographically approved
Båberg, F., Caccamo, S., Smets, N., Neerincx, M. & Ögren, P. (2016). Free Look UGV Teleoperation Control Tested in Game Environment: Enhanced Performance and Reduced Workload. In: International Symposium on Safety, Security and Rescue Robotics. Paper presented at International Symposium on Safety, Security and Rescue Robotics, Lausanne, October 23-27th, 2016.
2016 (English) In: International Symposium on Safety, Security and Rescue Robotics, 2016. Conference paper, Published paper (Refereed)
Abstract [en]

Concurrent telecontrol of the chassis and camera of an Unmanned Ground Vehicle (UGV) is a demanding task for Urban Search and Rescue (USAR) teams. The standard way of controlling UGVs is called Tank Control (TC), but there is reason to believe that Free Look Control (FLC), a control mode used in games, could reduce this load substantially by decoupling, and providing separate controls for, camera translation and rotation. The general hypothesis is that FLC (1) reduces robot operators' workload and (2) enhances their performance for dynamic and time-critical USAR scenarios. A game-based environment was set up to systematically compare FLC with TC in two typical search and rescue tasks: navigation and exploration. The results show that FLC improves mission performance in both exploration (search) and path following (navigation) scenarios. In the former, more objects were found, and in the latter shorter navigation times were achieved. FLC also caused lower workload and stress levels in both scenarios, without inducing a significant difference in the number of collisions. Finally, FLC was preferred by 75% of the subjects for exploration, and 56% for path following.

Keywords
Teleoperation, UGV, Search and Rescue, First Response, Disaster Response, FPS, Computer Game
National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-192941 (URN)10.1109/SSRR.2016.7784321 (DOI)000391310800053 ()2-s2.0-85009804146 (Scopus ID)
Conference
International Symposium on Safety, Security and Rescue Robotics, Lausanne, October 23-27th, 2016
Projects
TRADR
Funder
EU, FP7, Seventh Framework Programme, FP7-ICT-609763
Note

QC 20161212

Available from: 2016-09-26 Created: 2016-09-23 Last updated: 2018-04-11Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-6716-1111
