Pieropan, Alessandro (ORCID iD: orcid.org/0000-0003-2314-2880)
Publications (10 of 12)
Hang, K., Vina, F., Colledanchise, M., Pauwels, K., Pieropan, A. & Kragic, D. (2020). Team CVAP’s Mobile Picking System at the Amazon Picking Challenge 2015. In: Advances on Robotic Item Picking: Applications in Warehousing and E-Commerce Fulfillment (pp. 1-12). Springer Nature
2020 (English) In: Advances on Robotic Item Picking: Applications in Warehousing and E-Commerce Fulfillment, Springer Nature, 2020, p. 1-12. Chapter in book (Other academic)
Abstract [en]

In this paper we present the system we developed for the Amazon Picking Challenge 2015, and discuss some of the lessons learned that may prove useful to researchers and to future teams developing autonomous robot picking systems. For the competition we used a PR2 robot, a dual-arm research platform equipped with a mobile base and a variety of 2D and 3D sensors. We adopted a behavior tree to model the overall task execution, coordinating the different perception, localization, navigation, and manipulation activities of the system in a modular fashion. Our perception system detects and localizes the target objects in the shelf and consists of two components: one for detecting textured rigid objects using the SimTrack vision system, and one for detecting non-textured or non-rigid objects using RGB-D features. In addition, we designed a set of grasping strategies to enable the robot to reach and grasp objects inside the confined volume of the shelf bins. The competition was a unique opportunity to integrate the work of various researchers at the Robotics, Perception and Learning laboratory (formerly the Computer Vision and Active Perception Laboratory, CVAP) of KTH; it tested the performance of our robotic system and helped define the future direction of our research.
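The behavior-tree coordination described in the abstract can be illustrated with a minimal sketch. The control-flow nodes (Sequence, Fallback) follow standard behavior-tree semantics; the leaf actions named in the comments (navigation, the two detectors, grasping) are hypothetical stubs, not Team CVAP's actual modules.

```python
# Minimal behavior-tree sketch of modular task coordination.
# Leaf actions are hypothetical stubs, not the team's real modules.

SUCCESS, FAILURE, RUNNING = "SUCCESS", "FAILURE", "RUNNING"

class Sequence:
    """Tick children in order; fail as soon as one child fails."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != SUCCESS:
                return status
        return SUCCESS

class Fallback:
    """Tick children in order; succeed as soon as one child succeeds."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != FAILURE:
                return status
        return FAILURE

class Action:
    """Leaf node wrapping a callable that returns a status string."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return self.fn()

# Hypothetical picking task: navigate, then try the textured-object
# detector and fall back to the RGB-D detector, then grasp.
pick_item = Sequence([
    Action(lambda: SUCCESS),      # navigate_to_shelf (stub)
    Fallback([
        Action(lambda: FAILURE),  # detect_textured_object (stub)
        Action(lambda: SUCCESS),  # detect_with_rgbd_features (stub)
    ]),
    Action(lambda: SUCCESS),      # grasp_object (stub)
])
print(pick_item.tick())  # SUCCESS: the fallback recovered from the failed detector
```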

Place, publisher, year, edition, pages
Springer Nature, 2020
Keywords
Autonomous picking system, Behavior trees, Dual arm robot, Mobile picking, MoveIt, Parallel gripper, PR2 robot, SIFT, Texture-based tracking, Volumetric reasoning
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-331965 (URN), 10.1007/978-3-030-35679-8_1 (DOI), 2-s2.0-85149591750 (Scopus ID)
Note

Part of ISBN: 9783030356798, 9783030356781

QC 20230714

Available from: 2023-07-17. Created: 2023-07-17. Last updated: 2025-02-09. Bibliographically approved
Pieropan, A., Bergström, N., Ishikawa, M. & Kjellström, H. (2016). Robust and adaptive keypoint-based object tracking. Advanced Robotics, 30(4), 258-269
2016 (English) In: Advanced Robotics, ISSN 0169-1864, E-ISSN 1568-5535, Vol. 30, no. 4, p. 258-269. Article in journal (Refereed). Published
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-269112 (URN), 10.1080/01691864.2015.1129360 (DOI), 000372182900003 (ISI), 2-s2.0-84960959205 (Scopus ID)
Note

QC 20200625

Available from: 2020-03-04. Created: 2020-03-04. Last updated: 2022-09-23. Bibliographically approved
Pieropan, A., Bergström, N., Ishikawa, M., Kragic, D. & Kjellström, H. (2016). Robust tracking of unknown objects through adaptive size estimation and appearance learning. In: Proceedings - IEEE International Conference on Robotics and Automation: . Paper presented at 2016 IEEE International Conference on Robotics and Automation, ICRA 2016, 16 May 2016 through 21 May 2016 (pp. 559-566). IEEE conference proceedings
2016 (English) In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2016, p. 559-566. Conference paper, Published paper (Refereed)
Abstract [en]

This work employs an adaptive learning mechanism to track an unknown object using RGB-D cameras. We extend our previous framework to robustly track a wider range of arbitrarily shaped objects by adapting the model to the measured object size. The size is estimated as the object undergoes motion, by fitting an inscribed cuboid to the measurements. The region spanned by this cuboid is used during tracking to determine whether or not new measurements should be added to the object model. In our experiments we test the tracker on a set of arbitrarily shaped objects and show that the model's ability to adapt to the object shape leads to more robust tracking results.
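A minimal sketch of the size-gating idea: estimate a bounding cuboid from the current measurements and accept only new measurements that fall inside it. For simplicity the cuboid here is axis-aligned and fitted with trimmed percentiles; the paper's inscribed-cuboid fit is more involved, and all names below are hypothetical.

```python
import numpy as np

def fit_cuboid(points, trim=0.05):
    """Fit an axis-aligned cuboid to 3-D points, trimming the outer
    percentiles so stray background points do not inflate the box.
    A simplified, hypothetical stand-in for the inscribed-cuboid fit."""
    lo = np.quantile(points, trim, axis=0)
    hi = np.quantile(points, 1.0 - trim, axis=0)
    return lo, hi

def gate_measurements(points, lo, hi, margin=0.01):
    """Boolean mask over new measurements: only points inside the
    (slightly inflated) cuboid may be added to the object model."""
    return np.all((points >= lo - margin) & (points <= hi + margin), axis=1)

# Toy usage: a ~10 cm object and a batch of mixed object/background points.
rng = np.random.default_rng(0)
object_pts = rng.uniform(-0.05, 0.05, size=(500, 3))
lo, hi = fit_cuboid(object_pts)
new_pts = rng.uniform(-0.2, 0.2, size=(100, 3))
keep = gate_measurements(new_pts, lo, hi)
print(f"accepted {keep.sum()} of {len(new_pts)} new measurements")
```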

Place, publisher, year, edition, pages
IEEE conference proceedings, 2016
Keywords
Adaptive learning mechanism, Appearance learning, Arbitrary shape, Object model, RGB-D cameras, Robust tracking, Size estimation, Unknown objects, Robotics
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-197233 (URN), 10.1109/ICRA.2016.7487179 (DOI), 000389516200070 (ISI), 2-s2.0-84977519696 (Scopus ID), 9781467380263 (ISBN)
Conference
2016 IEEE International Conference on Robotics and Automation, ICRA 2016, 16 May 2016 through 21 May 2016
Note

QC 20161207

Available from: 2016-12-07. Created: 2016-11-30. Last updated: 2025-02-07. Bibliographically approved
Pieropan, A. (2015). Action Recognition for Robot Learning. (Doctoral dissertation). Stockholm: KTH Royal Institute of Technology
2015 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

This thesis builds on the observation that robots cannot be programmed to handle any possible situation in the world. Like humans, they need mechanisms to deal with previously unseen situations and unknown objects. One of the skills humans rely on to deal with the unknown is the ability to learn by observing others. This thesis addresses the challenge of enabling a robot to learn from a human instructor. In particular, it focuses on objects: How can a robot find previously unseen objects? How can it track an object with its gaze? How can objects be employed in activities? Throughout this thesis, these questions are addressed with the end goal of allowing a robot to observe a human instructor and learn how to perform an activity. The robot is assumed to know very little about the world and is supposed to discover objects autonomously. Given visual input, object hypotheses are formulated by leveraging common contextual knowledge often used by humans (e.g., gravity, compactness, convexity). Moreover, unknown objects are tracked and their appearance is updated over time, since initially only a small fraction of the object is visible to the robot. Finally, object functionality is inferred by observing how the human instructor manipulates objects and how objects are used in relation to others. All the methods included in this thesis have been evaluated on datasets that are publicly available or that we collected, showing the importance of these learning abilities.
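One of the contextual cues named above, convexity, can be sketched as a simple "objectness" score over an image segment: the ratio of the segment's pixel area to the area of its convex hull. This is only an illustrative, hypothetical formulation, not the thesis's actual hypothesis-generation machinery.

```python
import numpy as np
from scipy.spatial import ConvexHull

def convexity(mask):
    """Pixel area of a segment divided by the area of its convex hull.
    Roughly 1 for convex, blob-like regions (discretization can push it
    slightly above 1); low for scattered or highly concave regions.
    A hypothetical stand-in for the convexity cue."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(float)
    hull_area = ConvexHull(pts).volume  # for 2-D input, .volume is the area
    return len(pts) / max(hull_area, 1.0)

# Toy check: a filled square scores near 1.
mask = np.zeros((50, 50), dtype=bool)
mask[10:40, 10:40] = True
print(round(convexity(mask), 2))
```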

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2015. p. v, 38
Series
TRITA-CSC-A, ISSN 1653-5723 ; 2015:09
National Category
Computer graphics and computer vision
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-165680 (URN)
Public defence
2015-05-21, F3, Lindstedtsvägen 26, KTH, Stockholm, 10:00 (English)
Note

QC 20150504

Available from: 2015-05-04. Created: 2015-04-29. Last updated: 2025-02-07. Bibliographically approved
Güler, R., Pauwels, K., Pieropan, A., Kjellström, H. & Kragic, D. (2015). Estimating the Deformability of Elastic Materials using Optical Flow and Position-based Dynamics. In: Humanoid Robots (Humanoids), 2015 IEEE-RAS 15th International Conference on: . Paper presented at IEEE-RAS International Conference on Humanoid Robots, November 3-5, KIST, Seoul, Korea (pp. 965-971). IEEE conference proceedings
2015 (English) In: Humanoid Robots (Humanoids), 2015 IEEE-RAS 15th International Conference on, IEEE conference proceedings, 2015, p. 965-971. Conference paper, Published paper (Refereed)
Abstract [en]

Knowledge of the physical properties of objects is essential in a wide range of robotic manipulation scenarios. A robot may not always be aware of such properties prior to interaction. If an object is incorrectly assumed to be rigid, it may exhibit unpredictable behavior when grasped. In this paper, we use vision-based observations of the behavior of an object the robot is interacting with as the basis for estimating its elastic deformability. The deformability is estimated in a local region around the interaction point using a physics simulator. We use optical flow to estimate the parameters of a position-based dynamics simulation based on meshless shape matching (MSM). MSM has been widely used in computer graphics due to its computational efficiency, which is also important for closed-loop control in robotics. In a controlled experiment we demonstrate that our method can qualitatively estimate the physical properties of objects with different degrees of deformability.
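The core of meshless shape matching is a position update that pulls the tracked points toward the best rigid fit of the rest shape, with a stiffness-like parameter controlling how strongly. The sketch below shows only this update (the paper's contribution is estimating the deformability by matching the simulation to optical flow); the function name and the choice of alpha as the tunable parameter are illustrative assumptions.

```python
import numpy as np

def msm_step(x0, x, alpha=0.5):
    """One meshless shape-matching update (in the spirit of Mueller et
    al.'s position-based dynamics): find the rotation that best aligns
    the rest shape x0 with the current points x, then move each point a
    fraction alpha toward its rigid goal position. alpha ~ 1 behaves
    rigidly; small alpha lets the cloud deform. Illustrative sketch only.

    x0, x: (n, 3) arrays of rest and current point positions."""
    c0, c = x0.mean(axis=0), x.mean(axis=0)
    P, Q = x - c, x0 - c0                  # centered current / rest shapes
    A = P.T @ Q                            # shape covariance, sum of p q^T
    U, _, Vt = np.linalg.svd(A)
    R = U @ Vt                             # rotation from polar decomposition
    if np.linalg.det(R) < 0:               # guard against reflections
        U[:, -1] *= -1
        R = U @ Vt
    goals = (R @ Q.T).T + c                # rigid goal positions
    return x + alpha * (goals - x)         # blend toward the rigid fit
```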

Place, publisher, year, edition, pages
IEEE conference proceedings, 2015
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-175162 (URN), 10.1109/HUMANOIDS.2015.7363486 (DOI), 000377954900145 (ISI), 2-s2.0-84962249847 (Scopus ID)
Conference
IEEE-RAS International Conference on Humanoid Robots, November 3-5, KIST, Seoul, Korea
Note

QC 20160217

Available from: 2015-10-09. Created: 2015-10-09. Last updated: 2025-02-07. Bibliographically approved
Pieropan, A., Bergström, N., Ishikawa, M. & Kjellström, H. (2015). Robust 3D tracking of unknown objects. In: Proceedings - IEEE International Conference on Robotics and Automation: . Paper presented at 2015 IEEE International Conference on Robotics and Automation, ICRA 2015, 26 May 2015 through 30 May 2015 (pp. 2410-2417). IEEE conference proceedings (June)
2015 (English) In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2015, no. June, p. 2410-2417. Conference paper, Published paper (Refereed)
Abstract [en]

Visual tracking of unknown objects is an essential task in robotic perception, of importance to a wide range of applications. In the general scenario, the robot has no full 3D model of the object beforehand, only the partial view of the object visible in the first video frame. A tracker with only this information will inevitably lose track of the object after occlusions or large out-of-plane rotations. The way to overcome this is to incrementally learn the appearances of new views of the object. However, this bootstrapping approach is prone to drift due to occasional inclusion of the background into the model. In this paper we propose a method that exploits 3D point coherence between views to avoid learning the background, by learning appearances only at the faces of an inscribed cuboid. This is closely related to the popular idea of 2D object tracking using bounding boxes, with the additional benefits of recovering the full 3D pose of the object and learning its full appearance from all viewpoints. We show quantitatively that using an inscribed cuboid to guide the learning leads to significantly more robust tracking than other state-of-the-art methods. We show that our tracker is able to cope with 360-degree out-of-plane rotation, large occlusion, and fast motion.
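Learning appearance "at the faces of a cuboid" can be sketched by assigning each 3D measurement to the nearest face of the current box and keeping per-face appearance statistics. The axis-aligned box and the function below are hypothetical simplifications of the paper's inscribed cuboid, which lives in the object's own frame.

```python
import numpy as np

def nearest_face(points, lo, hi):
    """Assign each 3-D point inside an axis-aligned cuboid [lo, hi] to
    the index (0..5) of its nearest face, so appearance models (e.g.
    color histograms) can be maintained per face. Hypothetical,
    axis-aligned simplification of the inscribed-cuboid idea."""
    # Distances to the six faces: x=lo, y=lo, z=lo, then x=hi, y=hi, z=hi.
    d = np.concatenate([points - lo, hi - points], axis=1)  # (n, 6)
    return np.argmin(d, axis=1)

# Toy usage: bucket points on a 10 cm cube by face before updating
# that face's appearance model.
rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 0.1, size=(200, 3))
faces = nearest_face(pts, np.zeros(3), np.full(3, 0.1))
print(np.bincount(faces, minlength=6))  # points assigned to each face
```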

Place, publisher, year, edition, pages
IEEE conference proceedings, 2015
Keywords
Tracking (position), Fast motions, Large occlusion, Out-of-plane rotation, Partial views, Robust tracking, State-of-the-art methods, Unknown objects, Visual Tracking, Robotics
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-176136 (URN), 10.1109/ICRA.2015.7139520 (DOI), 000370974902060 (ISI), 2-s2.0-84938249572 (Scopus ID)
Conference
2015 IEEE International Conference on Robotics and Automation, ICRA 2015, 26 May 2015 through 30 May 2015
Note

QC 20151202. QC 20160411

Available from: 2015-12-02. Created: 2015-11-02. Last updated: 2022-06-23. Bibliographically approved
Pieropan, A., Salvi, G., Pauwels, K. & Kjellström, H. (2014). Audio-Visual Classification and Detection of Human Manipulation Actions. In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014): . Paper presented at 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2014, Palmer House Hilton Hotel Chicago, United States, 14 September 2014 through 18 September 2014 (pp. 3045-3052). IEEE conference proceedings
2014 (English) In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014), IEEE conference proceedings, 2014, p. 3045-3052. Conference paper, Published paper (Refereed)
Abstract [en]

Humans are able to merge information from multiple perceptual modalities and formulate a coherent representation of the world. Our thesis is that robots need to do the same in order to operate robustly and autonomously in an unstructured environment. It has also been shown in several fields that multiple sources of information can complement each other, overcoming the limitations of a single perceptual modality. Hence, in this paper we introduce a dataset of actions that includes both visual data (RGB-D video and 6-DOF object pose estimates) and acoustic data. We also propose a method for recognizing and segmenting actions from continuous audio-visual data. The proposed method is employed for an extensive evaluation of the descriptive power of the two modalities, and we discuss how they can be used jointly to infer a coherent interpretation of the recorded action.
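One generic way to combine the two modalities, shown below, is late fusion of per-modality class posteriors with a weighted log-linear rule; this is a standard technique offered for illustration, not necessarily the paper's fusion scheme.

```python
import numpy as np

def late_fusion(p_audio, p_video, w=0.5, eps=1e-9):
    """Weighted log-linear (product-of-experts) fusion of per-modality
    class posteriors; w trades off audio against video. A generic
    illustration, not necessarily the paper's scheme."""
    log_p = w * np.log(p_audio + eps) + (1 - w) * np.log(p_video + eps)
    p = np.exp(log_p - log_p.max())  # stabilize, then renormalize
    return p / p.sum()

# Toy example over three action classes: both modalities lean toward
# class 1, so the fused posterior is even more confident in it.
p_a = np.array([0.10, 0.80, 0.10])
p_v = np.array([0.20, 0.50, 0.30])
print(late_fusion(p_a, p_v))
```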

Place, publisher, year, edition, pages
IEEE conference proceedings, 2014
Series
IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
Keywords
Acoustic data, Audio-visual, Audio-visual data, Coherent representations, Human manipulation, Multiple source, Unstructured environments, Visual data
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-158004 (URN), 10.1109/IROS.2014.6942983 (DOI), 000349834603023 (ISI), 2-s2.0-84911478073 (Scopus ID), 978-1-4799-6934-0 (ISBN)
Conference
2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2014, Palmer House Hilton Hotel Chicago, United States, 14 September 2014 through 18 September 2014
Note

QC 20150122

Available from: 2014-12-18. Created: 2014-12-18. Last updated: 2025-02-07. Bibliographically approved
Pieropan, A., Ek, C. H. & Kjellström, H. (2014). Recognizing Object Affordances in Terms of Spatio-Temporal Object-Object Relationships. In: Humanoid Robots (Humanoids), 2014 14th IEEE-RAS International Conference on: . Paper presented at International Conference on Humanoid Robots, November 18-20, 2014, Madrid, Spain (pp. 52-58). IEEE conference proceedings
2014 (English) In: Humanoid Robots (Humanoids), 2014 14th IEEE-RAS International Conference on, IEEE conference proceedings, 2014, p. 52-58. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper we describe a probabilistic framework that models the interaction between multiple objects in a scene. We present a spatio-temporal feature encoding the pairwise interactions between the objects in the scene. Using a kernel representation, we embed object interactions in a vector space, which allows us to define a metric for comparing interactions of different temporal extent. Using this metric, we define a probabilistic model that allows us to represent and extract the affordances of individual objects based on the structure of their interactions. In this paper we focus on the presented pairwise relationships, but the model can naturally be extended to incorporate additional cues related to a single object or multiple objects. We compare our approach with traditional kernel approaches and show a significant improvement.
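The sketch below illustrates the general idea of comparing interactions of different temporal extent: encode each pairwise interaction as an object-object distance curve resampled to a fixed length, then apply an RBF kernel. The feature choice and names are hypothetical; the paper's actual feature and kernel are richer.

```python
import numpy as np

def interaction_feature(traj_a, traj_b, n_samples=32):
    """Encode a pairwise interaction as the object-object distance curve,
    resampled to a fixed length so interactions of different durations
    live in the same vector space. Hypothetical stand-in feature.

    traj_a, traj_b: (T, 3) object trajectories of equal length T."""
    d = np.linalg.norm(traj_a - traj_b, axis=1)
    t_old = np.linspace(0.0, 1.0, len(d))
    t_new = np.linspace(0.0, 1.0, n_samples)
    return np.interp(t_new, t_old, d)

def rbf_kernel(f1, f2, gamma=1.0):
    """RBF kernel on the fixed-length features; the induced metric lets
    us compare interactions of different temporal extent."""
    return np.exp(-gamma * np.sum((f1 - f2) ** 2))

# Toy usage: a moving 'cup' interacting with a static 'pitcher'.
t = np.linspace(0.0, 1.0, 100)[:, None]
cup = np.hstack([t, 0 * t, 0 * t])
pitcher = np.zeros((100, 3))
f = interaction_feature(cup, pitcher)
print(rbf_kernel(f, f))  # 1.0 for identical interactions
```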

Place, publisher, year, edition, pages
IEEE conference proceedings, 2014
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-158008 (URN), 10.1109/HUMANOIDS.2014.7041337 (DOI), 000392843800010 (ISI), 2-s2.0-84945185392 (Scopus ID)
Conference
International Conference on Humanoid Robots, November 18-20, 2014, Madrid, Spain
Note

QC 20141223

Available from: 2014-12-18. Created: 2014-12-18. Last updated: 2025-02-07. Bibliographically approved
Pieropan, A., Bergström, N., Kjellström, H. & Ishikawa, M. (2014). Robust Tracking through Learning. In: 32nd Annual Conference of the Robotics Society of Japan, 2014: . Paper presented at The 32nd Annual Conference of the RSJ, September 4-6, 2014, Japan.
2014 (English) In: 32nd Annual Conference of the Robotics Society of Japan, 2014, 2014. Conference paper, Published paper (Refereed)
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-158009 (URN)
Conference
The 32nd Annual Conference of the RSJ, September 4-6, 2014, Japan
Note

QC 20150206

Available from: 2014-12-18. Created: 2014-12-18. Last updated: 2025-02-07. Bibliographically approved
Pieropan, A. & Kjellström, H. (2014). Unsupervised object exploration using context. In: The 23rd IEEE International Symposium on Robot and Human Interactive Communication, 2014 RO-MAN: . Paper presented at International Symposium on Robot and Human Interactive Communication, August 25-29, Edinburgh, Scotland, UK. IEEE conference proceedings
2014 (English) In: The 23rd IEEE International Symposium on Robot and Human Interactive Communication, 2014 RO-MAN, IEEE conference proceedings, 2014, p. -506. Conference paper, Published paper (Refereed)
Abstract [en]

In order for robots to function in unstructured environments in interaction with humans, they must be able to reason about the world in a semantically meaningful way. An essential capability is to segment the world into semantically plausible object hypotheses. In this paper we propose a general framework that can be used for reasoning about objects and their functionality in manipulation activities. Our system employs a hierarchical segmentation framework that extracts object hypotheses from RGB-D video. Motivated by cognitive studies on humans, our work leverages contextual information, e.g., that objects obey the laws of physics, to formulate object hypotheses from regions in a mathematically principled manner.
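As a flavor of the "objects obey the laws of physics" cue, the sketch below checks whether a candidate region is physically supported, i.e., whether its lowest points touch a known support plane rather than float in mid-air. The function, its parameters, and the choice of z as the up axis are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def supported_by_plane(region_pts, plane_height, tol=0.02):
    """Physical-plausibility cue for an object hypothesis: its lowest
    points should come within `tol` meters of a known support plane
    (z is assumed to be the up axis). Illustrative assumption only."""
    lowest = region_pts[:, 2].min()
    return abs(lowest - plane_height) <= tol

# Toy usage: a region resting on a table at z = 0.75 m passes the cue.
rng = np.random.default_rng(2)
region = rng.uniform([0.0, 0.0, 0.75], [0.1, 0.1, 0.85], size=(300, 3))
print(supported_by_plane(region, plane_height=0.75))
```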

Place, publisher, year, edition, pages
IEEE conference proceedings, 2014
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-158006 (URN), 10.1109/ROMAN.2014.6926302 (DOI), 000366603200082 (ISI), 2-s2.0-84937571379 (Scopus ID), 978-1-4799-6763-6 (ISBN)
Conference
International Symposium on Robot and Human Interactive Communication, August 25-29, Edinburgh, Scotland, UK
Note

QC 20150122

Available from: 2014-12-18. Created: 2014-12-18. Last updated: 2025-02-07. Bibliographically approved