Ek, Carl Henrik
Publications (10 of 50)
Caccamo, S., Bekiroglu, Y., Ek, C. H. & Kragic, D. (2016). Active Exploration Using Gaussian Random Fields and Gaussian Process Implicit Surfaces. In: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016). Paper presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 9-14, 2016, Daejeon, South Korea (pp. 582-589). Institute of Electrical and Electronics Engineers (IEEE)
Active Exploration Using Gaussian Random Fields and Gaussian Process Implicit Surfaces
2016 (English). In: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016), Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 582-589. Conference paper, Published paper (Refereed)
Abstract [en]

In this work we study the problem of exploring surfaces and building compact 3D representations of the environment surrounding a robot through active perception. We propose an online probabilistic framework that merges visual and tactile measurements using Gaussian Random Fields and Gaussian Process Implicit Surfaces. The system investigates incomplete point clouds in order to find a small set of regions of interest, which are then physically explored with a robotic arm equipped with tactile sensors. We show experimental results obtained using a PrimeSense camera, a Kinova Jaco2 robotic arm and Optoforce sensors in different scenarios. We then demonstrate how to use the online framework for object detection and terrain classification.
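
The core loop of such a framework can be illustrated compactly. Below is a minimal sketch, assuming a toy 2D setting with a zero-mean RBF-kernel GP: surface points from a point cloud constrain a GP implicit surface, and the next tactile probe is placed where the posterior variance is highest. The kernel, grid, and data are illustrative stand-ins, not the paper's implementation.

```python
# Toy Gaussian Process Implicit Surface (GPIS): fit f(x) = 0 on observed
# surface points, then pick the next touch location where the posterior
# variance is highest. Kernel choice, noise level and the 2D grid are
# assumptions made for illustration; the paper fuses visual and tactile
# data in 3D and restricts exploration to regions of interest.
import numpy as np

def rbf(A, B, ell=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 15)          # sparse "point cloud"
X = np.c_[np.cos(theta), np.sin(theta)]        # points on a unit circle
y = np.zeros(len(X))                           # implicit surface: f = 0

K = rbf(X, X) + 1e-4 * np.eye(len(X))
L = np.linalg.cholesky(K)

g = np.linspace(-1.5, 1.5, 40)
Xs = np.array([[a, b] for a in g for b in g])  # candidate query grid
v = np.linalg.solve(L, rbf(X, Xs))
var = 1.0 - (v ** 2).sum(0)                    # k(x, x) = 1 for this RBF

next_touch = Xs[np.argmax(var)]                # most uncertain location
print("explore next at", next_touch)
```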

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2016
Keywords
Active perception, Surface reconstruction, Gaussian process, Implicit surface, Random field, Tactile exploration
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-202672 (URN), 10.1109/IROS.2016.7759112 (DOI), 000391921700086 (ISI), 2-s2.0-85006371409 (Scopus ID), 978-1-5090-3762-9 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 9-14, 2016, Daejeon, South Korea
Note

QC 20170306

Available from: 2017-03-06. Created: 2017-03-06. Last updated: 2025-02-09. Bibliographically approved
Bekiroglu, Y., Damianou, A., Detry, R., Stork, J. A., Kragic, D. & Ek, C. H. (2016). Probabilistic consolidation of grasp experience. In: Proceedings - IEEE International Conference on Robotics and Automation. Paper presented at the 2016 IEEE International Conference on Robotics and Automation, ICRA 2016, 16 May 2016 through 21 May 2016 (pp. 193-200). IEEE conference proceedings
Probabilistic consolidation of grasp experience
2016 (English). In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2016, p. 193-200. Conference paper, Published paper (Refereed)
Abstract [en]

We present a probabilistic model for the joint representation of several sensory modalities and action parameters in a robotic grasping scenario. Our non-linear probabilistic latent variable model encodes relationships between grasp-related parameters, learns the importance of features, and expresses confidence in its estimates. The model learns associations between stable and unstable grasps that it experiences during an exploration phase. We demonstrate the applicability of the model for estimating grasp stability, correcting grasps, identifying objects based on tactile imprints and predicting tactile imprints from object-relative gripper poses. We performed experiments on a real platform with both known and novel objects, i.e., objects the robot was trained with and previously unseen objects. Grasp correction had a 75% success rate on known objects and 73% on new objects. We compared our model to a traditional regression model, which succeeded in correcting grasps in only 38% of cases.
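
As a hedged toy analogue of the grasp-correction idea (not the paper's non-linear latent variable model), the sketch below embeds grasp features in a low-dimensional space with plain PCA and "corrects" a grasp by moving to its nearest stable neighbour in that space; all data and dimensions are hypothetical.

```python
# Toy analogue of grasp correction via a shared low-dimensional embedding.
# A plain PCA stands in for the paper's probabilistic latent variable
# model, purely for illustration; the data is synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
stable = rng.normal(loc=+1.0, size=(50, 8))    # features of stable grasps
unstable = rng.normal(loc=-1.0, size=(50, 8))  # features of unstable ones

X = np.vstack([stable, unstable])
labels = np.array([1] * 50 + [0] * 50)

pca = PCA(n_components=2).fit(X)
Z = pca.transform(X)

def correct(grasp_features):
    """Embed a new grasp and return the nearest stable grasp's features."""
    z = pca.transform(grasp_features[None, :])
    stable_Z = Z[labels == 1]
    nearest = stable_Z[np.argmin(((stable_Z - z) ** 2).sum(1))]
    return pca.inverse_transform(nearest[None, :])[0]

print(correct(rng.normal(loc=-1.0, size=8))[:3])
```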

Place, publisher, year, edition, pages
IEEE conference proceedings, 2016
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-197236 (URN), 10.1109/ICRA.2016.7487133 (DOI), 000389516200024 (ISI), 2-s2.0-84977472359 (Scopus ID), 9781467380263 (ISBN)
Conference
2016 IEEE International Conference on Robotics and Automation, ICRA 2016, 16 May 2016 through 21 May 2016
Note

QC 20161207

Available from: 2016-12-07. Created: 2016-11-30. Last updated: 2025-02-09. Bibliographically approved
Damianou, A., Ek, C. H., Boorman, L., Lawrence, N. D. & Prescott, T. J. (2015). A Top-Down Approach for a Synthetic Autobiographical Memory System. In: Biomimetic and Biohybrid Systems, Living Machines 2015. Paper presented at the 4th International Conference on Biomimetic and Biohybrid Systems (Living Machines), July 28-31, 2015, Barcelona, Spain (pp. 280-292). Springer
A Top-Down Approach for a Synthetic Autobiographical Memory System
2015 (English). In: Biomimetic and Biohybrid Systems, Living Machines 2015, Springer, 2015, p. 280-292. Conference paper, Published paper (Refereed)
Abstract [en]

Autobiographical memory (AM) refers to the organisation of one's experience into a coherent narrative. The exact neural mechanisms responsible for the manifestation of AM in humans are unknown. On the other hand, the field of psychology has provided a useful understanding of the functionality of a bio-inspired synthetic AM (SAM) system at a higher level of description. This paper is concerned with a top-down approach to SAM, where known components and organisation guide the architecture but the unknown details of each module are abstracted. By using Bayesian latent variable models we obtain a transparent SAM system with which we can interact in a structured way. This allows us to reveal the properties of specific sub-modules and map them to functionality observed in biological systems. The top-down approach can cope well with the high performance requirements of a bio-inspired cognitive system. This is demonstrated in experiments using face data.

Place, publisher, year, edition, pages
Springer, 2015
Series
Lecture Notes in Artificial Intelligence, ISSN 0302-9743 ; 9222
Keywords
Synthetic autobiographical memory, Hippocampus, Robotics, Deep Gaussian process, MRD
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-177974 (URN), 10.1007/978-3-319-22979-9_28 (DOI), 000364183200028 (ISI), 2-s2.0-84947125286 (Scopus ID), 978-3-319-22979-9 (ISBN), 978-3-319-22978-2 (ISBN)
Conference
4th International Conference on Biomimetic and Biohybrid Systems (Living Machines), July 28-31, 2015, Barcelona, Spain
Note

QC 20151202

Available from: 2015-12-02. Created: 2015-11-30. Last updated: 2024-03-15. Bibliographically approved
Hjelm, M., Ek, C. H., Detry, R. & Kragic, D. (2015). Learning Human Priors for Task-Constrained Grasping. In: Computer Vision Systems (ICVS 2015). Paper presented at the 10th International Conference on Computer Vision Systems (ICVS), July 6-9, 2015, Copenhagen, Denmark (pp. 207-217). Springer Berlin/Heidelberg
Learning Human Priors for Task-Constrained Grasping
2015 (English). In: Computer Vision Systems (ICVS 2015), Springer Berlin/Heidelberg, 2015, p. 207-217. Conference paper, Published paper (Refereed)
Abstract [en]

An autonomous agent using man-made objects must understand how the task conditions grasp placement. In this paper we formulate task-based robotic grasping as a feature learning problem. Using a human demonstrator to provide examples of grasps associated with a specific task, we learn a representation such that similarity in task is reflected by similarity in features. The learned representation discards the parts of the sensory input that are redundant for the task, allowing the agent to ground and reason about the features relevant to the task. Synthesized grasps for an observed task on previously unseen objects can then be filtered and ordered by matching to learned instances, without the need for an analytically formulated metric. We show on a real robot how our approach is able to utilize the learned representation to synthesize and perform valid task-specific grasps on novel objects.
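
The filter-and-order step can be pictured with a small sketch. Here a random projection stands in for the learned task representation (an assumption made purely for illustration): synthesized grasps are ranked by distance to the mean of the demonstrated grasps in feature space, with no analytic metric involved.

```python
# Toy version of "filter and order synthesized grasps by matching to
# learned instances": rank candidates by distance to the demonstrations
# in a (stubbed) task feature space. The real system learns this
# representation; the random projection here is only a stand-in.
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(16, 4))                 # stand-in learned embedding

def features(grasp):
    return grasp @ W                         # raw grasp params -> task space

demos = rng.normal(size=(10, 16))            # human demonstrations of a task
proto = features(demos).mean(0)              # task prototype in feature space

candidates = rng.normal(size=(100, 16))      # synthesized candidate grasps
scores = ((features(candidates) - proto) ** 2).sum(1)
ranked = candidates[np.argsort(scores)]      # best task match first
print("best candidate score:", round(float(scores.min()), 3))
```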

Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2015
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 9163
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-177975 (URN), 10.1007/978-3-319-20904-3_20 (DOI), 000364183300020 (ISI), 2-s2.0-84949035044 (Scopus ID), 978-3-319-20904-3 (ISBN), 978-3-319-20903-6 (ISBN)
Conference
10th International Conference on Computer Vision Systems (ICVS), July 6-9, 2015, Copenhagen, Denmark
Note

QC 20151202

Available from: 2015-12-02. Created: 2015-11-30. Last updated: 2025-02-07. Bibliographically approved
Stork, J. A., Ek, C. H., Bekiroglu, Y. & Kragic, D. (2015). Learning Predictive State Representation for in-hand manipulation. In: Proceedings - IEEE International Conference on Robotics and Automation. Paper presented at the 2015 IEEE International Conference on Robotics and Automation, ICRA 2015, 26 May 2015 through 30 May 2015 (pp. 3207-3214). IEEE conference proceedings (June)
Learning Predictive State Representation for in-hand manipulation
2015 (English). In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2015, no June, p. 3207-3214. Conference paper, Published paper (Refereed)
Abstract [en]

We study the use of Predictive State Representations (PSRs) for modeling an in-hand manipulation task through interaction with the environment. We extend the original PSR model to the new domain of in-hand manipulation and address the problem of partial observability by introducing new kernel-based features that integrate both actions and observations. The model is learned directly from haptic data and is used to plan sequences of actions that rotate the object in the hand to a specific configuration by pushing it against a table. Further, we analyze the model's belief states using additional visual data and enable planning of action sequences when the observations are ambiguous. We show that the learned representation is geometrically meaningful by embedding labeled action-observation traces. Suitability for planning is demonstrated by a post-grasp manipulation example that changes the object state to multiple specified target configurations.
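
A flavour of PSR learning can be conveyed with a toy spectral construction: estimate the co-occurrence of short histories and tests from traces, and let the leading singular directions act as the predictive state. The two-observation process and window lengths below are illustrative assumptions; the paper uses kernel-based features of haptic data.

```python
# Minimal flavour of spectral PSR learning: estimate the co-occurrence
# of one-step histories and one-step tests from action-observation
# traces, then take an SVD; the numerical rank indicates the dimension
# of the predictive state. The two-state toy process is an assumption.
import numpy as np

rng = np.random.default_rng(3)
T = np.array([[0.9, 0.1], [0.2, 0.8]])       # hidden 2-state dynamics
seq, s = [], 0
for _ in range(20000):
    seq.append(s)                             # observation = hidden state
    s = rng.choice(2, p=T[s])

H = np.zeros((2, 2))                          # history (o_t) x test (o_{t+1})
for a, b in zip(seq[:-1], seq[1:]):
    H[a, b] += 1
H /= H.sum()

U, S, Vt = np.linalg.svd(H)
rank = int((S > 1e-3).sum())                  # effective predictive dimension
print("singular values:", np.round(S, 3), "rank:", rank)
```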

Place, publisher, year, edition, pages
IEEE conference proceedings, 2015
Keywords
Action sequences, Hand manipulation, Partial observability, Predictive state representation, Psr models, Target configurations, Visual data, Robotics
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-176107 (URN), 10.1109/ICRA.2015.7139641 (DOI), 000370974903031 (ISI), 2-s2.0-84938273485 (Scopus ID)
Conference
2015 IEEE International Conference on Robotics and Automation, ICRA 2015, 26 May 2015 through 30 May 2015
Note

QC 20151202. QC 20160411

Available from: 2015-12-02. Created: 2015-11-02. Last updated: 2024-03-15. Bibliographically approved
Stork, J. A., Ek, C. H. & Kragic, D. (2015). Learning Predictive State Representations for Planning. In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Paper presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 28 - October 2, 2015, Hamburg, Germany (pp. 3427-3434). IEEE Press
Learning Predictive State Representations for Planning
2015 (English). In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE Press, 2015, p. 3427-3434. Conference paper, Published paper (Refereed)
Abstract [en]

Predictive State Representations (PSRs) allow modeling of dynamical systems directly in observables, without relying on latent variable representations. A problem that arises when learning PSRs is that it is often hard to attribute semantic meaning to the learned representation, which makes generalization and planning in PSRs challenging. In this paper, we extend PSRs and introduce the notion of PSRs that include prior information (P-PSRs) to learn representations which are suitable for planning and interpretation. By learning a low-dimensional embedding of test features, we map belief points with similar semantics to the same region of a subspace. This facilitates better generalization for planning and semantic interpretation of the learned representation. Specifically, we show how to overcome the training-sample bias and introduce feature selection such that the resulting representation emphasizes observables related to the planning task. We show that our P-PSRs result in qualitatively meaningful representations and present quantitative results that indicate improved suitability for planning.

Place, publisher, year, edition, pages
IEEE Press, 2015
Series
IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-185107 (URN), 10.1109/IROS.2015.7353855 (DOI), 000371885403089 (ISI), 2-s2.0-84958177858 (Scopus ID), 978-1-4799-9994-1 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 28 - October 2, 2015, Hamburg, Germany
Note

QC 20160412

Available from: 2016-04-12. Created: 2016-04-11. Last updated: 2025-02-07. Bibliographically approved
Sharif Razavian, A., Azizpour, H., Maki, A., Sullivan, J., Ek, C. H. & Carlsson, S. (2015). Persistent Evidence of Local Image Properties in Generic ConvNets. In: Paulsen, Rasmus R., Pedersen, Kim S. (Eds.), Image Analysis: 19th Scandinavian Conference, SCIA 2015, Copenhagen, Denmark, June 15-17, 2015. Proceedings. Paper presented at the Scandinavian Conference on Image Analysis, Copenhagen, Denmark, 15-17 June 2015 (pp. 249-262). Springer Publishing Company
Persistent Evidence of Local Image Properties in Generic ConvNets
2015 (English). In: Image Analysis: 19th Scandinavian Conference, SCIA 2015, Copenhagen, Denmark, June 15-17, 2015. Proceedings / [ed] Paulsen, Rasmus R., Pedersen, Kim S., Springer Publishing Company, 2015, p. 249-262. Conference paper, Published paper (Refereed)
Abstract [en]

Supervised training of a convolutional network for object classification should make explicit any information related to the class of objects and disregard any auxiliary information associated with the capture of the image or the variation within the object class. Does this happen in practice? Although this seems to pertain to the very final layers in the network, if we look at earlier layers we find that this is not the case: surprisingly, strong spatial information is implicit. This paper addresses this by exploiting, in particular, the image representation at the first fully connected layer, i.e., the global image descriptor that has recently been shown to be most effective in a range of visual recognition tasks. We empirically demonstrate evidence for this finding in the context of four different tasks: 2D landmark detection, 2D object keypoint prediction, estimation of the RGB values of the input image, and recovery of the semantic label of each pixel. We base our investigation on a simple framework with ridge regression common across these tasks, and show results that all support our insight. Such spatial information can be used for computing correspondences between landmarks to good accuracy, and could potentially be useful for improving the training of convolutional nets for classification purposes.
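
The probing methodology reduces to a simple recipe: regress a spatial target directly from the global descriptor with ridge regression. The sketch below uses random vectors as stand-ins for first-fully-connected-layer features; only the regression pipeline reflects the paper's setup.

```python
# Sketch of the probing methodology: ridge regression from a global
# ConvNet descriptor to a spatial target (here, a 2D landmark position).
# The random "descriptors" are stand-ins for fc-layer features, which
# the paper extracts from a real network.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
F = rng.normal(size=(500, 4096))              # stand-in fc-layer descriptors
W = rng.normal(size=(4096, 2)) / 64.0
Y = F @ W + 0.1 * rng.normal(size=(500, 2))   # synthetic landmark (x, y)

Ftr, Fte, Ytr, Yte = train_test_split(F, Y, random_state=0)
reg = Ridge(alpha=10.0).fit(Ftr, Ytr)
err = np.abs(reg.predict(Fte) - Yte).mean()
print("mean landmark error:", round(float(err), 3))
```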

Place, publisher, year, edition, pages
Springer Publishing Company, 2015
Series
Image Processing, Computer Vision, Pattern Recognition, and Graphics ; 9127
National Category
Computer Systems
Identifiers
urn:nbn:se:kth:diva-172140 (URN), 10.1007/978-3-319-19665-7_21 (DOI), 2-s2.0-84947982864 (Scopus ID)
Conference
Scandinavian Conference on Image Analysis, Copenhagen, Denmark, 15-17 June, 2015
Note

QC 20150828

Available from: 2015-08-13. Created: 2015-08-13. Last updated: 2024-03-15. Bibliographically approved
Song, D., Ek, C. H., Hübner, K. & Kragic, D. (2015). Task-Based Robot Grasp Planning Using Probabilistic Inference. IEEE Transactions on Robotics, 31(3), 546-561
Task-Based Robot Grasp Planning Using Probabilistic Inference
2015 (English). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 31, no 3, p. 546-561. Article in journal (Refereed). Published
Abstract [en]

Grasping and manipulating everyday objects in a goal-directed manner is an important ability of a service robot. The robot needs to reason about task requirements and ground these in the sensorimotor information. Grasping and interaction with objects are challenging in real-world scenarios, where sensorimotor uncertainty is prevalent. This paper presents a probabilistic framework for the representation and modeling of robot-grasping tasks. The framework consists of Gaussian mixture models for generic data discretization, and discrete Bayesian networks for encoding the probabilistic relations among various task-relevant variables, including object and action features as well as task constraints. We evaluate the framework using a grasp database generated in a simulated environment including a human and two robot hand models. The generative modeling approach allows the prediction of grasping tasks given uncertain sensory data, as well as object and grasp selection in a task-oriented manner. Furthermore, the graphical model framework provides insights into dependencies between variables and features relevant for object grasping.
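
The two-stage structure (continuous features discretized by a Gaussian mixture, then linked to task variables by discrete probabilistic relations) can be sketched as follows; the synthetic data and the single conditional probability table are simplifying assumptions, where the paper builds a full discrete Bayesian network over many variables.

```python
# Sketch of the two-stage pipeline: a Gaussian mixture discretizes a
# continuous grasp feature, and a conditional probability table then
# links the discrete symbol to a task label. A full discrete Bayesian
# network would relate many such variables; the data here is synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
pour = rng.normal(0.0, 1.0, size=(200, 3))    # grasp features, task "pour"
hand = rng.normal(3.0, 1.0, size=(200, 3))    # grasp features, task "hand-over"
X = np.vstack([pour, hand])
task = np.array([0] * 200 + [1] * 200)

gmm = GaussianMixture(n_components=4, random_state=0).fit(X)
sym = gmm.predict(X)                          # discretized grasp symbol

cpt = np.ones((4, 2))                         # P(task | symbol), add-one smoothed
for s, t in zip(sym, task):
    cpt[s, t] += 1
cpt /= cpt.sum(axis=1, keepdims=True)
print("P(task | symbol):\n", np.round(cpt, 2))
```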

National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-170982 (URN), 10.1109/TRO.2015.2409912 (DOI), 000356518700003 (ISI), 2-s2.0-84926395738 (Scopus ID)
Note

QC 20150713

Available from: 2015-07-13. Created: 2015-07-13. Last updated: 2024-03-15. Bibliographically approved
Baisero, A., Pokorny, F. T., Kragic, D. & Ek, C. H. (2015). The path kernel: A novel kernel for sequential data. In: Ana Fred, Maria De Marsico (Eds.), Pattern Recognition: Applications and Methods: International Conference, ICPRAM 2013, Barcelona, Spain, February 15-18, 2013, Revised Selected Papers. Paper presented at the 2nd International Conference on Pattern Recognition Applications and Methods, ICPRAM 2013, Barcelona, Spain, 15 February 2013 through 18 February 2013 (pp. 71-84). Springer Berlin/Heidelberg
The path kernel: A novel kernel for sequential data
2015 (English). In: Pattern Recognition: Applications and Methods: International Conference, ICPRAM 2013, Barcelona, Spain, February 15-18, 2013, Revised Selected Papers / [ed] Ana Fred, Maria De Marsico, Springer Berlin/Heidelberg, 2015, p. 71-84. Conference paper, Published paper (Refereed)
Abstract [en]

We define a novel kernel function for finite sequences of arbitrary length, which we call the path kernel. We evaluate this kernel in a classification scenario using synthetic data sequences and show that it can outperform state-of-the-art sequential similarity measures. Furthermore, we find that, in our experiments, clustering data based on the path kernel yields clusters that are much easier to interpret than those produced by alternative approaches such as dynamic time warping or the global alignment kernel.
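
The path kernel's own recursion is not reproduced here; for context, the sketch below implements one of the baselines named above, the global alignment kernel, whose score accumulates local similarities over all monotone alignments of the two sequences.

```python
# Global alignment kernel (Cuturi et al.), one of the baselines the
# paper compares against: sums Gaussian local similarities over all
# monotone alignments of two sequences of possibly different lengths.
import numpy as np

def ga_kernel(x, y, sigma=1.0):
    n, m = len(x), len(y)
    K = np.zeros((n + 1, m + 1))
    K[0, 0] = 1.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            k = np.exp(-0.5 * (x[i - 1] - y[j - 1]) ** 2 / sigma**2)
            K[i, j] = k * (K[i - 1, j] + K[i, j - 1] + K[i - 1, j - 1])
    return K[n, m]

a = np.sin(np.linspace(0, 3, 25))
b = np.sin(np.linspace(0, 3, 30) + 0.1)    # similar shape, different length
print(ga_kernel(a, b) > ga_kernel(a, -b))  # True: similar pair scores higher
```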

Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2015
Series
Advances in Intelligent Systems and Computing, ISSN 2194-5357 ; 318
Keywords
Kernels, Sequences, Pattern recognition, Dynamic time warping, Global alignment, Interpretability, Kernel function, Similarity measure, State of the art, Classification (of information)
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-167388 (URN), 10.1007/978-3-319-12610-4_5 (DOI), 000364822300005 (ISI), 2-s2.0-84914163004 (Scopus ID), 9783319126098 (ISBN)
Conference
2nd International Conference on Pattern Recognition Applications and Methods, ICPRAM 2013; Barcelona; Spain; 15 February 2013 through 18 February 2013
Note

QC 20150601

Available from: 2015-06-01. Created: 2015-05-22. Last updated: 2024-03-15. Bibliographically approved
Afkham, H. M., Ek, C. H. & Carlsson, S. (2014). A topological framework for training latent variable models. In: Proceedings - International Conference on Pattern Recognition. Paper presented at the 22nd International Conference on Pattern Recognition, ICPR 2014, 24 August 2014 through 28 August 2014 (pp. 2471-2476).
A topological framework for training latent variable models
2014 (English). In: Proceedings - International Conference on Pattern Recognition, 2014, p. 2471-2476. Conference paper, Published paper (Refereed)
Abstract [en]

We discuss the properties of a class of latent variable models that assume each labeled sample is associated with a set of different features, with no prior knowledge of which feature is the most relevant. Deformable Part Models (DPMs) are good examples of such models, which are usually considered expensive to train and very sensitive to initialization. In this paper, we focus on learning such models by introducing a topological framework and show how it is possible both to reduce the learning complexity and to produce more robust decision boundaries. We also argue that our framework can produce robust decision boundaries without exploiting dataset bias or relying on accurate annotations. To experimentally evaluate our method and compare it with previously published frameworks, we focus on the problem of image classification with object localization, in which the correct location of the objects is unknown during both training and testing and is treated as a latent variable.

Keywords
Pattern recognition, Topology, Deformable part models, Latent variable models, Learning complexity, Object localization, Prior knowledge, Relevant features, Robust decisions, Training and testing, Image classification
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-167941 (URN), 10.1109/ICPR.2014.427 (DOI), 000359818002099 (ISI), 2-s2.0-84919941135 (Scopus ID), 9781479952083 (ISBN)
Conference
22nd International Conference on Pattern Recognition, ICPR 2014, 24 August 2014 through 28 August 2014
Note

QC 20150605

Available from: 2015-06-05. Created: 2015-05-22. Last updated: 2024-03-15. Bibliographically approved