Ek, Carl Henrik
Publications (10 of 50)
Caccamo, S., Bekiroglu, Y., Ek, C. H. & Kragic, D. (2016). Active Exploration Using Gaussian Random Fields and Gaussian Process Implicit Surfaces. In: 2016 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2016): . Paper presented at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), OCT 09-14, 2016, Daejeon, SOUTH KOREA (pp. 582-589). Institute of Electrical and Electronics Engineers (IEEE)
Active Exploration Using Gaussian Random Fields and Gaussian Process Implicit Surfaces
2016 (English). In: 2016 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2016), Institute of Electrical and Electronics Engineers (IEEE), 2016, pp. 582-589. Conference paper, Published paper (Refereed)
Abstract [en]

In this work we study the problem of exploring surfaces and building compact 3D representations of the environment surrounding a robot through active perception. We propose an online probabilistic framework that merges visual and tactile measurements using Gaussian Random Fields and Gaussian Process Implicit Surfaces. The system investigates incomplete point clouds in order to find a small set of regions of interest which are then physically explored with a robotic arm equipped with tactile sensors. We show experimental results obtained using a PrimeSense camera, a Kinova Jaco2 robotic arm and Optoforce sensors in different scenarios. We then demonstrate how to use the online framework for object detection and terrain classification.
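No implementation accompanies the abstract, but its central mechanism — a Gaussian process whose posterior mean has the surface as its zero level set, and whose posterior variance selects the next region to touch — can be sketched as follows. The kernel, lengthscale, and toy 2D scene are illustrative assumptions, not the authors' setup:

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.4, variance=1.0):
    """Squared-exponential kernel between row-vector point sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_implicit_surface(X, y, Xq, noise=1e-4):
    """GP regression: posterior mean and variance at query points Xq.
    The implicit surface is the zero level set of the posterior mean."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xq, X)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = rbf_kernel(Xq, Xq).diagonal() - (v ** 2).sum(0)
    return mean, var

# Partial view of a 2D "surface" (the unit circle): points on it get value 0;
# one interior point (-1) and one exterior point (+1) fix the sign convention.
theta = np.linspace(0, 1.5 * np.pi, 12)          # a gap of the circle is unseen
X = np.c_[np.cos(theta), np.sin(theta)]
y = np.zeros(len(X))
X = np.vstack([X, [[0.0, 0.0]], [[2.0, 2.0]]])
y = np.concatenate([y, [-1.0], [1.0]])

# Active exploration step: the next tactile probe goes where the posterior
# variance over candidate locations is highest.
grid = np.random.default_rng(0).uniform(-1.5, 1.5, size=(400, 2))
mean, var = gp_implicit_surface(X, y, grid)
next_probe = grid[np.argmax(var)]
```

The paper additionally fuses a Gaussian Random Field over the visual point cloud; this sketch only shows the implicit-surface half of that pipeline.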

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2016
Keywords
Active perception, Surface reconstruction, Gaussian process, Implicit surface, Random field, Tactile exploration
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-202672 (URN) 10.1109/IROS.2016.7759112 (DOI) 000391921700086 (ISI) 2-s2.0-85006371409 (Scopus ID) 978-1-5090-3762-9 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), OCT 09-14, 2016, Daejeon, SOUTH KOREA
Note

QC 20170306

Available from: 2017-03-06 Created: 2017-03-06 Last updated: 2025-02-09 Bibliographically approved
Bekiroglu, Y., Damianou, A., Detry, R., Stork, J. A., Kragic, D. & Ek, C. H. (2016). Probabilistic consolidation of grasp experience. In: Proceedings - IEEE International Conference on Robotics and Automation: . Paper presented at 2016 IEEE International Conference on Robotics and Automation, ICRA 2016, 16 May 2016 through 21 May 2016 (pp. 193-200). IEEE conference proceedings
Probabilistic consolidation of grasp experience
2016 (English). In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2016, pp. 193-200. Conference paper, Published paper (Refereed)
Abstract [en]

We present a probabilistic model for joint representation of several sensory modalities and action parameters in a robotic grasping scenario. Our non-linear probabilistic latent variable model encodes relationships between grasp-related parameters, learns the importance of features, and expresses confidence in estimates. The model learns associations between stable and unstable grasps that it experiences during an exploration phase. We demonstrate the applicability of the model for estimating grasp stability, correcting grasps, identifying objects based on tactile imprints and predicting tactile imprints from object-relative gripper poses. We performed experiments on a real platform with both known and novel objects, i.e., objects the robot was trained on and previously unseen objects. Grasp correction had a 75% success rate on known objects and 73% on novel objects. We compared our model to a traditional regression model, which succeeded in correcting grasps in only 38% of cases.

Place, publisher, year, edition, pages
IEEE conference proceedings, 2016
National Category
Robotics and automation
Identifiers
urn:nbn:se:kth:diva-197236 (URN) 10.1109/ICRA.2016.7487133 (DOI) 000389516200024 (ISI) 2-s2.0-84977472359 (Scopus ID) 9781467380263 (ISBN)
Conference
2016 IEEE International Conference on Robotics and Automation, ICRA 2016, 16 May 2016 through 21 May 2016
Note

QC 20161207

Available from: 2016-12-07 Created: 2016-11-30 Last updated: 2025-02-09 Bibliographically approved
Damianou, A., Ek, C. H., Boorman, L., Lawrence, N. D. & Prescott, T. J. (2015). A Top-Down Approach for a Synthetic Autobiographical Memory System. In: BIOMIMETIC AND BIOHYBRID SYSTEMS, LIVING MACHINES 2015: . Paper presented at 4th International Conference on Biomimetic and Biohybrid Systems (Living Machines), JUL 28-31, 2015, Barcelona, SPAIN (pp. 280-292). Springer
A Top-Down Approach for a Synthetic Autobiographical Memory System
2015 (English). In: BIOMIMETIC AND BIOHYBRID SYSTEMS, LIVING MACHINES 2015, Springer, 2015, pp. 280-292. Conference paper, Published paper (Refereed)
Abstract [en]

Autobiographical memory (AM) refers to the organisation of one's experience into a coherent narrative. The exact neural mechanisms responsible for the manifestation of AM in humans are unknown. On the other hand, the field of psychology has provided us with useful understanding of the functionality of a bio-inspired synthetic AM (SAM) system at a higher level of description. This paper is concerned with a top-down approach to SAM, where known components and organisation guide the architecture but the unknown details of each module are abstracted. By using Bayesian latent variable models we obtain a transparent SAM system with which we can interact in a structured way. This allows us to reveal the properties of specific sub-modules and map them to functionality observed in biological systems. The top-down approach can cope well with the high performance requirements of a bio-inspired cognitive system. This is demonstrated in experiments using face data.

Place, publisher, year, edition, pages
Springer, 2015
Series
Lecture Notes in Artificial Intelligence, ISSN 0302-9743 ; 9222
Keywords
Synthetic autobiographical memory, Hippocampus, Robotics, Deep Gaussian process, MRD
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-177974 (URN) 10.1007/978-3-319-22979-9_28 (DOI) 000364183200028 (ISI) 2-s2.0-84947125286 (Scopus ID) 978-3-319-22979-9 (ISBN) 978-3-319-22978-2 (ISBN)
Conference
4th International Conference on Biomimetic and Biohybrid Systems (Living Machines), JUL 28-31, 2015, Barcelona, SPAIN
Note

QC 20151202

Available from: 2015-12-02 Created: 2015-11-30 Last updated: 2024-03-15 Bibliographically approved
Hjelm, M., Ek, C. H., Detry, R. & Kragic, D. (2015). Learning Human Priors for Task-Constrained Grasping. In: COMPUTER VISION SYSTEMS (ICVS 2015): . Paper presented at 10th International Conference on Computer Vision Systems (ICVS), JUL 06-09, 2015, Copenhagen, DENMARK (pp. 207-217). Springer Berlin/Heidelberg
Learning Human Priors for Task-Constrained Grasping
2015 (English). In: COMPUTER VISION SYSTEMS (ICVS 2015), Springer Berlin/Heidelberg, 2015, pp. 207-217. Conference paper, Published paper (Refereed)
Abstract [en]

An autonomous agent using man-made objects must understand how the task conditions the grasp placement. In this paper we formulate task-based robotic grasping as a feature learning problem. Using a human demonstrator to provide examples of grasps associated with a specific task, we learn a representation such that similarity in task is reflected by similarity in features. The learned representation discards parts of the sensory input that are redundant for the task, allowing the agent to ground and reason about the relevant features for the task. Synthesized grasps for an observed task on previously unseen objects can then be filtered and ordered by matching to learned instances, without the need for an analytically formulated metric. We show on a real robot how our approach is able to utilize the learned representation to synthesize and perform valid task-specific grasps on novel objects.

Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2015
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 9163
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-177975 (URN) 10.1007/978-3-319-20904-3_20 (DOI) 000364183300020 (ISI) 2-s2.0-84949035044 (Scopus ID) 978-3-319-20904-3 (ISBN) 978-3-319-20903-6 (ISBN)
Conference
10th International Conference on Computer Vision Systems (ICVS), JUL 06-09, 2015, Copenhagen, DENMARK
Note

QC 20151202

Available from: 2015-12-02 Created: 2015-11-30 Last updated: 2025-02-07 Bibliographically approved
Stork, J. A., Ek, C. H., Bekiroglu, Y. & Kragic, D. (2015). Learning Predictive State Representation for in-hand manipulation. In: Proceedings - IEEE International Conference on Robotics and Automation: . Paper presented at 2015 IEEE International Conference on Robotics and Automation, ICRA 2015, 26 May 2015 through 30 May 2015 (pp. 3207-3214). IEEE conference proceedings (June)
Learning Predictive State Representation for in-hand manipulation
2015 (English). In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2015, no. June, pp. 3207-3214. Conference paper, Published paper (Refereed)
Abstract [en]

We study the use of Predictive State Representation (PSR) for modeling of an in-hand manipulation task through interaction with the environment. We extend the original PSR model to a new domain of in-hand manipulation and address the problem of partial observability by introducing new kernel-based features that integrate both actions and observations. The model is learned directly from haptic data and is used to plan series of actions that rotate the object in the hand to a specific configuration by pushing it against a table. Further, we analyze the model's belief states using additional visual data and enable planning of action sequences when the observations are ambiguous. We show that the learned representation is geometrically meaningful by embedding labeled action-observation traces. Suitability for planning is demonstrated by a post-grasp manipulation example that changes the object state to multiple specified target configurations.

Place, publisher, year, edition, pages
IEEE conference proceedings, 2015
Keywords
Action sequences, Hand manipulation, Partial observability, Predictive state representation, PSR models, Target configurations, Visual data, Robotics
National Category
Electrical engineering and electronics
Identifiers
urn:nbn:se:kth:diva-176107 (URN) 10.1109/ICRA.2015.7139641 (DOI) 000370974903031 (ISI) 2-s2.0-84938273485 (Scopus ID)
Conference
2015 IEEE International Conference on Robotics and Automation, ICRA 2015, 26 May 2015 through 30 May 2015
Note

QC 20151202. QC 20160411

Available from: 2015-12-02 Created: 2015-11-02 Last updated: 2024-03-15 Bibliographically approved
Stork, J. A., Ek, C. H. & Kragic, D. (2015). Learning Predictive State Representations for Planning. In: 2015 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS): . Paper presented at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), SEP 28-OCT 02, 2015, Hamburg, GERMANY (pp. 3427-3434). IEEE Press
Learning Predictive State Representations for Planning
2015 (English). In: 2015 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), IEEE Press, 2015, pp. 3427-3434. Conference paper, Published paper (Refereed)
Abstract [en]

Predictive State Representations (PSRs) allow modeling of dynamical systems directly in observables and without relying on latent variable representations. A problem that arises from learning PSRs is that it is often hard to attribute semantic meaning to the learned representation. This makes generalization and planning in PSRs challenging. In this paper, we extend PSRs and introduce the notion of PSRs that include prior information (P-PSRs) to learn representations which are suitable for planning and interpretation. By learning a low-dimensional embedding of test features we map belief points with similar semantics to the same region of a subspace. This facilitates better generalization for planning and semantic interpretation of the learned representation. Specifically, we show how to overcome the training-sample bias and introduce feature selection such that the resulting representation emphasizes observables related to the planning task. We show that our P-PSRs result in qualitatively meaningful representations and present quantitative results that indicate improved suitability for planning.

Place, publisher, year, edition, pages
IEEE Press, 2015
Series
IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-185107 (URN) 10.1109/IROS.2015.7353855 (DOI) 000371885403089 (ISI) 2-s2.0-84958177858 (Scopus ID) 978-1-4799-9994-1 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), SEP 28-OCT 02, 2015, Hamburg, GERMANY
Note

QC 20160412

Available from: 2016-04-12 Created: 2016-04-11 Last updated: 2025-02-07 Bibliographically approved
Sharif Razavian, A., Azizpour, H., Maki, A., Sullivan, J., Ek, C. H. & Carlsson, S. (2015). Persistent Evidence of Local Image Properties in Generic ConvNets. In: Paulsen, Rasmus R., Pedersen, Kim S. (Ed.), Image Analysis: 19th Scandinavian Conference, SCIA 2015, Copenhagen, Denmark, June 15-17, 2015. Proceedings. Paper presented at Scandinavian Conference on Image Analysis, Copenhagen, Denmark, 15-17 June, 2015 (pp. 249-262). Springer Publishing Company
Persistent Evidence of Local Image Properties in Generic ConvNets
2015 (English). In: Image Analysis: 19th Scandinavian Conference, SCIA 2015, Copenhagen, Denmark, June 15-17, 2015. Proceedings / [ed] Paulsen, Rasmus R., Pedersen, Kim S., Springer Publishing Company, 2015, pp. 249-262. Conference paper, Published paper (Refereed)
Abstract [en]

Supervised training of a convolutional network for object classification should make explicit any information related to the class of objects and disregard any auxiliary information associated with the capture of the image or the variation within the object class. Does this happen in practice? Although this seems to pertain to the very final layers in the network, if we look at earlier layers we find that this is not the case. Surprisingly, strong spatial information is implicit. This paper addresses this, in particular exploiting the image representation at the first fully connected layer, i.e. the global image descriptor which has been recently shown to be most effective in a range of visual recognition tasks. We empirically demonstrate evidence for this finding in the contexts of four different tasks: 2d landmark detection, 2d object keypoint prediction, estimation of the RGB values of the input image, and recovery of the semantic label of each pixel. We base our investigation on a simple framework with ridge regression used in common across these tasks, and show results which all support our insight. Such spatial information can be used for computing correspondence of landmarks to good accuracy, and should potentially also be useful for improving the training of convolutional nets for classification purposes.
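The probing tool the abstract describes — one closed-form ridge regression from a frozen image descriptor to a spatial target — is simple enough to sketch. The descriptor below is a synthetic stand-in that embeds the landmark position linearly by construction (an assumption for illustration; the paper probes real first-fully-connected-layer ConvNet features):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a ConvNet descriptor: the 2D landmark position is
# mixed linearly into many dimensions plus noise, mimicking the hypothesis
# that spatial information survives in the global image descriptor.
n, d = 300, 64
landmarks = rng.uniform(0, 32, size=(n, 2))            # (row, col) targets
mix = rng.standard_normal((2, d))
descriptors = landmarks @ mix + 0.1 * rng.standard_normal((n, d))

def ridge_fit(X, Y, lam=1e-3):
    """Closed-form ridge regression weights mapping X -> Y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

# Fit on 200 images, then read the landmark position back out of held-out
# descriptors; a low error means the descriptor retains spatial information.
B = ridge_fit(descriptors[:200], landmarks[:200])
pred = descriptors[200:] @ B
mean_abs_err = np.abs(pred - landmarks[200:]).mean()   # in pixels
```

The same fit-and-read-back recipe applies unchanged to the paper's other probing targets (keypoints, RGB values, per-pixel labels); only `landmarks` changes.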

Place, publisher, year, edition, pages
Springer Publishing Company, 2015
Series
Image Processing, Computer Vision, Pattern Recognition, and Graphics ; 9127
National Category
Computer Systems
Identifiers
urn:nbn:se:kth:diva-172140 (URN) 10.1007/978-3-319-19665-7_21 (DOI) 2-s2.0-84947982864 (Scopus ID)
Conference
Scandinavian Conference on Image Analysis, Copenhagen, Denmark, 15-17 June, 2015
Note

QC 20150828

Available from: 2015-08-13 Created: 2015-08-13 Last updated: 2024-03-15 Bibliographically approved
Song, D., Ek, C. H., Hübner, K. & Kragic, D. (2015). Task-Based Robot Grasp Planning Using Probabilistic Inference. IEEE Transactions on Robotics, 31(3), 546-561
Task-Based Robot Grasp Planning Using Probabilistic Inference
2015 (English). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 31, no. 3, pp. 546-561. Article in journal (Refereed) Published
Abstract [en]

Grasping and manipulating everyday objects in a goal-directed manner is an important ability of a service robot. The robot needs to reason about task requirements and ground these in the sensorimotor information. Grasping and interaction with objects are challenging in real-world scenarios, where sensorimotor uncertainty is prevalent. This paper presents a probabilistic framework for the representation and modeling of robot-grasping tasks. The framework consists of Gaussian mixture models for generic data discretization, and discrete Bayesian networks for encoding the probabilistic relations among various task-relevant variables, including object and action features as well as task constraints. We evaluate the framework using a grasp database generated in a simulated environment including a human and two robot hand models. The generative modeling approach allows the prediction of grasping tasks given uncertain sensory data, as well as object and grasp selection in a task-oriented manner. Furthermore, the graphical model framework provides insights into dependencies between variables and features relevant for object grasping.
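The first stage of the framework, Gaussian-mixture discretization of continuous sensorimotor data, can be sketched with a minimal 1D EM loop. The data, component count, and initialization below are invented for illustration; the paper's mixtures are multivariate and feed a discrete Bayesian network:

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=100):
    """Minimal EM for a 1D Gaussian mixture; returns means, variances,
    weights and per-sample responsibilities."""
    # Spread the initial means across the data range so components separate.
    mu = np.linspace(x.min(), x.max(), k)
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample.
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities.
        nk = r.sum(0)
        mu = (r * x[:, None]).sum(0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(0) / nk + 1e-9
        pi = nk / len(x)
    return mu, var, pi, r

rng = np.random.default_rng(1)
# A continuous grasp-related feature (say, a wrist angle) with two modes.
x = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(3.0, 0.7, 200)])
mu, var, pi, resp = em_gmm_1d(x, k=2)

# Discretization: each sample becomes the index of its most responsible
# component - a symbol that a discrete Bayesian network can consume.
symbols = resp.argmax(1)
```

Because each variable is reduced to a small symbol alphabet, the downstream Bayesian network over task, object, and action variables stays tractable.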

National Category
Electrical engineering and electronics
Identifiers
urn:nbn:se:kth:diva-170982 (URN) 10.1109/TRO.2015.2409912 (DOI) 000356518700003 (ISI) 2-s2.0-84926395738 (Scopus ID)
Note

QC 20150713

Available from: 2015-07-13 Created: 2015-07-13 Last updated: 2024-03-15 Bibliographically approved
Baisero, A., Pokorny, F. T., Kragic, D. & Ek, C. H. (2015). The path kernel: A novel kernel for sequential data. In: Ana Fred, Maria De Marsico (Ed.), Pattern Recognition: Applications and Methods : International Conference, ICPRAM 2013 Barcelona, Spain, February 15–18, 2013 Revised Selected Papers. Paper presented at 2nd International Conference on Pattern Recognition Applications and Methods, ICPRAM 2013; Barcelona; Spain; 15 February 2013 through 18 February 2013 (pp. 71-84). Springer Berlin/Heidelberg
The path kernel: A novel kernel for sequential data
2015 (English). In: Pattern Recognition: Applications and Methods : International Conference, ICPRAM 2013 Barcelona, Spain, February 15–18, 2013 Revised Selected Papers / [ed] Ana Fred, Maria De Marsico, Springer Berlin/Heidelberg, 2015, pp. 71-84. Conference paper, Published paper (Refereed)
Abstract [en]

We define a novel kernel function for finite sequences of arbitrary length which we call the path kernel. We evaluate this kernel in a classification scenario using synthetic data sequences and show that our kernel can outperform state-of-the-art sequential similarity measures. Furthermore, we find that, in our experiments, clustering data based on the path kernel results in much improved interpretability of the clusters compared to alternative approaches such as dynamic time warping or the global alignment kernel.
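The path kernel's definition is not given in the abstract, so it is not reproduced here; instead, dynamic time warping — one of the baselines the paper compares against — illustrates the kind of variable-length sequence-similarity measure at issue. A textbook dynamic-programming implementation (the example sequences are invented):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1D sequences:
    the minimum cumulative pointwise cost over monotone alignments."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# Sequences of different lengths tracing the same shape score as close,
# while a different shape scores as far.
s1 = np.sin(np.linspace(0, 2 * np.pi, 40))
s2 = np.sin(np.linspace(0, 2 * np.pi, 55))   # same shape, different length
s3 = np.cos(np.linspace(0, 2 * np.pi, 40))   # different shape
```

Note that DTW, unlike a kernel, does not in general yield a positive-definite similarity matrix; avoiding that drawback is one motivation for kernels on sequences such as the path kernel and the global alignment kernel.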

Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2015
Series
Advances in Intelligent Systems and Computing, ISSN 2194-5357 ; 318
Keywords
Kernels, Sequences, Pattern recognition, Dynamic time warping, Global alignment, Interpretability, Kernel function, Similarity measure, State of the art, Classification (of information)
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-167388 (URN) 10.1007/978-3-319-12610-4_5 (DOI) 000364822300005 (ISI) 2-s2.0-84914163004 (Scopus ID) 9783319126098 (ISBN)
Conference
2nd International Conference on Pattern Recognition Applications and Methods, ICPRAM 2013; Barcelona; Spain; 15 February 2013 through 18 February 2013
Note

QC 20150601

Available from: 2015-06-01 Created: 2015-05-22 Last updated: 2024-03-15 Bibliographically approved
Afkham, H. M., Ek, C. H. & Carlsson, S. (2014). A topological framework for training latent variable models. In: Proceedings - International Conference on Pattern Recognition: . Paper presented at 22nd International Conference on Pattern Recognition, ICPR 2014, 24 August 2014 through 28 August 2014 (pp. 2471-2476).
A topological framework for training latent variable models
2014 (English). In: Proceedings - International Conference on Pattern Recognition, 2014, pp. 2471-2476. Conference paper, Published paper (Refereed)
Abstract [en]

We discuss the properties of a class of latent variable models that assumes each labeled sample is associated with a set of different features, with no prior knowledge of which feature is the most relevant feature to be used. Deformable-Part Models (DPM) can be seen as good examples of such models. These models are usually considered to be expensive to train and very sensitive to the initialization. In this paper, we focus on the learning of such models by introducing a topological framework and show how it is possible to both reduce the learning complexity and produce more robust decision boundaries. We will also argue how our framework can be used for producing robust decision boundaries without exploiting the dataset bias or relying on accurate annotations. To experimentally evaluate our method and compare with previously published frameworks, we focus on the problem of image classification with object localization. In this problem, the correct location of the objects is unknown, during both training and testing stages, and is considered as a latent variable.
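The latent-variable step the abstract refers to — the object location is unobserved, so the model maximizes a score over all candidate locations during both training and testing, as in DPMs — can be shown with a toy template-scoring sketch. The template, image, and planted location are all invented for illustration:

```python
import numpy as np

def best_latent_location(image, template):
    """Score every placement of a linear template and return the best score
    and its location - the latent step: location is unobserved, so we
    maximize over it at both training and test time."""
    th, tw = template.shape
    H, W = image.shape
    best, best_loc = -np.inf, (0, 0)
    for r in range(H - th + 1):
        for c in range(W - tw + 1):
            s = float((image[r:r + th, c:c + tw] * template).sum())
            if s > best:
                best, best_loc = s, (r, c)
    return best, best_loc

rng = np.random.default_rng(2)
template = np.ones((3, 3))            # toy "object" filter
image = rng.normal(0, 0.1, size=(12, 12))
image[5:8, 2:5] += 1.0                # object hidden at latent location (5, 2)
score, loc = best_latent_location(image, template)
```

In a full latent-variable learner this maximization sits inside the training loop (re-estimating the template given the current best locations), which is exactly the expensive, initialization-sensitive part the paper's topological framework targets.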

Keywords
Pattern recognition, Topology, Deformable part models, Latent variable models, Learning complexity, Object localization, Prior knowledge, Relevant features, Robust decisions, Training and testing, Image classification
National Category
Electrical engineering and electronics
Identifiers
urn:nbn:se:kth:diva-167941 (URN) 10.1109/ICPR.2014.427 (DOI) 000359818002099 (ISI) 2-s2.0-84919941135 (Scopus ID) 9781479952083 (ISBN)
Conference
22nd International Conference on Pattern Recognition, ICPR 2014, 24 August 2014 through 28 August 2014
Note

QC 20150605

Available from: 2015-06-05 Created: 2015-05-22 Last updated: 2024-03-15 Bibliographically approved