Ek, Carl Henrik
Publications (10 of 50)
Caccamo, S., Bekiroglu, Y., Ek, C. H. & Kragic, D. (2016). Active Exploration Using Gaussian Random Fields and Gaussian Process Implicit Surfaces. In: 2016 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2016): . Paper presented at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), OCT 09-14, 2016, Daejeon, SOUTH KOREA (pp. 582-589). Institute of Electrical and Electronics Engineers (IEEE)
Active Exploration Using Gaussian Random Fields and Gaussian Process Implicit Surfaces
2016 (English). In: 2016 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2016), Institute of Electrical and Electronics Engineers (IEEE), 2016, pp. 582-589. Conference paper, published paper (Refereed)
Abstract [en]

In this work we study the problem of exploring surfaces and building compact 3D representations of the environment surrounding a robot through active perception. We propose an online probabilistic framework that merges visual and tactile measurements using Gaussian Random Fields and Gaussian Process Implicit Surfaces. The system investigates incomplete point clouds in order to find a small set of regions of interest which are then physically explored with a robotic arm equipped with tactile sensors. We show experimental results obtained using a PrimeSense camera, a Kinova Jaco2 robotic arm and Optoforce sensors in different scenarios. We then demonstrate how to use the online framework for object detection and terrain classification.
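The core machinery, a Gaussian process whose zero level set represents the surface and whose posterior variance steers where to probe next, can be sketched in a few lines. The snippet below is a minimal illustration in scikit-learn, not the paper's implementation; the 2D contour, kernel length scale, and signed-distance anchor points are all toy assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy "partial point cloud": three quarters of a unit circle, observed as
# zero-crossings of a signed distance function; one inside and one outside
# anchor point fix the sign convention.
theta = np.linspace(0, 1.5 * np.pi, 30)          # last quarter is unobserved
X = np.c_[np.cos(theta), np.sin(theta)]
y = np.zeros(len(X))
X = np.vstack([X, [0.0, 0.0], [2.0, 2.0]])
y = np.r_[y, -1.0, 1.0]                          # negative inside, positive outside

# Fixed kernel for reproducibility; the zero level set of the posterior
# mean is the implicit surface estimate.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                              alpha=1e-4, optimizer=None)
gp.fit(X, y)

angles = np.linspace(0, 2 * np.pi, 100)
grid = np.c_[np.cos(angles), np.sin(angles)]
mean, std = gp.predict(grid, return_std=True)

# Active exploration step: the most uncertain candidate point is the next
# target for tactile probing; it lands in the unobserved quarter (x>0, y<0).
next_probe = grid[np.argmax(std)]
print(next_probe)
```

In the paper's setting the queries would range over a 3D grid built from the incomplete point cloud, and the selected region would be touched by the tactile arm before refitting.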

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2016
Keywords
Active perception, Surface reconstruction, Gaussian process, Implicit surface, Random field, Tactile exploration
HSV category
Identifiers
urn:nbn:se:kth:diva-202672 (URN); 10.1109/IROS.2016.7759112 (DOI); 000391921700086; 2-s2.0-85006371409 (Scopus ID); 978-1-5090-3762-9 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), OCT 09-14, 2016, Daejeon, SOUTH KOREA
Note

QC 20170306

Available from: 2017-03-06 Created: 2017-03-06 Last updated: 2025-02-09. Bibliographically checked
Bekiroglu, Y., Damianou, A., Detry, R., Stork, J. A., Kragic, D. & Ek, C. H. (2016). Probabilistic consolidation of grasp experience. In: Proceedings - IEEE International Conference on Robotics and Automation: . Paper presented at 2016 IEEE International Conference on Robotics and Automation, ICRA 2016, 16 May 2016 through 21 May 2016 (pp. 193-200). IEEE conference proceedings
Probabilistic consolidation of grasp experience
2016 (English). In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2016, pp. 193-200. Conference paper, published paper (Refereed)
Abstract [en]

We present a probabilistic model for joint representation of several sensory modalities and action parameters in a robotic grasping scenario. Our non-linear probabilistic latent variable model encodes relationships between grasp-related parameters, learns the importance of features, and expresses confidence in estimates. The model learns associations between stable and unstable grasps that it experiences during an exploration phase. We demonstrate the applicability of the model for estimating grasp stability, correcting grasps, identifying objects based on tactile imprints and predicting tactile imprints from object-relative gripper poses. We performed experiments on a real platform with both known and novel objects, i.e., objects the robot was trained on and previously unseen objects. Grasp correction had a 75% success rate on known objects, and 73% on new objects. We compared our model to a traditional regression model that succeeded in correcting grasps in only 38% of cases.
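The "expresses confidence in estimates" idea can be illustrated with a much simpler stand-in: a Gaussian process classifier trained on synthetic tactile features, whose predictive probability plays the role of the model's grasp-stability confidence. Everything below (the two feature clusters, the 2D feature space) is invented for illustration and is not the paper's latent variable model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Invented 2D "tactile feature" space: stable grasps cluster around one
# prototype imprint, unstable grasps around another.
stable = rng.normal(loc=[1.0, 1.0], scale=0.2, size=(40, 2))
unstable = rng.normal(loc=[-1.0, -1.0], scale=0.2, size=(40, 2))
X = np.vstack([stable, unstable])
y = np.r_[np.ones(40), np.zeros(40)]             # 1 = stable, 0 = unstable

clf = GaussianProcessClassifier(kernel=RBF(length_scale=1.0)).fit(X, y)

# Probabilistic output doubles as a confidence estimate: queries near a
# prototype get decisive probabilities, a query between the clusters does not.
probs = clf.predict_proba([[1.0, 1.0], [-1.0, -1.0], [0.0, 0.0]])[:, 1]
print(probs.round(2))
```

A grasp-correction loop would then move the gripper pose in the direction that raises this predicted stability, rather than thresholding it once.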

Place, publisher, year, edition, pages
IEEE conference proceedings, 2016
HSV category
Identifiers
urn:nbn:se:kth:diva-197236 (URN); 10.1109/ICRA.2016.7487133 (DOI); 000389516200024; 2-s2.0-84977472359 (Scopus ID); 9781467380263 (ISBN)
Conference
2016 IEEE International Conference on Robotics and Automation, ICRA 2016, 16 May 2016 through 21 May 2016
Note

QC 20161207

Available from: 2016-12-07 Created: 2016-11-30 Last updated: 2025-02-09. Bibliographically checked
Damianou, A., Ek, C. H., Boorman, L., Lawrence, N. D. & Prescott, T. J. (2015). A Top-Down Approach for a Synthetic Autobiographical Memory System. In: BIOMIMETIC AND BIOHYBRID SYSTEMS, LIVING MACHINES 2015: . Paper presented at 4th International Conference on Biomimetic and Biohybrid Systems (Living Machines), JUL 28-31, 2015, Barcelona, SPAIN (pp. 280-292). Springer
A Top-Down Approach for a Synthetic Autobiographical Memory System
2015 (English). In: BIOMIMETIC AND BIOHYBRID SYSTEMS, LIVING MACHINES 2015, Springer, 2015, pp. 280-292. Conference paper, published paper (Refereed)
Abstract [en]

Autobiographical memory (AM) refers to the organisation of one's experience into a coherent narrative. The exact neural mechanisms responsible for the manifestation of AM in humans are unknown. On the other hand, the field of psychology has provided us with a useful understanding of the functionality of a bio-inspired synthetic AM (SAM) system at a higher level of description. This paper is concerned with a top-down approach to SAM, where known components and organisation guide the architecture but the unknown details of each module are abstracted. By using Bayesian latent variable models we obtain a transparent SAM system with which we can interact in a structured way. This allows us to reveal the properties of specific sub-modules and map them to functionality observed in biological systems. The top-down approach can cope well with the high performance requirements of a bio-inspired cognitive system. This is demonstrated in experiments using face data.

Place, publisher, year, edition, pages
Springer, 2015
Series
Lecture Notes in Artificial Intelligence, ISSN 0302-9743 ; 9222
Keywords
Synthetic autobiographical memory, Hippocampus, Robotics, Deep Gaussian process, MRD
HSV category
Identifiers
urn:nbn:se:kth:diva-177974 (URN); 10.1007/978-3-319-22979-9_28 (DOI); 000364183200028; 2-s2.0-84947125286 (Scopus ID); 978-3-319-22979-9 (ISBN); 978-3-319-22978-2 (ISBN)
Conference
4th International Conference on Biomimetic and Biohybrid Systems (Living Machines), JUL 28-31, 2015, Barcelona, SPAIN
Note

QC 20151202

Available from: 2015-12-02 Created: 2015-11-30 Last updated: 2024-03-15. Bibliographically checked
Hjelm, M., Ek, C. H., Detry, R. & Kragic, D. (2015). Learning Human Priors for Task-Constrained Grasping. In: COMPUTER VISION SYSTEMS (ICVS 2015): . Paper presented at 10th International Conference on Computer Vision Systems (ICVS), JUL 06-09, 2015, Copenhagen, DENMARK (pp. 207-217). Springer Berlin/Heidelberg
Learning Human Priors for Task-Constrained Grasping
2015 (English). In: COMPUTER VISION SYSTEMS (ICVS 2015), Springer Berlin/Heidelberg, 2015, pp. 207-217. Conference paper, published paper (Refereed)
Abstract [en]

An autonomous agent using man-made objects must understand how the task conditions grasp placement. In this paper we formulate task-based robotic grasping as a feature learning problem. Using a human demonstrator to provide examples of grasps associated with a specific task, we learn a representation such that similarity in task is reflected by similarity in features. The learned representation discards parts of the sensory input that are redundant for the task, allowing the agent to ground and reason about the relevant features for the task. Synthesized grasps for an observed task on previously unseen objects can then be filtered and ordered by matching to learned instances, without the need for an analytically formulated metric. We show on a real robot how our approach is able to utilize the learned representation to synthesize and perform valid task-specific grasps on novel objects.

Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2015
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 9163
HSV category
Identifiers
urn:nbn:se:kth:diva-177975 (URN); 10.1007/978-3-319-20904-3_20 (DOI); 000364183300020; 2-s2.0-84949035044 (Scopus ID); 978-3-319-20904-3 (ISBN); 978-3-319-20903-6 (ISBN)
Conference
10th International Conference on Computer Vision Systems (ICVS), JUL 06-09, 2015, Copenhagen, DENMARK
Note

QC 20151202

Available from: 2015-12-02 Created: 2015-11-30 Last updated: 2025-02-07. Bibliographically checked
Stork, J. A., Ek, C. H., Bekiroglu, Y. & Kragic, D. (2015). Learning Predictive State Representation for in-hand manipulation. In: Proceedings - IEEE International Conference on Robotics and Automation: . Paper presented at 2015 IEEE International Conference on Robotics and Automation, ICRA 2015, 26 May 2015 through 30 May 2015 (pp. 3207-3214). IEEE conference proceedings (June)
Learning Predictive State Representation for in-hand manipulation
2015 (English). In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2015, no. June, pp. 3207-3214. Conference paper, published paper (Refereed)
Abstract [en]

We study the use of Predictive State Representation (PSR) for modeling of an in-hand manipulation task through interaction with the environment. We extend the original PSR model to a new domain of in-hand manipulation and address the problem of partial observability by introducing new kernel-based features that integrate both actions and observations. The model is learned directly from haptic data and is used to plan sequences of actions that rotate the object in the hand to a specific configuration by pushing it against a table. Further, we analyze the model's belief states using additional visual data and enable planning of action sequences when the observations are ambiguous. We show that the learned representation is geometrically meaningful by embedding labeled action-observation traces. Suitability for planning is demonstrated by a post-grasp manipulation example that changes the object state to multiple specified target configurations.
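The central PSR idea, representing a dynamical system purely through predicted probabilities of future observation sequences ("tests"), can be made concrete with a toy example. The sketch below is not the paper's kernel-based extension: it builds the system-dynamics matrix of an invented 2-state HMM and checks that its rank equals the number of hidden states, which is why the predictions of a few well-chosen tests suffice as state.

```python
import numpy as np
from itertools import product

# A hand-written 2-state HMM plays the role of the unknown dynamical system.
pi = np.array([0.5, 0.5])                        # initial state distribution
T = np.array([[0.9, 0.1], [0.2, 0.8]])           # state transitions
O = np.array([[0.8, 0.2], [0.3, 0.7]])           # emission probs for symbols 0/1

def prob(seq):
    """Exact probability of observing `seq`, via the forward algorithm."""
    alpha = pi.copy()
    for sym in seq:
        alpha = (alpha * O[:, sym]) @ T
    return alpha.sum()

# System-dynamics matrix: rows are histories, columns are tests (both all
# observation strings of length <= 2); entries are joint probabilities.
seqs = [()] + [s for k in (1, 2) for s in product((0, 1), repeat=k)]
D = np.array([[prob(h + t) for t in seqs] for h in seqs])

# The rank of D is bounded by the number of hidden states, so the
# predictions of two linearly independent tests form a sufficient state.
print(np.linalg.matrix_rank(D))
```

A spectral PSR learner estimates this matrix from sampled trajectories instead of the true model, then factors it (e.g. by SVD) to obtain the state update.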

Place, publisher, year, edition, pages
IEEE conference proceedings, 2015
Keywords
Action sequences, Hand manipulation, Partial observability, Predictive state representation, Psr models, Target configurations, Visual data, Robotics
HSV category
Identifiers
urn:nbn:se:kth:diva-176107 (URN); 10.1109/ICRA.2015.7139641 (DOI); 000370974903031; 2-s2.0-84938273485 (Scopus ID)
Conference
2015 IEEE International Conference on Robotics and Automation, ICRA 2015, 26 May 2015 through 30 May 2015
Note

QC 20151202. QC 20160411

Available from: 2015-12-02 Created: 2015-11-02 Last updated: 2024-03-15. Bibliographically checked
Stork, J. A., Ek, C. H. & Kragic, D. (2015). Learning Predictive State Representations for Planning. In: 2015 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS): . Paper presented at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), SEP 28-OCT 02, 2015, Hamburg, GERMANY (pp. 3427-3434). IEEE Press
Learning Predictive State Representations for Planning
2015 (English). In: 2015 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), IEEE Press, 2015, pp. 3427-3434. Conference paper, published paper (Refereed)
Abstract [en]

Predictive State Representations (PSRs) allow modeling of dynamical systems directly in observables, without relying on latent variable representations. A problem that arises from learning PSRs is that it is often hard to attribute semantic meaning to the learned representation. This makes generalization and planning in PSRs challenging. In this paper, we extend PSRs and introduce the notion of PSRs that include prior information (P-PSRs) to learn representations which are suitable for planning and interpretation. By learning a low-dimensional embedding of test features we map belief points with similar semantics to the same region of a subspace. This facilitates better generalization for planning and semantic interpretation of the learned representation. Specifically, we show how to overcome the training sample bias and introduce feature selection such that the resulting representation emphasizes observables related to the planning task. We show that our P-PSRs result in qualitatively meaningful representations and present quantitative results that indicate improved suitability for planning.

Place, publisher, year, edition, pages
IEEE Press, 2015
Series
IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
HSV category
Identifiers
urn:nbn:se:kth:diva-185107 (URN); 10.1109/IROS.2015.7353855 (DOI); 000371885403089; 2-s2.0-84958177858 (Scopus ID); 978-1-4799-9994-1 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), SEP 28-OCT 02, 2015, Hamburg, GERMANY
Note

QC 20160412

Available from: 2016-04-12 Created: 2016-04-11 Last updated: 2025-02-07. Bibliographically checked
Sharif Razavian, A., Azizpour, H., Maki, A., Sullivan, J., Ek, C. H. & Carlsson, S. (2015). Persistent Evidence of Local Image Properties in Generic ConvNets. In: Paulsen, Rasmus R., Pedersen, Kim S. (Ed.), Image Analysis: 19th Scandinavian Conference, SCIA 2015, Copenhagen, Denmark, June 15-17, 2015. Proceedings. Paper presented at Scandinavian Conference on Image Analysis, Copenhagen, Denmark, 15-17 June, 2015 (pp. 249-262). Springer Publishing Company
Persistent Evidence of Local Image Properties in Generic ConvNets
2015 (English). In: Image Analysis: 19th Scandinavian Conference, SCIA 2015, Copenhagen, Denmark, June 15-17, 2015. Proceedings / [ed] Paulsen, Rasmus R., Pedersen, Kim S., Springer Publishing Company, 2015, pp. 249-262. Conference paper, published paper (Refereed)
Abstract [en]

Supervised training of a convolutional network for object classification should make explicit any information related to the class of objects and disregard any auxiliary information associated with the capture of the image or the variation within the object class. Does this happen in practice? Although this seems to pertain to the very final layers in the network, if we look at earlier layers we find that this is not the case. Surprisingly, strong spatial information is implicit. This paper addresses this, in particular exploiting the image representation at the first fully connected layer, i.e. the global image descriptor which has recently been shown to be most effective in a range of visual recognition tasks. We empirically demonstrate evidence for the finding in the context of four different tasks: 2D landmark detection, 2D object keypoint prediction, estimation of the RGB values of the input image, and recovery of the semantic label of each pixel. We base our investigation on a simple framework with ridge regression common across these tasks, and show results which all support our insight. Such spatial information can be used for computing correspondence of landmarks to a good accuracy, and could potentially be useful for improving the training of convolutional nets for classification purposes.
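The probing methodology, a single ridge regression from a fixed descriptor to a spatial target, is simple enough to sketch end to end. Since reproducing ConvNet features is out of scope here, the snippet substitutes an invented synthetic descriptor (a random nonlinear projection of a 2D landmark position); only the closed-form ridge step mirrors the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a first-fully-connected-layer descriptor: a random nonlinear
# map of a landmark's (x, y) position plus a little noise. The question is
# whether a linear probe can read the position back out of the descriptor.
pos = rng.uniform(0, 1, size=(500, 2))           # ground-truth 2D landmarks
proj = rng.normal(size=(2, 64))
feats = np.tanh(pos @ proj) + 0.01 * rng.normal(size=(500, 64))

# Closed-form ridge regression: W = (X^T X + lam I)^-1 X^T Y
lam = 1e-3
X, Y = feats[:400], pos[:400]
W = np.linalg.solve(X.T @ X + lam * np.eye(64), X.T @ Y)

# Low held-out error means the spatial information survives in the features.
pred = feats[400:] @ W
err = np.abs(pred - pos[400:]).mean()
print(round(err, 3))
```

The same probe applied to real descriptors is what supports the paper's claim: if a linear map recovers landmark positions well, the spatial information was never discarded.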

Place, publisher, year, edition, pages
Springer Publishing Company, 2015
Series
Image Processing, Computer Vision, Pattern Recognition, and Graphics ; 9127
HSV category
Identifiers
urn:nbn:se:kth:diva-172140 (URN); 10.1007/978-3-319-19665-7_21 (DOI); 2-s2.0-84947982864 (Scopus ID)
Conference
Scandinavian Conference on Image Analysis, Copenhagen, Denmark, 15-17 June, 2015
Note

QC 20150828

Available from: 2015-08-13 Created: 2015-08-13 Last updated: 2024-03-15. Bibliographically checked
Song, D., Ek, C. H., Hübner, K. & Kragic, D. (2015). Task-Based Robot Grasp Planning Using Probabilistic Inference. IEEE Transactions on Robotics, 31(3), 546-561
Task-Based Robot Grasp Planning Using Probabilistic Inference
2015 (English). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 31, no. 3, pp. 546-561. Article in journal (Refereed). Published
Abstract [en]

Grasping and manipulating everyday objects in a goal-directed manner is an important ability of a service robot. The robot needs to reason about task requirements and ground these in the sensorimotor information. Grasping and interaction with objects are challenging in real-world scenarios, where sensorimotor uncertainty is prevalent. This paper presents a probabilistic framework for the representation and modeling of robot-grasping tasks. The framework consists of Gaussian mixture models for generic data discretization, and discrete Bayesian networks for encoding the probabilistic relations among various task-relevant variables, including object and action features as well as task constraints. We evaluate the framework using a grasp database generated in a simulated environment including a human and two robot hand models. The generative modeling approach allows the prediction of grasping tasks given uncertain sensory data, as well as object and grasp selection in a task-oriented manner. Furthermore, the graphical model framework provides insights into dependencies between variables and features relevant for object grasping.
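The first stage of the framework, Gaussian mixture models that discretize continuous data before it enters a discrete Bayesian network, is easy to illustrate. The snippet below uses scikit-learn on an invented one-dimensional "hand aperture" feature; the two task contexts and their values are toy assumptions, not data from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# A continuous grasp feature (here an invented "hand aperture" in cm),
# sampled under two task contexts with clearly different typical values.
hand_over = rng.normal(8.0, 0.5, size=(200, 1))  # wide grasps
pouring = rng.normal(3.0, 0.5, size=(200, 1))    # narrow grasps
X = np.vstack([hand_over, pouring])

# The GMM turns the continuous feature into a discrete variable: each
# mixture component becomes one state a discrete Bayesian network can use.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)

print(sorted(gmm.means_.ravel().round(1)))
```

In the full framework one such discretized variable exists per object, action, and constraint feature, and the Bayesian network then encodes the conditional dependencies among the discrete states.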

HSV category
Identifiers
urn:nbn:se:kth:diva-170982 (URN); 10.1109/TRO.2015.2409912 (DOI); 000356518700003; 2-s2.0-84926395738 (Scopus ID)
Note

QC 20150713

Available from: 2015-07-13 Created: 2015-07-13 Last updated: 2024-03-15. Bibliographically checked
Baisero, A., Pokorny, F. T., Kragic, D. & Ek, C. H. (2015). The path kernel: A novel kernel for sequential data. In: Ana Fred, Maria De Marsico (Ed.), Pattern Recognition: Applications and Methods : International Conference, ICPRAM 2013 Barcelona, Spain, February 15–18, 2013 Revised Selected Papers. Paper presented at 2nd International Conference on Pattern Recognition Applications and Methods, ICPRAM 2013; Barcelona; Spain; 15 February 2013 through 18 February 2013 (pp. 71-84). Springer Berlin/Heidelberg
The path kernel: A novel kernel for sequential data
2015 (English). In: Pattern Recognition: Applications and Methods : International Conference, ICPRAM 2013 Barcelona, Spain, February 15–18, 2013 Revised Selected Papers / [ed] Ana Fred, Maria De Marsico, Springer Berlin/Heidelberg, 2015, pp. 71-84. Conference paper, published paper (Refereed)
Abstract [en]

We define a novel kernel function for finite sequences of arbitrary length which we call the path kernel. We evaluate this kernel in a classification scenario using synthetic data sequences and show that our kernel can outperform state of the art sequential similarity measures. Furthermore, we find that, in our experiments, a clustering of data based on the path kernel results in much improved interpretability of such clusters compared to alternative approaches such as dynamic time warping or the global alignment kernel.
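The path kernel itself is not specified in this abstract, but one baseline it is compared against, dynamic time warping, fits in a dozen lines and shows what any sequential similarity measure must handle: sequences of different lengths tracing the same pattern. The example data below are invented.

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# The same sine traced at two different speeds, plus a flat outlier.
slow = np.sin(np.linspace(0, 2 * np.pi, 80))
fast = np.sin(np.linspace(0, 2 * np.pi, 40))
flat = np.zeros(40)

# Warping absorbs the speed difference, so the two sines end up far closer
# to each other than either is to the flat sequence.
print(dtw(slow, fast) < dtw(slow, flat))  # True
```

Note that DTW is a distance, not a positive-definite kernel, which is part of the motivation for kernels over alignment paths such as the one this paper proposes.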

Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2015
Series
Advances in Intelligent Systems and Computing, ISSN 2194-5357 ; 318
Keywords
Kernels, Sequences, Pattern recognition, Dynamic time warping, Global alignment, Interpretability, Kernel function, Similarity measure, State of the art, Classification (of information)
HSV category
Identifiers
urn:nbn:se:kth:diva-167388 (URN); 10.1007/978-3-319-12610-4_5 (DOI); 000364822300005; 2-s2.0-84914163004 (Scopus ID); 9783319126098 (ISBN)
Conference
2nd International Conference on Pattern Recognition Applications and Methods, ICPRAM 2013; Barcelona; Spain; 15 February 2013 through 18 February 2013
Note

QC 20150601

Available from: 2015-06-01 Created: 2015-05-22 Last updated: 2024-03-15. Bibliographically checked
Afkham, H. M., Ek, C. H. & Carlsson, S. (2014). A topological framework for training latent variable models. In: Proceedings - International Conference on Pattern Recognition: . Paper presented at 22nd International Conference on Pattern Recognition, ICPR 2014, 24 August 2014 through 28 August 2014 (pp. 2471-2476).
A topological framework for training latent variable models
2014 (English). In: Proceedings - International Conference on Pattern Recognition, 2014, pp. 2471-2476. Conference paper, published paper (Refereed)
Abstract [en]

We discuss the properties of a class of latent variable models that assumes each labeled sample is associated with a set of different features, with no prior knowledge of which feature is the most relevant feature to be used. Deformable-Part Models (DPM) can be seen as good examples of such models. These models are usually considered to be expensive to train and very sensitive to the initialization. In this paper, we focus on the learning of such models by introducing a topological framework and show how it is possible to both reduce the learning complexity and produce more robust decision boundaries. We will also argue how our framework can be used for producing robust decision boundaries without exploiting the dataset bias or relying on accurate annotations. To experimentally evaluate our method and compare with previously published frameworks, we focus on the problem of image classification with object localization. In this problem, the correct location of the objects is unknown, during both training and testing stages, and is considered as a latent variable.

Keywords
Pattern recognition, Topology, Deformable part models, Latent variable models, Learning complexity, Object localization, Prior knowledge, Relevant features, Robust decisions, Training and testing, Image classification
HSV category
Identifiers
urn:nbn:se:kth:diva-167941 (URN); 10.1109/ICPR.2014.427 (DOI); 000359818002099; 2-s2.0-84919941135 (Scopus ID); 9781479952083 (ISBN)
Conference
22nd International Conference on Pattern Recognition, ICPR 2014, 24 August 2014 through 28 August 2014
Note

QC 20150605

Available from: 2015-06-05 Created: 2015-05-22 Last updated: 2024-03-15. Bibliographically checked