Using Richer Models for Articulated Pose Estimation of Footballers
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. (Computer Vision). ORCID iD: 0000-0003-4181-2753
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. (Computer Vision)
2012 (English). In: Proceedings of the British Machine Vision Conference 2012, 2012, pp. 6.1-6.10. Conference paper, published paper (refereed)
Abstract [en]

We present a fully automatic procedure for reconstructing the pose of a person in 3D from images taken from multiple views. We demonstrate a novel approach for learning more complex models using SVM-Rank, to reorder a set of high-scoring configurations. The new model can in many cases resolve the problem of double counting of limbs, which happens often in pictorial-structure-based models. We address the problem of flipping ambiguity to find the correct correspondences of 2D predictions across all views. We obtain improvements for 2D prediction over state-of-the-art methods on our dataset. We show that the results in many cases are good enough for a fully automatic 3D reconstruction with uncalibrated cameras.
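The reranking idea in the abstract can be illustrated with a minimal sketch. All names, features, and weights below are hypothetical toy values, not the paper's implementation: a set of candidate pose configurations is rescored with a learned linear weight vector (SVM-Rank style) and reordered by that score.

```python
import numpy as np

def rerank_candidates(candidates, features, w):
    """Reorder candidate pose configurations by a learned linear
    ranking score (SVM-Rank style): higher w . f(c) ranks first.

    candidates : list of configuration identifiers
    features   : (n_candidates, n_features) feature matrix
    w          : learned weight vector of length n_features
    """
    scores = features @ w               # linear ranking score per candidate
    order = np.argsort(-scores)         # descending by score
    return [candidates[i] for i in order]

# Toy example: three candidate configurations with 2-dim features.
candidates = ["pose_a", "pose_b", "pose_c"]
features = np.array([[0.2, 1.0],
                     [0.9, 0.1],
                     [0.5, 0.5]])
w = np.array([1.0, 0.0])  # hypothetical learned weight vector
reranked = rerank_candidates(candidates, features, w)
```

In practice the weight vector would be trained from pairwise preferences over candidate configurations; the sketch only shows the inference-time reordering step.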

Place, publisher, year, edition, pages
2012. 6.1-6.10 p.
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:kth:diva-104558
DOI: 10.5244/C.26.6
ISI: 000346356200003
Scopus ID: 2-s2.0-84898491407
ISBN: 1-901725-46-4 (print)
OAI: oai:DiVA.org:kth-104558
DiVA: diva2:565039
Conference
British Machine Vision Conference, Surrey, 3-7 September 2012
Note

QC 20121114

Available from: 2012-11-14. Created: 2012-11-05. Last updated: 2015-10-06. Bibliographically approved.
In thesis
1. Correspondence Estimation in Human Face and Posture Images
2014 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Many computer vision tasks, such as object detection, pose estimation, and alignment, are directly related to the estimation of correspondences over instances of an object class. Other tasks, such as image classification and verification, if not completely solved, can largely benefit from correspondence estimation. This thesis presents practical approaches for tackling the correspondence estimation problem with an emphasis on deformable objects. The different methods presented in this thesis vary greatly in their details, but they all use a combination of generative and discriminative modeling to estimate the correspondences from input images in an efficient manner. While the methods described in this work are generic and can be applied to any object, two classes of objects of high importance, namely the human body and faces, are the subjects of our experimentation.

When dealing with the human body, we are mostly interested in estimating a sparse set of landmarks; specifically, we are interested in locating the body joints. We use pictorial structures to model the articulation of the body parts generatively and learn efficient discriminative models to localize the parts in the image. This is a common approach explored by many previous works. We further extend this hybrid approach by introducing higher-order terms to deal with the double-counting problem, and we provide an algorithm for solving the resulting non-convex problem efficiently. In another work we explore the area of multi-view pose estimation, where we have multiple calibrated cameras and are interested in determining the pose of a person in 3D by aggregating 2D information. This is done efficiently by discretizing the 3D search space and using the 3D pictorial structures model to perform the inference.

In contrast to the human body, faces have a much more rigid structure, and it is relatively easy to detect the major parts of the face such as the eyes, nose, and mouth; but performing dense correspondence estimation on faces under various poses and lighting conditions is still challenging. In a first work we deal with this variation by partitioning the face into multiple parts and learning separate regressors for each part. In another work we take a fully discriminative approach and learn a global regressor from image to landmarks, but to deal with the insufficiency of training data we augment it with a large number of synthetic images. While we have shown strong performance on the standard face datasets for correspondence estimation, in many scenarios the RGB signal is distorted by poor lighting conditions and becomes almost unusable. This problem is addressed in another work, where we explore the use of the depth signal for dense correspondence estimation. Here again, a hybrid generative/discriminative approach is used to perform accurate correspondence estimation in real time.
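The pictorial-structures inference mentioned in the abstract can be sketched as a dynamic program over a chain of parts, minimizing appearance (unary) plus deformation (pairwise) costs. This is a generic toy illustration under invented costs, not the thesis's actual model or implementation:

```python
import numpy as np

def chain_ps_inference(unary, pairwise):
    """Minimal pictorial-structures-style inference on a chain of parts.

    unary    : (P, L) array; unary[p, l] is the appearance cost of
               placing part p at discrete location l.
    pairwise : (P-1, L, L) array; pairwise[p, i, j] is the deformation
               cost between part p at location i and part p+1 at j.
    Returns the minimum-cost location index for each part (Viterbi DP).
    """
    P, L = unary.shape
    cost = unary[0].copy()
    back = np.zeros((P, L), dtype=int)
    for p in range(1, P):
        total = cost[:, None] + pairwise[p - 1]   # shape (L_prev, L)
        back[p] = np.argmin(total, axis=0)        # best predecessor per location
        cost = unary[p] + np.min(total, axis=0)
    # Backtrack the best configuration from the cheapest final location.
    locs = [int(np.argmin(cost))]
    for p in range(P - 1, 0, -1):
        locs.append(int(back[p, locs[-1]]))
    return locs[::-1]

# Toy example: two parts, three discrete locations each.
unary = np.array([[0., 1., 2.],       # part 0 appearance costs
                  [2., 1., 0.]])      # part 1 appearance costs
pairwise = np.array([[[0., 1., 2.],   # |l0 - l1| deformation cost
                      [1., 0., 1.],
                      [2., 1., 0.]]])
best = chain_ps_inference(unary, pairwise)
```

For tree-structured part models the same dynamic program runs over the tree via message passing from leaves to root; the chain case above keeps the sketch short.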

Place, publisher, year, edition, pages
Stockholm, Sweden: KTH Royal Institute of Technology, 2014. vii, 32 p.
Series
TRITA-CSC-A, ISSN 1653-5723 ; 2014:14
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-150115
ISBN: 978-91-7595-261-1
Public defence
2014-10-10, Kollegiesalen, Brinellvägen 8, KTH, Stockholm, 10:00 (English)
Note

QC 20140919

Available from: 2014-09-19. Created: 2014-08-29. Last updated: 2014-09-19. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text
Scopus
Paper on conference website

Authority records

Kazemi, Vahid

Search in DiVA

By author/editor
Kazemi, Vahid
Sullivan, Josephine
By organisation
Computer Vision and Active Perception, CVAP
Computer Vision and Robotics (Autonomous Systems)

Search outside of DiVA

Google
Google Scholar
