Mixture component identification and learning for visual recognition
Aghazadeh, Omid; Azizpour, Hossein; Sullivan, Josephine; Carlsson, Stefan
All authors: KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP (Computer Vision Group)
ORCID iD: 0000-0001-5211-6388
2012 (English). In: Computer Vision – ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, October 7-13, 2012, Proceedings, Part VI. Springer, 2012, pp. 115-128. Conference paper, Published paper (Refereed).
Abstract [en]

The non-linear decision boundary between object and background classes - due to large intra-class variations - needs to be modelled by any classifier wishing to achieve good results. While a mixture of linear classifiers is capable of modelling this non-linearity, learning this mixture from weakly annotated data is non-trivial and is the paper's focus. Our approach is to identify the modes in the distribution of our positive examples by clustering, and to utilize this clustering in a latent SVM formulation to learn the mixture model. The clustering relies on a robust measure of visual similarity which suppresses uninformative clutter by using a novel representation based on the exemplar SVM. This subtle clustering of the data leads to learning better mixture models, as is demonstrated via extensive evaluations on Pascal VOC 2007. The final classifier, using a HOG representation of the global image patch, achieves performance comparable to the state-of-the-art while being more efficient at detection time.
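The clustering-then-latent-SVM pipeline above hinges on a hard "max over components" step: each positive example is (re)assigned to the linear component that scores it highest, after which each component is retrained on its own cluster. A minimal sketch of that assignment step, with invented toy weights and features (the paper itself scores HOG representations of image patches, not 2-D points):

```python
# Toy sketch of the hard latent step in a mixture of linear classifiers.
# Weights and "positives" below are invented for illustration only.

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def assign_to_components(positives, components):
    """Return, for each positive example, the index of its best-scoring component."""
    return [max(range(len(components)), key=lambda k: dot(components[k], x))
            for x in positives]

# Two hand-made linear components and four 2-D "positives".
components = [[1.0, 0.0],   # responds to the first feature
              [0.0, 1.0]]   # responds to the second feature
positives = [[2.0, 0.1], [1.5, 0.2], [0.1, 3.0], [0.3, 2.5]]

print(assign_to_components(positives, components))  # → [0, 0, 1, 1]
```

In the paper's actual formulation this assignment is a latent variable inside an SVM objective, and the initial grouping comes from the exemplar-SVM-based clustering rather than from arbitrary seeds.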

Place, publisher, year, edition, pages
Springer, 2012. 115-128 p.
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 7577
Keyword [en]
Decision boundary, Detection time, Image patches, Intra-class variation, Linear classifiers, Mixture components, Mixture model, Non-Linearity, Non-trivial, Positive examples, Visual recognition, Visual similarity, Weakly annotated data
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:kth:diva-106987
DOI: 10.1007/978-3-642-33783-3_9
ISI: 000342828800009
Scopus ID: 2-s2.0-84867892975
ISBN: 978-364233782-6 (print)
OAI: oai:DiVA.org:kth-106987
DiVA: diva2:574329
Conference
12th European Conference on Computer Vision, ECCV 2012, Florence, Italy, 7-13 October 2012
Funder
ICT - The Next Generation
Note

QC 20121207

Available from: 2012-12-05. Created: 2012-12-05. Last updated: 2016-09-08. Bibliographically approved.
In thesis
1. Data Driven Visual Recognition
2014 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

This thesis is mostly about supervised visual recognition problems. Based on a general definition of categories, the contents are divided into two parts: one that models categories and one that is category-free. We are interested in data-driven solutions to both kinds of problems.

In the category-free part, we study novelty detection in temporal and spatial domains as a category-free recognition problem. Using data driven models, we demonstrate that based on a few reference exemplars, our methods are able to detect novelties in ego-motions of people, and changes in the static environments surrounding them.

In the category-level part, we study object recognition. We consider both object category classification and localization, and propose scalable data-driven approaches to both problems. A mixture of parametric classifiers, initialized with a sophisticated clustering of the training data, is demonstrated to adapt to the data better than various baselines, such as the same model initialized with less carefully designed procedures. A nonparametric large-margin classifier is introduced and demonstrated to have a multitude of advantages over its competitors: lower training and testing time costs, the ability to use indefinite/invariant and deformable similarity measures, and adaptive complexity are the main features of the proposed model.

We also propose a rather realistic model of recognition problems, which quantifies the interplay between representations, classifiers, and recognition performances. Based on data-describing measures which are aggregates of pairwise similarities of the training data, our model characterizes and describes the distributions of training exemplars. The measures are shown to capture many aspects of the difficulty of categorization problems and correlate significantly to the observed recognition performances. Utilizing these measures, the model predicts the performance of particular classifiers on distributions similar to the training data. These predictions, when compared to the test performance of the classifiers on the test sets, are reasonably accurate.
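One concrete instance of such a data-describing measure, sketched here as an assumption rather than the thesis's exact formulation, is the gap between mean within-class and mean between-class similarity over pairs of training examples; a larger gap suggests an easier categorization problem:

```python
# Hedged sketch: aggregate pairwise similarities of the training data into a
# single difficulty measure. Cosine similarity and the toy data are invented
# for this illustration.

def cosine(a, b):
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def similarity_gap(examples, labels):
    """Mean within-class similarity minus mean between-class similarity."""
    within, between = [], []
    for i in range(len(examples)):
        for j in range(i + 1, len(examples)):
            s = cosine(examples[i], examples[j])
            (within if labels[i] == labels[j] else between).append(s)
    return sum(within) / len(within) - sum(between) / len(between)

# A well-separated toy problem yields a large positive gap.
X = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
y = [0, 0, 1, 1]
print(similarity_gap(X, y) > 0.5)  # → True
```

Measures of this kind, aggregated over a training set, are what the model above correlates with observed recognition performance.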

We discuss various aspects of visual recognition problems: what is the interplay between representations and classification tasks, how can different models better adapt to the training data, etc. We describe and analyze the aforementioned methods that are designed to tackle different visual recognition problems, but share one common characteristic: being data driven.

Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2014. ix, 36 p.
Keyword
Visual Recognition, Data Driven, Supervised Learning, Mixture Models, Non-Parametric Models, Category Recognition, Novelty Detection
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:kth:diva-145865
ISBN: 978-91-7595-197-3
Public defence
2014-06-12, F3, Lindstedtsvägen 26, KTH, Stockholm, 14:00 (English)
Opponent
Supervisors
Note

QC 20140604

Available from: 2014-06-04. Created: 2014-06-02. Last updated: 2017-02-22. Bibliographically approved.
2. Visual Representations and Models: From Latent SVM to Deep Learning
2016 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Two important components of a visual recognition system are the representation and the model. Both involve selecting and learning the features that are indicative for recognition and discarding those that are uninformative. This thesis, in its general form, proposes different techniques within the frameworks of two learning systems for representation and modeling: latent support vector machines (latent SVMs) and deep learning.

First, we propose various approaches to group the positive samples into clusters of visually similar instances. Given a fixed representation, the sampled space of the positive distribution is usually structured. The proposed clustering techniques include a novel similarity measure based on exemplar learning, an approach for using additional annotation, and an augmentation of the latent SVM that automatically finds clusters whose members can be reliably distinguished from the background class.

In another effort, a strongly supervised DPM is suggested to study how these models can benefit from privileged information. The extra information comes in the form of semantic part annotations (i.e., their presence and location), which are used to constrain the DPM's latent variables during, or prior to, the optimization of the latent SVM. Its effectiveness is demonstrated on the task of animal detection.

Finally, we generalize the formulation of discriminative latent variable models, including DPMs, to incorporate a new set of latent variables representing the structure or properties of negative samples; we therefore term them negative latent variables. We show that this generalization affects state-of-the-art techniques and helps visual recognition by explicitly searching for counter-evidence of an object's presence.
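As a hedged sketch of the scoring rule this implies, the usual max over latent placements can be offset by the best-scoring counter-evidence placement; all names, weights, and placements below are invented for illustration and are not the thesis's exact model:

```python
# Toy scoring with negative latent variables: positive evidence (best latent
# placement) minus the strongest explicit counter-evidence.

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def latent_score(w, placements):
    """Standard latent scoring: best placement under weight vector w."""
    return max(dot(w, z) for z in placements)

def score_with_negative_latents(w_pos, w_neg, pos_placements, neg_placements):
    """Positive evidence minus the strongest counter-evidence placement."""
    return latent_score(w_pos, pos_placements) - latent_score(w_neg, neg_placements)

w_pos, w_neg = [1.0, 1.0], [1.0, 0.0]
pos_placements = [[1.0, 2.0], [2.0, 2.0]]   # candidate object placements
neg_placements = [[1.0, 0.0], [3.0, 0.0]]   # candidate counter-evidence
print(score_with_negative_latents(w_pos, w_neg,
                                  pos_placements, neg_placements))  # → 1.0
```

Strong counter-evidence thus lowers the final detection score even when some positive placement scores well.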

Following the resurgence of deep networks, the last works of this thesis focus on deep learning in order to produce a generic representation for visual recognition. A convolutional network (ConvNet) is trained on ImageNet, a large annotated image classification dataset with roughly 1.3 million images. The activations at each layer of the trained ConvNet can then be treated as the representation of an input image. We show that such a representation is surprisingly effective for various recognition tasks, making it clearly superior to all the handcrafted features previously used in visual recognition (such as the HOG features in our first works on DPMs). We further investigate how this representation can be improved for a given task, and propose various factors, applied before or after the training of the representation, that can improve the efficacy of the ConvNet representation. These factors are analyzed on 16 datasets from various subfields of visual recognition.
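The "activations as representation" idea can be illustrated with a toy stand-in for a pretrained network: keep a hidden layer's output, rather than the final classification output, as the image's feature vector. All weights and shapes here are invented for the sketch; no real ConvNet or ImageNet training is involved:

```python
# Toy illustration of extracting intermediate activations as a generic feature
# vector, standing in for the activations of a pretrained ConvNet layer.

def relu(v):
    return [max(0.0, x) for x in v]

def layer(x, weights):
    """One dense layer with ReLU: each row of `weights` produces one unit."""
    return relu([sum(w * xi for w, xi in zip(row, x)) for row in weights])

def extract_representation(image_vec, hidden_weights):
    """Treat the hidden-layer activations as the representation of the input."""
    return layer(image_vec, hidden_weights)

hidden_weights = [[1.0, -1.0], [0.5, 0.5], [-1.0, 1.0]]  # invented, fixed
rep = extract_representation([2.0, 1.0], hidden_weights)
print(rep)  # → [1.0, 1.5, 0.0]
```

In the thesis's setting, a simple classifier (e.g. a linear SVM) trained on such fixed representations is what transfers across the 16 evaluated datasets.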

Place, publisher, year, edition, pages
Stockholm, Sweden: KTH Royal Institute of Technology, 2016. 172 p.
Series
TRITA-CSC-A, ISSN 1653-5723 ; 21
Keyword
Computer Vision, Machine Learning, Artificial Intelligence, Deep Learning, Learning Representation, Deformable Part Models, Discriminative Latent Variable Models, Convolutional Networks, Object Recognition, Object Detection
National Category
Electrical Engineering, Electronic Engineering, Information Engineering; Computer Systems
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-192289
ISBN: 978-91-7729-110-7
Public defence
2016-09-27, Kollegiesalen, Brinellvägen 8, KTH-huset, våningsplan 4, KTH Campus, Stockholm, 15:26 (English)
Opponent
Supervisors
Note

QC 20160908

Available from: 2016-09-08. Created: 2016-09-08. Last updated: 2016-09-09. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text; Scopus

Search in DiVA

By author/editor
Aghazadeh, Omid; Azizpour, Hossein; Sullivan, Josephine; Carlsson, Stefan
By organisation
Computer Vision and Active Perception, CVAP
Computer Vision and Robotics (Autonomous Systems)
