Convolutional Network Representation for Visual Recognition
Sharif Razavian, Ali
KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
2017 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Image representation is a key component in visual recognition systems. In a visual recognition problem, the model should be able to learn and infer the presence of certain visual semantics in the image. It is therefore important for the model to represent the input image in a way that the semantics of interest can be inferred easily and reliably. This thesis, written as a compilation of publications, examines the representation of Convolutional Networks (ConvNets) in visual recognition problems from an empirical perspective. A Convolutional Network is a special class of Neural Network with a hierarchical structure in which each layer's output (except for the last layer) becomes the input of the next. ConvNets have been shown to be powerful tools for learning a generic representation of an image. In this body of work, we first show that this is indeed the case: a ConvNet representation with a simple classifier can outperform highly tuned pipelines based on hand-crafted features. To be precise, we first train a ConvNet on a large dataset; then, for every image in another task with a small dataset, we feed the image forward through the ConvNet and take the network's activations at a certain layer as the image representation. Transferring the knowledge from the large dataset (source task) to the small dataset (target task) proved effective and outperformed baselines on a variety of visual recognition tasks. We also evaluated the presence of spatial visual semantics in the ConvNet representation and observed that the network retains significant spatial information, despite never having been explicitly trained to preserve low-level semantics. We then investigated the factors that affect the transferability of these representations. Studying various factors across a diverse set of visual recognition tasks, we found a consistent correlation between the effect of those factors and the similarity of the target task to the source task. This intuition, together with the experimental results, provides a guideline for improving the performance of visual recognition tasks using ConvNet features. Finally, we addressed the task of visual instance retrieval as a specific example of how these simple intuitions can massively increase performance on the target task.
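To make the transfer pipeline described above concrete, here is a minimal sketch in PyTorch (not the thesis's original code; the backbone, layer choice, and class count are illustrative assumptions): a ConvNet pretrained on a large source dataset serves as a fixed feature extractor, and only a simple linear classifier is trained on the small target dataset.

```python
import torch
from torch import nn
import torchvision.models as models
import torchvision.transforms as transforms

# Pretrained source network (ImageNet); the classifier head is replaced with
# an identity so the forward pass returns the penultimate-layer activations.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Identity()   # expose the 2048-d pooled features
backbone.eval()

# Standard ImageNet preprocessing matching the pretrained weights.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(pil_images):
    """Feed a batch of PIL images forward; return the fixed representation."""
    batch = torch.stack([preprocess(img) for img in pil_images])
    return backbone(batch)    # shape: (N, 2048)

# On the target task, only this linear classifier is trained on the features.
num_target_classes = 10       # hypothetical target task
classifier = nn.Linear(2048, num_target_classes)
```

The design point the thesis stresses is that the backbone's weights stay frozen; the target task only sees the extracted activations.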

Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2017, p. 130
Series
TRITA-CSC-A, ISSN 1653-5723; 2017:01
Keywords [en]
Convolutional Network, Visual Recognition, Transfer Learning
National Category
Robotics
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-197919
ISBN: 978-91-7729-213-5 (print)
OAI: oai:DiVA.org:kth-197919
DiVA id: diva2:1054887
Public defence
2017-01-13, F3, Lindstedtsvägen 26, Stockholm, 10:00 (English)
Note

QC 20161209

Available from: 2016-12-09 Created: 2016-12-09 Last updated: 2016-12-23. Bibliographically approved.
List of papers
1. CNN features off-the-shelf: An Astounding Baseline for Recognition
2014 (English) In: Proceedings of CVPR 2014, 2014. Conference paper, Published paper (Refereed)
Abstract [en]

Recent results indicate that the generic descriptors extracted from convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the OverFeat network, which was trained to perform object classification on ILSVRC13. We use features extracted from the OverFeat network as a generic image representation to tackle a diverse range of recognition tasks: object image classification, scene recognition, fine-grained recognition, attribute detection, and image retrieval, applied to a diverse set of datasets. We selected these tasks and datasets because they gradually move further away from the original task and data the OverFeat network was trained to solve. Astonishingly, we report consistently superior results compared to highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval, it consistently outperforms methods with a low memory footprint, except on the sculptures dataset. The results are achieved using a linear SVM classifier (or L2 distance in the case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques, e.g., jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks.
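A hedged illustration of the evaluation protocol described above: a linear SVM on fixed 4096-d ConvNet features for classification, and plain L2 distance on the same features for retrieval. Feature extraction itself (OverFeat in the paper) is assumed to have been done already; the arrays below are random placeholders standing in for real features and labels.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import normalize

rng = np.random.default_rng(0)
X_train = rng.standard_normal((100, 4096))   # placeholder 4096-d features
y_train = rng.integers(0, 5, size=100)       # placeholder class labels
X_test = rng.standard_normal((20, 4096))

# Classification: a linear SVM on the fixed representation.
svm = LinearSVC(C=1.0).fit(X_train, y_train)
predictions = svm.predict(X_test)

# Retrieval: rank database images by L2 distance between normalized features.
query, database = normalize(X_test[:1]), normalize(X_train)
ranking = np.argsort(np.linalg.norm(database - query, axis=1))
```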

National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-149178 (URN)
10.1109/CVPRW.2014.131 (DOI)
000349552300079 (ISI)
2-s2.0-84908537903 (Scopus ID)
Conference
Computer Vision and Pattern Recognition (CVPR) 2014, DeepVision workshop, June 28, 2014, Columbus, Ohio
Note

Best Paper Runner-up Award.

QC 20140825

Available from: 2014-08-16 Created: 2014-08-16 Last updated: 2018-01-11. Bibliographically approved.
2. Persistent Evidence of Local Image Properties in Generic ConvNets
2015 (English) In: Image Analysis: 19th Scandinavian Conference, SCIA 2015, Copenhagen, Denmark, June 15-17, 2015. Proceedings / [ed] Paulsen, Rasmus R., Pedersen, Kim S., Springer Publishing Company, 2015, p. 249-262. Conference paper, Published paper (Refereed)
Abstract [en]

Supervised training of a convolutional network for object classification should make explicit any information related to the class of objects and disregard any auxiliary information associated with the capture of the image or the variation within the object class. Does this happen in practice? Although this seems to pertain to the very final layers in the network, if we look at earlier layers we find that this is not the case. Surprisingly, strong spatial information is implicitly present. This paper addresses this, in particular by exploiting the image representation at the first fully connected layer, i.e. the global image descriptor that has recently been shown to be most effective in a range of visual recognition tasks. We empirically demonstrate evidence for this finding in the context of four different tasks: 2D landmark detection, 2D object keypoint prediction, estimation of the RGB values of the input image, and recovery of the semantic label of each pixel. We base our investigation on a simple framework with ridge regression common across these tasks, and show results that all support our insight. Such spatial information can be used for computing the correspondence of landmarks to good accuracy, and should potentially be useful for improving the training of convolutional nets for classification purposes.
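A minimal sketch of the kind of probing framework the paper describes (the exact setup, data, and regularization differ from the paper's): a ridge regression maps the global descriptor from the first fully connected layer to a spatial target such as 2D landmark coordinates. Placeholder arrays stand in for real activations and annotations.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
features = rng.standard_normal((500, 4096))  # placeholder fc1 activations
landmarks = rng.standard_normal((500, 10))   # placeholder (x, y) of 5 landmarks

# Fit the probe on one split, evaluate on the other; low prediction error
# would indicate the global descriptor still encodes spatial layout even
# though classification training never asked it to.
probe = Ridge(alpha=1.0).fit(features[:400], landmarks[:400])
predicted = probe.predict(features[400:])
error = np.mean((predicted - landmarks[400:]) ** 2)
```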

Place, publisher, year, edition, pages
Springer Publishing Company, 2015
Series
Image Processing, Computer Vision, Pattern Recognition, and Graphics ; 9127
National Category
Computer Systems
Identifiers
urn:nbn:se:kth:diva-172140 (URN)
10.1007/978-3-319-19665-7_21 (DOI)
2-s2.0-84947982864 (Scopus ID)
Conference
Scandinavian Conference on Image Analysis, Copenhagen, Denmark, 15-17 June, 2015
Note

QC 20150828

Available from: 2015-08-13 Created: 2015-08-13 Last updated: 2016-12-09. Bibliographically approved.
3. Factors of Transferability for a Generic ConvNet Representation
2016 (English) In: IEEE Transactions on Pattern Analysis and Machine Intelligence, ISSN 0162-8828, E-ISSN 1939-3539, Vol. 38, no 9, p. 1790-1802, article id 7328311. Article in journal (Refereed), Published
Abstract [en]

Evidence is mounting that Convolutional Networks (ConvNets) are the most effective representation learning method for visual recognition tasks. In the common scenario, a ConvNet is trained on a large labeled dataset (source) and the feed-forward unit activations of the trained network, at a certain layer, are used as a generic representation of an input image for a task with a relatively smaller training set (target). Recent studies have shown this form of representation transfer to be suitable for a wide range of target visual recognition tasks. This paper introduces and investigates several factors affecting the transferability of such representations. These include parameters of the source ConvNet's training, such as its architecture and the distribution of the training data, as well as parameters of feature extraction, such as the choice of layer in the trained ConvNet and dimensionality reduction. By optimizing these factors, we show that significant improvements can be achieved on 17 diverse visual recognition tasks. We further show that these tasks can be categorically ordered by their similarity to the source task, such that performance on the tasks correlates with their similarity to the source task with respect to the proposed factors.
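An illustrative sketch of two of the feature-extraction factors named above, layer choice and dimensionality reduction (the layer names, dimensions, and settings below are hypothetical, not the paper's optimal values): activations from several candidate layers are projected to a common lower dimension so target-task performance can be compared across settings.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Suppose activations were extracted at several candidate layers.
layer_features = {
    "conv5": rng.standard_normal((200, 9216)),
    "fc6": rng.standard_normal((200, 4096)),
    "fc7": rng.standard_normal((200, 4096)),
}

# Dimensionality reduction is one tunable factor: project each candidate
# representation to a common lower dimension, then compare target-task
# performance across (layer, dimension) settings.
reduced = {layer: PCA(n_components=128).fit_transform(feats)
           for layer, feats in layer_features.items()}
```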

Place, publisher, year, edition, pages
IEEE Computer Society Digital Library, 2016
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-177033 (URN)
10.1109/TPAMI.2015.2500224 (DOI)
000381432700006 (ISI)
2-s2.0-84981266620 (Scopus ID)
Note

QC 20161208

Available from: 2015-11-13 Created: 2015-11-13 Last updated: 2018-01-10. Bibliographically approved.
4. Visual instance retrieval with deep convolutional networks
2016 (English) In: ITE Transactions on Media Technology and Applications, ISSN 2186-7364, Vol. 4, no 3, p. 251-258. Article in journal (Refereed), Published
Abstract [en]

This paper provides an extensive study on the suitability of image representations based on convolutional networks (ConvNets) for the task of visual instance retrieval. Besides the choice of convolutional layers, we present an efficient pipeline exploiting multi-scale schemes to extract local features, in particular taking geometric invariance explicitly into account, i.e. positions, scales, and spatial consistency. In our experiments on five standard image retrieval datasets, we demonstrate that generic ConvNet image representations can outperform other state-of-the-art methods if they are extracted appropriately.
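A simplified sketch of a multi-scale scheme in the spirit described above (the paper's exact pipeline differs; the function names and scale set are assumptions): features are extracted at several scales of each image, L2-normalized, and a database image is scored by the best-matching feature per query scale.

```python
import numpy as np

def multiscale_features(extract, image, scales=(1.0, 0.75, 0.5)):
    """Extract one L2-normalized feature vector per scale.
    `extract` is any ConvNet feature extractor returning a 1-D array
    (e.g. the one sketched earlier); `image` is a PIL-style image."""
    feats = []
    for s in scales:
        w, h = image.size
        f = extract(image.resize((int(w * s), int(h * s))))
        feats.append(f / np.linalg.norm(f))
    return np.stack(feats)                    # shape: (num_scales, dim)

def retrieval_score(query_feats, db_feats):
    """Spatially tolerant matching: for each query scale, take the best
    cosine similarity over the database image's scales, then average."""
    sims = query_feats @ db_feats.T           # (S_query, S_db)
    return sims.max(axis=1).mean()
```

Database images would be ranked by `retrieval_score` against the query, which tolerates scale and position mismatch without any retraining of the network.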

Place, publisher, year, edition, pages
Institute of Image Information and Television Engineers, 2016
Keywords
Convolutional network, Learning representation, Multi-resolution search, Visual instance retrieval
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:kth:diva-195472 (URN)
2-s2.0-84979503481 (Scopus ID)
Note

QC 20161125

Available from: 2016-11-25 Created: 2016-11-03 Last updated: 2018-01-13. Bibliographically approved.

Open Access in DiVA
fulltext: FULLTEXT02.pdf (application/pdf, 2157 kB)
Checksum (SHA-512): dba5343d1d3369ce520e55ac7e37144e38e88c632a68a0fbf729d3b952af537cedf47a7ccec6baddd871fdcdb424de6cd0c4877adeae8337b3d2eb0f262615e6
