Transferring from face recognition to face attribute prediction through adaptive selection of off-the-shelf CNN representations
KTH, School of Computer Science and Communication (CSC), Robotics, Perception and Learning, RPL. ORCID iD: 0000-0002-8673-0797
KTH, School of Computer Science and Communication (CSC), Robotics, Perception and Learning, RPL.
KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design, MID. ORCID iD: 0000-0003-3779-5647
2016 (English). In: 2016 23rd International Conference on Pattern Recognition, ICPR 2016, Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 2264-2269, article id 7899973. Conference paper (Refereed)
Abstract [en]

This paper addresses the problem of transferring CNNs pre-trained for face recognition to a face attribute prediction task. The typical way to transfer an off-the-shelf CNN to a novel task is to fine-tune the network on that task. As demonstrated by the state-of-the-art face attribute prediction approach, fine-tuning the high-level CNN hidden layers with labeled attribute data leads to significant performance improvements. In this paper, we tackle the same problem through a different approach: rather than fine-tuning an end-to-end network, we select face descriptors from off-the-shelf hierarchical CNN representations, adaptively choosing the representation used to recognize each attribute. Through this adaptive representation selection, and without any fine-tuning, our results still outperform the state-of-the-art face attribute prediction approach on the latest large-scale dataset, reducing the error rate by more than 20%. Moreover, through extensive empirical probes, we identify several key factors that are significant for achieving strong face attribute prediction performance. These results deepen and update our understanding of the nature of CNN features and how they can be better applied to novel transfer tasks.
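The core idea in the abstract — picking, per attribute, which layer of an off-the-shelf CNN provides the best representation, instead of fine-tuning — can be sketched as follows. This is an illustrative sketch only, not the authors' exact pipeline: the layer names, classifier choice (a linear logistic-regression probe), and random stand-in features are all assumptions for the demo.

```python
# Sketch of adaptive layer selection: for each face attribute, train a simple
# linear classifier on each layer's (frozen) features and keep the layer that
# validates best. Random arrays stand in for real CNN activations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
# Pretend activations from three hidden layers of a face-recognition CNN.
layers = {
    "conv4": rng.normal(size=(n, 64)),
    "conv5": rng.normal(size=(n, 64)),
    "fc6":   rng.normal(size=(n, 64)),
}
# Two synthetic binary attributes; "smiling" is made linearly predictable
# from conv5 so the selection has a correct answer, "glasses" is pure noise.
attributes = {
    "smiling": (layers["conv5"][:, 0] > 0).astype(int),
    "glasses": rng.integers(0, 2, size=n),
}

def best_layer_for(y):
    """Return (best layer name, per-layer validation accuracies) for labels y."""
    scores = {}
    for name, X in layers.items():
        Xtr, Xva, ytr, yva = train_test_split(
            X, y, test_size=0.5, random_state=0)
        clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
        scores[name] = clf.score(Xva, yva)
    return max(scores, key=scores.get), scores

for attr, y in attributes.items():
    layer, scores = best_layer_for(y)
    print(f"{attr}: selected layer = {layer}")
```

In this toy setup the probe for "smiling" selects conv5, because only that layer's features carry the signal; "glasses" has no signal anywhere, so its selection is arbitrary and all layers validate near chance. The same validation-driven selection applies with real activations extracted from a pre-trained network.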

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2016. p. 2264-2269, article id 7899973
Series
International Conference on Pattern Recognition, ISSN 1051-4651
National Category
Computer Engineering
Identifiers
URN: urn:nbn:se:kth:diva-215409
DOI: 10.1109/ICPR.2016.7899973
ISI: 000406771302043
Scopus ID: 2-s2.0-85019109911
ISBN: 978-1-5090-4847-2 (print)
OAI: oai:DiVA.org:kth-215409
DiVA, id: diva2:1148103
Conference
23rd International Conference on Pattern Recognition, ICPR 2016, Cancun Center, Cancun, Mexico, 4 December 2016 through 8 December 2016
Note

QC 20171010

Available from: 2017-10-10. Created: 2017-10-10. Last updated: 2018-01-13. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | Scopus

Search in DiVA

By author/editor
Zhong, Yang; Sullivan, Josephine; Li, Haibo
By organisation
Robotics, Perception and Learning, RPL; Media Technology and Interaction Design, MID
Computer Engineering
