The Preimage of Rectifier Network Activities
KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST). ORCID iD: 0000-0001-5211-6388
KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
2017 (English). In: International Conference on Learning Representations (ICLR), 2017. Conference paper, Published paper (Refereed)
Abstract [en]

The preimage of the activity at a certain level of a deep network is the set of inputs that result in the same node activity. For fully connected multilayer rectifier networks we demonstrate how to compute the preimages of activities at arbitrary levels from knowledge of the parameters of the network. If the preimage set of a certain activity in the network contains elements from more than one class, these classes have been irreversibly mixed. This implies that preimage sets, which are piecewise linear manifolds, are building blocks for describing the input manifolds of specific classes; ideally, all elements of a preimage should belong to the same class. We believe that knowing how to compute preimages will be valuable in understanding the efficiency displayed by deep learning networks and could potentially be used in designing more efficient training algorithms.
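
The preimage construction described in the abstract can be sketched for a single fully connected rectifier layer. The key observation is that for a ReLU layer, the preimage of an activity vector is cut out by linear constraints: active units pin the input to a hyperplane, while inactive units only impose a half-space inequality. The snippet below is a minimal illustrative sketch (the weight matrix, bias, and test inputs are hypothetical numbers, not from the paper):

```python
import numpy as np

def relu_layer(W, b, x):
    """Forward pass of one fully connected rectifier (ReLU) layer."""
    return np.maximum(W @ x + b, 0.0)

def in_preimage(W, b, x, y):
    """Check whether input x lies in the preimage of activity y.

    Active units (y_i > 0) require the equality W_i x + b_i = y_i;
    inactive units (y_i == 0) only require W_i x + b_i <= 0.
    Together these define a piecewise linear set of inputs.
    """
    z = W @ x + b
    active = y > 0
    return bool(np.allclose(z[active], y[active])
                and np.all(z[~active] <= 1e-9))

# Hypothetical 2-unit layer on 2-D inputs (illustrative numbers only).
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])
b = np.zeros(2)

x1 = np.array([2.0, -1.0])
x2 = np.array([2.0, -3.0])   # differs only along an inactive direction
y = relu_layer(W, b, x1)     # activity [2., 0.]

# Both inputs yield the same activity, so both lie in its preimage:
# the rectifier has irreversibly mixed x1 and x2.
assert in_preimage(W, b, x1, y)
assert in_preimage(W, b, x2, y)
```

If x1 and x2 belonged to different classes, the layer would have mixed those classes irreversibly, which is exactly the failure mode the abstract's class-purity condition on preimages rules out.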

Place, publisher, year, edition, pages
2017.
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:kth:diva-259164
Scopus ID: 2-s2.0-85071123889
OAI: oai:DiVA.org:kth-259164
DiVA, id: diva2:1350663
Conference
International Conference on Learning Representations (ICLR)
Note

QC 20190916

Available from: 2019-09-11 Created: 2019-09-11 Last updated: 2019-09-16 Bibliographically approved

Open Access in DiVA

fulltext (359 kB)
File information
File name: FULLTEXT01.pdf
File size: 359 kB
Checksum: SHA-512
e0989b5d434118f78b07f7fbef35e0a400aba8657efadcc1eb26950039f7e0b786313e2d48b79774a9df6bdad3fb76e6bd11ca80101cf0013f20e2a6a3dafbd6
Type: fulltext
Mimetype: application/pdf

Search in DiVA

By author/editor
Carlsson, Stefan; Azizpour, Hossein; Razavian, Ali Sharif; Sullivan, Josephine; Smith, Kevin
By organisation
Robotics, Perception and Learning, RPL; Computational Science and Technology (CST)
Computer Vision and Robotics (Autonomous Systems)
