The Preimage of Rectifier Network Activities
KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST). ORCID iD: 0000-0001-5211-6388
KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
2017 (English). In: International Conference on Learning Representations (ICLR), 2017. Conference paper, Published paper (Refereed)
Abstract [en]

The preimage of the activity at a certain level of a deep network is the set of inputs that result in the same node activity. For fully connected multilayer rectifier networks we demonstrate how to compute the preimages of activities at arbitrary levels from knowledge of the parameters of a deep rectifying network. If the preimage set of a certain activity in the network contains elements from more than one class, these classes have been irreversibly mixed. This implies that preimage sets, which are piecewise linear manifolds, are building blocks for describing the input manifolds of specific classes; ideally, all preimages should come from the same class. We believe that knowing how to compute preimages will be valuable for understanding the efficiency displayed by deep learning networks and could potentially be used to design more efficient training algorithms.
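The core idea in the abstract can be sketched for a single fully connected ReLU layer (an illustrative membership test under assumed names, not the paper's algorithm): given weights W, bias b, and an observed activity a = relu(Wx + b), an input x lies in the preimage of a exactly when Wx + b matches a on the active units (an affine constraint) and is non-positive on the inactive units (a half-space constraint), so the preimage is a piecewise linear (polyhedral) set.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def in_preimage(x, W, b, a, tol=1e-9):
    """Test whether input x maps to the given ReLU activity a.

    For active units (a_i > 0) we need W_i x + b_i == a_i (affine equality);
    for inactive units (a_i == 0) we need W_i x + b_i <= 0 (half-space).
    The preimage of a is the intersection of these constraints: an affine
    subspace cut by half-spaces, i.e. a piecewise linear set.
    """
    z = W @ x + b
    active = a > 0
    return bool(np.allclose(z[active], a[active], atol=tol)
                and np.all(z[~active] <= tol))

# Example: a 2-unit layer on 2-D inputs (hypothetical toy parameters).
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])
b = np.zeros(2)
x0 = np.array([1.0, -2.0])
a = relu(W @ x0 + b)  # activity [1, 0]

print(in_preimage(x0, W, b, a))                     # the original input: True
print(in_preimage(np.array([1.0, -5.0]), W, b, a))  # distinct input, same activity: True
print(in_preimage(np.array([2.0, -2.0]), W, b, a))  # different activity: False
```

The second input shows the non-trivial part of a preimage set: the inactive unit collapses an entire half-line of inputs onto the same activity, which is why inputs from different classes can end up irreversibly mixed.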

Place, publisher, year, edition, pages
2017.
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-259164
Scopus ID: 2-s2.0-85071123889
OAI: oai:DiVA.org:kth-259164
DiVA, id: diva2:1350663
Conference
International Conference on Learning Representations (ICLR)
Note

QC 20190916

Available from: 2019-09-11. Created: 2019-09-11. Last updated: 2019-09-16. Bibliographically approved.

Open Access in DiVA

fulltext (359 kB), 12 downloads
File information
File: FULLTEXT01.pdf
File size: 359 kB
Checksum (SHA-512):
e0989b5d434118f78b07f7fbef35e0a400aba8657efadcc1eb26950039f7e0b786313e2d48b79774a9df6bdad3fb76e6bd11ca80101cf0013f20e2a6a3dafbd6
Type: fulltext
Mimetype: application/pdf

Scopus

Authors: Carlsson, Stefan; Azizpour, Hossein; Razavian, Ali Sharif; Sullivan, Josephine; Smith, Kevin