KTH Publications (kth.se)
On the geometry of rectifier convolutional neural networks
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
KTH, School of Electrical Engineering and Computer Science (EECS); KTH, Centres, Science for Life Laboratory, SciLifeLab. ORCID iD: 0000-0001-5211-6388
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0003-0579-3372
2019 (English). In: Proceedings - 2019 International Conference on Computer Vision Workshop, ICCVW 2019, Institute of Electrical and Electronics Engineers Inc., 2019, p. 793-797. Conference paper, Published paper (Refereed).
Abstract [en]

While recent studies have shed light on the expressivity, complexity and compositionality of convolutional networks, the real inductive bias of the family of functions reachable by gradient descent on natural data is still unknown. By exploiting symmetries in the preactivation space of convolutional layers, we present preliminary empirical evidence of regularities in the preimage of trained rectifier networks, in terms of arrangements of polytopes, and relate it to the nonlinear transformations applied by the network to its input.
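The polytope structure the abstract refers to can be illustrated with a toy example (this is a generic sketch of the underlying idea, not the paper's method): a rectifier (ReLU) layer partitions its input space into convex polytopes, one per binary activation pattern of the hidden units, and the network is affine on each polytope. The weights below are random placeholders chosen only for illustration.

```python
import numpy as np

# Toy one-layer rectifier network: 3 hidden units over a 2-D input.
# Each hidden unit's hyperplane (W @ x + b = 0) cuts the plane; the cells
# of the resulting arrangement are the convex polytopes on which the
# network acts as a single affine map.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 2))
b = rng.standard_normal(3)

def activation_pattern(x):
    """Binary pattern of which ReLU units are active at input x."""
    return tuple((W @ x + b > 0).astype(int))

# Sample a grid and count the distinct activation patterns it hits,
# i.e. how many polytopes of the arrangement the sampled square meets.
xs = np.linspace(-3, 3, 60)
patterns = {activation_pattern(np.array([x1, x2])) for x1 in xs for x2 in xs}
# 3 hyperplanes in the plane bound at most 1 + 3 + 3 = 7 regions,
# so fewer than the 2**3 = 8 conceivable sign patterns are realizable.
print(len(patterns))
```

The gap between realizable patterns and all 2^n sign vectors is exactly the kind of geometric regularity (symmetry in preactivation space) that analyses of rectifier-network preimages exploit.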

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2019, p. 793-797
Keywords [en]
Convolutional networks, Deep learning, Geometry, Preimage, Understanding, Computer vision, Convolution, Gradient methods, Rectifying circuits, Compositionality, Gradient descent, Inductive bias, Non-linear transformations, Pre images, Convolutional neural networks
National Category
Robotics and automation
Identifiers
URN: urn:nbn:se:kth:diva-274163
DOI: 10.1109/ICCVW.2019.00106
ISI: 000554591600099
Scopus ID: 2-s2.0-85082492932
OAI: oai:DiVA.org:kth-274163
DiVA id: diva2:1444972
Conference
17th IEEE/CVF International Conference on Computer Vision Workshop, ICCVW 2019, 27-28 October 2019
Note

QC 20200622

Part of ISBN 9781728150239

Available from: 2020-06-22. Created: 2020-06-22. Last updated: 2025-02-09. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | Scopus

Authority records

Gamba, Matteo; Azizpour, Hossein; Carlsson, Stefan; Björkman, Mårten
