On Spectral Properties of Gradient-Based Explanation Methods
Mehrpanah, Amir: KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0002-6193-7126
Englesson, Erik: KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0003-4535-2520
Azizpour, Hossein: KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science. ORCID iD: 0000-0001-5211-6388
2025 (English). In: Computer Vision – ECCV 2024, 18th European Conference, Proceedings. Springer Nature, 2025, p. 282-299. Conference paper, Published paper (Refereed)
Abstract [en]

Understanding the behavior of deep networks is crucial to increase our confidence in their results. Despite an extensive body of work on explaining their predictions, researchers have faced reliability issues, which can be attributed to insufficient formalism. In our research, we adopt novel probabilistic and spectral perspectives to formally analyze explanation methods. Our study reveals a pervasive spectral bias stemming from the use of gradients, and sheds light on some common design choices that have been discovered experimentally, in particular the use of squared gradients and input perturbation. We further characterize how the choice of perturbation hyperparameters in explanation methods, such as SmoothGrad, can lead to inconsistent explanations, and introduce two remedies based on our proposed formalism: (i) a mechanism to determine a standard perturbation scale, and (ii) an aggregation method which we call SpectralLens. Finally, we substantiate our theoretical results through quantitative evaluations.
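The perturbation hyperparameter the abstract refers to is the noise scale in SmoothGrad-style methods (Smilkov et al., 2017), which average (optionally squared) gradients over Gaussian-perturbed inputs. A minimal sketch on a toy analytic model, illustrating how different noise scales yield different attributions for the same input; all names here are illustrative, and this does not implement the paper's proposed remedies:

```python
import numpy as np

def smoothgrad(grad_fn, x, sigma, n_samples=50, squared=False, seed=0):
    """SmoothGrad: average (optionally squared) gradients over
    Gaussian-perturbed copies of the input x. The noise scale `sigma`
    is the perturbation hyperparameter whose choice the paper analyzes."""
    rng = np.random.default_rng(seed)
    acc = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        g = grad_fn(x + rng.normal(0.0, sigma, size=x.shape))
        acc += g**2 if squared else g
    return acc / n_samples

# Toy model f(x) = sum(sin(x)), whose exact gradient is cos(x).
grad_fn = lambda x: np.cos(x)
x = np.array([0.0, 1.0, 2.0])

# The same input produces noticeably different attribution maps
# depending on sigma, the inconsistency the paper's standard
# perturbation scale and SpectralLens aggregation aim to address.
attr_small = smoothgrad(grad_fn, x, sigma=0.1)
attr_large = smoothgrad(grad_fn, x, sigma=2.0)
```

For this toy model the small-sigma attribution stays close to the raw gradient cos(x), while a large sigma smooths it toward zero, since E[cos(x + n)] = cos(x)·exp(-sigma²/2) for Gaussian noise n.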

Place, publisher, year, edition, pages
Springer Nature, 2025. p. 282-299
Keywords [en]
Deep Neural Networks, Explainability, Gradient-based Explanation Methods, Probabilistic Machine Learning, Probabilistic Pixel Attribution Techniques, Spectral Analysis
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-357698
DOI: 10.1007/978-3-031-73021-4_17
ISI: 001416940200017
Scopus ID: 2-s2.0-85210488897
OAI: oai:DiVA.org:kth-357698
DiVA, id: diva2:1920805
Conference
18th European Conference on Computer Vision, ECCV 2024, Milan, Italy, September 29 – October 4, 2024
Note

Part of ISBN 978-3-031-73020-7

QC 20241213

Available from: 2024-12-12. Created: 2024-12-12. Last updated: 2025-03-17. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Mehrpanah, Amir; Englesson, Erik; Azizpour, Hossein
