Adversarial Robustness in Deep Learning: Attacks on Fragile Neurons
Univ Reading, Reading, Berks, England.
KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.). ORCID iD: 0000-0001-6306-6777
Univ Catania, Catania, Italy.; Univ Cambridge, Cambridge, England.
Univ Reading, Reading, Berks, England.
2021 (English). In: Artificial Neural Networks and Machine Learning - ICANN 2021, Pt I / [ed] Farkas, I.; Masulli, P.; Otte, S.; Wermter, S., Springer Nature, 2021, Vol. 12891, p. 16-28. Conference paper, Published paper (Refereed)
Abstract [en]

We identify fragile and robust neurons of deep learning architectures using nodal dropouts of the first convolutional layer. Using an adversarial targeting algorithm, we correlate these neurons with the distribution of adversarial attacks on the network. Adversarial robustness of neural networks has gained significant attention in recent years and highlights an intrinsic weakness of deep learning networks against carefully constructed distortions applied to input images. In this paper, we evaluate the robustness of state-of-the-art image classification models trained on the MNIST and CIFAR10 datasets against the fast gradient sign method attack, a simple yet effective method of deceiving neural networks. Our method identifies the specific neurons of a network that are most affected by the adversarial attack being applied. We therefore propose to make fragile neurons more robust against these attacks by compressing features within robust neurons and amplifying the fragile neurons proportionally.
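
As a rough illustration only (not taken from the paper), the following sketch shows the two ingredients the abstract refers to: the fast gradient sign method perturbation and a nodal-dropout sensitivity score over the output channels of the first convolutional layer. It assumes a PyTorch image classifier; the function names (fgsm_attack, nodal_dropout_scores) and the epsilon value are illustrative placeholders, not the authors' implementation:

# Illustrative sketch; assumes a PyTorch classifier `model` whose first
# convolutional layer is `first_conv`. Names and epsilon are placeholders.
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """FGSM: x_adv = clip(x + epsilon * sign(grad_x loss), 0, 1)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad, = torch.autograd.grad(loss, images)
    return (images + epsilon * grad.sign()).clamp(0.0, 1.0).detach()

@torch.no_grad()
def accuracy(model, images, labels):
    return (model(images).argmax(dim=1) == labels).float().mean().item()

def nodal_dropout_scores(model, first_conv, images, labels):
    """Zero one output channel of the first conv layer at a time and record
    the accuracy drop; the per-channel drops form a sensitivity profile."""
    baseline = accuracy(model, images, labels)
    drops = []
    for c in range(first_conv.out_channels):
        # Forward hook that silences channel c (the "nodal dropout").
        handle = first_conv.register_forward_hook(
            lambda mod, inp, out, c=c: out.index_fill(
                1, torch.tensor([c], device=out.device), 0.0))
        drops.append(baseline - accuracy(model, images, labels))
        handle.remove()
    return drops

Comparing the per-channel scores on clean versus FGSM-perturbed batches gives one way to correlate individual channels with the attack, in the spirit of the fragile/robust distinction described in the abstract.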

Place, publisher, year, edition, pages
Springer Nature, 2021. Vol. 12891, p. 16-28
Series
Lecture Notes in Computer Science, ISSN 0302-9743
Keywords [en]
Deep learning, Fragile neurons, Data perturbation, Adversarial targeting, Robustness analysis, Adversarial robustness
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-305419
DOI: 10.1007/978-3-030-86362-3_2
ISI: 000711965200002
Scopus ID: 2-s2.0-85115441681
OAI: oai:DiVA.org:kth-305419
DiVA, id: diva2:1615811
Conference
30th International Conference on Artificial Neural Networks (ICANN), September 14-17, 2021, held online
Note

Part of proceedings: ISBN 978-3-030-86362-3, QC 20230118

Available from: 2021-12-01. Created: 2021-12-01. Last updated: 2023-01-18. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text; Scopus
