Fragility, robustness and antifragility in deep learning
University of Reading, United Kingdom.
KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.). ORCID iD: 0000-0001-6306-6777
University of Catania, Italy; University of Cambridge, United Kingdom.
Newcastle University, United Kingdom.
2024 (English). In: Artificial Intelligence, ISSN 0004-3702, E-ISSN 1872-7921, Vol. 327, article id 104060. Article in journal (Refereed). Published.
Abstract [en]

We propose a systematic analysis of deep neural networks (DNNs) based on a signal-processing technique for network parameter removal, in the form of synaptic filters, that identifies the fragility, robustness and antifragility characteristics of DNN parameters. The proposed analysis investigates whether DNN performance is affected negatively, invariantly, or positively on both clean and adversarially perturbed test datasets when the DNN undergoes synaptic filtering. We define three filtering scores that quantify the fragility, robustness and antifragility characteristics of DNN parameters, based on performance for (i) the clean dataset, (ii) the adversarial dataset, and (iii) the difference in performance between the clean and adversarial datasets. We validate the proposed analysis on the ResNet-18, ResNet-50, SqueezeNet-v1.1 and ShuffleNet V2 x1.0 architectures with the MNIST, CIFAR10 and Tiny ImageNet datasets. For a given network architecture, the filtering scores identify parameters whose characteristics are invariant across datasets over the learning epochs; conversely, for a given dataset, the scores identify parameters whose characteristics are invariant across architectures. We show that our synaptic filtering method improves the test accuracy of ResNet and ShuffleNet models on the adversarial dataset when only the robust and antifragile parameters are selectively retrained at any given epoch, demonstrating an application of the proposed strategy to improving model robustness.
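The abstract describes the method only at a high level; the Python (PyTorch) sketch below illustrates one plausible reading of it and is an assumption on our part, not the authors' code. Here the synaptic filter is taken to be a simple magnitude threshold, and the three filtering scores are taken as accuracy changes, relative to the unfiltered baseline, for the clean dataset, the adversarial dataset, and their difference; the names magnitude_filter, filtering_scores and keep_ratio are hypothetical.

    # Illustrative sketch only, not the paper's implementation.
    import torch

    @torch.no_grad()
    def magnitude_filter(weight: torch.Tensor, keep_ratio: float) -> torch.Tensor:
        # Keep the largest-magnitude fraction `keep_ratio` of entries
        # (0 < keep_ratio < 1) and zero out the rest.
        k = int(weight.numel() * keep_ratio)
        threshold = weight.abs().flatten().kthvalue(weight.numel() - k).values
        return weight * (weight.abs() > threshold)

    @torch.no_grad()
    def accuracy(model, loader) -> float:
        model.eval()
        correct = total = 0
        for x, y in loader:
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
        return correct / total

    def filtering_scores(model, layer, clean_loader, adv_loader, keep_ratio=0.5):
        # Stand-ins for the three scores: performance change after filtering
        # one layer, for (i) clean data, (ii) adversarial data, and
        # (iii) their difference. Negative suggests fragile parameters,
        # near zero robust, positive antifragile.
        base_clean = accuracy(model, clean_loader)
        base_adv = accuracy(model, adv_loader)
        original = layer.weight.data.clone()
        layer.weight.data = magnitude_filter(original, keep_ratio)
        score_clean = accuracy(model, clean_loader) - base_clean
        score_adv = accuracy(model, adv_loader) - base_adv
        layer.weight.data = original  # restore the unfiltered weights
        return score_clean, score_adv, score_clean - score_adv

Under this reading, selective retraining (the strategy the abstract reports for improving adversarial accuracy) would amount to updating only the parameters whose scores mark them as robust or antifragile, e.g. by masking gradients of the remaining parameters; the paper defines the exact filters and scores.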

Place, publisher, year, edition, pages
Elsevier BV, 2024. Vol. 327, article id 104060.
Keywords [en]
Adversarial attacks, Adversarial robustness, Adversarial training, Deep neural networks, Network sparsity, Robustness analysis
National Category
Computer Systems
Identifiers
URN: urn:nbn:se:kth:diva-342190
DOI: 10.1016/j.artint.2023.104060
Scopus ID: 2-s2.0-85181127495
OAI: oai:DiVA.org:kth-342190
DiVA, id: diva2:1827891
Note

QC 20240115

Available from: 2024-01-15. Created: 2024-01-15. Last updated: 2024-01-15. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Martino, Ivan
