Adversarial Training with Maximal Coding Rate Reduction
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering. Harbin Institute of Technology, Department of Control Science and Engineering, Harbin, China.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Information Science and Engineering. ORCID iD: 0000-0002-7807-5681
2024 (English). In: Conference Record of the 58th Asilomar Conference on Signals, Systems and Computers, ACSSC 2024, Institute of Electrical and Electronics Engineers (IEEE), 2024, p. 1866-1870. Conference paper, Published paper (Refereed).
Abstract [en]

Deep convolutional networks can solve a variety of complex tasks in image processing. However, adversarial attacks have been shown to fool deep learning models. Adversarial training, which incorporates adversarial examples into the training process, is a common strategy for improving the robustness of deep learning models against such examples; traditionally, cross-entropy is used as the loss function during this process. In this paper, we propose two new adversarial training methods that apply the principle of Maximal Coding Rate Reduction (MCR2). We evaluate the different adversarial training methods by comparing their clean and adversarial accuracies, and show that adversarial training with the MCR2 loss function yields a more robust network than the traditional adversarial training method: in our experiments, adversarial accuracies improve by up to 10%. We further discuss the two loss functions using a model.
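The MCR2 principle mentioned in the abstract replaces cross-entropy with a rate-reduction objective: expand the coding rate of the whole feature set while compressing the coding rate of each class. A minimal NumPy sketch of that loss, following the published MCR2 formulation (the function names and the ε value are illustrative, not taken from this paper):

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """R(Z) = 1/2 logdet(I + d/(n*eps^2) Z Z^T), for features Z of shape (d, n)."""
    d, n = Z.shape
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * Z @ Z.T)
    return 0.5 * logdet

def mcr2_loss(Z, labels, eps=0.5):
    """Negative rate reduction -(R - Rc); minimizing it maximizes MCR2."""
    d, n = Z.shape
    R = coding_rate(Z, eps)
    Rc = 0.0
    for c in np.unique(labels):
        Zc = Z[:, labels == c]          # features belonging to class c
        nc = Zc.shape[1]
        _, logdet = np.linalg.slogdet(
            np.eye(d) + (d / (nc * eps**2)) * Zc @ Zc.T)
        Rc += (nc / (2.0 * n)) * logdet  # class-conditional rate, weighted
    return -(R - Rc)
```

In an adversarial training loop, a loss of this form would be evaluated on the network's features of adversarially perturbed inputs in place of (or alongside) cross-entropy, with gradients flowing through Z back into the network; the sketch above only illustrates the loss itself on fixed features.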

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024, p. 1866-1870
Keywords [en]
adversarial attack, adversarial example, adversarial training, deep neural networks, Machine learning, quadratic similarity queries on compressed data
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:kth:diva-362682
DOI: 10.1109/IEEECONF60004.2024.10942802
ISI: 001479671800342
Scopus ID: 2-s2.0-105002685564
OAI: oai:DiVA.org:kth-362682
DiVA, id: diva2:1954124
Conference
58th Asilomar Conference on Signals, Systems and Computers, ACSSC 2024, Hybrid, Pacific Grove, United States of America, Oct 27 2024 - Oct 30 2024
Note

Part of ISBN 9798350354058

QC 20250425

Available from: 2025-04-23. Created: 2025-04-23. Last updated: 2025-12-05. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Zhao, Hongbo; Flierl, Markus

Search in DiVA

By author/editor
Chu, Hsiang-Yu; Zhao, Hongbo; Flierl, Markus
By organisation
Information Science and Engineering
Electrical Engineering, Electronic Engineering, Information Engineering

Search outside of DiVA

Google
Google Scholar
