Conformalized Adversarial Attack Detection for Graph Neural Networks
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. ORCID iD: 0000-0001-9969-4660
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. ORCID iD: 0000-0003-2745-6414
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. ORCID iD: 0000-0001-8382-0300
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. ORCID iD: 0000-0001-5923-4440
2023 (English). In: Proceedings of the 12th Symposium on Conformal and Probabilistic Prediction with Applications, COPA 2023, ML Research Press, 2023, p. 311-323. Conference paper, published paper (refereed).
Abstract [en]

Graph Neural Networks (GNNs) have achieved remarkable performance on diverse graph representation learning tasks. However, recent studies have unveiled their susceptibility to adversarial attacks, leading to the development of various defense techniques to enhance their robustness. In this work, instead of improving the robustness, we propose a framework to detect adversarial attacks and provide an adversarial certainty score in the prediction. Our framework evaluates whether an input graph significantly deviates from the original data and provides a well-calibrated p-value based on this score through the conformal paradigm, thereby controlling the false alarm rate. We demonstrate the effectiveness of our approach on various benchmark datasets. Although we focus on graph classification, the proposed framework can be readily adapted for other graph-related tasks, such as node classification.
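The abstract describes flagging an input graph by turning a deviation score into a conformal p-value whose threshold bounds the false alarm rate. The sketch below is not the authors' implementation; it only illustrates the standard conformal p-value construction under the assumption that a nonconformity score for each graph (hypothetical here, e.g. a distance in a GNN embedding space) is available.

```python
# Minimal sketch of conformal adversarial-input detection (assumed, not the paper's code).
# Assumption: scores for clean calibration graphs and for the test graph are precomputed
# by some user-supplied nonconformity measure.
import numpy as np


def conformal_p_value(test_score: float, calibration_scores: np.ndarray) -> float:
    """Fraction of clean calibration graphs that look at least as anomalous as the
    test graph, with the usual plus-one correction."""
    n = len(calibration_scores)
    return (1.0 + np.sum(calibration_scores >= test_score)) / (n + 1.0)


def flag_adversarial(test_score: float, calibration_scores: np.ndarray,
                     alpha: float = 0.05) -> bool:
    """Flag the input as adversarial when its p-value is at most alpha.
    If the test graph is clean and exchangeable with the calibration graphs,
    the probability of a false alarm is at most alpha."""
    return conformal_p_value(test_score, calibration_scores) <= alpha


# Toy usage with synthetic scores (illustration only):
rng = np.random.default_rng(0)
calib = rng.normal(loc=0.0, scale=1.0, size=500)        # scores of clean graphs
print(flag_adversarial(4.2, calib))                     # likely True (far from clean data)
print(flag_adversarial(0.1, calib))                     # likely False
```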

Place, publisher, year, edition, pages
ML Research Press, 2023, p. 311-323.
Keywords [en]
Adversarial Attacks, Conformal Prediction, Graph Neural Networks
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-340788
ISI: 001221733900021
Scopus ID: 2-s2.0-85178660782
OAI: oai:DiVA.org:kth-340788
DiVA id: diva2:1819817
Conference
12th Symposium on Conformal and Probabilistic Prediction with Applications, COPA 2023, Limassol, Cyprus, September 13-15, 2023
Note

QC 20231215

Available from: 2023-12-15. Created: 2023-12-15. Last updated: 2024-07-05. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Scopus

Authority records

Ennadir, Sofiane; Alkhatib, Amr; Boström, Henrik; Vazirgiannis, Michalis
