Trustworthy Autonomy in Cyber-Physical Systems
KTH, School of Electrical Engineering and Computer Science (EECS).
2025 (English). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
Abstract [en]

Machine learning models, particularly the convolutional neural networks (CNNs) used in computer vision, can now be trained on extensive image datasets. Despite their capabilities, they remain vulnerable to specially crafted images called adversarial patches that can manipulate their output. Adversarial patches can penetrate various defense models, such as NutNet and NAPGuard. This thesis attacks a representative machine learning vision model, YOLOv2, with adversarial patches, both on its own and when protected by NutNet, a defense model designed to shield victim models against such attacks. Our study was conducted by creating adversarial patches and exploring their impact as a function of where the patch was located on the image, the number of patches, real-world execution scenarios, and the size of the training dataset, which consists of pictures of people in various environments. Patch creation involved training on datasets of different sizes for different numbers of epochs. We also used a patch from prior work to launch attacks. The attack methods were also varied: for digital attacks, patches were placed directly on the images; for real-world attacks, the adversarial patches were printed and physically applied to a person. We considered two situations throughout the evaluations, with and without adversarial patch defenses enabled, so that the effectiveness of the patches could be evaluated under various conditions. The results showed that the patches could attack the object detector with high effectiveness, with the attack success rate reaching 75% when NutNet was deactivated and up to 49.6% when NutNet was activated.
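The digital attack setting and the success metric described in the abstract can be sketched as follows. This is an illustrative sketch, not code from the thesis: `apply_patch` and `attack_success_rate` are hypothetical names, the patch is simply pasted over image pixels at a chosen location, and running YOLOv2 (or NutNet) on the patched images is assumed to happen elsewhere, yielding one detected/not-detected flag per image.

```python
import numpy as np

def apply_patch(image: np.ndarray, patch: np.ndarray, top: int, left: int) -> np.ndarray:
    """Paste an adversarial patch onto an image at (top, left).

    image: H x W x 3 array, patch: h x w x 3 array. Returns a patched
    copy; the original image is left untouched (digital attack setting,
    where the patch is placed directly on the image).
    """
    h, w = patch.shape[:2]
    out = image.copy()
    out[top:top + h, left:left + w] = patch
    return out

def attack_success_rate(detected_flags: list[bool]) -> float:
    """Fraction of patched images on which the detector missed the person.

    detected_flags: one boolean per patched image, True if the object
    detector still found the person despite the patch.
    """
    misses = sum(1 for detected in detected_flags if not detected)
    return misses / len(detected_flags)

# Usage sketch: a blank 416x416 image (YOLOv2's input size) with a
# white 100x100 "patch" pasted at row 50, column 50.
img = np.zeros((416, 416, 3), dtype=np.uint8)
patch = np.full((100, 100, 3), 255, dtype=np.uint8)
patched = apply_patch(img, patch, 50, 50)

# If the detector missed the person in 3 out of 4 patched images,
# the attack success rate would be 0.75.
asr = attack_success_rate([True, False, False, False])
```

Varying `top`/`left`, the number of pasted patches, and the dataset size over which the flags are collected corresponds to the evaluation dimensions listed above.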

Abstract [sv]

Machine learning models, as used in computer vision with convolutional neural networks (CNNs), are trained on extensive datasets of images. Despite their capabilities, they remain vulnerable to physical cyber attacks known as adversarial patches. Adversarial patches can pass various defense models, such as NutNet and NAPGuard. This work aims to attack a representative machine learning model, YOLOv2. An attacker using patch attacks could target YOLOv2, as well as the defense model NutNet, which is designed to protect vulnerable models against such attacks. Our study was conducted by creating adversarial patches and examining their impact based on the patch's placement in the image, the number of patches, real-world attack scenarios, and the size of the training data, which consists of pictures of people in various environments. Patch creation involved training on datasets of different sizes for different numbers of epochs. We also used a patch from other work to carry out attacks. Different attack methods were used: for digital attacks, the patches were placed directly on the images; for real-world attacks, the adversarial patches were printed out and physically applied to a person. We assumed two situations during the evaluations, with and without defenses against adversarial patches enabled. This allowed the effectiveness of the adversarial patches to be evaluated under various conditions. The results showed that the patches could attack the object detector with high effectiveness, reaching up to 75% when NutNet was deactivated. When NutNet was activated, the attack success rate reached 49.6%.

Place, publisher, year, edition, pages
2025, p. 489-495.
Series
TRITA-EECS-EX ; 2025:148
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:kth:diva-376170
OAI: oai:DiVA.org:kth-376170
DiVA, id: diva2:2034538
Projects
Bachelor's degree project in Electrical Engineering (Kandidatexamensarbete i Elektroteknik) 2025, EECS, KTH
Available from: 2026-02-02. Created: 2026-02-02.

Open Access in DiVA

fulltext (80627 kB), 11 downloads
File information
File name: FULLTEXT01.pdf
File size: 80627 kB
Checksum (SHA-512):
35ce0a386dafe4649eb99cbe0efdfed651a3c9044e3339612422234d17a7e8ec21d4fd4aa201500c3c7a8f57194994b78b3e0cfbd5319ecd49f18a5d8a7ff775
Type: fulltext. Mimetype: application/pdf.

By organisation
School of Electrical Engineering and Computer Science (EECS)
Electrical Engineering, Electronic Engineering, Information Engineering

The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.

Total: 4071 hits