Poisoning Actuation Attacks Against the Learning of an Optimal Controller
Daniel Guggenheim School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, GA, USA, 30332.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent Systems, Information Science and Engineering. ORCID iD: 0000-0001-5983-0875
Daniel Guggenheim School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, GA, USA, 30332.
Carnegie Mellon University/Software Engineering Institute, Pittsburgh, PA, USA, 15213.
2024 (English). In: 2024 American Control Conference, ACC 2024, Institute of Electrical and Electronics Engineers (IEEE), 2024, p. 4838-4843. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper, we study the problem of poisoning the learning of an optimal controller by means of an actuation attack. We specifically consider a user who gathers data from a linear system in the form of input and state measurements, and who uses these data to learn an optimal controller. However, the measurements are corrupted by an attacker who has access to the system's actuators and uses them to launch an actuation attack during the learning process. We design this actuation attack so that it optimally corrupts the data used by the user: it forces the user to learn, as closely as possible, a gain that the attacker has selected and that is unrelated to the actual optimal control gain. We prove that designing this poisoning actuation attack boils down to solving certain coupled matrix equations, which we solve using the block successive over-relaxation (SOR) iterative procedure. Simulations on an aircraft model demonstrate the theoretical findings, showing how the poisoning attack effectively misleads the user towards learning an incorrect gain for the system.
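The record does not reproduce the paper's coupled matrix equations, so the following is only an illustrative sketch of the successive over-relaxation (SOR) idea in its basic, element-wise form, applied to a generic linear system Ax = b. The matrix, right-hand side, and relaxation factor omega below are made up for the example; the paper itself uses a block SOR variant on its own equations.

```python
import numpy as np

def sor_solve(A, b, omega=1.1, tol=1e-10, max_iter=10_000):
    """Solve A x = b with successive over-relaxation (SOR).

    Illustrative only: the paper solves coupled matrix equations with a
    block SOR variant; this sketch shows the standard element-wise case.
    """
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel sweep (new values below the diagonal,
            # old values above it), then relax the update by omega.
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Example on a small, diagonally dominant system (made-up data).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(sor_solve(A, b))  # should approximate np.linalg.solve(A, b)
```

For omega = 1 this reduces to Gauss-Seidel; choosing 1 < omega < 2 typically accelerates convergence for suitable systems (e.g. symmetric positive definite ones).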

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024, p. 4838-4843.
Keywords [en]
actuation attacks, cyber-physical systems, learning poisoning
National Category
Control Engineering
Identifiers
URN: urn:nbn:se:kth:diva-354304
DOI: 10.23919/ACC60939.2024.10644755
Scopus ID: 2-s2.0-85204433140
OAI: oai:DiVA.org:kth-354304
DiVA, id: diva2:1902963
Conference
2024 American Control Conference, ACC 2024, Toronto, Canada, July 10-12, 2024
Note

Part of ISBN 9798350382655

Available from: 2024-10-02. Created: 2024-10-02. Last updated: 2024-10-03. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Kanellopoulos, Aris
