Reinforcement Learning Based Approach for Flip Attack Detection
KTH, School of Electrical Engineering and Computer Science (EECS). School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control). ORCID iD: 0000-0002-1857-2301
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Decision and Control Systems (Automatic Control). ORCID iD: 0000-0002-3672-5316
2020 (English). In: Proceedings of the IEEE Conference on Decision and Control, Institute of Electrical and Electronics Engineers Inc., 2020, p. 3212-3217. Conference paper, Published paper (Refereed).
Abstract [en]

This paper addresses the detection of flip attacks on sensor network systems, where the attacker flips the distribution of manipulated sensor measurements of a binary state. Based on the sensor measurements, the detector decides either to continue taking observations or to stop, and the goal is to recognize the flip attack as quickly as possible while avoiding stopping the measurements when no attack is present. By assuming an attack probability, the detection problem can be modeled as a partially observable Markov decision process (POMDP), with the dynamics of the hidden states of the POMDP characterized by a stochastic shortest path (SSP) problem. The optimal policy of the SSP depends solely on the transition costs and is independent of the assumed attack probability. Using a fixed-length window and a suitable feature function of the measurements, a Markov decision process (MDP) is constructed to approximate the behavior of the POMDP. The optimal policy of the approximated MDP can then be found by any standard reinforcement learning method. Numerical evaluations demonstrate the effectiveness of the method.
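The record carries no code, but the approximation the abstract describes (fixed-length window, feature function of the measurements, policy learned by a standard reinforcement learning method) can be illustrated with a minimal sketch. Everything below is a hypothetical illustration, not the authors' implementation: the window length, the flip model, the cost values, and the count-of-ones feature are all assumptions, and tabular Q-learning stands in for whatever reinforcement learning method the paper actually uses.

    # Hypothetical sketch (not from the paper): flip-attack detection as a
    # windowed MDP solved by tabular Q-learning. Costs are minimized,
    # matching the stochastic-shortest-path view in the abstract.
    import random

    WINDOW = 10                 # fixed window length (assumption)
    P_PRE, P_POST = 0.2, 0.8    # P(measurement = 1) before / after the flip (assumption)
    P_ATTACK = 0.01             # per-step probability that the attack starts (assumption)
    FALSE_ALARM_COST = 50.0     # cost of stopping when no attack is present
    DELAY_COST = 1.0            # cost per step the attack runs undetected
    CONTINUE, STOP = 0, 1

    def run_episode(Q, eps, alpha=0.1, max_steps=500):
        """One episode of epsilon-greedy Q-learning; updates Q in place."""
        attacked = False
        window = [1 if random.random() < P_PRE else 0 for _ in range(WINDOW)]
        for _ in range(max_steps):
            s = sum(window)                   # feature: number of 1s in the window
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = min((CONTINUE, STOP), key=lambda act: Q[s][act])
            if a == STOP:                     # terminal action of the SSP
                cost = 0.0 if attacked else FALSE_ALARM_COST
                Q[s][a] += alpha * (cost - Q[s][a])
                return
            if not attacked and random.random() < P_ATTACK:
                attacked = True               # attack onset flips the distribution
            p = P_POST if attacked else P_PRE
            window = window[1:] + [1 if random.random() < p else 0]
            cost = DELAY_COST if attacked else 0.0
            s2 = sum(window)
            Q[s][a] += alpha * (cost + min(Q[s2]) - Q[s][a])

    Q = [[0.0, 0.0] for _ in range(WINDOW + 1)]   # one row per feature value
    for ep in range(20000):
        run_episode(Q, eps=max(0.05, 1.0 - ep / 10000))
    policy = ["STOP" if Q[s][STOP] < Q[s][CONTINUE] else "CONTINUE"
              for s in range(WINDOW + 1)]
    print(policy)

Under these illustrative assumptions the learned policy tends to stop once the count of 1s in the window is high, i.e., it recovers a window-threshold detector.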

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2020, p. 3212-3217.
Keywords [en]
Learning systems, Markov processes, Numerical methods, Sensor networks, Stochastic systems, Detection problems, Markov Decision Processes, Optimal solutions, Partially observable Markov decision process, Reinforcement learning method, Sensor measurements, Sensor network systems, Stochastic shortest paths, Reinforcement learning
National Category
Control Engineering
Identifiers
URN: urn:nbn:se:kth:diva-301207
DOI: 10.1109/CDC42340.2020.9303818
ISI: 000717663402086
Scopus ID: 2-s2.0-85099875545
OAI: oai:DiVA.org:kth-301207
DiVA, id: diva2:1591689
Conference
59th IEEE Conference on Decision and Control, CDC 2020, 14 December 2020 through 18 December 2020
Funder
Knut and Alice Wallenberg Foundation
Swedish Foundation for Strategic Research
Swedish Research Council
Note

QC 20220201

Available from: 2021-09-07. Created: 2021-09-07. Last updated: 2024-01-10. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Liu, Hanxiao; Li, Yuchao; Mårtensson, Jonas; Johansson, Karl H.
