Using Fault Injection for the Training of Functions to Detect Soft Errors of DNNs in Automotive Vehicles
KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Mechatronics. ORCID iD: 0000-0002-8028-3607
KTH, School of Industrial Engineering and Management (ITM), Machine Design (Dept.), Mechatronics. ORCID iD: 0000-0001-7048-0108
2022 (English). In: New Advances in Dependability of Networks and Systems / [ed] Zamojski, W., Mazurkiewicz, J., Sugier, J., Walkowiak, T., Kacprzyk, J., Springer, 2022, Vol. 484, p. 308-318. Conference paper, Published paper (Refereed)
Abstract [en]

Advanced functions based on Deep Neural Networks (DNN) have been widely used in automotive vehicles for the perception of operational conditions. To fully exploit the potential benefits of higher levels of automated driving, the trustworthiness of such functions has to be properly ensured. This remains a challenging task for the industry, as traditional approaches to system verification and validation and to fault-tolerance design become insufficient: many of these functions are inherently contextual and probabilistic in both operation and failure. This paper presents a data-centric approach to fault characterization and data generation for the training of monitoring functions that detect soft errors of DNN functions during operation. In particular, a Fault Injection (FI) method has been developed to systematically inject both layer- and neuron-wise faults, including bit-flips and stuck-at faults, into the neural networks. The impacts of the injected faults are then quantified via a probabilistic criterion based on the Kullback-Leibler Divergence. We demonstrate the proposed approach through tests with an AlexNet.
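
The fault-injection and KL-divergence criterion described above can be illustrated with a minimal sketch. The code below is not the authors' implementation: it assumes a PyTorch/torchvision AlexNet, a single bit-flip in one weight of the final classifier layer (layer name, weight index, and bit position are chosen only for illustration), and random tensors in place of real driving data. The fault's impact is measured as the KL divergence between the clean and faulty output distributions.

# Minimal sketch of neuron-wise bit-flip injection and a KL-divergence
# impact measure (illustrative, not the paper's implementation).
import copy
import torch
import torch.nn.functional as F
from torchvision.models import alexnet

def flip_bit(value: torch.Tensor, bit: int) -> torch.Tensor:
    """Flip one bit of a float32 scalar by reinterpreting it as int32."""
    as_int = value.view(torch.int32)
    flipped = as_int ^ (1 << bit)
    return flipped.view(torch.float32)

def inject_bit_flip(model: torch.nn.Module, layer: str, index: tuple, bit: int):
    """Return a copy of the model with one weight bit flipped (a soft error)."""
    faulty = copy.deepcopy(model)
    with torch.no_grad():
        weight = dict(faulty.named_parameters())[layer]
        weight[index] = flip_bit(weight[index].clone(), bit)
    return faulty

def kl_impact(clean: torch.nn.Module, faulty: torch.nn.Module, x: torch.Tensor) -> float:
    """KL divergence between clean and faulty output distributions, KL(p || q)."""
    clean.eval()
    faulty.eval()
    with torch.no_grad():
        p = F.log_softmax(clean(x), dim=1)   # reference (clean) log-probabilities
        q = F.log_softmax(faulty(x), dim=1)  # faulty log-probabilities
    # log_target=True because both arguments are log-probabilities
    return F.kl_div(q, p, log_target=True, reduction="batchmean").item()

if __name__ == "__main__":
    model = alexnet(weights=None)                # untrained AlexNet as a stand-in
    x = torch.randn(4, 3, 224, 224)              # dummy image batch, not real data
    faulty = inject_bit_flip(model, "classifier.6.weight", (0, 0), bit=30)
    print("KL impact of injected bit-flip:", kl_impact(model, faulty, x))

In such a setup, repeating the injection over layers, neurons, and bit positions yields (input, impact) pairs of the kind the abstract describes for training soft-error monitoring functions.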

Place, publisher, year, edition, pages
Springer, 2022. Vol. 484, p. 308-318
Series
Lecture Notes in Networks and Systems, ISSN 2367-3370 ; 484
Keywords [en]
Neural Networks, Anomaly, Soft errors, Fault injection, Kullback-Leibler Divergence, Machine learning
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Research subject
Computer Science; Information and Communication Technology; Industrial Engineering and Management
Identifiers
URN: urn:nbn:se:kth:diva-313002
DOI: 10.1007/978-3-031-06746-4_30
Scopus ID: 2-s2.0-85131909885
OAI: oai:DiVA.org:kth-313002
DiVA, id: diva2:1661512
Conference
DepCoS-RELCOMEX 2022, 27 June - 1 July 2022, Wrocław, Poland
Projects
EUREKA EURIPIDES Trust-E
Funder
Vinnova, 2020-05117
Note

QC 20220616

Part of proceedings: ISBN 978-3-031-06745-7; 978-3-031-06746-4

Available from: 2022-05-27. Created: 2022-05-27. Last updated: 2023-06-08. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Su, Peng; Chen, DeJiu
