Advanced functions based on Deep Neural Networks (DNNs) are widely used in automotive vehicles for the perception of operational conditions. To fully exploit the potential benefits of higher levels of automated driving, the trustworthiness of such functions has to be properly ensured. This remains a challenging task for the industry, as traditional approaches, such as system verification and validation and fault-tolerance design, become insufficient: many of these functions are inherently contextual and probabilistic in both operation and failure. This paper presents a data-centric approach to fault characterization and data generation for the training of monitoring functions that detect soft errors of DNN functions during operation. In particular, a Fault Injection (FI) method has been developed to systematically inject both layer- and neuron-wise faults, such as bit-flips and stuck-at faults, into the neural networks. The impacts of the injected faults are then quantified via a probabilistic criterion based on the Kullback-Leibler (KL) divergence. We demonstrate the proposed approach through tests with an AlexNet model.
Part of proceedings: ISBN 978-3-031-06745-7; 978-3-031-06746-4