2024 (English) In: Computers & Security (Print), ISSN 0167-4048, E-ISSN 1872-6208, Vol. 140, article id 103743. Article in journal (Refereed), Published
Abstract [en]
The complexity of ICT infrastructures is continuously increasing, presenting a formidable challenge in safeguarding them against cyber attacks. In light of escalating cyber threats and the limited availability of expert resources, organizations must explore more efficient approaches to assessing their resilience and undertaking proactive measures. Threat modeling is an effective approach for assessing the cyber resilience of ICT systems. One method is to utilize Attack Graphs, which visually represent the steps taken by adversaries during an attack. Previously, MAL (the Meta Attack Language) was proposed as a framework for developing Domain-Specific Languages (DSLs) and generating Attack Graphs for modeled infrastructures. coreLang is a MAL-based threat modeling language that utilizes such Attack Graphs to enable attack simulations and security assessments for the generic ICT domain. Developing domain-specific languages for threat modeling and attack simulations provides a powerful approach for conducting security assessments of infrastructures. However, ensuring the correctness of these modeling languages raises a separate research question. In this study, we conduct an empirical experiment aiming to falsify such a domain-specific threat modeling language. An inability to falsify the language through our empirical testing would corroborate it, strengthening our belief in its validity within the parameters of our study. The outcomes of this approach indicated that, on average, the assessments generated by attack simulations outperformed those of human experts. Additionally, both human experts and simulations performed significantly better than random guessers in their assessments. While specific human experts occasionally achieved better assessments for particular questions in the experiments, the efficiency of simulation-generated assessments surpassed that of human domain experts.
Place, publisher, year, edition, pages
Elsevier BV, 2024
Keywords
Cyber attack simulations, Cyber security, Domain experts, Domain-specific threat modeling language, Empirical language evaluation
National Category
Computer Sciences; Computer Systems
Identifiers
urn:nbn:se:kth:diva-343486 (URN)
10.1016/j.cose.2024.103743 (DOI)
001181589500001 (ISI)
2-s2.0-85184028408 (Scopus ID)
Note
QC 20240215
Available from: 2024-02-15 Created: 2024-02-15 Last updated: 2025-05-02 Bibliographically approved