KTH Publications (kth.se)
Publications (10 of 10)
Heiding, F., Schneier, B., Vishwanath, A., Bernstein, J. & Park, P. S. (2024). Devising and Detecting Phishing Emails Using Large Language Models. IEEE Access, 12, 42131-42146
Devising and Detecting Phishing Emails Using Large Language Models
2024 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 12, p. 42131-42146. Article in journal (Refereed). Published
Abstract [en]

AI programs, built using large language models, make it possible to automatically create phishing emails based on a few data points about a user. The V-Triad is a set of rules for manually designing phishing emails to exploit our cognitive heuristics and biases. In this study, we compare the performance of phishing emails created automatically by GPT-4 and manually using the V-Triad. We also combine GPT-4 with the V-Triad to assess their combined potential. A fourth group, exposed to generic phishing emails, served as our control group. We use a red-teaming approach by simulating attackers and emailing 112 participants recruited for the study. The control-group emails received a click-through rate of 19-28%, the GPT-generated emails 30-44%, the emails generated by the V-Triad 69-79%, and the emails generated by GPT and the V-Triad combined 43-81%. Each participant was asked to explain why they pressed or did not press a link in the email. These answers often contradict each other, highlighting the importance of personal differences. Next, we used four popular large language models (GPT, Claude, PaLM, and LLaMA) to detect the intention of phishing emails and compared the results to human detection. The language models demonstrated a strong ability to detect malicious intent, even in non-obvious phishing emails. They sometimes surpassed human detection, although they were often slightly less accurate than humans. Finally, we analyze the economic aspects of AI-enabled phishing attacks, showing how large language models increase the incentives for phishing and spear phishing by reducing their costs.
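The click-through ranges reported above can be compared directly; the short Python sketch below tabulates them (the group labels are ours, the percentages are the ones given in the abstract):

```python
# Click-through-rate ranges (min %, max %) as reported in the abstract.
rates = {
    "control (generic phishing)": (19, 28),
    "GPT-4 generated": (30, 44),
    "V-Triad (manual)": (69, 79),
    "GPT-4 + V-Triad": (43, 81),
}

def midpoint(low, high):
    """Midpoint of a percentage range, a crude single-number summary."""
    return (low + high) / 2

# Rank the groups by midpoint click-through rate, highest first.
ranked = sorted(rates, key=lambda g: midpoint(*rates[g]), reverse=True)
for group in ranked:
    low, high = rates[group]
    print(f"{group}: {low}-{high}% (midpoint {midpoint(low, high):.1f}%)")
```

Note that ranking by midpoint hides the wide overlap of the GPT-4 + V-Triad range (43-81%) with the other groups, which is one reason the paper reports ranges rather than single figures.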

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
Phishing, large language models, social engineering, artificial intelligence
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-345143 (URN)
10.1109/ACCESS.2024.3375882 (DOI)
001192203500001 (ISI)
2-s2.0-85187996490 (Scopus ID)
Note

QC 20240408

Available from: 2024-04-08 Created: 2024-04-08 Last updated: 2024-09-18. Bibliographically approved
Heiding, F. (2024). Mitigating AI-Enabled Cyber Attacks on Hardware, Software, and System Users. (Doctoral dissertation). Stockholm: KTH Royal Institute of Technology
Mitigating AI-Enabled Cyber Attacks on Hardware, Software, and System Users
2024 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

This doctoral thesis addresses the rapidly evolving landscape of computer security threats posed by advancements in artificial intelligence (AI), particularly large language models (LLMs). We demonstrate how AI can automate and enhance cyberattacks to identify the most pressing dangers and present feasible mitigation strategies. The study is divided into two main branches: attacks targeting hardware and software systems, and attacks focusing on system users, such as phishing. The first paper of the thesis identifies research communities within computer security red teaming. We created a Python tool to scrape and analyze 23,459 articles from the Scopus database, highlighting popular communities such as smart grids and attack graphs and providing a comprehensive overview of prominent authors, institutions, communities, and sub-communities. The second paper conducts red-teaming assessments of connected devices commonly found in modern households, such as connected vacuum cleaners and door locks. Our experiments demonstrate how easily attackers can exploit different devices and emphasize the need for improved security measures and public awareness. The third paper explores the use of LLMs to generate phishing emails. The findings demonstrate that while human experts still outperform LLMs, a hybrid approach combining human expertise and AI significantly reduces the cost and time required to launch phishing attacks while maintaining high success rates. We further analyze the economic aspects of AI-enhanced phishing to show how LLMs affect the attacker's incentives for various phishing use cases. The fourth study evaluates LLMs' potential to automate and enhance cyberattacks on hardware and software systems. We create a framework for evaluating the capability of LLMs to conduct attacks on hardware and software and evaluate the framework by conducting 31 AI-automated cyberattacks on devices from connected households. The results indicate that while LLMs can reduce attack costs, they do not significantly increase the attacks' damage or scalability. We expect this to change with future LLM versions, but the findings present an opportunity for proactive measures to develop benchmarks and defensive tools to control the misuse of LLMs.

Abstract [sv]

Modern cyberattacks are changing rapidly as a result of advances in artificial intelligence (AI), particularly through large language models (LLMs). We demonstrate how AI can automate and enhance cyberattacks in order to identify the greatest threats, and we present strategies for countering them. The study is divided into two parts: attacks targeting hardware and software systems, and attacks focused on system users, such as phishing. The first paper of the thesis identifies research communities within red teaming. We created a Python tool to retrieve and analyze 23,459 articles from the Scopus database, providing an overview of prominent authors, institutions, and the development of communities and sub-communities within the field. The second paper conducts red-teaming tests of connected devices from modern households, such as connected vacuum cleaners and door locks. Our experiments show how easily attackers can find vulnerabilities in devices and emphasize the need for improved security measures and greater public awareness. The third paper explores the use of LLMs to generate phishing messages. The results show that human experts still outperform LLMs, but a hybrid approach combining human expertise and AI reduces the cost and time required to launch phishing attacks while maintaining high message quality. The fourth study evaluates the potential of LLMs to automate and enhance cyberattacks on hardware and software systems. We create a framework for evaluating LLMs' ability to conduct attacks against hardware and software, and we evaluate the framework by conducting 31 AI-automated cyberattacks on devices from connected households. The results indicate that LLMs can reduce attack costs but do not markedly increase the attacks' damage or scalability. We expect this to change with future LLM versions, but the findings present an opportunity for proactive measures to develop benchmarks and defensive tools to control the misuse of LLMs.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2024. p. x, 71
Series
TRITA-EECS-AVL ; 2024:68
Keywords
Computer security, Red teaming, phishing, artificial intelligence, large language models
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-353243 (URN)
9789181060409 (ISBN)
Public defence
2024-10-10, https://kth-se.zoom.us/j/61272075034, D31, Lindstedtsvägen 9, Stockholm, 13:00 (English)
Note

QC 20241004

Available from: 2024-09-19 Created: 2024-09-18 Last updated: 2024-10-21. Bibliographically approved
Süren, E., Heiding, F., Olegård, J. & Lagerström, R. (2023). PatrIoT: practical and agile threat research for IoT. International Journal of Information Security, 22(1), 213-233
PatrIoT: practical and agile threat research for IoT
2023 (English). In: International Journal of Information Security, ISSN 1615-5262, E-ISSN 1615-5270, Vol. 22, no 1, p. 213-233. Article in journal, Editorial material (Refereed). Published
Abstract [en]

Internet of Things (IoT) products, although widely adopted, still pose challenges in the modern cybersecurity landscape. Many IoT devices are resource-constrained and almost constantly online. Furthermore, the security features of these devices often receive less attention, and fewer methods, standards, and guidelines are available for testing them. Although a few approaches are available to assess the security posture of IoT products, those in use are mostly based on traditional non-IoT-focused techniques and generally lack the attacker's perspective. This study provides a four-stage IoT vulnerability research methodology built on four key elements: logical attack-surface decomposition, a compilation of the top 100 weaknesses, lightweight risk scoring, and step-by-step penetration testing guidelines. Our proposed methodology is evaluated with multiple IoT products. The results indicate that PatrIoT allows cybersecurity practitioners without much experience to advance vulnerability research activities quickly and reduces the risk of critical IoT penetration testing steps being overlooked.

Place, publisher, year, edition, pages
Springer Nature, 2023
National Category
Computer Systems
Identifiers
urn:nbn:se:kth:diva-321646 (URN)
10.1007/s10207-022-00633-3 (DOI)
000885228800001 (ISI)
2-s2.0-85142242003 (Scopus ID)
Note

QC 20221201

Available from: 2022-11-18 Created: 2022-11-18 Last updated: 2023-10-16. Bibliographically approved
Heiding, F., Süren, E., Olegård, J. & Lagerström, R. (2023). Penetration testing of connected households. Computers & Security, 126, Article ID 103067.
Penetration testing of connected households
2023 (English). In: Computers & Security, ISSN 0167-4048, E-ISSN 1872-6208, Vol. 126, article id 103067. Article in journal (Refereed). Published
Abstract [en]

Connected devices have become an integral part of modern homes and household devices, such as vacuum cleaners and refrigerators, are now often connected to networks. This connectivity introduces an entry point for cyber attackers. The plethora of successful cyber attacks against household IoT indicates that the security of these devices, or the security of applications related to these devices, is often lacking. Existing penetration testing studies usually focus on individual devices, and recent studies often mention the need for more extensive vulnerability assessments. Therefore, this study investigates the cyber security of devices commonly located in connected homes. Systematic penetration tests were conducted on 22 devices in five categories related to connected homes: smart door locks, smart cameras, smart car adapters/garages, smart appliances, and miscellaneous smart home devices. In total, 17 vulnerabilities were discovered and published as new CVEs. Some CVEs received critical severity rankings from the National Vulnerability Database (NVD), reaching 9.8/10. The devices are already being sold and used worldwide, and the discovered vulnerabilities could lead to severe consequences for residents, such as an attacker gaining physical access to the house. In addition to the published CVEs, 52 weaknesses were discovered that could potentially lead to new CVEs in the future. To our knowledge, this is the most comprehensive study on penetration testing of connected household products.

Place, publisher, year, edition, pages
Elsevier BV, 2023
Keywords
Penetration testing, Ethical hacking, Internet of things, Connected households, Smart home, Pentest, Cyber security
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:kth:diva-324051 (URN)
10.1016/j.cose.2022.103067 (DOI)
000917439700001 (ISI)
2-s2.0-85144826963 (Scopus ID)
Note

QC 20230222

Available from: 2023-02-22 Created: 2023-02-22 Last updated: 2025-08-28. Bibliographically approved
Heiding, F., Katsikeas, S. & Lagerström, R. (2023). Research communities in cyber security vulnerability assessments: A comprehensive literature review. Computer Science Review, 48, Article ID 100551.
Research communities in cyber security vulnerability assessments: A comprehensive literature review
2023 (English). In: Computer Science Review, ISSN 1574-0137, E-ISSN 1876-7745, Vol. 48, article id 100551. Article, review/survey (Refereed). Published
Abstract [en]

Ethical hacking and vulnerability assessments are gaining rapid momentum as academic fields of study. Still, it is sometimes unclear what research areas these categories include and how they fit into the traditional academic framework. Previous studies have reviewed the literature in the field, but these attempts rely on manual analysis and thus fail to provide a comprehensive view of the domain. To better understand how the area is treated within academia, 537,629 related articles from the Scopus database were analyzed. A Python script was used for data mining and analysis, and 23,459 articles were included in the final synthesis. The publication dates of the articles ranged from 1975 to 2022. They were authored by 53,495 authors and produced an aggregated total of 836,956 citations. Fifteen research communities were detected using the Louvain community detection algorithm: smart grids, attack graphs, security testing, software vulnerabilities, Internet of Things (IoT), network vulnerability, vulnerability analysis, Android, cascading failures, authentication, Software-Defined Networking (SDN), spoofing attacks, malware, trust models, and red teaming. In addition, each community had several individual subcommunities, constituting a total of 126. From the trends of the analyzed studies, it is clear that research interest in ethical hacking and vulnerability assessment is increasing.
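The Louvain step described above can be reproduced at toy scale with networkx, which ships a Louvain implementation (networkx >= 2.8). The six-node graph below is a stand-in for the paper's Scopus citation data, not the data itself:

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities

# Toy graph: two triangles joined by one bridge edge, mimicking two
# densely self-citing research communities with a single cross-citation.
G = nx.Graph()
G.add_edges_from([(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)])

# Louvain greedily maximizes modularity; the seed fixes random tie-breaking.
communities = louvain_communities(G, seed=42)
print([sorted(c) for c in communities])
```

On a real citation network the same call applies, with nodes standing for articles and (optionally weighted) edges for citation links.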

Place, publisher, year, edition, pages
Elsevier BV, 2023
Keywords
Systematic literature review, SLR, Vulnerability assessment, Ethical hacking, Cybersecurity, Scopus, Penetration testing
National Category
Software Engineering
Identifiers
urn:nbn:se:kth:diva-326627 (URN)
10.1016/j.cosrev.2023.100551 (DOI)
000969160400001 (ISI)
2-s2.0-85151293888 (Scopus ID)
Note

QC 20230509

Available from: 2023-05-09 Created: 2023-05-09 Last updated: 2024-09-18. Bibliographically approved
Wester, P., Heiding, F. & Lagerström, R. (2021). Anomaly-based Intrusion Detection using Tree Augmented Naive Bayes. In: International Workshop on Enterprise Distributed Object Computing, EDOCW. Paper presented at the International Workshop on Enterprise Distributed Object Computing, EDOCW. IEEE
Anomaly-based Intrusion Detection using Tree Augmented Naive Bayes
2021 (English). In: International Workshop on Enterprise Distributed Object Computing, EDOCW, IEEE, 2021. Conference paper, Published paper (Refereed)
Abstract [en]

Information technology is continuously becoming a more central part of society, and together with the increased connectivity and inter-dependency of devices, it is becoming more important to keep systems secure. Most modern enterprises use some form of intrusion detection to detect hostile cyber activity that enters the organization. One of the major challenges of intrusion detection in computer networks is detecting types of intrusions that have not previously been encountered. These unknown intrusions are generally detected by methods collectively called anomaly detection. It is now popular to use various artificial intelligence schemes to enhance anomaly detection of network traffic, and many state-of-the-art models reach a detection rate of well over 99%. One such promising algorithm is the Tree Augmented Naive Bayes (TAN) classifier. However, it is crucial to implement TAN correctly in order to benefit from its full performance. This study implements a TAN classifier for anomaly-based intrusion detection of computer network traffic and presents practical insights on how to maximize its performance. The algorithm is evaluated on two data sets with data from simulated cyber attacks: NSL-KDD and UNSW-NB15. We contribute to the field by validating the usefulness of TAN for anomaly detection in computer networks, as well as by providing practical insights to new practitioners who want to utilize TAN in intrusion detection systems.
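The "tree augmentation" in TAN comes from the standard Friedman et al. construction: weight every feature pair by class-conditional mutual information and keep a maximum spanning tree over those weights. A minimal sketch of that structure-learning step on synthetic data (illustrative only, not the paper's implementation):

```python
from itertools import combinations

import networkx as nx
import numpy as np


def cond_mutual_info(x, y, c):
    """Empirical conditional mutual information I(X; Y | C) for discrete arrays."""
    total = 0.0
    for cv in np.unique(c):
        mask = c == cv
        p_c = mask.mean()
        xs, ys = x[mask], y[mask]
        for xv in np.unique(xs):
            for yv in np.unique(ys):
                p_xy = np.mean((xs == xv) & (ys == yv))
                if p_xy == 0:
                    continue
                p_x = np.mean(xs == xv)
                p_y = np.mean(ys == yv)
                total += p_c * p_xy * np.log(p_xy / (p_x * p_y))
    return total


def tan_tree(X, y):
    """Feature-dependency tree that 'augments' naive Bayes in a TAN classifier."""
    g = nx.Graph()
    g.add_nodes_from(range(X.shape[1]))
    for i, j in combinations(range(X.shape[1]), 2):
        g.add_edge(i, j, weight=cond_mutual_info(X[:, i], X[:, j], y))
    return nx.maximum_spanning_tree(g)


# Synthetic check: feature 1 nearly copies feature 0, feature 2 is noise,
# so the learned tree should link features 0 and 1.
rng = np.random.default_rng(0)
label = rng.integers(0, 2, 1000)
f0 = rng.integers(0, 2, 1000)
f1 = (f0 ^ (rng.random(1000) < 0.05)).astype(int)
f2 = rng.integers(0, 2, 1000)
tree = tan_tree(np.column_stack([f0, f1, f2]), label)
print(sorted(tree.edges()))
```

A full TAN classifier would then direct this tree from a root feature and estimate P(feature | parent feature, class) tables, which the sketch omits.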

Place, publisher, year, edition, pages
IEEE, 2021
Keywords
Smoothing methods, Computational modeling, Intrusion detection, Telecommunication traffic, Organizations, Computer networks, Classification algorithms
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-305880 (URN)
10.1109/EDOCW52865.2021.00040 (DOI)
000744466000015 (ISI)
2-s2.0-85123006466 (Scopus ID)
Conference
International Workshop on Enterprise Distributed Object Computing, EDOCW
Note

QC 20220214

Available from: 2021-12-07 Created: 2021-12-07 Last updated: 2022-06-25. Bibliographically approved
Välja, M., Heiding, F., Franke, U. & Lagerström, R. (2020). Automating threat modeling using an ontology framework: Validated with data from critical infrastructures. Cybersecurity, 3(1)
Automating threat modeling using an ontology framework: Validated with data from critical infrastructures
2020 (English). In: Cybersecurity, E-ISSN 2523-3246, Vol. 3, no 1. Article in journal (Refereed). Published
Abstract [en]

Threat modeling is of increasing importance to IT security, and it is a complex and resource-demanding task. The aim of automating threat modeling is to simplify model creation by using data that are already available. However, the collected data often lack context; this can make the automated models less precise in terms of domain knowledge than those created by an expert human modeler. The lack of domain knowledge in modeling automation can be addressed with ontologies. In this paper, we introduce an ontology framework to improve automatic threat modeling. The framework is developed with conceptual modeling and validated using three different datasets: a small-scale utility lab, a water utility control network, and a university IT environment. The framework produced successful results such as standardizing input sources, removing duplicate name entries, and grouping application software more logically.

Place, publisher, year, edition, pages
Springer Nature, 2020
Keywords
Automated modeling, Conceptual models, Ontologies, Ontology framework, Threat modeling
National Category
Computer Systems, Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-282960 (URN)
10.1186/s42400-020-00060-8 (DOI)
000672545300001 (ISI)
2-s2.0-85091719168 (Scopus ID)
Funder
SweGRIDS - Swedish Centre for Smart Grids and Energy Storage
Note

QC 20201012

Available from: 2020-10-04 Created: 2020-10-04 Last updated: 2022-06-25. Bibliographically approved
Heiding, F. & Lagerström, R. (2020). Ethical Principles for Designing Responsible Offensive Cyber Security Training. In: Privacy and Identity 2020. Paper presented at the Privacy and Identity 2020 International Summer School, Maribor, Slovenia, September 21–23, 2020 (pp. 21-39).
Ethical Principles for Designing Responsible Offensive Cyber Security Training
2020 (English). In: Privacy and Identity 2020, 2020, p. 21-39. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper we present five principles for designing ethically responsible offensive cyber security training. The principles can be implemented in existing or new study plans and target both academic and non-academic courses. Subject-matter experts within various cyber security domains were consulted to validate and fine-tune the principles, together with a literature review of ethical studies in related domains. The background for designing the principles is the continued popularity of offensive cyber security (penetration testing, ethical hacking). Offensive cyber security means actively trying to break or compromise a system in order to find its vulnerabilities. If this expertise is placed in the wrong hands, it can cause severe damage to organizations, civilians, and society at large. The proposed ethical principles are created to mitigate these risks while maintaining the upsides of offensive cyber security. This is achieved by incorporating the ethical principles into offensive cyber security training, equipping practitioners with ethical knowledge of how and when to use their acquired expertise.

Keywords
Ethical principles, Offensive cyber security training, Ethical hacking, Penetration testing, Privacy, Security training, Ethical guideline, Ethical framework
National Category
Computer Sciences, Information Systems
Identifiers
urn:nbn:se:kth:diva-293871 (URN)
10.1007/978-3-030-72465-8_2 (DOI)
2-s2.0-85107332440 (Scopus ID)
Conference
Privacy and Identity 2020 International Summer School, Maribor, Slovenia, September 21–23, 2020
Funder
SweGRIDS - Swedish Centre for Smart Grids and Energy Storage
Note

ISBN: 978-3-030-72465-8

QC 20210504

Available from: 2021-05-04 Created: 2021-05-04 Last updated: 2022-06-25. Bibliographically approved
Heiding, F., Lagerström, R., Wallström, A. & Omer, M.-A. (2020). Securing IoT Devices using Geographic and Continuous Login Blocking: A Honeypot Study. In: Proceedings of the 6th International Conference on Information Systems Security and Privacy 2020. Paper presented at the 6th International Conference on Information Systems Security and Privacy, ICISSP 2020, Valletta, Malta, February 25-27, 2020 (pp. 424-431). INSTICC
Securing IoT Devices using Geographic and Continuous Login Blocking: A Honeypot Study
2020 (English). In: Proceedings of the 6th International Conference on Information Systems Security and Privacy 2020, INSTICC, 2020, p. 424-431. Conference paper, Published paper (Refereed)
Abstract [en]

IoT (Internet of Things) devices have grown exponentially in recent years, both in the sheer number of devices and in the areas of application being introduced. Together with this rapid development, we are faced with an increased need for IoT security. Devices that were previously analogue, such as refrigerators, door locks, and cars, are now turning digital and are exposed to the threats posed by an Internet connection. This paper investigates how two existing security features (geographic IP blocking with GeoIP and rate-limited connections with fail2ban) can be used to enhance the security of IoT devices. We analyze the success of each method by comparing units with and without the security features, collecting and comparing data about the received attacks for both kinds. The results show that the GeoIP security feature can reduce attacks by roughly 93% and fail2ban by up to 99%. Further work in the field is encouraged to validate our findings, create better GeoIP tools, and better understand the potential of the security techniques at a larger scale. The security features are implemented in AWS instances made to simulate IoT devices and measured with honeypots and IDSs (Intrusion Detection Systems) that collect data from the received attacks. This research serves as foundational work to be extended later by implementing the security features in more devices, such as single-board computers that simulate IoT devices even more accurately.
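The rate-limiting feature studied above bans a host after too many failed logins within an observation window. A minimal Python sketch of that logic (the class and parameter names are ours, mirroring fail2ban's maxretry/findtime/bantime settings, not its actual implementation):

```python
from collections import defaultdict, deque


class LoginBlocker:
    """Ban an IP after `max_retry` failed logins within `find_time` seconds."""

    def __init__(self, max_retry=5, find_time=600, ban_time=3600):
        self.max_retry = max_retry
        self.find_time = find_time
        self.ban_time = ban_time
        self.failures = defaultdict(deque)  # ip -> timestamps of recent failures
        self.banned_until = {}              # ip -> time at which the ban lifts

    def is_banned(self, ip, now):
        return self.banned_until.get(ip, 0) > now

    def record_failure(self, ip, now):
        """Register a failed login at time `now`; return whether the IP is banned."""
        window = self.failures[ip]
        window.append(now)
        # Forget failures older than the observation window.
        while window and window[0] <= now - self.find_time:
            window.popleft()
        if len(window) >= self.max_retry:
            self.banned_until[ip] = now + self.ban_time
            window.clear()
        return self.is_banned(ip, now)
```

GeoIP blocking is the complementary feature: it drops connections whose source address resolves to a country outside an allow-list, before any login attempt is counted at all.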

Place, publisher, year, edition, pages
INSTICC, 2020
National Category
Computer Systems, Computer Sciences
Identifiers
urn:nbn:se:kth:diva-282961 (URN)
10.5220/0008954704240431 (DOI)
000570766300043 (ISI)
2-s2.0-85083023600 (Scopus ID)
Conference
6th International Conference on Information Systems Security and Privacy, ICISSP 2020, Valletta, Malta, February 25-27, 2020.
Note

Duplicate in Scopus 2-s2.0-85176319032

QC 20201019

Available from: 2020-10-04 Created: 2020-10-04 Last updated: 2023-11-23. Bibliographically approved
Heiding, F. A Framework for Evaluating Large Language Models’ Capability to Conduct Cyberattacks.
A Framework for Evaluating Large Language Models’ Capability to Conduct Cyberattacks
(English). Manuscript (preprint) (Other academic)
Abstract [en]

As large language models continue to evolve, they have the potential to automate and enhance various aspects of computer security, including red teaming assessments. In this article, we conduct 32 computer security attacks and compare their success rates when performed manually and with assistance from large language models. The security assessments target five connected devices commonly found in modern households (two door locks, one vacuum cleaner, one garage door, and one smart vehicle adapter). We use attack types such as denial of service, man-in-the-middle, authentication brute force, malware creation, and other common attacks. Each attack was performed twice, once by a human and once by an LLM, and scored for damage, reproducibility, exploitability, affected users, and discoverability based on the DREAD framework for computer security risk assessments. For the LLM-assisted attacks, we also scored the LLM's capacity to perform the attack autonomously. LLMs regularly increased the reproducibility and exploitability of attacks, but no LLM-based attack enhanced the damage inflicted on the device, and the language models often required manual input to complete the attack.
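A DREAD score, as used above, is conventionally the mean of the five component ratings. A minimal sketch under the common 0-10-per-component scale (the example attack and its ratings are invented for illustration; the paper's actual rubric may differ):

```python
def dread_score(damage, reproducibility, exploitability, affected_users, discoverability):
    """Mean of the five DREAD components, each rated on a 0-10 scale."""
    components = [damage, reproducibility, exploitability, affected_users, discoverability]
    if not all(0 <= c <= 10 for c in components):
        raise ValueError("each DREAD component must be in [0, 10]")
    return sum(components) / len(components)


# Hypothetical ratings for a brute-force attack on a smart door lock.
score = dread_score(damage=8, reproducibility=9, exploitability=6,
                    affected_users=7, discoverability=5)
print(f"DREAD risk score: {score}")
```

Scoring each attack twice, once human-performed and once LLM-assisted, then lets the per-component deltas show where the LLM helped (e.g. reproducibility) and where it did not (damage).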

National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-353244 (URN)
Note

Submitted to the International Conference on Learning Representations (ICLR)

QC 20240918

Available from: 2024-09-13 Created: 2024-09-13 Last updated: 2024-09-18. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0001-7884-966x
