2024 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 12, p. 42131-42146. Article in journal (Refereed) Published
Abstract [en]
AI programs, built using large language models, make it possible to automatically create phishing emails based on a few data points about a user. The V-Triad is a set of rules for manually designing phishing emails that exploit our cognitive heuristics and biases. In this study, we compare the performance of phishing emails created automatically by GPT-4 and manually using the V-Triad. We also combine GPT-4 with the V-Triad to assess their combined potential. A fourth group, exposed to generic phishing emails, served as our control group. We use a red teaming approach, simulating attackers and emailing 112 participants recruited for the study. The control group emails received a click-through rate of 19-28%, the GPT-generated emails 30-44%, emails generated by the V-Triad 69-79%, and emails generated by GPT combined with the V-Triad 43-81%. Each participant was asked to explain why they clicked or did not click the link in the email. These answers often contradict each other, highlighting the importance of individual differences. Next, we used four popular large language models (GPT, Claude, PaLM, and LLaMA) to detect the intention of phishing emails and compared the results to human detection. The language models demonstrated a strong ability to detect malicious intent, even in non-obvious phishing emails. They sometimes surpassed human detection, although they were often slightly less accurate than humans. Finally, we analyze the economic aspects of AI-enabled phishing attacks, showing how large language models increase the incentives for phishing and spear phishing by reducing their costs.
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
Phishing, large language models, social engineering, artificial intelligence
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-345143 (URN), 10.1109/ACCESS.2024.3375882 (DOI), 001192203500001 (), 2-s2.0-85187996490 (Scopus ID)
Note
QC 20240408
Available from: 2024-04-08. Created: 2024-04-08. Last updated: 2024-09-18. Bibliographically approved