Neural Transfer Learning for Repairing Security Vulnerabilities in C Code
Chen, Zimin — KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS. ORCID iD: 0000-0002-6673-6438
Kommrusch, Steve — Colorado State University, USA.
Monperrus, Martin — KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS. ORCID iD: 0000-0003-3505-3383
2023 (English). In: IEEE Transactions on Software Engineering, ISSN 0098-5589, E-ISSN 1939-3520, Vol. 49, no. 1, p. 147-165. Article in journal (Refereed). Published.
Abstract [en]

In this paper, we address the problem of automatic repair of software vulnerabilities with deep learning. The major problem with data-driven vulnerability repair is that the few existing datasets of known confirmed vulnerabilities consist of only a few thousand examples. However, training a deep learning model often requires hundreds of thousands of examples. In this work, we leverage the intuition that the bug fixing task and the vulnerability fixing task are related and that the knowledge learned from bug fixes can be transferred to fixing vulnerabilities. In the machine learning community, this technique is called transfer learning. In this paper, we propose an approach for repairing security vulnerabilities named VRepair which is based on transfer learning. VRepair is first trained on a large bug fix corpus and is then tuned on a vulnerability fix dataset, which is an order of magnitude smaller. In our experiments, we show that a model trained only on a bug fix corpus can already fix some vulnerabilities. Then, we demonstrate that transfer learning improves the ability to repair vulnerable C functions. We also show that the transfer learning model performs better than a model trained with a denoising task and fine-tuned on the vulnerability fixing task. To sum up, this paper shows that transfer learning works well for repairing security vulnerabilities in C compared to learning on a small dataset.
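As a concrete illustration of the two-phase regime the abstract describes, the sketch below pretrains a toy seq2seq model on a large bug-fix corpus and then fine-tunes the same weights on a much smaller vulnerability-fix set. The model, data, and hyperparameters are invented stand-ins for exposition, not VRepair's actual Transformer or datasets.

    import torch
    import torch.nn as nn

    class TinySeq2Seq(nn.Module):
        """Toy stand-in for a sequence-to-sequence repair model."""
        def __init__(self, vocab=1000, d=64):
            super().__init__()
            self.embed = nn.Embedding(vocab, d)
            self.rnn = nn.GRU(d, d, batch_first=True)
            self.out = nn.Linear(d, vocab)

        def forward(self, x):
            h, _ = self.rnn(self.embed(x))
            return self.out(h)

    def toy_batches(n, batch=8, seq=16, vocab=1000):
        # Random token ids standing in for tokenized (buggy, fixed) pairs.
        return [(torch.randint(0, vocab, (batch, seq)),
                 torch.randint(0, vocab, (batch, seq))) for _ in range(n)]

    def train(model, batches, epochs, lr):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for src, tgt in batches:
                opt.zero_grad()
                loss = loss_fn(model(src).transpose(1, 2), tgt)
                loss.backward()
                opt.step()

    model = TinySeq2Seq()
    # Phase 1: train on the large, generic bug-fix corpus.
    train(model, toy_batches(100), epochs=2, lr=1e-3)
    # Phase 2: fine-tune the same weights on the roughly 10x smaller
    # vulnerability-fix dataset, typically with a lower learning rate.
    train(model, toy_batches(10), epochs=2, lr=1e-4)

The key point is that phase 2 starts from the weights learned in phase 1 rather than from random initialization, which is what lets the small vulnerability dataset suffice.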

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023. Vol. 49, no. 1, p. 147-165
Keywords [en]
Code, Computer bugs, seq2seq learning, Software, Task analysis, Training, transfer learning, Transformers, vulnerability fixing, C (programming language), Costs, Deep learning, Job analysis, Knowledge management, Personnel training, Program debugging, Security vulnerabilities, Repair
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering; Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-320561
DOI: 10.1109/TSE.2022.3147265
ISI: 001020827200008
Scopus ID: 2-s2.0-85124188450
OAI: oai:DiVA.org:kth-320561
DiVA id: diva2:1706456
Funder
Swedish Foundation for Strategic Research, Trustfull
Note

QC 20231117

Available from: 2022-10-26. Created: 2022-10-26. Last updated: 2023-11-17. Bibliographically approved.
In thesis
1. Source Code Representations of Deep Learning for Program Repair
2023 (English). Doctoral thesis, comprehensive summary (Other academic).
Alternative title [sv]
Källkodsrepresentationer för djupinlärning av Programreparation
Abstract [en]

Deep learning, leveraging artificial neural networks, has demonstrated significant capabilities in understanding intricate patterns within data. In recent years, its prowess has been extended to the vast domain of source code, where it aids in diverse software engineering tasks such as program repair, code summarization, and vulnerability detection. However, using deep learning for analyzing source code poses unique challenges. This thesis primarily focuses on the challenges of representing source code to deep learning models for the purpose of automated program repair, a task that aims to automatically fix program bugs.

Source code, inherently different from natural languages, is both large in size and unique in vocabulary due to freely named identifiers, thus presenting the out-of-vocabulary challenge. Furthermore, its inherent precision requires exact representation; even a minor error can cause complete system failures. These characteristics underscore the importance of designing appropriate input and output representations for deep learning models, ensuring that they can efficiently and accurately process code for the purposes of program repair. The core contributions of this thesis address these challenges.

First, we propose a compact input representation that encapsulates the essential context for bug fixing. The compact input representation retains the relevant information that is essential to understanding the bug while removing unnecessary context that might add noise to the model.
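A minimal sketch of what such a compact representation might look like: keep only a small window around the suspected buggy line and drop the rest of the function. The window sizes and the example function are illustrative assumptions, not the exact representation defined in the thesis.

    def compact_context(lines, bug_line, before=2, after=2):
        # Keep only the lines surrounding the suspected bug;
        # everything else in the function is dropped as noise.
        lo = max(0, bug_line - before)
        hi = min(len(lines), bug_line + after + 1)
        return lines[lo:hi]

    buggy_function = [
        "int sum(int *a, int n) {",
        "    int s = 0;",
        "    for (int i = 0; i <= n; i++)  /* off-by-one */",
        "        s += a[i];",
        "    return s;",
        "}",
    ]
    print("\n".join(compact_context(buggy_function, bug_line=2)))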

Second, we tackle the out-of-vocabulary problem by harnessing techniques from natural language processing, capitalizing on existing code elements for bug fixes, and drawing parallels to the redundancy assumption in traditional program repair approaches.
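One standard NLP technique for this problem is subword tokenization, which splits an unseen identifier into pieces that are already in the vocabulary. Below is a minimal greedy longest-match sketch; the vocabulary and the splitting rule are simplified assumptions rather than the thesis's actual tokenizer.

    def subword_tokenize(token, vocab):
        # Greedily match the longest known subword; fall back to a
        # single character when no known piece matches.
        pieces, i = [], 0
        while i < len(token):
            for j in range(len(token), i, -1):
                if token[i:j] in vocab or j == i + 1:
                    pieces.append(token[i:j])
                    i = j
                    break
        return pieces

    vocab = {"buffer", "Length", "copy", "str", "get"}
    print(subword_tokenize("bufferLength", vocab))  # ['buffer', 'Length']
    print(subword_tokenize("strcopy2", vocab))      # ['str', 'copy', '2']

A copy mechanism that reuses tokens already present in the buggy code is a complementary option, echoing the redundancy assumption of traditional program repair mentioned above.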

Third, to address the precision of source code, we integrate bug information into the input representation and pivot the model's output from complete code generation to concise edit instructions, offering a more focused and accurate approach.
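The difference between the two output styles can be shown on a tiny example. The marker tokens below (<StartLoc>, <ModStart>, and so on) are hypothetical names chosen for illustration; the thesis may use different tokens.

    # Input: bug information (here, a CWE id) plus the function with
    # the suspected region marked, rather than the bare function text.
    model_input = ("CWE-787 void f(char *s) { char b[8]; "
                   "<StartLoc> strcpy(b, s); <EndLoc> }")

    # Output style 1: regenerate the entire fixed function
    # (a long target sequence, so more chances to make a mistake).
    full_generation = ("void f(char *s) { char b[8]; "
                       "strncpy(b, s, sizeof(b) - 1); b[7] = '\\0'; }")

    # Output style 2: a concise edit instruction describing only the
    # replacement for the marked span.
    edit_instruction = ("<ModStart> strncpy(b, s, sizeof(b) - 1); "
                        "b[7] = '\\0'; <ModEnd>")

Predicting only the edit shortens the target sequence and leaves the untouched code exactly as it was, which matters given how unforgiving source code is of small mistakes.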

Last, we show that by unifying the source code representation across multiple code-related tasks, we facilitate transfer and multi-task learning. Both learning strategies can help in mitigating issues faced when training on limited datasets.
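A common way to unify tasks is a shared text-to-text format in which a task prefix tells a single model which job to perform; the prefixes and examples below are illustrative assumptions about what such a unified representation could look like, not the thesis's exact scheme.

    # With one input/output convention, a single seq2seq model (one set
    # of weights) can be trained on all tasks at once, enabling transfer
    # and multi-task learning even when each individual dataset is small.
    unified_examples = [
        ("fix bug: for (int i = 0; i <= n; i++) s += a[i];",
         "for (int i = 0; i < n; i++) s += a[i];"),
        ("fix vulnerability: strcpy(b, s);",
         "strncpy(b, s, sizeof(b) - 1);"),
        ("summarize: int max(int a, int b) { return a > b ? a : b; }",
         "return the larger of two integers"),
    ]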

Place, publisher, year, edition, pages
Sweden: KTH Royal Institute of Technology, 2023. p. xi, 117
Series
TRITA-EECS-AVL ; 2023:83
Keywords
Code Representation, Deep Learning, Program Repair
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-339763
ISBN: 978-91-8040-764-9
Public defence
2023-12-11, F3, Lindstedtsvägen 26, Stockholm, 09:00 (English)
Funder
Swedish Foundation for Strategic Research, Trustfull
Note

QC 20231117

Available from: 2023-11-17. Created: 2023-11-17. Last updated: 2023-11-21. Bibliographically approved.

Open Access in DiVA

No full text in DiVA
