PLUR: A Unifying, Graph-Based View of Program Learning, Understanding, and Repair
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS. ORCID iD: 0000-0002-6673-6438
2021 (English). In: Advances in Neural Information Processing Systems, Neural Information Processing Systems Foundation, 2021, Vol. 28, p. 23089-23101. Conference paper, published paper (refereed).
Abstract [en]

Machine learning for understanding and editing source code has recently attracted significant interest, with many developments in new models, new code representations, and new tasks. This proliferation can appear disparate and disconnected, making each approach seemingly unique and incompatible, thus obscuring the core machine learning challenges and contributions. In this work, we demonstrate that the landscape can be significantly simplified by taking a general approach of mapping a graph to a sequence of tokens and pointers. Our main result is to show that 16 recently published tasks of different shapes can be cast in this form, based on which a single model architecture achieves near or above state-of-the-art results on nearly all tasks, outperforming custom models like code2seq and alternative generic models like Transformers. This unification further enables multitask learning and a series of cross-cutting experiments about the importance of different modeling choices for code understanding and repair tasks. The full framework, called PLUR, is easily extensible to more tasks, and will be open-sourced (https://github.com/google-research/plur).
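
To make the "graph to a sequence of tokens and pointers" formulation concrete, here is a minimal, hypothetical sketch in Python. The Node/Edge classes and the output encoding are illustrative assumptions, not the actual PLUR API; see the open-sourced framework at https://github.com/google-research/plur for the real implementation.

```python
# A minimal, hypothetical sketch of the core idea: every task is cast as
# "map a graph to a sequence of tokens and pointers".
from dataclasses import dataclass

@dataclass
class Node:
    idx: int    # position in the node list; pointers refer to this index
    label: str  # e.g. a token or an AST node type

@dataclass
class Edge:
    src: int
    dst: int
    kind: str   # e.g. "child", "next_token", "data_flow"

# Input graph for the buggy expression `x = y + 1`, with tokens as nodes.
nodes = [Node(0, "x"), Node(1, "="), Node(2, "y"), Node(3, "+"), Node(4, "1")]
edges = [Edge(i, i + 1, "next_token") for i in range(len(nodes) - 1)]

# Output: a mixed sequence of vocabulary tokens and pointers into the input.
# A repair such as "replace `+` with `-`" can be emitted as:
output = [("POINTER", 3),   # locate the faulty node by pointing at it
          ("TOKEN", "-")]   # generate the replacement token

for kind, value in output:
    print(kind, value)
```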

Place, publisher, year, edition, pages
Neural Information Processing Systems Foundation, 2021. Vol. 28, p. 23089-23101
Series
Advances in Neural Information Processing Systems, ISSN 1049-5258
Keywords [en]
Codes (symbols), Machine learning, Repair, Code representation, Custom models, Different shapes, Generic modeling, Graph-based, Modeling architecture, Program learning, Single models, Source codes, State of the art, Graphic methods
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-316197
ISI: 000922928403005
Scopus ID: 2-s2.0-85129742594
OAI: oai:DiVA.org:kth-316197
DiVA id: diva2:1693683
Conference
35th Conference on Neural Information Processing Systems, NeurIPS 2021, Virtual/Online, 6-14 December 2021
Note

Part of proceedings: ISBN 978-1-7138-4539-3 

QC 20220907

Available from: 2022-09-07. Created: 2022-09-07. Last updated: 2023-11-17. Bibliographically approved.
In thesis
1. Source Code Representations of Deep Learning for Program Repair
2023 (English)Doctoral thesis, comprehensive summary (Other academic)
Alternative title [sv]
Källkodsrepresentationer för djupinlärning av Programreparation
Abstract [en]

Deep learning, leveraging artificial neural networks, has demonstrated significant capabilities in understanding intricate patterns within data. In recent years, its prowess has been extended to the vast domain of source code, where it aids in diverse software engineering tasks such as program repair, code summarization, and vulnerability detection. However, using deep learning for analyzing source code poses unique challenges. This thesis primarily focuses on the challenges of representing source code to deep learning models for the purpose of automated program repair, a task that aims to automatically fix program bugs.

Source code differs fundamentally from natural language: programs are large, and freely named identifiers create a vocabulary that is effectively unbounded, giving rise to the out-of-vocabulary challenge. Furthermore, code demands exact representation; even a minor error can cause a complete system failure. These characteristics underscore the importance of designing appropriate input and output representations for deep learning models, ensuring that they can process code efficiently and accurately for program repair. The core contributions of this thesis address these challenges.
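
For example, a subword split keeps freely invented identifiers representable even when the whole identifier is out-of-vocabulary. This is a toy illustration, not the thesis's tokenizer; the vocabulary below is a made-up stand-in for a learned one.

```python
import re

vocab = {"get", "user", "name", "set", "id"}  # hypothetical learned vocabulary

def subword_split(identifier: str) -> list[str]:
    """Split a camelCase identifier into lower-cased subword pieces."""
    return [p.lower() for p in re.findall(r"[A-Z]?[a-z]+|\d+", identifier)]

token = "getUserName"                       # a freely invented identifier
print(token in vocab)                       # False: the whole token is OOV
print(all(p in vocab for p in subword_split(token)))  # True: pieces are known
```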

First, we propose a compact input representation that captures the essential context for bug fixing: it retains the information needed to understand the bug while removing surrounding context that would only add noise to the model.
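
A minimal sketch of the idea, assuming a simple line-window heuristic (the actual context selection in the thesis may differ): keep the function header and a few lines around the buggy one, and drop the rest.

```python
def compact_context(lines, bug_line, window=1, header_line=0):
    """Keep the function header plus a small window around the buggy line."""
    keep = {header_line} | set(range(max(0, bug_line - window),
                                     min(len(lines), bug_line + window + 1)))
    parts = []
    for i in sorted(keep):
        prefix = "<BUG> " if i == bug_line else ""
        parts.append(prefix + lines[i])
    return "\n".join(parts)

source = [
    "def divide(a, b):",
    "    log.debug('dividing %s by %s', a, b)",  # noise for this bug
    "    check_types(a, b)",
    "    return a / b",                          # bug: no zero check
]
print(compact_context(source, bug_line=3))  # logging line is dropped
```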

Second, we tackle the out-of-vocabulary problem by harnessing techniques from natural language processing, capitalizing on existing code elements for bug fixes, and drawing parallels to the redundancy assumption in traditional program repair approaches.
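
One standard NLP technique matching this description is a copy (pointer) mechanism: the decoder either generates a token from a small fixed vocabulary or copies a token from the input, so rare identifiers never need to be in the vocabulary. The sketch below hard-codes scores that a real model would compute with attention; all names are illustrative.

```python
input_tokens = ["if", "userCount", ">", "0", ":"]
vocab = ["if", "else", ">", ">=", "0", ":"]

def decode_step(gen_scores, copy_scores):
    """Pick the higher-scoring action: generate from vocab or copy from input."""
    g_idx = max(range(len(gen_scores)), key=gen_scores.__getitem__)
    c_idx = max(range(len(copy_scores)), key=copy_scores.__getitem__)
    if gen_scores[g_idx] >= copy_scores[c_idx]:
        return vocab[g_idx]
    return input_tokens[c_idx]  # copy: OOV identifiers stay exact

# Copying wins here, so the rare identifier is reproduced verbatim.
print(decode_step(gen_scores=[0.1, 0.0, 0.2, 0.1, 0.0, 0.1],
                  copy_scores=[0.0, 0.9, 0.0, 0.0, 0.0]))  # -> userCount
```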

Third, to address the precision of source code, we integrate bug information into the input representation and pivot the model's output from complete code generation to concise edit instructions, offering a more focused and accurate approach.
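
As an illustration, a repair can then be emitted as a short sequence of edit operations rather than a regenerated function body. The operation names and tuple layout below are assumptions for the sketch, not the thesis's exact edit format.

```python
source = ["return", "a", "/", "b"]

# One small edit instead of regenerating the whole (mostly correct) code:
edits = [("replace", 2, "//")]  # (operation, token index, new token)

def apply_edits(tokens, edits):
    tokens = list(tokens)
    for op, idx, new in edits:
        if op == "replace":
            tokens[idx] = new
        elif op == "delete":
            del tokens[idx]
        elif op == "insert":
            tokens.insert(idx, new)
    return tokens

print(" ".join(apply_edits(source, edits)))  # return a // b
```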

Last, we show that by unifying the source code representation across multiple code-related tasks, we facilitate transfer and multi-task learning. Both learning strategies can help in mitigating issues faced when training on limited datasets.
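
A minimal sketch of what a unified representation across tasks could look like, in a text-to-text style with a task tag prepended so one model can train on all datasets at once; the tags and examples are invented for illustration.

```python
examples = [
    ("[REPAIR]",    "return a / b",            "replace 2 //"),
    ("[SUMMARIZE]", "def add(a, b): ...",      "add two numbers"),
    ("[VULN?]",     "strcpy(buf, user_input)", "yes"),
]

for tag, code, target in examples:
    model_input = f"{tag} {code}"  # identical input format for every task
    print(model_input, "->", target)
```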

Place, publisher, year, edition, pages
Stockholm, Sweden: KTH Royal Institute of Technology, 2023. p. xi, 117
Series
TRITA-EECS-AVL ; 2023:83
Keywords
Code Representation, Deep Learning, Program Repair
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-339763
ISBN: 978-91-8040-764-9
Public defence
2023-12-11, F3, Lindstedtsvägen 26, Stockholm, 09:00 (English)
Funder
Swedish Foundation for Strategic Research, Trustfull
Note

QC 20231117

Available from: 2023-11-17. Created: 2023-11-17. Last updated: 2023-11-21. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Scopus

Authority records

Chen, Zimin
