Publications (10 of 12)
Chen, Z., Fang, S. & Monperrus, M. (2024). Supersonic: Learning to Generate Source Code Optimizations in C/C++. IEEE Transactions on Software Engineering, 50(11), 2849-2864
Supersonic: Learning to Generate Source Code Optimizations in C/C++
2024 (English) In: IEEE Transactions on Software Engineering, ISSN 0098-5589, E-ISSN 1939-3520, Vol. 50, no 11, p. 2849-2864. Article in journal (Refereed). Published
Abstract [en]

Software optimization refines programs for resource efficiency while preserving functionality. Traditionally, it is a process done by developers and compilers. This paper introduces a third option, automated optimization at the source code level. We present Supersonic, a neural approach targeting minor source code modifications for optimization. Using a seq2seq model, Supersonic is trained on C/C++ program pairs (x(t), x(t+1)), where x(t+1) is an optimized version of x(t), and outputs a diff. Supersonic's performance is benchmarked against OpenAI's GPT-3.5-Turbo and GPT-4 on competitive programming tasks. The experiments show that Supersonic not only outperforms both models on the code optimization task but also minimizes the extent of the change with a model more than 600x smaller than GPT-3.5-Turbo and 3700x smaller than GPT-4.
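The (x(t), x(t+1)) → diff framing described in the abstract can be sketched as follows. This is an illustrative toy only: `difflib` stands in for the trained seq2seq model, and the C++ snippets are made-up examples, not data from the paper.

```python
# Illustrative sketch only: difflib stands in for the learned seq2seq model,
# showing the (x(t), x(t+1)) -> diff framing from the abstract.
import difflib

x_t = ["int sum = 0;", "for (int i = 0; i < n; i++) sum += a[i];"]  # x(t)
x_t1 = ["int sum = std::accumulate(a, a + n, 0);"]                  # x(t+1)

# A training target is the minimal edit turning x(t) into x(t+1):
diff = list(difflib.unified_diff(x_t, x_t1, lineterm=""))
print("\n".join(diff))
```

Emitting a diff rather than the full optimized program is what keeps the extent of the change small.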

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
Optimization, Codes, Training, Source coding, Task analysis, Decoding, Vectors, Code optimization, Seq2Seq learning, large language model
National Category
Software Engineering; Computer Sciences
Identifiers
urn:nbn:se:kth:diva-358594 (URN); 10.1109/TSE.2024.3423769 (DOI); 001369099900004 (); 2-s2.0-85199377843 (Scopus ID)
Funder
Swedish Foundation for Strategic Research, Trustfull; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

QC 20250124

Available from: 2025-01-24 Created: 2025-01-24 Last updated: 2025-02-03. Bibliographically approved
Yu, Z., Martinez, M., Chen, Z., Bissyande, T. F. F. & Monperrus, M. (2023). Learning the Relation Between Code Features and Code Transforms With Structured Prediction. IEEE Transactions on Software Engineering, 49(7), 3872-3900
Learning the Relation Between Code Features and Code Transforms With Structured Prediction
2023 (English) In: IEEE Transactions on Software Engineering, ISSN 0098-5589, E-ISSN 1939-3520, Vol. 49, no 7, p. 3872-3900. Article in journal (Refereed). Published
Abstract [en]

To effectively guide the exploration of the code transform space for automated code evolution techniques, we present in this article the first approach for structurally predicting code transforms at the level of AST nodes using conditional random fields (CRFs). Our approach first learns offline a probabilistic model that captures how certain code transforms are applied to certain AST nodes, and then uses the learned model to predict transforms for arbitrary new, unseen code snippets. Our approach involves a novel representation of both programs and code transforms. Specifically, we introduce the formal framework for defining the so-called AST-level code transforms and we demonstrate how the CRF model can be accordingly designed, learned, and used for prediction. We instantiate our approach in the context of repair transform prediction for Java programs. Our instantiation contains a set of carefully designed code features, deals with the training data imbalance issue, and comprises transform constraints that are specific to code. We conduct a large-scale experimental evaluation based on a dataset of bug fixing commits from real-world Java projects. The results show that when the popular evaluation metric top-3 is used, our approach predicts the code transforms with an accuracy varying from 41% to 53% depending on the transforms. Our model outperforms two baselines based on history probability and neural machine translation (NMT), suggesting the importance of considering code structure in achieving good prediction accuracy. In addition, a proof-of-concept synthesizer is implemented to concretize some repair transforms to get the final patches. The evaluation of the synthesizer on the Defects4j benchmark confirms the usefulness of the predicted AST-level repair transforms in producing high-quality patches.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
Code transform, big code, machine learning, program repair
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-334718 (URN); 10.1109/TSE.2023.3275380 (DOI); 001033501500012 (); 2-s2.0-85161054017 (Scopus ID)
Funder
Swedish Foundation for Strategic Research, Trustfull; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

QC 20231127

Available from: 2023-08-24 Created: 2023-08-24 Last updated: 2023-11-27. Bibliographically approved
Chen, Z., Salawa, M., Vijayvergiya, M., Petrović, G., Ivanković, M. & Just, R. (2023). MuRS: Mutant Ranking and Suppression using Identifier Templates. In: ESEC/FSE 2023 - Proceedings of the 31st ACM Joint Meeting European Software Engineering Conference and Symposium on the Foundations of Software Engineering. Paper presented at 31st ACM Joint Meeting European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/FSE 2023, San Francisco, United States of America, Dec 3 2023 - Dec 9 2023 (pp. 1798-1808). Association for Computing Machinery (ACM)
MuRS: Mutant Ranking and Suppression using Identifier Templates
2023 (English) In: ESEC/FSE 2023 - Proceedings of the 31st ACM Joint Meeting European Software Engineering Conference and Symposium on the Foundations of Software Engineering, Association for Computing Machinery (ACM), 2023, p. 1798-1808. Conference paper, Published paper (Refereed)
Abstract [en]

Diff-based mutation testing is a mutation testing approach that only mutates lines affected by a code change under review. This approach scales independently of the code-base size and introduces test goals (mutants) that are directly relevant to an engineer's goal such as fixing a bug, adding a new feature, or refactoring existing functionality. Google's mutation testing service integrates diff-based mutation testing into the code review process and continuously gathers developer feedback on mutants surfaced during code review. To enhance the developer experience, the mutation testing service uses a number of manually-written rules that suppress not-useful mutants - mutants that have consistently received negative developer feedback. However, while effective, manually implementing suppression rules requires significant engineering time. This paper proposes and evaluates MuRS, an automated approach that groups mutants by patterns in the source code under test and uses these patterns to rank and suppress future mutants based on historical developer feedback on mutants in the same group. To evaluate MuRS, we conducted an A/B testing study, comparing MuRS to the existing mutation testing service. Despite the strong baseline, which uses manually-written suppression rules, the results show a statistically significantly lower negative feedback ratio of 11.45% for MuRS versus 12.41% for the baseline. The results also show that MuRS is able to recover existing suppression rules implemented in the baseline. Finally, the results show that statement-deletion mutant groups received both the most positive and negative developer feedback, suggesting a need for additional context that can distinguish between useful and not-useful mutants in these groups. Overall, MuRS is able to recover existing suppression rules and automatically learn additional, finer-grained suppression rules from developer feedback.
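The grouping-and-ranking idea in the abstract can be reduced to a toy sketch. This is not Google's implementation: the template keys and the feedback history below are hypothetical, and a real system would track far richer source-code patterns.

```python
# Illustrative sketch only: group mutants by a template key and rank the
# groups by historical negative-feedback ratio; groups above a suppression
# threshold would not be surfaced during review.
from collections import defaultdict

feedback = [  # (template_key, was_negative) -- hypothetical review history
    ("DELETE_STMT", True), ("DELETE_STMT", True), ("DELETE_STMT", False),
    ("NEGATE_COND", False), ("NEGATE_COND", False),
]

stats = defaultdict(lambda: [0, 0])  # key -> [negative_count, total_count]
for key, negative in feedback:
    stats[key][1] += 1
    if negative:
        stats[key][0] += 1

# Rank templates from least to most negative feedback:
ranked = sorted(stats, key=lambda k: stats[k][0] / stats[k][1])
print(ranked)
```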

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023
Keywords
Code Review, Developer Feedback, Mutation Testing
National Category
Software Engineering
Identifiers
urn:nbn:se:kth:diva-341953 (URN); 10.1145/3611643.3613901 (DOI); 001148157800145 (); 2-s2.0-85180550494 (Scopus ID)
Conference
31st ACM Joint Meeting European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/FSE 2023, San Francisco, United States of America, Dec 3 2023 - Dec 9 2023
Note

Part of ISBN 9798400703270

QC 20240108

Available from: 2024-01-08 Created: 2024-01-08 Last updated: 2024-03-05. Bibliographically approved
Chen, Z., Kommrusch, S. J. & Monperrus, M. (2023). Neural Transfer Learning for Repairing Security Vulnerabilities in C Code. IEEE Transactions on Software Engineering, 49(1), 147-165
Neural Transfer Learning for Repairing Security Vulnerabilities in C Code
2023 (English) In: IEEE Transactions on Software Engineering, ISSN 0098-5589, E-ISSN 1939-3520, Vol. 49, no 1, p. 147-165. Article in journal (Refereed). Published
Abstract [en]

In this paper, we address the problem of automatic repair of software vulnerabilities with deep learning. The major problem with data-driven vulnerability repair is that the few existing datasets of known confirmed vulnerabilities consist of only a few thousand examples. However, training a deep learning model often requires hundreds of thousands of examples. In this work, we leverage the intuition that the bug fixing task and the vulnerability fixing task are related and that the knowledge learned from bug fixes can be transferred to fixing vulnerabilities. In the machine learning community, this technique is called transfer learning. In this paper, we propose an approach for repairing security vulnerabilities named VRepair which is based on transfer learning. VRepair is first trained on a large bug fix corpus and is then tuned on a vulnerability fix dataset, which is an order of magnitude smaller. In our experiments, we show that a model trained only on a bug fix corpus can already fix some vulnerabilities. Then, we demonstrate that transfer learning improves the ability to repair vulnerable C functions. We also show that the transfer learning model performs better than a model trained with a denoising task and fine-tuned on the vulnerability fixing task. To sum up, this paper shows that transfer learning works well for repairing security vulnerabilities in C compared to learning on a small dataset.
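The transfer-learning recipe the abstract describes can be sketched in a few lines. This is a hypothetical reduction, not VRepair's API: the `model` dict and the counting loop are placeholders for a real network and its gradient updates, and the corpus sizes are invented to mirror the "order of magnitude smaller" relation.

```python
# Hypothetical sketch: pretrain on a large bug-fix corpus, then fine-tune
# on a much smaller vulnerability-fix dataset (the transfer-learning recipe).
def train(model, dataset, epochs):
    for _ in range(epochs):
        for _example in dataset:
            model["updates"] += 1  # stand-in for one gradient step
    return model

model = {"updates": 0}
bug_fix_corpus = ["bug fix"] * 1000    # large source task
vuln_fix_dataset = ["vuln fix"] * 100  # target task, an order of magnitude smaller
model = train(model, bug_fix_corpus, epochs=1)    # pretraining
model = train(model, vuln_fix_dataset, epochs=5)  # fine-tuning
print(model["updates"])
```

The point of the ordering is that the fine-tuning stage starts from weights already shaped by the related bug-fixing task rather than from scratch.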

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
Codes, Computer bugs, seq2seq learning, Software, Task analysis, Training, transfer learning, Transformers, vulnerability fixing, C (programming language), Costs, Deep learning, Job analysis, Knowledge management, Personnel training, Program debugging, Code, Security vulnerabilities, Transformer, Repair
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering; Computer Sciences
Identifiers
urn:nbn:se:kth:diva-320561 (URN); 10.1109/TSE.2022.3147265 (DOI); 001020827200008 (); 2-s2.0-85124188450 (Scopus ID)
Funder
Swedish Foundation for Strategic Research, Trustfull
Note

QC 20231117

Available from: 2022-10-26 Created: 2022-10-26 Last updated: 2023-11-17. Bibliographically approved
He, Y., Chen, Z. & Le Goues, C. (2023). PreciseBugCollector: Extensible, Executable and Precise Bug-fix Collection. In: 2023 38TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING, ASE. Paper presented at 38th IEEE/ACM International Conference on Automated Software Engineering (ASE), SEP 11-15, 2023, Echternach, LUXEMBOURG (pp. 1899-1910). Institute of Electrical and Electronics Engineers (IEEE)
PreciseBugCollector: Extensible, Executable and Precise Bug-fix Collection
2023 (English) In: 2023 38TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING, ASE, Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 1899-1910. Conference paper, Published paper (Refereed)
Abstract [en]

Bug datasets are vital for enabling deep learning techniques to address software maintenance tasks related to bugs. However, existing bug datasets suffer from precision and scale limitations: they are either small-scale but precise with manual validation or large-scale but imprecise with simple commit message processing. In this paper, we introduce PreciseBugCollector, a precise, multi-language bug collection approach that overcomes these two limitations. PreciseBugCollector is based on two novel components: a) A bug tracker to map the codebase repositories with external bug repositories to trace bug type information, and b) A bug injector to generate project-specific bugs by injecting noise into the correct codebases and then executing them against their test suites to obtain test failure messages. We implement PreciseBugCollector against three sources: 1) A bug tracker that links to the National Vulnerability Database (NVD) to collect general-wise vulnerabilities, 2) A bug tracker that links to OSS-Fuzz to collect general-wise bugs, and 3) A bug injector based on 16 injection rules to generate project-wise bugs. To date, PreciseBugCollector comprises 1 057 818 bugs extracted from 2 968 open-source projects. Of these, 12 602 bugs are sourced from bug repositories (NVD and OSS-Fuzz), while the remaining 1 045 216 project-specific bugs are generated by the bug injector. Considering the challenge objectives, we argue that a bug injection approach is highly valuable for the industrial setting, since project-specific bugs align with domain knowledge, share the same codebase, and adhere to the coding style employed in industrial projects.
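The bug-injector component can be sketched as a toy. The injection rule and the function under test below are illustrative stand-ins, not one of the paper's 16 actual rules: mutate correct code, run the test suite, and keep the failure message as the bug's label.

```python
# Hypothetical sketch of the bug-injector idea: apply an injection rule to
# correct code, execute the tests, and record the failure message.
correct_src = "def add(a, b):\n    return a + b\n"

def inject(src):
    return src.replace("+", "-", 1)  # stand-in rule: operator replacement

def run_tests(src):
    ns = {}
    exec(src, ns)
    try:
        assert ns["add"](2, 3) == 5
        return None  # tests pass: no bug observed
    except AssertionError:
        return "test failure: add(2, 3) != 5"

buggy_src = inject(correct_src)
print(run_tests(buggy_src))
```

Only mutants that actually make a test fail yield a (buggy version, fixed version, failure message) triple, which is what keeps the collected bugs precise.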

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Series
IEEE ACM International Conference on Automated Software Engineering, ISSN 1527-1366
Keywords
Bug datasets, Program repair, Software testing and debugging
National Category
Software Engineering
Identifiers
urn:nbn:se:kth:diva-342856 (URN); 10.1109/ASE56229.2023.00163 (DOI); 001103357200176 (); 2-s2.0-85179006191 (Scopus ID)
Conference
38th IEEE/ACM International Conference on Automated Software Engineering (ASE), SEP 11-15, 2023, Echternach, LUXEMBOURG
Note

Part of proceedings ISBN 979-8-3503-2996-4

QC 20240201

Available from: 2024-02-01 Created: 2024-02-01 Last updated: 2024-02-06. Bibliographically approved
Chen, Z. (2023). Source Code Representations of Deep Learning for Program Repair. (Doctoral dissertation). Sweden: KTH Royal Institute of Technology
Source Code Representations of Deep Learning for Program Repair
2023 (English) Doctoral thesis, comprehensive summary (Other academic)
Alternative title [sv]
Källkodsrepresentationer för djupinlärning av Programreparation
Abstract [en]

Deep learning, leveraging artificial neural networks, has demonstrated significant capabilities in understanding intricate patterns within data. In recent years, its prowess has been extended to the vast domain of source code, where it aids in diverse software engineering tasks such as program repair, code summarization, and vulnerability detection. However, using deep learning for analyzing source code poses unique challenges. This thesis primarily focuses on the challenges of representing source code to deep learning models for the purpose of automated program repair, a task that aims to automatically fix program bugs.

Source code, inherently different from natural languages, is both large in size and unique in vocabulary due to freely named identifiers, thus presenting the out-of-vocabulary challenge. Furthermore, its inherent precision requires exact representation; even a minor error can cause complete system failures. These characteristics underscore the importance of designing appropriate input and output representations for deep learning models, ensuring that they can efficiently and accurately process code for the purposes of program repair. The core contributions of this thesis address these challenges.

First, we propose a compact input representation that encapsulates the essential context for bug fixing. The compact input representation retains the relevant information that is essential to understanding the bug while removing unnecessary context that might add noise to the model.

Second, we tackle the out-of-vocabulary problem by harnessing techniques from natural language processing, capitalizing on existing code elements for bug fixes, and drawing parallels to the redundancy assumption in traditional program repair approaches.

Third, to address the precision of source code, we integrate bug information into the input representation and pivot the model's output from complete code generation to concise edit instructions, offering a more focused and accurate approach.

Last, we show that by unifying the source code representation across multiple code-related tasks, we facilitate transfer and multi-task learning. Both learning strategies can help in mitigating issues faced when training on limited datasets.

Abstract [sv]

Djupinlärning, som utnyttjar artificiella neurala nätverk, har visat betydande förmågor att förstå de komplexa mönster som finns i data. Under de senaste åren har dess förmåga utökats till den enorma domänen av källkod, där den hjälper till med olika uppgifter inom mjukvaruutveckling såsom programreparation, kodsummering och detektering av sårbarheter. Att använda djupinlärning för att analysera källkod medför dock unika utmaningar. Denna avhandling fokuserar främst på utmaningarna med att representera källkod för djupinlärningsmodeller i syfte att reparera program.

Källkod, som i grunden skiljer sig från naturliga språk, är både stor i storlek och unik i ordförråd på grund av fritt namngivna identifierare, vilket medför problemet med ord utanför ordförrådet. Dessutom kräver dess naturliga precision en exakt representation; även ett mindre fel kan orsaka totala systemfel. Dessa egenskaper understryker vikten av att designa lämpliga in- och utdatarepresentationer för djupinlärningsmodeller, för att säkerställa att de kan bearbeta koden effektivt och korrekt för ändamålet att reparera program. De centrala bidragen i denna avhandling löser dessa utmaningar.

För det första föreslår vi en kompakt indatarepresentation som fångar den väsentliga kontexten för buggfixning. Den kompakta indatarepresentationen behåller den relevanta informationen som är nödvändig för att förstå buggen, samtidigt som den tar bort onödig kontext som kan vara brus för modellen.

För det andra löser vi problemet med ord utanför ordförrådet genom att utnyttja tekniker från naturlig språkbehandling, och dra nytta av befintliga kodelement för buggfixar, vilket drar paralleller till redundansantagandet i traditionella programreparationsmetoder.

För det tredje, för att hantera källkodens precision, integrerar vi bugginformation i indatarepresentationen och ändrar modellens utdata från fullständig kodgenerering till korta redigeringsinstruktioner, vilket erbjuder ett mer fokuserat och korrekt tillvägagångssätt.

Slutligen visar vi att genom att ena källkodsrepresentationen över flera kodrelaterade uppgifter underlättar vi överföring och fleruppgiftsinlärning. Båda inlärningsstrategierna kan mildra problem som uppstår vid träning på begränsade data.

Place, publisher, year, edition, pages
Sweden: KTH Royal Institute of Technology, 2023. p. xi, 117
Series
TRITA-EECS-AVL ; 2023:83
Keywords
Code Representation, Deep Learning, Program Repair
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-339763 (URN); 978-91-8040-764-9 (ISBN)
Public defence
2023-12-11, F3, Lindstedtsvägen 26, Stockholm, 09:00 (English)
Opponent
Supervisors
Funder
Swedish Foundation for Strategic Research, Trustfull
Note

QC 20231117

Available from: 2023-11-17 Created: 2023-11-17 Last updated: 2023-11-21. Bibliographically approved
Chen, Z., Fang, S. & Monperrus, M. (2023). Supersonic: Learning to Generate Source Code Optimizations in C/C++.
Supersonic: Learning to Generate Source Code Optimizations in C/C++
2023 (English) Manuscript (preprint) (Other academic)
Abstract [en]

Software optimization refines programs for resource efficiency while preserving functionality. Traditionally, it is a process done by developers and compilers. This paper introduces a third option, automated optimization at the source code level. We present SUPERSONIC, a neural approach targeting minor source code modifications for optimization. Using a seq2seq model, SUPERSONIC is trained on C/C++ program pairs (x(t), x(t+1)), where x(t+1) is an optimized version of x(t), and outputs a diff. SUPERSONIC's performance is benchmarked against OpenAI's GPT-3.5-Turbo and GPT-4 on competitive programming tasks. The experiments show that SUPERSONIC not only outperforms both models on the code optimization task but also minimizes the extent of the change with a model more than 600x smaller than GPT-3.5-Turbo and 3700x smaller than GPT-4.

National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-339526 (URN)
Funder
Swedish Foundation for Strategic Research, Trustfull
Note

QC 20231120

Available from: 2023-11-13 Created: 2023-11-13 Last updated: 2023-12-01. Bibliographically approved
Baudry, B., Chen, Z., Etemadi, K., Fu, H., Ginelli, D., Kommrusch, S., . . . Yu, Z. (2021). A Software-Repair Robot Based on Continual Learning. IEEE Software, 38(4), 28-35
A Software-Repair Robot Based on Continual Learning
2021 (English) In: IEEE Software, ISSN 0740-7459, E-ISSN 1937-4194, Vol. 38, no 4, p. 28-35. Article in journal (Refereed). Published
Abstract [en]

Software bugs are common, and correcting them accounts for a significant portion of the costs in the software development and maintenance process. In this article, we discuss R-Hero, our novel system for learning how to fix bugs based on continual training.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2021
Keywords
Maintenance engineering, Computer bugs, Software development management, Bot (Internet), Training data, Machine learning
National Category
Software Engineering Computer Sciences
Identifiers
urn:nbn:se:kth:diva-299103 (URN); 10.1109/MS.2021.3070743 (DOI); 000664984000005 (); 2-s2.0-85103775192 (Scopus ID)
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); Swedish Foundation for Strategic Research, Trustfull
Note

QC 20210805

Available from: 2021-08-05 Created: 2021-08-05 Last updated: 2022-06-25. Bibliographically approved
Gu, J., Chen, Z. & Monperrus, M. (2021). Multimodal Representation for Neural Code Search. In: 2021 IEEE international conference on software maintenance and evolution (ICSME 2021). Paper presented at 37th IEEE International Conference on Software Maintenance and Evolution, ICSME 2021, Luxembourg City, 27 September 2021 through 1 October 2021 (pp. 483-494). Institute of Electrical and Electronics Engineers (IEEE)
Multimodal Representation for Neural Code Search
2021 (English) In: 2021 IEEE international conference on software maintenance and evolution (ICSME 2021), Institute of Electrical and Electronics Engineers (IEEE), 2021, p. 483-494. Conference paper, Published paper (Refereed)
Abstract [en]

Semantic code search is about finding semantically relevant code snippets for a given natural language query. In the state-of-the-art approaches, the semantic similarity between code and query is quantified as the distance of their representation in the shared vector space. In this paper, to improve the vector space, we introduce tree-serialization methods on a simplified form of AST and build the multimodal representation for the code data. We conduct extensive experiments using a single corpus that is large-scale and multi-language: CodeSearchNet. Our results show that both our tree-serialized representations and multimodal learning model improve the performance of code search. Last, we define intuitive quantification metrics oriented to the completeness of semantic and syntactic information of the code data, to help understand the experimental findings.
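The tree-serialization idea can be illustrated with a toy traversal. This sketch is not the paper's exact scheme (which operates on a simplified AST form): it uses Python's own `ast` module and a plain pre-order walk to show how a tree becomes a token sequence a sequence encoder can consume alongside the query text.

```python
# Illustrative sketch of tree serialization: pre-order traversal of a
# Python AST, yielding node-type tokens in traversal order.
import ast

def serialize(node):
    tokens = [type(node).__name__]
    for child in ast.iter_child_nodes(node):
        tokens.extend(serialize(child))
    return tokens

tokens = serialize(ast.parse("x = a + b"))
print(tokens)
```

Different serialization orders (pre-order, in-order, hybrid schemes) preserve different amounts of syntactic structure, which is what the paper's completeness metrics are meant to quantify.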

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2021
Series
Proceedings-IEEE International Conference on Software Maintenance, ISSN 1063-6773
Keywords
multimodal learning, program representation, information completeness, tree serialization, code search
National Category
Business Administration; Human Geography; Communication Systems
Identifiers
urn:nbn:se:kth:diva-312782 (URN); 10.1109/ICSME52107.2021.00049 (DOI); 000790782500043 (); 2-s2.0-85123058224 (Scopus ID)
Conference
37th IEEE International Conference on Software Maintenance and Evolution, ICSME 2021, Luxembourg City, 27 September 2021 through 1 October 2021
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

QC 20220819

Part of proceedings: ISBN 978-1-6654-2882-8

Available from: 2022-05-23 Created: 2022-05-23 Last updated: 2022-08-19. Bibliographically approved
Chen, Z., Hellendoorn, V. J., Maniatis, P., Lamblin, P., Manzagol, P.-A., Tarlow, D. & Moitra, S. (2021). PLUR: A Unifying, Graph-Based View of Program Learning, Understanding, and Repair. In: Advances in Neural Information Processing Systems. Paper presented at 35th Conference on Neural Information Processing Systems, NeurIPS 2021, Virtual/Online, 6 - 14 December 2021 (pp. 23089-23101). Neural Information Processing Systems Foundation, 28
PLUR: A Unifying, Graph-Based View of Program Learning, Understanding, and Repair
2021 (English) In: Advances in Neural Information Processing Systems, Neural Information Processing Systems Foundation, 2021, Vol. 28, p. 23089-23101. Conference paper, Published paper (Refereed)
Abstract [en]

Machine learning for understanding and editing source code has recently attracted significant interest, with many developments in new models, new code representations, and new tasks. This proliferation can appear disparate and disconnected, making each approach seemingly unique and incompatible, thus obscuring the core machine learning challenges and contributions. In this work, we demonstrate that the landscape can be significantly simplified by taking a general approach of mapping a graph to a sequence of tokens and pointers. Our main result is to show that 16 recently published tasks of different shapes can be cast in this form, based on which a single model architecture achieves near or above state-of-the-art results on nearly all tasks, outperforming custom models like code2seq and alternative generic models like Transformers. This unification further enables multitask learning and a series of cross-cutting experiments about the importance of different modeling choices for code understanding and repair tasks. The full framework, called PLUR, is easily extensible to more tasks, and will be open-sourced (https://github.com/google-research/plur).
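The "sequence of tokens and pointers" output format the abstract describes can be sketched as a toy decode step. Everything below is hypothetical: the node list, the prediction, and the tuple encoding are illustrative, not PLUR's actual data structures.

```python
# Hypothetical sketch of a token-and-pointer output: a pointer copies a
# node from the input graph instead of generating its text from scratch.
input_nodes = ["def", "f", "(", "x", ")", ":", "return", "y"]  # toy node list

# toy prediction: emit the literal token "return", then point at node 3 ("x")
output = [("token", "return"), ("pointer", 3)]

decoded = [input_nodes[v] if kind == "pointer" else v for kind, v in output]
print(decoded)
```

Pointers are what let one output format cover tasks as different as repair and variable misuse: identifiers never need to be in the output vocabulary, only reachable in the input.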

Place, publisher, year, edition, pages
Neural Information Processing Systems Foundation, 2021
Series
Advances in neural information processing systems, ISSN 1049-5258
Keywords
Codes (symbols), Machine learning, Repair, Code representation, Custom models, Different shapes, Generic modeling, Graph-based, Modeling architecture, Program learning, Single models, Source codes, State of the art, Graphic methods
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-316197 (URN); 000922928403005 (); 2-s2.0-85129742594 (Scopus ID)
Conference
35th Conference on Neural Information Processing Systems, NeurIPS 2021, Virtual/Online, 6 - 14 December 2021
Note

Part of proceedings: ISBN 978-1-7138-4539-3 

QC 20220907

Available from: 2022-09-07 Created: 2022-09-07 Last updated: 2023-11-17. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-6673-6438
