Test Benchmarks: Which One Now and in Future?
Artho, Cyrille (KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS). ORCID iD: 0000-0002-3656-1614
Benali, Adam (KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Theoretical Computer Science, TCS).
Software Competence Center Hagenberg, Hagenberg, Austria.
2021 (English). In: 2021 IEEE 21st International Conference on Software Quality, Reliability and Security (QRS 2021), Institute of Electrical and Electronics Engineers (IEEE), 2021, p. 328-336. Conference paper, Published paper (Refereed).
Abstract [en]

To evaluate software testing and program analysis tools, the research community relies on collections of sample programs (benchmarks) containing realistic code examples with defects. We investigated 23 benchmark projects for test generation in common programming languages and looked at how they can be categorized according to attributes such as programming language, number of programs and defects, license, and size. From our studies, it is evident that the development, and especially the maintenance, of benchmarks is a major challenge. Of the 23 benchmark projects we investigated, only four are still active today, and only nine have been updated since their initial release. With the underlying programming languages and platforms constantly evolving (often without full backward compatibility), this makes it hard to compare new tools to older ones. To exacerbate the situation, many benchmarks do not fully track the provenance and license of the code they include. Sustainable benchmark collections share three key factors: open hosting of complete (actual) data that allows community involvement, systematic maintenance of license and authorship data, and a unified machine-readable format for such data.
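
The abstract's final point, a unified machine-readable format for license and authorship data, can be illustrated with a minimal sketch. The Python below assumes a hypothetical schema (BenchmarkRecord, ProgramEntry, and all field names are illustrative assumptions, not the format the paper proposes) that serializes provenance and license data for a benchmark program to JSON:

    import json
    from dataclasses import asdict, dataclass, field

    # Hypothetical schema for one program included in a benchmark.
    # All field names are assumptions for illustration only.
    @dataclass
    class ProgramEntry:
        name: str
        language: str
        defects: int
        origin_url: str            # provenance: upstream source of the code
        license: str               # SPDX identifier of the included code
        authors: list[str] = field(default_factory=list)

    # Hypothetical top-level record for a benchmark collection.
    @dataclass
    class BenchmarkRecord:
        benchmark: str
        version: str
        hosting_url: str           # open hosting enables community involvement
        programs: list[ProgramEntry] = field(default_factory=list)

        def to_json(self) -> str:
            # One unified, machine-readable serialization of the record.
            return json.dumps(asdict(self), indent=2)

    if __name__ == "__main__":
        record = BenchmarkRecord(
            benchmark="example-bench",  # hypothetical benchmark name
            version="1.0.0",
            hosting_url="https://example.org/example-bench",
            programs=[
                ProgramEntry(
                    name="sample-program",
                    language="Java",
                    defects=3,
                    origin_url="https://example.org/upstream/sample",
                    license="Apache-2.0",
                    authors=["Upstream Author"],
                )
            ],
        )
        print(record.to_json())

Printing the record yields JSON that is both human-readable and easy to validate; using SPDX identifiers in the license field keeps licensing data consistent across entries.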

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2021, p. 328-336.
Series
IEEE International Conference on Software Quality, Reliability and Security, ISSN 2693-9185
Keywords [en]
Software verification, software testing, programming languages, benchmarks, software science, measurement, maintainability, licensing, sustainability
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-315539
DOI: 10.1109/QRS54544.2021.00044
ISI: 000814747000034
Scopus ID: 2-s2.0-85146197176
OAI: oai:DiVA.org:kth-315539
DiVA id: diva2:1682010
Conference
21st IEEE International Conference on Software Quality, Reliability and Security (QRS), 6-10 December 2021, Hainan, China
Note

Part of proceedings: ISBN 978-1-6654-5813-9

QC 20220708

Available from: 2022-07-08. Created: 2022-07-08. Last updated: 2023-06-08. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Artho, Cyrille
Benali, Adam
