Test them all, is it worth it?: Assessing configuration sampling on the JHipster Web development stack
Univ Namur, Fac Comp Sci, NaDI, PReCISE, Namur, Belgium.
CETIC, Charleroi, Belgium.
Univ Rennes, IRISA, CNRS, INRIA, Rennes, France.
Delft Univ Technol, SERG, Delft, Netherlands. ORCID iD: 0000-0002-0831-7606
2019 (English). In: Journal of Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 24, no. 2, p. 674-717. Article in journal (Refereed). Published.
Abstract [en]

Many approaches for testing configurable software systems start from the same assumption: it is impossible to test all configurations. This motivated the definition of variability-aware abstractions and sampling techniques to cope with large configuration spaces. Yet, there is no theoretical barrier that prevents the exhaustive testing of all configurations by simply enumerating them, if the effort required to do so remains acceptable. More than that: we believe there is a lot to be learned by systematically and exhaustively testing a configurable system. In this case study, we report on the first ever endeavour to test all possible configurations of the industry-strength, open source configurable software system JHipster, a popular code generator for web applications. We built a testing scaffold for the 26,000+ configurations of JHipster using a cluster of 80 machines during 4 nights, for a total of 4,376 hours (182 days) of CPU time. We find that 35.70% of configurations fail, and we identify the feature interactions that cause the errors. We show that sampling strategies (like dissimilarity and 2-wise): (1) are more effective at finding faults than the 12 default configurations used in the JHipster continuous integration; (2) can be too costly and exceed the available testing budget. We cross this quantitative analysis with the qualitative assessment of JHipster's lead developers.
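The 2-wise (pairwise) sampling mentioned in the abstract selects a subset of configurations such that every pair of option values appears together in at least one selected configuration. The following sketch illustrates the idea with a simple greedy covering heuristic over a toy option space; the option names are illustrative stand-ins, not the real JHipster feature model, and the paper's actual sampling tooling is not reproduced here.

```python
from itertools import combinations, product

# Toy option space standing in for JHipster-like configuration options
# (illustrative names only, not the real JHipster feature model).
OPTIONS = {
    "database": ["sql", "mongodb", "cassandra"],
    "auth": ["jwt", "oauth2", "session"],
    "build": ["maven", "gradle"],
}

def pairs_of(config):
    """All (option, value) pairs jointly covered by one configuration."""
    return set(combinations(sorted(config.items()), 2))

def greedy_pairwise_sample(options):
    """Greedily pick configurations until every 2-wise interaction is covered."""
    names = sorted(options)
    all_configs = [dict(zip(names, vals))
                   for vals in product(*(options[n] for n in names))]
    uncovered = set().union(*(pairs_of(c) for c in all_configs))
    sample = []
    while uncovered:
        # Pick the configuration covering the most still-uncovered pairs.
        best = max(all_configs, key=lambda c: len(pairs_of(c) & uncovered))
        sample.append(best)
        uncovered -= pairs_of(best)
    return sample

sample = greedy_pairwise_sample(OPTIONS)
print(f"{len(sample)} of {3 * 3 * 2} configurations cover all pairwise interactions")
```

The full space here has 18 configurations, but a pairwise sample needs far fewer; this is the cost/effectiveness trade-off the study evaluates against exhaustive enumeration.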

Place, publisher, year, edition, pages
Springer, 2019. Vol. 24, no. 2, p. 674-717
Keywords [en]
Configuration sampling, Variability-intensive system, Software testing, JHipster, Case study
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:kth:diva-249802
DOI: 10.1007/s10664-018-9635-4
ISI: 000462654200005
Scopus ID: 2-s2.0-85049998068
OAI: oai:DiVA.org:kth-249802
DiVA, id: diva2:1306509
Note

QC 20190424

Available from: 2019-04-24. Created: 2019-04-24. Last updated: 2019-04-24. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | Scopus

Authority records

Baudry, Benoit

Search in DiVA

By author/editor
Devroey, Xavier; Baudry, Benoit
By organisation
Software and Computer systems, SCS
In the same journal
Journal of Empirical Software Engineering
Computer and Information Sciences
