Predicting Test Case Verdicts Using Textual Analysis of Commited Code Churns

EasyChair Preprint 1177

16 pages · Date: June 12, 2019

Abstract

Background: Continuous Integration (CI) is an agile software development practice that involves producing several clean builds of the software per day. Creating these builds requires running a large number of automated test executions, which incurs high hardware costs and reduces development velocity. Goal: The goal of our research is to develop a method that reduces the number of test cases executed in each CI cycle. Method: We adopt a design research approach with an infrastructure provider company to develop a method that exploits Machine Learning (ML) to predict test case verdicts for committed source code. We train five different ML models on two data sets and evaluate their performance using two simple retrieval measures: precision and recall. Results: While training the ML models on the first data set of test executions revealed low performance, training on the curated data set improved performance with respect to both precision and recall. Conclusion: Our results indicate that the method is applicable when training the ML model on churns of small sizes.
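The abstract evaluates the ML models with precision and recall over predicted test case verdicts. As a minimal sketch of how these two measures are computed for a pass/fail verdict prediction task, the snippet below treats "fail" as the positive class; the verdict labels and predictions are invented for illustration and are not data from the paper:

```python
def precision_recall(actual, predicted, positive="fail"):
    """Compute precision and recall for one positive class.

    Precision = TP / (TP + FP): of the verdicts predicted as failing,
    how many actually failed. Recall = TP / (TP + FN): of the verdicts
    that actually failed, how many were predicted as failing.
    """
    tp = sum(1 for a, p in zip(actual, predicted) if a == positive and p == positive)
    fp = sum(1 for a, p in zip(actual, predicted) if a != positive and p == positive)
    fn = sum(1 for a, p in zip(actual, predicted) if a == positive and p != positive)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical verdicts for six test case executions.
actual    = ["fail", "pass", "fail", "fail", "pass", "pass"]
predicted = ["fail", "fail", "fail", "pass", "pass", "pass"]
p, r = precision_recall(actual, predicted)
print(f"precision={p:.2f} recall={r:.2f}")  # → precision=0.67 recall=0.67
```

In a test selection setting, high recall on the "fail" class matters most: a missed failing test case slips through the reduced CI cycle undetected.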

Keyphrases: verdicts, code churn, machine learning, test case selection

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:1177,
  author    = {Khaled Al-Sabbagh and Miroslaw Staron and Regina Hebig and Wilhelm Meding},
  title     = {Predicting Test Case Verdicts Using Textual Analysis of Commited Code Churns},
  howpublished = {EasyChair Preprint 1177},
  year      = {EasyChair, 2019}}