18 May 2016

Chakkrit Tantithamthavorn

The International Conference on Software Engineering (ICSE) - Doctoral Symposium

Defect prediction models are used to pinpoint risky software modules and to understand the pitfalls that led to past defects. However, the predictions and insights derived from these models may be inaccurate and unreliable if researchers do not consider the impact of the experimental components (e.g., datasets, metrics, and classifiers) of defect prediction modelling, and the lack of awareness and of practical guidelines in previous research can lead to invalid predictions and unreliable insights. In this thesis, we investigate the impact that experimental components have on the predictions and insights of defect prediction models. Through case studies of systems spanning both proprietary and open-source domains, we find that (1) noise in defect datasets, (2) the parameter settings of classification techniques, and (3) model validation techniques each have a large impact on the predictions and insights of defect prediction models, suggesting that researchers should select experimental components carefully in order to produce more accurate and reliable defect prediction models.
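To make the abstract's point about parameter sensitivity concrete, here is a minimal sketch (not from the paper; the dataset and one-rule classifier are invented for illustration) showing how a single classifier parameter can change a defect prediction model's precision and recall:

```python
# Minimal illustration: modules are (lines_of_code, is_defective) pairs,
# and the "classifier" is a one-rule model that flags a module as
# defect-prone when its size exceeds a threshold parameter.

def predict(modules, threshold):
    """Flag each module as defect-prone if its size exceeds the threshold."""
    return [loc > threshold for loc, _ in modules]

def precision_recall(modules, flagged):
    """Compute precision and recall of the flagged modules."""
    tp = sum(1 for (_, d), f in zip(modules, flagged) if d and f)
    fp = sum(1 for (_, d), f in zip(modules, flagged) if not d and f)
    fn = sum(1 for (_, d), f in zip(modules, flagged) if d and not f)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Synthetic "defect dataset": (lines of code, defective?) per module.
modules = [(120, False), (450, True), (80, False), (900, True),
           (300, False), (700, True), (200, False), (650, False)]

for threshold in (250, 600):
    p, r = precision_recall(modules, predict(modules, threshold))
    print(f"threshold={threshold}: precision={p:.2f}, recall={r:.2f}")
# threshold=250: precision=0.60, recall=1.00
# threshold=600: precision=0.67, recall=0.67
```

Even on this toy dataset, moving one parameter trades full recall for higher precision; the thesis studies the analogous effect for real classifiers, datasets, and validation techniques.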

PDF

@inproceedings{tantithamthavorn2016icseds,
    Author    = {Chakkrit Tantithamthavorn},
    Title     = {Towards a Better Understanding of the Impact of Experimental Components on Defect Prediction Modelling},
    Booktitle = {Companion Proceedings of the International Conference on Software Engineering (ICSE)},
    Pages     = {867--870},
    Year      = {2016}
}