Online appendix for the paper submitted to the Software Quality Journal.
Authors:
Rudolf Ferenc, Zoltán Tóth, Gergely Ladányi, István Siket, and Tibor Gyimóthy.
Abstract:
Bug datasets have been created and used by many researchers to build and validate novel bug prediction models.
In this work, our aim is to collect existing public bug datasets that are based on source code metrics and to unify their contents.
Furthermore, we wish to assess the plethora of collected metrics and the capabilities of the unified bug dataset in bug prediction.
We considered five public datasets, downloaded the corresponding source code for each system in these datasets, and performed source code analysis to obtain a common set of source code metrics.
In this way, we produced a unified bug dataset at both class and file level.
We investigated how the metric definitions and values diverge across the different bug datasets.
Finally, we used a decision tree algorithm to show the capabilities of the dataset in bug prediction.
We found that there are statistically significant differences in the values of the original and the newly calculated metrics; furthermore, the notations and definitions can differ severely.
We compared the bug prediction capabilities of the original and the extended metric suites (within-project learning).
Afterwards, we merged all classes (and files) into one large dataset consisting of 47,618 elements (43,744 for files), and we evaluated the bug prediction model built on this large dataset as well.
Finally, we also investigated cross-project capabilities of the bug prediction models and datasets.
We made the unified dataset publicly available for everyone.
By using a public unified dataset as the input for different bug prediction related investigations, researchers can make their studies reproducible and thus easier to validate and verify.
Keywords:
Bug dataset, code metrics, static code analysis, bug prediction
Online appendix:
Download link for the Unified Bug Dataset 1.2 (2019-12-21; ~920 MB).
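Example usage:
The following is a minimal sketch of how the decision-tree bug prediction experiments summarized above could be reproduced on a downloaded copy of the dataset. It is written with pandas and scikit-learn and assumes a hypothetical CSV export with one row per class, numeric metric columns, a "Project" column, and a "Number of Bugs" column; the actual file layout, column names, tool chain, and evaluation settings used in the paper may differ.

# Sketch: within-project and cross-project bug prediction with a decision tree.
# File name, column names, and project names below are hypothetical placeholders.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.metrics import f1_score

CSV_PATH = "unified-class-level.csv"   # hypothetical export of the unified dataset
LABEL_COLUMN = "Number of Bugs"        # hypothetical bug-count column
PROJECT_COLUMN = "Project"             # hypothetical project identifier column


def load_dataset(path):
    """Load the table and split it into metric features, binary labels, and project ids."""
    table = pd.read_csv(path)
    labels = (table[LABEL_COLUMN] > 0).astype(int)          # buggy vs. non-buggy
    projects = table[PROJECT_COLUMN]
    metrics = table.drop(columns=[LABEL_COLUMN, PROJECT_COLUMN])
    metrics = metrics.select_dtypes("number")                # keep numeric metrics only
    return metrics, labels, projects


def within_project(metrics, labels, projects, project_name):
    """10-fold cross-validated F-measure on a single project (within-project setting)."""
    mask = projects == project_name
    model = DecisionTreeClassifier(random_state=42)
    scores = cross_val_score(model, metrics[mask], labels[mask], cv=10, scoring="f1")
    return scores.mean()


def cross_project(metrics, labels, projects, train_project, test_project):
    """Train on one project and test on another (cross-project setting)."""
    train_mask = projects == train_project
    test_mask = projects == test_project
    model = DecisionTreeClassifier(random_state=42)
    model.fit(metrics[train_mask], labels[train_mask])
    return f1_score(labels[test_mask], model.predict(metrics[test_mask]))


if __name__ == "__main__":
    metrics, labels, projects = load_dataset(CSV_PATH)
    # "ant" and "camel" are placeholder project names.
    print("Within-project F-measure:", within_project(metrics, labels, projects, "ant"))
    print("Cross-project F-measure:", cross_project(metrics, labels, projects, "ant", "camel"))

The same loop over all project pairs would reproduce a full cross-project comparison; merging all rows regardless of project corresponds to the single large-dataset experiment mentioned in the abstract.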