Comparison of Different Impact Analysis Methods and Programmer’s Opinion - an Empirical Study
Gabriella Tóth, Péter Hegedűs, Judit Jász, Árpád Beszédes and Tibor Gyimóthy
In change impact analysis, obtaining guidance from
automatic tools would be highly desirable since this activity is
generally seen as a very difficult program comprehension problem.
However, since the notion of an ‘impact set’ (or dependency set) of a
specific change is usually very inexact and context dependent, the
approaches and algorithms for computing these sets are also very
diverse, producing quite different results. The question of which
algorithm finds program dependencies most efficiently has
preoccupied researchers for a long time, yet very few published
results compare the different algorithms with
what programmers consider to be real dependencies. In this work, we report
on our experiment conducted with this goal in mind using
a compact, easily comprehensible Java experimental software system,
simulated program changes, and a group of programmers who were asked to
perform impact analysis with the help of different tools and on the
basis of their programming experience. We show which algorithms turned
out to be the closest to the programmers’ opinion in this case study.
However, the results also confirmed that most existing algorithms need
further enhancement, and an effective methodology for using automated
tools to support impact analysis has yet to be found.
Keywords: Change impact
analysis, software dependencies, JRipples, BEFRIEND, software
co-change, SEA, static slicing, call graph, Java.