Abstract

A fault localization technique takes as input a faulty program and produces as output a ranked list of suspicious code locations at which the program may be defective. When researchers propose a new fault localization technique, they evaluate it on programs with known faults, scoring the technique based on where in its output list the defective code appears. This enables comparison of multiple fault localization techniques to determine which one is better.

Previous research has evaluated fault localization techniques using artificial faults, generated either by mutation tools or manually. In other words, previous research has determined which fault localization techniques are best at finding artificial faults. However, it is not known which fault localization techniques are best at finding real faults. It is not obvious that the answer is the same, given previous work showing that artificial faults have both similarities to and differences from real faults.

We performed a replication study to evaluate 10 claims in the literature that compared fault localization techniques. We used 2273 artificial faults in 5 real-world programs. Our results refute 3 of the previous claims. Then, we evaluated the same 10 claims using 297 \emph{real} faults from the 5 programs. Every previous result was refuted or was statistically insignificant. In other words, our experiments show that artificial faults are not useful for predicting which fault localization techniques perform best on real faults.

In light of these results, we identified a design space that includes many previously studied fault localization techniques as well as hundreds of new ones. We experimentally determined which factors in the design space are most important, and then extended the design space with new techniques. Several of our novel techniques outperform all existing techniques, notably in terms of ranking defective code in the top-5 or top-10 reports.
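To make the input/output contract above concrete, consider how a spectrum-based fault localization technique assigns each statement a suspiciousness score from per-statement test coverage. The formula below is a minimal illustration using the Ochiai metric, one widely used spectrum-based technique; the metric choice and the notation ($\mathit{ef}$, $\mathit{ep}$, $F$) are ours for exposition and are not taken from this abstract.
\[
\mathit{susp}_{\text{Ochiai}}(s) \;=\; \frac{\mathit{ef}(s)}{\sqrt{F \cdot \bigl(\mathit{ef}(s) + \mathit{ep}(s)\bigr)}}
\]
Here $\mathit{ef}(s)$ and $\mathit{ep}(s)$ count the failing and passing tests that execute statement $s$, and $F$ is the total number of failing tests. Statements are ranked by descending suspiciousness, and an evaluation metric such as top-5 credits a technique when the defective statement appears among its first five reports.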