Abstract | In a test suite, all test cases should be independent: no test should affect any other test's result, and running the tests in any order should produce the same results. This assumption of test independence is important so that tests behave consistently as designed. In addition, many downstream testing techniques, including test prioritization, test selection, and test parallelization, rely on it. However, this critical assumption often does not hold in practice.

This paper empirically investigates the impact of test dependence on three downstream testing techniques (test prioritization, selection, and parallelization) and proposes a general approach to mitigate that impact. It presents two sets of results.

First, we describe an empirical study assessing the impact of test dependence on 4 test prioritization, 6 test selection, and 2 test parallelization algorithms. Test dependence negatively affects the results of all of these algorithms. For example, an automatically-generated test suite for the XML-Security program contains 665 tests, and 111 of those tests yield a different result (success vs. fail) when the suite is parallelized across 16 CPUs.

Second, we present an approach that enhances each test prioritization, selection, and parallelization algorithm to respect test dependence, so that each test in a suite yields the same result before and after the downstream technique is applied. In an experimental evaluation, the enhanced algorithms worked as intended: test results were consistent even in the presence of test dependence, and the enhancements did not substantially compromise the effectiveness of the original algorithms. |