Nimrod
Authors: Leo Fernandes, Marcio Guimarães, Márcio Ribeiro, Rohit Gheyi, Marcio Delamaro, André Santos
- Overview
Mutation testing is a technique to evaluate the quality of a test suite. It has attracted a lot of interest because of its reputation as a powerful adequacy criterion for test suites and for its ability to guide test case generation. However, the presence of equivalent mutants increases costs, hindering its usage in industry. An equivalent mutant is syntactically different from the original program but has the same observable behavior. Such a mutant is therefore useless for mutation analysis. The Equivalent Mutant Problem has already been proven undecidable, so no complete automated solution exists. Moreover, manually detecting equivalent mutants is an error-prone and time-consuming task. Thus, even partial solutions can greatly help reduce this cost. To minimize this problem, we introduce an approach to suggest equivalent mutants. Our approach is based on automated behavioral testing, that is, test cases generated from the behavior of the original program. We perform static analysis to automatically generate tests directed at the entities impacted by the mutation. For each mutant analyzed, our approach can suggest the mutant as equivalent or non-equivalent. In the case of non-equivalent mutants, our approach provides a test case capable of killing the mutant. For the suggested equivalent mutants, we also provide a ranking indicating a strong or weak chance that the mutant is indeed equivalent. To evaluate the approach, we executed it against a set of 1,542 mutants manually classified in previous work as equivalent or non-equivalent. The results indicate that the approach is very effective in suggesting equivalent mutants. It reached more than 96% accuracy in five out of eight subjects studied. Compared with manual analysis of the surviving mutants, our approach takes a third of the time to suggest equivalent mutants and is 25 times faster at indicating non-equivalent ones.
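To make these notions concrete, the sketch below is an illustration only (it is not code from Nimrod or from the subject programs, and all names are hypothetical): it contrasts a mutant that preserves the original observable behavior with a non-equivalent mutant that a behavioral test derived from the original program can kill.

// Illustrative sketch of equivalent vs. non-equivalent mutants (hypothetical example).
public class EquivalentMutantExample {

    // Original program: counts the loop iterations until i reaches 10.
    static int original() {
        int count = 0;
        for (int i = 0; i < 10; i++) {
            count++;
        }
        return count; // always 10
    }

    // Equivalent mutant: "i < 10" mutated to "i != 10". Because i starts at 0
    // and increases by exactly 1, both conditions stop the loop at i == 10,
    // so the observable behavior is unchanged and no test can kill this mutant.
    static int equivalentMutant() {
        int count = 0;
        for (int i = 0; i != 10; i++) {
            count++;
        }
        return count; // still 10
    }

    // Non-equivalent mutant: "i < 10" mutated to "i <= 10". The loop runs one
    // extra iteration, so the return value changes from 10 to 11.
    static int nonEquivalentMutant() {
        int count = 0;
        for (int i = 0; i <= 10; i++) {
            count++;
        }
        return count; // 11
    }

    // A behavioral test generated from the original program expects 10. It keeps
    // passing on the equivalent mutant (which would therefore be suggested as
    // equivalent) and fails on the non-equivalent one (which is killed).
    public static void main(String[] args) {
        System.out.println("original:       " + original());            // 10
        System.out.println("equivalent:     " + equivalentMutant());    // 10 -> test still passes
        System.out.println("non-equivalent: " + nonEquivalentMutant()); // 11 -> test fails, mutant killed
    }
}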
-
Link to Download the Tool
-
Goal
The purpose of this study is to evaluate automated behavioral testing to suggest equivalent mutants, from the mutation tester's point of view, in the context of mutation analysis.
-
Questions
-
RQ1. How effective is Nimrod in suggesting equivalent mutants?
-
RQ2. How long does Nimrod take to analyze a mutant?
-
RQ3. What are the characteristics of the mutants that Nimrod incorrectly identifies as equivalent (false positives)?
-
RQ4. Which mutation operators commonly lead Nimrod to fail?
-
-
Subjects
- bisect
- commons-lang
- joda-time
- pamvotis
- triangle
- xstream
-
Results
-
Replicate
This is a step-by-step guide to replicating this study (in a Linux/Ubuntu environment).
- Clone the Nimrod project
$ git clone https://github.com/easy-software-ufal/nimrod-hunor.git
$ cd nimrod-hunor/
- Configure the $PYTHONPATH and create a virtual environment with virtualenv
$ export PYTHONPATH=$PWD
$ virtualenv -p [path to Python 3] env
$ source env/bin/activate
- Get into any subject folder and execute Nimrod
$ cd subjects/triangle/ [or any other subject]
$ python run_nimrod.py
- Observe the standard output and check the results in the nimrod_output/ folder