add support for ordering / ranking amplified tests by "interestingness" for the developer #196
Hi @monperrus
Yes, that is what is implemented. The call to reduce() in Amplification#amplification() does this job. In fact, applying I-Amplification to a hundred test cases can result in thousands of amplified test cases. Generating assertions for that many tests is not practicable, because of the instrumentation and the triple run needed to discard flaky values. I ran several different configurations (several months ago) to find the best ratio between time and test potential, and I found 200; even with this amount, it is still a bit long. Anyway, this is a very important problem in DSpot, and we should probably fix it ASAP. I investigated these I-Amplifications a little, and I think there are a lot of useless amplified tests, so DSpot may throw away good tests when selecting them randomly.
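To make the behavior described above concrete, here is a minimal sketch of such a random reduction step: amplified tests are capped at 200, selected randomly. The class and method names (ReduceSketch, reduce, MAX_NUMBER_OF_TESTS) are illustrative, not DSpot's actual API.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class ReduceSketch {
    // Assumed cap on amplified tests, as described in the comment above.
    static final int MAX_NUMBER_OF_TESTS = 200;

    // Randomly keep at most MAX_NUMBER_OF_TESTS elements; a fixed seed
    // is used here only to make the sketch reproducible.
    static <T> List<T> reduce(List<T> amplifiedTests, long seed) {
        if (amplifiedTests.size() <= MAX_NUMBER_OF_TESTS) {
            return amplifiedTests;
        }
        List<T> copy = new ArrayList<>(amplifiedTests);
        Collections.shuffle(copy, new Random(seed));
        return new ArrayList<>(copy.subList(0, MAX_NUMBER_OF_TESTS));
    }

    public static void main(String[] args) {
        List<Integer> tests = new ArrayList<>();
        for (int i = 0; i < 1000; i++) tests.add(i);
        // 1000 amplified tests are cut down to 200, chosen at random.
        System.out.println(reduce(tests, 42L).size()); // prints 200
    }
}
```

The drawback discussed in the thread is visible here: the selection is uniform, so a "good" amplified test has no better chance of surviving than a useless one.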
There is no particular order on this. I suppose they are ordered according to the history: the first amplified tests are at the beginning of the class / JSON report file.
OK, this may be something to work on in the future, but we have many higher-priority things now :-)
We could order tests by "localness / spreading". From paper:
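One way to read the "localness / spreading" idea is to rank an amplified test higher the more distinct classes it exercises. The sketch below is only a hypothetical interpretation of that metric; the names (SpreadingOrder, orderBySpreading) and the coverage representation are assumptions, not part of DSpot.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class SpreadingOrder {
    // Order test names by "spreading": tests covering more distinct
    // classes come first; very "local" tests come last.
    static List<String> orderBySpreading(Map<String, Set<String>> coveredClassesByTest) {
        List<String> tests = new ArrayList<>(coveredClassesByTest.keySet());
        tests.sort(Comparator.comparingInt(
                (String t) -> coveredClassesByTest.get(t).size()).reversed());
        return tests;
    }

    public static void main(String[] args) {
        // Toy coverage data: which classes each amplified test touches.
        Map<String, Set<String>> cov = new LinkedHashMap<>();
        cov.put("testA", new HashSet<>(Arrays.asList("Foo")));
        cov.put("testB", new HashSet<>(Arrays.asList("Foo", "Bar", "Baz")));
        cov.put("testC", new HashSet<>(Arrays.asList("Foo", "Bar")));
        System.out.println(orderBySpreading(cov)); // prints [testB, testC, testA]
    }
}
```

With such a ranking, the cap discussed earlier could keep the top-ranked tests instead of a uniform random sample.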
Hi Benjamin,
I read in the paper "I-amplification can result with a large set test method. For sake of computation time, we reduce this sets at 200 test cases maximum, selected randomly."
Is this what's implemented? How are the generated tests ordered before being presented to the user?