A number of times people report bugs with extremely complicated examples. I often get lost before I even get to the problem.

Here is a recent example:
$$
-\frac{y \left(-1+x\right) \left(1+y\right)}{x+x^2+y-6 x y+x^2 y+y^2+x y^2}
$$
(I added markup above where the bug reporter didn't bother. At least for me, laziness in bug reporting is not something that motivates me to take the trouble to get involved. I have a lot of other things I could be doing.)
Lest you think that is an isolated example, here is another:
The same kind of thing isn't limited to just bugs in the answers Mathics produces. We can find it in performance bugs as well.
For example, here is a performance bug related to what is probably a recent regression in our code for 4.1.0.
```
Do[F[a,a,a,a,a,a,a,a,a,a,a];,{1000}]
```
The thing I'd like to focus on is the "1000", the number of times this `Do` loop is run. Clearly numbers like 1 and 2 are too small, but is 1000 really necessary?
It turns out that our performance bug has something to do with the function `from_python()` getting called much more often than before:
```
683886 function calls (665731 primitive calls) in 0.483 seconds

Ordered by: standard name

ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
...
 15018    0.044    0.000    0.047    0.000  atoms.py:1062(from_python)
```
versus 4.0.0:
```
602313 function calls (584117 primitive calls) in 0.359 seconds

  2016    0.007    0.000    0.013    0.000  expression.py:95(from_python)
```
Here, I find that something jumps out even with as few as 10 iterations of the loop. If you want the difference to stand out more, 50 iterations makes it clear enough.
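For what it's worth, here is a rough sketch of how a profile like the ones above can be produced. It assumes the `MathicsSession` helper from mathics-core and uses the standard-library `cProfile` and `pstats` modules; the expression is just the example from above, narrowed to 10 iterations.

```python
# Sketch: profile a narrowed version of the Do loop and report from_python calls.
# Assumes the MathicsSession helper from mathics-core.
import cProfile
import pstats

from mathics.session import MathicsSession

session = MathicsSession()

# 10 iterations is already enough for from_python() to stand out;
# use 50 if you want the difference to be more pronounced.
expr = "Do[F[a,a,a,a,a,a,a,a,a,a,a];,{10}]"

profiler = cProfile.Profile()
profiler.enable()
session.evaluate(expr)
profiler.disable()

# Restrict the report so the from_python() call counts are easy to spot.
stats = pstats.Stats(profiler)
stats.sort_stats("cumulative").print_stats("from_python")
```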
Why this matters is that if you are automating benchmarking and running it many times, added delay is not good. The slower the tests are, the less likely people are to run them.
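To make that concrete, here is a rough sketch of what a quick automated timing check might look like, again assuming `MathicsSession` from mathics-core; the iteration and repeat counts are just placeholder choices.

```python
# Sketch: time the narrowed expression a few times and report the best run.
# Assumes the MathicsSession helper from mathics-core.
import timeit

from mathics.session import MathicsSession

session = MathicsSession()
expr = "Do[F[a,a,a,a,a,a,a,a,a,a,a];,{50}]"

# Take the best of three runs; with only 50 iterations per run the whole
# check stays fast enough that people will actually run it.
elapsed = min(timeit.repeat(lambda: session.evaluate(expr), number=1, repeat=3))
print(f"best of 3: {elapsed:.3f} seconds")
```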
In conclusion, I will say that when you encounter a bug, it is likely that it won't appear in its simplest form; it takes work to figure out how to narrow a bug down so that it is.
Similarly, when initially coming up with benchmark tests, 1000 iterations might be a reasonable number to start out with. However, after a problem is identified, that is when you probably want to narrow things down.