Performance of small constraints #1654
Comments
The following test may help in testing performance. It includes large non-convex QCQP feasibility problems from the power system domain, which can be solved with Ipopt. At the time of writing, the model build time is comparable to the solve time, about 2 seconds and 1 second respectively. @mlubin did a quick review. He found that type annotations and the @expression macro could provide a 20% performance boost, but thought that the overall model build time is most likely related to this issue.
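For illustration, here is a minimal sketch of the kind of change suggested above. The model, variables, and function name are hypothetical, not taken from the linked test:

```julia
using JuMP

# Hypothetical illustration of the suggestion above: use a typed container
# and build the sum with @expression instead of accumulating scalar terms
# one at a time.
function balance_constraint(model::Model, p::Vector{VariableRef}, demand::Float64)
    # @expression builds the whole AffExpr in one pass, avoiding the
    # intermediate copies created by repeated `expr = expr + p[i]` updates.
    injection = @expression(model, sum(p[i] for i in eachindex(p)))
    return @constraint(model, injection == demand)
end

model = Model()
@variable(model, p[1:10] >= 0)
balance_constraint(model, p, 42.0)
```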
Closing this for a few reasons:
I think in this case we're going to be unavoidably slower than 0.18, but that's a trade-off we made when we switched to using an OrderedDict instead of pushing terms into a vector and processing them later.
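A rough sketch of the two strategies being contrasted, using placeholder Symbol keys in place of JuMP variables (the function names and simplified term representation are illustrative, not JuMP internals):

```julia
using OrderedCollections  # provides OrderedDict

# Dict-based strategy (simplified): merge coefficients into an ordered
# dict as terms arrive, so duplicate variables are combined immediately.
function combine_with_dict(terms::Vector{Pair{Symbol,Float64}})
    d = OrderedDict{Symbol,Float64}()
    for (v, c) in terms
        d[v] = get(d, v, 0.0) + c
    end
    return d
end

# Vector-based strategy resembling JuMP 0.18 (simplified): push raw terms
# into parallel vectors and only combine duplicates in a later pass.
function combine_lazily(terms::Vector{Pair{Symbol,Float64}})
    vars = Symbol[]
    coefs = Float64[]
    for (v, c) in terms
        push!(vars, v)
        push!(coefs, c)
    end
    return vars, coefs
end
```

The dict-based approach pays hashing and allocation costs up front on every constraint, while the vector-based approach defers that work, which is the trade-off referred to above.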
This came up again in #3729. We should investigate other approaches for having a "small dict" as the backing data structure in AffExpr for the common case of an affine expression with one or two elements. (See MOI.Utilities.CleverDict for a related example.)
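A minimal sketch of what such a "small dict" might look like, as a hypothetical design rather than anything that exists in JuMP or MOI: keep up to two terms in fixed slots and only allocate a real OrderedDict once a third distinct key appears.

```julia
using OrderedCollections

# Hypothetical hybrid container: the first two key/value pairs live in
# fixed slots, and a real OrderedDict is only allocated once a third
# distinct key appears.
mutable struct SmallDict{K,V}
    slot1::Union{Nothing,Pair{K,V}}
    slot2::Union{Nothing,Pair{K,V}}
    rest::Union{Nothing,OrderedDict{K,V}}
end

SmallDict{K,V}() where {K,V} = SmallDict{K,V}(nothing, nothing, nothing)

# Add `value` to the coefficient stored for `key`, creating the entry if
# it does not exist yet.
function add_term!(d::SmallDict{K,V}, key::K, value::V) where {K,V}
    if d.rest !== nothing
        d.rest[key] = get(d.rest, key, zero(V)) + value
    elseif d.slot1 !== nothing && first(d.slot1) == key
        d.slot1 = key => last(d.slot1) + value
    elseif d.slot2 !== nothing && first(d.slot2) == key
        d.slot2 = key => last(d.slot2) + value
    elseif d.slot1 === nothing
        d.slot1 = key => value
    elseif d.slot2 === nothing
        d.slot2 = key => value
    else
        # Third distinct key: fall back to a real OrderedDict.
        d.rest = OrderedDict{K,V}(d.slot1, d.slot2, key => value)
        d.slot1 = nothing
        d.slot2 = nothing
    end
    return d
end
```

The trade-off is extra branching on every lookup and insertion, so a design like this would need benchmarking against the plain OrderedDict on larger expressions.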
If we implement our own OrderedDict, we could do it so that it does not allocate if we have very small (< 2 terms?) affine expressions. A side benefit is that we could (?) save a hash here: Line 25 in b734e12
I'm very hesitant to roll our own OrderedDict. I'd be more in favour of swapping to something like https://github.com/andyferris/Dictionaries.jl if it were faster.
I looked at changing to Dictionaries, but it is quite disruptive because:
Lines 105 to 138 in b734e12
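For context, a small illustration of why the swap is not a drop-in replacement, based on my reading of Dictionaries.jl's documented API (treat the specifics as assumptions to verify):

```julia
using Dictionaries  # https://github.com/andyferris/Dictionaries.jl

d = Dictionary{Symbol,Float64}()
insert!(d, :x, 1.0)   # new keys are added with insert!, not setindex!
set!(d, :x, 2.0)      # set! inserts or overwrites
d[:x] = 3.0           # setindex! only works for keys that already exist

# Iterating a Dictionary yields values rather than key => value pairs as
# Base.Dict does, so code written against Dict's iteration protocol must
# switch to pairs(d):
for (k, v) in pairs(d)
    println(k, " => ", v)
end
```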
We can think more and try a few options. If we get a nice speedup, this seems an OK thing to motivate a major release that only changes this; it would probably only affect a dozen users. Keeping the Dict API would be ideal, as changes would be minimal. Aside from being a 2.0 release, would there be any other problem with a 2.0 version with "just" this change?
My bar for releasing 2.0 is exceptionally high. I'd rather have JuMP 1.0 with some small issues than 2.0 with them fixed. The wider impression that JuMP is unstable, caused by the Julia 0.7-1.0 and JuMP 0.18-0.19-0.20-0.21-0.22-0.23 releases, takes a long time to wear away, and I'd rather be able to stand up and say we have a rock-solid product that is backward compatible for years, instead of giving the impression that, because we released JuMP 2.0 over some small issues, we might just as easily release 3.0.
Creating small constraints, such as a constraint involving only two variables, is rather costly compared to JuMP v0.18. The reason is that creating an OrderedDict of two elements is a lot slower than creating a Vector of two elements. Maybe we could create a custom dict, optimized for a small number of elements, that does not create the internal dictionary when there are two elements or fewer. That would avoid allocating a dictionary for expressions with a small number of elements.
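A minimal benchmark sketch of the comparison described above, using Symbol keys in place of JuMP variables (this is an illustrative snippet, not the original benchmark):

```julia
using BenchmarkTools, OrderedCollections

# Constructing a two-element OrderedDict versus a two-element Vector of
# pairs.  Even for two entries the dict must allocate and populate its
# internal tables, which is the overhead described above.
@btime OrderedDict(:x => 1.0, :y => 2.0)
@btime [:x => 1.0, :y => 2.0]
```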