Matrix-free solution #117
Did you try setting this option? https://coin-or.github.io/Ipopt/OPTIONS.html#OPT_hessian_approximation
Yes, that is what I tried. However, I just realized my mistake: the `hessian_approximation` kwarg must be passed to `ipopt`, not to the model. For completeness (and perhaps for integration into the docs, to close the loop):

```julia
using ADNLPModels, NLPModels, NLPModelsIpopt

nlp = ADNLPModel(x -> (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2, [-1.2; 1.0]; matrix_free = true)
stats = ipopt(nlp; hessian_approximation = "limited-memory")
```
Ah, but it does not work with constraints. Is this a bug or a math limitation? (I'm new to this field and still learning.) Example:

```julia
using ADNLPModels, NLPModels, NLPModelsIpopt

n = 10
x0 = ones(n)
x0[1:2:end] .= -1.2
lcon = ucon = zeros(n - 2)
nlp = ADNLPModel(x -> sum((x[i] - 1)^2 + 100 * (x[i+1] - x[i]^2)^2 for i = 1:n-1), x0,
                 x -> [3 * x[k+1]^3 + 2 * x[k+2] - 5 + sin(x[k+1] - x[k+2]) * sin(x[k+1] + x[k+2]) +
                      4 * x[k+1] - x[k] * exp(x[k] - x[k+1]) - 3 for k = 1:n-2],
                 lcon, ucon; matrix_free = true)
stats = ipopt(nlp; hessian_approximation = "limited-memory")
```
Would you please paste the error message as well?
```
ERROR: ArgumentError: The AD backend ADNLPModels.EmptyADbackend() is not loaded. Please load the corresponding AD package.
Stacktrace:
  [1] throw_error(b::ADNLPModels.EmptyADbackend)
    @ ADNLPModels ~/.local/share/julia/packages/ADNLPModels/7iZmF/src/ad_api.jl:121
  [2] jacobian(b::ADNLPModels.EmptyADbackend, ::Function, ::Vector{Float64})
    @ ADNLPModels ~/.local/share/julia/packages/ADNLPModels/7iZmF/src/ad_api.jl:125
  [3] jac_coord!(b::ADNLPModels.EmptyADbackend, nlp::ADNLPModel{…}, x::Vector{…}, vals::SubArray{…})
    @ ADNLPModels ~/.local/share/julia/packages/ADNLPModels/7iZmF/src/ad_api.jl:397
  [4] jac_nln_coord!(nlp::ADNLPModel{…}, x::Vector{…}, vals::SubArray{…})
    @ ADNLPModels ~/.local/share/julia/packages/ADNLPModels/7iZmF/src/nlp.jl:596
  [5] jac_coord!(nlp::ADNLPModel{Float64, Vector{Float64}, Vector{Int64}}, x::Vector{Float64}, vals::Vector{Float64})
    @ NLPModels ~/.local/share/julia/packages/NLPModels/XBcWL/src/nlp/api.jl:249
  [6] (::NLPModelsIpopt.var"#eval_jac_g#4"{…})(x::Vector{…}, rows::Vector{…}, cols::Vector{…}, values::Vector{…})
    @ NLPModelsIpopt ~/.local/share/julia/packages/NLPModelsIpopt/WBABa/src/NLPModelsIpopt.jl:101
  [7] _Eval_Jac_G_CB(n::Int32, x_ptr::Ptr{…}, ::Int32, ::Int32, nele_jac::Int32, iRow::Ptr{…}, jCol::Ptr{…}, values_ptr::Ptr{…}, user_data::Ptr{…})
    @ Ipopt ~/.local/share/julia/packages/Ipopt/YDBAD/src/C_wrapper.jl:95
  [8] IpoptSolve(prob::Ipopt.IpoptProblem)
    @ Ipopt ~/.local/share/julia/packages/Ipopt/YDBAD/src/C_wrapper.jl:442
  [9] solve!(solver::IpoptSolver, nlp::ADNLPModel{…}, stats::SolverCore.GenericExecutionStats{…}; callback::Function, kwargs::@Kwargs{…})
    @ NLPModelsIpopt ~/.local/share/julia/packages/NLPModelsIpopt/WBABa/src/NLPModelsIpopt.jl:240
 [10] solve!
    @ ~/.local/share/julia/packages/NLPModelsIpopt/WBABa/src/NLPModelsIpopt.jl:161 [inlined]
 [11] #ipopt#6
    @ ~/.local/share/julia/packages/NLPModelsIpopt/WBABa/src/NLPModelsIpopt.jl:158 [inlined]
 [12] top-level scope
    @ ~/Dropbox/Projects/Julia/WBLNAOptim_new/scripts/example.jl:11
Some type information was truncated. Use `show(err)` to see complete types.
```
cc @tmigot Is this a limitation of `matrix_free`?
Hey @kiranshila @dpo! In this case, the solution is to do:
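A minimal sketch, assuming ADNLPModels accepts per-backend keyword arguments such as `hessian_backend`: instead of `matrix_free = true`, which disables all matrix backends (including the Jacobian that Ipopt requires in coordinate form), disable only the Hessian backend.

```julia
using ADNLPModels, NLPModels, NLPModelsIpopt

n = 10
x0 = ones(n)
x0[1:2:end] .= -1.2
lcon = ucon = zeros(n - 2)

f(x) = sum((x[i] - 1)^2 + 100 * (x[i+1] - x[i]^2)^2 for i = 1:n-1)
c(x) = [3 * x[k+1]^3 + 2 * x[k+2] - 5 + sin(x[k+1] - x[k+2]) * sin(x[k+1] + x[k+2]) +
        4 * x[k+1] - x[k] * exp(x[k] - x[k+1]) - 3 for k = 1:n-2]

# Disable only the Hessian backend: Ipopt's L-BFGS approximation replaces the
# Hessian, but the constraint Jacobian is still computed by AD.
nlp = ADNLPModel(f, x0, c, lcon, ucon; hessian_backend = ADNLPModels.EmptyADbackend)
stats = ipopt(nlp; hessian_approximation = "limited-memory")
```

With this setup the `EmptyADbackend` error above disappears, because `jac_coord!` no longer hits an empty backend.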
Btw, an improved implementation with an in-place constraint function might be:
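A sketch of the in-place variant, assuming the `ADNLPModel!` constructor, which takes a constraint function `c!(cx, x)` that fills a preallocated residual vector (the `hessian_backend` kwarg is the same assumption as above):

```julia
using ADNLPModels, NLPModels, NLPModelsIpopt

n = 10
x0 = ones(n)
x0[1:2:end] .= -1.2
lcon = ucon = zeros(n - 2)

f(x) = sum((x[i] - 1)^2 + 100 * (x[i+1] - x[i]^2)^2 for i = 1:n-1)

# c! writes the constraint values into the preallocated vector cx,
# avoiding one array allocation per constraint evaluation.
function c!(cx, x)
  for k = 1:n-2
    cx[k] = 3 * x[k+1]^3 + 2 * x[k+2] - 5 + sin(x[k+1] - x[k+2]) * sin(x[k+1] + x[k+2]) +
            4 * x[k+1] - x[k] * exp(x[k] - x[k+1]) - 3
  end
  return cx
end

nlp = ADNLPModel!(f, x0, c!, lcon, ucon; hessian_backend = ADNLPModels.EmptyADbackend)
stats = ipopt(nlp; hessian_approximation = "limited-memory")
```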
Sounds good, thank you!
Nice! I just copied this from one of your tutorials as an example :) — my problem is unfortunately much more complex.
Hey! I'm attempting to use this package to solve a general constrained optimization problem with a computationally expensive Hessian. Ipopt itself claims it can use L-BFGS to approximate it, but I can't figure out how to set that option here. Setting the `matrix_free` kwarg on the problem gives runtime errors because Ipopt still tries to evaluate the Hessian. Any help would be appreciated.