Extracting solution of decision variables #128

Open

binkob opened this issue Jan 18, 2025 · 3 comments

binkob commented Jan 18, 2025

@sshin23

When solving the ExaModel, the result is obtained as

result = madnlp(em)

However, I could not find a way to extract the solution of a specific variable. I realized that using ".solution" returns the solution of all variables, but they are concatenated into a single vector, which becomes a problem at large scale. I could not find an example covering this.

For example, if I define p and q as decision variables, is there a way to extract only their solutions, the way sol_p = value.(p) does in JuMP?

Thanks


sshin23 commented Jan 18, 2025

Hi @binkob, have you checked the link below?

https://exanauts.github.io/ExaModels.jl/dev/guide/
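For reference, the linked guide extracts per-variable values with the solution function. A minimal sketch of that pattern, assuming a model built directly with the ExaCore API and using hypothetical variables p and q to mirror the question:

using ExaModels, MadNLP

c = ExaCore()
p = variable(c, 2)                                       # decision variable p
q = variable(c, 2)                                       # decision variable q
objective(c, (p[i] - 1)^2 + (q[i] + 1)^2 for i = 1:2)    # simple separable objective
em = ExaModel(c)

result = madnlp(em)

sol_p = solution(result, p)   # entries of the solution belonging to p only
sol_q = solution(result, q)   # entries belonging to q only

As far as the guide shows, solution(result, p) should return just the block of the flat result vector that belongs to p, so there is no need to index the combined .solution array by hand.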


binkob commented Jan 19, 2025

@sshin23
The example worked with the direct ExaModels formulation. However, my formulation is in JuMP, so I used the JuMP-to-ExaModels conversion. I know it is still experimental, but I found it flexible for my application.

After solving, when I try to fetch the variable x I get this error:

ERROR: type Array has no field offset

I also ran the jump.jl example and got the same error.

I might be missing something.

using ExaModels, JuMP, CUDA

N = 10
jm = Model()

@variable(jm, x[i = 1:N], start = mod(i, 2) == 1 ? -1.2 : 1.0)
@constraint(
    jm,
    s[i = 1:N-2],
    3x[i+1]^3 + 2x[i+2] - 5 + sin(x[i+1] - x[i+2])sin(x[i+1] + x[i+2]) + 4x[i+1] -
    x[i]exp(x[i] - x[i+1]) - 3 == 0.0
)
@objective(jm, Min, sum(100(x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i = 2:N))

em = ExaModel(jm; backend = CUDABackend())
em = ExaModel(jm)   # also tried without the CUDA backend

Note that only scalar objectives/constraints created via the @constraint and @objective API are supported; the older @NLconstraint and @NLobjective syntax is not.

We can solve the model using any of the solvers supported by ExaModels. For example, we can use MadNLP:

using MadNLPGPU

result = madnlp(em)

x = solution(result, x)   # this is the call that throws the error above

Thank you for your time.


sshin23 commented Jan 25, 2025

If you are using the JuMP interface and want to access the solution, you can use the following:

using ExaModels, JuMP, CUDA
using MadNLPGPU

set_optimizer(jm, ExaModels.MadNLPOptimizer)
optimize!(jm)
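After optimize! returns, the usual JuMP accessors should give per-variable values (a sketch, assuming the MOI wrapper reports results the standard way; jm and x are the model and variables from the snippet above):

sol_x = value.(x)              # solution of x only, as in any JuMP model
obj   = objective_value(jm)    # objective value at the solution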
