Replies: 8 comments
-
This solved for me in 25 iterations (i.e. the minimum number) using (A, b) as above and P = I. It solved in 50 iterations with P = 0. Maybe a bug in your implementation? Are you setting l = u = b for the equalities?
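For reference, a minimal sketch of what "l = u = b for the equalities" looks like with the OSQP 0.6.x C API (the c_float/c_int types are from that version and may differ in others):

```c
/* Minimal sketch (OSQP 0.6.x C API assumed): encode the equality rows
 * A_eq x = b_eq by giving them identical lower and upper bounds. */
#include <string.h>
#include "osqp.h"

void set_equality_bounds(c_float *l, c_float *u, const c_float *b, c_int m_eq) {
    /* For each equality row i: l[i] = u[i] = b[i]. */
    memcpy(l, b, m_eq * sizeof(c_float));
    memcpy(u, b, m_eq * sizeof(c_float));
}
```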
-
Yes, I am using l = u = b for the equalities. Is there a separate way to set it?
-
No, that is the correct way of doing it. I was just trying to make a guess at what could be wrong, since the solver works fine for me (through the Julia interface) using your problem data. If you are doing this directly in C, then another common point of difficulty is passing an "A" matrix that is badly formatted. The solver will report nnz(A) + nnz(P) in the output. Are they as expected for your problem data?
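A "badly formatted" A usually means the compressed sparse column (CSC) arrays are inconsistent with the intended matrix. Here is a minimal sketch of how A is normally passed to the C API, mirroring the osqp_demo example shipped with OSQP 0.6.x (the 2x2 matrix is just an illustration, not the problem data from this thread):

```c
#include "osqp.h"

/* 2x2 illustration only: A = [[1, 0], [1, 1]] in CSC form. */
c_float A_x[3] = {1.0, 1.0, 1.0};   /* nonzero values, column by column  */
c_int   A_i[3] = {0, 1, 1};         /* row index of each nonzero         */
c_int   A_p[3] = {0, 2, 3};         /* column pointers; A_p[n] == nnz(A) */

void build_A(OSQPData *data) {
    data->m = 2;
    data->n = 2;
    data->A = csc_matrix(data->m, data->n, A_p[data->n], A_x, A_i, A_p);
    /* If the nnz(A) printed by the solver differs from A_p[n], the CSC
     * arrays do not describe the matrix you intended. */
}
```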
-
They are as expected. This problem only seems to appear when I run the solver in an MPC-like fashion, repeatedly calling it at around 10 Hz to replan a trajectory through obstacles based on the vehicle's last position. Running the planner one call at a time, at a normal human clicking speed of about 1 Hz, does not have this issue.
-
Are you running in embedded mode (such as using the direct C code), the full OSQP C code, or a higher-level interface? What settings are you using for the adaptive rho?
-
I am running the full OSQP C code. Currently the adaptive rho is at its default settings.
-
It is very hard to diagnose the problem without more information, but I would guess either:
With 1 & 2, it is perhaps more likely that those would happen if you hit an infeasible problem and try to initialise the solver at the next time step with the converged values from the infeasible problem that came before.
-
I would suggest trying to disable the automatic rho adaptation and instead either hard-code it to the value from the previous solve (which will probably be close to the desired value), or switch to doing rho adaptation based on the number of iterations passed. By default, rho is adapted using a criterion based on the current runtime versus setup time, so the only thing I can think of that would cause issues depending solely on the timing of when you run your code is that somehow this adaptation is causing problems. If that doesn't work, we will probably need to see example code so we can try to replicate this on our end.
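A minimal sketch of both workarounds, using the settings fields of the OSQP 0.6.x C API (field names may differ in other versions):

```c
#include "osqp.h"

/* Option 1: turn adaptation off and reuse the rho from the last solve. */
void use_fixed_rho(OSQPSettings *settings, c_float rho_from_last_solve) {
    osqp_set_default_settings(settings);
    settings->adaptive_rho = 0;                   /* disable automatic adaptation   */
    settings->rho          = rho_from_last_solve; /* hard-code the previous value   */
    settings->warm_start   = 1;                   /* keep warm starting between MPC steps */
}

/* Option 2: keep adaptation, but trigger it every fixed number of
 * iterations instead of based on measured setup/solve time. */
void use_iteration_based_rho(OSQPSettings *settings) {
    osqp_set_default_settings(settings);
    settings->adaptive_rho          = 1;
    settings->adaptive_rho_interval = 50; /* adapt every 50 iterations */
}
```

With a nonzero adaptive_rho_interval the adaptation no longer depends on wall-clock timing, which should remove the dependence on how fast you call the solver described above.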
-
So, if I understand primal infeasibility correctly, it means OSQP cannot find a solution within the feasible region. Can someone clarify whether I am correct, or how to debug this problem?
Here is an example problem where primal infeasible was triggered, with the constraints below.
The equality constraints are Ax = b with

A =
-0.747804 0.747804 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 -0.747804 0.747804
0.50329 -1.00658 0.50329 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0.50329 -1.00658 0.50329
-0.30109 0.903271 -0.903271 0.30109 0 0 0 0 0 0
0 0 0 0 0 0 -0.30109 0.903271 -0.903271 0.30109
0.15761 -0.630438 0.945658 -0.630438 0.15761 0 0 0 0 0
0 0 0 0 0 0.15761 -0.630438 0.945658 -0.630438 0.15761

and b =
2.95717
-3.8288
0.0242947
1e-06
-0.00034574
1e-06
-0.00575917
1e-06
-0.0637891
1e-06
and finally all variables satisfy -5 <= x <= 3.
Using matrix inversion we can easily find a solution that satisfies all the constraints, but OSQP, with its termination conditions, cannot find this solution and reports primal infeasible. In fact this is the only answer in the feasible region. Is it a problem if the feasible region of solutions is too small?
x =
2.95
2.97677376
2.99754752
2.99565462
2.59106669
-3.85318726
-3.8288
-3.8288
-3.8288
-3.8288
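As a sanity check on the data above, here is a small helper (hypothetical, written in C like the rest of the thread; pass in the A, b, bounds, and candidate x listed above) that reports how well a candidate point satisfies Ax = b and the box constraints:

```c
#include <math.h>
#include <stdio.h>

/* Hypothetical helper: report the worst equality residual and any bound
 * violation for a candidate point x, given dense row-major A (m x n),
 * right-hand side b, and box bounds l <= x <= u. */
void check_candidate(int m, int n, const double *A, const double *b,
                     const double *l, const double *u, const double *x) {
    double max_eq_residual = 0.0, max_bound_violation = 0.0;
    for (int i = 0; i < m; i++) {
        double r = -b[i];
        for (int j = 0; j < n; j++) r += A[i * n + j] * x[j];
        if (fabs(r) > max_eq_residual) max_eq_residual = fabs(r);
    }
    for (int j = 0; j < n; j++) {
        double v = fmax(l[j] - x[j], x[j] - u[j]);
        if (v > max_bound_violation) max_bound_violation = v;
    }
    printf("max |Ax - b| = %g, max bound violation = %g\n",
           max_eq_residual, max_bound_violation);
}
```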