IPOPT iteration behavior

OpenSim Moco is a software toolkit to solve optimal control problems with musculoskeletal models defined in OpenSim using the direct collocation method.
Ross Miller
Posts: 375
Joined: Tue Sep 22, 2009 2:02 pm

IPOPT iteration behavior

Post by Ross Miller » Fri Feb 03, 2023 11:12 am

Hi all,

As a non-expert in the theory and practice of optimization, I'm often puzzled by the path IPOPT elects to take as it converges on a solution. An example from a current simulation is below: the objective was decreasing towards a seemingly good result (a good objective for this problem would be f = 0.55 or so) with a decreasing constraint violation (I use 1e-4, but 1e-3 is probably fine for this problem), when it suddenly jumps to what appears to be a worse solution, then iterates to progressively worse solutions for a while. This isn't fatal; it will often settle back onto an f = 0.55-ish solution and converge, but that can sometimes take thousands of iterations.

This behavior is typical for me: nearly all of the direct collocation simulations I've run in IPOPT, pre- and post-Moco, have behaved like this.

Mostly I'm wondering whether this iteration behavior is unique to me and the models and problems I use, or whether everyone typically sees this. I'm aware that the numbers IPOPT prints in its output here aren't the exact term that it's minimizing; it just seems odd to me that it so often seemingly "jumps" to solutions that appear so much worse than the current iterate.
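
(On that last point, as I understand it: the "objective" column is the true objective f(x), but within each barrier subproblem IPOPT is actually minimizing the log-barrier function for the current barrier parameter mu, shown in the lg(mu) column,

    phi_mu(x) = f(x) - mu * sum_i ln(s_i),

where the s_i are the slacks on the inequality and bound constraints. So a step can look worse in the printed f while still making progress on phi_mu or on feasibility.)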

Ross

Code:

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  3.3457153e+00 7.01e+03 3.79e+00   0.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  1.9323662e+00 6.40e+03 4.35e+03  -3.6 7.11e+00    -  3.15e-02 1.07e-01h  1
   2  1.1995643e+00 4.74e+03 4.00e+03  -3.6 2.65e+00    -  9.06e-02 2.66e-01h  1
   3  1.0245839e+00 4.09e+03 3.77e+03  -3.6 1.64e+00    -  1.56e-01 1.38e-01h  1
   4  9.3975322e-01 3.72e+03 3.61e+03  -3.7 1.35e+00    -  2.27e-01 9.10e-02h  1
   5  8.0317949e-01 3.03e+03 3.24e+03  -3.8 1.18e+00    -  6.48e-02 1.87e-01h  1
   6  6.9982057e-01 2.33e+03 2.71e+03  -3.8 9.33e-01    -  4.21e-03 2.31e-01h  1
   7  6.9377655e-01 2.28e+03 2.66e+03  -3.8 7.50e-01    -  4.87e-01 2.23e-02h  1
   8  6.2653298e-01 1.57e+03 1.94e+03  -4.1 7.07e-01    -  2.24e-01 3.14e-01h  1
   9  6.0246192e-01 1.09e+03 1.39e+03  -4.2 5.16e-01    -  3.83e-01 3.11e-01h  1
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
  10  5.9777886e-01 9.28e+02 1.19e+03  -4.4 3.72e-01    -  5.83e-01 1.48e-01h  1
  11  5.9306847e-01 6.91e+02 8.97e+02  -4.8 3.19e-01    -  6.23e-01 2.57e-01h  1
  12  5.9192123e-01 6.39e+02 8.32e+02  -5.2 2.46e-01    -  3.40e-01 7.47e-02h  1
  13  5.9067326e-01 8.25e+01 2.03e+02  -5.4 2.27e-01    -  4.25e-02 8.83e-01h  1
  14  8.4058595e-01 2.25e+01 5.09e+01  -5.4 7.66e-01  -2.0 5.21e-01 7.31e-01h  1
  15  8.4239288e-01 1.81e+01 1.41e+02  -5.8 1.72e+00    -  2.06e-02 2.11e-01h  1
  16  9.5865263e-01 1.75e+01 1.78e+02  -5.8 5.44e+00    -  3.97e-03 3.26e-02h  1
  17  1.0229900e+00 1.74e+01 1.95e+02  -5.8 8.66e+00    -  2.72e-02 8.62e-03h  5
  18  5.7639479e-01 1.26e-02 1.82e+03  -5.8 7.18e-01    -  4.21e-02 1.00e+00h  1
  19  5.6126314e-01 4.62e-02 3.73e+02  -5.8 5.54e-01    -  2.92e-01 9.02e-01h  1
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
  20  5.6066259e-01 4.02e-02 3.42e+02  -5.9 6.00e-01    -  4.99e-01 1.31e-01h  1
  21  5.5971847e-01 3.69e-02 3.32e+02  -6.2 6.37e-01  -3.0 5.37e-01 9.05e-02h  1
  22  5.5946204e-01 3.61e-02 3.24e+02  -6.6 1.47e-01  -0.7 3.90e-01 2.14e-02h  1
  23  5.5945661e-01 3.58e-02 3.23e+02  -6.8 3.73e-01  -1.2 1.29e-01 6.73e-03h  1
  24  5.6085558e-01 2.37e-02 2.27e+02  -6.9 2.83e-01  -0.8 5.78e-02 3.35e-01h  1
  25  5.6061220e-01 2.32e-02 2.21e+02  -6.9 9.18e-02  -0.3 1.35e-01 2.04e-02h  1
  26  5.6061553e-01 2.32e-02 2.21e+02  -6.9 5.95e-01    -  5.88e-02 1.83e-04h  1
  27  5.7194940e-01 5.03e-01 2.21e+02  -1.1 2.19e+02    -  3.68e-05 1.86e-04f  1
  28  1.2137577e+00 1.11e+02 3.29e+02  -1.1 1.87e+03    -  5.17e-05 4.23e-04f  1
  29  1.2380244e+00 1.11e+02 3.35e+02  -1.1 1.48e+01   0.0 5.86e-04 4.84e-03f  1
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
  30  1.5112590e+00 1.16e+02 3.97e+02  -1.1 2.15e+01  -0.4 2.67e-03 1.11e-02f  1
  31  1.7647221e+01 5.40e+03 2.62e+03  -1.1 3.63e+02    -  6.31e-03 9.76e-03f  1
  32  1.5277088e+01 4.94e+03 2.55e+03  -1.1 1.86e+00    -  1.00e+00 8.66e-02f  1
  33  7.5235633e+00 3.04e+03 1.93e+03  -1.1 1.27e+00    -  3.33e-01 3.92e-01f  1
  34  4.9574577e+00 1.15e+03 1.11e+03  -1.1 1.25e+00    -  6.81e-01 6.22e-01f  1
  35  5.1496975e+00 2.56e+02 8.88e+02  -1.1 1.30e+00    -  7.60e-01 8.96e-01f  1
  36  5.6307089e+00 8.68e+01 3.06e+02  -1.1 7.60e-01    -  1.00e+00 1.00e+00f  1
  37  2.0372669e+01 9.74e+03 4.71e+03  -1.1 5.41e+00    -  3.42e-01 9.59e-01f  1
  38  9.5111233e+00 3.56e+03 2.57e+03  -1.1 3.36e+00    -  5.92e-01 7.30e-01f  1
  39  8.7920687e+00 1.72e+03 1.15e+03  -1.1 2.71e+00    -  4.67e-01 5.28e-01f  1
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
  40  9.2542108e+00 1.86e+03 8.80e+02  -1.1 7.05e+00    -  5.16e-01 4.07e-01f  1
  41  1.3340931e+01 1.96e+04 8.72e+02  -1.1 1.11e+01    -  8.50e-01 6.74e-01f  1
  42  1.0272738e+01 7.18e+03 8.84e+02  -1.1 6.97e+00  -1.0 3.03e-01 6.65e-01h  1

Ton van den Bogert
Posts: 166
Joined: Thu Apr 27, 2006 11:37 am

Re: IPOPT iteration behavior

Post by Ton van den Bogert » Fri Feb 03, 2023 11:25 am

Ross, I can confirm that I often see this behavior in IPOPT and I have wondered about this too.

It definitely seems wasteful to iterate like this. Maybe it helps IPOPT escape from a local minimum, but then you wonder if there is a strategy behind it.

My earlier collocation work was with the SNOPT solver, which never did this. It had a "merit function" that combined the objective with the constraint violations, and it only accepted an iteration if the merit function decreased. It would sometimes do an iteration where the objective or the constraint violations increased, but you would never see both get worse simultaneously.
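
From memory, the idea was an l1-type merit function. Here is a toy sketch in Python to illustrate the acceptance rule; the penalty weight sigma is fixed here for simplicity, and this is not SNOPT's actual line-search scheme:

```python
def merit(f, c, sigma):
    """l1 merit function: objective plus a penalty on the constraint violation."""
    return f + sigma * sum(abs(ci) for ci in c)

def accept_step(f_old, c_old, f_new, c_new, sigma=10.0):
    """Accept the trial point only if the merit function decreases.

    The objective or the violation may individually increase, but an
    increase in one must be outweighed by a decrease in the other, so
    both can never get worse at once.
    """
    return merit(f_new, c_new, sigma) < merit(f_old, c_old, sigma)

# Objective rises, but feasibility improves enough: accepted.
print(accept_step(f_old=1.0, c_old=[0.5], f_new=1.2, c_new=[0.1]))  # True
# Both the objective and the violation get worse: rejected.
print(accept_step(f_old=1.0, c_old=[0.1], f_new=1.2, c_new=[0.5]))  # False
```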

I took a quick look on the IPOPT forum but it does not seem to have come up.

It has been a while since I read the paper on IPOPT [1]. I will take another look and report back if it gives me any ideas.

Ton van den Bogert

[1] https://link.springer.com/content/pdf/1 ... 0559-y.pdf

Ross Miller
Posts: 375
Joined: Tue Sep 22, 2009 2:02 pm

Re: IPOPT iteration behavior

Post by Ross Miller » Fri Feb 03, 2023 11:46 am

It is nice to know it's not just me!

I saw this behavior pre-Moco where I was always using symbolic gradients, so I don't think it is an issue unique to Moco's approach to gradients.

I haven't used SNOPT much, but I had trouble getting convergence with it; it seemed to need a very good initial guess. Maybe getting a solution on a coarse grid with IPOPT (I can usually use "not great" initial guesses for that) and then using that coarse solution as the guess in SNOPT would work well.

Ross

Pasha van Bijlert
Posts: 226
Joined: Sun May 10, 2020 3:15 am

Re: IPOPT iteration behavior

Post by Pasha van Bijlert » Wed Feb 08, 2023 4:15 am

Hello,

I've several times run into IPOPT behaviour that I wonder might be related. Basically, when optimizing for walking, IPOPT will sometimes get "stuck" iterating at very high objective costs (100-1000x the cost of a "good" solution), but with extremely low primal and dual infeasibilities (~1e-11). Each iteration only marginally improves the objective, and I've never let it fully run its course (I stop the optimizations at 2000 iterations). If I use this solution (non-optimal but not infeasible, e.g. really tiny steps) as an initial guess, IPOPT takes huge jumps and will often converge to a very good solution in 200-300 iterations. Because re-initializing the optimization circumvents the issue, I thought it might be related to slack variables being tightened too early in the optimization process (or a similar issue with resetting the "jump size" IPOPT can take per iteration).
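
In case anyone wants to experiment with this: by default IPOPT reads an ipopt.opt options file from the working directory, so you can try warm-start and barrier options without touching the wrapper. A sketch of such a file is below; these are real IPOPT option names, but the values are guesses on my part and I haven't verified that they cure this stall:

```
# ipopt.opt - read automatically by IPOPT from the working directory.

# Try the adaptive barrier update instead of the monotone default.
mu_strategy adaptive
# Start with a looser barrier parameter on a cold start.
mu_init 1e-1

# When restarting from a previous solution, keep the warm start
# close to the supplied point instead of pushing it off the bounds.
warm_start_init_point yes
warm_start_bound_push 1e-9
warm_start_mult_bound_push 1e-9
```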

This tends to happen when trying to generate a walking gait from an initial guess that isn't already a walking gait (either a dynamically inconsistent guess - the model just floats forwards - or dynamically consistent static standing). To bring this back to the behaviour you're describing, Ross: have you ever tried stopping the optimization right after the objective increases? Do you think a (qualitative) appraisal of such a solution could be insightful regarding what the optimizer is doing when it makes these (seemingly counter-productive) huge jumps?

Cheers,
Pasha
