How to constrain the position of a hand during predicting?

OpenSim Moco is a software toolkit to solve optimal control problems with musculoskeletal models defined in OpenSim using the direct collocation method.
Aaron Fox
Posts: 289
Joined: Sun Aug 06, 2017 10:54 pm

Re: How to constrain the position of a hand during predicting?

Post by Aaron Fox » Mon Jun 07, 2021 6:37 pm

Hi Simon,

I sort of took a pragmatic/practical approach in setting these task weights. I thought firstly about what the most important element of the task was - and that was the physical movement, hence the marker final goal became the most heavily weighted term. I then somewhat arbitrarily set this as 5x as important as the 'control' goal in the task - I'd agree that there could be some fluctuations when altering these goal weights. A reason for this is that as you increase the marker goal weight, the control weight effectively becomes less important, so perhaps the model slightly changes some muscle activations and movements because minimising these is deemed less important.
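
For anyone wanting to replicate that weighting scheme, this is roughly what it looks like with the Moco Python bindings - treat it as a sketch rather than my exact setup, since the model file, marker path and target location below are just placeholders (and it's worth checking the goal setter names against the Moco API docs):

Code:

import opensim as osim

study = osim.MocoStudy()
problem = study.updProblem()
problem.setModelProcessor(osim.ModelProcessor("arm_model.osim"))  # placeholder model file

# Marker final goal: the 'physical movement' term, weighted 5x the control goal.
marker_goal = osim.MocoMarkerFinalGoal("hand_position")
marker_goal.setWeight(5.0)
marker_goal.setPointName("/markerset/hand_marker")          # placeholder marker path
marker_goal.setReferenceLocation(osim.Vec3(0.3, 1.0, 0.2))  # placeholder target (m)
problem.addGoal(marker_goal)

# Control effort goal: the baseline 'control' term with weight 1.
problem.addGoal(osim.MocoControlGoal("effort", 1.0))

solver = study.initCasADiSolver()
solver.set_num_mesh_intervals(50)  # arbitrary mesh density for the sketch
solution = study.solve()

Re-solving that with a range of values passed to marker_goal.setWeight(), keeping everything else fixed, is essentially the sensitivity analysis being discussed here.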

I don't think there is a way to test every single combination within a realistic timeframe. I think you've started doing the groundwork of what I'd label a sensitivity analysis - determining how much of an impact your methodological choices have on your outputs. It's strange that there is one arbitrary result happening at that particular weight - was your initial guess the same in each simulation? That could affect this. Nonetheless, I don't think I know enough about the behaviour of optimal control problems and NLP solvers to completely understand what's happening. With respect to your sensitivity analysis, it does allow you to report some bounds around how big of an impact your decisions have on your outcomes. The fact that it's pretty small seems to be a good thing.

The other thing to consider within all of this is the 'realism' of your simulations and OCP - who's to say we use the same task weightings within our real-world movement selections every time we squat to grab something?

Aaron

Chonghui Zhang
Posts: 10
Joined: Wed May 19, 2021 1:58 am

Re: How to constrain the position of a hand during predicting?

Post by Chonghui Zhang » Tue Jun 08, 2021 9:25 pm

Hi Simon and Aaron,

The resulting values (wJ = weight value * the true objective value) are truncated and rounded to only 6 decimal places, which you can see in the solution file. If you only use MocoMarkerFinalGoal with a small weight, the truncation and rounding error would be significant. When you calculate J by dividing wJ by w, these errors are included.
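
For example, with made-up numbers (not from Simon's solution files), the effect looks like this:

Code:

# Made-up values to show how the 6-decimal truncation interacts with a small weight.
w = 0.001                     # small goal weight
true_J = 0.123456789          # hypothetical "true" objective term value
wJ = round(w * true_J, 6)     # what ends up in the solution file: 0.000123
recovered_J = wJ / w          # back-calculated J: 0.123
print(true_J - recovered_J)   # the truncation error is amplified by 1/w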

In addition, when I used MocoMarkerFinalGoal with different weights to predict reaching, I expected that the resulting movements would vary from each other, and that the only thing they would have in common is that the hand ends in the same place. However, in fact, I saw a series of nearly identical trajectories instead of random movements. That's strange. I think the MocoMarkerFinalGoal might implicitly define some "control strategy" that makes the model move in a similar pattern, even though the weights are different. So if you add another goal, such as a control goal, it would be balanced against the MocoMarkerFinalGoal. I guess the resulting movement would not be just using minimal control effort to reach something. I'm not sure; I hope Nick could tell me. I also read Aaron's paper, and I don't know to what extent the resulting movement would be affected by MocoMarkerFinalGoal. The effect would exist, but I hope it's negligible.

San

Nicholas Bianco
Posts: 1044
Joined: Thu Oct 04, 2012 8:09 pm

Re: How to constrain the position of a hand during predicting?

Post by Nicholas Bianco » Wed Jun 09, 2021 9:48 pm

Hi everyone,

All that MocoMarkerFinalGoal does is add the following term to the cost function:

Code:

J = w * norm(x_model - x_ref)^2

where "w" is the goal weight, "x_model" is the model marker's current 3D location in the ground frame, "x_ref" is the reference position, and "norm()" is the 2-norm. The goal does not "define control strategies" or anything like that; it is just another term in the objective function that the optimizer has to minimize.
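
As a quick sanity check with made-up marker positions (nothing from your models), the term evaluates like this:

Code:

import numpy as np

w = 5.0                                  # goal weight (arbitrary here)
x_model = np.array([0.28, 0.97, 0.21])   # model marker position in ground (m), made up
x_ref = np.array([0.30, 1.00, 0.20])     # reference position (m), made up
J = w * np.linalg.norm(x_model - x_ref)**2
print(J)  # 5 * (0.02**2 + 0.03**2 + 0.01**2) = 0.007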

It's not surprising to me that many of the trajectories end up being similar. If the model is able to meet the final position exactly, then increasing the weight on the marker goal shouldn't change the solution much. The more you increase the cost weight, the more that term behaves like a constraint in the optimization. (In fact, interior-point methods like IPOPT treat optimization constraints similarly to cost terms with large weights and then use gradient descent to find a solution.)
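
Related to the thread title: if you want the final hand position enforced exactly rather than approximated with a large weight, you can check whether the goal can run in endpoint-constraint mode. A rough, untested sketch (placeholder marker path and target; confirm the method names against the Moco docs):

Code:

import opensim as osim

goal = osim.MocoMarkerFinalGoal("hand_position")
goal.setPointName("/markerset/hand_marker")          # placeholder marker path
goal.setReferenceLocation(osim.Vec3(0.3, 1.0, 0.2))  # placeholder target (m)

if goal.getSupportsEndpointConstraint():
    # Enforce the final position exactly instead of weighting it in the cost.
    goal.setMode("endpoint_constraint")
else:
    # Otherwise, a large weight approximates a constraint, as described above.
    goal.setWeight(1000.0)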

For the one solution that deviated from the rest in Simon's sensitivity analysis, this could be related to the initial guess and/or something funny about the cost landscape at that particular weight. Moco converts each OCP to an NLP which is solved by IPOPT. As I said above, IPOPT uses gradient descent, and is not a global optimization method, so it's always possible to converge to a local minimum. That's why sensitivity analyses are important!
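
Practically, it helps to fix the guess explicitly so every run in a weight sweep starts from the same point, and separately to re-solve from a randomized guess to probe for local minima. Something along these lines with the CasADi solver (the guess file name is just a placeholder):

Code:

import opensim as osim

def configure_solver(study, guess_file="reaching_guess.sto", probe_local_minima=False):
    # Configure the CasADi solver so every run starts from a known guess.
    # guess_file is a placeholder; pass probe_local_minima=True to start from a
    # randomized guess instead and check whether the solver finds another minimum.
    solver = study.initCasADiSolver()
    if probe_local_minima:
        solver.setGuess(solver.createGuess("random"))
    else:
        solver.setGuessFile(guess_file)
    return solver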

-Nick

Simon Jeng
Posts: 87
Joined: Fri Sep 07, 2018 8:26 pm

Re: How to constrain the position of a hand during predicting?

Post by Simon Jeng » Fri Jun 11, 2021 11:53 pm

Hi everyone,

Thanks for the reply, I have a better understanding now. :D

The initial guesses were the same. As Nick said, that unexpected solution may be because it converged to a local minimum.

Another interesting thing I found was that the whole movement could be predicted even if I only used MocoMarkerFinalGoal in the OCP. The predicted motion has a smooth torque trajectory and a smooth angular velocity trajectory. MocoMarkerFinalGoal is an endpoint goal, so there is no cost to minimize before the final time. How does the solver predict the motion when only MocoMarkerFinalGoal is used?

Thanks,
Simon
