Hello,
I would like to write an inverse kinematics procedure to minimize the error between 3D points in an OpenSim model and the corresponding 2D points on an image.
Currently I am using a Python least-squares routine and the OpenSim API to:
1. get the marker positions in the current state of the model
2. project those 3D positions onto the 2D image
3. calculate the error between the image points and the model points
4. update the state
5. repeat steps 1-4 until the error is minimized
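For concreteness, here is a stripped-down, self-contained sketch of that loop. The two-parameter "model" and the pinhole camera intrinsics are hypothetical stand-ins for the OpenSim calls and my real camera calibration; they are only there to show the shape of the optimization:

```python
import numpy as np
from scipy.optimize import least_squares

# Step 1 stand-in: in my real code this is model.assemble(state) plus
# getLocationInGround() for each marker. Here, a toy "model" whose 3D
# marker positions depend on two pose parameters (rotation about z,
# translation along y).
def marker_positions_3d(q):
    base = np.array([[0.1, 0.0, 1.0],
                     [0.0, 0.2, 1.2],
                     [-0.1, 0.1, 0.9]])
    c, s = np.cos(q[0]), np.sin(q[0])
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return base @ R.T + np.array([0.0, q[1], 0.0])

# Step 2: pinhole projection with hypothetical intrinsics (fx, fy, cx, cy).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(points_3d):
    uvw = points_3d @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

# Step 3: residual between observed image points and projected markers.
def residuals(q, observed_2d):
    return (project(marker_positions_3d(q)) - observed_2d).ravel()

# Synthetic "observed" image points generated from a known pose.
q_true = np.array([0.3, 0.05])
observed = project(marker_positions_3d(q_true))

# Steps 4-5: least_squares updates the pose and re-evaluates the
# residual until the reprojection error is minimized.
result = least_squares(residuals, x0=np.zeros(2), args=(observed,))
print(result.x)
```

In my actual code, every evaluation of the residual function triggers the marker-position query against the OpenSim model, which is where all the time goes.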
I am iterating this procedure over 115 frames of a motion, and it is very slow (about 4 hours), whereas the built-in Inverse Kinematics tool runs in a few seconds for the same number of frames.
To do step 1, I am using the API to run the following code:
markerset = model.get_MarkerSet()
model.assemble(state)
for i in range(markerset.getSize()):
    markerpos = markerset.get(i).getLocationInGround(state)
This step appears to be the bottleneck in the optimization; in particular, calling model.assemble(state) for every candidate state is expensive.
Is there a more efficient way to get the model marker positions for a given state? I assume the inverse kinematics tool has to do something like this internally, but it clearly does it much faster than my approach.
Any help would be much appreciated!
Thanks,
Inverse Kinematics using 2D images
- Tylan Templin