Questions on OpenCap-Vicon Comparison and OpenCap Core Models/Markerset Usage

Forum for OpenCap, a software package for estimating 3D human movement dynamics from smartphone videos. OpenCap relies heavily on OpenSim.

Questions on OpenCap-Vicon Comparison and OpenCap Core Models/Markerset Usage

Post by Xinlei Hong » Wed Dec 11, 2024 9:20 am

Hi,

I am conducting an experiment to validate the accuracy and effectiveness of OpenCap by comparing it with Vicon. I have a few questions and would greatly appreciate your insights:

1. Using OpenCap's Scaled Model with Vicon Data
I saw a reply in another thread where you mentioned that, after collecting data with OpenCap, it is possible to export the scaled model, replace its markerset in OpenSim with the markerset from the Vicon data, and then proceed with scaling, IK, and the other analysis steps on the Vicon data.

My question is: would this approach still allow for a meaningful comparison between the Vicon and OpenCap data? My understanding is that this method reprocesses the Vicon data with the OpenCap-scaled model rather than an unscaled model, which might compromise the independence of the two pipelines. Could this affect the validity of the comparison results? Is this workflow recommended?

2. Purpose and Use of OpenCap Core Model and Markerset Files
In the OpenCap Core OpenSim pipeline on GitHub, I found several model and markerset files for analysis in OpenSim (see the attached screenshots):
(https://github.com/stanfordnmbl/opencap ... ine/Models)

Based on my understanding:

1) LaiUhlrich2022.osim is a full-body model used by OpenCap for general motion analysis.
2) LaiUhlrich2022_shoulder.osim includes detailed modeling of shoulder and scapular movements, making it suitable for upper-limb and shoulder-related analyses.
3) The XML files define markersets for different use cases, e.g., LaiUhlrich2022_markers_mocap.xml appears to be designed for use with traditional motion capture systems like Vicon.

My question is: are these files intended for cloud-based analysis with OpenCap, or are they primarily for local analysis?

If there are any inaccuracies in my understanding of these files, I would appreciate any clarification or corrections.

Thank you very much for your assistance, and I look forward to your response!
Attachments: 6d6ee082b0275adb99044235b9cc436.png, 6c5419b01b08cfb27cc4ca68ab403a0.png


Re: Questions on OpenCap-Vicon Comparison and OpenCap Core Models/Markerset Usage

Post by Matt Petrucci » Thu Dec 12, 2024 10:56 am

Hi Xinlei,

1. Yes, although you will want to place the new markers as close as possible to their correct locations on the model. You could also consider using AddBiomechanics to scale the model and run IK; it will also help get the markers in the right place.

Note that the model will be rescaled when you scale it to the Vicon data (so it will not be the same scaled model you got from OpenCap). The advantage of this approach is that all the other aspects of the model (joints, coordinates, etc.) will be the same.
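
If you end up scripting this locally, the reprocessing itself is just the standard Scale and IK tools. A minimal sketch with the OpenSim Python API might look like this (the setup file names are placeholders you would point at the exported OpenCap model, your Vicon marker set, and your Vicon TRC files):

    import opensim as osim

    # Scale the exported OpenCap model to the Vicon static trial
    # (placeholder setup file referencing the .osim model and static TRC)
    osim.ScaleTool('scale_setup_vicon.xml').run()

    # Run inverse kinematics on a Vicon motion trial
    # (placeholder setup file referencing the scaled model and motion TRC)
    osim.InverseKinematicsTool('ik_setup_vicon.xml').run()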

2. Your understanding is correct! And these files can be used for either. They are in the standard .osim and .xml formats, which can be used with either the OpenSim GUI or the API.
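
For local work, for example, you can load the model directly with the scripting API (a quick sketch):

    import opensim as osim

    # Load the full-body OpenCap model and inspect it
    model = osim.Model('LaiUhlrich2022.osim')
    model.initSystem()
    print('coordinates:', model.getNumCoordinates())
    print('markers on model:', model.getMarkerSet().getSize())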

Hope this helps,
Matt


Re: Questions on OpenCap-Vicon Comparison and OpenCap Core Models/Markerset Usage

Post by Xinlei Hong » Mon Jan 13, 2025 3:46 pm

Hi Matt,

Thank you very much for your helpful response! That really makes sense. I have two more questions I'd like to ask:

1. Can I use an iPad and iPhone simultaneously to collect data with OpenCap?

2. If the cameras on my Apple devices (iPad/iPhone) support a maximum recording frame rate of 120 fps, does that mean I cannot use OpenCap to collect data at a 240 Hz sampling rate?

I appreciate your time and assistance. Looking forward to your reply!

Best regards,
Xinlei Hong


Re: Questions on OpenCap-Vicon Comparison and OpenCap Core Models/Markerset Usage

Post by Matt Petrucci » Tue Jan 14, 2025 2:30 pm

Hi Xinlei,

1. Yes, you can use a combo of iPads and iPhones.
2. Correct, the maximum frame rate will be determined by the device with the lowest supported fps.

Hope this helps,
Matt


Re: Questions on OpenCap-Vicon Comparison and OpenCap Core Models/Markerset Usage

Post by Xinlei Hong » Fri Jan 24, 2025 10:03 am

Hi Matt,

I am currently using OpenCap to conduct biomechanical experiments related to running and sprinting, and I am in the validation phase. During my pilot experiments, I have encountered a few issues and would like to ask for your help:

(1) The neutral-pose calibration model looks fine in OpenCap. However, after importing the model into OpenSim, I noticed that the feet are positioned below the ground plane. I believe this might affect the accuracy of the data. Could you provide any insights on how to address this? (See P1 and P2; after question 6 below I sketch a quick way I am considering to quantify the offset.)

(2) The calibration images in the exported OpenCap folder appear to be of low resolution. Similarly, the sync videos in the camera folders are significantly lower in resolution and file size compared to the original videos. Is this normal? Could this impact the accuracy of the data?

(3) I observed that in some OpenCap trials the subject's feet float above the ground by some distance, and the effect becomes more pronounced in areas farther from the cameras. I would like to understand the possible reasons behind this. (See P3.)

(4) I noticed that OpenCap recommends performing a distinctive motion, such as raising a hand, before and after the movement to help with action recognition, especially when the subject is far from the cameras. However, in my experiments the starting point of the sprint may be far from the cameras, and the subject may run out of the cameras' field of view by the end of the trial, which makes it difficult to capture such distinctive motions at the start and end. How can I ensure data quality in such cases? Also, does OpenCap have a maximum recommended capture distance?

(5) Is it possible to use videos from other sources, such as high-speed cameras, as input to OpenCap after proper adjustments? I am considering using OpenCap's local processing to work around the limitations of iOS devices.

(6) Currently, I plan to align the simultaneously collected Vicon and OpenCap data using cross-correlation, roughly as in the sketch below. Do you have any suggestions for better alignment methods?
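
Regarding question (1), the quick offset check I have in mind is something like the following (a minimal sketch with the OpenSim Python API; the model file name is a placeholder):

    import opensim as osim

    # Load the scaled model exported from OpenCap (placeholder name)
    model = osim.Model('opencap_scaled_model.osim')
    state = model.initSystem()

    # Vertical (y) position of every marker in the model's default pose;
    # negative values indicate markers below the ground plane
    markers = model.getMarkerSet()
    for i in range(markers.getSize()):
        m = markers.get(i)
        print(m.getName(), m.getLocationInGround(state).get(1))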
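
And for question (6), the cross-correlation alignment I have in mind is roughly the following (a minimal sketch with NumPy/SciPy; it assumes both signals, e.g. a common joint angle, have already been resampled to the same rate fs):

    import numpy as np
    from scipy import signal

    def find_lag_seconds(vicon_sig, opencap_sig, fs):
        """Time shift that best aligns opencap_sig to vicon_sig (both at fs Hz)."""
        a = vicon_sig - np.mean(vicon_sig)
        b = opencap_sig - np.mean(opencap_sig)
        xcorr = signal.correlate(a, b, mode='full')
        lags = signal.correlation_lags(len(a), len(b), mode='full')
        return lags[np.argmax(xcorr)] / fs

    # I would compute the lag on one coordinate and then shift the whole
    # OpenCap time series by that amount before comparing.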

Apologies for the many questions, and I truly appreciate your help.

Best wishes,
Xinlei
Attachments (P1–P3): 3198ad0d2165f1c1ea59d96d613bc37.png, d849ea9c34987f8f46c211d12e6bcb6.png, 00b9176ca92f632069845b21a08cff6.png
