
OpenCap Validation Session

Posted: Mon Jan 29, 2024 12:27 am
by knarotam
Hi,

I am trying to use the OpenCap validation data (published when OpenCap was released). I know it can be downloaded, but it does not come in the same form as the files you get when downloading an OpenCap session.

Is there a way to get the validation data in the same form, with the same information, as the files downloaded for a regular OpenCap session?

If not, does anyone know how to use the same post-processing code, i.e. the gait_analysis.py functions, with the existing validation files?

Thanks.

Re: OpenCap Validation Session

Posted: Wed Jan 31, 2024 10:39 am
by suhlrich
The easiest way would be to write a script that reorganizes the validation files into a file structure similar to the one created when gait_analysis.py downloads the data. The data within the session folders of the validation dataset is organized quite closely to what gait_analysis expects.
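For instance, a rough sketch of such a script could look like this (the folder names and paths are assumptions on my end; check them against what an actual OpenCap download produces and what gait_analysis.py reads):

import shutil
from pathlib import Path

# Hypothetical paths: point these at wherever the validation session lives
# and at the folder where you want to mimic an OpenCap download.
validation_dir = Path('ValidationData/subject2/Session1')
target_dir = Path('Data/subject2_session1')

# Assumed subfolder names for markers and kinematics -- verify against
# the structure created by an actual OpenCap download before relying on them.
(target_dir / 'MarkerData').mkdir(parents=True, exist_ok=True)
(target_dir / 'OpenSimData' / 'Kinematics').mkdir(parents=True, exist_ok=True)

# Copy marker and kinematics files from the validation session into the
# assumed OpenCap-style layout.
for trc in validation_dir.glob('*.trc'):
    shutil.copy(trc, target_dir / 'MarkerData' / trc.name)
for mot in validation_dir.glob('*.mot'):
    shutil.copy(mot, target_dir / 'OpenSimData' / 'Kinematics' / mot.name)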

Re: OpenCap Validation Session

Posted: Mon Mar 04, 2024 12:20 am
by knarotam
Hi,

I have the right file structure, but I am running into an issue that I am not sure I can resolve.

The markers in the trial .trc files do not seem to correspond between the two datasets.

The OpenCap data has 63 markers, but the validation data has 51, and it is hard to map between them since I am fairly sure they do not represent the same things.

I was wondering if you had any suggestions. I can think of a couple of possible solutions:
1. Get a list of the corresponding markers, plus the markers that are not needed. The issue is that the OpenCap data has more markers.
2. Get the validation data in the same form as the OpenCap download files (so the marker files have the same number of markers).

Thanks.

Re: OpenCap Validation Session

Posted: Mon Mar 04, 2024 12:07 pm
by antoinefalisse
Could you elaborate on this: "The markers in the trial .trc files do not seem to correspond"?

Thanks,
Antoine

Re: OpenCap Validation Session

Posted: Wed Mar 06, 2024 3:53 pm
by knarotam
Hi,

Basically, I have attached two screenshots, each corresponding to a .trc file: one from the validation files and one from the OpenCap output files.

My issue is that the code throws an error when parsing the validation .trc files in the OpenCap Python code.

For example, when I run them through some of the functions in gait_analysis.py, it throws errors like this:
Traceback (most recent call last):
  File "/Users/krishnarotam/Documents/opencap-processing/Examples/LimberValidationCode.py", line 126, in <module>
    OpenCap_walking1_gait_r = gait_analysis(
  File "/Users/krishnarotam/Documents/opencap-processing/Examples/../ActivityAnalyses/gait_analysis.py", line 52, in __init__
    self.gaitEvents = self.segment_walking(n_gait_cycles=n_gait_cycles,leg=leg)
  File "/Users/krishnarotam/Documents/opencap-processing/Examples/../ActivityAnalyses/gait_analysis.py", line 723, in segment_walking
    self.markerDict['markers']['r_calc_study'] -
KeyError: 'r_calc_study'

The KeyError is that it cannot find r_calc_study, and that is because this marker doesn't exist in the validation .trc file; at least, I think that is the issue. The problem is that I do not know how to map the names from the validation .trc (VT) files to the OpenCap output .trc (OCOT) files, since the OCOT files have 63 markers and the VT files only have 51, so some data is missing. It is beyond my understanding, and I was wondering if you had any tips or advice on how to get the VT files to work with the OpenCap Python code. Or do you know another way I can post-process this validation data without having to modify a lot of the functions in gait_analysis.py?

Re: OpenCap Validation Session

Posted: Fri Mar 08, 2024 11:23 am
by antoinefalisse
In our validation, we used these markers:

markers = ['c7','r_shoulder','l_shoulder','r.ASIS','l.ASIS','r.PSIS','l.PSIS','r_knee', 'l_knee','r_ankle','l_ankle','r_calc','l_calc','r_toe','l_toe','r_5meta','l_5meta']

The ones with _study added are the ones returned by OpenCap; the ones without _study are the mocap ones. The names should match (make sure you make them all lower case).

OpenCap returns 20 video keypoints and predicts 43 anatomical markers from these keypoints. The anatomical markers are the ones with _study. That's 63 markers in total.
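For example, something along these lines could map the mocap names to the OpenCap ones (just a sketch, assuming the OpenCap names are simply the mocap names with _study appended when compared in lower case; not tested against your files):

# Mocap marker names used in the validation, as listed above.
mocap_markers = ['c7', 'r_shoulder', 'l_shoulder', 'r.ASIS', 'l.ASIS',
                 'r.PSIS', 'l.PSIS', 'r_knee', 'l_knee', 'r_ankle', 'l_ankle',
                 'r_calc', 'l_calc', 'r_toe', 'l_toe', 'r_5meta', 'l_5meta']

def matches(mocap_name, opencap_name):
    # True if an OpenCap marker (e.g. 'r_calc_study') corresponds to a
    # mocap marker (e.g. 'r_calc'), ignoring case.
    return opencap_name.lower() == mocap_name.lower() + '_study'

# Example: build a mocap -> OpenCap name map, where opencap_markers is
# whatever your OpenCap .trc header contains (placeholder name here).
# name_map = {m: o for m in mocap_markers for o in opencap_markers if matches(m, o)}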

Hope that helps.