
OpenCap pipeline locally

Posted: Tue May 09, 2023 5:35 pm
by carlosoleto
Hello everyone.

I am starting to study OpenCap's capabilities for evaluating jump performance. I am particularly interested in the option below, found at https://github.com/stanfordnmbl/opencap-core:

"Run this pipeline locally using videos collected near-synchronously from another source (e.g., videos collected synchronously with marker-based motion capture). Easy-to-use utilities for this pipeline are under development and will be released soon."

Are there any guidelines for this kind of use of OpenCap? Is there a piece of code with steps like the following (roughly sketched after the list)?

- Import calibration videos (checkerboard)
- Import motion videos
- Subject calibration (neutral pose)
- Motion trials
- Run OpenPose to extract the 2D skeletons
- Run OpenCap to estimate 3D markers
- Run OpenCap to solve the motion
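
For illustration, something like the sketch below is what I have in mind. Every function name in it is hypothetical; it only mirrors the steps above, not any actual opencap-core API.

    # Hypothetical outline of a local OpenCap-style pipeline; none of
    # these helper functions exist under these names in opencap-core.
    def run_local_pipeline(calib_videos, neutral_videos, motion_videos):
        camera_params = calibrate_cameras(calib_videos)        # intrinsics + extrinsics
        model = scale_model(neutral_videos, camera_params)     # subject calibration (neutral pose)
        keypoints_2d = {cam: run_openpose(vid)                 # 2D skeletons per camera
                        for cam, vid in motion_videos.items()}
        markers_3d = triangulate(keypoints_2d, camera_params)  # 3D keypoints
        trc = augment_markers(markers_3d)                      # full marker set (TRC file)
        return solve_inverse_kinematics(model, trc)            # joint angles over time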

Best regards.

Re: OpenCap pipeline locally

Posted: Thu May 11, 2023 6:56 am
by suhlrich
Hi Carlos,

This is all possible, but requires some re-organization of paths, etc. We plan to release some nice code for this in the next few months.

Here's a start to what code you would need.

- Intrinsic calibration for whatever cameras you are using. https://github.com/stanfordnmbl/opencap ... rinsics.py
- All computations from extrinsic camera calibration through kinematics happen in https://github.com/stanfordnmbl/opencap ... in/main.py
- This is an example showing how you'd do this for many subjects in batch; you'll need to organize your folders appropriately first (a rough sketch of such a loop follows). https://github.com/stanfordnmbl/opencap ... ematics.py
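
To make the batch idea concrete, a driver along the lines below could walk a folder of sessions and hand each trial to the pipeline. The folder layout and process_trial() are assumptions standing in for the real entry points in main.py and the batch script; check their actual signatures before using this.

    import os

    DATA_DIR = "Data"                              # one subfolder per session (assumed layout)
    TRIALS = ["calibration", "neutral", "jump01"]  # example trial names

    for session in sorted(os.listdir(DATA_DIR)):
        session_dir = os.path.join(DATA_DIR, session)
        if not os.path.isdir(session_dir):
            continue
        for trial in TRIALS:
            # process_trial is a hypothetical stand-in for the per-trial
            # processing in opencap-core's main.py.
            process_trial(session_dir, trial)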

Re: OpenCap pipeline locally

Posted: Fri May 12, 2023 6:41 am
by carlosoleto
Thanks a lot, Scott.

I will start from your post. This is really helpful.

Best regards and congratulations to you and the team on the project.

Re: OpenCap pipeline locally

Posted: Fri Aug 11, 2023 2:48 am
by biomechlwade
suhlrich wrote:
Thu May 11, 2023 6:56 am
Hi Carlos,

This is all possible, but requires some re-organization of paths, etc. We plan to release some nice code for this in the next few months.

[...]

Hi Scott and Antoine,

I was just wondering if you did end up releasing some code for this as Scott mentioned above.

Much appreciated!!

Logan

Re: OpenCap pipeline locally

Posted: Fri Aug 11, 2023 8:16 am
by antoinefalisse
Hi Logan,

Not yet, unfortunately.

Best,
Antoine

Re: OpenCap pipeline locally

Posted: Wed Nov 22, 2023 6:50 am
by roots1
Hi Antoine,

We're really keen to test this locally on our computer for longer/higher-resolution recordings. I'm sure a lot of other developments are keeping your team busy, but do you have any ballpark idea of when this code will be released?

Cheers,
Corey

Re: OpenCap pipeline locally

Posted: Wed Nov 22, 2023 1:54 pm
by suhlrich
Hi Corey,

We don't have an estimated timeline, unfortunately.

Thanks,
Scott

Re: OpenCap pipeline locally

Posted: Thu Nov 07, 2024 11:31 pm
by sashaportnova
Hi,

I just wanted to check whether the code for running the OpenCap pipeline on videos not collected with OpenCap has been released in the last year.

Re: OpenCap pipeline locally

Posted: Mon Nov 11, 2024 1:32 pm
by mpetrucc

Re: OpenCap pipeline locally

Posted: Mon Nov 25, 2024 1:26 pm
by carlosoleto
Hello everyone, I've made some progress in setting up the pipeline to run locally.

For now I'm focusing on reusing the "cameraIntrinsicsExtrinsics.pickle" that comes with data downloaded from OpenCap and redoing the pose estimation (creating the TRC file with augmented markers).
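
As a sanity check on the reused calibration, the pickle can be inspected directly. A minimal sketch: I'm not asserting any particular key names, so it just prints whatever the file contains.

    import pickle

    # Load the per-camera calibration shipped with an OpenCap download
    # and list its contents (intrinsics, distortion, rotation,
    # translation, or similar -- print the keys to see the real names).
    with open("cameraIntrinsicsExtrinsics.pickle", "rb") as f:
        camera_params = pickle.load(f)

    print(type(camera_params))
    if isinstance(camera_params, dict):
        for key, value in camera_params.items():
            print(key, getattr(value, "shape", value))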

So far I'm stuck configuring TensorFlow with the GPU and the CUDA toolkit, but I already have some pre-augmentation marker files. They look weird, though.
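
In case it helps anyone else stuck at the same step, a quick way to check whether TensorFlow can see the GPU (i.e., whether the CUDA toolkit and driver are set up correctly) is:

    import tensorflow as tf

    # An empty list means TensorFlow will fall back to the CPU --
    # slower, but the marker augmenter should still run.
    print(tf.__version__)
    print(tf.config.list_physical_devices("GPU"))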

I don't know if this is the right place to discuss this subject, but I would like some help with it.

The markers in the file are 3D, but they behave like a 2D camera projection: when the person is far from the camera, the markers are closer together, and when the person is close, the distances between markers are larger.
[Attachments: dist1.png, dist2.png]
I just want to know whether this is the expected behavior or whether the extrinsics configuration is faulty. Any thoughts on that?
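
One way to quantify the symptom: in a true 3D reconstruction, the distance between two markers on the same rigid segment should stay roughly constant over time, whereas a 2D-like scaling would make it grow and shrink with distance to the camera. A minimal sketch, assuming a standard tab-delimited TRC layout (marker names on the 4th line, data after the header rows) and placeholder marker names:

    import numpy as np
    import pandas as pd

    def marker_xyz(path, name):
        # Marker names sit on the 4th line of a TRC file; a name's
        # column index is also the column of that marker's X value.
        with open(path) as f:
            names = f.read().splitlines()[3].split("\t")
        col = names.index(name)
        data = pd.read_csv(path, sep="\t", skiprows=5, header=None)
        return data.iloc[:, col:col + 3].to_numpy(float)

    # 'RHip' and 'RKnee' are placeholders; pick two markers on the
    # same rigid segment from your own file.
    a = marker_xyz("trial.trc", "RHip")
    b = marker_xyz("trial.trc", "RKnee")
    d = np.linalg.norm(a - b, axis=1)
    print("segment length mean/std:", d.mean(), d.std())  # large std suggests 2D-like scaling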

Best regards.