OpenCap is a software package for estimating 3D human movement dynamics from smartphone videos. It combines computer vision, deep learning, and musculoskeletal simulation to quantify the kinematics and kinetics of human movement.
See our preprint for a more detailed description of OpenCap and our validation experiments:
Uhlrich SD*, Falisse A*, Kidzinski L*, Ko M, Chaudhari AS, Hicks JL, Delp SL, 2022. OpenCap: 3D human movement dynamics from smartphone videos. bioRxiv. https://doi.org/10.1101/2022.07.07.499061. *Contributed equally.
- To start collecting data with OpenCap, visit https://app.opencap.ai.
- To find more information about OpenCap, visit https://opencap.ai.
- To find the source code for computing kinematics from videos, visit https://github.com/stanfordnmbl/opencap-core.
- To find code for post-processing OpenCap data and generating dynamic simulations, visit https://github.com/stanfordnmbl/opencap-processing.
OpenCap comprises an iOS application, a web application, and cloud computing. To collect data, users open the application on two or more iOS devices and pair them with the OpenCap web application. The web application enables users to record videos simultaneously on the iOS devices and to visualize the resulting 3D kinematics. In the cloud, 2D keypoints are extracted from the multi-view videos using open-source pose estimation algorithms. The videos are time-synchronized using cross-correlations of keypoint velocities, and 3D keypoints are computed by triangulating the synchronized 2D keypoints. These 3D keypoints are converted into a more comprehensive set of 3D anatomical markers using a recurrent neural network (LSTM) trained on motion capture data. 3D kinematics are then computed from the marker trajectories using inverse kinematics and a musculoskeletal model with biomechanical constraints. Finally, kinetic measures are estimated using muscle-driven dynamic simulations that track the 3D kinematics.
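The time-synchronization idea can be illustrated with a short sketch: if the same keypoint's speed profile is visible in two camera views, the frame offset between the cameras is the lag that maximizes the cross-correlation of the two speed signals. This is a minimal illustration of that principle, not OpenCap's actual implementation; the function and signal names are made up for the example.

```python
import numpy as np

def sync_lag(vel_a, vel_b):
    """Estimate the frame offset between two cameras by cross-correlating
    a keypoint speed signal from each view. A positive lag means vel_a is
    delayed relative to vel_b; negative means vel_a leads."""
    # Normalize each signal so amplitude differences between views
    # (e.g. from camera distance) do not dominate the correlation.
    a = (vel_a - vel_a.mean()) / vel_a.std()
    b = (vel_b - vel_b.mean()) / vel_b.std()
    corr = np.correlate(a, b, mode="full")
    # Index 0 of the 'full' output corresponds to lag -(len(b) - 1).
    return int(np.argmax(corr)) - (len(b) - 1)

# Synthetic example: a smooth speed "bump" (stand-in for a keypoint
# speed profile), with camera A starting 5 frames after camera B.
t = np.arange(200)
signal = np.exp(-0.5 * ((t - 100) / 5.0) ** 2)
cam_a = signal[5:]   # camera A misses the first 5 frames
cam_b = signal[:-5]  # camera B, same event, unshifted start
lag = sync_lag(cam_a, cam_b)
print(lag)  # -5: camera A leads camera B by 5 frames
```

Once the lag is known, one view's keypoint time series can simply be shifted by that many frames before triangulation.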
This repository (see Downloads) contains the experimental data used in the validation study. Details about the participant population can be found in our preprint; the specifics of the included data are described in the README inside each downloaded folder.
1) Lab Validation Data:
Population and activities: 10 individuals performing four activities (squats, sit-to-stand, drop vertical jumps, and walking) with varied kinematic patterns.
Raw data: Marker-based motion capture, ground reaction forces, electromyography from 10 lower-extremity muscles, and RGB video from five cameras.
Processed data: OpenSim models, inverse kinematics, inverse dynamics, and muscle-driven simulations.
For file-size considerations, we provide this dataset both with and without the RGB videos.
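OpenSim results such as the inverse kinematics output are typically distributed as plain-text .mot files: a free-form metadata header terminated by a line reading "endheader", followed by a tab-delimited table whose first column is time. Below is a minimal stdlib parser sketch; the sample file contents and coordinate names are invented for illustration, and the opencap-processing repository provides its own loading utilities.

```python
import csv
import os
import tempfile

def read_mot(path):
    """Parse an OpenSim .mot file into (column names, rows of floats).
    The metadata header is skipped up to and including 'endheader'."""
    with open(path) as f:
        for line in f:
            if line.strip().lower() == "endheader":
                break
        reader = csv.reader(f, delimiter="\t")
        columns = next(reader)               # column-name row
        rows = [[float(v) for v in row] for row in reader if row]
    return columns, rows

# Tiny synthetic .mot file (header fields and coordinate names are
# illustrative, not taken from the actual dataset):
sample = (
    "Coordinates\nversion=1\nnRows=2\nnColumns=3\n"
    "inDegrees=yes\nendheader\n"
    "time\tknee_angle_r\tknee_angle_l\n"
    "0.00\t5.1\t4.9\n"
    "0.01\t5.3\t5.0\n"
)
with tempfile.NamedTemporaryFile("w", suffix=".mot", delete=False) as tmp:
    tmp.write(sample)
    path = tmp.name
columns, rows = read_mot(path)
os.remove(path)
print(columns)  # ['time', 'knee_angle_r', 'knee_angle_l']
```

Each row can then be plotted or compared against the marker-based motion capture results at matching time stamps.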
2) Field Study Data:
Population and activities: 100 individuals performing natural and asymmetric squats.
Processed data: OpenSim models, inverse kinematics, and muscle-driven simulations from OpenCap using two cameras. RGB videos are not provided with this dataset due to the more restrictive IRB protocol used for this portion of the study.