OpenCap is a software package for estimating 3D human movement dynamics from smartphone videos. It combines computer vision, deep learning, and musculoskeletal simulation to quantify the dynamics of human movement.

OpenCap comprises an iOS application, a web application, and cloud processing. To collect data, users open the application on two or more iOS devices and pair them with the OpenCap web application. The web application lets users record videos simultaneously on the iOS devices and visualize the resulting three-dimensional (3D) kinematics. In the cloud, the processing pipeline runs as follows:

- 2D keypoints are extracted from the multi-view videos using open-source pose estimation algorithms.
- The videos are time-synchronized using cross-correlations of keypoint velocities, and 3D keypoints are computed by triangulating the synchronized 2D keypoints.
- The 3D keypoints are converted into a more comprehensive 3D anatomical marker set using a recurrent neural network (LSTM) trained on motion capture data.
- 3D kinematics are computed from the marker trajectories using inverse kinematics and a musculoskeletal model with biomechanical constraints.
- Finally, kinetic measures are estimated using muscle-driven dynamic simulations that track the 3D kinematics.
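The time-synchronization step can be illustrated with a minimal sketch: cross-correlating a 1D keypoint-speed signal from each camera and taking the lag at the correlation peak. The helper `estimate_time_lag` is hypothetical, shown here only to convey the idea; OpenCap's actual implementation may differ in signal choice and filtering.

```python
import numpy as np

def estimate_time_lag(speed_a, speed_b):
    """Estimate the frame offset between two cameras by cross-correlating
    z-scored keypoint speed signals (hypothetical helper; a sketch of the
    cross-correlation idea, not OpenCap's exact implementation)."""
    a = (speed_a - speed_a.mean()) / speed_a.std()
    b = (speed_b - speed_b.mean()) / speed_b.std()
    corr = np.correlate(a, b, mode="full")
    # Lag at the correlation peak; positive means signal A is delayed
    # relative to signal B, negative means B is delayed relative to A.
    return int(np.argmax(corr)) - (len(b) - 1)

# Toy example: camera B's speed signal is camera A's, delayed by 3 frames.
t = np.linspace(0, 2 * np.pi, 100)
speed_a = np.abs(np.sin(t))
speed_b = np.roll(speed_a, 3)
lag = estimate_time_lag(speed_a, speed_b)  # B delayed by 3 -> lag of -3
```

In practice a smoothly varying signal such as a joint-center speed makes the correlation peak well defined even with different frame rates or dropped frames.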
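Triangulating a 3D keypoint from synchronized 2D observations is a standard multi-view geometry operation. The sketch below uses the generic direct linear transform (DLT) with the camera projection matrices; it illustrates the triangulation step in general terms and is not taken from OpenCap's source.

```python
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """Linear (DLT) triangulation of one 3D point from 2D observations in
    two or more calibrated views. Generic sketch of the triangulation step."""
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the homogeneous point X.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.vstack(rows)
    # The solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Toy example: two cameras looking down the z-axis, offset along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # translated camera
X_true = np.array([0.2, -0.1, 4.0])
projections = [P @ np.append(X_true, 1.0) for P in (P1, P2)]
obs = [(p[0] / p[2], p[1] / p[2]) for p in projections]
X_est = triangulate_point([P1, P2], obs)  # recovers X_true
```

With noisy 2D keypoints the DLT solution is typically refined by minimizing reprojection error, and keypoint confidence scores can be used to weight each view's constraints.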

- To start collecting data with OpenCap, visit
- To find more information about OpenCap, visit

Source code and data will be released upon paper publication.