This project provides modular deep learning models that estimate knee joint moments from inertial measurement units, smartphone cameras, or both.

Wearable sensing and computer vision could move biomechanics from specialized laboratories to natural environments, but better algorithms are needed to extract meaningful outcomes from these emerging modalities. We present here new models for estimating the knee adduction moment (KAM) and knee flexion moment (KFM) from smartphone cameras and wearable inertial measurement units (IMUs).

CODE: https://github.com/TheOne-1/KAM_and_KFM_Estimation

DATA: The data for this project are stored in "all_17_subjects.h5". Each subject's data are a 3-dimensional matrix. The first dimension indexes walking steps. The second dimension indexes samples (collected at 100 Hz), spanning from 20 samples before heel-strike to 20 samples after toe-off; both events are detected from the right-foot IMU data. Its length is 152, matching the longest step, and shorter steps are zero-padded at the end. The third dimension indexes 256 data fields, whose names are stored in an attribute named "columns" in the h5 file. An example Python script for loading the data is provided with the download; a minimal sketch is also shown below.
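
For orientation, here is a minimal loading sketch using h5py. The per-subject iteration and the field name "EXT_KM_Y" are assumptions for illustration only; inspect the file's keys and the "columns" attribute for the actual group and field names.

import json
import h5py

with h5py.File("all_17_subjects.h5", "r") as f:
    # Names of the 256 data fields along the third dimension. Depending on
    # how the attribute was written, it may be a JSON string or a string array.
    raw = f.attrs["columns"]
    columns = json.loads(raw) if isinstance(raw, (str, bytes)) else list(raw)

    # Assumes each top-level entry is one subject's dataset.
    for subject_name, dataset in f.items():
        data = dataset[:]  # shape: (n_steps, 152, 256)
        print(f"{subject_name}: {data.shape[0]} steps, {len(columns)} fields")

    # Hypothetical example: pull one field for all steps of the last subject.
    # "EXT_KM_Y" is a placeholder field name; look it up in `columns`.
    if "EXT_KM_Y" in columns:
        kam = data[:, :, columns.index("EXT_KM_Y")]  # (n_steps, 152)
        # Remember: shorter steps are zero-padded at the end of the time axis.
        print("KAM time series shape:", kam.shape)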

To cite this work:
@article{tan2022imu,
  title={IMU and Smartphone Camera Fusion for Knee Adduction and Knee Flexion Moment Estimation During Walking},
  author={Tan, Tian and Wang, Dianxin and Shull, Peter B. and Halilaj, Eni},
  journal={IEEE Transactions on Industrial Informatics},
  year={2022},
  doi={10.1109/TII.2022.3189648},
}

T. Tan, D. Wang, P. B. Shull and E. Halilaj, "IMU and Smartphone Camera Fusion for Knee Adduction and Knee Flexion Moment Estimation During Walking," in IEEE Transactions on Industrial Informatics, 2022, doi: 10.1109/TII.2022.3189648.

Link to paper: https://ieeexplore.ieee.org/abstract/document/9826418
