Display output videos with 2D keyfeature overlay

Forum for OpenCap, a software package to estimate 3D human movement dynamics from smartphone videos. OpenCap strongly relies on OpenSim.
Brian Horsak
Posts: 3
Joined: Fri Sep 08, 2017 4:00 am

Display output videos with 2D keyfeature overlay

Post by Brian Horsak » Wed Aug 02, 2023 9:42 am

Hi,

I would like to visually inspect the videos and the keyfeatures estimated via e.g. OpenPose for the OpenCap videos. Does anyone know how to achieve this? I know that the videos are stored in ..\Videos\Cam\InputMedia\* and I think the necessary keyfeatures are in ..\Videos\Cam\OutputPkl\*.
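For reference, here is a minimal sketch of what loading one of those .pkl files might look like. The nested layout used below (one list entry per frame, one dict per detected person, with a flat 'pose_keypoints_2d' array of x, y, confidence triplets) is an assumption for illustration only, so check it against your own files:

```python
import os
import pickle
import tempfile

# Dummy data mimicking the ASSUMED OutputPkl layout: 1 frame, 1 person,
# 25 keypoints stored as flat (x, y, confidence) triplets.
dummy = [[{'pose_keypoints_2d': [100.0, 200.0, 0.9] * 25}]]

# Round-trip through a temporary .pkl file, as OpenCap's files would be read
path = os.path.join(tempfile.mkdtemp(), 'example.pkl')
with open(path, 'wb') as f:
    pickle.dump(dummy, f)

with open(path, 'rb') as f:
    key_features_data = pickle.load(f)

frame0 = key_features_data[0]
print(len(frame0))                               # -> 1 (people detected in frame 0)
print(len(frame0[0]['pose_keypoints_2d']) // 3)  # -> 25 (keypoints per person)
```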

However, I was not able to find any Python code to visualize both. I hope someone can help with some example code or a link to more information on how to achieve this ; )

Thx, Brian

Brian Horsak
Posts: 3
Joined: Fri Sep 08, 2017 4:00 am

Re: Display output videos with 2D keyfeature overlay

Post by Brian Horsak » Fri Aug 04, 2023 2:50 am

Ok, so I figured it out. I am placing a code snippet here in case someone is interested:

Code:

# -*- coding: utf-8 -*-
"""
Created on Wed Aug  2 18:21:04 2023

This script reads the output videos from opencap and the keyfeatures stored in 
the *.pkl format to playback a video of the keyfeature overlay for visual 
inspection. Note: you need the ffmpeg codec provided by opencap. Follow their 
instructions to create the opencap-core environment: 
https://github.com/stanfordnmbl/opencap-core

@author: bhorsak
"""
 
# Check if the script is being run directly
if __name__ == "__main__":
    
    # Import
    import cv2
    import pickle
    
    # Load the video file:
    video_path = r"yourpath"
    
    
    # Load the keypoints:    
    pickle_path = r"yourpath"
    
    with open(pickle_path, 'rb') as f:
        key_features_data = pickle.load(f)
                   
    # Create a VideoCapture object and read from input file
    cap = cv2.VideoCapture(video_path, cv2.CAP_FFMPEG)
    
    # Check if the video file opened successfully
    if not cap.isOpened():
        print("Error opening video file")
    
    # Read until the video is completed
    while cap.isOpened():
        ret, frame = cap.read()
        
        if ret:
            frame_index = int(cap.get(cv2.CAP_PROP_POS_FRAMES))-1
            key_features = key_features_data[frame_index]        
        
            # Blur face using the nose marker
            x = int(key_features[0]['pose_keypoints_2d'][0])
            y = int(key_features[0]['pose_keypoints_2d'][1])
            y = y + 25 # offset to place the ROI in the center of the face; this is video dependent
            x = x - 50 # offset to place the ROI in the center of the face; this is video dependent
            h = 75 # same here
            w = 100 # same here
                
            # Extract the face region
            face_roi = frame[y-h:y, x:x+w]
            
            # Apply blur to the face region
            blurred_face = cv2.GaussianBlur(face_roi, (35, 35), 0)
            
            # Replace the original face region with the blurred face
            frame[y-h:y, x:x+w] = blurred_face            
            
        
            # Overlay sticks - draw these first so that keypoints end up above the sticks
            invalidSet = [3, 12, 21, 24, 33] # Skip specific connections so no unwanted lines are drawn, e.g. between left ankle and right hip
            for keypoints in key_features:
                for baseIdx in range(3, 42, 3):
                    if baseIdx not in invalidSet:
                        x1, y1 = int(keypoints['pose_keypoints_2d'][baseIdx]), int(keypoints['pose_keypoints_2d'][baseIdx+1])
                        x2, y2 = int(keypoints['pose_keypoints_2d'][baseIdx+3]), int(keypoints['pose_keypoints_2d'][baseIdx+4])
                        
                        # Only plot if there are no gaps in tracking data (gaps will be 0,0)
                        if x1 != 0 and x2 != 0 and y1 != 0 and y2 != 0:
                            cv2.line(frame, (x1, y1), (x2, y2), (255, 255, 255), 1) 
            
            # Now overlay the joint key features (the unused colIdx counter previously
            # truncated the zip to 24 entries and silently dropped the last keypoint)
            for keypoints in key_features:
                for keypointIdx in range(0, 75, 3):
                    x, y = int(keypoints['pose_keypoints_2d'][keypointIdx]), int(keypoints['pose_keypoints_2d'][keypointIdx+1])
                    # Only draw valid keypoints (gaps in tracking data are stored as 0,0)
                    if x != 0 and y != 0:
                        cv2.circle(frame, (x, y), 4, (0, 0, 255), -1)  # Draw a red circle at the keypoint
            
            
            # Display the frame with the overlayed key features
            cv2.imshow('Frame', frame)
            
            # Advance to the next frame with any key; press 'q' to exit the loop
            # (calling waitKey twice here would require two keypresses per frame)
            if cv2.waitKey(0) & 0xFF == ord('q'):
                break
            
        else:
            break 
    
    # Release video and close windows
    cap.release()
    cv2.destroyAllWindows()           
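As a sanity check on the `invalidSet` logic, here is a small standalone sketch (not from the OpenCap codebase) that enumerates which keypoint pairs the stick-drawing loop above actually connects, assuming the same flat x, y, confidence layout of 'pose_keypoints_2d':

```python
# Enumerate the keypoint index pairs connected by the stick-drawing loop.
# Indices into pose_keypoints_2d advance in steps of 3 (x, y, confidence),
# so base offset b corresponds to keypoint number b // 3.
invalidSet = [3, 12, 21, 24, 33]  # base offsets whose connection is skipped

pairs = []
for baseIdx in range(3, 42, 3):
    if baseIdx not in invalidSet:
        # Each line connects keypoint baseIdx//3 to the next keypoint
        pairs.append((baseIdx // 3, baseIdx // 3 + 1))

print(pairs)
print(len(pairs))  # -> 8 (13 base offsets in range(3, 42, 3), minus 5 skipped)
```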

Antoine Falisse
Posts: 437
Joined: Wed Jan 07, 2015 2:21 am

Re: Display output videos with 2D keyfeature overlay

Post by Antoine Falisse » Mon Aug 07, 2023 6:46 am

Thanks for sharing, Brian. When using the cloud service, we do not save the videos with the overlaid keypoints, for storage reasons. If you re-process the data locally, you should get these videos in your local folder, and the videos posted to the web app will then be the ones with the overlaid keypoints. Best, Antoine
