Projects with this topic
Analyzing human movements in microgravity poses unique challenges: the absence of a stable gravitational vector complicates traditional pose estimation and motion tracking. This thesis develops an automated, markerless video processing tool to detect motion in such environments both qualitatively and quantitatively. The system leverages computer vision techniques and deep learning models to extract three-dimensional body landmarks from video footage, providing a robust basis for calculating joint angles and rotations despite the drifting and rotational movements inherent in microgravity. Implemented in Python with MediaPipe's pose model for pose estimation, OpenCV for video processing, and PySide for a user-friendly GUI, the system can process and annotate motion events. The tool was validated on authentic video segments from the International Space Station, where the analyzed data was evaluated against manual annotations. Results indicate that the system provides a good baseline for detecting movements, although the output still requires manual review; even so, it substantially reduces the effort of a fully manual annotation workflow. In addition, the software offers both graphical and command-line interfaces, enhancing its accessibility for diverse research applications. This work demonstrates that markerless, video-based pose estimation can be adapted to microgravity environments, but it still needs refinement and further investigation.
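The joint-angle computation described in the abstract can be sketched as follows. The function name and the landmark coordinates are illustrative, not taken from the thesis; the key point is that the angle between two body segments depends only on the relative positions of three landmarks, which makes it invariant to the whole-body drift and rotation typical of microgravity footage:

```python
import numpy as np

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle at joint b (in degrees) formed by the segments b->a and b->c.

    a, b, c are 3D landmark positions, e.g. shoulder, elbow, wrist
    as produced by a pose estimator such as MediaPipe.
    """
    u = a - b
    v = c - b
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip guards against floating-point values slightly outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

# Hypothetical landmarks: an arm bent at a right angle at the elbow.
shoulder = np.array([0.0, 0.0, 0.0])
elbow = np.array([0.0, -0.3, 0.0])
wrist = np.array([0.3, -0.3, 0.0])
print(joint_angle(shoulder, elbow, wrist))  # 90.0
```

Because only segment directions relative to the joint enter the formula, translating or rotating all three landmarks together (as happens when an astronaut drifts) leaves the computed angle unchanged.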
The repository for my bachelor thesis, containing its program and LaTeX code