Research Article
Open Access
CC BY

Design of an edge computing-based motion capture and model-driven system

Qidou Li 1*
1 School of Social Science, Soochow University
*Corresponding author: 2303408042@stu.suda.edu.cn
Published on 24 September 2025

Abstract

Motion capture has become a core technology for applications involving virtual digital humans, such as virtual streamers (VTubers). This paper proposes a motion-capture and model-driven system based on keypoint detection performed on edge-computing devices. The system runs independent keypoint detectors for separate body regions in parallel on edge devices, providing real-time keypoints for the body, hands, and face. By offloading these tasks to edge devices, it reduces the resource footprint of motion capture and lowers deployment costs. Experimental results show that using edge-computing devices significantly reduces the load on the host machine and enables deployment across a variety of platforms.
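
The paper itself does not specify the on-device detectors or the transport in this abstract; as a rough illustration of the host-side merging described above, the following minimal Python sketch assumes each edge device (for example, a MaixCAM board running its own detector) streams JSON keypoint packets over UDP. The port, packet format, and part names are hypothetical, not taken from the paper.

    # Hypothetical host-side sketch: merge keypoint streams from edge devices.
    # Assumes each edge device sends JSON packets of the form
    # {"part": "body", "ts": 1727000000.0, "keypoints": [[x, y], ...]} over UDP.

    import json
    import socket
    import threading
    from queue import Queue, Empty

    HOST, PORT = "0.0.0.0", 9000           # hypothetical listening address
    PARTS = ("body", "hands", "face")       # regions handled by separate edge devices


    def receive_packets(out_queue: Queue) -> None:
        """Listen for keypoint packets from the edge devices and enqueue them."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind((HOST, PORT))
        while True:
            data, _addr = sock.recvfrom(65535)
            try:
                out_queue.put(json.loads(data.decode("utf-8")))
            except (UnicodeDecodeError, json.JSONDecodeError):
                continue                    # drop malformed packets


    def drive_model(frame: dict) -> None:
        """Placeholder for the model-driving step (e.g., retargeting to an avatar)."""
        print({part: len(kps) for part, kps in frame.items()})


    def main() -> None:
        packets: Queue = Queue()
        threading.Thread(target=receive_packets, args=(packets,), daemon=True).start()

        latest = {}                         # most recent keypoints per body region
        while True:
            try:
                pkt = packets.get(timeout=1.0)
            except Empty:
                continue
            if pkt.get("part") in PARTS:
                latest[pkt["part"]] = pkt.get("keypoints", [])
            if all(p in latest for p in PARTS):
                drive_model(latest)         # drive the avatar once all regions are present

    if __name__ == "__main__":
        main()

In this sketch the host only aggregates results, which mirrors the offloading idea: the per-region detection runs on the edge devices, so the host machine's load is limited to merging keypoints and driving the model.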

Keywords:

edge computing, computer vision, multi-camera system, task offloading, MaixCAM


References

[1]. Li, C. (2017). Research on human–computer interaction technology in virtual reality systems (Master’s thesis). Zhejiang University, Hangzhou, China.

[2]. Feng, X. (2022). Challenges and countermeasure suggestions for the development of virtual digital humans under the background of the metaverse. Cultural Industry, (36), 19–21.

[3]. Wang, J. (2023). Research on the application of virtual digital human technology in the field of media. Modern Television Technology, (4), 102–105.

[4]. Hu, X. (2005). Theoretical research on 3D animation and virtual reality technology (Doctoral dissertation). Wuhan University, Wuhan, China.

[5]. Sharma, S., Verma, S., Kumar, M., et al. (2019). Use of motion capture in 3D animation: Motion capture systems, challenges, and recent trends. In 2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COMITCon) (pp. 289–294). Faridabad, India: IEEE. https://doi.org/10.1109/COMITCon.2019.8862448

[6]. Zhao, Z., Liu, F., Cai, Z., et al. (2018). Edge computing: Platforms, applications, and challenges. Journal of Computer Research and Development, 55(2), 327–337.

[7]. Maji, D., Nagori, S., Mathew, M., et al. (2022, April 14). YOLO-Pose: Enhancing YOLO for multi-person pose estimation using object keypoint similarity loss [Preprint]. arXiv. https://arxiv.org/abs/2204.06806

[8]. Khanam, R., & Hussain, M. (2024, October 23). YOLOv11: An overview of the key architectural enhancements [Preprint]. arXiv. https://arxiv.org/abs/2410.17725

[9]. Zhang, F., Bazarevsky, V., Vakunov, A., et al. (2020, June 18). MediaPipe Hands: On-device real-time hand tracking [Preprint]. arXiv. https://arxiv.org/abs/2006.10214

Cite this article

Li, Q. (2025). Design of an edge computing-based motion capture and model-driven system. Advances in Engineering Innovation, 16(9), 6–11.

Data availability

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

About volume

Journal: Advances in Engineering Innovation

Volume number: Vol. 16
Issue number: Issue 9
ISSN: 2977-3903 (Print) / 2977-3911 (Online)