Real-Time Hand Gesture Recognition from EMG Biosignals Using Interpretable Deep Learning for Adaptive Prosthesis
Abstract
Surface electromyography (sEMG)-based hand gesture recognition has the potential to enable natural prosthetic control, but its performance often degrades under domain shift (electrode drift, muscle fatigue, cross-day variability) and suffers from limited model interpretability. This study proposes an interpretable and adaptive deep learning framework that combines two representation streams (time-domain and time–frequency) with multi-head attention and attribution consistency regularization to produce stable, clinician-auditable relevance maps. Robustness across sessions and subjects is improved through self-supervised pretraining (temporal contrastive learning), few-shot calibration, and domain-adversarial alignment (DANN). Uncertainty estimation and calibration (MC-Dropout with temperature scaling) drive confidence-gated control as a safety mechanism. Evaluation across three scenarios yields within-session accuracy of 96.8% (macro-F1 96.1%), cross-session accuracy of 91.7% (macro-F1 90.5%, ECE ≈ 3.6%), and cross-subject accuracy of 86.4%. Edge optimization (INT8 quantization with structured pruning) reduces inference latency from 92 ms (FP32) to roughly 52–66 ms with only about a 2% drop in accuracy, at a power consumption of roughly 5–6 W, meeting real-time response requirements. Reliability diagrams confirm that the predicted probabilities are well calibrated, while ablation analysis shows that attention, the attribution loss, and DANN each contribute significantly to performance and stability. Overall, the framework narrows the lab-to-clinic gap by providing an accurate, explainable, adaptive, efficient, and safe solution for real-time prosthetic hand control, and it opens research directions in longitudinal, continuous control based on user co-adaptation.
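As a concrete illustration of the safety mechanism the abstract describes, the following is a minimal PyTorch sketch of confidence-gated inference combining MC-Dropout sampling with a temperature-scaled softmax. The function names, the number of Monte Carlo samples, the temperature value, and the 0.85 gating threshold are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: confidence-gated prosthesis commands via MC-Dropout
# with temperature scaling. Assumes a PyTorch classifier whose
# architecture contains nn.Dropout layers; all names and values below
# are illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def enable_mc_dropout(model: nn.Module) -> None:
    """Put the model in eval mode but keep Dropout layers stochastic,
    so repeated forward passes yield Monte Carlo samples without
    disturbing BatchNorm running statistics."""
    model.eval()
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()


@torch.no_grad()
def mc_dropout_predict(model: nn.Module,
                       x: torch.Tensor,
                       n_samples: int = 20,
                       temperature: float = 1.5) -> torch.Tensor:
    """Average temperature-scaled softmax outputs over n_samples
    stochastic forward passes. The temperature is assumed to have been
    fitted beforehand on a held-out calibration split."""
    enable_mc_dropout(model)
    probs = torch.stack([
        F.softmax(model(x) / temperature, dim=-1)
        for _ in range(n_samples)
    ])
    return probs.mean(dim=0)  # calibrated predictive distribution


def confidence_gated_command(model: nn.Module,
                             emg_window: torch.Tensor,
                             threshold: float = 0.85):
    """Issue a gesture class only when the calibrated confidence clears
    the gate; otherwise abstain so the controller holds a safe state."""
    mean_probs = mc_dropout_predict(model, emg_window)
    confidence, predicted = mean_probs.max(dim=-1)
    if confidence.item() >= threshold:
        return int(predicted.item())  # forward to the prosthesis controller
    return None  # abstain: keep the current/neutral hand state
```

In this sketch, abstaining (returning None) corresponds to the confidence gate holding the prosthesis in its last safe state rather than executing an uncertain gesture; the same sampled passes could also supply predictive entropy if a richer uncertainty signal were needed.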