
Hartono Ridwan

Abstract

This research proposes GNN-FT-SLAM, a disturbance-tolerant sensor fusion framework for autonomous robots that combines a Graph Neural Network (GNN) at the perception layer with an uncertainty-aware factor-graph SLAM backend. The GNN constructs a multi-sensor graph (camera, LiDAR, IMU, odometry) to contextually model measurement reliability and to predict adaptive covariances that are then used as factor weights in the SLAM optimization. The pipeline includes multi-sensor synchronization, dynamic graph construction, reliability-focused message passing, probabilistic (aleatoric/epistemic) output heads, and modules for fault detection and isolation and for modality reconfiguration (fallback and dynamic factor activation). Evaluations on nominal, synthetic stress (motion blur, glare/low light, LiDAR sparsity, IMU bias), and real-world fault scenarios demonstrate performance improvements over robust baselines (ORB-SLAM3, LIO-SAM, VINS-Mono): a 32–55% reduction in ATE, improved RPE, fault detection AUROC up to 0.92, and better uncertainty calibration (lower NLL and ECE). The system runs in real time (~27 Hz) on an edge GPU with an average latency of 37 ms. These findings confirm that combining deep graph representations with probabilistic inference yields adaptive, uncertainty-aware, and fault-tolerant sensor fusion, relevant to autonomous robot operation in dynamic and cluttered environments.
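The abstract describes GNN-predicted covariances being reused as factor weights in the SLAM optimization. The following is a minimal sketch of that coupling, assuming a generic nonlinear least-squares backend with Gauss-Newton updates; the `Factor` class, the `gauss_newton_step` function, and the use of the inverse predicted covariance as an information-matrix weight are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the paper's code): each measurement factor carries a
# covariance predicted by a perception-layer model; its inverse (information
# matrix) weights the factor's contribution in the nonlinear least-squares step.
import numpy as np

class Factor:
    """A measurement factor with an adaptive, prediction-driven weight."""
    def __init__(self, residual_fn, jacobian_fn, covariance):
        self.residual_fn = residual_fn                  # r(x): measurement residual
        self.jacobian_fn = jacobian_fn                  # J(x): d r / d x
        self.information = np.linalg.inv(covariance)    # larger uncertainty -> smaller weight

def gauss_newton_step(x, factors):
    """One Gauss-Newton update with per-factor information-matrix weighting."""
    H = np.zeros((x.size, x.size))
    b = np.zeros(x.size)
    for f in factors:
        r = f.residual_fn(x)
        J = f.jacobian_fn(x)
        W = f.information
        H += J.T @ W @ J
        b += J.T @ W @ r
    return x - np.linalg.solve(H, b)

if __name__ == "__main__":
    # Toy example: fuse two direct measurements of a scalar state; the noisier
    # one (larger predicted covariance) contributes less to the estimate.
    make = lambda z, cov: Factor(lambda x: np.array([x[0] - z]),
                                 lambda x: np.array([[1.0]]),
                                 np.array([[cov]]))
    x = gauss_newton_step(np.array([0.0]), [make(1.0, 0.1), make(3.0, 10.0)])
    print(x)  # close to 1.0, reflecting the down-weighting of the noisy measurement
```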


How to Cite
Ridwan, H. (2025). Fault-Tolerant Sensor Fusion for Autonomous Mobile Robots Using Graph Neural Networks and Uncertainty-Aware SLAM. Journal of Electrical Engineering, 3(02), 47–53. https://doi.org/10.54209/elimensi.v3i02.402
References
[1] C. Campos, R. Elvira, J. J. Gómez Rodríguez, J. M. M. Montiel, J. D. Tardós, “ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual–Inertial, and Multimap SLAM,” IEEE Transactions on Robotics, vol. 37, no. 6, pp. 1874–1890, 2021. DOI: 10.1109/TRO.2021.3075644.
[2] T. Shan, B. Englot, D. Meyers, W. Wang, C. Ratti, D. Rus, “LIO-SAM: Tightly-Coupled Lidar Inertial Odometry via Smoothing and Mapping,” Proc. IEEE/RSJ IROS, 2020, pp. 5135–5142. DOI: 10.1109/IROS45743.2020.9341176.
[3] T. Qin, P. Li, S. Shen, “VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator,” IEEE Transactions on Robotics, vol. 34, no. 4, pp. 1004–1020, 2018. DOI: 10.1109/TRO.2018.2853729.
[4] C. Forster, L. Carlone, F. Dellaert, D. Scaramuzza, “IMU Preintegration on Manifold for Efficient Visual-Inertial Maximum-a-Posteriori Estimation,” Robotics: Science and Systems (RSS), 2015. DOI: 10.15607/RSS.2015.XI.006.
[5] F. Dellaert, M. Kaess, “Factor Graphs for Robot Perception,” Foundations and Trends in Robotics, vol. 6, no. 1–2, pp. 1–139, 2017. DOI: 10.1561/2300000043.
[6] K. Eckenhoff, P. Geneva, G. Huang, “Closed-Form Preintegration Methods for Graph-Based Visual–Inertial Navigation,” The International Journal of Robotics Research, vol. 38, no. 5, pp. 563–586, 2019. DOI: 10.1177/0278364919835021.
[7] A. Kendall, Y. Gal, “What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?” NeurIPS, 2017. (On aleatoric and epistemic uncertainty.)
[8] J. T. Barron, “A General and Adaptive Robust Loss Function,” CVPR, 2019, pp. 4331–4339. DOI: 10.1109/CVPR.2019.00446.
[9] Z. Teed, J. Deng, “DROID-SLAM: Deep Visual SLAM for Monocular, Stereo, and RGB-D Cameras,” NeurIPS, 2021.
[10] A. Rosinol, M. Abate, Y. Chang, L. Carlone, “Kimera: an Open-Source Library for Real-Time Metric-Semantic Localization and Mapping,” Proc. IEEE ICRA, 2020, pp. 1689–1696. DOI: 10.1109/ICRA40945.2020.9196885.
[11] J. Czarnowski, T. Laidlow, R. Clark, A. J. Davison, “DeepFactors: Real-Time Probabilistic Dense Monocular SLAM,” IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 721–728, 2020. DOI: 10.1109/LRA.2020.2965415.
[12] J. Song, H. Jo, Y. Jin, S. J. Lee, “Uncertainty-Aware Depth Network for Visual Inertial Odometry of Mobile Robots,” Sensors, vol. 24, no. 20, 6665, 2024. DOI: 10.3390/s24206665.
[13] A. I. Mourikis, S. I. Roumeliotis, “A Multi-State Constraint Kalman Filter for Vision-Aided Inertial Navigation,” Proc. IEEE ICRA, 2007, pp. 3565–3572. DOI: 10.1109/ROBOT.2007.364024.
[14] R. Mascaro, D. M. Rosen, L. Carlone, “Scene Representations for Robotic Spatial Perception,” Annual Review of Control, Robotics, and Autonomous Systems, vol. 8, 2025. DOI: 10.1146/annurev-control-040423-030709.
[15] A. Rosinol, A. Gupta, M. Abate, L. Carlone, “Kimera-Multi: A System for Distributed Multi-Robot Metric-Semantic SLAM,” Proc. IEEE ICRA, 2021, pp. 11210–11218. DOI: 10.1109/ICRA48506.2021.9561090.