
Nova Sebayang

Abstract

This study develops a mobile robot control system based on a predictive model integrated with Reinforcement Learning (RL) to ensure optimal performance while keeping the robot safe in dynamic environments. The predictive model projects the robot's short-horizon trajectory, while RL optimizes the control policy through interaction with the environment, using a reward that weighs path efficiency against safety penalties. The approach is evaluated in simulation on scenarios with static and dynamic obstacles and compared against traditional control and a predictive-only controller. The results show that the integrated predictive-RL controller produces shorter paths and faster travel times (efficiency gains of up to 33%) with a 100% safety rate and no collisions. The cumulative reward increases over training, indicating that the RL policy successfully balances performance and safety. This work demonstrates that the predictive-RL approach enables a mobile robot to make adaptive, safe control decisions in real time, offering significant potential for robotic applications in dynamic environments.
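The reward design described above, which combines path efficiency with a safety penalty, can be sketched minimally as follows. All function names, weights, and distance thresholds here are illustrative assumptions, not values taken from the paper.

```python
import math

def step_reward(pos, goal, obstacles, collided,
                w_progress=1.0, w_safety=0.5, safe_dist=0.5,
                collision_penalty=100.0):
    """Per-step reward balancing path efficiency against safety.

    Illustrative sketch only: the weights and the safety margin are
    assumed values, not parameters reported in the article.
    """
    if collided:
        # Hard safety penalty: collisions dominate any efficiency gain.
        return -collision_penalty
    # Path-efficiency term: penalize remaining distance to the goal,
    # which encourages shorter paths and faster travel times.
    reward = -w_progress * math.dist(pos, goal)
    # Soft safety penalty: grows as the robot enters an obstacle's
    # safety margin, pushing the policy toward collision-free paths.
    for obs in obstacles:
        d = math.dist(pos, obs)
        if d < safe_dist:
            reward -= w_safety * (safe_dist - d) / safe_dist
    return reward
```

In an integrated predictive-RL loop, a reward of this shape would be accumulated over the short-horizon trajectories projected by the predictive model, so the learned policy trades off efficiency and safety exactly as the abstract describes.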

