Academic Paper

Multi-Agent Reinforcement Learning for Side-by-Side Navigation of Autonomous Wheelchairs
Document Type
Conference
Source
2024 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), pp. 138-143, May 2024
Subject
Computing and Processing
Robotics and Control Systems
Signal Processing and Analysis
Navigation
Wheelchairs
Operating systems
Heuristic algorithms
Reinforcement learning
Mobile robots
Testing
Intelligent Robotics
Multi-Agent
Reinforcement Learning
Robot Operating System (ROS)
Language
English
ISSN
2573-9387
Abstract
This paper explores the use of robotics and decentralized Multi-Agent Reinforcement Learning (MARL) for side-by-side navigation of Intelligent Wheelchairs (IW). Evolving from a previous approach based on traditional single-agent methodologies, it adopts the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm to provide control input and enable a pair of IWs to be deployed as decentralized computing agents in real-world environments, removing the need for communication between them. In this study, the Flatland 2D simulator, in conjunction with the Robot Operating System (ROS), is used as a realistic environment to train and test the navigation algorithm. An overhaul of the reward function is introduced, which now provides individual rewards for each agent and revised reward incentives. Additionally, the logic for identifying side-by-side navigation was improved to encourage dynamic alignment control. The preliminary results outline a promising research direction, with the IWs learning to navigate in various realistic hallway test scenarios. The outcome also suggests that while the MADDPG approach holds potential over single-agent techniques for this decentralized IW robotics application, further investigation is needed before real-world deployment.
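To illustrate the kind of per-agent reward and side-by-side detection the abstract describes, the sketch below shows one plausible formulation in Python. It is not taken from the paper: the thresholds, the progress/alignment weighting, and the function names (is_side_by_side, individual_reward) are assumptions for illustration only; the authors' actual reward function and alignment logic may differ.

```python
import math

# Hypothetical thresholds -- not taken from the paper.
DESIRED_LATERAL_GAP = 1.0   # target side-by-side spacing in metres
GAP_TOLERANCE = 0.3         # acceptable deviation from the target gap
HEADING_TOLERANCE = 0.35    # max heading difference (rad) to count as aligned


def is_side_by_side(pose_a, pose_b):
    """Return True if the two wheelchairs are roughly abreast of each other.

    Each pose is (x, y, yaw). The partner's position is expressed in the
    first wheelchair's frame: 'abreast' means a small longitudinal offset,
    a lateral offset near the desired gap, and similar headings.
    """
    xa, ya, tha = pose_a
    xb, yb, thb = pose_b
    dx, dy = xb - xa, yb - ya
    # Longitudinal / lateral offsets of B in wheelchair A's frame.
    lon = math.cos(tha) * dx + math.sin(tha) * dy
    lat = -math.sin(tha) * dx + math.cos(tha) * dy
    heading_diff = abs(math.atan2(math.sin(thb - tha), math.cos(thb - tha)))
    return (abs(lon) < GAP_TOLERANCE
            and abs(abs(lat) - DESIRED_LATERAL_GAP) < GAP_TOLERANCE
            and heading_diff < HEADING_TOLERANCE)


def individual_reward(pose, partner_pose, prev_goal_dist, goal_dist, collided):
    """Per-agent reward: progress toward the goal plus a side-by-side bonus."""
    if collided:
        return -10.0                             # penalty for any collision
    reward = 2.0 * (prev_goal_dist - goal_dist)  # progress toward the goal
    if is_side_by_side(pose, partner_pose):
        reward += 0.5                            # alignment bonus, per agent
    return reward
```

Because each agent evaluates this reward from its own pose and its local observation of the partner, the formulation is compatible with decentralized execution, i.e. no explicit communication channel between the wheelchairs is required at run time.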