Please use this identifier to cite or link to this item: http://210.212.227.212:8080/xmlui/handle/123456789/569
Full metadata record
DC Field: Value
dc.contributor.author: Sachin, John Thomas
dc.contributor.author: Imthias, Ahamed T P
dc.date.accessioned: 2024-07-08T05:35:01Z
dc.date.available: 2024-07-08T05:35:01Z
dc.date.issued: 2024-06-30
dc.identifier.uri: http://210.212.227.212:8080/xmlui/handle/123456789/569
dc.description.abstract: Reinforcement learning is a technique that enables agents to learn optimal behaviors through interaction with their environment, using rewards and penalties to shape their actions. In this project, we address the challenge of enabling a mobile robot to navigate environments such as a factory layout or a hospital setting while avoiding collisions with obstacles. The agent's main objective is to traverse these environments without colliding with any static or dynamic obstacles in its way. The robot, or agent, is equipped with three proximity sensors that measure the distance to obstacles in their respective directions. Learning is achieved by combining the Q-learning algorithm, a value-based reinforcement learning technique, with the A* algorithm, a heuristic-based search algorithm. Q-learning is notable for its ability to handle problems with varied state and action spaces, as well as for its simplicity and versatility across applications including robotics, game playing, and autonomous systems. However, Q-learning struggles with large state spaces. To address this, we adopt a heuristic approach that handles large state spaces and supports testing in unknown and dynamic environments. Python's Pygame library is used for testing and visualization. The agent is trained in a grid-based environment. The hybrid approach of combining Q-learning with the A* algorithm yields faster learning and lower computational time. This combination leverages the strengths of both methods, with Q-learning providing robust policy learning and A* offering efficient pathfinding through heuristic search. As a result, the agent learns to navigate complex environments efficiently while minimizing computational overhead, ultimately enhancing its ability to operate autonomously and safely in real-world scenarios.
dc.language.iso: en
dc.relation.ispartofseries: ;TKM22MEAI15
dc.title: PATH PLANNING IN ROBOTICS USING HYBRID Q-LEARNING APPROACH
dc.type: Technical Report
Appears in Collections: 2024
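The abstract describes combining Q-learning with an A*-style heuristic for grid navigation, but the report's exact integration scheme is not detailed on this page. The following is a minimal sketch under stated assumptions: a small grid world, Manhattan distance as the A* heuristic, and potential-based reward shaping as one common way to fold the heuristic into Q-learning. The grid dimensions, obstacle positions, reward values, and hyperparameters are all illustrative, not taken from the report.

```python
import random

# Minimal sketch: Q-learning on a grid, with an A*-style heuristic
# (Manhattan distance) used for potential-based reward shaping.
# All constants and parameters below are illustrative assumptions.

ROWS, COLS = 5, 5
GOAL = (4, 4)
OBSTACLES = {(1, 1), (2, 3), (3, 1)}
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def heuristic(s):
    # Manhattan distance: the admissible heuristic A* uses on a 4-connected grid.
    return abs(s[0] - GOAL[0]) + abs(s[1] - GOAL[1])

def step(s, a):
    nxt = (s[0] + a[0], s[1] + a[1])
    if not (0 <= nxt[0] < ROWS and 0 <= nxt[1] < COLS) or nxt in OBSTACLES:
        return s, -5.0, False           # collision / wall: penalty, stay put
    if nxt == GOAL:
        return nxt, 100.0, True         # goal reached, episode ends
    # Shaping term: reward moves that reduce the heuristic distance to the goal.
    shaped = heuristic(s) - heuristic(nxt)
    return nxt, -1.0 + shaped, False    # step cost plus shaping

def train(episodes=500, alpha=0.1, gamma=0.95, eps=0.2):
    Q = {}  # maps (state, action-index) -> value
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(200):            # cap episode length
            if random.random() < eps:   # epsilon-greedy exploration
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: Q.get((s, i), 0.0))
            nxt, r, done = step(s, ACTIONS[a])
            best_next = max(Q.get((nxt, i), 0.0) for i in range(len(ACTIONS)))
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (
                r + gamma * best_next - Q.get((s, a), 0.0))
            s = nxt
            if done:
                break
    return Q
```

After training, a greedy rollout over `Q` gives the learned path; the shaping term only biases learning speed, since potential-based shaping preserves the optimal policy of the unshaped problem.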

Files in This Item:
File: Sachin_PATH_PLANNING_IN_ROBOTICS_USING_REINFORCEMENT_LEARNING (6) (2)(1).pdf
Size: 2.29 MB
Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.