
PATH PLANNING IN ROBOTICS USING HYBRID Q-LEARNING APPROACH


dc.contributor.author Sachin, John Thomas
dc.contributor.author Imthias, Ahamed T P
dc.date.accessioned 2024-07-08T05:35:01Z
dc.date.available 2024-07-08T05:35:01Z
dc.date.issued 2024-06-30
dc.identifier.uri http://210.212.227.212:8080/xmlui/handle/123456789/569
dc.description.abstract Reinforcement learning is a technique that enables agents to learn optimal behaviors through interaction with their environment, using rewards and penalties to shape their actions. In this project, we address the challenge of enabling a mobile robot to navigate environments such as a factory layout or a hospital setting while avoiding collisions with obstacles. The main objective of the agent is to move through these environments without colliding with any of the static or dynamic obstacles in its way. The robot, or agent, is equipped with three proximity sensors that detect how close obstacles are in their respective directions. Learning is achieved by combining the Q-learning algorithm, a value-based reinforcement learning technique, with the A* algorithm, a heuristic-based search algorithm. Q-learning is notable for its ability to handle problems with varied state and action spaces, as well as for its simplicity and versatility across applications including robotics, game playing, and autonomous systems. However, Q-learning struggles with large state spaces. To address this, we adopt a heuristic approach that handles both large state spaces and testing in unknown and dynamic environments. Python's Pygame library is used for testing and visualization, and the agent is trained within a grid-based environment. The hybrid approach of combining Q-learning with the A* algorithm yields faster learning and lower computational cost. This combination leverages the strengths of both methods, with Q-learning providing robust policy learning and A* offering efficient pathfinding through heuristic search. As a result, the agent learns to navigate complex environments efficiently while minimizing computational overhead, ultimately enhancing its ability to operate autonomously and safely in real-world scenarios. en_US
dc.language.iso en en_US
dc.relation.ispartofseries ;TKM22MEAI15
dc.title PATH PLANNING IN ROBOTICS USING HYBRID Q-LEARNING APPROACH en_US
dc.type Technical Report en_US
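
The abstract above describes a hybrid of tabular Q-learning and the A* algorithm for grid-based navigation, but the record does not include the formulation itself. The sketch below is one plausible, minimal reading of such a hybrid, assuming the A*-style Manhattan-distance heuristic is folded into Q-learning as potential-based reward shaping on a small grid with static obstacles; the grid layout, rewards, and hyperparameters are illustrative assumptions and not taken from the report.

# Minimal illustrative sketch (not the report's actual code): tabular
# Q-learning on a small grid, where the admissible Manhattan-distance
# heuristic that A* would use is applied as potential-based reward shaping.
import random

ROWS, COLS = 10, 10
GOAL = (9, 9)
OBSTACLES = {(3, 3), (3, 4), (6, 7)}          # assumed static obstacle layout
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

ALPHA, GAMMA, EPSILON, EPISODES = 0.1, 0.95, 0.1, 2000

def heuristic(state):
    """Manhattan distance to the goal -- the same heuristic A* would use."""
    return abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1])

def step(state, action):
    """Apply an action; collisions and leaving the grid keep the agent in place."""
    nxt = (state[0] + action[0], state[1] + action[1])
    if nxt in OBSTACLES or not (0 <= nxt[0] < ROWS and 0 <= nxt[1] < COLS):
        return state, -5.0, False          # penalty for hitting an obstacle or wall
    if nxt == GOAL:
        return nxt, 100.0, True            # large reward for reaching the goal
    return nxt, -1.0, False                # small step cost encourages short paths

# Q-table over (state, action-index) pairs, initialised to zero
Q = {((r, c), a): 0.0 for r in range(ROWS) for c in range(COLS) for a in range(len(ACTIONS))}

for _ in range(EPISODES):
    state = (0, 0)
    for _t in range(400):                  # cap episode length
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[(state, i)])
        nxt, reward, done = step(state, ACTIONS[a])
        # potential-based shaping with potential -heuristic(s): rewards progress
        # toward the goal without changing the optimal policy
        shaped = reward + GAMMA * (-heuristic(nxt)) - (-heuristic(state))
        best_next = max(Q[(nxt, i)] for i in range(len(ACTIONS)))
        Q[(state, a)] += ALPHA * (shaped + GAMMA * best_next - Q[(state, a)])
        state = nxt
        if done:
            break

Potential-based shaping of this kind biases exploration toward the goal while leaving the optimal policy unchanged, which is one way an A*-style heuristic can reduce training time relative to plain Q-learning; the report's exact hybrid formulation may differ.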

