Abstract:
Every robotic navigation application needs to ensure that the robot avoids collisions with
obstacles in its path. Traditional path planning algorithms can drive a robot
from one point to another in a static environment, but they fail in dynamic environments.
cases. Ensuring the continuous motion of the robot without colliding with the obstacle is
necessary for exploration. Hence to enhance the area exploration by self learning a novel
method based on deep reinforcement learning is proposed. The current state of the environment is obtained from a 360-degree laser reading, and the robot chooses an action from a
predefined set of actions. Each action is a combination of linear and angular velocity,
helping the robot move smoothly in the environment. The trained robot was tested in
an unknown environment and showed good generalization of learning. The experiments
were conducted on the ROS-Gazebo framework integrated with OpenAI Gym.
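The abstract mentions a predefined set of actions, each pairing a linear and an angular velocity. As a minimal sketch of what such a discrete action set could look like (the specific velocity values and function name here are hypothetical, not taken from the paper):

```python
# Hypothetical discrete action set: each action index maps to a
# (linear velocity m/s, angular velocity rad/s) command pair.
ACTIONS = {
    0: (0.25,  0.0),   # move straight
    1: (0.15,  0.75),  # gentle left turn
    2: (0.15, -0.75),  # gentle right turn
    3: (0.05,  1.5),   # sharp left turn
    4: (0.05, -1.5),   # sharp right turn
}

def select_velocities(action_index):
    """Return the (linear, angular) velocity command for a chosen action."""
    return ACTIONS[action_index]
```

The agent would observe the 360-degree laser scan as its state vector, pick an action index (e.g. from a learned policy), and publish the corresponding velocity command to the robot.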