A reinforcement learning approach to obstacle avoidance of mobile robots
7th International Workshop on Advanced Motion Control. Proceedings (Cat. No.02TH8623), pp. 462-466
Abstract
One of the basic issues in the navigation of autonomous mobile robots is obstacle avoidance, commonly achieved with a reactive control paradigm in which a local mapping from perceived states to actions is acquired. A control strategy that learns in an unknown environment can be obtained with reinforcement learning, where the learning agent receives only sparse reward information. The resulting credit assignment problem has both temporal and structural aspects. While the temporal credit assignment problem is solved by the core elements of the reinforcement learning agent, solving the structural credit assignment problem requires an appropriate internal state-space representation of the environment. In this paper, a discrete coding of the input space using a neural network structure is presented, as opposed to the commonly used continuous internal representation. This enables faster and more efficient convergence of the reinforcement learning process.
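The combination described above, tabular reinforcement learning over a discrete neural-network-derived state coding, can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the sensor dimensionality, map size, action set, reward scheme, and the toy collision model are all assumptions, and the quantizer is a Kohonen-style self-organizing map, one of the techniques covered in the paper's references.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SENSORS = 8      # simulated range readings per step (assumed, not from the paper)
N_STATES = 16      # SOM units, i.e. discrete internal states (assumed)
N_ACTIONS = 3      # e.g. turn left, go straight, turn right (assumed)

som = rng.random((N_STATES, N_SENSORS))   # SOM codebook vectors
Q = np.zeros((N_STATES, N_ACTIONS))       # tabular Q-values over discrete states

def quantize(x, lr=0.1):
    """Map a continuous sensor vector to a discrete state index,
    nudging the winning codebook vector toward the input."""
    winner = int(np.argmin(np.linalg.norm(som - x, axis=1)))
    som[winner] += lr * (x - som[winner])
    return winner

def step(state, eps=0.1, alpha=0.5, gamma=0.9):
    """One Q-learning update with a sparse reward:
    -1 on (toy) collision, 0 otherwise."""
    a = rng.integers(N_ACTIONS) if rng.random() < eps else int(np.argmax(Q[state]))
    next_x = rng.random(N_SENSORS)        # stand-in for the next sensor reading
    collided = next_x.min() < 0.05        # toy collision test on the nearest range
    r = -1.0 if collided else 0.0
    s2 = quantize(next_x)
    Q[state, a] += alpha * (r + gamma * Q[s2].max() - Q[state, a])
    return s2

s = quantize(rng.random(N_SENSORS))
for _ in range(200):
    s = step(s)
```

The point of the discrete coding is that the sparse reward is assigned to one table cell `Q[s, a]` rather than diffused over a continuous function approximator, which is what makes the structural credit assignment tractable and convergence faster.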
Keywords
#Learning #Mobile robots #Control engineering computing #Automatic control #Neural networks #Fuzzy logic #Path planning #Robotics and automation #Navigation #State-space methods
