G. Cicirelli, T. D'Orazio, N. Ancona, and A. Distante (Italy)
Neural Networks, Reinforcement Learning, Artificial Intelligence, Wall-following.
The Q-learning algorithm, owing to its simplicity and well-developed theory, has been widely used in recent years to realize different behaviours for autonomous vehicles. Most applications use the standard tabular formulation with discrete state and action sets. To handle continuous variables, function approximators such as neural networks are required. In this work we investigate a neural approach to Q-learning on the robot navigation task of wall-following. Several issues have been addressed to deal with the convergence problem and the need for large training sets. The experience-replay paradigm has been applied to reduce the unlearning problem. Different neural network architectures have been implemented to exploit different spatial decompositions of the sensory input, and comparisons have been carried out to investigate how these choices affect learning convergence, the optimality of the final controller, and the generalization ability.
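As a rough illustration of the approach the abstract describes, the following sketch combines a small multilayer-perceptron Q-function with an experience-replay buffer that re-trains on randomly sampled past transitions. It is a minimal sketch, not the authors' implementation: the network sizes, learning rate, discount factor, and helper names (`q_values`, `train_step`, `remember`, `replay`) are all assumptions introduced here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: sonar readings in, discrete steering actions out
N_SENSORS, N_HIDDEN, N_ACTIONS = 8, 16, 3
GAMMA, LR = 0.95, 1e-2

# One-hidden-layer MLP approximating Q(s, .)
W1 = rng.normal(0, 0.1, (N_HIDDEN, N_SENSORS))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0, 0.1, (N_ACTIONS, N_HIDDEN))
b2 = np.zeros(N_ACTIONS)

def q_values(s):
    """Forward pass: return Q-values for all actions and the hidden layer."""
    h = np.tanh(W1 @ s + b1)
    return W2 @ h + b2, h

def train_step(s, a, r, s_next, done):
    """One Q-learning update on a single (s, a, r, s', done) transition."""
    global W1, b1, W2, b2
    q, h = q_values(s)
    q_next, _ = q_values(s_next)
    target = r + (0.0 if done else GAMMA * q_next.max())
    err = q[a] - target                       # TD error
    # Backpropagate the squared TD error through the chosen action's output
    gW2 = np.zeros_like(W2); gW2[a] = err * h
    gb2 = np.zeros_like(b2); gb2[a] = err
    dh = err * W2[a] * (1.0 - h**2)           # through the tanh hidden layer
    W2 -= LR * gW2; b2 -= LR * gb2
    W1 -= LR * np.outer(dh, s); b1 -= LR * dh

# Experience replay: store transitions and re-train on random past samples,
# which counters the "unlearning" of earlier experience during online training
buffer = []

def remember(transition):
    buffer.append(transition)

def replay(batch=32):
    if not buffer:
        return
    for i in rng.integers(0, len(buffer), size=min(batch, len(buffer))):
        train_step(*buffer[i])
```

In an online loop, each interaction step would call `remember` with the new transition and then `replay` to mix fresh and stored experience, the replay step being what the abstract credits with reducing unlearning.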