Adaptive Reinforcement Learning Integrating Exploitation- and Exploration-oriented Learning
Satoshi Kurihara*, Rikio Onai** and Toshiharu Sugawara*
*NTT Network Innovation Laboratories 3-9-11 Midori-Cho, Musashino-Shi, Tokyo, 180-8585 Japan Tel: +81 422 59 4139, Fax: +81 422 59 2225
**NTT Software Corporation 209 Yamashita-cho Naka-ku Yokohama-shi, Kanagawa 231-8551 Japan
Received: January 1, 1970; Accepted: August 21, 1999; Published: December 20, 1999
Keywords: Reinforcement learning, Exploitation-oriented learning, Exploration-oriented learning, Multi-agent model, Dynamic environment
Abstract
We propose and evaluate an adaptive reinforcement learning system that integrates both exploitation- and exploration-oriented learning (ArLee). Compared to conventional reinforcement learning, ArLee is more robust in a dynamically changing environment and carries out exploration-oriented learning efficiently even in a large-scale environment. It is thus well suited for autonomous systems, such as software agents and mobile robots, that operate in dynamic, large-scale environments like the real world and the Internet. Simulation results demonstrate the learning system's basic effectiveness.
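To make the abstract's idea of integrating the two learning styles concrete, the following is a minimal, hedged sketch in Python. It is not the paper's ArLee algorithm: the chain environment, the profit-sharing-style exploitation update, the one-step Q-learning exploration update, and the `exploit_weight` switching heuristic are all assumptions introduced purely for illustration of how an exploitation-oriented and an exploration-oriented learner might be combined adaptively.

```python
# Hedged illustrative sketch (not the paper's ArLee algorithm): an agent that
# mixes an exploration-oriented learner (one-step Q-learning with epsilon-greedy
# action selection) and an exploitation-oriented learner (a profit-sharing-style
# update that reinforces the whole episode once a reward is reached).
# The environment, parameter values, and switching heuristic are assumptions.
import random

N_STATES, GOAL = 6, 5          # simple chain: states 0..5, reward at state 5
ACTIONS = (-1, +1)             # step left / step right

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == GOAL else 0.0)

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.2, 0.9, 0.3
exploit_weight = 0.5           # adaptively shifted between the two learners

def choose(state):
    # Less random exploration as the agent leans toward exploitation.
    if random.random() < epsilon * (1.0 - exploit_weight):
        return random.choice(ACTIONS)                     # exploration-oriented choice
    return max(ACTIONS, key=lambda a: q[(state, a)])      # exploitation-oriented choice

recent = []
for episode in range(200):
    state, trace, total = 0, [], 0.0
    for _ in range(20):
        action = choose(state)
        nxt, reward = step(state, action)
        trace.append((state, action))
        # Exploration-oriented update: standard one-step Q-learning.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        total += reward
        state = nxt
        if reward > 0:
            # Exploitation-oriented update: profit-sharing-style credit to the
            # whole successful episode, scaled by the adaptive exploit_weight.
            for i, (s, a) in enumerate(reversed(trace)):
                q[(s, a)] += exploit_weight * reward * (gamma ** i)
            break
    # Crude adaptation: lean on exploitation while recent reward stays high, and
    # fall back toward exploration when performance drops (e.g., the environment
    # has changed).
    recent = (recent + [total])[-10:]
    exploit_weight = min(0.9, max(0.1, sum(recent) / len(recent)))

print("greedy action value at start state:", max(q[(0, a)] for a in ACTIONS))
```

The design choice illustrated here is only the general principle the abstract describes: exploitation-oriented credit assignment is emphasized while the environment appears stable, and exploration-oriented learning regains weight automatically when recent performance degrades.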
Cite this article as: S. Kurihara, R. Onai, and T. Sugawara, "Adaptive Reinforcement Learning Integrating Exploitation- and Exploration-oriented Learning," J. Adv. Comput. Intell. Intell. Inform., Vol.3 No.6, pp. 474-478, 1999.