Paper:
Acceleration of Reinforcement Learning with Incomplete Prior Information
Kento Terashima, Hirotaka Takano, and Junichi Murata
Department of Electrical and Electronic Engineering, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka 819-0395, Japan
Reinforcement learning is applicable to complex or unknown problems because the solution is found by trial-and-error search. However, the computation time required for this trial-and-error search grows as the scale of the problem increases. Therefore, several methods that use prior information about the problem have been proposed to reduce the computation time. This paper improves a previously proposed method that uses options as prior information. To increase the learning speed even when the given options are wrong, option-correction methods that forget the option policy and extend the initiation sets are proposed.
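The corrections named in the abstract operate on options in the sense of [3]: an option bundles an initiation set, an internal policy, and a termination condition. The following Python sketch, with hypothetical names (`Option`, `forget_policy`, `extend_initiation_set`) and assuming a discrete state space, illustrates these two correction operations; it is an illustration only, not the authors' implementation.

```python
# Minimal sketch of the options framework [3] and the two correction
# operations mentioned in the abstract. All identifiers are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Dict, Hashable, Set

State = Hashable
Action = int

@dataclass
class Option:
    """A temporally extended action: where it may start, what it does, when it stops."""
    initiation_set: Set[State]                                  # states where the option may be invoked
    policy: Dict[State, Action] = field(default_factory=dict)   # state -> primitive action
    termination: Callable[[State], float] = lambda s: 0.0       # beta(s): probability of stopping in s

    def available(self, state: State) -> bool:
        return state in self.initiation_set

def forget_policy(option: Option) -> None:
    """Correction 1 (sketch): discard the option's internal policy so it is
    relearned from experience instead of misleading the agent."""
    option.policy.clear()

def extend_initiation_set(option: Option, new_states: Set[State]) -> None:
    """Correction 2 (sketch): enlarge the initiation set so the option remains
    applicable in states the prior information did not anticipate."""
    option.initiation_set |= new_states
```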
- [1] R. S. Sutton and A. G. Barto, “Reinforcement Learning: An Introduction,” MIT Press, 1998.
- [2] A. McGovern, R. S. Sutton, and A. H. Fagg, “Roles of Macro-Actions in Accelerating Reinforcement Learning,” Proc. of the 1997 Grace Hopper Celebration of Women in Computing, pp. 1-6, 1997.
- [3] R. S. Sutton, D. Precup, and S. Singh, “Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning,” Artificial Intelligence, Vol.112, pp. 181-211, 1999.
- [4] M. Shokri, “Knowledge of opposite actions for reinforcement learning,” Applied Soft Computing, Vol.11, pp. 4097-4109, 2011.
- [5] R. S. Sutton, D. Precup, and S. Singh, “Intra-Option Learning about Temporally Abstract Actions,” In Proc. of the 15th Int. Conf. on Machine Learning, pp. 556-564, 1998.
- [6] S. Kato and H. Matsuo, “A Theory of Profit Sharing in Dynamic Environment,” Proc. of the Sixth Pacific Rim Int. Conf. on Artificial Intelligence, pp. 136-145, 2000.
- [7] T. Minato and M. Asada, “Environmental Change Adaptation for Mobile Robot Navigation,” J. of the Robotics Society of Japan, Vol.18, No.5, pp. 706-712, 2000.
- [8] T. Matsui, N. Inuzuka, H. Seki, and H. Itoh, “Using Concept Learning for Restructuring Control Policy in Reinforcement Learning,” Trans. of the Japanese Society for Artificial Intelligence (JSAI), Vol.17, No.2, pp. 135-144, 2002.
- [9] K. Sakai and J. Murata, “Reinforcement learning using a priori information that can cope with incompleteness of information,” Proc. of the SICE Symposium on Systems and Information 2009, 2009.
- [10] K. Terashima and J. Murata, “A Study on Use of Prior Information for Acceleration of Reinforcement Learning,” SICE Annual Conf. 2011, pp. 537-543, 2011.
- [11] F. Ogihara and J. Murata, “A Method for Finding Multiple Subgoals for Reinforcement Learning,” Proc. of the 16th Int. Symposium on Artificial Life and Robotics (AROB 16th ’11), pp. 804-807, 2011.
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.