A Reinforcement Learning Scheme of Fuzzy Rules with Reduced Conditions
Hiroshi Kawakami*, Osamu Katai* and Tadataka Konishi**
*Department of Systems Science, Graduate School of Informatics, Kyoto University, Yoshida-Honmachi, Kyoto 606-8501, Japan
**Department of Information Technology, Faculty of Engineering, Okayama University, 3-1-1 Tsushima-Naka, Okayama 700-8530, Japan
Received: March 15, 1999 / Accepted: July 20, 1999 / Published: March 20, 2000
Keywords: Reinforcement learning, Fuzzy inference, Condition reduction, Continuous value, Q-learning
This paper proposes a new Q-learning method for systems whose states (conditions) and actions are continuous. The entries of the Q-table are interpolated by fuzzy inference. The initial set of fuzzy rules comprises all combinations of conditions and actions relevant to the problem. Each rule is then associated with a value from which the Q-values of condition/action pairs are estimated, and these values are revised by the Q-learning algorithm so as to make the fuzzy rule system effective. Although this framework may require a huge number of initial fuzzy rules, we show that their number can be reduced considerably by adopting what we call Condition Reduced Fuzzy Rules (CRFR). The antecedent part of a CRFR consists of all actions and a selected subset of conditions, and its consequent is its Q-value. Finally, experimental results show that controllers with CRFRs perform as well as a system with the most detailed fuzzy control rules, while the total number of parameters revised over the whole learning process is considerably reduced and the number of parameters revised at each learning step is increased.
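The core idea of the abstract, interpolating a Q-table by fuzzy inference and distributing the Q-learning update over the firing rules, can be illustrated with a minimal sketch. This is not the authors' implementation; the class and parameter names (`FuzzyQ`, `tri`, `alpha`, `gamma`) are hypothetical, and a single continuous state dimension with triangular membership functions is assumed for brevity.

```python
import numpy as np

def tri(x, c, w):
    """Triangular membership function centered at c with half-width w."""
    return max(0.0, 1.0 - abs(x - c) / w)

class FuzzyQ:
    """Sketch of fuzzy-interpolated Q-learning (assumed structure, not the paper's code).

    Each (fuzzy condition, action) rule carries one value; Q(s, a) is the
    membership-weighted average of these values, and the TD error is
    distributed over the rules in proportion to their firing strengths.
    """
    def __init__(self, centers, n_actions, alpha=0.1, gamma=0.9):
        self.centers = np.asarray(centers)        # centers of the fuzzy partitions
        self.w = self.centers[1] - self.centers[0]  # half-width = grid spacing
        self.q = np.zeros((len(centers), n_actions))  # one value per rule
        self.alpha, self.gamma = alpha, gamma

    def memberships(self, s):
        mu = np.array([tri(s, c, self.w) for c in self.centers])
        return mu / mu.sum()                      # normalized firing strengths

    def q_value(self, s, a):
        # Q(s, a) interpolated by fuzzy inference over the rule values
        return self.memberships(s) @ self.q[:, a]

    def update(self, s, a, r, s_next):
        mu = self.memberships(s)
        target = r + self.gamma * max(self.q_value(s_next, b)
                                      for b in range(self.q.shape[1]))
        td = target - self.q_value(s, a)
        # each firing rule is revised by its share of the TD error
        self.q[:, a] += self.alpha * td * mu
```

A state that falls between two partition centers fires both rules, so its estimated Q-value interpolates smoothly between their values, and one update revises several rule values at once, which is the effect the abstract refers to when it notes that more parameters are revised at each learning step.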
Cite this article as: H. Kawakami, O. Katai, and T. Konishi, “A Reinforcement Learning Scheme of Fuzzy Rules with Reduced Conditions,” J. Adv. Comput. Intell. Intell. Inform., Vol.4, No.2, pp. 146-151, 2000.