Paper:
Reinforcement Learning of Fuzzy Control Rules with Context-Specific Segmentation of Actions
Hideki Yamagishi*, Hiroshi Kawakami*, Tadashi Horiuchi**, and Osamu Katai*
*Dept. of Systems Science, Graduate School of Informatics, Kyoto University, Yoshida-Honmachi, Sakyo-ku, Kyoto, 606-8501 Japan
**Dept. of Information Engineering, Matsue National College of Technology, 14-4, Nishi-ikuma, Matsue, 690-8518 Japan
Received: April 12, 2002 / Accepted: May 28, 2002 / Published: February 20, 2002
Abstract
Knowledge acquisition mainly involves two approaches: deriving general or abstract rules from human expertise, such as heuristics about target systems, and refining them with further information; or extracting proper rules from experimental information, i.e., information on rewards and penalties obtained over all the alternative rules initially prepared, which is our approach. Reinforcement learning methods are applied to problems where meaningful I/O sets cannot be specified beforehand. There are, however, few algorithms that extract heuristics for action selection from the results of reinforcement learning. We propose a way to apply symbolic processing methods such as C4.5 to the results of reinforcement learning that incorporates fuzzy inference. We also derive an action decision tree in which the conditions for the agent's proper actions are effectively integrated and simplified.
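The following is a minimal sketch of the general idea described above: inducing a compact action decision tree from the results of reinforcement learning. It uses scikit-learn's CART-style DecisionTreeClassifier as a stand-in for C4.5; the toy state features, actions, and learned policy are hypothetical and are not the authors' experimental setup.

```python
# Sketch: extract symbolic condition-action rules (a decision tree) from
# the results of reinforcement learning. A CART-style tree stands in for
# C4.5; the data, feature names, and actions below are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical "results of reinforcement learning": for each visited state
# (described by two features), the greedy action chosen by the learned policy.
states = rng.uniform(-1.0, 1.0, size=(500, 2))                # e.g., (position, velocity)
greedy_action = (states[:, 0] + 0.5 * states[:, 1] > 0).astype(int)  # 0: left, 1: right

# Induce a small decision tree over the state features so that the learned
# condition-action mapping is integrated and simplified into readable rules.
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20)
tree.fit(states, greedy_action)

# Print the extracted IF-THEN conditions for action selection.
print(export_text(tree, feature_names=["position", "velocity"]))
```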
Cite this article as: H. Yamagishi, H. Kawakami, T. Horiuchi, and O. Katai, “Reinforcement Learning of Fuzzy Control Rules with Context-Specific Segmentation of Actions,” J. Adv. Comput. Intell. Intell. Inform., Vol.6 No.1, pp. 19-24, 2002.