JACIII Vol.6 No.1 pp. 19-24
doi: 10.20965/jaciii.2002.p0019


Reinforcement Learning of Fuzzy Control Rules with Context-Specific Segmentation of Actions

Hideki Yamagishi*, Hiroshi Kawakami*, Tadashi Horiuchi**, and Osamu Katai*

*Dept. of Systems Science, Graduate School of Informatics, Kyoto University, Yoshidahon-machi, Sakyo-ku, Kyoto, 606-8501 Japan

**Dept. of Information Engineering, Matsue National College of Technology, 14-4, Nishi-ikuma, Matsue, 690-8518 Japan

April 12, 2002
May 28, 2002
February 20, 2002
Knowledge acquisition mainly involves two approaches: deriving general or abstract rules from human expertise, such as heuristics about target systems, which are then refined using further information; and extracting proper rules from experimental information, i.e., information on rewards and penalties obtained from all the alternative rules initially prepared, which is our approach. Reinforcement learning methods are applied to problems where meaningful I/O sets cannot be specified beforehand. There are, however, few algorithms that extract heuristics for action selection from the results of reinforcement learning. We propose a way to apply symbolic processing methods such as C4.5 to the results of reinforcement learning into which fuzzy inference is incorporated. We also derive a proper action decision tree in which the conditions for proper agent actions are effectively integrated and simplified.
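The overall idea of extracting symbolic action-selection rules from reinforcement-learning results can be illustrated with a minimal Python sketch. This is not the paper's method: the task (a hypothetical 1-D corridor), the Q-learning parameters, and the rule extractor are all assumptions for illustration, and grouping contiguous states that share a greedy action stands in for decision-tree induction with C4.5 over the learned policy.

```python
import random

# Hypothetical toy task: a 1-D corridor of states 0..9, goal at the right end.
N_STATES, GOAL = 10, 9
ACTIONS = (0, 1)                 # 0 = move left, 1 = move right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def step(s, a):
    """Deterministic transition with a small step cost and a goal reward."""
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == GOAL else -0.01), s2 == GOAL

def q_learning(episodes=500, max_steps=100, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = rng.randrange(GOAL)              # start anywhere left of the goal
        for _ in range(max_steps):
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[s][x])
            s2, r, done = step(s, a)
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
            if done:
                break
    return q

def extract_rules(q):
    """Group contiguous states sharing a greedy action into one IF-THEN rule,
    a simple stand-in for decision-tree induction over the learned policy."""
    policy = [max(ACTIONS, key=lambda a: q[s][a]) for s in range(GOAL)]
    rules, start = [], 0
    for s in range(1, GOAL):
        if policy[s] != policy[start]:
            rules.append((start, s - 1, policy[start]))
            start = s
    rules.append((start, GOAL - 1, policy[start]))
    return rules

q = q_learning()
for lo, hi, a in extract_rules(q):
    print(f"IF state in [{lo}, {hi}] THEN action {'right' if a else 'left'}")
```

In this toy setting the whole state range collapses into a single rule because one action is optimal everywhere; on a richer task the extractor would yield one rule per context-specific segment of the state space.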
Cite this article as:
H. Yamagishi, H. Kawakami, T. Horiuchi, and O. Katai, “Reinforcement Learning of Fuzzy Control Rules with Context-Specific Segmentation of Actions,” J. Adv. Comput. Intell. Intell. Inform., Vol.6 No.1, pp. 19-24, 2002.
This article is published under a Creative Commons Attribution 4.0 International License.

