JACIII Vol.4 No.5 pp. 349-354
doi: 10.20965/jaciii.2000.p0349
(2000)

Paper:

Self-Learning Fuzzy Logic Controller using Q-Learning

Min-Soeng Kim, Sun-Gi Hong and Ju-Jang Lee

Department of Electrical Engineering and Computer Science, Korea Advanced Institute of Science and Technology, 373-1 Kusong-dong, Yusong-gu, Taejon 305-701, Korea. Telephone: +82-42-869-3432

Received:
August 18, 2000
Accepted:
October 1, 2000
Published:
September 20, 2000
Keywords:
Fuzzy logic controller, Q-Learning, Reinforcement learning, Self-Organizing fuzzy logic controller
Abstract
Fuzzy logic controllers consist of if-then fuzzy rules that are generally adopted from a priori expert knowledge. However, it is not always easy or cheap to obtain expert knowledge. Q-learning can acquire knowledge from experience even without a model of the environment, but the conventional Q-learning algorithm cannot deal with continuous states and continuous actions. The fuzzy logic controller, in contrast, inherently accepts continuous input values and generates continuous output values. Thus, in this paper, the Q-learning algorithm is incorporated into the fuzzy logic controller so that each method compensates for the other's disadvantages. Modified fuzzy rules are proposed in order to incorporate the Q-learning algorithm into the fuzzy logic controller, and this combination yields a fuzzy logic controller that can learn from experience. Since Q-values in Q-learning are functions of both the state and the action, the conventional Q-learning algorithm cannot be applied directly to the proposed fuzzy logic controller. Interpolation is therefore used in each modified fuzzy rule so that the Q-value can be updated.
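The paper itself contains no code, but the idea described in the abstract can be illustrated with a minimal sketch: each fuzzy rule carries q-values for a small set of candidate actions, the rules' normalized firing strengths interpolate both the continuous control output and the Q-value of the chosen action, and the temporal-difference error is distributed back to the rules in proportion to their firing strengths. All names, the Gaussian membership shapes, and the exact update below are illustrative assumptions, not the authors' precise formulation.

import numpy as np

class FuzzyQController:
    """Illustrative fuzzy Q-learning controller for a one-dimensional continuous state."""

    def __init__(self, centers, widths, candidate_actions,
                 alpha=0.1, gamma=0.95, epsilon=0.1):
        self.centers = np.asarray(centers, dtype=float)        # rule antecedent centers
        self.widths = np.asarray(widths, dtype=float)          # Gaussian membership widths
        self.actions = np.asarray(candidate_actions, dtype=float)  # discrete candidates per rule
        self.q = np.zeros((len(self.centers), len(self.actions)))  # q-value per (rule, candidate)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def _firing(self, x):
        """Normalized firing strength of each rule for continuous state x."""
        w = np.exp(-((x - self.centers) / self.widths) ** 2)
        return w / (w.sum() + 1e-12)

    def act(self, x):
        """Continuous action: firing-strength-weighted mix of each rule's chosen candidate."""
        w = self._firing(x)
        greedy = self.q.argmax(axis=1)                          # greedy candidate inside each rule
        explore = np.random.rand(len(w)) < self.epsilon         # epsilon-greedy exploration per rule
        choice = np.where(explore,
                          np.random.randint(len(self.actions), size=len(w)),
                          greedy)
        u = float(w @ self.actions[choice])                     # interpolated continuous output
        q_su = float(w @ self.q[np.arange(len(w)), choice])     # interpolated Q(s, u)
        return u, choice, w, q_su

    def update(self, w, choice, q_su, reward, x_next):
        """TD update, distributed over the rules in proportion to their firing strengths."""
        w_next = self._firing(x_next)
        q_next = float(w_next @ self.q.max(axis=1))             # greedy interpolated Q(s', .)
        delta = reward + self.gamma * q_next - q_su
        self.q[np.arange(len(w)), choice] += self.alpha * delta * w

A control loop would call act() on the current state, apply the returned continuous output to the plant, observe the reward and next state, and then call update() with the values returned by act().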
Cite this article as:
M. Kim, S. Hong, and J. Lee, “Self-Learning Fuzzy Logic Controller using Q-Learning,” J. Adv. Comput. Intell. Intell. Inform., Vol.4 No.5, pp. 349-354, 2000.