
JACIII Vol.10 No.1 pp. 84-92
doi: 10.20965/jaciii.2006.p0084
(2006)

Paper:

FCAPS: Fuzzy Controller with Approximated Policy Search Approach

Agus Naba* and Kazuo Miyashita**

*Graduate School of Systems and Information Engineering, University of Tsukuba, 1-2-1 Namiki, Tsukuba, Ibaraki, Japan

**National Institute of Advanced Industrial Science and Technology (AIST), 1-2-1 Namiki, Tsukuba, Ibaraki, Japan

Received: April 13, 2005
Accepted: July 7, 2005
Published: January 20, 2006
Keywords: adaptive tuning, gradient descent search, fuzzy controller, reinforcement learning
Abstract
A fuzzy controller requires an engineer to tune its rules for controlling a given plant. To reduce this burden, we develop a gradient-based tuning method for the fuzzy controller. The method is closely related to reinforcement learning, but exploits a practical assumption that allows faster learning. In reinforcement learning, the values of problem states must be acquired through many trial-and-error interactions between the controller and the plant, and the controller must also learn the plant dynamics. In this research, we assume that an approximated value function of the problem states can be represented as a function of the Euclidean distance from a goal state and the action executed at that state, and we propose to use this approximation as the evaluation function for the gradient search. Experimental results on a pole-balancing problem show that the proposed method can tune the fuzzy controller to achieve an optimal policy for reaching the goal state despite unknown plant dynamics, in both a set-point problem and a tracking problem.
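
The mechanism described in the abstract, gradient-descent tuning of a fuzzy controller's consequents against a distance-based evaluation function, can be sketched roughly as follows. This is a minimal illustrative sketch only, not the paper's implementation: the toy plant, the rule centers and widths, the specific form of the evaluation function, and the secant estimate of its sensitivity to the action are all assumptions made here to keep the example self-contained and runnable.

import numpy as np

# Illustrative stand-in plant (not part of the tuning method; the tuner only
# observes states and actions, so any plant could be substituted here).
def toy_plant(state, u, dt=0.02):
    # Crudely linearized inverted pendulum: state = [angle, angular velocity].
    angle, omega = state
    omega_dot = 9.8 * angle + u
    return np.array([angle + dt * omega, omega + dt * omega_dot])

GOAL = np.array([0.0, 0.0])  # upright and motionless (assumed goal state)

def fuzzy_controller(state, consequents, centers, width=0.3):
    # Takagi-Sugeno-style controller: Gaussian memberships over rule centers;
    # the output is the normalized-firing-strength-weighted sum of the
    # tunable consequent parameters.
    w = np.exp(-np.sum((state - centers) ** 2, axis=1) / (2 * width ** 2))
    w = w / (w.sum() + 1e-12)
    return float(w @ consequents), w

def evaluation(state):
    # Approximated evaluation: Euclidean distance of the state from the goal.
    return float(np.linalg.norm(state - GOAL))

# Gradient-style tuning loop: nudge the consequents downhill on the evaluation,
# using a secant estimate of d(evaluation)/d(action) in place of the unknown
# plant gradient (the paper defines its own approximated value function here).
centers = np.array([[-0.3, 0.0], [0.0, 0.0], [0.3, 0.0]])  # assumed rule centers
consequents = np.zeros(3)                                   # tunable parameters
alpha = 0.5                                                 # learning rate
state = np.array([0.1, 0.0])
prev_eval, prev_u = evaluation(state), 0.0

for step in range(500):
    u, w = fuzzy_controller(state, consequents, centers)
    state = toy_plant(state, u)
    e = evaluation(state)
    de_du = (e - prev_eval) / (u - prev_u + 1e-6)   # sensitivity estimate
    consequents -= alpha * de_du * w                # chain rule: du/d(theta_i) = w_i
    prev_eval, prev_u = e, u

All names above (toy_plant, GOAL, the rule centers, the secant-based sensitivity estimate) are placeholders introduced for this sketch; the controller structure, evaluation function, and update rule actually used in the study are those defined in the article.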
Cite this article as:
A. Naba and K. Miyashita, “FCAPS: Fuzzy Controller with Approximated Policy Search Approach,” J. Adv. Comput. Intell. Intell. Inform., Vol.10 No.1, pp. 84-92, 2006.
