JACIII Vol.16 No.3 pp. 397-403
doi: 10.20965/jaciii.2012.p0397


Substitute Target Learning Based Control System for Control Knowledge Acquisition Within Constrained Environment

Syafiq Fauzi Kamarulzaman, Takeshi Shibuya, and Seiji Yasunobu

Department of Intelligent Interaction Technologies, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8573, Japan

Received: September 30, 2011
Accepted: December 6, 2011
Published: May 20, 2012
Keywords: reinforcement learning, inverted pendulum, substitute target

Real-time control operations are usually conducted within constrained environments. A human operator must constantly update his or her knowledge in order to respond flexibly to different constraints and to configure a control method that works around them. In this research, a control system based on substitute target learning is proposed, enabling the controller to configure its own control method around the constraints. The proposed system is applied to an inverted pendulum, and its effectiveness is confirmed through a series of simulations and an experiment on a real machine.
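As background for the reinforcement-learning setting the paper builds on (cf. Sutton and Barto, reference [2]), the following is a minimal, hypothetical sketch of tabular Q-learning balancing a crudely simulated inverted pendulum. The dynamics, discretization, and hyperparameters are all illustrative assumptions; this is not the paper's substitute target learning method.

```python
import math
import random

# Illustrative assumptions: a unit-mass, unit-length pendulum near upright,
# bang-bang torque control, and a coarse state grid. None of these values
# come from the paper.
DT = 0.02          # integration step [s]
GRAVITY = 9.8      # gravity term of the simplified pendulum
TORQUE = 8.0       # assumed control authority
ACTIONS = (-TORQUE, TORQUE)
BAND = 0.5         # |theta| beyond this counts as a fall [rad]


def step(theta, omega, torque):
    """Euler-integrate the pendulum for one tick."""
    omega += (GRAVITY * math.sin(theta) + torque) * DT
    theta += omega * DT
    return theta, omega


def discretize(theta, omega):
    """Map the continuous state onto a coarse 10x10 grid."""
    t = min(9, max(0, int((theta + BAND) * 10)))
    w = min(9, max(0, int((omega + 2.0) * 2.5)))
    return t, w


def greedy(q, s):
    """Pick the action with the highest Q-value (ties go to the first)."""
    return max(range(len(ACTIONS)), key=lambda a: q.get((s, a), 0.0))


def train(episodes=2000, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning: reward +1 per surviving step, -1 on falling."""
    rng = random.Random(seed)
    q = {}
    for _ in range(episodes):
        theta, omega = rng.uniform(-0.3, 0.3), 0.0
        for _ in range(200):
            s = discretize(theta, omega)
            a = rng.randrange(len(ACTIONS)) if rng.random() < eps else greedy(q, s)
            theta, omega = step(theta, omega, ACTIONS[a])
            fell = abs(theta) > BAND
            target = -1.0 if fell else 1.0
            if not fell:
                s2 = discretize(theta, omega)
                target += gamma * q.get((s2, greedy(q, s2)), 0.0)
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (target - q.get((s, a), 0.0))
            if fell:
                break
    return q


def balance_steps(q, theta=0.2, max_steps=500):
    """Count how long the greedy policy keeps the pole inside the band."""
    omega = 0.0
    for t in range(max_steps):
        s = discretize(theta, omega)
        theta, omega = step(theta, omega, ACTIONS[greedy(q, s)])
        if abs(theta) > BAND:
            return t
    return max_steps
```

For example, `balance_steps(train())` should keep the pole up for noticeably longer than the untrained policy `balance_steps({})`, which applies a constant torque and falls almost immediately.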

Cite this article as:
S. F. Kamarulzaman, T. Shibuya, and S. Yasunobu, “Substitute Target Learning Based Control System for Control Knowledge Acquisition Within Constrained Environment,” J. Adv. Comput. Intell. Intell. Inform., Vol.16, No.3, pp. 397-403, 2012.
References:
  [1] E. Kawana and S. Yasunobu, “An Intelligent Control System Using Object Model by Real-Time Learning,” Proc. of SICE Annual Conf., pp. 2792-2797, 2007.
  [2] R. S. Sutton and A. G. Barto, “Reinforcement Learning: An Introduction,” MIT Press, 1998.
  [3] T. Matsubara and S. Yasunobu, “An Intelligent Control Based on Fuzzy Target and Its Application to Car-like Vehicle,” Proc. of SICE Annual Conf., 2004.
  [4] S. Yasunobu and H. Yamasaki, “Evolutionary Control Method and Swing Up and Stabilization Control of Inverted Pendulum,” Proc. of 9th IFSA World Congress, pp. 2078-2083, 2001.
  [5] V. N. Vichugov, G. P. Tsapko, and S. G. Tsapko, “Application of Reinforcement Learning in Control System Development,” Proc. of the 9th Russian-Korean Int. Symposium on Science and Technology, pp. 732-733, 2005.
  [6] N. Kazuhiro, T. Tsubone, and Y. Wada, “Possibility of reinforcement learning using event-related potential toward an adaptive BCI,” IEEE Conf., pp. 1720-1725, 2009.
  [7] M. Riedmiller, “Neural Reinforcement Learning to swing-up and balance a real pole,” IEEE Conf., pp. 3191-3196, 2005.
  [8] S. Nakamura and S. Hashimoto, “Hybrid Learning Strategy to solve Pendulum Swing-Up Problem for Real Hardware,” IEEE Int. Conf. on Robotics and Biomimetics, pp. 1972-1977, 2007.

