Paper:
Substitute Target Learning Based Control System for Control Knowledge Acquisition Within Constrained Environment
Syafiq Fauzi Kamarulzaman, Takeshi Shibuya, and Seiji Yasunobu
Department of Intelligent Interaction Technologies, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8573, Japan
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.