
JACIII Vol.16 No.7 pp. 888-893 (2012)
doi: 10.20965/jaciii.2012.p0888

Paper:

Autonomous Vehicle Path Tracking Based on Natural Gradient Methods

Ki-Young Kwon*, Keun-Woo Jung**, Dong-Su Yang*,
and Jooyoung Park*

*Department of Control and Instrumentation Engineering, Korea University, Sejong-ro 2511, Sejong 339-700, Korea

**LG Electronics, Gasan-dong, Geumcheon-gu, Seoul 153-802, Korea

Received: July 1, 2012
Accepted: October 25, 2012
Published: November 20, 2012

Keywords: natural gradient, actor-critic, evolution strategy, autonomous vehicles, path-tracking
Abstract
Recently, reinforcement learning and evolution strategies have become major tools in the field of machine learning and have shown excellent performance on various engineering problems. In particular, the Natural Actor-Critic (NAC) approach and Natural Evolution Strategies (NES) have attracted considerable interest in the area of natural-gradient-based machine learning methods, with many successful applications. In this paper, we apply the NAC and the NES to path-tracking control problems for autonomous vehicles. Simulation results show that these methods can yield better performance than conventional PID controllers.
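As a rough illustration of the natural-gradient machinery behind NES (see [2, 3]), the following Python sketch applies a separable-NES update to tune a two-gain steering law on a toy kinematic bicycle model tracking a straight path. This is a minimal sketch, not the simulation setup of the paper: the plant model, cost function, and hyperparameters are illustrative assumptions.

import numpy as np

# Toy plant (hypothetical): kinematic bicycle model tracking the x-axis.
def rollout(theta, T=200, dt=0.05, v=5.0, L=2.5):
    """Simulate straight-line tracking; return fitness (negative accumulated cost)."""
    y, psi = 1.0, 0.0  # initial lateral offset [m] and heading error [rad]
    cost = 0.0
    for _ in range(T):
        # Linear state-feedback steering law with saturation (illustrative).
        delta = np.clip(-(theta[0] * y + theta[1] * psi), -0.5, 0.5)
        y += v * np.sin(psi) * dt
        psi += v / L * np.tan(delta) * dt
        cost += y**2 + 0.1 * delta**2
    return -cost

# NES on a separable Gaussian search distribution N(mu, diag(sigma^2)).
def nes(n_iters=100, pop=20, lr_mu=0.1, lr_sigma=0.05, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(2), np.ones(2)
    for _ in range(n_iters):
        eps = rng.standard_normal((pop, 2))  # standardized perturbations
        fits = np.array([rollout(mu + sigma * e) for e in eps])
        # Simple fitness standardization (rank-based shaping is also common).
        z = (fits - fits.mean()) / (fits.std() + 1e-8)
        # Natural-gradient updates for the separable Gaussian, as in SNES:
        mu += lr_mu * sigma * (z @ eps) / pop
        sigma *= np.exp(lr_sigma * (z @ (eps**2 - 1.0)) / (2 * pop))
    return mu

if __name__ == "__main__":
    theta = nes()
    print("learned gains:", theta, "fitness:", rollout(theta))

Starting from zero gains, the search distribution drifts toward positive feedback gains that steer the vehicle back onto the reference line; the exponential update keeps each sigma component positive without any explicit constraint.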
Cite this article as:
K. Kwon, K. Jung, D. Yang, and J. Park, “Autonomous Vehicle Path Tracking Based on Natural Gradient Methods,” J. Adv. Comput. Intell. Intell. Inform., Vol.16 No.7, pp. 888-893, 2012.
References
[1] R. S. Sutton and A. G. Barto, “Reinforcement Learning: An Introduction,” MIT Press, Cambridge, MA, 1998.
[2] D. Wierstra, T. Schaul, T. Glasmachers, Y. Sun, and J. Schmidhuber, “Natural evolution strategies,” arXiv:1106.4487, 2011.
[3] Y. Sun, D. Wierstra, T. Schaul, and J. Schmidhuber, “Stochastic search using the natural gradient,” Proc. of ICML’09, pp. 1161-1168, 2009.
[4] J. Peters and S. Schaal, “Natural actor-critic,” Neurocomputing, Vol.71, pp. 1180-1190, 2008.
[5] D. Min, K. Jung, K. Kwon, and J. Park, “Mobile robot control based on a recent reinforcement learning method,” Proc. of KIIS Spring Conf. 2011, Vol.21, No.1, pp. 67-70, 2011.
[6] J. Park, J. Kim, and D. Kang, “An RLS-based natural actor-critic algorithm for locomotion of a two-linked robot arm,” Lecture Notes in Artificial Intelligence, Vol.3801, pp. 65-72, December 2005.
[7] B. Kim, J. Park, S. Park, and S. Kang, “Impedance learning for robotic contact tasks using natural actor-critic algorithm,” IEEE Trans. on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol.40, No.2, pp. 433-443, April 2010.
[8] M. Riedmiller, J. Peters, and S. Schaal, “Evaluation of policy gradient methods and variants on the cart-pole benchmark,” Proc. of 2007 IEEE Int. Symposium on Approximate Dynamic Programming and Reinforcement Learning, pp. 254-261, 2007.
[9] X. Xu, H. Zhang, B. Dai, and H. He, “Self-learning path-tracking control of autonomous vehicles using kernel-based approximate dynamic programming,” Proc. of Int. Joint Conf. on Neural Networks 2008, pp. 2182-2189, 2008.
[10] J. Guldner, H. Tan, and S. Patwardhan, “On fundamental issues of vehicle steering control for highway automation,” Technical Report, California PATH Working Paper UCB-ITS-PWP-97-11, University of California, Berkeley, 1997.
[11] G. Lu, J. Huang, and M. Tomizuka, “Vehicle lateral control under fault in front and/or rear sensor,” California PATH Research Report UCB-ITS-PRR-2003-26, University of California, Berkeley, 2003.
