JACIII Vol.11 No.1 pp. 79-86
doi: 10.20965/jaciii.2007.p0079


Genetic Network Programming with Actor-Critic

Hiroyuki Hatakeyama*, Shingo Mabu**, Kotaro Hirasawa*,
and Jinglu Hu*

*Graduate School of Information, Production and Systems, Waseda University, 2-7 Hibikino, Wakamatsu-ku, Kitakyushu, Fukuoka 808-0135, Japan

**Advanced Research Institute for Science and Engineering, Waseda University, 2-2 Hibikino, Wakamatsu-ku, Kitakyushu, Fukuoka 808-0135, Japan

Received: January 30, 2006
Accepted: May 23, 2006
Published: January 20, 2007
Keywords: Genetic Network Programming, evolutionary computation, reinforcement learning, Khepera robot

A new graph-based evolutionary algorithm named “Genetic Network Programming” (GNP) has already been proposed. GNP represents its solutions as graph structures, which improves its expression ability and performance. In addition, GNP with Reinforcement Learning (GNP-RL) was proposed a few years ago. Since GNP-RL performs reinforcement learning during task execution in addition to evolution after task execution, it can search for solutions efficiently. In this paper, GNP with Actor-Critic (GNP-AC), a new type of GNP-RL, is proposed. GNP originally deals with discrete information, whereas GNP-AC aims to deal with continuous information. The proposed method is applied to the controller of the Khepera simulator and its performance is evaluated.
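To illustrate the actor-critic idea the abstract builds on, the following is a minimal sketch of a generic actor-critic update for a continuous action, not the authors' GNP-AC: a Gaussian “actor” outputs a continuous action (e.g. a wheel speed), a “critic” estimates state values, and the TD error drives both updates. The toy environment, state count, and learning rates are all assumptions for illustration.

```python
import random

random.seed(0)

N_STATES = 4
GAMMA, ALPHA_C, ALPHA_A = 0.9, 0.1, 0.05

V = [0.0] * N_STATES   # critic: value estimate per state
mu = [0.0] * N_STATES  # actor: mean of the Gaussian policy per state
SIGMA = 0.5            # fixed exploration noise

def step(s, a):
    """Toy environment (assumption): reward peaks when the action is near +1.0."""
    r = 1.0 - (a - 1.0) ** 2
    return (s + 1) % N_STATES, r

s = 0
for _ in range(5000):
    a = random.gauss(mu[s], SIGMA)       # sample a continuous action
    s2, r = step(s, a)
    td = r + GAMMA * V[s2] - V[s]        # TD error
    V[s] += ALPHA_C * td                 # critic update
    mu[s] += ALPHA_A * td * (a - mu[s])  # actor update (policy-gradient style)
    s = s2

print([round(m, 2) for m in mu])  # means should drift toward the reward peak at 1.0
```

Actions that yield a positive TD error pull the policy mean toward them, so each state's mean converges near the reward-maximizing action; GNP-AC applies this kind of update within a graph-structured individual rather than a state table.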

Cite this article as:
Hiroyuki Hatakeyama, Shingo Mabu, Kotaro Hirasawa, and Jinglu Hu, “Genetic Network Programming with Actor-Critic,” J. Adv. Comput. Intell. Intell. Inform., Vol.11, No.1, pp. 79-86, 2007.
