JACIII Vol.19 No.6 pp. 867-879
doi: 10.20965/jaciii.2015.p0867


Three Layers Framework Concept for Adjustable Artificial Intelligence

Benoît Vallade, Alexandre David, and Tomoharu Nakashima

Department of Computer Science and Intelligent Systems, Osaka Prefecture University
4F B4 Bldg., 1-1 Gakuen-cho, Nakaku, Sakai, Osaka 599-8531, Japan

Received: April 20, 2015
Accepted: October 7, 2015
Online released: November 20, 2015
Published: November 20, 2015
Keywords: artificial intelligence, video games, optimization search algorithm, neural networks, Geometry Friends competition

This paper proposes the concept of a layered framework for adjustable artificial intelligence. Artificial intelligence is used in many areas of computer science for decision-making tasks. Traditionally, an artificial intelligence is developed for a specific purpose within a particular piece of software. This paper, however, is the first step of a research project whose final objective is to design an artificial intelligence that can be adjusted to any type of problem without modifying its source code. The present work focuses on the framework of such an artificial intelligence and is conducted in the context of video games. The framework, composed of three layers, would be reusable across all types of games.
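The abstract only names the goal (one game-independent AI behind game-specific boundaries) without detailing the three layers, which are described in the paper body. As a rough illustration of the idea, the sketch below separates a game-specific adapter layer from a game-independent decision layer and a top agent layer that wires them together; all class and method names (`GameAdapter`, `DecisionLayer`, `Agent`, `LineGame`) are illustrative assumptions, not the authors' actual design.

```python
from abc import ABC, abstractmethod

class GameAdapter(ABC):
    """Game-specific layer: translates the game state into a generic
    observation and applies a generic action back to the game."""
    @abstractmethod
    def observe(self): ...
    @abstractmethod
    def execute(self, action): ...

class DecisionLayer:
    """Game-independent layer: chooses an action from a generic
    observation via a pluggable policy (a search algorithm, a
    neural network, etc.)."""
    def __init__(self, policy):
        self.policy = policy
    def decide(self, observation):
        return self.policy(observation)

class Agent:
    """Top layer: connects an adapter and a decision layer, so only
    the adapter changes when the game changes."""
    def __init__(self, adapter, decision):
        self.adapter = adapter
        self.decision = decision
    def step(self):
        observation = self.adapter.observe()
        action = self.decision.decide(observation)
        return self.adapter.execute(action)

# Toy adapter for demonstration: a one-dimensional "reach position 5" game.
class LineGame(GameAdapter):
    def __init__(self):
        self.position = 0
    def observe(self):
        return self.position
    def execute(self, action):
        self.position += action
        return self.position

agent = Agent(LineGame(), DecisionLayer(policy=lambda obs: 1 if obs < 5 else 0))
for _ in range(7):
    agent.step()
print(agent.adapter.observe())  # → 5
```

Swapping in another game would mean writing only a new `GameAdapter` subclass; the decision layer's source code stays untouched, which is the reusability property the abstract claims.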

