JACIII Vol.13 No.6, pp. 649-657 (2009)
doi: 10.20965/jaciii.2009.p0649

Paper:

Information Theoretic Approach for Measuring Interaction in Multiagent Domain

Sachiyo Arai and Yoshihisa Ishigaki

Graduate School of Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba, Japan

Received: April 15, 2009
Accepted: August 4, 2009
Published: November 20, 2009
Keywords: reinforcement learning, cooperative behavior, multiagent system
Abstract
Although a large number of reinforcement learning algorithms have been proposed for generating cooperative behaviors, the question of how to evaluate mutual benefit or loss among agents remains open. To the best of our knowledge, an emergent behavior is regarded as cooperative whenever the embedded agents finally achieve their global goal, regardless of whether mutual interference has had any effect during each agent's learning process. Consequently, harmful interactions cannot be detected before each agent's policy has fully converged. In this paper, we propose a measure based on information theory for evaluating the degree of interaction during the learning process from the viewpoint of information sharing. To examine the adverse effects of concurrent learning, we apply the proposed measure to a situation in which conflicts exist among the agents, and we demonstrate its usefulness.
Cite this article as:
S. Arai and Y. Ishigaki, “Information Theoretic Approach for Measuring Interaction in Multiagent Domain,” J. Adv. Comput. Intell. Intell. Inform., Vol.13 No.6, pp. 649-657, 2009.
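
The measure itself is defined in the full paper; as a rough, hypothetical illustration of the kind of information-theoretic quantity the abstract describes, the Python sketch below estimates the empirical mutual information between two agents' action sequences collected during learning. The function name estimate_interaction, the use of actions as the shared variable, and the toy data are all assumptions made for illustration, not the authors' published formulation.

# Hedged sketch: empirical mutual information I(A1; A2) between two agents'
# action choices over a window of learning steps. Higher values indicate
# stronger statistical coupling (i.e., more interaction) between the agents.
# Assumption: this mirrors the abstract's "information sharing" viewpoint;
# the paper's actual estimator may differ.
from collections import Counter
from math import log2

def estimate_interaction(actions_1, actions_2):
    """Mutual information (in bits) between two aligned action sequences."""
    assert len(actions_1) == len(actions_2)
    n = len(actions_1)
    joint = Counter(zip(actions_1, actions_2))   # joint frequencies
    marg_1 = Counter(actions_1)                  # marginal of agent 1
    marg_2 = Counter(actions_2)                  # marginal of agent 2
    mi = 0.0
    for (a1, a2), count in joint.items():
        p_joint = count / n
        mi += p_joint * log2(p_joint / ((marg_1[a1] / n) * (marg_2[a2] / n)))
    return mi  # 0 bits: statistically independent; larger: stronger coupling

# Toy usage: perfectly coupled choices vs. nearly independent ones.
print(estimate_interaction([0, 1, 0, 1, 0, 1], [0, 1, 0, 1, 0, 1]))  # 1.0 bit
print(estimate_interaction([0, 1, 0, 1, 0, 1], [0, 0, 1, 1, 0, 1]))  # ~0.08 bit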
