Paper:
Information Theoretic Approach for Measuring Interaction in Multiagent Domain
Sachiyo Arai and Yoshihisa Ishigaki
Graduate School of Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba
- [1] A. Namatame, “Emergent Collective Behaviors and Evolution in a Group of Rational Agents,” Proc. of the Australia-Japan Joint Workshop on Intelligent and Evolutionary Systems, pp. 131-140, 1997.
- [2] L. Gasser, N. Rouquette, R.W. Hill, and J. Lieb, “Representing and Using Organizational Knowledge in Distributed AI Systems,” L. Gasser and M.H. Huhns (Eds.), Distributed Artificial Intelligence, Vol.2, pp. 55-78, Morgan Kaufmann, 1989.
- [3] R.S. Sutton and A.G. Barto, “Reinforcement Learning: An Introduction,” The MIT Press, 1998.
- [4] S. Arai, K. Miyazaki, and S. Kobayashi, “Methodology in Multiagent Reinforcement Learning -Approaches by Q-learning and Profit Sharing,” J. of Japanese Society for Artificial Intelligence, Vol.13, No.4, pp. 105-114, 1998 (in Japanese).
- [5] K. Iwata, K. Ikeda, and H. Sakai, “A New Criterion Using Information Gain for Action Selection Strategy in Reinforcement Learning,” IEEE Trans. Neural Networks, Vol.15, No.4, pp. 792-799, 2004.
- [6] H.V.D. Parunak and S. Brueckner, “Entropy and Self-Organization in Multi-Agent Systems,” Proc. of the Fifth Int. Conf. on Autonomous Agents, pp. 124-130, 2001.
- [7] N. Yanagisawa, H. Kawamura, M. Yamamoto, and A. Ouchi, “Quantification of Interactive Behavior in Multiagent Systems,” IEICE Technical Report, Artificial Intelligence and Knowledge-Based Processing, pp. 71-76, 2004 (in Japanese).
- [8] C.J.C.H. Watkins and P. Dayan, “Technical Note: Q-learning,” Machine Learning, Vol.8, pp. 55-68, 1992.
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.