JRM Vol.31 No.4 pp. 558-565
doi: 10.20965/jrm.2019.p0558


Agreement Algorithm Based on a Trial and Error Method for the Best of Proportions Problem

Nhuhai Phung, Masao Kubo, and Hiroshi Sato

Department of Computer Science, National Defense Academy of Japan
1-10-20 Hashirimizu, Yokosuka, Kanagawa 239-8686, Japan

Received: February 22, 2019
Accepted: June 10, 2019
Published: August 20, 2019
Keywords: best-of-n problem, swarm robotics, multi-agent, best of proportions problem, agreement algorithm
The best-of-n problem (BSTn) is a collective decision-making problem in which a swarm of robots must collectively decide which of a set of n choices offers the best alternative [1]. The BSTn captures the structure and logic of the discrete consensus-achievement problems that appear in several swarm robotics scenarios. Although numerous algorithms have recently been proposed to deal with more than two choices, the number of choices they can handle is not large. The bias and raising threshold (BRT) algorithm proposed by Phung et al. [2] enables swarms to deal with a large number of choices (n≫2). However, the algorithm's performance has not yet been evaluated on any practical problem, and it needs to be tested on a problem in which a large number of choices exist. In this paper, we consider the best of proportions (BOP) problem, a version of the BSTn in which a large number of choices can be generated by adjusting the values of different proportions. Among previous studies on swarms that must solve the BOP problem, only the response threshold models for the division of labor have been investigated. In the present study, we examine a BOP scenario and apply the BRT algorithm to find the best proportion. In our previous work [3], a fixed proportion setting method was adopted. Here, we adopt a stochastic proportion setting method to verify the relationship between efficiency and the number of choices in a more general case. The results show that with a larger number of choices, the decision making becomes more efficient while maintaining high equality, a result that was not found in [3].
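The BRT algorithm itself is specified in [2]; as a rough illustration of the kind of best-of-n dynamics the abstract refers to, the following is a minimal weighted-voter-style sketch (cf. the weighted voter model of [7]), not the authors' BRT method. All function names and parameter values here are illustrative assumptions.

```python
import random

def weighted_voter_best_of_n(qualities, n_agents=100, steps=20000, seed=0):
    """Toy best-of-n consensus: each of n_agents holds one of
    len(qualities) options; at every step a random agent copies a
    random peer's opinion with probability proportional to that
    option's quality. Returns the option held by the largest
    fraction of the swarm at the end."""
    rng = random.Random(seed)
    n = len(qualities)
    opinions = [rng.randrange(n) for _ in range(n_agents)]
    q_max = max(qualities)
    for _ in range(steps):
        i = rng.randrange(n_agents)  # imitating agent
        j = rng.randrange(n_agents)  # imitated peer
        # higher-quality opinions spread faster (weighted-voter rule)
        if rng.random() < qualities[opinions[j]] / q_max:
            opinions[i] = opinions[j]
    counts = [opinions.count(k) for k in range(n)]
    return counts.index(max(counts))
```

With a strongly superior option, e.g. `weighted_voter_best_of_n([0.1, 0.1, 1.0])`, the swarm usually converges on the highest-quality choice; the point made in the abstract is that such simple rules degrade as n grows, which is what motivates the BRT algorithm.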
Dynamics of the decision making process


Cite this article as:
N. Phung, M. Kubo, and H. Sato, “Agreement Algorithm Based on a Trial and Error Method for the Best of Proportions Problem,” J. Robot. Mechatron., Vol.31 No.4, pp. 558-565, 2019.
References:
[1] G. Valentini, E. Ferrante et al., “The Best-of-n Problem in Robot Swarms: Formalization, State of the Art, and Novel Perspectives,” Frontiers in Robotics and AI, Vol.4, pp. 1-18, 2017.
[2] N. H. Phung, M. Kubo et al., “Agreement Algorithm with Trial and Error Method at Macro Level,” Artificial Life and Robotics, Vol.23, No.4, pp. 564-570, 2018.
[3] N. H. Phung, M. Kubo et al., “El Farol Bar Problem by Agreement Algorithm based on Trial and Error Behavior at the Macro Level,” Proc. of the 22nd Asia Pacific Symp. on Intelligent and Evolutionary Systems, 2018.
[4] C. A. C. Parker and H. Zhang, “Cooperative Decision-Making in Decentralized Multiple-Robot Systems: the Best-of-n Problem,” IEEE/ASME Trans. on Mechatronics, Vol.14, Issue 2, pp. 240-251, 2009.
[5] S. Iwanaga and A. Namatame, “The Complexity of Collective Decision,” Nonlinear Dynamics, Psychology, and Life Sciences, Vol.6, Issue 2, pp. 137-158, 2002.
[6] J. Wessnitzer and C. Melhuish, “Collective Decision-Making and Behaviour Transitions in Distributed Ad Hoc Wireless Networks of Mobile Robots: Target-Hunting,” W. Banzhaf, J. Ziegler, T. Christaller, P. Dittrich, and J. T. Kim (Eds.), “Advances in Artificial Life. ECAL 2003. Lecture Notes in Computer Science,” Vol.2801, pp. 893-902, Springer, 2003.
[7] G. Valentini, H. Hamann et al., “Self-Organized Collective Decision Making: The Weighted Voter Model,” Proc. of the 2014 Int. Conf. on Autonomous Agents and Multi-agent Systems, pp. 45-52, 2014.
[8] T. D. Seeley, P. K. Visscher et al., “Stop Signals Provide Cross Inhibition in Collective Decision-Making by Honeybee Swarms,” Science, Vol.335, Issue 6064, pp. 108-111, 2012.
[9] D. Pais, P. M. Hogan et al., “A Mechanism for Value-Sensitive Decision-Making,” PLoS One, doi: 10.1371/journal.pone.0073216, 2013.
[10] A. Reina, G. Valentini et al., “A Design Pattern for Decentralised Decision-Making,” PLoS One, doi: 10.1371/journal.pone.0140950, 2015.
[11] A. Reina, J. A. R. Marshall et al., “Model of the Best-of-N Nest-site Selection Process in Honeybees,” Phys. Rev. E, Vol.95, doi: 10.1103/PhysRevE.95.052411, 2017.
[12] M. Kubo, N. H. Phung et al., “Efficient Collective Search by Agents that Remember Failures,” J. of Robotics, Networking and Artificial Life, Vol.5, No.1, pp. 67-70, 2018.
[13] A. Scheidler, A. Brutschy et al., “The k-Unanimity Rule for Self-Organized Decision Making in Swarms of Robots,” IEEE Trans. on Cybernetics, Vol.46, Issue 4, pp. 1175-1188, 2016.
[14] Y. Yang, C. Zhou et al., “Swarm Robots Task Allocation Based on Response Threshold Model,” Proc. of the 4th Int. Conf. on Autonomous Robots and Agents, pp. 171-176, 2009.
[15] Y. Yongming, C. Xihui et al., “Swarm Robots Task Allocation Based on Local Communication,” Proc. of the 2010 Int. Conf. on Computer, Mechatronics, Control and Electronic Engineering, Vol.5, pp. 415-418, 2010.
[16] E. Bonabeau, G. Theraulaz et al., “Fixed Response Thresholds and the Regulation of Division of Labor in Insect Societies,” Bulletin of Mathematical Biology, Vol.60, No.4, pp. 753-807, 1998.
[17] W. B. Arthur, “Inductive Reasoning and Bounded Rationality,” American Economic Review: Papers and Proc., Vol.84, No.2, pp. 404-411, 1994.
[18] D. Whitehead, “The El Farol Bar Problem Revisited: Reinforcement Learning in a Potential Game,” ESE Discussion Papers 186, Edinburgh School of Economics, University of Edinburgh, 2008.
[19] T. S. Schelling, “Micromotives and Macrobehavior,” W. W. Norton & Company, 1978.
