
JACIII Vol.15 No.8 pp. 1186-1196
doi: 10.20965/jaciii.2011.p1186
(2011)

Paper:

Improving Recovery Capability of Multiple Robots in Different Scale Structure Assembly

Masayuki Otani, Kiyohiko Hattori, Hiroyuki Sato,
and Keiki Takadama

Department of Informatics, The University of Electro-Communications, 1-5-1 Chofugaoka, Chofu, Tokyo 182-8585, Japan

Received:
May 19, 2011
Accepted:
August 13, 2011
Published:
October 20, 2011
Keywords:
recovery capability, multiple robots, large-scale structure assembly, deadlock avoidance, distributed control
Abstract
This paper focuses on the distributed control of multiple robots that may break down, and investigates recovery capability, i.e., the extent to which the robots can still complete an assembly when some of them are broken, through the assembly of solar-powered satellites of different scales. For this purpose, we conduct simulations at different robot failure rates using our proposed deadlock avoidance method. Through intensive simulation, we show that (1) the proposed method maintains high recovery capability without any information sharing among the robots, and (2) it is robust against differences in structure scale.
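The abstract's key idea, deadlock avoidance without information sharing under random robot failures, can be illustrated with a toy simulation. The sketch below is not the authors' algorithm: the task model, the random back-off rule, and all names and parameters (run_assembly, recovery_capability, failure_rate, num_tasks) are illustrative assumptions; it only shows how a recovery-capability metric could be estimated at different failure rates.

```python
import random

def run_assembly(num_robots=10, num_tasks=100, failure_rate=0.01,
                 max_steps=5000, seed=None):
    """Toy model: robots repeatedly claim and finish assembly tasks.

    Each step, a robot breaks with probability failure_rate and drops
    its unfinished task. Idle robots pick a target task independently;
    when two picks collide, each robot backs off for a random interval
    instead of negotiating, i.e., conflict avoidance with no
    information sharing. Returns True if all tasks finish in time.
    """
    rng = random.Random(seed)
    alive = set(range(num_robots))
    backoff = {i: 0 for i in alive}
    holding = {}                       # robot index -> task in progress
    remaining = set(range(num_tasks))  # tasks not yet assembled

    for _ in range(max_steps):
        if not remaining:
            return True
        # Random failures: a broken robot's task returns to the pool.
        for i in list(alive):
            if rng.random() < failure_rate:
                alive.discard(i)
                holding.pop(i, None)
        if not alive:
            return False
        # Robots holding a task complete it this step.
        for i in list(holding):
            remaining.discard(holding.pop(i))
        # Idle robots choose targets simultaneously and independently.
        proposals = {}
        for i in alive:
            if backoff[i] > 0:
                backoff[i] -= 1
            elif remaining:
                t = rng.choice(sorted(remaining))
                proposals.setdefault(t, []).append(i)
        for t, bidders in proposals.items():
            if len(bidders) == 1:
                holding[bidders[0]] = t
            else:
                # Collision observed locally: back off, send no messages.
                for i in bidders:
                    backoff[i] = rng.randint(1, 4)
    return False

def recovery_capability(failure_rate, trials=100):
    """Fraction of trials in which the assembly still completes."""
    done = sum(run_assembly(failure_rate=failure_rate, seed=k)
               for k in range(trials))
    return done / trials

if __name__ == "__main__":
    for p in (0.0, 0.001, 0.005, 0.02):
        print(f"failure rate {p}: recovery {recovery_capability(p):.2f}")
```

Reading the output as recovery capability versus failure rate mirrors the kind of comparison the abstract describes, though the paper's actual experiments assemble satellite structures of different scales rather than an abstract task pool.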
Cite this article as:
M. Otani, K. Hattori, H. Sato, and K. Takadama, “Improving Recovery Capability of Multiple Robots in Different Scale Structure Assembly,” J. Adv. Comput. Intell. Intell. Inform., Vol.15 No.8, pp. 1186-1196, 2011.
References
[1] D. Duhaut, E. Carrillo, and S. Saint-Aime, “Avoiding Deadlock in Multi-agent Systems,” IEEE Int. Conf. on Systems, Man and Cybernetics 2007, pp. 1642-1647, 2007.
[2] Y. Arai et al., “Collision Avoidance in Multi-Robot Environment based on Local Communication,” J. of the Robotics Society of Japan, Vol.19, No.1, pp. 45-58, 2001. (in Japanese)
[3] P. Glaser, “Power from the Sun – Its Future,” Science, Vol.162, No.3856, pp. 857-861, 1968.
[4] DOE/NASA, “Reference System Report,” SPS Concept Development and Evaluation Program, DOE/ER-0023, 1978.
[5] Y. Kobayashi, T. Saito, and H. Kanai, “Overview of the USEF SSPS Activities,” JSASS Proc. of the 48th Space Science and Technology Conference, pp. 81-86, Nov. 2004.
[6] J. C. Latombe, “Robot Motion Planning,” Kluwer Academic Publishers, 1991.
[7] K. Gupta and A. P. del Pobil (Eds.), “Practical Motion Planning in Robotics: Current Approaches and Future Directions,” John Wiley & Sons, pp. 325-347, 1998.
[8] Y. Imasaki and Y. Zhang, “Efficient Route Selection Approaches in Mobile Ad Hoc Networks,” IPSJ SIG Technical Reports, Vol.2005, No.63, pp. 33-38, 2008. (in Japanese)
[9] J. Boyan and M. Littman, “Packet Routing in Dynamically Changing Networks: A Reinforcement Learning Approach,” Advances in Neural Information Processing Systems 6 (NIPS 6), pp. 671-678, 1994.
[10] D. Subramanian, P. Druschel, and J. Chen, “Ants and reinforcement learning: A case study in routing in dynamic networks,” Proc. of the Fifteenth Int. Joint Conf. on Artificial Intelligence (IJCAI-97), pp. 832-838, 1997.
[11] C. Watkins, “Learning from Delayed Rewards,” Ph.D. thesis, King’s College, 1989.
[12] S. Murata, D. Jodoi, H. Furuya, Y. Terada, and K. Takadama, “Inflatable Tensegrity Module for a Large-Scale Space Structure and its Construction Scenario,” The 56th Int. Astronautical Congress (IAC05), IAC-05-D1.1.01, 2005.
[13] Y. Yoshimura et al., “Iterative Transportation Planning of Multiple Objects by Cooperative Mobile Robots,” J. of the Robotics Society of Japan, Vol.16, No.4, pp. 499-507, 1996. (in Japanese)
[14] M. Otani and K. Takadama, “Toward Robust Deadlock Avoidance Method Among Multiple Robots: Analyzing Communication Failure Cases,” The 59th Int. Astronautical Congress (IAC2008), IAC-08-B3.6.11, 2008.
[15] T. Taniguchi, K. Ogawa, and T. Sawaragi, “Implicit Estimation of Other’s Intention Without Direct Observation of Actions in a Collaborative Task: Situation-Sensitive Reinforcement Learning,” SICE Annual Conference 2007 (SICE2007), pp. 996-1003, 2007.
