
JACIII Vol.20 No.6 pp. 919-927
doi: 10.20965/jaciii.2016.p0919
(2016)

Paper:

Geometric Relation-Based Cognitive Sharing for Flying and Ground Mobile Robot Cooperation

Yifeng Cai and Kosuke Sekiyama

Department of Micro-Nano System Engineering, Nagoya University
Furo-cho, Chikusa-ku, Nagoya 464-8603, Japan

Received: March 27, 2016
Accepted: July 26, 2016
Published: November 20, 2016
Keywords: entropy-based representation selection, cognitive sharing, UAV and ground robot cooperation
Abstract
Cognitive sharing of objects is fundamental in a heterogeneous robot system composed of an Unmanned Aerial Vehicle (UAV) and a ground robot. Since the viewpoint of a UAV differs greatly from that of a ground robot, the two may perceive the same objects differently, which makes cognitive sharing difficult to realize. In this paper, we propose a cognitive sharing method based on Geometric Relation-based Triangle Representations. The method enables a UAV and a ground robot to identify the same object among similar objects without sharing appearance information in unstructured environments. To cope with the increasing computational cost of recognizing objects in the Region of Interest, entropy evaluation is employed to evaluate and select unique representations. We demonstrate the proposed method with robots in the real world.
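The sketch below illustrates one way the entropy-based selection of triangle representations described in the abstract could be realized; it is not the authors' code. It assumes a candidate representation is the sorted angle triple of a triangle formed by the target object and two surrounding landmarks, that uniqueness is scored by the Shannon entropy of the candidate's normalized similarity to all landmark triangles in the scene, and that a lower entropy (a more peaked match distribution) marks a more distinctive representation. The function names and the Gaussian similarity kernel are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch (assumptions noted above): entropy-based selection of a
# geometric triangle representation for an object among surrounding landmarks.
import itertools
import math

def triangle_angles(p1, p2, p3):
    """Sorted interior angles (radians) of triangle p1-p2-p3; a scale- and
    rotation-invariant descriptor of the geometric relation."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    a, b, c = dist(p2, p3), dist(p1, p3), dist(p1, p2)
    # Law of cosines, clamped to avoid domain errors from rounding.
    A = math.acos(max(-1.0, min(1.0, (b * b + c * c - a * a) / (2 * b * c))))
    B = math.acos(max(-1.0, min(1.0, (a * a + c * c - b * b) / (2 * a * c))))
    return sorted([A, B, math.pi - A - B])

def match_entropy(candidate, scene_triangles, sigma=0.1):
    """Shannon entropy of the candidate's normalized similarity to every scene
    triangle. A low value means the candidate matches one triangle far better
    than the rest, i.e. the representation is unique (assumed criterion)."""
    sims = [math.exp(-sum((x - y) ** 2 for x, y in zip(candidate, t)) / sigma ** 2)
            for t in scene_triangles]
    total = sum(sims)
    probs = [s / total for s in sims]
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_unique_representation(object_point, landmark_points):
    """Among triangles formed by the target object and two landmarks, return the
    angle triple whose match distribution over the scene has minimum entropy."""
    scene = [triangle_angles(*tri) for tri in itertools.combinations(landmark_points, 3)]
    candidates = [triangle_angles(object_point, l1, l2)
                  for l1, l2 in itertools.combinations(landmark_points, 2)]
    return min(candidates, key=lambda c: match_entropy(c, scene))

# Example: pick the most distinctive triangle for an object among four landmarks.
landmarks = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (1.0, 5.0)]
print(select_unique_representation((2.0, 1.0), landmarks))
```

Under these assumptions, the UAV and the ground robot could each run such a selection over their own detections and exchange only the chosen angle triples, which is consistent with the paper's goal of sharing geometric relations rather than appearance information.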
Cite this article as:
Y. Cai and K. Sekiyama, “Geometric Relation-Based Cognitive Sharing for Flying and Ground Mobile Robot Cooperation,” J. Adv. Comput. Intell. Intell. Inform., Vol.20 No.6, pp. 919-927, 2016.
