
JRM Vol.21 No.6 pp. 765-772 (2009)
doi: 10.20965/jrm.2009.p0765

Paper:

Positional Features and Algorithmic Predictability of Visual Regions-of-Interest in Robot Hand Movement

Toyomi Fujita* and Claudio M. Privitera**

*Department of Electronics and Intelligent Systems, Tohoku Institute of Technology, Sendai 982-8577, Japan

**School of Optometry, University of California, Berkeley, CA 94720, USA

Received: April 20, 2009
Accepted: October 26, 2009
Published: December 20, 2009
Keywords: human visual scanpath, regions-of-interest, robot gazing
Abstract
Visual functions are important for robots that engage in cooperative work with other robots. To develop an effective visual function for robots, we investigate the features of human visual scanpaths over scenes of robot hand movement. Human regions-of-interest (hROIs) are measured in psychophysical experiments and compared using a positional similarity index, Sp, based on scanpath theory. The results show consistent hROI loci, reflecting the dominant top-down active looking elicited by such scenes. This paper also discusses how well bottom-up image processing algorithms (IPAs) can predict hROIs. We compare algorithmic regions-of-interest (aROIs) generated by IPAs with the hROIs obtained from robot hand movement images. The results suggest that bottom-up IPAs whose support size is approximately equal to the size of the fovea predict hROIs well.
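The Sp index mentioned in the abstract compares where two observers, or an observer and an algorithm, place their regions-of-interest. As a minimal sketch of such a positional comparison, the Python fragment below pairs ROI centers from two sets when they fall within a fovea-sized tolerance. The greedy matching rule, the ~26-pixel tolerance, and the normalization are illustrative assumptions, not the paper's exact definition of Sp.

    # A minimal sketch of a positional similarity index in the spirit of Sp.
    # The matching rule, tolerance, and normalization are assumptions for
    # illustration, not the definition used in the paper.
    from math import hypot

    def positional_similarity(rois_a, rois_b, tolerance=26.0):
        """Greedily pair ROI centers from two sets that lie within
        `tolerance` pixels of each other (one-to-one) and return the
        fraction of paired ROIs. A tolerance on the order of the foveal
        projection (~26 px here, an assumed value) treats two loci that
        fall within one fovea as the same region."""
        unmatched_b = list(rois_b)
        matches = 0
        for ax, ay in rois_a:
            # Find the nearest still-unmatched ROI in the other set.
            best = min(unmatched_b,
                       key=lambda b: hypot(ax - b[0], ay - b[1]),
                       default=None)
            if best is not None and hypot(ax - best[0], ay - best[1]) <= tolerance:
                matches += 1
                unmatched_b.remove(best)
        # Normalize by the larger set so unmatched extra ROIs lower the score.
        return matches / max(len(rois_a), len(rois_b), 1)

    # Example: hROI centers from a subject vs. aROI centers from an IPA
    hrois = [(120, 80), (300, 210), (480, 95)]
    arois = [(125, 84), (310, 200), (50, 400)]
    print(positional_similarity(hrois, arois))  # -> 0.666...

In the paper's setting, rois_a would hold hROI loci clustered from measured fixations and rois_b the aROI loci produced by a bottom-up IPA.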
Cite this article as:
T. Fujita and C. Privitera, “Positional Features and Algorithmic Predictability of Visual Regions-of-Interest in Robot Hand Movement,” J. Robot. Mechatron., Vol.21 No.6, pp. 765-772, 2009.
