
JACIII Vol.18 No.4, pp. 499-510 (2014)
doi: 10.20965/jaciii.2014.p0499

Paper:

Visual Attention Region Prediction Based on Eye Tracking Using Fuzzy Inference

Mao Wang*, Yoichiro Maeda**, and Yasutake Takahashi***

*Department of System Design Engineering, Graduate School of Engineering, University of Fukui, 3-9-1 Bunkyo, Fukui 910-8507, Japan

**Department of Robotics, Faculty of Engineering, Osaka Institute of Technology, 5-16-1 Omiya, Asahi-ku, Osaka 535-8585, Japan

***Department of Human and Artificial Intelligent Systems, Graduate School of Engineering, University of Fukui, 3-9-1 Bunkyo, Fukui 910-8507, Japan

Received: November 2, 2013
Accepted: April 12, 2014
Published: July 20, 2014
Keywords: visual attention, eye tracking, neural network, saliency map, fuzzy inference
Abstract

Visual attention region prediction has attracted the attention of intelligent systems researchers because it makes interaction between human beings and intelligent agents more natural. Such prediction can draw on multiple input modalities, such as gestures, face images, and eye gaze position; however, physically disabled persons may find some of these inputs, such as gestures, difficult to produce. In this paper, we propose a prediction system that takes estimated gaze position as input and combines it with extracted image features. Our approach is divided into two parts: user gaze estimation and visual attention region inference. In gaze estimation, a neural network serves as the decision-making unit that estimates the user's gaze position on the computer screen. In visual attention region inference, the attention region is inferred by fuzzy inference after image feature maps and a saliency map have been extracted and computed. User experiments were conducted to evaluate the prediction accuracy of the proposed method. The results indicate that the proposed method predicts the position of attention regions more accurately, with performance depending on the image.
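The core inference step described above, combining a saliency value with the estimated gaze position through fuzzy rules, can be sketched as follows. This is a minimal illustrative Mamdani-style sketch: the paper's actual membership functions, rule base, and defuzzification method are not given in the abstract, so the triangular memberships, the four rules, and the weighted-average defuzzification below are all assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership function: rises from a to peak b, falls to c.

    Hypothetical shape -- the paper's actual membership functions are
    not specified in the abstract.
    """
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)


def attention_score(saliency, gaze_dist):
    """Fuzzy-combine a region's saliency (0..1) and its normalized
    distance from the estimated gaze point (0..1) into an attention
    score in [0, 1].

    Illustrative rule base (not the paper's actual rules):
      R1: IF saliency HIGH and gaze NEAR -> attention HIGH (1.0)
      R2: IF saliency HIGH and gaze FAR  -> attention MED  (0.5)
      R3: IF saliency LOW  and gaze NEAR -> attention MED  (0.5)
      R4: IF saliency LOW  and gaze FAR  -> attention LOW  (0.0)
    """
    sal_high = tri(saliency, 0.3, 1.0, 1.7)   # degree "saliency is HIGH"
    sal_low = tri(saliency, -0.7, 0.0, 0.7)   # degree "saliency is LOW"
    near = tri(gaze_dist, -0.7, 0.0, 0.7)     # degree "gaze is NEAR"
    far = tri(gaze_dist, 0.3, 1.0, 1.7)       # degree "gaze is FAR"

    # Firing strength of each rule (min as fuzzy AND), paired with the
    # rule's crisp output; defuzzify by weighted average.
    rules = [
        (min(sal_high, near), 1.0),
        (min(sal_high, far), 0.5),
        (min(sal_low, near), 0.5),
        (min(sal_low, far), 0.0),
    ]
    total = sum(strength for strength, _ in rules)
    return sum(s * out for s, out in rules) / total if total else 0.0
```

A salient region under the gaze point scores highest (`attention_score(1.0, 0.0)` gives 1.0), while a non-salient region far from the gaze scores lowest; the region with the maximum score would be predicted as the attention region.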

