Paper:
Visual Attention Region Prediction Based on Eye Tracking Using Fuzzy Inference
Mao Wang*, Yoichiro Maeda**, and Yasutake Takahashi***
*Department of System Design Engineering, Graduate School of Engineering, University of Fukui, 3-9-1 Bunkyo, Fukui 910-8507, Japan
**Department of Robotics, Faculty of Engineering, Osaka Institute of Technology, 5-16-1 Omiya, Asahi-ku, Osaka 535-8585, Japan
***Department of Human and Artificial Intelligent Systems, Graduate School of Engineering, University of Fukui, 3-9-1 Bunkyo, Fukui 910-8507, Japan
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.