JACIII Vol.15 No.7 pp. 878-887
doi: 10.20965/jaciii.2011.p0878


Experimental Evaluations of Approaching Hand/Eye-Vergence Visual Servoing

Fujia Yu*, Wei Song**, Mamoru Minami*, Akira Yanou*,
and Mingcong Deng*

*Graduate School of Natural Science and Technology, Okayama University, 3-1-1 Tsushimanaka, Kita-ku, Okayama 700-8530, Japan

**Shanghai University, 99 Shangda Road, BaoShan District, Shanghai 200444, China

March 7, 2011
May 9, 2011
September 20, 2011
Keywords: approach, visual servoing, 6-DoF, eye-vergence

We focus on controlling a robot’s end-effector to track a moving object while simultaneously approaching it with the desired relative pose for grasping – a process we call Approaching Visual Servoing (AVS). AVS with binocular cameras inherently requires eye-vergence control to keep the target near the center of each camera image, because the approaching motion narrows the cameras’ field of view and can even cause them to lose sight of the object. Experiments using our proposed hand and eye-vergence dual control performed full 6-degree-of-freedom AVS on a moving object with a 7-link manipulator carrying a binocular camera, confirming the feasibility of hand and eye-vergence control.
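To illustrate why approaching motion demands eye-vergence, the required camera gaze angles can be sketched geometrically: as the target's depth shrinks, the two cameras must rotate inward by increasingly different amounts to keep it centered. The sketch below is a minimal geometric illustration under assumed conventions (head frame with x right, y up, z forward; symmetric camera baseline); the function name, baseline value, and frame layout are illustrative assumptions, not the paper's controller.

```python
import math

def vergence_angles(obj, baseline=0.12):
    """Pan/tilt angles (rad) that point two cameras at a 3-D point.

    `obj` = (x, y, z) in the camera-head frame (x right, y up, z forward);
    the cameras sit at x = -baseline/2 (left) and x = +baseline/2 (right).
    Names and values are illustrative, not taken from the paper.
    """
    x, y, z = obj
    tilt = math.atan2(y, z)                        # shared tilt toward the target
    pan_left = math.atan2(x + baseline / 2.0, z)   # left camera's pan to the target
    pan_right = math.atan2(x - baseline / 2.0, z)  # right camera's pan to the target
    return tilt, pan_left, pan_right
```

For a target straight ahead, the two pan angles are equal and opposite, and their difference (the vergence angle) grows as the depth z decreases – which is why an approaching hand motion forces the cameras to re-verge continuously.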

Cite this article as:
Fujia Yu, Wei Song, Mamoru Minami, Akira Yanou, and Mingcong Deng, “Experimental Evaluations of Approaching Hand/Eye-Vergence Visual Servoing,” J. Adv. Comput. Intell. Intell. Inform., Vol.15, No.7, pp. 878-887, 2011.
