Image-based Visual Servoing for Optimal Grasping
Hideo Fujimoto*, Liu-Cun Zhu* and Karim Abdel-Malek**
*Department of Systems Engineering, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555, Japan
**Department of Mechanical Engineering, The University of Iowa, Iowa City, IA 52242, U.S.A.
One of the most common tasks in robotics is grasping. Although the formulation of optimal grasping has been addressed using a variety of approaches, only a few grasping systems can operate in uncertain dynamic environments. In this paper, we present an image-based visual servoing method and system for optimal object grasping by introducing the method of visual vectors. A CCD camera mounted on a robot end-effector forms the visually guided servo control system, and the control scheme lends itself to task-level specification of manipulation goals. The proposed approach integrates vision, grasp planning, and vision-guided control to accomplish the optimal grasping task. The grasping task is to control the robot so that the vector of the end-effector's landmark (e.g., the finger vector) and a target object's grasp vector coincide. These vectors enable stable grasping of an object presented in an unstructured manner. Visual vectors in the image frame are obtained by analyzing the object's image and its projection. Our objective in implementing vector processing is to estimate the vector error between the finger and grasp vectors and to control the robot to eliminate kinematic errors. The proposed model is illustrated through examples, and its effectiveness is validated using computer simulation.
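The core control idea in the abstract, driving the finger vector to coincide with the grasp vector by servoing on their error, can be sketched as a simple proportional update. This is an illustrative sketch, not the paper's implementation: the function names (`vector_error`, `servo_step`) and the gain value are assumptions introduced here.

```python
import numpy as np

def vector_error(finger_vec, grasp_vec):
    """Error between the end-effector (finger) vector and the object's
    grasp vector, both expressed in the image frame.

    Illustrative only; the paper's actual error model may differ."""
    f = np.asarray(finger_vec, dtype=float)
    g = np.asarray(grasp_vec, dtype=float)
    # Normalize so the error reflects direction, not feature scale.
    f = f / np.linalg.norm(f)
    g = g / np.linalg.norm(g)
    return g - f

def servo_step(finger_vec, grasp_vec, gain=0.5):
    """One proportional visual-servoing update: move the finger vector
    a fraction of the way toward the grasp vector (hypothetical gain)."""
    e = vector_error(finger_vec, grasp_vec)
    f = np.asarray(finger_vec, dtype=float)
    f = f / np.linalg.norm(f)
    return f + gain * e
```

Iterating `servo_step` shrinks the angle between the two vectors at each step, so the vector error is driven toward zero, which is the stated goal of the servo loop.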
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.
Copyright © 2001 by Fuji Technology Press Ltd. and Japan Society of Mechanical Engineers. All rights reserved.