
J. Robot. Mechatron. Vol.24 No.1, pp. 191-204 (2012)
doi: 10.20965/jrm.2012.p0191

Paper:

Calculation of 6-DOF Pose of Arbitrary Inclined Nuts for a Grasping Task by Dual-Arm Robot

Ruhizan Liza Ahmad Shauri and Kenzo Nonami

Department of Mechanical Engineering, Division of Artificial Systems Science, Graduate School of Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba 263-8522, Japan

Received: June 21, 2011
Accepted: September 27, 2011
Published: February 20, 2012

Keywords: small and indistinguishable colored object, 6-DOF pose estimation, visual servoing, seven-link robot arm, dynamic system
Abstract
The capability to manipulate small objects is one of the important requirements for producing assembly work robots. Moreover, a robot that exhibits humanlike skills could be used to reduce the high labor cost of complex tasks. We therefore propose a seven-link dual-arm robot with three-fingered hands for cooperative tasks that manipulate small parts, such as nuts and bolts, in an unstructured environment. As an initial experiment, we need to obtain the six-degree-of-freedom (6-DOF) pose of a hexagonal M10 nut (diameter, 19.6 mm), which is small and has an indistinguishable color. These constraints make it difficult to recognize such a target with currently available methods, where high-order pose data are necessary for robot operation. Hence, we propose a technique, which we call Confirm-Estimate-Rotate (CER), that integrates the image and robot algorithms in consecutive iteration loops via a visual servoing structure. Real-time experimental results demonstrate that our method can safely change the posture of the seven-link robot arm to match that of a target in an inclined position. Furthermore, statistical grasping results show moderate performance for nuts in arbitrary poses. This indicates that, in future work, the method could be applied to the problem of aligning nuts and bolts in the screwing task previously performed by the dual-arm robot.
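The CER technique described above interleaves target detection ("confirm"), 6-DOF pose estimation ("estimate"), and incremental arm motion ("rotate") inside a visual-servoing iteration loop. The following Python sketch illustrates only that closed-loop structure; the pose representation, observation noise, gain, and convergence threshold are illustrative assumptions and do not reproduce the authors' implementation.

"""Toy sketch of a Confirm-Estimate-Rotate (CER) style visual-servoing loop.

NOT the paper's implementation: the target pose, noise model, gain, and
convergence threshold below are invented for illustration only.
"""

import numpy as np

rng = np.random.default_rng(0)

# 6-DOF pose as [x, y, z, roll, pitch, yaw] (m, rad); arbitrary example values.
target_pose = np.array([0.30, 0.05, 0.10, 0.0, 0.35, 0.0])
arm_pose = np.zeros(6)

GAIN = 0.5          # fraction of the estimated error corrected per iteration
THRESHOLD = 1e-3    # convergence threshold on the pose-error norm
MAX_ITERS = 50

for it in range(MAX_ITERS):
    # Confirm: the vision system re-detects the nut on every cycle
    # (modeled here as a noisy observation of the true target pose).
    observed_target = target_pose + rng.normal(0.0, 1e-4, size=6)

    # Estimate: 6-DOF error between the detected nut and the current hand pose.
    error = observed_target - arm_pose
    if np.linalg.norm(error) < THRESHOLD:
        print(f"converged after {it} iterations")
        break

    # Rotate (and translate): correct only a fraction of the error, so the
    # arm posture changes gradually over consecutive iteration loops.
    arm_pose += GAIN * error

print("final arm pose:", np.round(arm_pose, 4))

The fractional gain is what lets the arm posture change gradually and safely over consecutive iterations, mirroring the loop structure the abstract describes.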
Cite this article as:
R. Shauri and K. Nonami, “Calculation of 6-DOF Pose of Arbitrary Inclined Nuts for a Grasping Task by Dual-Arm Robot,” J. Robot. Mechatron., Vol.24 No.1, pp. 191-204, 2012.
