
J. Robot. Mechatron. Vol.34 No.5, pp. 965-974 (2022)
doi: 10.20965/jrm.2022.p0965

Paper:

Robotic Pouring Based on Real-Time Observation and Visual Feedback by a High-Speed Vision System

Hairui Zhu* and Yuji Yamakawa**

*Department of Mechanical Engineering, The University of Tokyo
4-6-1 Komaba, Meguro-ku, Tokyo 153-8505, Japan

**Interfaculty Initiative in Information Studies, The University of Tokyo
4-6-1 Komaba, Meguro-ku, Tokyo 153-8505, Japan

Received: March 28, 2022
Accepted: July 11, 2022
Published: October 20, 2022
Keywords: robot pouring, high-speed vision, robot control
Abstract

Making robots capable of pouring can be useful in both service and industrial applications. Considering the importance of controlling liquid vibration when mixing chemical reagents and in other industrial applications, in this study we investigated robotic pouring with the aim of controlling liquid vibration, more specifically, the beer-foam ratio during beer pouring. We propose a vision-based measurement method that measures the liquid volume in real time with an error of less than 5%. Combined with a proposed robot pouring controller, this method forms a robot pouring system that controls the beer-foam volume ratio during pouring with an error of less than 5%. The flexibility of the developed system was also demonstrated through experiments with different types of containers and beers.
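The abstract gives no implementation details, so the sketch below is an illustration only of how such a measure-then-control loop could be structured in Python with OpenCV: color-threshold the beer and foam regions inside a fixed glass region of interest, convert pixel counts to volumes with a calibration factor, and adjust the pouring tilt rate with a simple proportional law on the foam-ratio error. The ROI, HSV thresholds, ML_PER_PIXEL, NOMINAL_RATE, and KP values are all hypothetical placeholders, not taken from the paper.

import cv2

# --- hypothetical calibration constants (not from the paper) ---
ROI = (200, 100, 160, 300)      # (x, y, w, h) of the glass in the image
ML_PER_PIXEL = 0.02             # ml of liquid per segmented pixel (calibrated)
TARGET_FOAM_RATIO = 0.3         # desired foam / (beer + foam) volume ratio
NOMINAL_RATE = 2.0              # baseline pouring tilt rate [deg/s]
KP = 5.0                        # proportional gain [deg/s per unit ratio error]

def measure_volumes(frame_bgr):
    """Estimate beer and foam volumes [ml] from a single camera frame."""
    x, y, w, h = ROI
    hsv = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    # Foam: bright, low-saturation pixels; beer: darker amber pixels.
    foam_mask = cv2.inRange(hsv, (0, 0, 180), (180, 60, 255))
    beer_mask = cv2.inRange(hsv, (10, 80, 40), (35, 255, 200))
    return (cv2.countNonZero(beer_mask) * ML_PER_PIXEL,
            cv2.countNonZero(foam_mask) * ML_PER_PIXEL)

def tilt_rate_command(beer_ml, foam_ml):
    """P-control: pour faster when foam is below target, slower when above
    (a faster pour entrains more gas and raises the foam fraction)."""
    total = beer_ml + foam_ml
    if total < 1.0:                       # almost nothing poured yet
        return NOMINAL_RATE
    error = TARGET_FOAM_RATIO - foam_ml / total
    return NOMINAL_RATE + KP * error

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)             # stand-in for the high-speed camera
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        beer, foam = measure_volumes(frame)
        cmd = tilt_rate_command(beer, foam)
        # A real system would send cmd to the robot wrist joint instead.
        print(f"beer={beer:5.0f} ml  foam={foam:5.0f} ml  tilt={cmd:+.2f} deg/s")
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()

In the actual system described by the paper, a high-speed camera and the robot-arm interface would replace the webcam capture and the printed command, allowing the feedback loop to run at a rate fast enough to track the rapidly changing foam surface.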


Cite this article as:
H. Zhu and Y. Yamakawa, “Robotic Pouring Based on Real-Time Observation and Visual Feedback by a High-Speed Vision System,” J. Robot. Mechatron., Vol.34 No.5, pp. 965-974, 2022.
