
JRM Vol.31 No.1 pp. 57-62 (2019)
doi: 10.20965/jrm.2019.p0057

Review:

Recent Trends in the Research of Industrial Robots and Future Outlook

Yukiyasu Domae

National Institute of Advanced Industrial Science and Technology (AIST)
Central 1, 1-1-1 Umezono, Tsukuba, Ibaraki 305-8560, Japan

Received: November 19, 2018
Accepted: December 4, 2018
Published: February 20, 2019
Keywords: industrial robot, factory automation, warehouse automation, manipulation, picking
Abstract

In response to needs that have diversified greatly since the 2000s, industrial robots with advanced intelligence have developed dramatically. This paper reviews studies of these technologies and trends in their application. In particular, it describes factory automation and warehouse automation, for which practical examples are especially plentiful, as well as pattern recognition, a key technology underlying these advances. Recent trends in deep learning and the future prospects of industrial robots with respect to sensing and planning are also examined.
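As a concrete illustration of the pattern-recognition side of this survey, the graspability evaluation of Domae et al. [31] scores candidate pick points directly on a single depth map by convolving gripper-shaped masks with the observed scene. The Python sketch below conveys only the general idea; the binarization threshold, disk-shaped masks, and scoring weights are illustrative assumptions, not the exact formulation of [31].

```python
# Minimal sketch (not the exact method of [31]): convolve a "contact" mask and
# a "collision" mask over a binarized depth map, preferring points with high
# contact coverage and little surrounding clutter.
import numpy as np
from scipy.signal import convolve2d

def graspability_map(depth, grasp_depth=0.02, contact_radius=5, collision_radius=12):
    """Score every pixel of an HxW depth map (meters, smaller = closer)
    as a candidate pick point. All parameter values here are illustrative."""
    top = depth.min()
    # Binarize: treat surfaces within grasp_depth of the topmost point as graspable.
    graspable = (depth <= top + grasp_depth).astype(float)

    def disk(r):
        y, x = np.ogrid[-r:r + 1, -r:r + 1]
        return (x * x + y * y <= r * r).astype(float)

    contact = disk(contact_radius)      # region the gripper must touch
    collision = disk(collision_radius)  # region that should stay clear of clutter

    contact_score = convolve2d(graspable, contact, mode="same")
    clutter_score = convolve2d(graspable, collision, mode="same")

    # High contact coverage minus a penalty for surrounding material.
    return contact_score - 0.5 * clutter_score

# Usage: pick the best-scoring pixel (random data stands in for a sensor frame).
depth = np.random.uniform(0.4, 0.6, size=(120, 160))
score = graspability_map(depth)
v, u = np.unravel_index(np.argmax(score), score.shape)
print(f"grasp candidate at (u={u}, v={v}), score={score[v, u]:.2f}")
```

In [31], the contact and collision masks are derived from the actual gripper geometry and the depth map is binarized at the intended grasp depth; the sketch substitutes simple disks and a fixed weighting for brevity.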

Robot systems in the Amazon Picking Challenge 2015

Cite this article as:
Y. Domae, “Recent Trends in the Research of Industrial Robots and Future Outlook,” J. Robot. Mechatron., Vol.31 No.1, pp. 57-62, 2019.
References
[1] L. Westerlund, "The Extended Arm of Man – A History of the Industrial Robot," Informationsförlaget, 2000.
[2] H. Inoue, T. Kanade et al., "Robotics Creation," Iwanami Shoten, 2004 (in Japanese).
[3] H. Yonezawa, H. Koichi et al., "Long-term operational experience with a robot cell production system controlled by low carbon-footprint Senju (thousand-handed) Kannon Model robots and an approach to improving operating efficiency," Proc. of Automation Science and Engineering, pp. 291-298, 2011.
[4] K. Ono, T. Hayashi et al., "Development for Industrial Robotics Application," IHI Engineering Review, 2009.
[5] M. Sarkans and L. Roosimolder, "Implementation of robot welding cells using modular approach," Estonian J. of Engineering, Vol.16, No.4, pp. 317-327, 2010.
[6] R. Haraguchi, Y. Domae et al., "Development of Production Robot System that can Assemble Products with Cable and Connector," J. Robot. Mechatron., Vol.23, No.6, pp. 939-950, 2011.
[7] M. Hashimoto, Y. Domae et al., "Current status and future trends on robot vision technology," J. Robot. Mechatron., Vol.29, No.2, pp. 275-286, 2017.
[8] Y. Domae, A. Noda et al., "Robot Programming for Assembly Tasks," Workshop on Motion Planning for Industrial Robots, in conjunction with ICRA, 2014.
[9] H. Do and T. Choi, "Automation of cell production system for cellular phones using dual-arm robots," Int. J. of Advanced Manufacturing Technology, Vol.83, Nos.5-8, pp. 1349-1360, 2016.
[10] N. Correll, K. E. Bekris et al., "Lessons from the Amazon Picking Challenge," CoRR, 2016.
[11] N. Correll, K. E. Bekris et al., "Analysis and Observations From the First Amazon Picking Challenge," IEEE Trans. on Automation Science and Engineering, Vol.15, No.1, pp. 172-187, 2018.
[12] J. Durham, "Designing the Amazon Robotics Challenge," Warehouse Picking Automation Workshop 2017, in conjunction with ICRA, 2017.
[13] W. Liu, D. Anguelov et al., "SSD: Single Shot MultiBox Detector," Proc. of ECCV, 2016.
[14] H. Fujiyoshi et al., "Team C2M: Two Cooperative Robots for Picking and Stowing in Amazon Picking Challenge 2016," Warehouse Picking Automation Workshop 2017, in conjunction with ICRA, 2017.
[15] K. Yu, N. Fazeli et al., "A Summary of Team MIT's Approach to the Amazon Picking Challenge 2015," arXiv:1604.03639, 2016.
[16] D. Morrison, A. W. Tow et al., "Cartman: The Low-Cost Cartesian Manipulator that Won the Amazon Robotics Challenge," 2018 IEEE Int. Conf. on Robotics and Automation (ICRA), 2018.
[17] A. Zeng, K. Yu et al., "Robotic Pick-and-Place of Novel Objects in Clutter with Multi-Affordance Grasping and Cross-Domain Image Matching," 2018 IEEE Int. Conf. on Robotics and Automation (ICRA), 2018.
[18] P. J. Besl and N. D. McKay, "A Method for Registration of 3-D Shapes," IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol.14, No.2, pp. 239-256, 1992.
[19] S. Rusinkiewicz and M. Levoy, "Efficient Variants of the ICP Algorithm," Proc. of 3rd Int. Conf. on 3D Digital Imaging and Modeling, 2001.
[20] D. Chetverikov, D. Svirko et al., "The Trimmed Iterative Closest Point Algorithm," Proc. of 16th Int. Conf. on Pattern Recognition, Vol.3, pp. 545-548, 2002.
[21] H. Barrow, J. Tenenbaum et al., "Parametric correspondence and chamfer matching: Two new techniques for image matching," Proc. of Int. Joint Conf. on Artificial Intelligence (IJCAI), 1977.
[22] M. Liu, O. Tuzel et al., "Fast Directional Chamfer Matching," Proc. of CVPR, pp. 1696-1703, 2010.
[23] B. Drost, M. Ulrich et al., "Model Globally, Match Locally: Efficient and Robust 3D Object Recognition," Proc. of CVPR, pp. 998-1005, 2010.
[24] C. Choi, Y. Taguchi et al., "Voting-Based Pose Estimation for Robotic Assembly Using a 3D Sensor," 2012 IEEE Int. Conf. on Robotics and Automation (ICRA), pp. 1724-1731, 2012.
[25] S. Caccamo, E. Ataer-Cansizoglu et al., "Joint 3D reconstruction of a static scene and moving objects," Proc. of 3DV, 2017.
[26] W. Abbeloos, E. Ataer-Cansizoglu et al., "3D object discovery and modeling using single RGB-D images containing multiple object instances," Proc. of 3DV, 2018.
[27] T. Do, M. Cai et al., "Deep-6DPose: Recovering 6D Object Pose from a Single RGB Image," arXiv:1802.10367, 2018.
[28] R. Jonschkowski, C. Eppner et al., "Probabilistic Multi-Class Segmentation for the Amazon Picking Challenge," Proc. of IROS, 2016.
[29] A. Zeng, K. Yu et al., "Multi-view Self-supervised Deep Learning for 6D Pose Estimation in the Amazon Picking Challenge," 2017 IEEE Int. Conf. on Robotics and Automation (ICRA), 2017.
[30] J. Redmon, S. Divvala et al., "You Only Look Once: Unified, Real-Time Object Detection," Proc. of CVPR, 2016.
[31] Y. Domae, H. Okuda et al., "Fast graspability evaluation on single depth maps for bin picking with general grippers," 2014 IEEE Int. Conf. on Robotics and Automation (ICRA), pp. 1997-2004, 2014.
[32] I. Lenz, H. Lee et al., "Deep Learning for Detecting Robotic Grasps," 2016 IEEE Int. Conf. on Robotics and Automation (ICRA), pp. 1957-1964, 2016.
[33] Y. Jiang, S. Moseson et al., "Efficient Grasping from RGBD images: Learning using a new Rectangle Representation," 2011 IEEE Int. Conf. on Robotics and Automation (ICRA), 2011.
[34] L. Pinto and A. Gupta, "Supersizing self-supervision: Learning to grasp from 50K tries and 700 robot hours," 2016 IEEE Int. Conf. on Robotics and Automation (ICRA), 2016.
[35] S. Levine, P. Pastor et al., "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection," Int. J. of Robotics Research, Vol.37, Nos.4-5, pp. 421-436, 2018.
[36] A. Rodriguez, M. Mason et al., "From Caging to Grasping," Proc. of Robotics: Science and Systems (RSS), 2011.
[37] J. Mahler, F. T. Pokorny et al., "Dex-Net 1.0: A Cloud-Based Network of 3D Objects for Robust Grasp Planning Using a Multi-Armed Bandit Model with Correlated Rewards," 2016 IEEE Int. Conf. on Robotics and Automation (ICRA), pp. 1957-1964, 2016.
[38] J. Mahler, J. Liang et al., "Dex-Net 2.0: Deep Learning to Plan Robust Grasps with Synthetic Point Clouds and Analytic Grasp Metrics," Proc. of Robotics: Science and Systems (RSS), 2017.
[39] J. Mahler, M. Matl et al., "Dex-Net 3.0: Computing Robust Vacuum Suction Grasp Targets in Point Clouds Using a New Analytic Model and Deep Learning," 2018 IEEE Int. Conf. on Robotics and Automation (ICRA), 2018.
[40] P. Li, B. DeRose et al., "Dex-Net as a Service (DNaaS): A Cloud-Based Robust Robot Grasp Planning System," Proc. of IEEE Int. Conf. on Automation Science and Engineering (CASE), 2018.
[41] U. Viereck, A. ten Pas et al., "Learning a visuomotor controller for real world robotic grasping using simulated depth images," Proc. of CoRL, 2017.
[42] A. Shrivastava, T. Pfister et al., "Learning from Simulated and Unsupervised Images through Adversarial Training," Proc. of CVPR, 2017.
[43] K. Bousmalis, A. Irpan et al., "Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping," 2018 IEEE Int. Conf. on Robotics and Automation (ICRA), 2018.
[44] R. Matsumura, K. Harada et al., "Learning Based Industrial Bin-picking Trained with Approximate Physics Simulator," Proc. of the Int. Conf. on Intelligent Autonomous Systems (IAS-15), 2018.
[45] T. Mikolov, M. Karafiat et al., "Recurrent neural network based language model," Proc. of Interspeech, 2010.
[46] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, Vol.9, No.8, pp. 1735-1780, 1997.
[47] J. Hatori, Y. Kikuchi et al., "Interactively Picking Real-World Objects with Unconstrained Spoken Language Instructions," 2018 IEEE Int. Conf. on Robotics and Automation (ICRA), 2018.
[48] P. Yang, K. Sasaki et al., "Repeatable Folding Task by Humanoid Robot Worker Using Deep Learning," IEEE Robotics and Automation Letters (RA-L), Vol.2, No.2, pp. 397-403, 2017.
[49] S. Gu, E. Holly et al., "Deep Reinforcement Learning for Robotic Manipulation with Asynchronous Off-Policy Updates," 2017 IEEE Int. Conf. on Robotics and Automation (ICRA), 2017.
[50] C. Finn and S. Levine, "Deep Visual Foresight for Planning Robot Motion," 2017 IEEE Int. Conf. on Robotics and Automation (ICRA), 2017.
[51] J. Sung, J. Salisbury et al., "Learning to Represent Haptic Feedback for Partially-Observable Tasks," 2017 IEEE Int. Conf. on Robotics and Automation (ICRA), 2017.
[52] R. Calandra and A. Owens, "The Feeling of Success: Does Touch Sensing Help Predict Grasp Outcomes?," Proc. of CoRL, 2017.
[53] S. Luo, W. Yuan et al., "ViTac: Feature Sharing between Vision and Tactile Sensing for Cloth Texture Recognition," 2018 IEEE Int. Conf. on Robotics and Automation (ICRA), 2018.
[54] W. Yuan, C. Zhu et al., "Shape-independent Hardness Estimation Using Deep Learning and a GelSight Tactile Sensor," 2017 IEEE Int. Conf. on Robotics and Automation (ICRA), 2017.
[55] A. Yamaguchi and C. Atkeson, "Combining Finger Vision and Optical Tactile Sensing: Reducing and Handling Errors While Cutting Vegetables," Proc. of Humanoids, 2016.
[56] G. Izatt and R. Tedrake, "Tracking Objects with Point Clouds from Vision and Touch," 2017 IEEE Int. Conf. on Robotics and Automation (ICRA), pp. 4000-4007, 2017.
[57] J. Sung, J. Salisbury et al., "Learning to Represent Haptic Feedback for Partially-Observable Tasks," 2017 IEEE Int. Conf. on Robotics and Automation (ICRA), 2017.
