IJAT Vol.13 No.4 pp. 475-481
doi: 10.20965/ijat.2019.p0475


Developing a Support System for Loading Planning

Takayuki Nakamura, Jun’ichi Kaneko, Takeyuki Abe, and Kenichiro Horio

Saitama University
255 Shimo-Okubo, Sakura-ku, Saitama City, Saitama 338-8570, Japan


Received: December 26, 2018
Accepted: April 11, 2019
Published: July 5, 2019
Keywords: augmented reality, point clouds, simultaneous localization and mapping (SLAM)

When loading equipment into a factory or facility, planning the loading object's posture and loading path on a computer beforehand helps reduce trial and error at the site. Such pre-loading planning requires the shape data of both the factory area and the loading object, detection of interference states between them, and planning of the object's posture accordingly. In practice, however, it is difficult to acquire up-to-date factory area data and, at the same time and within a short period, to identify where a loading object in a complex posture will interfere. In this study, we address these difficulties by developing a system that detects where the loading object will interfere using polygonal shapes and visualizes the factory area using point clouds. Working in conjunction, these components allow the system to automatically plan the loading object's posture and to visualize the planned posture through augmented reality.
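The core of such pre-loading planning is an interference check between a scanned environment (point cloud) and the loading object at a candidate posture. The following is a minimal sketch of that idea, not the authors' actual polygon-based method: it approximates the loading object by an oriented bounding box (the `half_extents`, `rotation`, and `translation` names are illustrative assumptions) and tests whether any environment point falls inside it.

```python
import numpy as np

def interferes(points, half_extents, rotation, translation):
    """Return True if any environment point lies inside the loading
    object's oriented bounding box at the candidate posture.
    This is a simplified stand-in for a full polygon-mesh check.

    points       : (N, 3) array of scanned environment points (world frame)
    half_extents : (3,) half-lengths of the object's bounding box
    rotation     : (3, 3) rotation matrix, local -> world
    translation  : (3,) position of the box center in the world frame
    """
    # Transform environment points into the object's local frame.
    local = (points - translation) @ rotation
    # A point interferes if it lies within the box extent on every axis.
    inside = np.all(np.abs(local) <= half_extents, axis=1)
    return bool(np.any(inside))

# Example: a 2 x 1 x 1 m object at the origin; the first point is
# inside the box, the second is well clear of it.
pts = np.array([[0.5, 0.0, 0.0], [3.0, 0.0, 0.0]])
print(interferes(pts, np.array([1.0, 0.5, 0.5]), np.eye(3), np.zeros(3)))  # True
```

A posture planner can call such a check repeatedly over candidate rotations and translations along the loading path, keeping only interference-free postures; the paper's system performs this planning automatically against polygonal shape data.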

Cite this article as:
T. Nakamura, J. Kaneko, T. Abe, and K. Horio, “Developing a Support System for Loading Planning,” Int. J. Automation Technol., Vol.13, No.4, pp. 475-481, 2019.

