
IJAT Vol.13 No.4, pp. 464-474 (2019)
doi: 10.20965/ijat.2019.p0464

Paper:

Estimating 3D Position of Strongly Occluded Object with Semi-Real Time by Using Auxiliary 3D Points in Occluded Space

Shinichi Sumiyoshi and Yuichi Yoshida

Denso IT Laboratory, Inc.
28th Floor, Shibuya Cross Tower, 2-15-1 Shibuya, Shibuya-ku, Tokyo 150-0002, Japan

Corresponding author

Received: November 27, 2018
Accepted: April 3, 2019
Published: July 5, 2019
Keywords: occluded space detection, auxiliary point-cloud generation, strongly occluded object detector, semi-real-time system, 3D position estimation
Abstract

While several methods have been proposed for detecting three-dimensional (3D) objects in semi-real time by sparsely acquiring features from 3D point clouds, detecting strongly occluded objects remains difficult. Herein, we propose a method for detecting strongly occluded objects by setting up virtual auxiliary point clouds in the vicinity of the target object. By generating auxiliary point clouds only in the occluded space estimated from an object detected at the front of the sensor-observed region (i.e., the occluder), both processing efficiency and accuracy are improved. Experiments are performed on various strongly occluded scenes based on real environmental data, and the results confirm that the proposed method achieves a mean processing time of 0.5 s for detecting strongly occluded objects.
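The core idea summarized above, generating virtual auxiliary points only inside the space estimated to be occluded by a detected foreground object, can be illustrated with a short sketch. The code below is a hypothetical simplification for intuition only: it casts rays from the sensor through sampled occluder surface points and places candidate points at depths behind the surface. The function name, parameters, and the uniform depth sampling are assumptions of this sketch, not the paper's actual algorithm.

    import numpy as np

    def generate_auxiliary_points(occluder_points, sensor_origin,
                                  depth_range, n_samples=500):
        """Sample auxiliary 3D points inside the space hidden by an occluder.

        Each occluder surface point blocks the rays behind it, so candidate
        points are placed along those rays at depths beyond the surface.
        This is an illustrative sketch, not the paper's exact procedure.
        """
        rng = np.random.default_rng(0)
        # Pick random occluder surface points and extend their viewing rays.
        idx = rng.integers(0, len(occluder_points), size=n_samples)
        surface = occluder_points[idx]                       # (n, 3)
        rays = surface - sensor_origin                       # sensor-to-surface rays
        rays /= np.linalg.norm(rays, axis=1, keepdims=True)  # unit directions
        # Push each sample a random distance into the occluded space.
        extra_depth = rng.uniform(depth_range[0], depth_range[1],
                                  size=(n_samples, 1))
        return surface + rays * extra_depth

    # Example: points hidden behind a flat occluder 2 m from a sensor at the origin.
    occluder = np.column_stack([
        np.random.uniform(-0.5, 0.5, 200),   # x
        np.random.uniform(-0.5, 0.5, 200),   # y
        np.full(200, 2.0),                   # z: occluder plane
    ])
    aux = generate_auxiliary_points(occluder, sensor_origin=np.zeros(3),
                                    depth_range=(0.05, 1.0))
    print(aux.shape)  # (500, 3) candidate points in the occluded frustum

In the paper's pipeline these candidate points would serve as auxiliary features for matching the occluded target; restricting the sampling to the occluded frustum is what keeps the search space, and hence the processing time, small.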

Cite this article as:
S. Sumiyoshi and Y. Yoshida, “Estimating 3D Position of Strongly Occluded Object with Semi-Real Time by Using Auxiliary 3D Points in Occluded Space,” Int. J. Automation Technol., Vol.13 No.4, pp. 464-474, 2019.

