
IJAT Vol.16 No.2, pp. 197-207 (2022)
doi: 10.20965/ijat.2022.p0197

Paper:

Viewpoint Planning for Object Identification Using Visual Experience According to Long-Term Activity

Kimitoshi Yamazaki, Kazuki Nogami, and Kotaro Nagahama

Shinshu University
4-17-1 Wakasato, Nagano City, Nagano 380-8553, Japan


Received: May 13, 2021
Accepted: September 24, 2021
Published: March 5, 2022
Keywords: viewpoint planning, long-term activity, next-best-view (NBV) problem, tidying task
Abstract

In this paper, we propose a viewpoint planning method for object identification. We introduce a policy of maximizing the posterior probability of an object's orientation as observed after the robot moves its viewpoint, and we present a novel formulation of viewpoint planning based on this policy. In addition, we propose criteria for viewpoint selection that exploit past sensing experience. Finally, we confirm the effectiveness of the proposed method through simulations using a mobile manipulator.
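The core idea of the abstract can be pictured with a small numerical sketch. The Python snippet below is an illustration only, not the authors' formulation: it keeps a belief over discretized object orientations, scores each candidate viewpoint by the expected peak of the posterior after observing from it, and multiplies in a placeholder "experience" weight favoring viewpoints that were informative in past activity. The toy likelihood model, the experience weight, and all parameter values are assumptions made for this sketch.

    import numpy as np

    # A minimal sketch of the idea described in the abstract (not the
    # authors' implementation): maintain a belief over discretized object
    # orientations and choose the viewpoint expected to make the posterior
    # most peaked, weighted by past sensing experience. The toy likelihood,
    # the experience weight, and the candidates are illustrative assumptions.

    N_ORIENT = 36                                 # 10-degree orientation bins
    ANGLES = np.arange(N_ORIENT) * (2 * np.pi / N_ORIENT)

    def likelihood(z_ori, view_ori, ori):
        # Toy P(z | viewpoint, orientation): the observation discriminates
        # orientation sharply only when the viewpoint faces a distinctive
        # side of the object.
        sharpness = 1.0 + 4.0 * np.cos(view_ori - ori) ** 2
        d = np.angle(np.exp(1j * (z_ori - ori)))  # wrapped angular difference
        return np.exp(-0.5 * (d * sharpness / 0.5) ** 2) + 1e-6

    def expected_posterior_peak(view_ori, belief):
        # NBV criterion: expected maximum of the posterior after observing
        # from view_ori, marginalized over the current orientation belief.
        score = 0.0
        for true_idx, p in enumerate(belief):
            z = ANGLES[true_idx]                  # predicted observation
            lik = np.array([likelihood(z, view_ori, a) for a in ANGLES])
            post = lik * belief
            score += p * (post / post.sum()).max()
        return score

    def experience_weight(view_ori, visited):
        # Stand-in for the experience-based criterion: mildly prefer
        # viewpoints close to those that were informative in past activity.
        if not visited:
            return 1.0
        return 1.0 + 0.2 * max(np.cos(view_ori - v) for v in visited)

    # Belief already concentrated near 90 deg, e.g., after one ambiguous view.
    belief = np.exp(-0.5 * (np.angle(np.exp(1j * (ANGLES - np.pi / 2))) / 0.8) ** 2)
    belief /= belief.sum()

    candidates = np.linspace(0.0, 2 * np.pi, 8, endpoint=False)
    best = max(candidates,
               key=lambda v: expected_posterior_peak(v, belief)
                             * experience_weight(v, visited=[np.pi / 4]))
    print(f"next best view: {np.degrees(best):.0f} deg")

In the paper, the observation model and the selection criteria are built from the robot's accumulated visual experience; here they are replaced by analytic stand-ins so the selection loop runs end to end.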

Cite this article as:
K. Yamazaki, K. Nogami, and K. Nagahama, “Viewpoint Planning for Object Identification Using Visual Experience According to Long-Term Activity,” Int. J. Automation Technol., Vol.16 No.2, pp. 197-207, 2022.
