IJAT Vol.12 No.3 pp. 369-375
doi: 10.20965/ijat.2018.p0369


Evaluation of Classification Performance of Pole-Like Objects from MMS Images Using Convolutional Neural Network and Image Super Resolution

Tomohiro Mizoguchi

Department of Computer Science, College of Engineering, Nihon University
1 Nakagawara, Tokusada, Tamura-machi, Koriyama, Fukushima 963-8642, Japan

Corresponding author

Received: September 12, 2017
Accepted: February 26, 2018
Online released: May 1, 2018
Published: May 5, 2018
Keywords: mobile mapping system, convolutional neural network, image super resolution, pole-like objects

Mobile mapping systems (MMS) can capture point clouds and continuous panoramic images of roads and their surrounding environments. These data are widely used for the maintenance of roadside objects and the creation or updating of road ledgers. For these purposes, each object must be detected and classified from the captured data and localized on 3D maps. Many studies have reported the detection and classification of pole-like objects using point clouds captured by a mounted laser scanner. Although MMS images contain valuable color and shape information about objects, they have not been well utilized for this purpose to date. It is reasonable to extract shape and color features from images and use them for classification. In this paper, we focus on MMS images rather than point clouds and evaluate the classification performance for pole-like objects such as power poles, street lamps, street-side trees, traffic lights, and road signs. For classification, a convolutional neural network (CNN) is used because it is known to provide better results than conventional methods that rely on hand-crafted features and machine learning techniques. We also apply deep-learning-based image super resolution (ISR) techniques to low-resolution MMS images. In contrast to conventional methods in which all points of a pole-like object are evaluated, our approach selects the functional parts attached to the top of the pole (e.g., three-color traffic lights) for classification, because these parts represent the unique characteristics of each object class. We demonstrate the classification performance of the proposed approach through various experiments using MMS images. We also compare how the classification results differ depending on the imaging angle.
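The ISR step described above follows the deep-learning approach of Dong et al. [15] (SRCNN), which maps a bicubically upscaled low-resolution image through three convolutional layers (9x9 feature extraction, 1x1 nonlinear mapping, 5x5 reconstruction, with 64 and 32 intermediate filters). As a minimal illustration of that three-layer structure — not the authors' trained model — the sketch below runs the SRCNN forward pass with random (untrained) weights on a single-channel patch; all array shapes and names here are illustrative assumptions:

```python
import numpy as np

def conv2d(x, w):
    # Naive valid-mode 2D convolution over a multi-channel input.
    # x: (C_in, H, W), w: (C_out, C_in, k, k)
    c_out, c_in, k, _ = w.shape
    H, W = x.shape[1] - k + 1, x.shape[2] - k + 1
    out = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(c_in):
            for a in range(k):
                for b in range(k):
                    out[o] += w[o, i, a, b] * x[i, a:a + H, b:b + W]
    return out

def srcnn_forward(y, w1, w2, w3):
    # Three-layer SRCNN [15]: patch extraction -> nonlinear mapping -> reconstruction.
    f1 = np.maximum(conv2d(y, w1), 0)   # 9x9 conv + ReLU
    f2 = np.maximum(conv2d(f1, w2), 0)  # 1x1 conv + ReLU
    return conv2d(f2, w3)               # 5x5 conv, linear output

rng = np.random.default_rng(0)
y = rng.standard_normal((1, 33, 33))            # bicubically upscaled luminance patch
w1 = rng.standard_normal((64, 1, 9, 9)) * 0.01  # 64 feature-extraction filters
w2 = rng.standard_normal((32, 64, 1, 1)) * 0.01 # 32 mapping filters
w3 = rng.standard_normal((1, 32, 5, 5)) * 0.01  # reconstruction filter
hr = srcnn_forward(y, w1, w2, w3)
print(hr.shape)  # (1, 21, 21): valid convolutions shrink 33 -> 25 -> 25 -> 21
```

In practice the weights are learned by minimizing the mean squared error against high-resolution ground truth; the super-resolved crops of the pole-top functional parts would then be fed to the classification CNN.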

Cite this article as:
T. Mizoguchi, “Evaluation of Classification Performance of Pole-Like Objects from MMS Images Using Convolutional Neural Network and Image Super Resolution,” Int. J. Automation Technol., Vol.12 No.3, pp. 369-375, 2018.
References:
[1] K. Ishikawa et al., “Development of a vehicle-mounted road surface 3D measurement system,” Proc. Int. Symp. on Automation and Robotics in Construction, pp. 569-573, 2006.
[2] S. Kanai et al., “Cyber Field Engineering – Current Status and the Future,” J. of the Japan Society for Precision Engineering, Vol.76, No.10, pp. 1121-1124, 2010 (in Japanese).
[3] H. Yokoyama et al., “Detection and Classification of Pole-like Objects from Mobile Laser Scanning Data of Urban Environments,” Int. J. of CAD/CAM, Vol.13, No.2, pp. 31-40, 2013.
[4] K. Fukano and H. Masuda, “Detection and Classification of Pole-Like Objects from Mobile Mapping Data,” ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol.II-3/W5, pp. 57-64, 2015.
[5] B. Rodriguez-Cuenca et al., “Automatic Detection and Classification of Pole-Like Objects in Urban Point Cloud Data Using an Anomaly Detection Algorithm,” Remote Sensing, Vol.7, No.10, pp. 12680-12703, 2015.
[6] C. Ordonez et al., “Automatic Detection and Classification of Pole-Like Objects for Urban Cartography Using Mobile Laser Scanning Data,” Sensors, Vol.17, No.7, p. 1465, 2017.
[7] R. Timofte et al., “Multi-view traffic sign detection, recognition, and 3D localization,” Machine Vision and Applications, Vol.25, Issue 3, pp. 633-647, 2014.
[8] M. Soilan et al., “Traffic sign detection in MLS acquired point clouds for geometric and image-based semantic inventory,” ISPRS J. of Photogrammetry and Remote Sensing, Vol.114, pp. 92-101, 2016.
[9] A. Krizhevsky et al., “ImageNet classification with deep convolutional neural networks,” Advances in Neural Information Processing Systems, Vol.25, 2012.
[10] K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” Proc. Int. Conf. on Learning Representations, 2015.
[11] K. He et al., “Deep Residual Learning for Image Recognition,” Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 2016.
[12] S. Ren et al., “Faster R-CNN: towards real-time object detection with region proposal networks,” Proc. Int. Conf. on Neural Information Processing Systems, pp. 91-99, 2015.
[13] S. Gupta et al., “Learning rich features from RGB-D images for object detection and segmentation,” Proc. European Conf. on Computer Vision, pp. 345-360, 2014.
[14] J. Schlosser et al., “Fusing LIDAR and Images for Pedestrian Detection using Convolutional Neural Networks,” Proc. IEEE Int. Conf. on Robotics and Automation, 2016.
[15] C. Dong et al., “Image Super-Resolution Using Deep Convolutional Networks,” IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol.38, Issue 2, pp. 295-307, 2016.
[16] D. Dai et al., “Is Image Super-Resolution Helpful for Other Vision Tasks?,” Proc. IEEE Winter Conf. on Applications of Computer Vision, 2016.
[17] [accessed April 7, 2018]
[18] C. C. T. Mendes et al., “Exploiting Fully Convolutional Neural Network for Fast Road Detection,” Proc. IEEE Int. Conf. on Robotics and Automation, pp. 3174-3179, 2016.

