
JDR Vol.13 No.5 pp. 928-942
(2018)
doi: 10.20965/jdr.2018.p0928

Paper:

Damage Detection Method for Buildings with Machine-Learning Techniques Utilizing Images of Automobile Running Surveys Aftermath of the 2016 Kumamoto Earthquake

Shohei Naito*,†, Hiromitsu Tomozawa**, Yuji Mori**, Hiromitsu Nakamura*, and Hiroyuki Fujiwara*

*National Research Institute for Earth Science and Disaster Resilience (NIED)
3-1 Tennodai, Tsukuba, Ibaraki 305-0006, Japan

†Corresponding author

**Mizuho Information and Research Institute, Inc., Tokyo, Japan

Received:
April 2, 2018
Accepted:
August 10, 2018
Published:
October 1, 2018
Keywords:
damage detection, machine-learning, image analyzing, running survey, SVM
Abstract

In order to understand the damage situation immediately after a disaster occurs and to support disaster response, we developed a method that classifies the degree of building damage into three stages using machine learning on road-running survey images obtained immediately after the Kumamoto earthquake. The machine-learning workflow involves a learning phase and a discrimination phase. As training data, we used images from a camera installed facing the travel direction of an automobile, in which the degree of damage was visually categorized. In the learning phase, class separation is carried out by a support vector machine (SVM) on the basis of features calculated from training patch images extracted for each damage category. In the discrimination phase, input images are raster-scanned so that class separation is carried out per patch image. Learning, discrimination, and parameter tuning are repeated in this manner. By doing so, we developed a damage-discrimination model for each patch image and validated its discrimination accuracy using cross-validation. Furthermore, we developed a method based on optical flow to prevent double counting of damaged areas when an identical building is captured in multiple photos.
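The raster-scan discrimination phase described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the patch size, stride, and three class labels are assumptions, and the trained SVM is replaced by a trivial stand-in classifier (a mean-intensity threshold) purely so the scanning logic is runnable.

```python
# Hypothetical sketch of raster-scan, per-patch damage discrimination.
# The SVM of the paper is replaced by a stand-in classifier; patch size
# and stride are assumed values.

PATCH = 4   # patch size in pixels (assumed)
STRIDE = 4  # raster-scan step (assumed)

def classify_patch(patch):
    """Stand-in for the trained SVM: assigns one of three damage
    classes from mean pixel intensity (illustrative only)."""
    mean = sum(sum(row) for row in patch) / (len(patch) * len(patch[0]))
    if mean < 85:
        return "collapsed"
    if mean < 170:
        return "moderate"
    return "no-damage"

def raster_scan(image):
    """Slide a PATCH x PATCH window over the image in raster order and
    return the class assigned at each patch position."""
    h, w = len(image), len(image[0])
    results = {}
    for y in range(0, h - PATCH + 1, STRIDE):
        for x in range(0, w - PATCH + 1, STRIDE):
            patch = [row[x:x + PATCH] for row in image[y:y + PATCH]]
            results[(y, x)] = classify_patch(patch)
    return results

# Tiny synthetic 4x8 grayscale "image": left half dark, right half bright.
img = [[30] * 4 + [230] * 4 for _ in range(4)]
labels = raster_scan(img)
print(labels)  # {(0, 0): 'collapsed', (0, 4): 'no-damage'}
```

In the paper, the per-patch decision would instead come from the SVM trained on features of the categorized patch images, but the scanning loop itself has this shape.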
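The double-counting suppression can likewise be sketched. The idea is that damage regions detected in the previous frame are shifted by the estimated optical flow, and new detections overlapping a shifted region are treated as the same building. The bounding-box region format, the IoU threshold, and the single mean flow vector are all assumptions; in practice the flow would come from a dense estimator such as Farneback's method.

```python
# Hedged sketch of optical-flow-based suppression of double counting.
# Regions are (x1, y1, x2, y2) boxes; threshold and flow are assumed.

IOU_THRESHOLD = 0.3  # assumed overlap threshold

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def new_regions(prev_boxes, curr_boxes, flow):
    """Shift last frame's boxes by the mean flow (dx, dy); keep only
    current detections that do not match any shifted box."""
    dx, dy = flow
    shifted = [(x1 + dx, y1 + dy, x2 + dx, y2 + dy)
               for x1, y1, x2, y2 in prev_boxes]
    return [c for c in curr_boxes
            if all(iou(c, s) < IOU_THRESHOLD for s in shifted)]

prev = [(10, 10, 50, 50)]                      # building seen in frame t-1
curr = [(30, 10, 70, 50), (100, 10, 140, 50)]  # detections in frame t
print(new_regions(prev, curr, flow=(20, 0)))   # [(100, 10, 140, 50)]
```

Under a camera moving along the road, most of a building's apparent motion between consecutive frames is captured by such a flow-compensated match, so only genuinely new damaged areas are counted.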

Cite this article as:
S. Naito, H. Tomozawa, Y. Mori, H. Nakamura, and H. Fujiwara, “Damage Detection Method for Buildings with Machine-Learning Techniques Utilizing Images of Automobile Running Surveys Aftermath of the 2016 Kumamoto Earthquake,” J. Disaster Res., Vol.13 No.5, pp. 928-942, 2018.
References
  1. [1] http://www.bousai.go.jp/kaigirep/hakusho/h14/bousai2002/html/honmon/hm120406.htm (in Japanese) [accessed June 14, 2018].
  2. [2] http://www.bousai.go.jp/kaigirep/hakusho/h22/bousai2010/html/honbun/2b_2s_2_05.htm (in Japanese) [accessed June 14, 2018].
  3. [3] H. Nakamura et al., “Development of real-time earthquake damage information system in Japan,” 16th World Conf. on Earthquake Engineering, Santiago, Chile, 2017.
  4. [4] N. Nojima, M. Sugito, and N. Kanazawa, “Modeling post-earthquake emergency decision process based on data synthesis of seismic and damage information,” J. of JSCE, No.724, Issue 62, pp. 187-200, 2003 (in Japanese with English abstract).
  5. [5] A. Kusaka et al., “Bayesian updating of damaged building distribution in post-earthquake assessment,” J. of Japan Association for Earthquake Engineering, Vol.17, No.1, pp. 16-29, 2017 (in Japanese with English abstract).
  6. [6] S. Naito et al., “The investigation of building damages caused by the 2016 Kumamoto earthquake utilizing aerial photographic interpretation,” The 37th JSCE Earthquake Engineering Symp., 2017 (in Japanese with English abstract).
  7. [7] S. Naito et al., “Investigation of damages in immediate vicinity of co-seismic faults during the 2016 Kumamoto earthquake,” J. Disaster Res., Vol.12, No.5, pp. 899-915, 2017.
  8. [8] H. Hasegawa, F. Yamazaki, and M. Matsuoka, “Visual detection of building damage due to the 1995 Hyogoken-Nanbu earthquake using aerial hdtv images,” J. of JSCE, No.682, Issue 56, pp. 257-265, 2001 (in Japanese with English abstract).
  9. [9] H. Inoue, S. Uchiyama, and H. Suzuki, “Multicopter aerial photography for natural disaster research,” Report of National Research Institute for Earth Science and Disaster Prevention, No.81, 2014 (in Japanese with English abstract).
  10. [10] https://www.google.co.jp/intl/ja/streetview/ [accessed February 23, 2018].
  11. [11] K. Sakai et al., “Possibility of utilization of omnidirectional video in road maintenance,” Seisan Kenkyu, Vol.69, No.2, 2017 (in Japanese with English abstract).
  12. [12] http://www.mlit.go.jp/jidosha/anzen/subcontents/jikoboushi.html (in Japanese) [accessed February 23, 2018].
  13. [13] T. Yamashita et al., “Development of a virtual reality experience system for interior damage due to an earthquake – utilizing E-Defence shake table test –,” J. Disaster Res., Vol.12, No.5, pp. 882-890, 2017.
  14. [14] K. Sakurada, T. Okatani, and K. Deguchi, “Detecting changes in 3D structure of a scene from multi-view images captured by a vehicle-mounted camera,” IEEE Conf. on Computer Vision and Pattern Recognition (CVPR2013), 2013.
  15. [15] K. Deguchi, “Image archive of 3.11 earthquake and tsunami disasters and spatio-temporal modeling of town areas supported by computer vision techniques,” Oukan, Vol.11, No.2, 2017 (in Japanese).
  16. [16] S. Suzuki, F. Kanazawa, and S. Motomizu, “A study on challenges of technologies collecting anomalous event information on road network using in-vehicle camera images,” Proc. of Infrastructure Planning, Vol.45, 2012 (in Japanese with English abstract).
  17. [17] P. F. Felzenszwalb et al., “Object detection with discriminatively trained part based models,” IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2008.
  18. [18] D. Deguchi et al., “Intelligent traffic sign detector: Adaptive learning based on online gathering of training samples,” 2011 IEEE Intelligent Vehicles Symp. (IV), Germany, 2011.
  19. [19] Y. Shibayama, K. Fujimura, and S. Kamijo, “Acquisition of pedestrian trajectory around vehicle using on-board camera,” Seisan Kenkyu, Vol.63, No.2, 2011 (in Japanese with English abstract).
  20. [20] R. Sezaki, Y. Maruyama, and S. Nagata, “Extraction of road damage after an earthquake using images captured by a car-mounted camera,” Proc. of the Annual Conf. of the Institute of Social Safety Science, No.41, pp. 53-56, 2017 (in Japanese with English abstract).
  21. [21] P. Chun et al., “Deep learning based crack ratio evaluation on asphalt pavement from image taken by car-mounted camera,” J. of Japan Society of Civil Engineers, E1, Vol.73, No.3, I_97_105, 2017.
  22. [22] Y. Shirahama et al., “Characteristics of the surface ruptures associated with the 2016 Kumamoto earthquake sequence, central Kyushu, Japan,” Earth, Planets and Space, Vol.68, No.191, 2016.
  23. [23] http://www.bousai.go.jp/taisaku/pdf/shishin011.pdf (in Japanese) [accessed February 21, 2018].
  24. [24] S. Okada and N. Takai, “Classifications of structural types and damage patterns of buildings for earthquake field investigation,” J. Struct. Constr. Eng., AIJ, No.524, pp. 65-72, 1999 (in Japanese with English abstract).
  25. [25] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. of Computer Vision, Vol.60, No.2, pp. 91-110, 2004.
  26. [26] H. Bay, T. Tuytelaars, and L. V. Gool, “SURF: Speeded up robust features,” Computer Vision and Image Understanding, Vol.110, No.3, pp. 346-359, 2008.
  27. [27] P. F. Alcantarilla, A. Bartoli, and A. J. Davison, “KAZE features,” ECCV 2012, Part VI, LNCS 7577, pp. 214-227, 2012.
  28. [28] P. F. Alcantarilla, J. Nuevo, and A. Bartoli, “Fast Explicit Diffusion for Accelerated Features in Nonlinear Scale Spaces,” Proc. of the British Machine Vision Conf., BMVA Press, 2013.
  29. [29] E. Rublee et al., “ORB: an efficient alternative to SIFT or SURF,” ICCV, 2011.
  30. [30] G. Csurka et al., “Visual categorization with Bags of Keypoints,” ECCV Int. Workshop on Statistical Learning in Computer Vision, 2004.
  31. [31] T. Nagahashi, A. Ihara, and H. Fujiyoshi, “Tendency of image local features that are effective for discrimination by using Bag-of-Features in object category recognition,” Information Proc. Society of Japan, SIG Technical Report, 2009 (in Japanese with English abstract).
  32. [32] T. Harada, “Image recognition,” Machine Learning Professional Series, Kodansha Ltd., 2017 (in Japanese).
  33. [33] V. N. Vapnik, “Statistical Learning Theory,” John Wiley and Sons, 1998.
  34. [34] I. Takeuchi and M. Karasuyama, “Support Vector Machine,” Machine Learning Professional Series, Kodansha Ltd., 2015 (in Japanese).
  35. [35] K. Nakamura, M. Koeda, and E. Ueda, “Introduction to computer vision and machine learning with OpenCV,” Kodansha Ltd., 2017 (in Japanese).
  36. [36] B. D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” Proc. of Int. Joint Conf. on Artificial Intelligence, pp. 674-679, 1981.
  37. [37] G. Farneback, “Two-frame motion estimation based on polynomial expansion,” Proc. of Scandinavian Conf. on Image Analysis, pp. 363-370, 2003.
  38. [38] Y. Ueoka et al., “Visual detection and deep learning interpretation of building damages by the 2016 Kumamoto earthquake using aerial photographs,” Proc. of the Annual Conf. of the Institute of Social Safety Science, No.41, pp. 127-130, 2017 (in Japanese with English abstract).
  39. [39] Y. Kamagatani et al., “Attempt at classifying the degree of damage to buildings caused by the 2016 Kumamoto earthquake based on deep learning,” Proc. of the Annual Conf. of the Institute of Social Safety Science, No.41, pp. 185-186, 2017 (in Japanese with English abstract).
  40. [40] M. Fadaee, A. Bisazza, and C. Monz, “Data augmentation for low-resource neural machine translation,” Proc. of the 55th Annual Meeting of the Association for Computational Linguistics, Vol.2, pp. 567-573, 2017.
  41. [41] Y. LeCun et al., “Backpropagation applied to handwritten zip code recognition,” Neural Computation, Vol.1, Issue 4, pp. 541-551, 1989.
  42. [42] S. Uchiyama, H. Inoue, and H. Suzuki, “Approaches for reconstructing a three-dimensional model by SfM to utilize and apply this Model for research on natural disasters,” Report of National Research Institute for Earth Science and Disaster Prevention, No.81, 2014 (in Japanese with English abstract).
  43. [43] http://www.bousai.go.jp/taisaku/keikaku/kihon.html (in Japanese) [accessed February 23, 2018].


Last updated on Dec. 06, 2024