
JRM Vol.27 No.2 pp. 126-135 (2015)
doi: 10.20965/jrm.2015.p0126

Paper:

A Robust Appearance Model and Similarity Measure for Image Matching

Dong Liang*,**, Shun'ichi Kaneko**, and Yutaka Satoh***

*College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics
Yudao Street 29, Nanjing 210016, China

**Graduate School of Information Science and Technology, Hokkaido University
Kita 14, Nishi 9, Sapporo 060-0814, Japan

***National Institute of Advanced Industrial Science and Technology (AIST)
Tsukuba-shi 305-8568, Japan

Received: August 19, 2014
Accepted: January 13, 2015
Published: April 20, 2015
Keywords: similarity measure, illumination invariance, image matching
Abstract

[Figure: CP3 histogram]
An ideal similarity measure for image matching should be discriminative, producing a conspicuous correlation peak while suppressing false local maxima. In practice, however, image matching often involves complex conditions such as blurring and fluctuating illumination, which can make a similarity measure insufficiently discriminative. We utilize a robust scene modeling method to describe the appearance of an image and propose an associated similarity measure for image matching. The proposed method employs a spatio-temporal learning stage to select a group of supporting pixels for each target pixel, then builds a differential statistical model over these pixel pairs to capture the uniqueness of the spatial structure and to provide illumination invariance for robust matching. We applied this method to image matching in several challenging environments. Experimental results show that the proposed similarity measure produces explicit correlation peaks and achieves robust image matching.
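
The general idea described in the abstract can be illustrated with a small sketch. The following Python/NumPy code is a minimal, illustrative reading of such a pixel-pair approach, not the authors' implementation: a learning stage that, for one target pixel, picks supporting pixels whose intensity difference to the target stays stable across training frames, and a matching stage that scores how well those learned differences hold in a new image. The function names, the stability-based selection rule, and the tolerance parameter `tol` are assumptions made for illustration.

```python
import numpy as np

def learn_supporting_pixels(frames, target_idx, num_supports=8):
    # frames: (T, N) array of T flattened training images with N pixels each.
    # Illustrative selection rule: keep the pixels whose intensity difference
    # to the target fluctuates least over the training sequence.
    diffs = frames.astype(float) - frames[:, [target_idx]].astype(float)
    stability = diffs.std(axis=0)            # temporal fluctuation of each difference
    stability[target_idx] = np.inf           # never pick the target as its own support
    supports = np.argsort(stability)[:num_supports]
    learned_diffs = diffs[:, supports].mean(axis=0)   # differential statistics
    return supports, learned_diffs

def pair_agreement_score(image, target_idx, supports, learned_diffs, tol=10.0):
    # image: flattened (N,) array. Score = fraction of supporting pixels whose
    # current difference to the target stays within `tol` of the learned one.
    diffs = image[supports].astype(float) - float(image[target_idx])
    return float(np.mean(np.abs(diffs - learned_diffs) <= tol))
```

Because the score depends only on intensity differences between pixel pairs, adding a global brightness offset to the whole image leaves it unchanged, which is one simple way a difference-based measure of this kind gains robustness to illumination change.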
Cite this article as:
D. Liang, S. Kaneko, and Y. Satoh, “A Robust Appearance Model and Similarity Measure for Image Matching,” J. Robot. Mechatron., Vol.27 No.2, pp. 126-135, 2015.
