JACIII Vol.10 No.1 pp. 11-16
doi: 10.20965/jaciii.2006.p0011


Voting-Based Approach to Nullspace Search for Correspondence Matching and Shape Recovery

Kazuhiko Kawamoto*, Atsushi Imiya**, and Kaoru Hirota***

*Faculty of Engineering, Kyushu Institute of Technology, 1-1 Sensui-cho, Tobata-ku, Kitakyushu 804-8550, Japan

**Institute of Media and Information Technology, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba 263-8522, Japan

***Dept. of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Mail-Box G3-49, 4259 Nagatsuta, Midori-ku, Yokohama 226-8502, Japan

April 30, 2003
June 6, 2005
January 20, 2006
Keywords: correspondence matching, shape recovery, occlusion, voting

A simultaneous search, called nullspace search, for matching correspondences among images and recovering 3-D objects is proposed, using a voting-based method to circumvent the erroneous recovery of 3-D objects that arises from wrongly matched correspondences among images. The method avoids occlusion problems and copes with substantial changes in visibility over a long image sequence. Experiments are performed on synthetic and real image sequences, consisting of 30 images of a sphere and 10 images of a toy house, under the condition that 3-D points are occluded in at most 50% of the sequence and the camera undergoes both rotational and translational motion. The proposed method provides a basis for organizing multiple dynamic images in which occlusion occurs frequently.
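The abstract does not give implementation details, but the core idea of Hough-style voting for rejecting wrong matches can be sketched roughly: each candidate correspondence casts a vote for the 3-D point hypothesis it implies, and hypotheses supported by many votes are accepted while mismatches gather too few votes to matter. The names, the quantization scheme, and the toy geometry below are illustrative assumptions, not the authors' actual algorithm.

```python
# Minimal sketch of accumulator-based voting: candidate correspondences vote
# for quantized 3-D point hypotheses; consistent matches pile up in one cell,
# while wrong matches scatter and are outvoted. Illustrative only.
from collections import Counter

def vote_for_points(candidate_matches,
                    quantize=lambda p: tuple(round(c, 1) for c in p)):
    """Accumulate votes over quantized 3-D point hypotheses."""
    acc = Counter()
    for match in candidate_matches:
        hypothesis = match["point"]  # 3-D point implied by this correspondence
        acc[quantize(hypothesis)] += 1
    return acc

# Toy data: five correspondences agree on roughly the same 3-D point,
# two are mismatches implying scattered points.
matches = (
    [{"point": (1.02, 0.99, 2.01)}] * 3
    + [{"point": (0.98, 1.01, 1.98)}] * 2
    + [{"point": (5.0, -3.0, 0.7)}, {"point": (-2.0, 4.0, 9.0)}]
)
acc = vote_for_points(matches)
cell, votes = acc.most_common(1)[0]
# The winning cell (1.0, 1.0, 2.0) collects 5 votes; the mismatches get 1 each.
```

In the paper this voting takes place during the nullspace search itself, so that matching and recovery are performed simultaneously rather than as two separate stages.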

Cite this article as:
Kazuhiko Kawamoto, Atsushi Imiya, and Kaoru Hirota, “Voting-Based Approach to Nullspace Search for Correspondence Matching and Shape Recovery,” J. Adv. Comput. Intell. Intell. Inform., Vol.10, No.1, pp. 11-16, 2006.
