Image Labeling by Integration of Local Co-Occurrence Histogram and Global Features
Takuto Omiya and Kazuhiro Hotta
Department of Electrical and Electronic Engineering, Meijo University, 1-501 Shiogamaguchi, Tenpaku-ku, Nagoya, Aichi 468-8502, Japan
In this paper, we perform image labeling based on the probabilistic integration of local and global features. Several conventional methods label pixels or regions using features extracted from local regions and local contextual relationships between neighboring regions. However, their labeling results tend to depend on local viewpoints. To overcome this problem, we propose an image labeling method that utilizes both local and global features. We compute the posterior probability distributions of the local and global features independently and integrate them by taking their product. To compute the probability of the global region (the entire image), Bag-of-Words is used. In contrast, the local co-occurrence of color and texture features is used to compute the local probability. In the experiments, we use the MSRC21 dataset. The results demonstrate that the use of a global viewpoint significantly improves labeling accuracy.
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.