JACIII Vol.27 No.2 pp. 207-214
doi: 10.20965/jaciii.2023.p0207

Research Paper:

Research on Texture Classification Based on Multi-Scale Information Fusion

Lin Wang, Lihong Li, and Yaya Su

School of Information and Electrical Engineering, Hebei University of Engineering
No.19 Taiji Road, Handan, Hebei 056038, China

Corresponding author

Received: August 24, 2022
Accepted: November 14, 2022
Published: March 20, 2023

Keywords: texture image, multi-scale fusion, image classification, convolutional neural networks

Texture is an important visual cue in an image, providing a unified description of its visual and sensory attributes. The inherent problem of texture images is that intra-class variation is large while inter-class differences are small, which increases the difficulty of texture recognition. Strengthening the correlation embedding of intra-class images can therefore reduce the classification errors this problem causes. To this end, this paper proposes a multi-scale information fusion network that adopts a cascade structure. It combines multi-scale feature information with the corresponding background information: the shallow background information guides feature formation at the next stage and enhances the similarity of intra-class images, so the resulting intra-class features are more general. The algorithm has been evaluated on the Describable Textures Dataset (DTD) and the Flickr Material Database (FMD), where it achieves good results.
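The cascaded multi-scale fusion described in the abstract can be illustrated with a minimal, framework-free sketch. This is not the authors' implementation: the function names, the average-pooling "features," and the element-wise fusion rule are all simplifying assumptions chosen only to show the cascade idea (each stage's pooled map acts as background context that guides the next, coarser stage).

```python
# Illustrative sketch of cascaded multi-scale feature fusion (assumed design,
# not the paper's network). An "image" is a 2D list of floats.

def avg_pool(img, k):
    """Average-pool a 2D grid with a k x k window and stride k."""
    h, w = len(img), len(img[0])
    out = []
    for i in range(0, h - h % k, k):
        row = []
        for j in range(0, w - w % k, k):
            block = [img[i + di][j + dj] for di in range(k) for dj in range(k)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def fuse(feat, context):
    """Fuse a feature map with same-sized context by element-wise averaging."""
    return [[(f + c) / 2 for f, c in zip(fr, cr)]
            for fr, cr in zip(feat, context)]

def cascade_features(img, scales=(2, 4)):
    """Cascade structure: each stage pools the image at its own scale; the
    previous (shallower) stage's map is re-pooled to the current resolution
    and fused in as background context guiding the current stage."""
    context = None
    fused_maps = []
    for k in scales:
        feat = avg_pool(img, k)
        if context is not None:
            ratio = len(context) // len(feat)
            if ratio > 1:
                context = avg_pool(context, ratio)
            feat = fuse(feat, context)
        fused_maps.append(feat)
        context = feat
    # Final descriptor: flattened maps from all scales, concatenated.
    return [v for m in fused_maps for row in m for v in row]
```

In this toy version an 8x8 image yields a 4x4 map at scale 2 and a context-guided 2x2 map at scale 4, concatenated into a 20-dimensional descriptor; in the paper's setting, deep convolutional features would play the role of the pooled maps.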

Cite this article as:
L. Wang, L. Li, and Y. Su, “Research on Texture Classification Based on Multi-Scale Information Fusion,” J. Adv. Comput. Intell. Intell. Inform., Vol.27, No.2, pp. 207-214, 2023.
References:
  [1] L. Liu, L. Zhao, C. Guo, L. Wang, and J. Tang, “Texture Classification: State-of-the-Art Methods and Prospects,” Acta Automatica Sinica, Vol.44, No.4, pp. 584-607, 2018 (in Chinese).
  [2] L. Liu and G. Kuang, “Overview of Image Textural Feature Extraction Methods,” Chinese J. of Image and Graphics, No.4, pp. 622-635, 2009 (in Chinese).
  [3] Z.-J. Zha, X.-S. Hua, T. Mei, J. Wang, G.-J. Qi, and Z. Wang, “Joint multi-label multi-instance learning for image classification,” IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Article No.4587384, 2008.
  [4] T. Paul, P. Banerjee, A. Mukherjee, and S. K. Bandhyopadhyay, “Technologies in Texture Analysis–A Review,” Current J. of Applied Science and Technology, Vol.13, No.6, Article No.BJAST.19082, 2016.
  [5] R.-L. Moisés, O. Sergiyenko, W. Flores-Fuentes, and J. C. Rodríguez-Quiñonez, “Optoelectronics in Machine Vision-Based Theories and Applications,” Engineering Science Reference, 2019.
  [6] T. Wang, Y. Chen, M. N. Qiao et al., “A fast and robust convolutional neural network-based defect detection model in product quality control,” The Int. J. of Advanced Manufacturing Technology, Vol.94, No.9, pp. 3465-3471, 2018.
  [7] M. Garg and G. Dhiman, “Deep convolution neural network approach for defect inspection of textured surfaces,” J. of the Institute of Electronics and Computer, Vol.2, pp. 28-38, 2020.
  [8] W. Zhai, Y. Cao, J. Zhang, and Z.-J. Zha, “Deep Multiple-Attribute-Perceived Network for Real-World Texture Recognition,” Proc. of IEEE/CVF Int. Conf. on Computer Vision (ICCV), pp. 3612-3621, 2019.
  [9] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs,” IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol.40, No.4, pp. 834-848, 2017.
  [10] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” Proc. of the IEEE Conf. on CVPR, pp. 580-587, 2014.
  [11] K. Duan, L. Xie, H. Qi, S. Bai, Q. Huang, and Q. Tian, “Corner Proposal Network for Anchor-Free, Two-Stage Object Detection,” A. Vedaldi, H. Bischof, T. Brox, and J. M. Frahm (Eds.), “Computer Vision–ECCV 2020,” Lecture Notes in Computer Science, Vol.12348, Springer, Cham., 2020.
  [12] K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” arXiv:1409.1556, 2014.
  [13] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” Proc. of the IEEE Conf. on CVPR, pp. 770-778, 2016.
  [14] S. Yun, D. Han, S. J. Oh, Y. Yoo, and J. Choe, “CutMix: Regularization strategy to train strong classifiers with localizable features,” Proc. of IEEE on ICCV, pp. 6023-6032, 2019.
  [15] R. M. Haralick, K. Shanmugam, and I. Dinstein, “Textural features for image classification,” IEEE Trans. on Systems, Man and Cybernetics, Vol.3, No.6, pp. 610-621, 1973.
  [16] T. Ojala, M. Pietikainen, and T. Maenpaa, “Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns,” IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol.24, No.7, pp. 971-987, 2002.
  [17] M. Cimpoi, S. Maji, and A. Vedaldi, “Deep filter banks for texture recognition and segmentation,” Proc. of the IEEE Conf. on CVPR, pp. 3828-3836, 2015.
  [18] F. Perronnin, J. Sánchez, and T. Mensink, “Improving the Fisher Kernel for Large-Scale Image Classification,” European Conf. on Computer Vision, Vol.6314, pp. 143-156, 2010.
  [19] J. Sánchez, F. Perronnin, T. Mensink, and J. Verbeek, “Image Classification with the Fisher Vector: Theory and Practice,” Int. J. of Computer Vision, Vol.105, No.3, pp. 222-245, 2013.
  [20] Y. Song, F. Zhang, Q. Li, H. Huang, L. J. O’Donnell, and W. Cai, “Locally-Transferred Fisher Vectors for Texture Classification,” Proc. of IEEE on ICCV, pp. 4992-4930, 2017.
  [21] H. Zhang, J. Xue, and K. Dana, “Deep TEN: Texture Encoding Network,” Proc. of the IEEE Conf. on CVPR, pp. 2896-2905, 2017.
  [22] T.-Y. Lin, A. RoyChowdhury, and S. Maji, “Bilinear CNN Models for Fine-Grained Visual Recognition,” Proc. of IEEE on ICCV, pp. 1449-1457, 2015.
  [23] J. Xue, H. Zhang, and K. Dana, “Deep Texture Manifold for Ground Terrain Recognition,” Proc. of the IEEE Conf. on CVPR, pp. 558-567, 2018.
  [24] W. Zhai, Y. Cao, Z.-J. Zha, H. Xie, and F. Wu, “Deep Structure-Revealed Network for Texture Recognition,” Proc. of the IEEE Conf. on CVPR, pp. 11007-11016, 2020.
  [25] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” Proc. of the IEEE Conf. on CVPR, pp. 770-778, 2016.
  [26] J. K. Hawkins, “Textual properties for pattern recognition,” B. Lipkin (Ed.), “Picture Processing and Psychopictorics,” pp. 347-370, Academic Press, 1970.
  [27] R.-S. Wang, “Image Understanding,” National University of Defense Technology Press, Changsha, pp. 145-146, 1994.
  [28] M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, and A. Vedaldi, “Describing textures in the wild,” Proc. of the IEEE Conf. on CVPR, pp. 3606-3613, 2014.
  [29] L. Sharan, C. Liu, R. Rosenholtz, and E. H. Adelson, “Recognizing Materials Using Perceptually Inspired Features,” Int. J. of Computer Vision, Vol.103, pp. 348-371, 2013.
  [30] L. Sharan, R. Rosenholtz, and E. H. Adelson, “Material perception: What can you see in a brief glance?,” J. of Vision, Vol.9, No.8, Article No.784, 2009.
  [31] T. DeVries and G. W. Taylor, “Improved regularization of convolutional neural networks with cutout,” arXiv:1708.04552, 2017.
  [32] H. Zhang, M. Cisse, Y. N. Dauphin, and D. Lopez-Paz, “mixup: Beyond empirical risk minimization,” arXiv:1710.09412, 2018.
  [33] J. B. Joolee and S. Jeon, “Deep Multi-Modal Network Based Data-Driven Haptic Textures Modeling,” IEEE World Haptics Conf. (WHC), p. 1140, 2021.
  [34] K. Metwaly, A. Kim, E. Branson, and V. Monga, “GlideNet: Global, Local and Intrinsic Based Dense Embedding NETwork for Multi-Category Attributes Prediction,” Proc. of the IEEE Conf. on CVPR, pp. 4835-4846, 2022.
