
JACIII Vol.27 No.2, pp. 190-197, 2023
doi: 10.20965/jaciii.2023.p0190

Research Paper:

Research on Image Inpainting Algorithms Based on Attention Guidance

Yankun Shen, Yaya Su, Lin Wang, and Dongli Jia

School of Information and Electrical Engineering, Hebei University of Engineering
No.19 Taiji Road, Handan, Hebei 056038, China

Corresponding author

Received: July 7, 2022
Accepted: October 23, 2022
Published: March 20, 2023
Keywords: image inpainting, generative adversarial network, attention mechanism, deformable convolution
Abstract

In recent years, the use of deep learning in image inpainting has yielded positive results. However, existing image inpainting algorithms pay insufficient attention to the structural and textural features of the image, which leads to blurring and distortion in the inpainted results. To address these problems, a channel attention mechanism was introduced to emphasize the structural and textural features extracted by the convolutional network. A bidirectional gated feature fusion module was employed to exchange and fuse the structural and textural features, ensuring the overall consistency of the image. In addition, image features were captured more effectively by replacing the ordinary convolution in the contextual feature aggregation module with a deformable convolution, whose receptive field adapts to the image content. This yields vivid, realistic restoration results with more reasonable details. Qualitative and quantitative experiments showed that the proposed algorithm produces more realistic inpainting results than current mainstream networks.
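The abstract names three building blocks: a channel attention mechanism, a bidirectional gated fusion of structure and texture features, and a deformable convolution inside the contextual feature aggregation module. As a rough illustration of the latter two ideas, below is a minimal PyTorch sketch; the module names, channel sizes, and gating form are our own assumptions for illustration, not the authors' released implementation.

import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class BiGatedFusion(nn.Module):
    """Bidirectional gated fusion of structure and texture features (sketch).

    Each branch derives a sigmoid gate from the concatenated features and
    uses it to inject information from the other branch, so the two feature
    streams stay mutually consistent."""

    def __init__(self, channels):
        super().__init__()
        self.gate_t = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)
        self.gate_s = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, f_structure, f_texture):
        both = torch.cat([f_structure, f_texture], dim=1)
        g_t = torch.sigmoid(self.gate_t(both))  # gate controlling structure-to-texture flow
        g_s = torch.sigmoid(self.gate_s(both))  # gate controlling texture-to-structure flow
        return f_structure + g_s * f_texture, f_texture + g_t * f_structure

class DeformableBlock(nn.Module):
    """Deformable convolution standing in for an ordinary convolution (sketch).

    A small ordinary conv predicts per-position sampling offsets; DeformConv2d
    then samples the input at those shifted locations, so the effective
    receptive field adapts to the image content."""

    def __init__(self, channels, kernel_size=3):
        super().__init__()
        # 2 offset values (dy, dx) per kernel element and output position
        self.offset = nn.Conv2d(channels, 2 * kernel_size * kernel_size,
                                kernel_size=kernel_size, padding=kernel_size // 2)
        self.deform = DeformConv2d(channels, channels, kernel_size=kernel_size,
                                   padding=kernel_size // 2)

    def forward(self, x):
        return self.deform(x, self.offset(x))

if __name__ == "__main__":
    f_s = torch.randn(1, 64, 32, 32)  # hypothetical structure features
    f_t = torch.randn(1, 64, 32, 32)  # hypothetical texture features
    fused_s, fused_t = BiGatedFusion(64)(f_s, f_t)
    out = DeformableBlock(64)(fused_t)
    print(out.shape)  # torch.Size([1, 64, 32, 32])

Note that torchvision's DeformConv2d takes the sampling offsets as a second input rather than learning them internally, which is why a separate ordinary convolution predicts them here.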

Cite this article as:
Y. Shen, Y. Su, L. Wang, and D. Jia, “Research on Image Inpainting Algorithms Based on Attention Guidance,” J. Adv. Comput. Intell. Intell. Inform., Vol.27 No.2, pp. 190-197, 2023.
